image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hello everyone, I`m new to using MongoDB and I need to write an essay about the storage of images in MongoDB. Do you know where I can find ressources that explain the theory of how images are stored in MongoDB? Like explaining the process step by step and how they are retrieved? I’d be thankful for every piece of information I can get.\nThank you!",
"username": "Maria_N_A"
},
{
"code": "",
"text": "Please go through the documentation for more details.https://docs.mongodb.com/manual/core/gridfs/",
"username": "Sudhesh_Gnanasekaran"
},
{
"code": "businesslogoUrl",
"text": "If you need to store the actual file on MongoDB, then GridFS is the way to go (as suggested by @Sudhesh_Gnanasekaran). However there is an alternative – store a url to an image in your document.For example, I have a business collection which has a field, logoUrl. This is a url to an actual cloud storage solution such as Amazon S3. It could also be a url to a CDN like Cloudflare or Fastly.MongoDB used to have an application called Stitch that made S3 integration easy. It transitioned into Realm which may have a different process.Otherwise, the manual process is straight forward:\nA frontend/client-facing application allows a user to upload a file, the file is sent via an API to a storage solution (AWS S3, Google Cloud Storage, Backblaze, etc), the URL response from the upload is sent back and the record in the database is updated/created with the URL to the image.",
"username": "Andrew_W"
},
{
"code": "",
"text": "Hello @Maria_N_A, welcome to the MongoDB Community forum.In addition to GridFS and image locations within the document, you can store small image data (like profile photos) within a document itself. MongoDB has a data type called as “Binary data” (see MongoDB BSON Types). Note that a MongoDB document can be up to 16 Megabytes.",
"username": "Prasad_Saya"
},
{
"code": "GridFSfs.chunksfs.filesfs.*InlineReference",
"text": "Welcome to the MongoDB Community @Maria_N_A!Per the earlier suggestions, there are three common approaches for working with images and other binary assets:GridFS: As suggested by @Sudhesh_Gnanasekaran, large images (or binary blobs) can be stored using the GridFS API. This API is supported by official MongoDB drivers: it splits large files into smaller chunks (255KiB by default) which are stored as separate documents in an fs.chunks collection with a reference document including metadata in an fs.files collection (note: the default fs.* namespace can be changed). The GridFS API is a client-side implementation – a MongoDB deployment doesn’t have any special configuration for the underlying collection data. For more info on the implementation, see the GridFS spec on GitHub.Inline: As suggested by @Prasad_Saya, smaller images (within the 16MB document size limit) can be stored directly in a MongoDB document using the BinData (binary data) BSON type.Reference: As suggested by @Andrew_W, images can be saved to an API or filesystem, with only the image reference stored in the database.Storing binary files in a database can be convenient for distributing across multiple locations (via replication), for working around file system limitations (eg files per directory or file naming), for serving streaming or protected content, or for storing larger assets that aren’t going to be served directly to end users. Aside from the GridFS documentation page that has already been linked in an earlier comment, Building MongoDB Applications with Binary Files using GridFS (part 1 and part 2) may also be helpful reading.If images or large binary assets are being served directly to end users, the Reference approach is usually most suitable because files can be pushed out to an API and/or CDN (Content Delivery Network) and cached/resized for better user experience. There is less overhead serving images directly from a web server versus going through an application server and database server for every request. A downside of using references is that they can get out of sync with the source document.There are also hybrid use cases, such as storing large images (for example, raw images from a digital camera or phone) in the database and then passing those to an API or image processing library to create resized versions which will be served directly to end users.Before deciding to store images in your database, I would make sure there is a clear benefit for the intended use case.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "A post was split to a new topic: Guide or tutorial to referencing images with S3 in Realm functions?",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Process of storing images in MongoDB | 2021-01-30T19:17:02.147Z | Process of storing images in MongoDB | 143,317 |
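The GridFS flow described in the thread above (files split into fs.chunks documents, with metadata in fs.files) is implemented entirely by the driver. Below is a minimal, hedged sketch of storing and retrieving an image with the Node.js driver's GridFSBucket; the connection string, database name, bucket name, and file paths are placeholders, not values from the thread:

```typescript
import { MongoClient, GridFSBucket } from "mongodb";
import * as fs from "fs";

async function main() {
  // Placeholder connection string and database name
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const db = client.db("mediaDemo");

  // Chunks go to images.chunks, metadata to images.files
  const bucket = new GridFSBucket(db, { bucketName: "images" });

  // Upload: stream a local file into GridFS
  await new Promise<void>((resolve, reject) => {
    fs.createReadStream("./photo.jpg")
      .pipe(bucket.openUploadStream("photo.jpg"))
      .on("finish", () => resolve())
      .on("error", reject);
  });

  // Retrieve: stream the stored file back out by filename
  await new Promise<void>((resolve, reject) => {
    bucket.openDownloadStreamByName("photo.jpg")
      .pipe(fs.createWriteStream("./photo-copy.jpg"))
      .on("finish", () => resolve())
      .on("error", reject);
  });

  await client.close();
}

main().catch(console.error);
```

For the Inline approach, a small image could instead be stored as a BinData field in a regular document, which only makes sense while the whole document stays under the 16 MB limit.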
null | [
"queries",
"atlas-search",
"atlas",
"text-search"
] | [
{
"code": "{\n index: 'fts',\n\"wildcard\": {\n \"query\": \"blo* *orders\",\n \"path\": \"Text\",\n allowAnalyzedField:true\n }\n}\n \"Text\": {\n \"analyzer\": \"lucene.standard\",\n \"multi\": {\n \"mySecondaryAnalyzer\": {\n \"analyzer\": \"lucene.english\",\n \"type\": \"string\"\n },\n \"wildcardAnalyzer\": {\n \"analyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n }\n },\n \"type\": \"string\"\n }\n",
"text": "Hello!\nIs there a way to do wildcard search in phrases?\nI have a term “blood disorder” and my search query is “blo order”. I would like to see “blood disorder” in the result. Any ideas of how to do this?\nI tried the following and it didn’t work:Snippet of the analyzer:I have the lucene.keyword and standard analyzer on the field that has terms.Thanks,\nSupriya",
"username": "Supriya_Bansal"
},
{
"code": "",
"text": "Hi @Supriya_Bansal,It sounds like this type of search is more suitable to regex operator no?https://docs.atlas.mongodb.com/reference/atlas-search/regex/I haven’t tested the wildcard pattern yet, but using a custom analyzer with ngram adjustment might better tokenize results for your searches…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "minGramwildcard{\n index: 'fts',\n \"compound\": {\n \"must\":[\n {\n \"wildcard\": {\n \"query\": \"blo*\",\n \"path\": \"Text\",\n allowAnalyzedField:true\n }\n },\n {\n \"wildcard\": {\n \"query\": \"*orders*\",\n \"path\": \"Text\",\n allowAnalyzedField:true\n }\n }\n ]\n }\n}\n",
"text": "Thanks @Pavel_Duchovny. I tried nGram analyzer and it does work. However the search results are little different than expected because of the minGram. For now, I have decided to go with wildcard approach.Best,\nSupriya",
"username": "Supriya_Bansal"
}
] | Wildcard search in phrases | 2021-03-30T21:14:14.838Z | Wildcard search in phrases | 4,888 |
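Pavel's pointer to "a custom analyzer with ngram adjustment" corresponds to an Atlas Search index definition along these lines. This is a hedged sketch: the analyzer name and the minGram/maxGram values (which drive the over-matching Supriya observed) are illustrative choices, not values from the thread:

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "Text": {
        "type": "string",
        "analyzer": "myNGramAnalyzer"
      }
    }
  },
  "analyzers": [
    {
      "name": "myNGramAnalyzer",
      "tokenizer": { "type": "nGram", "minGram": 3, "maxGram": 7 },
      "tokenFilters": [ { "type": "lowercase" } ]
    }
  ]
}
```

Raising minGram cuts down noisy matches, but short query fragments like "blo" then stop matching at all, which is the trade-off that pushed this thread back to the compound wildcard approach.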
null | [
"compass"
] | [
{
"code": "",
"text": "Does MongoDB Compass use any kind of keep alive technology? It was noticed last night that we had occasional connections to the DB from Compass - which I had left running on my PC.",
"username": "Brian_Lang"
},
{
"code": "",
"text": "I don’t believe Compass itself sends any keep alive but the Node.js driver on top of which Compass is built likely does.",
"username": "Massimiliano_Marcon"
}
] | Compass question | 2021-03-31T15:13:19.313Z | Compass question | 1,614 |
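For context on the answer above: the Node.js driver keeps a monitoring connection to each cluster member and checks it on a fixed interval, which is what shows up as periodic connections from an idle Compass. A small sketch of the equivalent knob when building a client yourself; the URI and interval value are placeholders:

```typescript
import { MongoClient } from "mongodb";

// heartbeatFrequencyMS controls how often the driver's server-monitoring
// logic checks each cluster member; these checks appear as recurring
// connections in the server logs even when no queries are running.
const client = new MongoClient("mongodb://localhost:27017", {
  heartbeatFrequencyMS: 10000, // placeholder: check every 10 seconds
});
```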
[
"node-js"
] | [
{
"code": "",
"text": "I have been trying to develop a command for a Discord bot using NodeJs through Visual Studio Code. The command basically calculates your “weight” just by taking a random number from 1 to 1000. I wanted the bot to be able to memorize that number and return it if the person uses the command more than once. I really have no idea how to pull data from the database and return it through a message. Here is what I have so far in the code.",
"username": "horman_coax"
},
{
"code": "",
"text": "First of all, @horman_coax, welcome to the MongoDB Community Forums! We’re lucky to have you here! Second, let’s focus on the MongoDB aspect of this problem. Looks like you have commented out the code that interfaces with MongoDB. Could you please share what error you get when you run the code? Thank you!",
"username": "JoeKarlsson"
}
] | Need assistance pulling information from a database | 2021-04-05T01:29:54.072Z | Need assistance pulling information from a database | 2,129 |
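Since the original code never made it into the thread, here is one hedged way to implement the behavior described: generate the random "weight" once per user and return the stored value on every later use. The collection name, field names, and userId argument are assumptions for illustration:

```typescript
import { Collection } from "mongodb";

interface WeightDoc {
  userId: string; // e.g. the Discord user ID
  weight: number;
}

// Returns the user's stored weight, creating it on first use.
// $setOnInsert only applies when the upsert inserts a new document,
// so an existing weight is never overwritten.
async function getOrCreateWeight(
  weights: Collection<WeightDoc>,
  userId: string
): Promise<number> {
  await weights.updateOne(
    { userId },
    { $setOnInsert: { weight: Math.floor(Math.random() * 1000) + 1 } },
    { upsert: true }
  );
  const doc = await weights.findOne({ userId });
  return doc!.weight; // the document is guaranteed to exist after the upsert
}
```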
null | [] | [
{
"code": "",
"text": "Is there anyway to recover data in collection-.wt and index-.wt without the WiredTiger.wt metafile?I am running into a corrupt WiredTiger.wt file:2021-04-04T23:09:43.541+0000 E STORAGE [initandlisten] WiredTiger (-31802) [1617577783:541456][18206:0x7fb27b94cdc0], file:WiredTiger.wt, connection: WiredTiger.wt: handle-read: pread: failed to read 4096 bytes at offset 73728: WT_ERROR: non-specific WiredTiger errorAny thoughts?",
"username": "William_Crowell1"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @William_Crowell1!What specific version of MongoDB server are you running?Salvaging data from corrupted files is challenging, but your general options are as described in Recover corrupted files (MongoDB 3.0.12) .wt - #4 by Stennie.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie,\nI am running Mongo 3.2.7.\nThanks,\nBill Crowell",
"username": "William_Crowell1"
},
{
"code": "",
"text": "Hi @William_Crowell1,Please have a below link if that will help in your case.April, 1 2019: I've received a LOT of feedback on this article since it was published. I would like to point out that although the methods described here may still work, MongoDB introduced a --repair flag in 4.0.3 that simplifies this process...",
"username": "ROHIT_KHURANA"
},
{
"code": "",
"text": "Rohit and Stennie,Thanks for the link. I have read through this and tried the steps in the article. The issue is that you need a non-corrupt WiredTiger.wt file to use the wt commands in that article.How we resolved it is a bit of a hack, but here is what we did. We created a new and empty MongoDB 4.0.5 instance. Then, we created a new collection in this new instance for each of the collections in the old instance. The collection names do not have to be the same, but the collection-.wt filenames must be static on the new instance. These collection-.wt file names cannot be predetermined and cannot change. We had about 42 collections in our old instance, therefore, we created 42 new collections on the 4.0.5 instance. Next, we inserted an arbitrary string (document) into these new collections. They cannot be empty collections because the repair step will not work. We then copied over the old collection*-.wt files while again keeping the new collection file names. Next, we started the MongoDB 4.0.5 instance with the --repair option. This added the metadata to the WiredTiger.wt file. The challenge is mapping the collection names from the old instance over to the new instance, but we at least have the data. Lastly, we ran a script to recreate the indexes on the collections.Again, a hack but it worked.Regards,Bill Crowell",
"username": "William_Crowell1"
},
{
"code": "",
"text": "The engineer who resolved this issue wrote the following article on Medium on how it was resolved: Repairing MongoDB When WiredTiger.wt File is Corrupted | by Ido Ozeri | Medium",
"username": "William_Crowell1"
}
] | Is there any way to recover data in collection-*.wt and index-*.wt without the WiredTiger.wt metafile? | 2021-04-04T23:10:18.685Z | Is there any way to recover data in collection-*.wt and index-*.wt without the WiredTiger.wt metafile? | 11,969 |
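Bill's salvage procedure can be sketched roughly as the shell session below. This is a hedged outline reconstructed from his description, not a tested recipe: paths, port, database name, collection count, and the example file names are all placeholders, and the old-to-new collection mapping still has to be recorded by hand:

```bash
# 1. Stand up a fresh, empty MongoDB instance
mongod --dbpath /data/salvage --port 27018 --fork --logpath /data/salvage/mongod.log

# 2. Create one placeholder collection per damaged collection, inserting a
#    dummy document so each gets its own non-empty collection-*.wt file
mongo --port 27018 --eval '
  for (var i = 0; i < 42; i++) {
    db.getSiblingDB("salvage")["col" + i].insertOne({ placeholder: true });
  }'

# 3. Stop the instance, then overwrite the new collection-*.wt files with the
#    old ones, KEEPING the new file names (note which old file went where)
mongod --dbpath /data/salvage --shutdown
cp /data/old/collection-7--1111.wt /data/salvage/collection-1--2222.wt  # example pair only

# 4. Restart with --repair so the WiredTiger.wt metadata is rebuilt,
#    then recreate indexes with a script as described in the thread
mongod --dbpath /data/salvage --repair
```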
null | [
"atlas-search"
] | [
{
"code": "",
"text": "We love the atlas search but are there any plans to relax this restriction that it must be the first stage? We run multi-tenant setups and if we could add a match before the search it would improve performance a lot.",
"username": "Mark_Lynch"
},
{
"code": "$search: {\n compound: {\n // You can use `should`, `must`, or `mustNot` as needed for your actual query in here\n filter: [{ // You can filter on multiple things in here\n equals: {\n path: '_tenantIdFieldHere',\n value: ObjectId('tenant id'),\n },\n }],\n },\n},\n",
"text": "Depending on how you are handling multi-tenancy, you can filter with the compound operator in the $search pipeline stage.For example:",
"username": "Nathan_Knight"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $search must be the first stage of any pipeline it appears in | 2020-05-08T13:30:29.258Z | $search must be the first stage of any pipeline it appears in | 2,678 |
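To make the accepted answer above concrete: because $search must come first, the tenant scoping moves inside it rather than into a preceding $match. A hedged sketch of the full pipeline from the Node.js driver, where the index name, field names, and tenant id are assumptions:

```typescript
import { Collection, ObjectId } from "mongodb";

// `filter` clauses restrict candidate documents without affecting
// relevance scores, which makes them a good fit for tenant scoping.
async function searchForTenant(
  docs: Collection,
  tenantId: ObjectId,
  query: string
) {
  return docs
    .aggregate([
      {
        $search: {
          index: "default", // placeholder index name
          compound: {
            filter: [{ equals: { path: "_tenantId", value: tenantId } }],
            must: [{ text: { query, path: "title" } }],
          },
        },
      },
      { $limit: 20 },
      { $project: { title: 1, score: { $meta: "searchScore" } } },
    ])
    .toArray();
}
```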
null | [
"swift"
] | [
{
"code": "Projectrealm.objects(Project.self)schemaVersionmigrationBlockconfig.schemaVersion = 1 // Older version was 0\nconfig.migrationBlock = { migration, oldSchemaVersion in\n logger.debug(\"Performing migration since old schema \\(oldSchemaVersion) is behind current schema \\(currentSchemaVersion) ...\")\n if oldSchemaVersion < 1 {\n migration.enumerateObjects(ofType: Project.className()) { (old, new) in\n new![\"isDeleted\"] = false\n }\n })\n}\n...\nschemaVersion = 1;\nmigrationBlock = <__NSMallocBlock__: 0x600001537ed0>;\n...\nRealm.asyncOpen(configuration: config)Realm(configuration: config)...\nschemaVersion = 1;\nmigrationBlock = (null);\n...\nProject.selfSwiftUI.App",
"text": "Hi,I’m using Realm database with iOS and SwiftUI. I added one new field to one of my Realm object schema, let’s call it Project. Now when I do realm.objects(Project.self) it complaints that I need to do a migration. So as per the iOS migration doc, I added a schemaVersion and migrationBlock to my realm init code as follows.I printed out config just before opening the Realm, and I can see that the config has a migration block field set.I printed the Realm config again after I tried opening Realm as Realm.asyncOpen(configuration: config) and Realm(configuration: config), but the migration block is missing.My migration block is not getting triggered and I suspect that I’m missing something! Thank you in advance for any help in the right direction EditAs per some Stack Overflow posts, I also tried using a bigger schema version (2, 5, 10 to be specific). Also, I initialize my main realm (which Project.self is a part of) when the SwiftUI.App is initialized.",
"username": "siddharth_kamaria"
},
{
"code": "didFinishLaunchingWithOptionsRealm.Configuration.defaultConfiguration = config",
"text": "It think a more complete code sample and additional info is needed.Oh and this is a cross post to SO in case an answer pops up at either site",
"username": "Jay"
},
{
"code": "didFinishLaunchingWithOptionsRealm.Configuration.defaultConfiguration = configwithRealmConfig()AppStateuser=\\(user.id)// AppState.swift\n\nimport Foundation\nimport RealmSwift\nimport Combine\n\nclass AppState : ObservableObject {\n\n static let shared = AppState()\n\n let app = RealmSwift.App(id: \"tasktracker-abcd\")\n\n @Published var isPremiumUser = false\n @Published var user: RealmSwift.User? = nil\n @Published var userRealm: Realm? = nil\n\n // Other fields and vars\n\n private init() {\n // Open realm when a user logs in\n $user.compactMap { $0 }\n .eraseToAnyPublisher()\n .receive(on: DispatchQueue.main)\n .flatMap { openRealm(for: \"user=\\($0.id)\") }\n .sink(receiveCompletion: { [weak self] completion in\n if case let .failure(error) = completion {\n print(\"Unable to open realm due to error: \\(error.localizedDescription)\")\n }\n }, receiveValue: { [weak self] realm in\n print(\"User realm opened successfully! Realm config: \\(realm.configuration)\")\n self?.userRealm = realm\n })\n .store(in: &subscriptions)\n }\n}\n.../Documents/<user_id>/<user_partition>.realm// RealmService.swift\n\nimport Foundation\nimport RealmSwift\nimport Combine\n\nfunc openRealm(for partition: String) -> AnyPublisher<Realm, Error> {\n return Just(partition)\n .filter { !$0.isEmpty }\n .receive(on: DispatchQueue.main)\n .compactMap(withRealmConfig(partitionValue:))\n .flatMap(openCorrectRealmFlavor(with:))\n .eraseToAnyPublisher()\n}\n\nprivate func withRealmConfig(partitionValue: String) -> Realm.Configuration? {\n \n var config: Realm.Configuration\n\n // Init realm config with or without sync \n if AppState.shared.isPremiumUser {\n config = user.configuration(partitionValue: partitionValue)\n } else {\n config = Realm.Configuration.defaultConfiguration\n }\n deletePathComponentsTillDocsDir(config.fileURL) // Reduces path to app containers \"Documents\" dir\n config.fileURL?.appendPathComponent(user.id) // Creates a user dir with the user.id\n \n // Create dir if not exists using FileManager\n do {\n try createRealmDirIfNotPresent(dir: config.fileURL!)\n } catch let error {\n logger.error(\"Error creating directory for realm: \\(error.localizedDescription)\")\n return nil\n }\n \n // Set realm filename and extension\n config.fileURL?.appendPathComponent(partitionValue.encoded())\n config.fileURL?.appendPathExtension(\"realm\")\n\n // Perform migrations if required\n config.schemaVersion = currentSchemaVersion\n config.migrationBlock = { migration, oldSchemaVersion in\n logger.debug(\"Performing migration since old schema \\(oldSchemaVersion) is behind current schema \\(currentSchemaVersion) ...\")\n if oldSchemaVersion < 1 {\n migration.enumerateObjects(ofType: Project.className()) { (old, new) in\n new![\"isDeleted\"] = false\n }\n }\n }\n \n print(\"Realm config: \\(config)\")\n return config\n}\n\nprivate func openCorrectRealmFlavor(with config: Realm.Configuration) -> AnyPublisher<Realm, Error> {\n if AppState.shared.isPremiumUser {\n print(\"Opening online sync'ed realm...\")\n return openSyncedRealm(with: config)\n } else {\n print(\"Opening local realm...\")\n return openLocalRealm(with: config)\n }\n}\n\nprivate func openSyncedRealm(with config: Realm.Configuration) -> AnyPublisher<Realm, Error> {\n return Realm.asyncOpen(configuration: config).eraseToAnyPublisher()\n}\n\nprivate func openLocalRealm(with config: Realm.Configuration) -> AnyPublisher<Realm, Error> {\n return Result { try Realm(configuration: config) }.publisher.eraseToAnyPublisher()\n}\n",
"text": "Hi @Jay,I’ll provide some more context and code. First let me answer your 3 questions.Full Code - I initialize a class called AppState as a singleton. This class handles user login and creation of user realm with partition as user=\\(user.id). Since, it is a static singleton object, it will be instantiated at application start.And here’s my Realm initialization code. As described earlier, I use Sync’ed realm as well local realm based on whether the user is a premium user or not. Also, my custom config mainly changes the realm path to .../Documents/<user_id>/<user_partition>.realm instead of using the default one. Apart from that, it also adds the migration block and schema version.Thank you in advance for all the assistance!",
"username": "siddharth_kamaria"
},
{
"code": "didFinishLaunchingWithOptionsimport SwiftUI\nimport RealmSwift\n\nclass AppDelegate: NSObject, UIApplicationDelegate {\n func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey : Any]? = nil) -> Bool {\n print(\"Migration Code Goes here\")\n return true\n }\n}\n\n@main\nstruct SwiftUI_TestApp: App {\n \n @UIApplicationDelegateAdaptor(AppDelegate.self) var appDelegate\n \n var body: some Scene {\n WindowGroup {\n ContentView()\n }\n }\n}\nRealm.Configuration.defaultConfiguration = configwithRealmConfig()",
"text": "I think didFinishLaunchingWithOptions is a UIKit concept from AppDelegate. But I’m using SwiftUI so I’m not 100% sure what is the equivalent of it there. Correct me if I’m mistaken.The migration needs to start at app launch. This is untested but you can add a class to give your app the traditional appDelegate functionality like thisand then add your migration code in the AppDelegate. Again, that’s off the top of my head and 100% untested so if it doesn’t work, I will update. There are probably better options as well, but this was the first that came to mind considering the ObjC underpinnings of Realm.I’m using Realm Sync for premium usersI don’t think you can migrate sync’d realms - the file structure is different. It’s not clear if you are attempting to migrate a local realm as well as a sync’d realm though. I could be wrong on that point so please correct me if soI tried assigning Realm.Configuration.defaultConfiguration = config in my withRealmConfig() method below, but that had no effect on migration block whatsoever.This should now work if the AppDelegate code works.",
"username": "Jay"
},
{
"code": "",
"text": "Hi @Jay,I’ll try point 1 and 3, out and let you know. Regarding point 2 - yes my migration covers sync’ed realms as well as local realms. The case that I tried was for a sync’ed realm. If the migration doesn’t work in sync’ed realms, what is the alternate option - populate the new fields manually in Atlas, delete the realm and reinitialize it?Regards,\nSid",
"username": "siddharth_kamaria"
},
{
"code": "",
"text": "Hi @Jay,Thank you for pointing out that migration blocks don’t work for sync’d realms. I tried my code for local realm and it triggers the migration block when schema version is changed.It would be a good idea to have this behavior highlighted on the docs page as a gotcha! I hope someone from Realm team updates it.Regards,\nSid",
"username": "siddharth_kamaria"
},
{
"code": "",
"text": "Adding @Chris_Bush here since this might be of interest for the docs team too. ",
"username": "Dominic_Frei"
}
] | Realm schema migration block not getting triggered | 2021-03-19T19:25:18.613Z | Realm schema migration block not getting triggered | 4,678 |
null | [
"data-modeling"
] | [
{
"code": "{\n \"Kai\"\n {\n \"Texts\"\n {\n \"text\": \"my text\"\n {\n {\"lik\": \"lisa\", \"date\": \"124561\"},\n {\"lik\": \"lisa\", \"date\": \"124561\"}\n }\n },\n \"Pictures\"\n {\n {\"url\": \"vffdvgf\", \"date\": \"124561\"},\n {\"url\": \"bfgfgbfg\", \"date\": \"124561\"}\n }\n }\n }\n }\n}\n",
"text": "I want to build a website where people can register, they create a profile and they share text and pixtures and other can like them.I want to save all users in a collection, my idea is to save them like i would do with a JSON File.I dont know if MongoDB have limits or how good it would perform, i have here a example how my idea looks in Json format:users:i hope that example helps to understand my idea, so into a Mongo Collection (Tabel) i would save all users and also the pictures and text from the users and if somebody like somethink i put it also into there. Then if somebody open the user profil i would only need to read everythink which is saved into the Mongo Entry for that username, what do you think will this work good?And is there some limits how many users i can save into one collection (table)?",
"username": "Florian_Silbereisen"
},
{
"code": "{\n \"Kai\"\n {\n \"Texts\"\n\"lik\": \"lisa\"\"lik\"stringnumberdatearrayobject\"Pictures\"\"pictures\": [\n { \"url\": \"https://picsum.photos/200/300\", \"date\": ISODate(\"2021-03-30T05:04:55.041Z\") },\n { \"url\": \"https://picsum.photos/200\", \"date\": ISODate(\"2021-03-28T00:00:00Z\") }\n]\npicturesurldateurldateusers",
"text": "Hello @Florian_Silbereisen, welcome to the MongoDB Community forum!MongoDB data is stored as documents in collections. Each document has fields and their values - it is JSON like structure. The actual data is stored in the database as BSON types.Some comments about your document structure. A document has fields and values - and your structure has some things missing. For example, the following do not make a JSON (you are intending to create a JSON structure, and a JSON must have a field and value). See JSON.And, this \"lik\": \"lisa\" - it doesn’t comprehend very well. What is \"lik\"? If it is a field name it needs to be meaningful to everyone who work with the structure.There are number of field types a document can contain - string, number, date, array and object are some of them. You can take advantage of these in building your document structure. And a document can store upto 16 MB (mega bytes) of data.Let us take the \"Pictures\" from your structure. The pictures can be stored as an array of picture information. For example:The pictures field is an array, i.e., its type is array. Each element in the array is a sub-document (a.k.a. embedded document) and represents a picture’s attributes, the url and the date. The field type of the url is a string and that of the date is of type date. This structure allows that you can store dozens or hundreds or thousands of picture information within a document. Also, the MongoDB query language (MQL) allows querying these picture information efficiently.I suggest you make your present structure a proper JSON. The usage is an aspect of how you structure the document. How are the documents structured? It is another subject, generally known as data modeling or database design (see Data modeling Introduction).And is there some limits how many users i can save into one collection (table)?The first question is: “How many users are you planning to store in your users collection?”. Based on that information you can plan the storage requirements for your database. Also, see MongoDB Limits and Thresholds.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Great answer, Prasad!As you mentioned, the way that you’ll query the data is super important in determining how you should store your data. The rule of thumb when modeling data in MongoDB is: data that is accessed together should be stored together.",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "i have a new question, currently i dont understand why the mongo database show me the info that i have 1 document, please look to my picture:\nmongo1312×900 45.2 KBi see that i have 3 entrys in the collection, in mysql this would mean i table with 3 entrys, why here in mongo it show me 1 collection and 1 documents? and how much Megabyte can i save into such a collection or document?",
"username": "Florian_Silbereisen"
},
{
"code": "",
"text": "Hi Florian,I’m not sure why you’re seeing inconsistent information about how many documents are in your collection. I’m hopeful that if you refresh the page, you’ll see 3 documents in your collection.A document is roughly equivalent to a row in MySQL. A collection is roughly equivalent to a table in MySQL. For more information on how terms map between relational databases and MongoDB, check out my blog post on the topic:\nhttps://www.mongodb.com/article/map-terms-concepts-sql-mongodb/The inner screenshot above shows you have 3 documents in the the users collection.Documents have a 16mb size limit. See https://docs.mongodb.com/manual/reference/limits/#:~:text=The%20maximum%20BSON%20document%20size,MongoDB%20provides%20the%20GridFS%20API for more information.",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "You are right when i load the website new it does show me now 3 documents. I did not know that you need to reload the site.thank you also for the other infos.",
"username": "Florian_Silbereisen"
},
{
"code": "",
"text": "I’m happy to hear the reload fixed the problem!",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "Hello, i have a question about the following, in my database i have save a user with his login infos as a document into the “users” collection. I want to add into this user a couple of “user questions”, the questions i want to write as a object which contains all questions. I see now during testing that my querys does update all user questions, what i want to have is that a user question gets only updated if already exist or it should be add if not exist. What i understand currently is that the databse does see my query for insert and update like a query which should update the complete questions object, but i want the query to only update the questions which already exist or that it adds the questions as a new object entry if the questions does not already exist and every questions also have one number, so maybe it could be possible to archive this, but i dont know how to write the query correctly, please take a look at my current database entry and my query:\nquestion1370×690 44.7 KB",
"username": "Florian_Silbereisen"
},
{
"code": "",
"text": "@Florian_Silbereisen This looks like a new question. Can you open a new topic for it? This allows us to ensure questions don’t get buried and allows for others in the future to quickly find the answer that has been marked as correct.",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "ok here is the new topic: Insert or update into a object with one query",
"username": "Florian_Silbereisen"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Questions about MongoDB Document Structure | 2021-03-29T17:42:58.026Z | Questions about MongoDB Document Structure | 3,451 |
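As a small illustration of Prasad's embedded-array suggestion and Lauren's rule of thumb (data that is accessed together should be stored together), here is a hedged sketch of inserting one such user document and querying inside its pictures array; all names, URLs, and dates are illustrative:

```typescript
import { MongoClient } from "mongodb";

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017"); // placeholder URI
  const users = client.db("social").collection("users");

  // One document holds the profile plus its embedded pictures,
  // so rendering a profile page is a single read.
  await users.insertOne({
    name: "Kai",
    pictures: [
      { url: "https://picsum.photos/200/300", date: new Date("2021-03-30") },
      { url: "https://picsum.photos/200", date: new Date("2021-03-28") },
    ],
  });

  // Find users with a picture uploaded on or after a given date;
  // $elemMatch applies the condition to individual array elements.
  const recent = await users
    .find({ pictures: { $elemMatch: { date: { $gte: new Date("2021-03-29") } } } })
    .toArray();
  console.log(recent);

  await client.close();
}

main().catch(console.error);
```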
null | [] | [
{
"code": "",
"text": "Hi Team,https://jira.mongodb.org/browse/SERVER-53477 is a Jira ticket where some code changes were done for ThreadPool implementation. But the fixed Version Says 4.8.0. is it a internal development branch numbering?How do i request mongo to backport the fix in 4.0 release. I added a comment in the Jira ticket but want to know what is the formal procedure.",
"username": "venkataraman_r"
},
{
"code": "X.YY",
"text": "Hi @venkataraman_r,The fixVersion for SERVER-53477 is 4.9.0, which refers to a development/unstable version that will eventually become MongoDB 5.0. The historical MongoDB versioning scheme uses odd numbered release series (X.Y, where Y is odd) to indicate development series. The next major release series of MongoDB will be 5.0, which also includes a new quarterly release schedule and versioning approach: Accelerating Delivery with a New Quarterly Release Cycle, Starting with MongoDB 5.0.SERVER-53477 was raised during the development cycle and it is not clear if the code changes are relevant for 4.0. If there is benefit in a backport, the team responsible for that development area will assess the effort and risk involved.I assume you are hoping this may address SERVER-54805, but more investigation is needed before determining if that is the case.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What is the Mongo version 4.9.0 in Fixed Version refers to | 2021-04-05T21:14:46.592Z | What is the Mongo version 4.9.0 in Fixed Version refers to | 3,104 |
[
"swift"
] | [
{
"code": "import SwiftUI\nimport RealmSwift\n\nclass AutoMaker: Object, ObjectKeyIdentifiable {\n @objc dynamic var name = \"\"\n}\n\nstruct ContentView: View {\n @State private var text = \"\"\n @ObservedResults(AutoMaker.self) var autoMakers\n \n var body: some View {\n TextField(\"Automaker Name\", text: $text)\n Button(action: addItem) {\n Text(\"Add\")\n }\n List {\n ForEach(autoMakers) {autoMaker in\n Text(autoMaker.name)\n }\n .onDelete(perform: $autoMakers.remove)\n }\n }\n \n func addItem() { \n let maker = AutoMaker()\n maker.name = text\n $autoMakers.append(maker)\n }\n}\n\nstruct ContentView_Previews: PreviewProvider {\n static var previews: some View {\n ContentView()\n }\n}\n",
"text": "Hi, I’m new to the MongoDB community with an interest in using MongoDB Realm with SwiftUI. Using some of the SwiftUI samples provided with Realm as guidance, I wanted to write an example of using Realm with the smallest amount of code. This is what I came up with (not even an HStack or padding to make it look nice):And this is what it looks like:It amazes me that I can create a working sample that adds and deletes items from a database with less than 20 lines of added code.I do have a question however. In keeping with the theme of writing the smallest amount of code, if I want to add sample data to show up in the Preview window, what is the simplest way to do that? I found code in Andrew Morgan’s RChat example where there is a Realm.bootstrap() function called as the first line in PreviewProvider code (here). The function loads a realm database before the preview code is called. In that example, each Realm object class implements a protocol called Samplable in an extension and then provides sample data.Is this the simplest way to test a SwiftUI view that uses Realm? Or is there a briefer way?Thanks!\n–Tom",
"username": "TomF"
},
{
"code": "",
"text": "Thanks for the little code! That will be very useful to a lot of people that just getting started.It sounds like you’re asking for a pre-populated Realm with test data. Is that correct? One option is to bundle a Realm with your app and then make a copy of it. Bundled Realms are read only so the copy would then provide read/write access.There’s a section in the legacy Realm Documentation about Bundling a Realm.Alternately on app start you can query data or the Realm file and if it doesn’t exist, create some realm objects and write them.If that’s not what you’re asking, can you clarify?",
"username": "Jay"
},
{
"code": "struct ContentView_Previews: PreviewProvider {\n static var previews: some View {\n ContentView()\n }\n}\nstruct AuthorView_Previews: PreviewProvider {\n static var previews: some View {\n Realm.bootstrap()\n \n return AppearancePreviews(AuthorView(userName: \"[email protected]\"))\n .previewLayout(.sizeThatFits)\n .padding()\n }\n}\nextension Realm: Samplable {\n static var samples: [Realm] { [sample] }\n static var sample: Realm {\n let realm = try! Realm()\n try! realm.write {\n realm.deleteAll()\n User.samples.forEach { user in\n realm.add(user)\n }\n Chatster.samples.forEach { chatster in\n realm.add(chatster)\n }\n ChatMessage.samples.forEach { message in\n realm.add(message)\n }\n }\n return realm\n }\n \n static func bootstrap() {\n do {\n let realm = try Realm()\n try realm.write {\n realm.deleteAll()\n realm.add(Chatster.samples)\n realm.add(User(User.sample))\n realm.add(ChatMessage.samples)\n }\n } catch {\n print(\"Failed to bootstrap the default realm\")\n }\n }\n}\n",
"text": "Actually I was referring to providing data to be used in the Preview area of XCode. So test Realm data to be used in this part of my code:I want to be able to add test data to the PreviewProvider so that I can work on my app using the Preview canvas, which will allow me to work without having to build and run the app each time I make a change to a view. In the RChat app, test data is provided to the Preview area of some views by calling Realm.bootstrap():That bootstrap function on Realm was added as an extension (the RChat source code is here), and looks like this:So I’m just wondering if there is a simpler way to provide test Realm data to the Preview window (I’m trying to figure out the simplest way). Or would I need to create a similar bootstrap method?",
"username": "TomF"
},
{
"code": "ContentView",
"text": "Hi @TomF, love your sample app!How to work with SwiftUI (Canvas) previews is a bit of an ongoing obsession of mine. I find them incredibly useful, but frustrating at times!When they’re set up, being able to see a live preview of any view (including light and dark modes simultaneously) is great. Trying to debug them, not so much – unexplained, opaque error messages and no ability to add breakpoints or even print to the console. It’s worth remembering that these previews are just another view, and so I will sometimes replace my ContentView with the contents of one of my previews so that I can debug it.I’m looking forward to seeing what other ideas people have to work with Realm data (especially when working with Realm Sync and partitions).",
"username": "Andrew_Morgan"
},
{
"code": "let app: RealmSwift.App? = nil\nvar gRealm: Realm! = nil\n\nstruct ContentView: View {\n @ObservedResults(Your_observed_model.self) var myModel\n @State var navigationViewIsActive: Bool = false\n \n init() {\n gRealm = try! Realm(configuration: Realm.Configuration(inMemoryIdentifier: \"MyInMemoryRealm\"))\n\n let user0 = UserClass(name: \"Jay\")\n let user1 = UserClass(name: \"Leroy J.\")\n let chat0 = ChatsterClass(chat: \"chat 0\")\n let chat1 = ChatsterClass(chat: \"chat 1\")\n let message0 = MessageClass(msg: \"Hello, World\")\n let message1 = MessageClass(msg: \"To the moon!\")\n\n try! self.realm.write {\n gRealm.add([user0, user1, chat0, chat1, message0, message1])\n }\n }\n ...\n",
"text": "If you want to provide Realm backing data to SwiftUI, an in-memory Realm is a super simple approach and a minimal amount of code. This can be used with Swift or SwiftUI as Realm is the backing data is a separate from the UI itself.Here’s a quickie example to populate an in memory realm with two users, two chat and two messagesThe above code would run at app start and would make those objects available throughout the app. You could add, edit and remove from the in memory realm just like an on disk or sync’d realm.if there is a simpler way to provide test Realm data to the Preview windowI think that’s about as simple as it can get!",
"username": "Jay"
},
{
"code": "struct ContentView_Previews: PreviewProvider {\n \n static var previews: some View {\n createData()\n return ContentView()\n }\n \n static func createData() {\n let realm = try! Realm()\n try! realm.write {\n realm.deleteAll()\n realm.add(AutoMaker(name: \"Ford\"))\n realm.add(AutoMaker(name: \"Honda\"))\n realm.add(AutoMaker(name: \"Volkswagen\"))\n }\n }\n}\nimport SwiftUI\nimport RealmSwift\n\nclass AutoMaker: Object, ObjectKeyIdentifiable {\n @objc dynamic var name = \"\"\n \n convenience init (name: String) {\n self.init()\n self.name = name\n }\n}\n\nstruct ContentView: View {\n @State private var text = \"\"\n @ObservedResults(AutoMaker.self) var autoMakers\n \n var body: some View {\n VStack {\n HStack {\n TextField(\"Automaker Name\", text: $text)\n Button(action: addItem) {\n Text(\"Add\")\n }\n }\n List {\n ForEach(autoMakers) {autoMaker in\n Text(autoMaker.name)\n }\n .onDelete(perform: $autoMakers.remove)\n }\n }\n .padding()\n }\n \n func addItem() {\n $autoMakers.append(AutoMaker(name: text))\n text = \"\"\n }\n}\n\nstruct ContentView_Previews: PreviewProvider {\n \n static var previews: some View {\n createData()\n return ContentView()\n }\n \n static func createData() {\n let realm = try! Realm()\n try! realm.write {\n realm.deleteAll()\n realm.add(AutoMaker(name: \"Ford\"))\n realm.add(AutoMaker(name: \"Honda\"))\n realm.add(AutoMaker(name: \"Volkswagen\"))\n }\n }\n}\n",
"text": "Thanks @Jay , I like the idea of using an in-memory Realm for adding some test code to the running app. However, what I was trying to figure out here is the simplest way to add some test data to Previews. In the WWDC 2020 presentation “Introduction to Swift” (here), the presenter, Jacob Xiao, builds up a sample Sandwich app over the course of the presentation. At the very end he says, “But there’s one last thing that I want to point out, and it’s something we didn’t see. We just built up this entire application and tested all of these rich behaviors without ever once building and running our app. Xcode Previews let us view, edit, and debug our application much faster than was ever possible before.”So in the spirit of trying to write a sample SwiftUI Realm app with the least amount of code, I wanted to also provide some test data to the Previews window with the least amount of code. That way I could continue building up the app without ever hitting the Build and Run button.I looked at Andrew’s RChat example some more and figured out how to add the test code directly to the Preview code:This worked to provided Realm data for the Preview window:What surprised me with this code is that the Realm sample data in the Previews window is independent of the data I add to the running app. I thought they both might be using the same default Realm instance and therefore might interfere with one another (the Previews code might erase what I add to the running code), but they don’t, which is great.Thanks also @Andrew_Morgan for the insights into using Previews. I like the tip on how you can debug Preview code. Hopefully Apple will introduce some improvements to debugging Previews in June. Yea, dealing with Realm Sync and partitions will be another level up.Here is my final code for creating a working Realm SwiftUI app with the least amount of code:Thanks all for the help.\n–Tom",
"username": "TomF"
},
{
"code": "",
"text": "One more thing I should add. For those wanting to run the code above, here are the complete steps:You now have a working SwiftUI app that persists data to a Realm database (using realm-cocoa version 10.7.2).",
"username": "TomF"
},
{
"code": "RemoteHumanReadableError: Failed to update preview.\n\nThe preview process appears to have crashed.\n\nError encountered when sending 'display' message to agent.\n\n==================================\n\n| RemoteHumanReadableError: The operation couldn’t be completed. (BSServiceConnectionErrorDomain error 3.)\n| \n| BSServiceConnectionErrorDomain (3):\n| ==BSErrorCodeDescription: OperationFailed\n",
"text": "Does this work reliably for anyone at all? When I try to add mock data in the preview, if I run the preview with the bootstrap method, I get an error. But then I take out the bootstrap method, and the mock data is there.",
"username": "Lukasz_Ciastko"
},
{
"code": "",
"text": "Hi @Lukasz_Ciastko,that preview is working for me image1040×974 119 KBIn general, I find Canvas previews one of the coolest and most frustrating/unreliable features of SwiftUI. I keep hoping that the next version will be more robust (or at least come with some debugging features) – perhaps there will be good news at the next WWDC!",
"username": "Andrew_Morgan"
}
] | A SwiftUI example with the smallest amount of code - How to add test data to preview | 2021-03-10T14:27:02.462Z | A SwiftUI example with the smallest amount of code - How to add test data to preview | 4,969 |
null | [
"student-developer-pack"
] | [
{
"code": "",
"text": "I unable to find the process to avail the discount on certification exam. I have already mailed to [email protected] but there has been no reply since the last 5 days. what should i do?",
"username": "Hardik_Gupta"
},
{
"code": "",
"text": "Hi Hardik,Welcome to the forum!I have responded to your email (I was out of the office on Monday and Friday due to our national holidays). I am closing this topic.Good luck with your exam!Lieke",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "",
"username": "Lieke_Boon"
}
] | How to get the 100% discount on certification exam | 2021-04-05T18:44:40.541Z | How to get the 100% discount on certification exam | 5,837 |
null | [
"replication"
] | [
{
"code": " rs.conf()\n {\n \t\"_id\" : \"rs0\",\n \t\"version\" : 11,\n \t\"protocolVersion\" : NumberLong(1),\n \t\"writeConcernMajorityJournalDefault\" : true,\n \t\"members\" : [\n \t\t{\n \t\t\t\"_id\" : 1,\n \t\t\t\"host\" : \"192.168.123.86:27017\",\n \t\t\t\"arbiterOnly\" : false,\n \t\t\t\"buildIndexes\" : true,\n \t\t\t\"hidden\" : false,\n \t\t\t\"priority\" : 3,\n \t\t\t\"tags\" : {\n \t\t\t\t\n \t\t\t},\n \t\t\t\"slaveDelay\" : NumberLong(0),\n \t\t\t\"votes\" : 1\n \t\t},\n \t\t{\n \t\t\t\"_id\" : 2,\n \t\t\t\"host\" : \"192.168.123.87:27017\",\n \t\t\t\"arbiterOnly\" : false,\n \t\t\t\"buildIndexes\" : true,\n \t\t\t\"hidden\" : false,\n \t\t\t\"priority\" : 1,\n \t\t\t\"tags\" : {\n \t\t\t\t\n \t\t\t},\n \t\t\t\"slaveDelay\" : NumberLong(0),\n \t\t\t\"votes\" : 1\n \t\t}\n \t],\n \t\"settings\" : {\n \t\t\"chainingAllowed\" : true,\n \t\t\"heartbeatIntervalMillis\" : 2000,\n \t\t\"heartbeatTimeoutSecs\" : 10,\n \t\t\"electionTimeoutMillis\" : 10000,\n \t\t\"catchUpTimeoutMillis\" : 60000,\n \t\t\"catchUpTakeoverDelayMillis\" : 30000,\n \t\t\"getLastErrorModes\" : {\n \t\t\t\n \t\t},\n \t\t\"getLastErrorDefaults\" : {\n \t\t\t\"w\" : 1,\n \t\t\t\"wtimeout\" : 0\n \t\t},\n \t\t\"replicaSetId\" : ObjectId(\"58764207c0fb84b262e464aa\")\n \t}\n }\n rs.printSlaveReplicationInfo()\n source: 192.168.123.87:27017\n \tsyncedTo: Mon Jun 29 2020 22:59:56 GMT+0200 (CEST)\n \t221205 secs (61.45 hrs) behind the primary \n-- primary log --\n 2020-07-02T10:38:50.733+0200 I COMMAND [LogicalSessionCacheRefresh] command config.$cmd command: update { update: \"system.sessions\", ordered: false, allowImplicitCollectionCreation: false, writeConcern: { w: \"majority\", wtimeout: 15000 }, $db: \"config\" } numYields:0 reslen:383 locks:{ Global: { acquireCount: { r: 1253, w: 1165 } }, Database: { acquireCount: { w: 1165 } }, Collection: { acquireCount: { w: 1165 } } } storage:{} protocol:op_msg 30651ms\n 2020-07-02T10:38:50.743+0200 I CONTROL [LogicalSessionCacheRefresh] Failed to refresh session cache: WriteConcernFailed: waiting for replication timed out; Error details: { wtimeout: true }\n-- secondary log --\n 2020-07-02T10:39:32.575+0200 I NETWORK [LogicalSessionCacheReap] Starting new replica set monitor for rs0/192.168.123.86:27017,192.168.123.87:27017\n 2020-07-02T10:39:32.577+0200 I NETWORK [LogicalSessionCacheReap] Successfully connected to 192.168.123.86:27017 (1 connections now open to 192.168.123.86:27017 with a 0 second timeout)\n 2020-07-02T10:39:32.577+0200 I NETWORK [LogicalSessionCacheRefresh] Successfully connected to 192.168.123.86:27017 (2 connections now open to 192.168.123.86:27017 with a 0 second timeout)\n 2020-07-02T10:39:32.577+0200 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for rs0/192.168.123.86:27017,192.168.123.87:27017\n 2020-07-02T10:39:32.577+0200 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for rs0/192.168.123.86:27017,192.168.123.87:27017\n 2020-07-02T10:39:48.441+0200 I CONTROL [LogicalSessionCacheRefresh] Failed to refresh session cache: WriteConcernFailed: waiting for replication timed out; Error details: { wtimeout: true }\n",
"text": "Hi,I have MongoDB 4.0 with two replicaset members.After the initial synchronization the secondary stays days behind the primary server.There are timeout errors every five minutes in the logs:What can I do to synchronize the replica set?wbr Tomaz",
"username": "Tomaz_Beltram"
},
{
"code": "rs.status()rs.printReplicationInfo()",
"text": "Hi Tomaz,I’m not sure why your replication state is like this, although it’s been some time since you posted this question. Are you still having this issue?If yes, could you post:And also please describe the hardware provisioned for the two nodes.Note that having an even number replica set nodes is not a recommended configuration. It is recommended to have at least three nodes for high availability purposes. Please see Replica Set Deployment Architectures for more information.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kevin,\nThe MongoDB version is 4.0.19 running on Ubuntu 18.04 and it now doesn’t have this errors any more. I think the system was just overloaded, the load average was almost at 5. The rs.status() showed heartbeat was working but the rs.printReplicationInfo() stated that secondary is more than 60 hours behind.\nCould the reason be that the total index size is higher than system memory (64GB) and MongoDB is continusly reloading indices?\nThanks for your suggestions.\nwbr Tomaz",
"username": "Tomaz_Beltram"
},
{
"code": "",
"text": "Hi Kevin,I also have PSA architecture setup and in logs I am also seeing same error message and it takes almost 10 hours for my secondary server to sync and once it syncs back, again after few hours it will start lagging from primary. I have resized the oplog also almost to 2 TB , still the issue is not resolved. Please can you let us know how to fix this issue?\nWhich way can we handle this issue as we have baremetals and load average is not more than 5.",
"username": "Mamatha_M"
},
{
"code": "",
"text": "Hi Kevin,I am still facing the same lag issue , I tried initial sync twice but it was of no use. I see the same msg continuously in the log:Thu Feb 4 12:20:09.239 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for MONGO_PR/xxx.com:12011,xxx.com:12012,yyy.com:12013.",
"username": "Mamatha_M"
},
{
"code": "",
"text": "Hi Mamatha,Are you still facing this issue. As requested by Venkataraman in other thread:\nCan you please check the following output to check if SEC is running into any hang issue.Also please share the OS information.Thanks,\nKiran",
"username": "Kiran_Pamula"
}
] | Replication errors | 2020-07-02T11:18:58.606Z | Replication errors | 6,689 |
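When chasing lag like this, the usual first checks from the mongo shell are the helpers below (names as of the 4.0-era shell used in this thread); they show how far each secondary is behind and how much oplog headroom the primary retains:

```javascript
// On the primary: oplog size and the time window it covers. If the
// window is smaller than the observed lag, a lagging secondary can
// fall off the oplog and be forced into a full initial sync.
db.printReplicationInfo()

// Per-secondary lag behind the primary (renamed to
// rs.printSecondaryReplicationInfo() in newer shells)
rs.printSlaveReplicationInfo()

// Full member state, including optimes and heartbeat health
rs.status()
```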
null | [
"connecting"
] | [
{
"code": "",
"text": "Before I begin, please let me be specific that issue may not be related to MongoDB, this could be some error/mistake at my end. I am just seeking help from the community.I have a mongodb atlas cluster where I have added 2 IPs in its whitelist (say ip1 and ip2)\nip1 belongs to a windows webserver hosted on google cloud. I am able to connect to atlas through this ip.ip2 belongs to a centos7 instance running tomcat on google cloud. I have a webapp which is trying to make a connection to mongodb atlas but I am getting timeout exception saying permission denied.My connection string is the same for both cases so that is not the point of failure. I believe there is some issue in tomcat which is blocking this webapp to connect to mongodb atlas.Please help.",
"username": "Mydesk_Mydesk"
},
{
"code": "ping <hostname>\ntelnet <hostname> 27017\n27017curl http://portquiz.net:27017\n",
"text": "Hi @Mydesk_Mydesk,Welcome to the community!My connection string is the same for both cases so that is not the point of failure. I believe there is some issue in tomcat which is blocking this webapp to connect to mongodb atlas.As it’s working for the client on ip1, your hypothesis that the issue existing on the centos7 client is probable. Have you performed any network troubleshooting tests to try rule out any network issues? If not, you can try the following from the centos7 client:Atlas clusters operate on port 27017 . You must be able to reach this port to connect to your clusters.The output from the above command should provide a response containing the outgoing IP that is attempting to connect to the Atlas cluster. This must be on your Network Access list.To obtain the hostname, you can click on the “Metrics” button in the Clusters tab from the Atlas UI. From here, you should see the hostnames of all nodes for a particular cluster.Example:\n\nimage2285×276 40.8 KB\nNote: It would be best to perform the commands against the PRIMARY member hostnameYou may also want to check out the Troubleshoot Connection Issues documentation.Hopefully this helps.Kind Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "ping telnetcurl http://portquiz.net:27017",
"text": "HI @Jason_Tran,Thanks for replying.Both ping and telnet commands are working for primary node and port 27017.\ncurl http://portquiz.net:27017 test is also successful and this ip is added in the mongodb cluster network access list.After a detailed analysis I was able to find out that selinux was the culprit. As per my understanding (which may be incorrect), It was blocking all outgoing requests from the tomcat webapp. I was able to connect to mongodb through my tomcat webapp after disabling selinux with help from this link.I am not very much aware of what selinux is and what are its advantages/disadvantages. Is this a good fix? Are there any workarounds?",
"username": "Mydesk_Mydesk"
},
{
"code": "enforcing",
"text": "Hi @Mydesk_Mydesk ,After a detailed analysis I was able to find out that selinux was the culprit. As per my understanding (which may be incorrect), It was blocking all outgoing requests from the tomcat webapp. I was able to connect to mongodb through my tomcat webapp after disabling selinux with help from this 1 link.\nI am not very much aware of what selinux is and what are its advantages/disadvantages. Is this a good fix? Are there any workarounds?Glad to hear you’ve worked out the cause of the connection failures from the centos client!SELinux (Security-Enhanced Linux) is a security module that provides stronger security mechanisms than the default Linux kernel. Similar to other security measures like firewalls and IP access lists, it is best to properly configure your environment rather than disabling the security measure altogether. The MongoDB documentation has more information on how to Configure SELinux if your environment is set to policy enforcing mode.If you want to disable or reduce the SELinux security for a development environment, you can find more information about this in your O/S reference documentation. The only instructions specific to MongoDB are in our installation tutorials.Best Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to connect to MongoDB Atlas through tomcat webapp centos7 GCP | 2021-04-03T14:41:45.836Z | Unable to connect to MongoDB Atlas through tomcat webapp centos7 GCP | 3,541 |
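For anyone hitting the same symptom, a less drastic path than disabling SELinux outright is to confirm it is the blocker and then target the specific denial. A hedged sketch of that flow on CentOS 7; rather than guessing at policy boolean names (which vary by policy version), the last step derives a minimal module from the audit log:

```bash
# 1. Confirm SELinux is enforcing
getenforce

# 2. Temporarily switch to permissive; if the webapp can now reach Atlas,
#    SELinux was the blocker (this reverts on reboot, unlike disabling it)
sudo setenforce 0

# 3. Inspect the recorded denials for the blocked outbound connection
sudo ausearch -m avc -ts recent

# 4. Generate and load a minimal policy module covering those denials,
#    then switch back to enforcing mode
sudo ausearch -m avc -ts recent | audit2allow -M tomcat_mongo  # module name is arbitrary
sudo semodule -i tomcat_mongo.pp
sudo setenforce 1
```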
null | [
"vscode"
] | [
{
"code": "",
"text": "Mongo has so many jstests and it’s very important to look through them. However I find Go to Definition is not working in js files. If I read my own js file, it is OK. Anything I missed ? How do you guys jump among jstest files quickly ?",
"username": "Lewis_Chan"
},
{
"code": "",
"text": "Any help is appreciated .",
"username": "Lewis_Chan"
}
] | How to jump from jstest files in VS Code? | 2021-04-01T07:30:01.350Z | How to jump from jstest files in VS Code? | 3,407 |
[
"queries",
"dot-net"
] | [
{
"code": "private void ObterCidadeMaisProxima(Infectado infectado){\n var collection = GetCidadeCollection();\n\n string coordinates = $\"[ {infectado.Localizacao.Latitude}, {infectado.Localizacao.Longitude} ]\";\n\n string query = \"{ location: { $near : { $geometry: { type: \\\"Point\\\",\";\n\n query += $\"coordinates: {coordinates}\";\n\n query += \"}, $maxDistance: 1000000 } } }\";\n\n var e = BsonDocument.Parse(query);\n var doc = new QueryDocument(e);\n\n var result = collection.Find<Cidade>(doc).ToList<Cidade>();\n\n //So far I just want to test the function, that's why I don't return the result \n foreach(var r in result) Console.WriteLine(r);\n}\npublic GeoJson2DGeographicCoordinates Localizacao { get; set; }",
"text": "I have a little project in C# which consists of an API that receives data about (made up) infection cases. Each case object have the geographic coordinates of such case.The program is supposed to query the nearest city to that case and return it as a response. I try to do this with the following method:Both my classes (Infectado and Cidade) have the following attribute:public GeoJson2DGeographicCoordinates Localizacao { get; set; }And in MongoDB Atlas I have created the following indexes for both collections (infectado and cidade):So, I have two collections using geospatial indexes, and I’m trying to execute a $near query, but MongoDB can’t find the indexes.Here’s the output:fail: Microsoft.AspNetCore.Server.Kestrel[13] Connection id “0HM7HA1MKQ9RL”, Request id “0HM7HA1MKQ9RL:00000001”: An unhandled exception was thrown by the application. MongoDB.Driver.MongoCommandException: Command find failed: error processing query: ns=covidDB.cidadeTree: GEONEAR field=location maxdist=1e+06 isNearSphere=0 Sort: {} Proj: {} planner returned error :: caused by :: unable to find index for $geoNear query.Here’s some information about my enviroment:I could really use some help. Thanks in advance.",
"username": "Neophyte"
},
{
"code": "",
"text": " Hi @Neophyte and welcome to the MongoDB Community Forums!Thanks for posting your detailed question.Can you also share a sample document from the collection you’re querying?",
"username": "yo_adrienne"
},
{
"code": "",
"text": "Hi @yo_adrienne , sureHere’s the collection (printed from Atlas):\n",
"username": "Neophyte"
},
{
"code": "2d2dsphere2dlocation: [ <longitude>, <latitude> ]\n{ location : { $near : [ <longitude>, <latitude> ], $maxDistance: , <maxDistanceInRadians> } }2dsphere2dspherecoordinates2d string query = \"{ location: { $near : { $geometry: { type: \\\"Point\\\",\";\n\n query += $\"coordinates: {coordinates}\";\n\n query += \"}, $maxDistance: 1000000 } } }\";\nstring query = $@\"{ Localizacao: { $near: {coordinates}, $maxDistance: 100000 }\";\n2dsphere2dspherestring query = \"{ Localizacao: { $near: { $geometry: { type: \\\"Point\\\",\";\n",
"text": "Hi @Neophyte, thanks for the additional context!I see you have both a 2d and 2dsphere on your location data, is there a reason why?Usually, 2d indexes are needed if you are dealing with plane geometry or using legacy coordinate pairs like so:Data stored in this format needs to be queried like so:{ location : { $near : [ <longitude>, <latitude> ], $maxDistance: , <maxDistanceInRadians> } }On the other hand, a 2dsphere index, expects the data to be stored as one of the GeoJSON object types.I’d actually recommend using the 2dsphere index as you’re dealing with spherical geometry (e.g. finding nearest cities).Also, queries need to specify longitude first and then latitude .So there are two things to try, but first do these two steps, no matter which option you choose:OPTION 1: If keeping your legacy coordinate pairs:and replace with this:As you can see, your query needed to remove the $geometry operator and match the field name you created the index on.Try running the query and see if it returns what you’re expecting (note you may have to adjust the maxDistance you input in this query format as it expects Radians).OROPTION 2: If changing your data to GeoJSON format (recommended):Try running the query and see if what you expect returns.Let me know how this goes and feel free to ask any questions if you need me to clarify anything here. ",
"username": "yo_adrienne"
},
{
"code": "$near$nearSphereGeoJson2dGeographicCoordinatesGeoJsonPoint<GeoJson2dGeographicCoordinates>Cidadevar keys = Builders<Cidade>.IndexKeys.Ascending(\"Localizacao\");\nvar indexOptions = new CreateIndexOptions { Unique = true };\nvar model = new CreateIndexModel<Cidade>(keys, indexOptions);\nvar col = CollectionPopulator.GetCidadeCollection();\ncol.Indexes.CreateOne(model);\n",
"text": "Hi, @yo_adrienneI’ve create the two indexes just to be sure and tried using both $near and $nearSphere to see if any of them would work.So, following your advices in OPTION 2 I did some research on how to create a GeoJSON field and turns out both my collections had its coordinates declared as GeoJson2dGeographicCoordinates, when it is supposed to be GeoJsonPoint<GeoJson2dGeographicCoordinates>. I dropped and repopulated both of them, and recreated my index on the Cidade collection using MongoDBJust to bre sure, I also used the following code to create an index via the C# MongoDB Driver:Now I have these two:i961×207 16.2 KBBut the error persists. I’ve even tried querying with both of them, and with each one separatedly. (Are both collections supposed to have an index on the coordinates object?)",
"username": "Neophyte"
},
{
"code": "",
"text": "Here’s a sample of the new collection (had to put it in another post since I’m a new user):",
"username": "Neophyte"
},
{
"code": "coordinatescollection.Indexes.CreateOne(\n new CreateIndexModel<TDocument>(new IndexKeysDefinitionBuilder<TDocument>().Geo2DSphere(x => x.Localizacao)));\n2dsphere$neardouble lng = double.Parse(infectado.Localizacao.Longitude);\ndouble lat = double.Parse(infectado.Localizacao.Latitude);\n\n// point of interest\nvar point = new GeoJson2DGeographicCoordinates(lng, lat);\nvar pnt = new GeoJsonPoint<GeoJson2DGeographicCoordinates>(point);\n\nvar distanceInMeters = 100000; // multiply by 1609.34 if converting to miles\n\nvar filter = Builders<Cidade>.Filter.Near(x => x.Localizacao.Coordinates, pnt, distanceInMeters );\n\n// This is the actual query execution\nList<Cidade> cities = collection.Find(filter).ToList().Result;\n",
"text": "Thanks for the additional information!Are both collections supposed to have an index on the coordinates object?If you plan on running geospatial queries against the coordinates objects of both collections, then yes.I don’t see you choosing the actual Geo2DSphere key; did you create the 2d sphere index similar to the following?Alternatively, have you tried creating the index in Atlas directly? I don’t think this is the issue though, as your screenshot shows that there is a geospatial 2dsphere index present.Can you try changing your $near query like so (the more “C#” way to do this query):",
"username": "yo_adrienne"
},
{
"code": "double lng = double.Parse(infectado.Localizacao.Longitude);\ndouble lat = double.Parse(infectado.Localizacao.Latitude);\ndouble lng = infectado.Localizacao.Coordinates.Longitude;\ndouble lat = infectado.Localizacao.Coordinates.Latitude;\n",
"text": "Hi, @yo_adrienneI changed the query and got a new error:System.InvalidOperationException: Unable to determine the serialization information for x => x.Localizacao.Coordinates.Then I followed this workaround and the exception disappeared, but it still isn’t able to find the index.I also had to changetosince the former yields the following error:‘GeoJsonPoint’ does not contain a definition for ‘Longitude’ and no accessible extension method ‘Longitude’ accepting a first argument of type ‘GeoJsonPoint’ could be found (are you missing a using directive or an assembly reference?)I’ve tried dropping my indexes and creating one again in C#, but it wasn’t effective.",
"username": "Neophyte"
},
{
"code": "",
"text": "I’m wondering if it’s possible to create some function in Atlas that executes this query for me, one that would require only to the C# program to call it and provide the arguments. I’ve done something like this using Mongoose in a NodeJS side project, but the code was declared locally in Model. Is it possible to do this in C#?",
"username": "Neophyte"
},
{
"code": "",
"text": "Hi @Neophyte.I’m sorry that this has been more difficult than it needs to be. Let me try to get a definitive answer from the Drivers team on how to properly execute this geospatial query.I’m wondering if it’s possible to create some function in Atlas that executes this query for me, one that would require only to the C# program to call it and provide the arguments.Yes, you can create Realm Functions and then use the .NET SDK for Realm to call those functions, but that would be more complicated than it needs to be. Please feel free to try it out in the meantime, though!I’ll leave an update once I get a proper working example for the .NET driver. Thank you for your patience!",
"username": "yo_adrienne"
}
] | Unable to execute queries on geospatial attributes | 2021-03-27T23:48:56.994Z | Unable to execute queries on geospatial attributes | 6,512 |
|
null | [
"ops-manager"
] | [
{
"code": "\"detail\": \"Invalid config: dbPath may not change for processes. Detected the following change: PROCESS parameter dbPath changed from /data to /datatest\",\n\n\"error\": 400,\n\n\"errorCode\": null,\n\n\"parameters\": null,\n\n\"reason\": \"Bad Request\"\n",
"text": "Hi,I have an OpsManager managed cluster. Is there any way to update the dbPath of a process using REST API ?\nI used below API\ncurl --location --request PUT ‘http://url:port/api/public/v1.0/groups/groupid/automationConfig’with the automation configuration document. This document has the updated dbpath but I got the below response.{}Is there any other API that I should try ?Thanks,\nSanthanu",
"username": "santhanu_mukundan"
},
{
"code": "",
"text": "Hi,Do we have any news on my query ?Thanks,\nSanthanu",
"username": "santhanu_mukundan"
}
] | Changing dbpath of a process from REST API | 2021-03-30T13:49:28.022Z | Changing dbpath of a process from REST API | 2,197 |
null | [
"node-js",
"swift",
"server",
"kotlin"
] | [
{
"code": "Realm().create(_:value:update:)",
"text": "Want to clarify how a copy across realms (e.g. ‘Realm().create(_:value:update:)’) works. If the object (same primary key) exists on the target, will it sync changes or overwrite for its List<> (including nested List<>)? [assuming ‘update:’ is ‘.modify’]Also, does the latest version of Sync and Realm (Swift, Kotlin & Node) SDKs support ‘LinkingObjects’ within the object being copied across Realms?-Thanks!",
"username": "f_s"
},
{
"code": "",
"text": "This is not a complete answer, but what I have seen is that the List<> are initially copied (when new non-existent); in which such copy will also create the objects within the List<> onto that new Realm (target). THOUGH, when there are nested List<> it is not consistent to complete those (object) copies … The docs say the objects included in the List<> must be complete (i.e., must be copied over) on target Realm, but I have seen them get copied via the ‘.create()’ too (not always) - as prev stated.As far as an update, I am not aware of that answer.",
"username": "Reveel"
},
{
"code": ".modifyLinkingObjects",
"text": "@f_s the .modify parameter will perform the minimal set of changes to make the list match the new value. LinkingObjects are just the relationship in the reverse direction and will always be available as long as the forward link relationship is defined.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward thank you.Good to know on LinkingObjects.To confirm “minimal set of changes”, shall mean added and deleted List<> objects, or will it also update existing List<> objects with modifications?Will these updates include the same to its nested List<> too?",
"username": "f_s"
},
{
"code": "",
"text": ".modified will also propagate changes for deep/nested modifications",
"username": "Ian_Ward"
}
] | Upon copying an object across Realms, will the write be an overwrite or a sync, for its 'List<>'? | 2021-04-04T00:26:40.324Z | Upon copying an object across Realms, will the write be an overwrite or a sync, for its ‘List<>’? | 2,655 |
null | [] | [
{
"code": "",
"text": "Would anyone be interested in a subset of these changes?",
"username": "Josh_Soref"
},
{
"code": "",
"text": "How did you generate this diff? I took a quick glance and I noticed that it’s not strictly spelling changes.",
"username": "Daniel_Pasette"
},
{
"code": ".....",
"text": "Sorry, my link was wrong (too few dots, sorry, hand crafting URLs is dangerous, fixed) – there’s a big difference between .. and ... and it was late when I decided to post to this forum.The noise you were seeing is because GitHub was showing divergence between master and the branch, instead of only the additive changes.I have a tool which looks for substrings that are possibly misspelled. I iterate over its output, filtering out files / patterns:The MongoDB Database. Contribute to check-spelling/mongo development by creating an account on GitHub.\nInitially I use Google Sheet’s spell checker to pick replacements. Then I tend to run through the candidates looking for others. From there, I try to apply those changes. (Sometimes this goes a bit wrong.)\nThen I try to review the changes to avoid being laughed at (because the above is basically a naive search+replace and can mis-match on a substring).\nAs I’m reviewing the hunks (which is quite painful), I try to make sure things still make sense / fit (e.g. if someone is using whitespace indentation and the replacement is longer/shorter).",
"username": "Josh_Soref"
},
{
"code": "",
"text": "@Daniel_Pasette: is the corrected link more reassuring?",
"username": "Josh_Soref"
},
{
"code": "",
"text": "Hi Josh.Thanks for clarifying. I am interested in accepting the commit, mainly because it improves searchability of the code base, but there is some risk involved and validating the non-comment changes of the commit will be labor intensive. Most of the potential reviewers will be on vacation until Jan 4th, so we’ll get back to you then. Happy New Year and thanks for your contribution.Dan",
"username": "Daniel_Pasette"
},
{
"code": "",
"text": "Thanks Dan,",
"username": "Josh_Soref"
},
{
"code": "",
"text": "Hope your Jan is going well. Let me know if you need anything.",
"username": "Josh_Soref"
},
{
"code": "",
"text": "Hi Josh,I’m so sorry for the long delay in response, I was recently poked by our community manager to respond, which brought this to the top of my inbox.Some thoughts:My feeling is that this kind of change is only worth accepting if we can work it into our tooling and make it durable. If we can’t, I’m skeptical that it will be worth the .I am going to discuss with the development team on how hard it would be to work this kind of spell checking into lint, but we would need to invest in producing such a tool.",
"username": "Daniel_Pasette"
},
{
"code": "allow.txtretriablemasterapache/hive",
"text": "Hi Daniel (and community manager),As I leave individual words in individual commits, it’s fairly trivial for me to drop ones that are rejected by a project, and my tooling offers a place for such things (allow.txt enables one to supplement the dictionary, so I would just add a line with retriable and stop seeing it going forward).Indeed, it’s really best to deploy some CI for this purpose once a project accepts a PR like this so that the codebase doesn’t revert.I’m actively developing a tool for this purpose:Spell check code and commitsI tend to offer the CI after the fact, of the form “if you liked these fixes, you could use this tool to keep your repository clean”, but I try not to push the CI too hard (and typically only offer it if there seems to be a real interest, as opposed to just a neutral response to a submission), which is why it isn’t a core part of my spelling fix offerings.\nFor mongodb, I initially omitted mention entirely because I was trying to follow the submission guidelines and given how large the change was, it seemed better to first get a sense of whether there was general interest in spelling fixes. – That’s sort of the stage of your current reply, trying to decide if it’s worth it at all.I have a few features coming for future versions that will make it much easier to use (automatically recommending files to exclude / automatically skipping files, offering a way to update a user’s branch’s word lists)\nYou can see some of how the tool works here:\nThe configuration is pretty flexible, and I’m generally fairly responsive to feedback/input.prerelease/.github/actions/spellingTemplate for adding check-spelling action to a repository - spell-check-this/.github/actions/spelling at prerelease · check-spelling/spell-check-thisFwiw, it’s becoming easier and easier for me to update change-sets like this one – I just updated apache/hive which is of the same size in terms of corrections.I did initially write a tool that integrated with Travis, but I’ve found that it’s a lot easier for most users if the output is straight in GitHub. If a project wants to work with me on adapting my tooling for some other system, I’m happy to try.",
"username": "Josh_Soref"
},
{
"code": "",
"text": "wrt CIs other than GitHub Actions, eventually you should be able to use nektos/act to run check-spelling in some other CI system (I’ll need to think through how to handle its output, as right now its primary output is GitHub comments, which won’t work if the tool isn’t being run by GitHub) – I can’t predict precisely when although I spent some time this weekend trying to further its support.",
"username": "Josh_Soref"
}
] | Contributing spelling fixes to mongo | 2020-12-18T08:17:27.354Z | Contributing spelling fixes to mongo | 4,074 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "",
"text": "Can anyone help me with the partial text search in mongodb Atlas using text.for example : user search for text ‘add’ it should return ‘addidas’, ‘addition’ etc",
"username": "Sagar942150"
},
{
"code": "autocompleteregex",
"text": "You can solve this with an autocomplete index definition on the field and a query with the autocomplete operator for maximum speed.Otherwise, you can use the keyword analyzer and the regex operator.",
"username": "Marcus"
},
{
"code": "",
"text": "Yes I can resolve this by using autocomplete but I don’t want to and facing issue while using regex as it only matches keyword not partial word.",
"username": "Sagar942150"
},
{
"code": "add*",
"text": "with regex you can achieve this with a trailing wildcard character, as in add*. That should work.",
"username": "Marcus"
},
{
"code": "",
"text": "I have tried that but for matching the exact statement it fails the return result",
"username": "Sagar942150"
}
] | Atlas text search | 2021-03-30T13:54:01.163Z | Atlas text search | 3,373 |
null | [
"queries",
"crud"
] | [
{
"code": "",
"text": "I am brand new to MongoDB and back-end development. I am looking for a tutorial on how to use MongoDB Atlas to create a user profile database for an app that I am building. Does anyone have some recommended resources for learning this.",
"username": "Etan_Ginsberg"
},
{
"code": "",
"text": "Hello @Etan_Ginsberg, welcome to the MongoDB Community forum!Good to know you have chosen MongoDB as the database for your app. If you are new to MongoDB, I suggest you get an introduction to the database, its features, connecting to it from a client program and performing CRUD (Create, Read, Update and Delete) operations on the data in the database. One of the ways, is to learn from a course at the MongoDB University (it is self paced, video based and free) - and there is one titled MongoDB Basics, to start with.In addition, there are other resources like documentation, blog posts, pod casts, etc., and you will find a menu at the top of this page to access them.Since, you are talking about an app, what programming language are you thinking about? For example, there are university courses (you will find them from the earlier link above) for using different programming languages with MongoDB, e.g., JavaScript, Python, etc.",
"username": "Prasad_Saya"
}
] | Basic tutorials for using MongoDB for a User database | 2021-04-03T10:55:00.309Z | Basic tutorials for using MongoDB for a User database | 1,687 |
null | [
"dot-net",
"unity"
] | [
{
"code": "",
"text": "Hi Everyone,To follow up on my “Build an Infinite Runner Game with Unity and the Realm Unity SDK” tutorial, I wanted to let you know that I recently published another that takes a step back and focuses on the basics.https://www.mongodb.com/how-to/getting-started-realm-sdk-unity/This tutorial focuses less on building a game and more on including Realm within your Unity project. This is a great starting point for anyone building games and needs something for data storage.Feel free to drop me a comment if you have questions.Best,",
"username": "nraboy"
},
{
"code": "",
"text": "Hello, thanks for this article but I have some questions.Tartar",
"username": "Benjamin_Coven"
},
{
"code": "",
"text": "@Benjamin_Coven Wow thanks for taking the sync implementation for a spin! I love to see our community pushing us forward. Our build support for Unity is currently in early preview so there will be definitely some rough edges but the team is hard at work over the next quarter getting everything dialed in along with more sample apps to show a variety of use cases, particularly sync.Do you mind filing an issue here with crashes you are getting, a sample app would be great!Realm is a mobile database: a replacement for SQLite & ORMs - GitHub - realm/realm-dotnet: Realm is a mobile database: a replacement for SQLite & ORMsThe team will be sure to take a look when they get back from the holidays",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward Thank you for your quick reply.I have uploaded a minimal project here tartar / MongoRealm-MinimalSample · GitLab .I will fill an issue a little bit later, I am on holidays too .I will give a demo next week-end on MongoDB to Unity developers, I guess I will not be able to show them Realm. I don’t have much doubt about the high demand of this feature in low scale Game Devlopment.Note: I wrote a mistake, this is the login that causes my Unity to crash and not App.Create(myRealmAppId);.",
"username": "Benjamin_Coven"
}
] | Getting Started with the Realm SDK for Unity | 2021-03-23T18:13:45.357Z | Getting Started with the Realm SDK for Unity | 4,053 |
null | [
"indexes",
"atlas"
] | [
{
"code": "",
"text": "I had this question lingering in my head since long. Realm Sync partitions are based on a key. Now if the collection(s) grow in volume, will Sync become slow over time? Is it recommended to have a MongoDB index on my partition key for each collection, or is it just an overhead without any performance gain?",
"username": "siddharth_kamaria"
},
{
"code": "",
"text": "@siddharth_kamaria We do recommend adding a MongoDB index on the partition key field - this will speed up the Initial Sync loading time",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Indexes for MongoDB when using Realm Sync | 2021-04-03T08:50:24.776Z | Indexes for MongoDB when using Realm Sync | 2,678 |
null | [
"queries",
"atlas-functions"
] | [
{
"code": "const testFn1 = async (args) => {\n try {\n const results = await query(args);\n update(results).then(() => console.log('update stats finished'))\n return results;\n } catch (e) {\n console.error(e)\n }\n}\n",
"text": "I’m writing a cloud function that makes a query and does some housekeeping:I have to run the query and return the result fast but also do some updates (statistics) based on this result. Updates are not crucial and can also be carried at a later time.So the question is could this be done any better or is it ok to leave it like this?",
"username": "Roby_Rodriguez"
},
{
"code": "",
"text": "Hi @Roby_Rodriguez,This seems fine as you utilize promise async ability.However, you may consider bulk updates if you want a large number of updated docs in one go:https://docs.mongodb.com/realm/mongodb/crud-and-aggregation-apis/#ordered-bulk-write-operation-availabilityThanks\nPavel",
"username": "Pavel_Duchovny"
}
] | Realm function housekeeping | 2021-04-02T09:07:44.859Z | Realm function housekeeping | 2,392 |
[
"swift",
"atlas-device-sync"
] | [
{
"code": "",
"text": "Hello everyone,\nLately I’ve been looking into using SwiftUI to implement my app, I used the guidelines from this repository: GitHub - realm/task-tracker-swiftui: Simple task manager using Realm and SwiftUI .The new user document is executing without no problems, the partition is correct ex “user=xxxx”, the publisher is calling the right partition to open.I implemented almost similar, but for some reason after successful login I’m getting an empty realm without the newly created user document from the backend.\nScreenshot 2021-04-03 at 18.42.381360×630 143 KB\nThe 3 user logging check are the AppState isLoggedIn bool check:\n\nScreenshot 2021-04-03 at 18.46.161208×352 85 KB\nNot getting any errors on the Realm Logs.Really confused.Thank you.",
"username": "Radu_Patrascu"
},
{
"code": "",
"text": "Hmm this may be the case of new Realm vs Realm.asyncOpen - how are you opening the realm? You can see our SwiftUI tutorial here with Sync -\nhttps://docs.mongodb.com/realm/sdk/ios/integrations/swiftui/#with-sync",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Sorry Ian, I figured it out, I really was not seeing that my “newUserDocument” misses some required field that I added recently, this it seems to be the problem.",
"username": "Radu_Patrascu"
}
] | MongoDB Realm is returning an empty realm | 2021-04-03T15:51:00.379Z | MongoDB Realm is returning an empty realm | 2,902 |
|
null | [
"replication",
"server",
"security",
"configuration"
] | [
{
"code": " /etc/mongod.conf\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 0.0.0.0\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\nsecurity:\n authorization: enabled\n\n#operationProfiling:\n\nreplication:\n replSetName: \"rs0\"\n\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n",
"text": "my vps: Ubuntu 20.04\nmy mongodb: 4.4.4here my mongod.conf file:",
"username": "Tom_William"
},
{
"code": "ls -l /var/lib/mongodb\nls -l /var/log/mongodb\n",
"text": "More important than the configuration file would be to share the error you are having. Either from the terminal output or from the log file.Then the command you use to start mongod is also useful.The output of the following commands is nice to have.",
"username": "steevej"
},
{
"code": "",
"text": "As @steevej suggest an actual error message or log snip is the best.Immediately however the issue is apparent. The replicaset members also need to authenticate with each other when auth is enabled. This is addressed in the enable-authentication tutorial and specifically on Internal/Membership Authentication. Links Below.",
"username": "chris"
},
{
"code": "",
"text": "I think there is a bug in mongodb 4.4.4, I reinstall mongodb 4.2.13 with the same setting and mongod.conf file and works fine\nthanks any way\n",
"username": "Tom_William"
}
] | My mongod cannot start when I put both security and replication together | 2021-04-03T08:28:24.073Z | My mongod cannot start when I put both security and replication together | 3,626 |
null | [
"aggregation",
"queries",
"data-modeling",
"python",
"time-series"
] | [
{
"code": "data = {\n \"pressure\":945.65,\n \"humidity\":42.12,\n \"temperature\":28.41,\n}\ndeviceId = 1\nminute = datetime.utcnow().replace(second=0, microsecond=0)\n \ndb.time_bucket.update_one(\n {'deviceId': deviceId, 'd': minute},\n {\n '$push': {'samples': data},\n '$inc': {'nsamples': 1}\n },\n upsert=True\n)\n _id:ObjectId(\"603fb0b7142a0cbb439ae2e1\")\n id1:3758\n id6:2\n id7:-79.09\n id8:35.97\n id9:5.5\n id10:0\n id11:-99999\n id12:0\n id13:-9999\n c14:\"U\"\n id15:0\n id16:99\n id17:0\n id18:-99\n id19:-9999\n id20:33\n id21:0\n id22:-99\n id23:0\n timestamp1:2010-01-01T00:05:00.000+00:00\n timestamp2:2009-12-31T19:05:00.000+00:00\nfiles = os.listdir('sampl/')\nsorted_files = sorted(files)\n\nmyclient = MongoClient(\"mongodb://localhost:27017/\")\nmydb1 = myclient[\"mongodbtime\"]\nmycol1 = mydb1[\"mongodbindextimestamp1\"]\n\nfor file in sorted_files:\n df = process_file(file)\n data_dict = df.to_dict('records') # Convert to dictionary\n mycol1.insert_many(data_dict)\n",
"text": "Hi guys.I am doing my thesis for university of ioannina with subject:Performance evaluation of time-series data management across different database systems.More specifically i am benchmarking PostgreSQL vs Mongodb. All this time i was storing each acquired data as a single document but after i read herehttps://levelup.gitconnected.com/time-series-data-in-mongodb-and-python-fccfb6c1a923that i can create a document for bucketing of multiple consecutive data reads.The code the article gives for python is this:i want to create time based buckets ,specifically for every hour or more if needed.I also read here https://docs.mongodb.com/manual/tutorial/model-time-data/#example about the bucket pattern but i dont know what code to use with python pymongo.\nMy time-series data contains around 1.5millions rows from 11 files from 2010 to 2020 and look like this:All the attributes change every 5 minute expect the id1 which remains the same in every document.\nThe code i have used for importing the data after converting to df into the mongodb table is this:Any help would be appreciated!Thanks in advance!",
"username": "harris"
},
{
"code": "for file in sorted_files:\n df = process_file(file)\n for row,item in df.iterrows():\n data_dict = item.to_dict()\n id1=3758\n mycol1.update_many(\n {\"id1\":id1,\"nsamples\": {\"$lt\": 12}},\n {\n \"$push\": {\"id24\": data_dict},\n \"$min\": {\"first\": data_dict['timestamp1']},\n \"$max\": {\"last\": data_dict['timestamp1']},\n # \"$min\":{\"minid13\":data_dict['id13']},\n # \"$max\": {\"maxid13\": data_dict['id13']},\n \"$inc\": {\"nsamples\": 1}\n },\n upsert=True\n )\n",
"text": "Here is the answer on how to insert data with bucket pattern in mongodb:",
"username": "harris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Time-bucket document on time-series | 2021-04-02T12:01:13.783Z | Time-bucket document on time-series | 5,206 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.4.5-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.4. The next stable release 4.4.5 will be a recommended upgrade for all 4.4 users.\nFixed in this release:",
"username": "Jon_Streets"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.4.5-rc0 is released | 2021-04-02T17:03:36.743Z | MongoDB 4.4.5-rc0 is released | 2,726 |
[
"connector-for-bi"
] | [
{
"code": "",
"text": "From this page https://docs.atlas.mongodb.com/bi-connection/ it appeared that the BI Connector for Atlas was available if I had an M10 or larger cluster. So I upgraded to an M10 cluster to get the BI connector only to discover that if I enable the BI connector, it charges me additional costs. I could not find these additional BI connector prices detailed anywhere on the MongoDB website except for in my account after I upgraded to the M10 cluster. And even these pricing details are not clear:What is sustained monthly usage? When does the $1.47 rate apply vs the $3.84 rate?",
"username": "Brenton_Klassen"
},
{
"code": "",
"text": "Hi Brenton,I’m sorry that our pricing website needs to be updated to reflect these details: at this time the user interface is the best way to get these prices.The pricing model for the BI Connector incentivizes keeping it enabled meaning that after it has been enabled for a while in a month it stops accruing any extra charges that month.So for example in the screenshot above, the cost s $3.84 per day up to a $45 maximum for the month: the illustrative $1.47/day comes from. dividing that $45 maximum by the number of days in a month e.g. assuming you’re leaving it enabled.Being intellectually honest, I think we over-engineered this and ought to simplify in the future.By the way for completeness, Atlas Data Lake offers an alternative SQL interface and can federate queries across one or more Atlas clusters!Cheers\n-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | BI Connector for Atlas pricing is unclear | 2021-03-31T15:36:09.830Z | BI Connector for Atlas pricing is unclear | 4,690 |
|
null | [
"connecting"
] | [
{
"code": "",
"text": "question on atlas connection timeouts… it happens on my clients machine and not on my machine… diff IPs… BUT in my account I have 0.0.0.0/32 etc and his public ip set and mine of course… I just refeshed it to see if that helps ie if permissions were dropped over time - since customer had’nt connected from his ip for a couple of months - timeout after 30000ms… the usual msg in the stack.",
"username": "John_Allen"
},
{
"code": "",
"text": "Did you get to the bottom of this?\nSometimes folks have more than one public IP that they get load balanced between",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "@John_Allen Does your client have a strict firewall policy blocking outbound traffic.It some places it would be common to block anything not on port 80,443 for example.",
"username": "chris"
}
] | 30000ms timeout.. connects from my ip but not my customers, it used to | 2021-03-30T18:44:28.322Z | 30000ms timeout.. connects from my ip but not my customers, it used to | 2,016 |
null | [
"atlas-device-sync"
] | [
{
"code": "agency{\n \"_id\" : ObjectId(\"6059c9859a545bbceeb9e881\"),\n \"agency\" : \"Ecuadorian Galapagos\", // This is the partition key\n \"date\" : ISODate(\"2021-03-23T10:57:09.777Z\"),\n \"status\" : \"At Sea\",\n \"user\" : {\n \"email\" : \"[email protected]\",\n \"name\" : {\n \"first\" : \"Global\",\n \"last\" : \"Admin\"\n }\n }\n}\nimport RealmSwift\n\nclass DutyChange: Object {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n @objc dynamic var user: User? = User()\n @objc dynamic var date = Date()\n @objc dynamic var status = \"\"\n\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n}\n",
"text": "In the latest WildAid O-FISH app example, I see that @Andrew_Morgan has specified that we will be using agency as the partition key and I can see that in the Atlas document from the post.However, what I fail to understand is that the same partition key is not present in the Swift / Kotlin Realm objects.Is this done for the brevity of the example or is it not recommended to have the partition key on the client side? Currently, I’ve a RChat like partitioning scheme and I set the values from my client app.",
"username": "siddharth_kamaria"
},
{
"code": "",
"text": "Hi @siddharth_kamaria, it’s a feature of the Realm SDKs and Realm that you don’t need to include the partition key as an attribute in the Realm Objects. Realm Sync will automatically add it to the Atlas document, based on the partition that the mobile app specifies when opening the realm.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "That actually makes sense. I should remove my partition keys from object models. Got to learn something new today, thank you @Andrew_Morgan ",
"username": "siddharth_kamaria"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | No partition fields specified in WildAid O-FISH example | 2021-04-02T09:16:09.581Z | No partition fields specified in WildAid O-FISH example | 1,592 |
null | [
"queries",
"python",
"indexes"
] | [
{
"code": "{\n \"name\": \"Alex\",\n \"Alias\": \"Ax\"\n \"other\": \"A...\"\n},\n{\n \"name\":\"b\",\n \"Alias\": \"bbb\",\n \"other\": \"...\"\n}\n{\n \"name\": \"Alex\",\n \"Alias\": \"Ax\"\n \"other\": \"A...\"\n}\n",
"text": "I am using Python + pymongo to develop a website, but the process encountered some query efficiency problems.There are a lot of $or operators in the query process, and the query is too slow.I checked with “explain” and $or uses a different index. I have also considered using multi-process/multi-threading to turn the $or condition into a single index query. But this cannot guarantee the “non-repetition” of the data.For example, the following data:I need to perform an or search on “name”, “Alias” and “other”. If I use the $or operator, three queries are actually created, and they are synchronized. Correspond to their respective indexes.\nI need to increase the query speed. I can create 3 threads to run each conditional query separately.But doing so will cause duplicate data.thread1: find({“name”:\"$regex\":“A”})\nthread2: find({“Alias”:$regex\":“A”})\nthread3: find({“other”:$regex\":“A”})They will all query this data:If so, I have to process the returned results of the three threads, which greatly reduces the query efficiency.\nI want to know, is there any more efficient way for mongodb to deal with this kind of problem?",
"username": "binn_zed"
},
{
"code": "$orcollection.find( { '$or': [\n { 'name': { '$regex': 'A' } },\n { 'other': { '$regex': 'A' } }\n] } )\ncollection.find( { '$or': [\n { 'name': { '$regex': '^A' } },\n { 'other': { '$regex': '^A' } }\n] } )\n",
"text": "Hello @binn_zed, you can use the $or operator as follows using PyMongo.See the following notes on optimizing your query:Regex and Index UseFor case sensitive regular expression queries, if an index exists for the field, then MongoDB matches the regular expression against the values in the index, which can be faster than a collection scan. Further optimization can occur if the regular expression is a “prefix expression”, which means that all potential matches start with the same string. This allows MongoDB to construct a “range” from that prefix and only match against those values from the index that fall within that range.$or Clauses and IndexesWhen evaluating the clauses in the $or expression, MongoDB either performs a collection scan or, if all the clauses are supported by indexes, MongoDB performs index scans.So, your query will benefit from having index on each of the three fields. Further optimization can occur if your regular expression is a “prefix expression” (as noted above in the in the Regex and Index Use), for example:",
"username": "Prasad_Saya"
}
] | I need help regarding the use of the $or operator | 2021-04-02T06:49:18.056Z | I need help regarding the use of the $or operator | 2,633 |
null | [] | [
{
"code": "",
"text": "For research purpose, I want to use MongoDB Cloud for one month.\nMy research is not on MongoDB but on different data storage techniques.\nSo, it will be MongoDB vs. MongoDB.\nI do not get any kind of research grunt.\nNor my research has any monetary outcome.\nIt is only for academic and research purpose.\nWhat are the options available for my research work … Can anyone suggest? I will be working with .8 Millions of records…\nIt is very important that I access the service for one moth only. Beyond this time limit, I will not be able to pay as I am not getting any financial aid…\nThank You",
"username": "sps"
},
{
"code": "",
"text": "MongoDB Atlas has a forever free shared cluster which offers a generous 512 MB of storage. Assuming you have 800K records and each record is no more than 0.5 KB (total will be 400 MB + some space for metadata and DB indexes), it will comfortably fit in the free tier cluster. If you hit the storage / any other limits, you can upgrade to the $9 shared cluster just for one month.",
"username": "siddharth_kamaria"
},
{
"code": "",
"text": "Thank You Sir. But, .8 Million files may need to be stored and each of these files may be of of almost 700KB. The duration of storage will be till the read operations are complete.\nWhat to do?\nPlease advise…",
"username": "sps"
},
{
"code": "",
"text": "Well, that’s a lot of storage! (upwards of 500 GB)The simple answer is to just get a bigger Atlas instance, but as you pointed out that might not be feasible. The other option is to host it yourself on your own computer (assuming you have the storage and decent RAM - but you’ll loose out on all the benefits you get from having the replica sets).Now, it really depends on what kind of research are you doing and you need to answer couple of questions for yourself.",
"username": "siddharth_kamaria"
},
{
"code": "",
"text": "Thanks Sir.\n1.I will try to work with files of lesser sizes.\n2. These files will be mostly text files.\nWith Best Regards",
"username": "sps"
},
{
"code": "",
"text": "If these are text files, one option is to process them, extract metadata out of it and store it in Atlas and then store the entire text file in S3.",
"username": "siddharth_kamaria"
},
{
"code": "",
"text": "OK, Thanks.\nActually the files contain sensor observations as text data…",
"username": "sps"
}
] | Mongodb Cloud paid access | 2021-04-02T05:29:53.229Z | Mongodb Cloud paid access | 1,941 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "After many hours trying to figure out the issue i have found that the C# Drivers 2.11.0 have an issue. Depending on what other third party libraries have been used before using the driver they will just throw timeouts exception. Reverting the Driver to 2.10.4 fix the issue. So it’s something that have been added between both version. Right now i have found that if you open a MongoClient at the begining of the application (first line in the program.cs) it will work even after calling the third party dll’s that cause mongo to fail. If instead you call any of these third party dll then try to make any call to mongo you will always have timeout error. I have more details on the operation i was doing over at stackoverflow https://stackoverflow.com/q/63305491/2748412",
"username": "CyberFranck_N_A"
},
{
"code": "",
"text": "Is there anyways to use allowDiskUse on a find with version 2.10.4",
"username": "CyberFranck_N_A"
},
{
"code": "allowDiskUsetrue_tmpdbPath",
"text": "Hi @CyberFranck_N_A and welcome to the forums,Is there anyways to use allowDiskUse on a find with version 2.10.4The allowDiskUse option is specific for MongoDB Aggregation framework only. The option enables writing to temporary files. When set to true it allows aggregation stages to write data to the _tmp subdirectory in the dbPath directory.This is not related to a specific MongoDB driver or version, this is the behaviour of the server.Looking at the error stack trace that you posted on StackOverflow:A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : “1”, ConnectionMode : “Automatic”, Type : “Unknown”, State : “Disconnected”, Servers : [{ ServerId: “{ ClusterId : 1, EndPoint : “127.0.0.1:27017” }”, EndPoint: “127.0.0.1:27017”, ReasonChanged: “ServerInitialDescription”, State: “Disconnected”, ServerVersion: , TopologyVersion: , Type: “Unknown”, LastHeartbeatTimestamp: null, LastUpdateTimestamp: “2020-08-07T16:00:54.4780565Z” }] }.Also from the server log:{“t”:{“$date”:“2020-08-07T12:31:05.334-04:00”},“s”:“I”, “c”:“-”, “id”:20883, “ctx”:“conn249”,“msg”:“Interrupted operation as its client disconnected”,“attr”:{“opId”:4183920}}\n{“t”:{“$date”:“2020-08-07T12:31:05.334-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:22989, “ctx”:“conn249”,“msg”:“Error sending response to client. Ending connection from remote”,“attr”:{“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Connection reset by peer”},“remote”:“127.0.0.1:61403”,“connectionId”:249}}From a brief look, it is likely that the client was disconnected from the server. Is there something on the network layer that is interrupting the connection ?If you’re still encountering this issue, and able to reproduce it consistently, could you please provide:Regards,\nWan.",
"username": "wan"
},
{
"code": " // get the client\n var client = new MongoClient(\"ConnectionStringHere\");\n\n// get the database\nvar db = client.GetDatabase(\"SomeDatabase\");\n\n// create the filter for collection names\nvar filter = new BsonDocument(\"name\", \"SomeCollection\");\nvar options = new ListCollectionNamesOptions { Filter = filter };\n\n// check if the filter return at least 1 record\nvar containAtLEastOneRecord = db.ListCollectionNames(options).Any();\n\n// declare a third party DLL object\nvar test = ThirdPartyDll.SomeClass();\n// declare a third party DLL object\nvar test = ThirdPartyDll.SomeClass();\n\n // get the client\n var client = new MongoClient(\"ConnectionStringHere\");\n\n// get the database\nvar db = client.GetDatabase(\"SomeDatabase\");\n\n// create the filter for collection names\nvar filter = new BsonDocument(\"name\", \"SomeCollection\");\nvar options = new ListCollectionNamesOptions { Filter = filter };\n\n// check if the filter return at least 1 record\nvar containAtLEastOneRecord = db.ListCollectionNames(options).Any();\n",
"text": "Error is reproducible 100% of the time.Switching driver from 2.11.0 to 2.10.4 fix the issue.MongoDB server version 4.4.0\nWindows 10 latest version (brand new pc new install)example code based on the stack overflow old code that does work :here the similar example but this does not workNoticed the ONLY difference is that i called another third party dll before yours and it doesn’t work with 2.11.0 but if i switch to 2.10.4 both code works. I do use over 250 third party DLL and only a single one cause that issue if i instantiate it before yours. I have contacted their support and as their dll do not crash and they say it’s up to you to fix your issue. I assumed there must be culture info issue between same dll or something like it that is not standard in your code in 2.11.0 that is correct in 2.10.4",
"username": "CyberFranck_N_A"
},
{
"code": "",
"text": "Hi @CyberFranck_N_A,Thanks for the extra information. Would you be able to share the name of this third party DLL ? Essentially an attempt to create a reproducible test case for others.Also, do you get any warning or error message during build time ?Regards,\nWan",
"username": "wan"
},
{
"code": "",
"text": "The third party DLL is called Eyeshot. It’s a proprietary CAD engine. There is no error or warning regarding this at build time in the list. The only one i get is the runtime exceptionA timeout occured after 30000ms selecting a server using CompositeServerSelector…",
"username": "CyberFranck_N_A"
},
{
"code": "",
"text": "Hi @CyberFranck_N_A,The third party DLL is called Eyeshot. It’s a proprietary CAD engine.Thanks for that information. Unfortunately it’s challenging to debug an issue without a reproducible test case.Could you try setting up MongoDB on a different machine, and connecting to it remotely ? You could try to spin up your own LAN server, or just spin-up free-shared-tier MongoDB Atlas. Just guessing, but I would like to know whether your machine is overloaded on runtime which prevents the local server to responds adequately.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "the DB is not local. I have ordered a brand new PC with windows 10 installed and Mongo community on it and that’s the only thing installed. After test are complete we will create a dedicated VM running on Windows Server. My test are done from my PC connecting to that remote computer. Also i do have the same issue while trying to connect to my local database too. The server is running a Ryzen 9 3950X 16 cores and local i am running the threadripper 3970X 32 cores so being overloaded i don’t think that’s the problem here.",
"username": "CyberFranck_N_A"
},
{
"code": "",
"text": "Hi,\nI’m spreadly facing the same issue connecting to Atlas on DEV and STAGING environment, i.e. different clients, different Atlas clusters. Cannot say it’s because of a specific 3rd party dll, I’m still investigating.\nI’ll try to downgrade the driver to the 2.10.4 on the DEV one and try to reproduce.\nFor both of my envs I’ve a WebAPI app on Azure Germany which connects to Atlas cluster, both as M0 Sandbox (General) deployed on AWS / Frankfurt (eu-central-1).\nPROD is still running with the 2.10.2 with no problems. Also, the code is not changed around the driver usage.",
"username": "Stefano_Ghisaura"
},
{
"code": "",
"text": "So i might not be just due to the DLL loading order even if i can actually reproduce this every single time by changing the order only ?",
"username": "CyberFranck_N_A"
},
{
"code": "",
"text": "Today I downgraded the DEV and the STAGING environments to 2.10.4 and later I was debugging a branched codebase running the driver version 2.11.0 getting the timeout too.\nI downgraded the driver in the branched as well to 2.10.4 and I’m not facing the issue anymore.\nUnfortunately I have no so much time to investigate into this deeper, but I’m 100% sure the code around the driver wasn’t changed and the only difference is the driver version. I’ll wait the next version to check if the issue is solved.",
"username": "Stefano_Ghisaura"
},
{
"code": "",
"text": "I have the same problem with just a simple WinForms application trying to enumerate DB’s on the local server.\nThe problem was intermittent and was solved when I downgraded to driver version 2.10.4.\nThe only thing I did on the server is define an admin user with password protection on the DB.",
"username": "Oren_Lev"
},
{
"code": "",
"text": "Hi @Oren_Lev, and welcome to the forum,Although you may see a similar error message, the cause may be entirely different. For example, both a third party DLL interference versus network interference could cause a CompositeServerSelector issue (amongst other things).I have the same problem with just a simple WinForms application trying to enumerate DB’s on the local server.If the issue that you are encountering is reproducible, please open a new discussion thread/topic. Please provide:Best regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "We are using 2.11.5 and we got this issue as well. It doesn’t happen very often.A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : “1”, ConnectionMode : “Automatic”, Type : “Sharded”, State : “Disconnected”, Servers : [{ ServerId: “{ ClusterId : 1, EndPoint : “192.168.225.110:27017” }”, EndPoint: “192.168.225.110:27017”, ReasonChanged: “InvalidatedBecause:ChannelException:MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server. —> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. —> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host\nat System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult)\nat System.Net.Sockets.NetworkStream.EndRead(IAsyncResult asyncResult)\n.\n.\n.\n— End of stack trace from previous location where exception was thrown —\nat System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\nat System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\nat MongoDB.Driver.Core.Servers.Server.ServerChannel.d__33`1.MoveNext()”, State: “Disconnected”, ServerVersion: , TopologyVersion: , Type: “Unknown”, LastHeartbeatTimestamp: null, LastUpdateTimestamp: “2021-01-20T16:05:37.3963307Z” }] }.",
"username": "Sam_Lin"
},
{
"code": "",
"text": "Hi, Sam,Thank you for reaching out to us about this issue. Without the full stack trace, it is difficult to diagnose the issue definitively. We did however recently discover and fix a race condition in our SDAM implementation, CSHARP-3302. A fix for CSHARP-3302 has been released in .NET/C# Driver 2.11.6. Please try upgrading and letting us know if this resolves your sporadic server selection timeout issues.Thanks,\nJames",
"username": "James_Kovacs"
},
{
"code": " var database = new MongoClient(\"mongodb+srv://#userName:#[email protected]/database_name?retryWrites=true&w=majority&connect=replicaSet\"); \n[MESSAGE] --- A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : \"1\", ConnectionMode : \"ReplicaSet\", Type : \"ReplicaSet\", State : \"Disconnected\", Servers : [{ ServerId: \"{ ClusterId : 1, EndPoint : \"Unspecified/cluster0-shard-00-00.eqam6.gcp.mongodb.net:27017\" }\", EndPoint: \"Unspecified/cluster0-shard-00-00.eqam6.gcp.mongodb.net:27017\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. ---> System.TimeoutException: Timed out connecting to Unspecified/cluster0-shard-00-00.eqam6.gcp.mongodb.net:27017. Timeout was 00:00:30.\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.<ConnectAsync>d__7.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.<CreateStreamAsync>d__4.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Connections.SslStreamFactory.<CreateStreamAsync>d__4.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Connections.BinaryConnection.<OpenHelperAsync>d__51.MoveNext()\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.<OpenHelperAsync>d__51.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Servers.ServerMonitor.<InitializeConnectionAsync>d__32.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task task)\n at MongoDB.Driver.Core.Servers.ServerMonitor.<HeartbeatAsync>d__34.MoveNext()\", LastHeartbeatTimestamp: \"2021-02-23T09:14:35.3589542Z\", LastUpdateTimestamp: \"2021-02-23T09:14:35.3589542Z\" }, { ServerId: \"{ ClusterId : 1, EndPoint : \"Unspecified/cluster0-shard-00-01.eqam6.gcp.mongodb.net:27017\" }\", EndPoint: \"Unspecified/cluster0-shard-00-01.eqam6.gcp.mongodb.net:27017\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: 
\"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. ---> System.TimeoutException: Timed out connecting to Unspecified/cluster0-shard-00-01.eqam6.gcp.mongodb.net:27017. Timeout was 00:00:30.\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.<ConnectAsync>d__7.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.<CreateStreamAsync>d__4.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Connections.SslStreamFactory.<CreateStreamAsync>d__4.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Connections.BinaryConnection.<OpenHelperAsync>d__51.MoveNext()\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.<OpenHelperAsync>d__51.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Servers.ServerMonitor.<InitializeConnectionAsync>d__32.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task task)\n at MongoDB.Driver.Core.Servers.ServerMonitor.<HeartbeatAsync>d__34.MoveNext()\", LastHeartbeatTimestamp: \"2021-02-23T09:14:35.4419744Z\", LastUpdateTimestamp: \"2021-02-23T09:14:35.4419744Z\" }, { ServerId: \"{ ClusterId : 1, EndPoint : \"Unspecified/cluster0-shard-00-02.eqam6.gcp.mongodb.net:27017\" }\", EndPoint: \"Unspecified/cluster0-shard-00-02.eqam6.gcp.mongodb.net:27017\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. ---> System.TimeoutException: Timed out connecting to Unspecified/cluster0-shard-00-02.eqam6.gcp.mongodb.net:27017. 
Timeout was 00:00:30.\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.<ConnectAsync>d__7.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.<CreateStreamAsync>d__4.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Connections.SslStreamFactory.<CreateStreamAsync>d__4.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Connections.BinaryConnection.<OpenHelperAsync>d__51.MoveNext()\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.<OpenHelperAsync>d__51.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Servers.ServerMonitor.<InitializeConnectionAsync>d__32.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task task)\n at MongoDB.Driver.Core.Servers.ServerMonitor.<HeartbeatAsync>d__34.MoveNext()\", LastHeartbeatTimestamp: \"2021-02-23T09:14:35.3579464Z\", LastUpdateTimestamp: \"2021-02-23T09:14:35.3579464Z\" }] }.\n",
"text": "Hi,\nI have the same issue here even after using the 2.11.6 driver.\nThe connection string am using to create the Mongo Client is the following:and the stack trace of the error is the following:if you need any help to reproduce the issue please let me know.\nAny help is appreciated.",
"username": "Nadeem_Khoury"
},
{
"code": "CompositeServerSelectorminPoolSize=20mongodb://username:password@mongo0:27017,mongo1:27017,mongo2:27017/admin?authSource=admin&replicaSet=rs0&minPoolSize=20&maxPoolSize=500[22:19:41 ERR] A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : \"1\", ConnectionMode : \"ReplicaSet\", Type : \"ReplicaSet\", State : \"Disconnected\", Servers : [{ ServerId: \"{ ClusterId : 1, EndPoint : \"Unspecified/mongo0:27017\" }\", EndPoint: \"Unspecified/mongo0:27017\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.\n ---> System.TimeoutException: The operation has timed out.\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadAsync(Stream stream, Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytesAsync(Stream stream, Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessageAsync(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.ExecuteAsync(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.IsMasterHelper.GetResultAsync(IConnection connection, CommandWireProtocol`1 isMasterProtocol, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.GetIsMasterResultAsync(IConnection connection, CommandWireProtocol`1 isMasterProtocol, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.HeartbeatAsync(CancellationToken cancellationToken)\", LastHeartbeatTimestamp: \"2021-03-14T22:18:27.4390847Z\", LastUpdateTimestamp: \"2021-03-14T22:18:27.4390850Z\" }, { ServerId: \"{ ClusterId : 1, EndPoint : \"Unspecified/mongo1:27017\" }\", EndPoint: \"Unspecified/mongo1:27017\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n ---> MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.\n ---> System.TimeoutException: The operation has timed out.\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadAsync(Stream stream, Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytesAsync(Stream stream, Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessageAsync(Int32 responseTo, 
IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1.ExecuteAsync(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.IsMasterHelper.GetResultAsync(IConnection connection, CommandWireProtocol`1 isMasterProtocol, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.ConnectionInitializer.InitializeConnectionAsync(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnectionAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.HeartbeatAsync(CancellationToken cancellationToken)\", LastHeartbeatTimestamp: \"2021-03-14T22:18:27.4350973Z\", LastUpdateTimestamp: \"2021-03-14T22:18:27.4350976Z\" }, { ServerId: \"{ ClusterId : 1, EndPoint : \"Unspecified/mongo2:27017\" }\", EndPoint: \"Unspecified/mongo2:27017\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n ---> System.TimeoutException: Timed out connecting to 172.31.128.202:27017. Timeout was 00:00:30.\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.ConnectAsync(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStreamAsync(EndPoint endPoint, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnectionAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.HeartbeatAsync(CancellationToken cancellationToken)\", LastHeartbeatTimestamp: \"2021-03-14T22:19:25.9946242Z\", LastUpdateTimestamp: \"2021-03-14T22:19:25.9946246Z\" }] }.\n\t\"connections\" : {\n\t\t\"current\" : 1142,\n\t\t\"available\" : 50058,\n\t\t\"totalCreated\" : 7653175,\n\t\t\"active\" : 3\n\t},\n\t\"electionMetrics\" : {\n\t\t\"stepUpCmd\" : {\n\t\t\t\"called\" : 0,\n\t\t\t\"successful\" : 0\n\t\t},\n\t\t\"priorityTakeover\" : {\n\t\t\t\"called\" : 1,\n\t\t\t\"successful\" : 1\n\t\t},\n\t\t\"catchUpTakeover\" : {\n\t\t\t\"called\" : 0,\n\t\t\t\"successful\" : 0\n\t\t},\n\t\t\"electionTimeout\" : {\n\t\t\t\"called\" : 0,\n\t\t\t\"successful\" : 0\n\t\t},\n\t\t\"freezeTimeout\" : {\n\t\t\t\"called\" : 0,\n\t\t\t\"successful\" : 0\n\t\t},\n\t\t\"numStepDownsCausedByHigherTerm\" : 0,\n\t\t\"numCatchUps\" : 0,\n\t\t\"numCatchUpsSucceeded\" : 0,\n\t\t\"numCatchUpsAlreadyCaughtUp\" : 1,\n\t\t\"numCatchUpsSkipped\" : 0,\n\t\t\"numCatchUpsTimedOut\" : 0,\n\t\t\"numCatchUpsFailedWithError\" : 0,\n\t\t\"numCatchUpsFailedWithNewTerm\" : 0,\n\t\t\"numCatchUpsFailedWithReplSetAbortPrimaryCatchUpCmd\" : 0,\n\t\t\"averageCatchUpOps\" : 0\n\t},\nlocalThreshold",
"text": "CSHARP-3302I’ve been watching this thread for a couple months as we are experiencing the same issue. We are running three instances of mongo, each in an EC2 instance inside the same VPC and subnet, configured as a replica set. We have been receiving the same CompositeServerSelector error in a few different flavors, and upgrading to the 2.11.6 CSHARP driver version did not improve these timeouts.We were hit a rash of these in production last week and applied a minPoolSize=20 to the most active mongoDB API clients, and that seems to have improved it slightly, but not eliminated the problem. This newer connection string is: mongodb://username:password@mongo0:27017,mongo1:27017,mongo2:27017/admin?authSource=admin&replicaSet=rs0&minPoolSize=20&maxPoolSize=500The stack trace of the errors we are seeing:We are not experiencing primary stepdowns, but may be seeing widely varying load conditions that are affecting the RTT.From the primary:As far I can tell with multiple dev reviews, we are correctly using the singleton instantiation of the MongoClient and rely on it to manage the connection pools when needed.When the mongo pool hangs, the API response times spike, then monotonically decrease until the API container that is using the MongoClient is recycled because of connection timeouts.Screenshot 2021-03-15 1609061605×505 65.6 KBMy next thought is to start playing with localThreshold to increase the window above the default 15ms to make sure that one or more servers are available in extreme events. Is there a way to get the RTT to servers as seen from the mongo driver and dump those in the case of these timeouts to see if they are limiting?",
"username": "Roger_Clark"
},
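For reference, one way to answer the RTT question above is to read the .NET driver's cluster description, which exposes the round-trip time the driver measures via its heartbeats. This is a hedged sketch, not code from the thread; the connection string hosts are placeholders:

```csharp
// Sketch: dump the driver's view of each server, including AverageRoundTripTime.
using System;
using MongoDB.Driver;

var client = new MongoClient(
    "mongodb://mongo0:27017,mongo1:27017,mongo2:27017/?replicaSet=rs0");

foreach (var server in client.Cluster.Description.Servers)
{
    Console.WriteLine($"{server.EndPoint}: state={server.State}, " +
                      $"type={server.Type}, rtt={server.AverageRoundTripTime}");
}
```

Logging this alongside the timeouts would show whether a member's RTT drifts past the 15ms latency window that localThreshold controls.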
{
"code": "LastHeartbeatTimestampLastUpdateTimestamp",
"text": "Hi, Nadeem and Roger,Thank you to each of you for reaching out to us. We have reviewed the log messages provided by both of you and we do not see evidence of CSHARP-3302. CSHARP-3302 was fixed in MongoDB .NET/C# driver v2.11.6 and v2.12.0. As well CSHARP-3302 is evidenced by heartbeats becoming stuck (LastHeartbeatTimestamp and LastUpdateTimestamp not updating) while one member shows that it is in the middle of processing a heartbeat.Server selection timeouts can occur for a wide variety of reasons and is most often the result of network connectivity issues between your client application and the cluster. In both the log lines provided, we see that the client application is unable to reach any of the cluster members due to network timeouts.We recommend verifying network connectivity and configuration, especially firewalls and connection limits and similar issues that could prevent your client application from successfully connecting with your cluster.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": ".AggregateAsync<TModel>(pipeline, new AggregateOptions { BatchSize = 8000, AllowDiskUse = true }))CompositeServerSelectorSystem.NullReferenceException: Object reference not set to an instance of an object.\n Module \"MongoDB.Driver.Core.Servers.ServerMonitor\", in CancelCurrentCheck\n Module \"MongoDB.Driver.Core.Servers.Server\", in HandleBeforeHandshakeCompletesException\n Module \"MongoDB.Driver.Core.Servers.Server\", in GetChannelAsync\n Module \"MongoDB.Driver.Core.Operations.RetryableReadContext\", in InitializeAsync\n Module \"MongoDB.Driver.Core.Operations.RetryableReadContext\", in CreateAsync\n Module \"MongoDB.Driver.Core.Operations.AggregateOperation`1\", in ExecuteAsync\n Module \"MongoDB.Driver.OperationExecutor+<ExecuteReadOperationAsync>d__3`1\", in MoveNext\n Module \"MongoDB.Driver.MongoCollectionImpl`1+<ExecuteReadOperationAsync>d__98`1\", in MoveNext\n Module \"MongoDB.Driver.MongoCollectionImpl`1+<AggregateAsync>d__22`1\", in MoveNext\n Module \"MongoDB.Driver.MongoCollectionImpl`1+<UsingImplicitSessionAsync>d__106`1\", in MoveNext\n",
"text": "James,Thanks for the review. While I cannot rule out a network issue, our instances themselves report no intra-node dropouts, changes of primary, or timeouts. All of our applications are run with internal load balancers and EC2 instances on the same VPC as the Mongo ec2 instances. We do have some heavy .AggregateAsync<TModel>(pipeline, new AggregateOptions { BatchSize = 8000, AllowDiskUse = true })) operations, and continue to optimize things there.That said, we just saw another error that was thrown at the start of the contemporaneous CompositeServerSelector timeouts that may provide additional insights, as it seems strictly within the driver as a NullRefWe typically have between 1400-4000 connections open to the db, and have followed all the thread/socket/heap/ulimit settings pointers to make sure instances aren’t having socket starvation.Thanks again,\nRoger",
"username": "Roger_Clark"
},
{
"code": "CancelCurrentCheck",
"text": "CancelCurrentCheckHi Roger, thanks for reporting this issue. It was fixed in 2.12.0, and tracked here: https://jira.mongodb.org/browse/CSHARP-3436.",
"username": "Boris_Dogadov"
}
] | Bug in c# Driver 2.11.0 | 2020-08-11T13:41:32.161Z | Bug in c# Driver 2.11.0 | 24,348 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hi! How can I sum a specific field in an specific array position?\nI have the field: “fee_details”: [{ “amount”: 300 }, { “amount”: 200 }] and I am using the $group operator and I want to sum just the fee_details[0].amount. How can I do that?\nI saw on the web some people doing { $sum: “$specific_field.0.specific_property”} but it doesn’t work",
"username": "Marcello_Manuel_Borg"
},
{
"code": "[\n {\n \"key\": 1,\n \"fee_details\": [{ \"amount\": 300 }, { \"amount\": 200 }]\n },\n {\n \"key\": 2,\n \"fee_details\": [{ \"amount\": 100 },{ \"amount\": 200 }]},\n {\n \"key\": 1,\n \"fee_details\": [{ \"amount\" : 50},{ \"amount\" : 200}]},\n {\n \"key\": 2,\n \"fee_details\": [{ \"amount\" : 30},{ \"amount\" : 200}]\n }\n]{\n $group: {\n _id: \"$key\",\n sum: {\n $sum: {\n $arrayElemAt: [\n \"$fee_details.amount\",\n 0\n ]\n }\n }\n }\n }",
"text": "Hi,\nlets say you have the following collectionyou want to group by key and sum the first fee_details for each key. The corresponding stage will be:Regards,",
"username": "Imad_Bouteraa"
},
{
"code": "",
"text": "I’ve tried it, but it didn’t work. Can I use $sum in a string field? I’ve noticed the amount field type is ‘string’.\nBtw, it’s a string type, but the value is a valid number, ofc",
"username": "Marcello_Manuel_Borg"
},
{
"code": " {\n $group: {\n _id: \"$key\",\n sum: {\n $sum: {\n $toInt: {\n $arrayElemAt: [\n \"$fee_details.amount\",\n 0\n ]\n }\n }\n }\n }\n }[\n {\n \"key\": 1,\n \"fee_details\": [\n {\n \"amount\": \"300\"\n },\n {\n \"amount\": 200\n }\n ]\n },\n {\n \"key\": 2,\n \"fee_details\": [\n {\n \"amount\": \"100\"\n },\n {\n \"amount\": 200\n }\n ]\n },\n {\n \"key\": 1,\n \"fee_details\": [\n {\n \"amount\": \"50\"\n },\n {\n \"amount\": 200\n }\n ]\n },\n {\n \"key\": 2,\n \"fee_details\": [\n {\n \"amount\": \"30\"\n },\n {\n \"amount\": 200\n }\n ]\n }\n][\n {\n \"_id\": 2,\n \"sum\": 130\n },\n {\n \"_id\": 1,\n \"sum\": 350\n }\n]",
"text": "then, use $toIntfor the input:the stage results:",
"username": "Imad_Bouteraa"
},
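A side note for anyone landing on this thread: if the stored strings can contain decimals or malformed values, $convert (available from MongoDB 4.0) is a more defensive alternative to $toInt. A sketch, assuming the same collection shape as above:

```javascript
db.collection.aggregate([
  {
    $group: {
      _id: "$key",
      sum: {
        $sum: {
          $convert: {
            input: { $arrayElemAt: ["$fee_details.amount", 0] },
            to: "double",
            onError: 0, // treat unparseable strings as 0 instead of failing
            onNull: 0   // treat missing values as 0
          }
        }
      }
    }
  }
])
```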
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Sum specific field in specific array position | 2021-04-01T21:34:42.580Z | Sum specific field in specific array position | 2,141 |
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "I haven’t found anything on here yet that answers my question, or otherwise in the docs.I want to be able to use the realm user access token, the same token that is used to authenticate Realm Graphql API requests, to authenticate on my own Golang web API.Currently, I haven’t seen any sort of way to use a Realm function to do this which was one of my initial thoughts. I also don’t believe there is any sort of Realm SDK for Golang.How can one validate a Realm user token in their own server-side application?Thanks!",
"username": "Lukas_deConantseszn1"
},
{
"code": "user_id : ....\naccess_token : ...\n",
"text": "Hi @Lukas_deConantseszn1,I am somewhat don’t understand the question. When you run an HTTP authentication to Realm you getRight? So now if you store this somewhere on your Golang server you can know that the user XXXX is authenticated with that token…Is that what you are looking for?Please correct me if I missunderstood this.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny,I would definitely say you are understanding. Just want to get your idea straight.So if I store this on my Golang server, I am essentially storing it in either local memory(ram) or in a MongoDB collection. Presumably if I was going to store it in a collection, I would store it in the same collection that I use for user custom data right? So then I would have some sort of field called accessToken. But what if the token expires? I need to update this collection every time I refresh the token, which adds a lot of additional network requests. If I store it in ram, well I still have this same problem. Plus, storing in ram would probably result in a lot of excess data in ram overtime.I just wish there was an endpoint that Realm had for validating a token. Like /validate-token and you would send it the token in the payload, and you would either get a 200 response saying the token was good, or you would get a different response, like 401, saying the token was expired.Please let me know if this is making more sense.Thanks!\nLukas",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "Hi @Lukas_deConantseszn1,I mean it sounds reasonable I just not sure why you would go to your goLang server rather than going to the realm authentication directly for tokens?But I guess you could still build an http service where you have a “validate” webhook running as system and tries a dummy graphql query with provided token as payload… If return success return a valid token responseDoes that make sense?Best\nPavel",
"username": "Pavel_Duchovny"
},
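To make the dummy-query idea concrete, here is a hedged sketch (the endpoint URL shape and placeholders are assumptions, not taken from the thread): a minimal GraphQL introspection query that only succeeds while the client access token is valid.

```bash
# Prints 200 if the bearer token is a valid, unexpired Realm client access token.
curl -s -o /dev/null -w "%{http_code}\n" \
  "https://realm.mongodb.com/api/client/v2.0/app/<app-id>/graphql" \
  -H "Authorization: Bearer <client-access-token>" \
  -H "Content-Type: application/json" \
  -d '{ "query": "{ __typename }" }'
```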
{
"code": "",
"text": "Hi @Pavel_Duchovny,That makes a lot of sense and that’s actually similar to something I was thinking of doing really. The dummy gql query.Thanks!",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "We’ve added this functionality in the product now.The OpenAPI documentation for the endpoint is here: https://docs.mongodb.com/realm/admin/api/v3/#post-/groups/{groupid}/apps/{appid}/users/verify_token This “Authenticate HTTP Client Requests” page about using the endpoint to verify a client access token: https://docs.mongodb.com/realm/reference/authenticate-http-client-requests/#verify-a-client-access-token ",
"username": "Sumedha_Mehta1"
},
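A hedged sketch of calling that Admin API endpoint from a backend (the group/app IDs and tokens are placeholders):

```bash
# A 200 response means the client access token is valid.
curl -s -X POST \
  "https://realm.mongodb.com/api/admin/v3.0/groups/<groupId>/apps/<appId>/users/verify_token" \
  -H "Authorization: Bearer <admin-session-token>" \
  -H "Content-Type: application/json" \
  -d '{ "token": "<client-access-token>" }'
```

Note that the Admin API requires its own session token (obtained with a programmatic API key), separate from the client token being verified.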
{
"code": "",
"text": "Thanks @Sumedha_Mehta1!@Lukas_deConantseszn1 FYI ^^^",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Validating Realm User Access Token on Server Application | 2020-09-13T21:49:41.842Z | Validating Realm User Access Token on Server Application | 4,856 |
null | [
"swift"
] | [
{
"code": "configuration.syncConfiguration?.realmURL",
"text": "looking to identify the path of a Realm. Prior to v10 in Swift SDK, I was able to achieve that through:configuration.syncConfiguration?.realmURL…Thank you in advance.",
"username": "Reveel"
},
{
"code": "configuration.syncConfiguration?.partitionValue",
"text": "Currently new MongoDB Realm uses partitions in place of the old URL and path. configuration.syncConfiguration?.partitionValue is the new identifier for the location.https://docs.mongodb.com/realm-sdks/swift/10.7.0/Structs/SyncConfiguration.html",
"username": "bainfu"
},
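A minimal Swift sketch of reading that value from an already-open synced Realm (RealmSwift v10+ assumed):

```swift
import RealmSwift

// 'realm' is assumed to be an already-open Realm instance.
if let syncConfig = realm.configuration.syncConfiguration {
    // The partition value now plays the role the realm URL/path used to.
    print("Partition value: \(String(describing: syncConfig.partitionValue))")
} else {
    print("Not a synced Realm") // local Realms have no sync configuration
}
```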
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Trying to get Realm's path; previously was 'configuration.syncConfiguration?.realmURL' | 2021-04-02T00:15:47.469Z | Trying to get Realm’s path; previously was ‘configuration.syncConfiguration?.realmURL’ | 1,927 |
null | [
"backup"
] | [
{
"code": "",
"text": "Hi there,I am currently thinking about our DRP and this question has just pop in my mind.We are currently using a single region cluster on Atlas GCP.\nWe have CloudBackup enabled that provides us automatic snapshots. Good enough.In the documentation (https://docs.atlas.mongodb.com/backup/cloud-backup/restore/) it is mentionedRestore your Snapshot to an Atlas ClusterFor best possible restore performance, select a target cluster that belongs to:My question is “What happen if a whole region of the cloud provider becomes unavailable ?”Are the CloudSnapshot migrated to a secondary region ?\nDo I have to setup some kind of backup of my backups in a replicated way ?Thanks !",
"username": "Xavier_Krantz"
},
{
"code": "",
"text": "HI @Xavier_Krantz the snapshots are not stored in a different region but are stored in all AZ’s in a single region, so there would have to be a catastrophic regional failure. Today you can download your snapshots and store them someplace else. There is some functionality we are working on in the future that will help address this in a more automated form.",
"username": "bencefalo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Cloud provider backup region availability | 2021-03-26T15:55:07.203Z | Cloud provider backup region availability | 2,201 |
null | [
"server"
] | [
{
"code": "",
"text": "Hi Team,We are seeing few dynamic link libraries are missing in 3.6.17 for CentOS 8. if i install the same on CentOS7 its shows those additional libs.ldd /usr/bin/mongo\nlinux-vdso.so.1 => (0x00007fff9778f000)\nlibresolv.so.2 => /lib64/libresolv.so.2 (0x00007f75ad456000)\nlibcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007f75acff3000)\nlibssl.so.10 => /lib64/libssl.so.10 (0x00007f75acd81000)\nlibdl.so.2 => /lib64/libdl.so.2 (0x00007f75acb7d000)\nlibrt.so.1 => /lib64/librt.so.1 (0x00007f75ac975000)\nlibm.so.6 => /lib64/libm.so.6 (0x00007f75ac673000)\nlibgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f75ac45d000)\nlibpthread.so.0 => /lib64/libpthread.so.0 (0x00007f75ac241000)\nlibc.so.6 => /lib64/libc.so.6 (0x00007f75abe73000)\n/lib64/ld-linux-x86-64.so.2 (0x00007f75af230000)\nlibz.so.1 => /lib64/libz.so.1 (0x00007f75abc5d000)\nlibgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f75aba10000)\nlibkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f75ab727000)\nlibcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f75ab523000)\nlibk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f75ab2f0000)\nlibkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f75ab0e0000)\nlibkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f75aaedc000)\nlibselinux.so.1 => /lib64/libselinux.so.1 (0x00007f75aacb5000)\nlibpcre.so.1 => /lib64/libpcre.so.1 (0x00007f75aaa53000)ldd /usr/bin/mongod\nlinux-vdso.so.1 (0x00007ffe834c4000)\nlibresolv.so.2 => /lib64/libresolv.so.2 (0x00007f2777452000)\nlibcrypto.so.1.1 => /lib64/libcrypto.so.1.1 (0x00007f2776f6c000)\nlibssl.so.1.1 => /lib64/libssl.so.1.1 (0x00007f2776cd8000)\nlibdl.so.2 => /lib64/libdl.so.2 (0x00007f2776ad4000)\nlibrt.so.1 => /lib64/librt.so.1 (0x00007f27768cc000)\nlibm.so.6 => /lib64/libm.so.6 (0x00007f277654a000)\nlibgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f2776332000)\nlibpthread.so.0 => /lib64/libpthread.so.0 (0x00007f2776112000)\nlibc.so.6 => /lib64/libc.so.6 (0x00007f2775d4f000)\n/lib64/ld-linux-x86-64.so.2 (0x00007f277a7a9000)\nlibz.so.1 => /lib64/libz.so.1 (0x00007f2775b38000)Any idea why this difference is.? We have been debugging a mongo 3.6.17 version issue in newer CentOS8 and suspecting if this could cause an issue.",
"username": "venkataraman_r"
},
{
"code": "lddmongomongodmongod",
"text": "Any idea why this difference is.? We have been debugging a mongo 3.6.17 version issue in newer CentOS8 and suspecting if this could cause an issue.Hi @venkataraman_r,You are comparing the ldd output of the mongo shell versus the mongod server binary, so the linked libraries are not expected to be identical. The linked library versions for mongod may also vary between O/S releases.What problem are you trying to solve?If you want to compare server build options, I suggest comparing the output of:mongo --norc --eval “db.serverBuildInfo()”Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "mongo --norc --eval “db.serverBuildInfo()”Thank you @Stennie_X for pointing out. But when i check the same for mongod also the same difference i see.db.serverBuildInfo() is helpful but i dont see any difference between 3.6.17 CentOS7 and 8.We have been hitting mongo SECONDARY members getting into hung state and opened a JIRA (https://jira.mongodb.org/browse/SERVER-54805). But we are trying to understand why its happening only with CentOS8. CentOS7 with the same mongo version 3.6.17 is working great. So trying to analyze from all the angle seeing a ldd difference. So thinking if that could cause any issues.",
"username": "venkataraman_r"
},
{
"code": "",
"text": "But when i check the same for mongod also the same difference i see.Hi @venkataraman_r,Based on the info in SERVER-54086, you are comparing MongoDB Community server on CentOS 7 with MongoDB Enterprise server on CentOS 7. MongoDB Enterprise uses additional libraries including GSSAPI and Kerberos.This difference does not seem relevant to the problem you have described unless there are some other differences in configuration (for example, security & authentication) that might affect connections. However, you could install the same edition & release version of MongoDB server into the newer environment to remove that variation.As noted in the Jira discussion, MongoDB 3.6 will reach End of Life (EOL) next month and we recommend you upgrade to MongoDB 4.0 to see if the issue is still reproducible. Once a server release series reaches end of life, no further bug fixes or security updates will be created.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "No @Stennie_X, i dont have Enterprise version at all in my servers. Its the same community version.",
"username": "venkataraman_r"
},
{
"code": "mongo --norc --eval “db.serverBuildInfo()”",
"text": "i dont have Enterprise version at all in my servers. Its the same community version.Hi @venkataraman_r,Can you provide the output of mongo --norc --eval “db.serverBuildInfo()” for both servers? Did you install official packages from MongoDB or build from source?I’m curious to see if there is some difference in build options.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "--norc --eval “db.serverBuildInfo()”",
"text": "--norc --eval “db.serverBuildInfo()”[root@abc ~]# mongo XYZ:27717 --norc --eval “db.serverBuildInfo()”\nMongoDB shell version v3.6.17\nconnecting to: mongodb://XYZ:27717/test?gssapiServiceName=mongodb\nImplicit session: session { “id” : UUID(“6431a77a-6845-4f3e-b70a-7b4481bcd41f”) }\nMongoDB server version: 3.6.17\n{\n“version” : “3.6.17”,\n“gitVersion” : “3d6953c361213c5bfab23e51ab274ce592edafe6”,\n“modules” : ,\n“allocator” : “tcmalloc”,\n“javascriptEngine” : “mozjs”,\n“sysInfo” : “deprecated”,\n“versionArray” : [\n3,\n6,\n17,\n0\n],\n“openssl” : {\n“running” : “OpenSSL 1.1.1g FIPS 21 Apr 2020”,\n“compiled” : “OpenSSL 1.1.1 FIPS 11 Sep 2018”\n},\n“buildEnvironment” : {\n“distmod” : “rhel80”,\n“distarch” : “x86_64”,\n“cc” : “/opt/mongodbtoolchain/v2/bin/gcc: gcc (GCC) 5.4.0”,\n“ccflags” : “-fno-omit-frame-pointer -fno-strict-aliasing -ggdb -pthread -Wall -Wsign-compare -Wno-unknown-pragmas -Winvalid-pch -Werror -O2 -Wno-unused-local-typedefs -Wno-unused-function -Wno-deprecated-declarations -Wno-unused-but-set-variable -Wno-missing-braces -fstack-protector-strong -fno-builtin-memcmp”,\n“cxx” : “/opt/mongodbtoolchain/v2/bin/g++: g++ (GCC) 5.4.0”,\n“cxxflags” : “-Woverloaded-virtual -Wno-maybe-uninitialized -std=c++14”,\n“linkflags” : “-pthread -Wl,-z,now -rdynamic -Wl,–fatal-warnings -fstack-protector-strong -fuse-ld=gold -Wl,–build-id -Wl,–hash-style=gnu -Wl,-z,noexecstack -Wl,–warn-execstack -Wl,-z,relro”,\n“target_arch” : “x86_64”,\n“target_os” : “linux”\n},\n“bits” : 64,\n“debug” : false,\n“maxBsonObjectSize” : 16777216,\n“storageEngines” : [\n“devnull”,\n“ephemeralForTest”,\n“mmapv1”,\n“wiredTiger”\n],\n“ok” : 1,\n“operationTime” : Timestamp(1617065195, 2053),\n“$clusterTime” : {\n“clusterTime” : Timestamp(1617065195, 2053),\n“signature” : {\n“hash” : BinData(0,“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”),\n“keyId” : NumberLong(0)\n}\n}\n}\n[root@abc ~]# cat /etc/redhat-release\nCentOS Linux release 8.1.1911 (Core)\n[root@abc ~]# mongod --version\n-bash: mongod: command not found\n[root@abc ~]# ssh XYZ !!\nssh XYZ mongod --version\ndb version v3.6.17\ngit version: 3d6953c361213c5bfab23e51ab274ce592edafe6\nOpenSSL version: OpenSSL 1.1.1g FIPS 21 Apr 2020\nallocator: tcmalloc\nmodules: none\nbuild environment:\ndistmod: rhel80\ndistarch: x86_64\ntarget_arch: x86_64\n[root@abc ~]#[root@abc ~]# mongo DEF:27717 --norc --eval “db.serverBuildInfo()”\nMongoDB shell version v3.6.9\nconnecting to: mongodb://DEF:27717/test\nImplicit session: session { “id” : UUID(“967d0f97-1419-44a6-8ca0-2d51ac367dc1”) }\nMongoDB server version: 3.6.17\n{\n“version” : “3.6.17”,\n“gitVersion” : “3d6953c361213c5bfab23e51ab274ce592edafe6”,\n“modules” : ,\n“allocator” : “tcmalloc”,\n“javascriptEngine” : “mozjs”,\n“sysInfo” : “deprecated”,\n“versionArray” : [\n3,\n6,\n17,\n0\n],\n“openssl” : {\n“running” : “OpenSSL 1.0.1e-fips 11 Feb 2013”,\n“compiled” : “OpenSSL 1.0.1e-fips 11 Feb 2013”\n},\n“buildEnvironment” : {\n“distmod” : “rhel70”,\n“distarch” : “x86_64”,\n“cc” : “/opt/mongodbtoolchain/v2/bin/gcc: gcc (GCC) 5.4.0”,\n“ccflags” : “-fno-omit-frame-pointer -fno-strict-aliasing -ggdb -pthread -Wall -Wsign-compare -Wno-unknown-pragmas -Winvalid-pch -Werror -O2 -Wno-unused-local-typedefs -Wno-unused-function -Wno-deprecated-declarations -Wno-unused-but-set-variable -Wno-missing-braces -fstack-protector-strong -fno-builtin-memcmp”,\n“cxx” : “/opt/mongodbtoolchain/v2/bin/g++: g++ (GCC) 5.4.0”,\n“cxxflags” : “-Woverloaded-virtual -Wno-maybe-uninitialized -std=c++14”,\n“linkflags” : “-pthread -Wl,-z,now -rdynamic 
-Wl,–fatal-warnings -fstack-protector-strong -fuse-ld=gold -Wl,–build-id -Wl,–hash-style=gnu -Wl,-z,noexecstack -Wl,–warn-execstack -Wl,-z,relro”,\n“target_arch” : “x86_64”,\n“target_os” : “linux”\n},\n“bits” : 64,\n“debug” : false,\n“maxBsonObjectSize” : 16777216,\n“storageEngines” : [\n“devnull”,\n“ephemeralForTest”,\n“mmapv1”,\n“wiredTiger”\n],\n“ok” : 1,\n“operationTime” : Timestamp(1617065293, 297),\n“$clusterTime” : {\n“clusterTime” : Timestamp(1617065293, 297),\n“signature” : {\n“hash” : BinData(0,“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”),\n“keyId” : NumberLong(0)\n}\n}\n}\n[root@abc ~]# cat /etc/red\nredhat-release redis-1.conf redis-2.conf redis-3.conf redis-4.conf redis-5.conf redis.conf\n[root@abc ~]# cat /etc/redhat-release\nCentOS Linux release 7.6.1810 (Core)\n[root@abc ~]# ssh DEF mongod --version\ndb version v3.6.17\ngit version: 3d6953c361213c5bfab23e51ab274ce592edafe6\nOpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013\nallocator: tcmalloc\nmodules: none\nbuild environment:\ndistmod: rhel70\ndistarch: x86_64\ntarget_arch: x86_64\n[root@abc ~]#",
"username": "venkataraman_r"
},
{
"code": "",
"text": "@Stennie_X, do you see any difference?",
"username": "venkataraman_r"
},
{
"code": "mongodgitVersionreadelf -a | grep NEEDEDlddlddmongodmongodreadelfDT_NEEDEDmongod",
"text": "Hi @venkataraman_r,I don’t think there is any meaningful difference in the mongod build options: both are built from the same gitVersion and appear to have relevant libraries for different CentOS release series.ldd /usr/bin/mongodWhile I don’t think this a useful path to continue pursuing, one of my colleagues shared a tip:Try using readelf -a | grep NEEDED rather than ldd. Running ldd is going to show you not just the direct dependencies of mongod, but the transitive dependencies as well. Since system libraries may have changed their dependencies between RHEL 7 and RHEL 8, the output may well differ even if mongod is actually linked to the same system libraries. Using readelf to extract the DT_NEEDED entries will show only the direct dependencies that the mongod binary expresses.I would follow the advice that Edwin gave in the server issue discussion:We identified the behavior that occurs, but we don’t exactly know why it happens. The MMAPv1 storage engine has been deprecated in favor of WiredTiger since MongoDB 4.0, which correlates with when this issue went away. You can find more information on the storage engine, as well as migrating to WiredTiger from MMAPv1 as a replica set in our docs. We’d love to hear back from you if this issue persists after you’ve upgraded to MongoDB 4.0.I know you aren’t prepared to upgrade to WiredTiger yet, but it has been the default storage engine since MongoDB 3.2 (December 2015) and there has been significant investment in improvements in performance and stability.Is there a known issue with your use case that prevents using WiredTiger?Regards,\nStennie",
"username": "Stennie_X"
},
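For anyone following along, a sketch of the suggested comparison (binary path assumed; run on each OS and diff the output):

```bash
# Direct (DT_NEEDED) dependencies only, without transitive libraries:
readelf -d /usr/bin/mongod | grep NEEDED

# Or, as in the tip above, from the full ELF dump:
readelf -a /usr/bin/mongod | grep NEEDED
```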
{
"code": "readelf -a | grep NEEDED",
"text": "readelf -a | grep NEEDEDThank you @Stennie_X for sharing the tip. Yes i dont see any difference with \" readelf -a | grep NEEDED \" command. . In past we had performance issue with WT. But we will consider WT in future. I’m fine to close this thread",
"username": "venkataraman_r"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Ldd difference between RHEL 7 and 8 for mongo 3.6.17 above | 2021-03-26T00:08:41.107Z | Ldd difference between RHEL 7 and 8 for mongo 3.6.17 above | 4,059 |
null | [] | [
{
"code": "",
"text": "Hi,I have a Mongo DB table that stores time-series data.I am trying to Mongo DB schema transformation and I have written code to use AutoMapper library to do the transformation for each document.I am reading document by document and transforming it and inserting them. Have configurable features for Bulk Read(Using Skip and Limit) and Bulk Write for speed.I just wanted to know if suppose some error occurs and fails then how can I resume?Can I use this _id value to resume the migration?Wondering if I can use _id as a filter condition because there is an index on this column.\nI can’t use the time to filter because this column is not unique and there we could miss few records to migrate if using greater than and will insert duplicate in using less than.More importantly the _id column is unique. If I am able to apply $gt on this field then I will be able to jump to the last processed location without any overhead because of the index.Basically what I am saying is I need to have one unique column that can be comparable for greater or lesser. _id column is one of them. Not sure if it is comparable. Is it?Update: Maybe _id this is not unique https://docs.mongodb.com/manual/reference/bson-types/#objectid Not sure what this link meant. My head is biting because of reading too much information. Man this quarantine has given me a big problem and that’s time.With regards,\nNithin B",
"username": "Nithin_Bandaru"
},
{
"code": "_id_id_id_id_id_id",
"text": "Hi Nithin,I believe you can use the _id field as you proposed. However there are some caveats:If I am able to apply $gt on this field then I will be able to jump to the last processed location without any overhead because of the index.This is correct, as long as your _id field is monotonically increasing and can be used for sorting in this manner. This will not work if your _id field contains a semi-random string like UUID or similar.If you’re using the auto generated ObjectId, this should work. Unless your deployment is a sharded cluster. The _id field is required to be unique within each collection. It is guarded by a unique index, so attempting to insert a document with a duplicate _id will result in an error.If you need further help with this, could you post:Best regards,\nKevin",
"username": "kevinadi"
},
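Pulling the advice together, a hedged PyMongo sketch of a resumable scan keyed on an auto-generated ObjectId _id (the process() step and collection names are placeholders):

```python
from pymongo import ASCENDING, MongoClient

client = MongoClient()  # assumes a reachable mongod with default settings
coll = client["mydb"]["source_collection"]

last_id = None  # persist this checkpoint (e.g. to a file) to survive failures
while True:
    query = {"_id": {"$gt": last_id}} if last_id else {}
    batch = list(coll.find(query).sort("_id", ASCENDING).limit(1000))
    if not batch:
        break  # nothing left after the checkpoint
    for doc in batch:
        process(doc)  # placeholder: transform and write to the target collection
    last_id = batch[-1]["_id"]  # advance the checkpoint
```

The $gt filter and the sort both use the default _id index, so resuming does not re-scan documents that were already processed.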
{
"code": "",
"text": "So, If its a Sharded Cluster we can use Shard Key + _id to make it unique across.Thanks.",
"username": "Nithin_Bandaru"
}
] | How to loop through MongoDB's all records(with resume from a particular record) | 2020-05-26T19:54:46.183Z | How to loop through MongoDB’s all records(with resume from a particular record) | 3,304 |
null | [] | [
{
"code": "",
"text": "Hi,\nsince Atlas is based on Lucene, and Lucene handles synonyms, is it possible to provide a custom list of synonyms to an analyzer?",
"username": "Marco_Dell_Anna"
},
{
"code": "",
"text": "Hi,Thanks for the question!As of right now (April 2021) it is not possible to provide a custom list of synonyms to the analyzer. However, this feature will be available later in the year.In the meantime, I encourage you to upvote this feature request here:I'd like to be able to set synonyms for word in my search index, for example, making a search for \"cerulean\" redirect to \"blue\"Best,",
"username": "nraboy"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Custom Synonyms List | 2021-03-24T11:33:27.140Z | Custom Synonyms List | 2,212 |
[
"mongodb-shell",
"installation"
] | [
{
"code": "",
"text": "Hi\nRunning windows 8.1. (64Bit Version)\ninstalled mongo shell off Altas\nall goes well.\nonce i try run mongo.exe i get an error message:mongo.exe - Entry point not found.the procedure entry point bCryptHash could not be located in the dynamic link library.",
"username": "Kieran_O_Toole"
},
{
"code": ">ver\nMicrosoft Windows [Version 6.2.9200]\n\n>cd %MONGODB_HOME%\\bin\n\n>mongo.exe --version\n---------------------------\nmongod.exe - Entry Point Not Found\n---------------------------\nThe procedure entry point BCryptHash could not be located \nin the dynamic link library <...>\\mongodb-4.4.4\\bin\\mongod.exe. \n---------------------------\nOK \n---------------------------\n>mongod --version\ndb version v4.2.13\ngit version: 82dd40f60c55dae12426c08fd7150d79a0e28e23\n. . .\n",
"text": "Installed ZIP-version 4.4.4. There’s an instruction here that shows a successful installation and running.When I try it myself:The following dialog pops up:Haven’t found the solution yet.UPDATELooks like I found the cause.Here it’s stated that my OS version is not supported:Windows 8 / Server 2012 – Support removed in MongoDB 4.4+.Checked the same for MongoDB 4.2.13 and found that my OS is supported. Installed the ZIP version and:",
"username": "Alexander_Petrov"
}
] | Installation on Windows 8 - MongoShell gives error | 2020-12-03T13:02:31.509Z | Installation on Windows 8 - MongoShell gives error | 3,951 |
|
null | [
"swift",
"atlas-device-sync"
] | [
{
"code": "",
"text": "Hi all,I’m developing an iOS app using Realm Sync.\nI have 2 questions on how to manage users using my app:1 - My app will be used by multiple users which belong to different companies. Is there a way to put all the users of a company in a same group so they can share a common group id allowing them to access some share objects (which can be retrieved thanks to the partition key of the objects for example)?2 - Is there a way to allow a user to be connected to only one device at a time? Meaning that if he has an active session on one device and wants to connect to another device, it will be automatically disconnected from the first one.Thanks for your help!",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "Hi @Julien_Chouvet,For #1, this is what we did with the WildAid O-FISH app (each user is a member of an “Agency” and can work with objects created by other users in the same agency. Take a look at this article for details.#2 is interesting, and I don’t have a solution off the top of my head. Thining aloud, there are some tools at your disposal – for example, you have authentication triggers that can run when a user logs in and you can send push notifications from the Realm backend to the iOS app.",
"username": "Andrew_Morgan"
},
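As one concrete illustration of the approach described in that article, the sync rules can compare the partition against the user's custom data. A sketch (the field names follow the O-FISH custom user data shape; treat this as illustrative, not the article's exact configuration):

```json
{
  "read": { "%%user.custom_data.agency.name": "%%partition" },
  "write": { "%%user.custom_data.agency.name": "%%partition" }
}
```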
{
"code": "",
"text": "Hi @Andrew_Morgan,Thanks for your help!For #2 I think a trigger and notifications can help but how can I identify the different devices? I know that in the Realm app>App Users there are some Device ID, but what do they refer to? Is there a way to send notifications to some devices based on this ID?\nCapture d’écran 2021-04-01 à 16.24.591436×544 91.4 KB",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "Hi @Julien_Chouvet #2 is stretching the limits of my experience and so I’d suggest posting a new topic with a title that should attract the right eyeballs – maybe “How do I log a user out from previous devices when they log in on a new one?”",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Ok I’ll do that. Thanks for your help!",
"username": "Julien_Chouvet"
}
] | Manage multiple Realm Apps users | 2021-04-01T08:01:27.521Z | Manage multiple Realm Apps users | 2,869 |
null | [
"installation"
] | [
{
"code": "",
"text": "Hi Experts,Wanted to check if anyone is running mongo 3.6x on centos 8 in their application.Thanks,\nKiran",
"username": "Kiran_Pamula"
},
{
"code": "",
"text": "Can someone reply to this post?? Why is Mongo guide missing with Centos compatibility details?",
"username": "Raj_Kumar"
},
{
"code": "",
"text": "Can someone reply to this post??I assume the question would be answered by those running MongoDB 3.6.x on CentOS 8 who see this thread per the thread title. Why would anyone else reply ?Why is Mongo guide missing with Centos compatibility details?In my opinion. It is adequately documented.Like these?Platform SupportMongoDB 3.6 Community Edition supports the following 64-bit versions of Red Hat Enterprise Linux (RHEL), CentOS Linux, and Oracle Linux [1] on x86_64 architecture:Or this\n\nimage835×468 36.2 KB\n",
"username": "chris"
},
{
"code": "",
"text": "Thanks Chris. Are there any platform recommendations to tun on Centos 8.",
"username": "Raj_Kumar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Wanted to check if anyone is running mongo 3.6x on centos 8 in their application | 2021-03-29T16:20:56.836Z | Wanted to check if anyone is running mongo 3.6x on centos 8 in their application | 1,828 |
null | [] | [
{
"code": "",
"text": "I’m using Atlas MongoDB for a university NodeJS - HTML project. A few hours ago I was working on it and could access all my collections via get and post requests. Suddenly and unexpectedly, without changing anything in my code at all, not in my server, not in my client page, not in my schemas and nothing on my db, I can’t access two specific collections. Programmatically speaking, as I said, everything is correct in my code and I had 100% access to my data a few hours ago. I tried many things like restarting my server several times, using another browser to send the get requests, chat with support, checking my db status. I have access through compass and shell. Anyone has any idea what that could be and how can I fix it. Let me know what info you need to help me. Using 4.4 mongo version, NodeJS as server and html page for my client. I have model schemas for every collection I have and a simple api in my server to access my data through get and post requests.\nThanks",
"username": "Vasilis_Aronis"
},
{
"code": "",
"text": "Update: My collection contains around 100k documents, total size 78mb. I realised if I let my get request running it will bring me back the data after around 25-30minutes… That’s insane, last night it took only 1minute max. I’m using my one free cluster M0 Tier. Why it became so extremely slow in just a few hours without changing anything? I didn’t add any new data or change anything in the db or my source code. Any ideas?",
"username": "Vasilis_Aronis"
},
{
"code": "",
"text": "Hi Vasilis,I’m really sorry that this happened to you. I suspect that what happened is you likely ran into our M0 Free Tier throughput limitation and were throttled as a result.We will have the team explore how to more gracefully handle this situation in future. This is a difficult problem to solve: we unfortunately have to have limits on what we can enable in our free sandboxes so that everyone can enjoy them without being disrupted by neighbors. On the flip side we like to give people the ability to burst… but maybe that’s a mistake since it leads to this seesawing effect.-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "It’s fine! Yes, and during my requests there were many connections on the same server and in addition, the low performance of the M0 tier made all this big time delay. The best solution I found was to make a local server on my machine using community server package and it covers my needs for now. I don’t have trouble using Atlas on that tier, some times has no delay at all. Thanks for responding! Also your support team was really helpful!",
"username": "Vasilis_Aronis"
}
] | Suddenly can't access some collections | 2021-03-25T06:05:01.713Z | Suddenly can’t access some collections | 5,948 |
null | [
"atlas-device-sync",
"react-native"
] | [
{
"code": "const config = {\n schema: [Task.schema],\n sync: {\n user: user,\n partitionValue: `true`\n }\n };\n console.log(config,'config');\n Realm.open(config).then((projectRealm) => {\n realmRef.current = projectRealm;\n const syncTasks = projectRealm.objects(\"work_plan\");\n console.log(syncTasks,'syncTasks'); \n let sortedTasks = syncTasks.sorted(\"created_on\");\n console.log(sortedTasks, 'sortedTasks');\n setTasks([...sortedTasks]);\n sortedTasks.addListener(() => {\n setTasks([...sortedTasks]);\n });\n });\n",
"text": "See my code, i mentioned path in config but it’s not woked\nThank You",
"username": "Rohit_Dey"
},
{
"code": "",
"text": "Hi @Rohit_Dey, welcome to the community!Could you please share some more details on what you’re expecting to happen and what’s actually happening?",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Yes, thanks for the reply.\nAfter sync from realm app, all data was listed\nBut i expecting that after sync complete all data will be visible in offline mode.\nAfter turn of the internet the sync data removed and not showed.\nThat’s the issue i faced.Thank You",
"username": "Rohit_Dey"
},
{
"code": "",
"text": "Is this React Native?",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Yes, it is react native",
"username": "Rohit_Dey"
}
] | After Sync from realm app, the data not stored in local properly | 2021-03-31T18:43:26.928Z | After Sync from realm app, the data not stored in local properly | 1,810 |
null | [
"aggregation",
"queries",
"python"
] | [
{
"code": "jsonjson{\n\n \"1\": {\n \"mongodb\":\"mydb1.mongodbtime.find({\\n \\\"timestamp1\\\": {\\\"$gte\\\": datetime.strptime(\\\"2010-01-01 00:05:00\\\", \\\"%Y-%m-%d %H:%M:%S\\\"),\\n \\\"$lte\\\": datetime.strptime(\\\"2015-01-02 00:05:00\\\", \\\"%Y-%m-%d %H:%M:%S\\\")}},\\n {\\\"id13\\\":1}),\",\n \"mongodb1index\":\"mydb1.mongodbindextimestamp1.find({\\n \\\"timestamp1\\\": {\\\"$gte\\\": datetime.strptime(\\\"2010-01-01 00:05:00\\\", \\\"%Y-%m-%d %H:%M:%S\\\"),\\n \\\"$lte\\\": datetime.strptime(\\\"2015-01-02 00:05:00\\\", \\\"%Y-%m-%d %H:%M:%S\\\")}},\\n {\\\"id13\\\":1}),\"\n },\n \"2\": {\n \"mongodb\":\"mydb1.mongodbtime.find({\\n \\\"timestamp1\\\": {\\\"$gte\\\": datetime.strptime(\\\"2010-01-01 00:05:00\\\", \\\"%Y-%m-%d %H:%M:%S\\\"),\\n \\\"$lte\\\": datetime.strptime(\\\"2015-01-02 00:05:00\\\", \\\"%Y-%m-%d %H:%M:%S\\\")}},\\n {\\\"id13\\\":1}),\",\n \"mongodb1index\":\"mydb1.mongodbindextimestamp1.find({\\n \\\"timestamp1\\\": {\\\"$gte\\\": datetime.strptime(\\\"2010-01-01 00:05:00\\\", \\\"%Y-%m-%d %H:%M:%S\\\"),\\n \\\"$lte\\\": datetime.strptime(\\\"2015-01-02 00:05:00\\\", \\\"%Y-%m-%d %H:%M:%S\\\")}},\\n {\\\"id13\\\":1}),\n\n }\n\n}\n\ncollectionsmongodbtimemongodbindextimestamp1mongodbtimequerymydb1 = myclient[\"mongodbtime\"]\n\n\nwith open(\"queriesdb.json\",'r') as fp:\n queries = json.load(fp)\n db = {\"mongodb\": \"mongodbtime\", \"mongodb1index\": \"mongodbtime\"}\n for num_query in queries.keys():\n query = queries[\"1\"]\n print(query)\n for db_name in db:\n print(db_name)\n run(query[db_name])\ndef run(query):\n\n for j in range(0, 1):\n \n start_time = time.time()\n cursor = query\n for x in cursor:\n pprint(x)\n\n # capture end time\n end_time = time.time()\n # calculate elapsed time\n elapsed_time = end_time - start_time\n times.append(elapsed_time)\n #elapsed_time_milliSeconds = elapsed_time * 1000\n #print(\"code elapsed time in milliseconds is \", elapsed_time_milliSeconds)\n finalmeasurments(times)\n\nprint(cursor)printqueryquery",
"text": "I have a json file with around 50 queries.\nThe json file look like this:I have two collections one named mongodbtime and one called mongodbindextimestamp1 in the database mongodbtime.\nThe code i have used in python for passing the query and execute it look like this:I passed it like a string and obviously when i print(cursor) it just print the query Should i use another form of file?\nAny idea on how i should execute my query?",
"username": "harris"
},
{
"code": "eval()cursor = querycursor = eval(query)eval",
"text": "Hi @harris,This is really a Python question, not a MongoDB question - but I can help!What you’re trying to achieve here is to load snippets of code as strings from a JSON file, and then execute them as Python code. You can do this with Python’s eval() function. Please read the warning below.Replacing your cursor = query with cursor = eval(query) should work, I think.Warning: Make sure the data you’re loading in via JSON is safe - i.e., created and managed by you. Using eval on strings that are provided by other people or systems is a huge security flaw - it allows people to send you arbitrary code that will be run on your computer.",
"username": "Mark_Smith"
},
{
"code": "",
"text": "Thank you @Mark_Smith.I know its a python question and i am sorry for that,i can delete it after!Thank you so much again ,i couldn’t find any solution on my own .",
"username": "harris"
},
{
"code": "for num_query in queries.keys():\n query = queries[\"1\"]\nfor num_query in queries.keys():\n query = queries[num_query]\nfor num_query, query in queries.items():\n print(query)\nitems()(key, value)",
"text": "Just a note that I think this should read:… but could be further simplified to:items() returns an iterator over (key, value) pairs in a dict.",
"username": "Mark_Smith"
},
{
"code": "",
"text": "It’s not a problem to ask Python questions here! Please don’t delete the question - I just wanted to be clear the problem I was solving ",
"username": "Mark_Smith"
},
{
"code": "mongodbpostgresqlevalevalpostgresqleval",
"text": "How are you @Mark_Smith?i have one more question to ask if i may.I am using these test for benchmarking purposes.Specifically mongodb vs postgresql so execution time is very important for me.Do you think that may eval adds more execution time?I dont use eval in my postgresql tests so i am wondering if eval is the right choice.",
"username": "harris"
},
{
"code": "evaldef benchmark1(coll):\n coll.find({ ... })\n\nBENCHMARKS = {\n \"1\": {\n \"mongodb\": benchmark1\n }\n}\neval",
"text": "If you’re concerned about the parsing overhead eval, you have two options that come to mind:1: You could store your data structure containing your benchmarks in Python code - maybe in another module that you could import. Functions can be stored in dicts in Python, so you could define something like:There would be a minor overhead with the extra function call, but I would expect it to be trivial compared to everything else.2: You could use Python’s compile function outside of your loop to precompile the Python expression you already have into a code object. You’d still run the resulting code object with eval, but because it’s already been parsed, it should be more-or-less equivalent to just calling the code directly.My personal opinion: If I was writing this from scratch, I’d probably go with option 1. Because you’re nearly there already, I’d probably go with option 2.I hope this helps! Benchmarks can be tricky.",
"username": "Mark_Smith"
},
{
"code": "mydb1 = myclient[\"mongodbtime\"]\n\n\nwith open(\"queriesdb.json\",'r') as fp:\n queries = json.load(fp)\n db = {\"mongodb\": \"mongodbtime\", \"mongodb1index\": \"mongodbtime\"}\n for num_query in queries.keys():\n query = queries[num_query]\n print(query)\n for db_name in db:\n print(db_name)\n run(query[db_name])\ndef run(query):\n\n for j in range(0, 10):\n code_obj = compile(query, 'queriesdb.json', 'eval')\n\n start_time = time.time()\n cursor = eval(code_obj)\n # capture end time\n end_time = time.time()\n # calculate elapsed time\n elapsed_time = end_time - start_time\n times.append(elapsed_time)\n #elapsed_time_milliSeconds = elapsed_time * 1000\n #print(\"code elapsed time in milliseconds is \", elapsed_time_milliSeconds)\n finalmeasurments(times)\nmydb1.mongodbtime.aggregate(\n ^\nIndentationError: unexpected indent\n",
"text": "Thanks for your reply @Mark_Smith.In the option 2 do you mean something like that?It doesnt seems to work.\nThe output is :",
"username": "harris"
},
{
"code": "for j in ...mode='eval'",
"text": "That is almost what I meant. You’ve got your call to compile inside your for j in ... loop, which means you’re compiling the code 10 times - which is a waste of time. If you move the compile call 2 lines up, then you’ll compile once, and then eval the code object 10 times, which is more efficient.Check the documentation for compile - I think you need to pass in the argument mode='eval' for your use-case.",
"username": "Mark_Smith"
},
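Putting both suggestions together, a sketch of the corrected run() body (assuming the times list, finalmeasurments(), and the imports from the earlier snippets):

```python
def run(query):
    # Compile once, outside the loop. strip() removes leading whitespace/newlines
    # in the stored string, a likely cause of the IndentationError shown above.
    code_obj = compile(query.strip(), "queriesdb.json", mode="eval")
    for j in range(10):
        start_time = time.time()
        cursor = eval(code_obj)  # run the precompiled query expression
        list(cursor)             # drain the cursor so the server work is measured
        times.append(time.time() - start_time)
    finalmeasurments(times)
```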
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Passing a mongodb query from json file to execute in python | 2021-03-30T09:11:55.830Z | Passing a mongodb query from json file to execute in python | 6,492 |
null | [
"data-modeling",
"swift",
"atlas-device-sync",
"kotlin",
"developer-hub"
] | [
{
"code": "",
"text": "I’ve just published an article – Realm Data and Partitioning Strategy Behind the WildAid O-FISH Mobile Apps – that details the data architecture, schema, and partitioning strategy we used. If you’re developing a mobile app with Realm, this post will help you design and implement your data architecture.MongoDB partnered with the WildAid Marine Protection Program to create these mobile apps for officers to use while out at sea patrolling Marine Protected Areas (MPAs) worldwide. We implemented apps for iOS, Android, and web, where they all share the same Realm back end, schema, and sync strategy.",
"username": "Andrew_Morgan"
},
{
"code": "class User: EmbeddedObject, ObservableObject {\n @objc dynamic var name: Name? = Name()\n @objc dynamic var email = \"\"\n}\nDuty ChangeReport\"%%user.custom_data.agency.name\": \"%%partition\"",
"text": "That is a super article with a lot of great info! Thanks for posting it.A couple of questions if you don’t mind:I see the User object is an embedded objectWhich a great, re-usable class but the object does not exist on it’s own - only within a higher level managed object, and each User is a discreet set of data. But it appears it’s being used in the same way in a couple of classes; both in the Duty Change class as well as within the Report class. Was there not a need to tie those other parent objects back to a single user?Also, how does the User embedded object tie back to the Realm User which is used in security - maybe I overlooked that in the article:\"%%user.custom_data.agency.name\": \"%%partition\"",
"username": "Jay"
},
{
"code": "",
"text": "Hi @Jay, thanks for the comments.You’re correct that the same “user” does appear in multiple objects and documents with some duplication of data. The user is always uniquely identified by their username.Another option would have been to use realm relationships so that multiple objects could refer to the same user object. The reason why we opted to duplicate the data is that each boarding report is a historical (possibly legal) record and represents the state of the world when it is created. If for example, an officer changed their name at some later time, we want the existing boarding reports to continue to show their name as it was when the report was created.",
"username": "Andrew_Morgan"
},
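A hedged sketch of that snapshot pattern at report-creation time (the class and property names here are assumptions for illustration, not taken from the article):

```swift
// Copy the current user's details into the new report instead of linking to a
// live user object, so later profile edits do not rewrite historical reports.
let report = Report()                  // hypothetical report class
let reportUser = User()                // EmbeddedObject owned by this report
reportUser.email = currentUserEmail    // values captured at creation time
reportUser.name?.first = currentFirstName
reportUser.name?.last = currentLastName
report.reportingOfficer = reportUser   // hypothetical property name
```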
{
"code": "",
"text": "Thanks for that information!Back to the embedded User data… What inspired the design decision to create a User embedded object instead of just adding those properties to the main object?The embedded aspect doesn’t seem to add any functionality - just a bit more typing when accessing the name and email propertiesobject.user.name vs object.name",
"username": "Jay"
},
{
"code": "",
"text": "It was just to add a bit of extra structure. At one point, I thought that there might be some additional data associated with the user. I agree that flattening the enclosing classes wouldn’t have much of a down side.As always, schema inertia plays a part. Even though schema changes are infinitely simpler with MongoDB and Realm (vs. relational databases) there’s still some coordination required when you have multiple developers working on iOS, Android, web, Atlas, Charts and the backend Realm app.",
"username": "Andrew_Morgan"
}
] | New article on the Realm data model for WildAid's O-FISH apps | 2021-03-31T09:32:06.038Z | New article on the Realm data model for WildAid’s O-FISH apps | 4,264 |
null | [
"python"
] | [
{
"code": "self.db = pymongo.MongoClient(host=defs.CONNECT_STRING,\n ssl=True,\n ssl_certfile=defs.CERTFILE_LOCATION,\n ssl_cert_reqs=ssl.CERT_REQUIRED,\n ssl_ca_certs=defs.CA_CERTS_LOCATION)\nself.valid_connection = self.db.pcap_api.authenticate(defs.CONNECTION_AUTHENTICATION_STRING,\n mechanism='MONGODB-X509')\n",
"text": "Hello, I wonder if someone could answer this question.On a Centos 7 server, we have a process runningroot 1779 1 1 2018 ? 12-02:50:11 /bin/mongod -f /etc/mongodb/mongoc.confThis is version v3.4.2Also on same server is tool that calls following python code:Is it correct to state that the code, when making a pymongo.MongoClient “connect”, is making a connection locally on the same server to the mongo process seen above?",
"username": "charles_dillard"
},
{
"code": "",
"text": "Also, may be worth pasting a snippet of the conf file:net:\nport: 27018\nbindIp: 1.2.3.4 <<< ip of server being discussed here\nssl:\nmode: requireSSL\nPEMKeyFile: /etc/mongodb/CRPdgMUSsalt01.pem\nclusterFile: /etc/mongodb/cluster.pem\nCAFile: /etc/mongodb/CACerts.crt",
"username": "charles_dillard"
},
{
"code": "defs.CONNECT_STRINGhosthostMongoClient",
"text": "Hi @charles_dillardYou can’t make that assumption. It depends on the contents of defs.CONNECT_STRING in the code, which can specify everything from the server IP/hostname, port, default database and other connection details.You can find more details on this parameter here. To ensure a connection to localhost, you’d either want to ensure the host argument specified that explicitly, or omit a value for host entirely, which will tell MongoClient to default to localhost on the default port (which is 27017).Hope this helps!Mark",
"username": "Mark_Smith"
},
{
"code": "",
"text": "from pymongo import MongoClient\nc = MongoClient()\nc.test_databaseon server in question as a test? This is a production server, so need to be careful.CONNECT_STRING = os.environ.get(‘MONGO_URL’)When I run os.environ.get for the MONGO_URL I get “none”. Also, MONGO_URL doesn’t show up in env variables. So same question: Is this likely connecting to localhost?Thanks for your help. Trying to debug old pcap retrieval code on production system.",
"username": "charles_dillard"
},
{
"code": "",
"text": "Hi @charles_dillardport: 27018In terms of running PyMongo MongoClient, it’ll default to 27017 as the ‘standard’ port and based on your configuration file above which indicates the server is running on 27018, you’ll need to amend the port value at the very least. I’m not sure which version of PyMongo is being used so you may need to look for an older version of this (the current version’s) page mongo_client – Tools for connecting to MongoDB — PyMongo 4.3.3 documentationYou can use should be able to use ‘pip list’ to determine what the local/system version of PyMongo is being run and then look to that version of the Python driver in the readthedocs page.Hope this helps!\nEoin",
"username": "Eoin_Brazil"
},
{
"code": "MongoClientpingfrom pymongo import MongoClient\nc = MongoClient()\nc.admin.command('ping')\n{u'ok': 1.0}localfind_one()NoneMongoClient",
"text": "1: MongoClient instantiation and getting a collection are lazy, so the code you’ve listed will pass even if mongod isn’t there to connect to. You’ll need to do some kind of operation to check - I recommend a ping, like so:This should return something like {u'ok': 1.0}, indicating that the server could be successfully contacted.You could also try querying the local database, which may be able to tell you something useful about the host your code is connecting to: https://docs.mongodb.com/manual/reference/local-database/ (try printing out the result of calling find_one() with no parameters, which should contain a “hostname” field.2: If the host is None then MongoClient will attempt to connect to localhost on port 27017. Given that your mongod is configured to listen on port 27018, I would expect your client code to fail.",
"username": "Mark_Smith"
},
{
"code": "",
"text": "This is version v3.4.23.4 EoL Jan 2020\n3.6 EoL April 2021",
"username": "chris"
},
{
"code": "",
"text": "Ok, will check it out. Thanks everyone for your help on this issue.",
"username": "charles_dillard"
},
{
"code": "",
"text": "This is version v3.4.2Hi @charles_dillard,As an addendum to @chris’ comment on MongoDB 3.4 reaching End of Life in Jan, 2020 (no further updates including security fixes), I would also strongly recommend upgrading to the final MongoDB 3.4.24 release while you plan for a major version upgrade to a supported release series (currently 4.0 or newer).Minor releases only include bug fixes and non-backward breaking changes, so upgrading within the 3.4.x release series is not a high risk decision. There have been 22 minor releases and many improvements in the three years following MongoDB 3.4.2 (Feb 1, 2017). I also think it is much more likely that you will encounter a known issue in a low numbered release like 3.4.2.Regards,\nStennie",
"username": "Stennie_X"
}
] | Is python code connecting locally to mongoc | 2021-03-30T18:52:12.630Z | Is python code connecting locally to mongoc | 2,664 |
null | [] | [
{
"code": "",
"text": "Hi,I’m creating a port of MongoDB4.9 on FreeBSD. It builds fine on ARM64. But it does not run on my RPI4 due to an illegal instruction.I found this issue and commit to the mongodb repo.Why is the -march raised for arm64? And is there really an issue if it stays on the older default?Would love to keep running MongoDB on my RPI.Regards,\nRonald.",
"username": "R_K"
},
{
"code": "mastermaster-march=armv8armv8.2-a-marchscons CCFLAGS=\"-march=armv8a\" --use-hardware-crc32=off ...armv8aarmv8.1-a--use-hardware-crc32=offC{,C,XX}FLAGS",
"text": "Hi -I’m creating a port of MongoDB4.9 on FreeBSD.Just an FYI but MongoDB 4.9 isn’t a stable release or a branch that will lead to a stable release. If you are just working on the port that is maybe not an issue, but you might do better to just track the master branch rather than the v4.9 branch. The master branch will become MongoDB 5.0.It builds fine on ARM64. But it does not run on my RPI4 due to an illegal instruction .The RPI4 isn’t a platform for which we produce builds or on which we test, so the level of support is definitely “best effort”. However, I’m reasonably confident we will be able to get it working for you.I found this issue and commit to the mongodb repo.Why is the -march raised for arm64?We have opted into an explicit setting for -march= because it brings the defaults for a build from source into alignment with how we produce the builds that we actually release. We have pushed the required ISA from armv8 to armv8.2-a so we can assume the presence of the LSE intrinsics and hardware CRC support.And is there really an issue if it stays on the older default?The default -march setting should be suppressed if you provide a different one on the command line for your build. So if you build with scons CCFLAGS=\"-march=armv8a\" --use-hardware-crc32=off ... you should get a working build that targets armv8a. If the rPI4 can handle armv8.1-a then you can request that instead and I think probably also drop the --use-hardware-crc32=off.Would love to keep running MongoDB on my RPI.It is definitely our intention that you still be able to, which is why we included the logic to suppress the application of the new default when overridden by any of C{,C,XX}FLAGS on the command line, but we want developers building for ARM to get the best performance by default when running on the type of hardware we expect to see used to back production database instances.Please let me know if you have any other questions or comments and I’ll be happy to help further.Thanks,\nAndrew",
"username": "Andrew_Morrow"
},
{
"code": "",
"text": "Andrew, thank you for your answer. I was wondering about the unusual version numbering already. But as a RC0 was tagged on github I presumed something was cooking here.\nIf the port becomes committed I will mark it as WIP.I’m currently trying a build with different -march configured.Regards,\nRonald",
"username": "R_K"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.9 on RPI4 does not run | 2021-04-01T09:44:06.092Z | MongoDB 4.9 on RPI4 does not run | 3,554 |
null | [] | [
{
"code": "auto.register.schemasfalse",
"text": "Hi,I tried to source data from mongodb and in configuration file I setted auto.register.schemas to false but the connector always created the schema. The schema I setted contains namespace fields but one generated by connector does not contain it, so that why I want to disable schema auto creation.Thank you for you help.",
"username": "Fabien_OUEDRAOGO"
},
{
"code": "auto.register.schemasCaused by: org.apache.avro.UnresolvedUnionException: Not in union [\"null\",{\"type\":\"record\",\"name\":\"ChangeEventStreamId\",\"namespace\":\"com.test.avro.model.season\",\"fields\":[{\"name\":\"_id\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"_data\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"copyingData\",\"type\":[\"null\",\"boolean\"],\"default\":null}]}]: {\"_id\": \"{\\\"$oid\\\": \\\"605b1f7148957c5fe341e471\\\"}\", \"_data\": null, \"copyingData\": true}output.value.schema",
"text": "Hi,I figured out that for auto.register.schemas, it was misconfiguration. The problem now is that I get this error Caused by: org.apache.avro.UnresolvedUnionException: Not in union [\"null\",{\"type\":\"record\",\"name\":\"ChangeEventStreamId\",\"namespace\":\"com.test.avro.model.season\",\"fields\":[{\"name\":\"_id\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"_data\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"copyingData\",\"type\":[\"null\",\"boolean\"],\"default\":null}]}]: {\"_id\": \"{\\\"$oid\\\": \\\"605b1f7148957c5fe341e471\\\"}\", \"_data\": null, \"copyingData\": true} when I have the namespace in the output.value.schema. Any idea?Best regards",
"username": "Fabien_OUEDRAOGO"
}
] | How to avoid source connector to create the schema specified with "output.schema.value" | 2021-03-30T15:25:45.890Z | How to avoid source connector to create the schema specified with “output.schema.value” | 2,260 |
null | [
"database-tools",
"backup"
] | [
{
"code": "",
"text": "Z:\\pbm_backup>mongorestore --port 27018 --oplogReplay oplogDump\n2021-03-31T08:52:49.238+0000 preparing collections to restore from\n2021-03-31T08:52:49.284+0000 replaying oplog\n2021-03-31T08:52:49.300+0000 skipping applying the config.system.sessions namespace in applyOps\n2021-03-31T08:52:49.301+0000 skipping applying the config.system.sessions namespace in applyOps\n2021-03-31T08:52:49.302+0000 skipping applying the config.system.sessions namespace in applyOps\n2021-03-31T08:52:49.302+0000 skipping applying the config.system.sessions namespace in applyOps\n2021-03-31T08:52:49.303+0000 Failed: restore error: error applying oplog: createIndex error: cannot transform type primitive.D to a BSON Document: Key partialFilterExpression of inlined map conflicts with a struct field name\n2021-03-31T08:52:49.304+0000 0 document(s) restored successfully. 0 document(s) failed to restore.Z:\\pbm_backup>",
"username": "Sameer_Kattel"
},
{
"code": "mongorestore --version",
"text": "Hi @Sameer_Kattel, what’s the output of mongorestore --version?",
"username": "Tim_Fogarty"
},
{
"code": "",
"text": "mongorestore --versionHi @Tim_FogartyIt’s latestC:\\Users\\DevAdmin>mongorestore --version\nmongorestore version: 100.3.1\ngit version: 32632b931f9c41d8314b75ecc88e551b012b1e30\nGo version: go1.15.8\nos: windows\narch: amd64\ncompiler: gcC:\\Users\\DevAdmin>",
"username": "Sameer_Kattel"
},
{
"code": "createIndexescommitIndexespartialFilterExpression",
"text": "Thanks for the info. I was able to reproduce this bug. Mongorestore cannot properly parse createIndexes or commitIndexes oplog entries if the index being created has a partialFilterExpression. I have opened the Jira ticket TOOLS-2833 to track this bug. I just made the fix and it should be available in the next release of Database Tools.If the version of MongoDB you dumped from and the version you are restoring to are both less than 4.4, then version 100.0.2 of the tools should work.If you are using version 4.4 of the server, you will have to wait for the next release or use a different dump if possible.If this isn’t possible you can build a version of the tools from the git branch which contains the fix. People often have difficulties building the tools on Windows, so that should only be a last resort if there’s no other workaround.",
"username": "Tim_Fogarty"
},
{
"code": "",
"text": "Thanks for the suggestions.",
"username": "Sameer_Kattel"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongorestore failing while trying to restore oplogdump | 2021-03-31T09:10:01.594Z | Mongorestore failing while trying to restore oplogdump | 4,075 |
null | [] | [
{
"code": "",
"text": "It’s Trent here from ActivePlace, what will be one day the worlds largest social marketplace in the health, wellness and fitness space. We chose MongoDB for a number of reasons, one of the big ones being scale as we grow and gather more and more data.Anyway, good to meet you all and when you get a chance checkout us out and let me know if you have any feedback on what we’re building.@Manuel_Meyer",
"username": "Trent"
},
{
"code": "",
"text": "Hello @Trent, Welcome to the MongoDB Community forum!Great to know you are a user of MongoDB. Yes, scaling with MongoDB is one of its better features. I come from a RDBMS/SQL (the other database) background and find MongoDB quite different - it is very flexible and easy to work with (as a developer). Nowadays I am honing my aggregation query skills.You will find some useful resources here - at the top of the page there is a menu with documentation, blog posts, podcasts, etc., and of course you can always post a question and even post an answer a question. In addition you will find folks from all kinds of software background writing here and sometimes they have strange given badges like “database rebel” (I have one of those too ). Hope you find all this useful in someway!",
"username": "Prasad_Saya"
}
] | It's Trent here from ActivePlace.com | 2021-03-31T20:25:10.567Z | It’s Trent here from ActivePlace.com | 2,960 |
null | [
"installation"
] | [
{
"code": "",
"text": "I was starting the mongod process for the first time.\nbut, it wasn’t not.\nThere are any files at data directory.This is error from the command line.2021-03-31T10:01:57.685+0900 I STORAGE [main] Engine custom option: log=(archive=true,enabled=true,file_max=300MB,path=/home/test/secondary/log)\nabout to fork child process, waiting until server is ready for connections.\nforked process: 7240\nERROR: child process failed, exited with error number 51\nTo see additional information in this output, start without the “–fork” option.This is error from the log file.2021-03-31T10:01:57.685+0900 I CONTROL [main] ***** SERVER RESTARTED *****\n2021-03-31T10:01:57.688+0900 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’\n2021-03-31T10:01:57.690+0900 I CONTROL [initandlisten] MongoDB starting : pid=7240 port=27017 dbpath=/home/test/secondary/data 64-bit host=localhost.localdomain\n2021-03-31T10:01:57.690+0900 I CONTROL [initandlisten] db version v4.0.23\n2021-03-31T10:01:57.690+0900 I CONTROL [initandlisten] git version: 07c6611b38d2aacbdb1846b688db70b3273170fb\n2021-03-31T10:01:57.690+0900 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013\n2021-03-31T10:01:57.690+0900 I CONTROL [initandlisten] allocator: tcmalloc\n2021-03-31T10:01:57.690+0900 I CONTROL [initandlisten] modules: none\n2021-03-31T10:01:57.690+0900 I CONTROL [initandlisten] build environment:\n2021-03-31T10:01:57.690+0900 I CONTROL [initandlisten] distmod: rhel70\n2021-03-31T10:01:57.690+0900 I CONTROL [initandlisten] distarch: x86_64\n2021-03-31T10:01:57.690+0900 I CONTROL [initandlisten] target_arch: x86_64\n2021-03-31T10:01:57.690+0900 I CONTROL [initandlisten] options: { config: “/home/test/secondary/mongod.conf”, net: { bindIp: “0.0.0.0”, port: 27017 }, processManagement: { fork: true, pidFilePath: “/home/test/secondary/mongod.pid” }, replication: { replSetName: “replica” }, setParameter: { replWriterThreadCount: “2” }, storage: { dbPath: “/home/test/secondary/data”, directoryPerDB: true, journal: { enabled: true }, syncPeriodSecs: 10.0, wiredTiger: { engineConfig: { configString: “log=(archive=true,enabled=true,file_max=300MB,path=/home/test/secondary/log)” } } }, systemLog: { destination: “file”, logAppend: true, path: “/home/test/secondary/log/mongod.log” } }\n2021-03-31T10:01:57.690+0900 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=15430M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),log=(archive=true,enabled=true,file_max=300MB,path=/home/test/secondary/log)\n2021-03-31T10:01:58.841+0900 I STORAGE [initandlisten] WiredTiger message [1617152518:841074][7240:0x7fc076910b80], txn-recover: Main recovery loop: starting at 2/41316352 to 6/256\n2021-03-31T10:01:58.841+0900 I STORAGE [initandlisten] WiredTiger message [1617152518:841326][7240:0x7fc076910b80], txn-recover: Recovering log 2 through 6\n2021-03-31T10:01:58.841+0900 E STORAGE [initandlisten] WiredTiger error (2) [1617152518:841692][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __posix_open_file, 672: /home/test/secondary/data/sizeStorer.wt: handle-open: open: No such file or directory Raw: [1617152518:841692][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __posix_open_file, 672: 
/home/test/secondary/data/sizeStorer.wt: handle-open: open: No such file or directory\n2021-03-31T10:01:58.841+0900 E STORAGE [initandlisten] WiredTiger error (2) [1617152518:841716][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __txn_op_apply, 287: operation apply failed during recovery: operation type 4 at LSN 2/41317632: No such file or directory Raw: [1617152518:841716][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __txn_op_apply, 287: operation apply failed during recovery: operation type 4 at LSN 2/41317632: No such file or directory\n2021-03-31T10:01:58.841+0900 E STORAGE [initandlisten] WiredTiger error (2) [1617152518:841737][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __wt_txn_recover, 706: Recovery failed: No such file or directory Raw: [1617152518:841737][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __wt_txn_recover, 706: Recovery failed: No such file or directory\n2021-03-31T10:01:58.842+0900 E STORAGE [initandlisten] WiredTiger error (0) [1617152518:842098][7240:0x7fc076910b80], connection: __wt_cache_destroy, 350: cache server: exiting with 2 pages in memory and 0 pages evicted Raw: [1617152518:842098][7240:0x7fc076910b80], connection: __wt_cache_destroy, 350: cache server: exiting with 2 pages in memory and 0 pages evicted\n2021-03-31T10:01:58.842+0900 E STORAGE [initandlisten] WiredTiger error (0) [1617152518:842118][7240:0x7fc076910b80], connection: __wt_cache_destroy, 358: cache server: exiting with 76358 bytes in memory Raw: [1617152518:842118][7240:0x7fc076910b80], connection: __wt_cache_destroy, 358: cache server: exiting with 76358 bytes in memory\n2021-03-31T10:01:58.842+0900 E STORAGE [initandlisten] WiredTiger error (0) [1617152518:842125][7240:0x7fc076910b80], connection: __wt_cache_destroy, 364: cache server: exiting with 76189 bytes dirty and 1 pages dirty Raw: [1617152518:842125][7240:0x7fc076910b80], connection: __wt_cache_destroy, 364: cache server: exiting with 76189 bytes dirty and 1 pages dirty\n2021-03-31T10:01:59.611+0900 I STORAGE [initandlisten] WiredTiger message [1617152519:611563][7240:0x7fc076910b80], txn-recover: Main recovery loop: starting at 2/41316352 to 7/256\n2021-03-31T10:01:59.611+0900 I STORAGE [initandlisten] WiredTiger message [1617152519:611833][7240:0x7fc076910b80], txn-recover: Recovering log 2 through 7\n2021-03-31T10:01:59.612+0900 E STORAGE [initandlisten] WiredTiger error (2) [1617152519:612249][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __posix_open_file, 672: /home/test/secondary/data/sizeStorer.wt: handle-open: open: No such file or directory Raw: [1617152519:612249][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __posix_open_file, 672: /home/test/secondary/data/sizeStorer.wt: handle-open: open: No such file or directory\n2021-03-31T10:01:59.612+0900 E STORAGE [initandlisten] WiredTiger error (2) [1617152519:612271][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __txn_op_apply, 287: operation apply failed during recovery: operation type 4 at LSN 2/41317632: No such file or directory Raw: [1617152519:612271][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __txn_op_apply, 287: operation apply failed during recovery: operation type 4 at LSN 2/41317632: No such file or directory\n2021-03-31T10:01:59.612+0900 E STORAGE [initandlisten] WiredTiger error (2) [1617152519:612286][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __wt_txn_recover, 706: Recovery failed: No such file or directory Raw: [1617152519:612286][7240:0x7fc076910b80], 
file:sizeStorer.wt, txn-recover: __wt_txn_recover, 706: Recovery failed: No such file or directory\n2021-03-31T10:01:59.612+0900 E STORAGE [initandlisten] WiredTiger error (0) [1617152519:612641][7240:0x7fc076910b80], connection: __wt_cache_destroy, 350: cache server: exiting with 2 pages in memory and 0 pages evicted Raw: [1617152519:612641][7240:0x7fc076910b80], connection: __wt_cache_destroy, 350: cache server: exiting with 2 pages in memory and 0 pages evicted\n2021-03-31T10:01:59.612+0900 E STORAGE [initandlisten] WiredTiger error (0) [1617152519:612658][7240:0x7fc076910b80], connection: __wt_cache_destroy, 358: cache server: exiting with 76358 bytes in memory Raw: [1617152519:612658][7240:0x7fc076910b80], connection: __wt_cache_destroy, 358: cache server: exiting with 76358 bytes in memory\n2021-03-31T10:01:59.612+0900 E STORAGE [initandlisten] WiredTiger error (0) [1617152519:612665][7240:0x7fc076910b80], connection: __wt_cache_destroy, 364: cache server: exiting with 76189 bytes dirty and 1 pages dirty Raw: [1617152519:612665][7240:0x7fc076910b80], connection: __wt_cache_destroy, 364: cache server: exiting with 76189 bytes dirty and 1 pages dirty\n2021-03-31T10:02:00.472+0900 I STORAGE [initandlisten] WiredTiger message [1617152520:472949][7240:0x7fc076910b80], txn-recover: Main recovery loop: starting at 2/41316352 to 8/256\n2021-03-31T10:02:00.473+0900 I STORAGE [initandlisten] WiredTiger message [1617152520:473217][7240:0x7fc076910b80], txn-recover: Recovering log 2 through 8\n2021-03-31T10:02:00.473+0900 E STORAGE [initandlisten] WiredTiger error (2) [1617152520:473573][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __posix_open_file, 672: /home/test/secondary/data/sizeStorer.wt: handle-open: open: No such file or directory Raw: [1617152520:473573][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __posix_open_file, 672: /home/test/secondary/data/sizeStorer.wt: handle-open: open: No such file or directory\n2021-03-31T10:02:00.473+0900 E STORAGE [initandlisten] WiredTiger error (2) [1617152520:473594][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __txn_op_apply, 287: operation apply failed during recovery: operation type 4 at LSN 2/41317632: No such file or directory Raw: [1617152520:473594][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __txn_op_apply, 287: operation apply failed during recovery: operation type 4 at LSN 2/41317632: No such file or directory\n2021-03-31T10:02:00.473+0900 E STORAGE [initandlisten] WiredTiger error (2) [1617152520:473606][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __wt_txn_recover, 706: Recovery failed: No such file or directory Raw: [1617152520:473606][7240:0x7fc076910b80], file:sizeStorer.wt, txn-recover: __wt_txn_recover, 706: Recovery failed: No such file or directory\n2021-03-31T10:02:00.529+0900 E STORAGE [initandlisten] WiredTiger error (0) [1617152520:529544][7240:0x7fc076910b80], connection: __wt_cache_destroy, 350: cache server: exiting with 2 pages in memory and 0 pages evicted Raw: [1617152520:529544][7240:0x7fc076910b80], connection: __wt_cache_destroy, 350: cache server: exiting with 2 pages in memory and 0 pages evicted\n2021-03-31T10:02:00.529+0900 E STORAGE [initandlisten] WiredTiger error (0) [1617152520:529567][7240:0x7fc076910b80], connection: __wt_cache_destroy, 358: cache server: exiting with 76358 bytes in memory Raw: [1617152520:529567][7240:0x7fc076910b80], connection: __wt_cache_destroy, 358: cache server: exiting with 76358 bytes in memory\n2021-03-31T10:02:00.529+0900 E STORAGE 
[initandlisten] WiredTiger error (0) [1617152520:529576][7240:0x7fc076910b80], connection: __wt_cache_destroy, 364: cache server: exiting with 76189 bytes dirty and 1 pages dirty Raw: [1617152520:529576][7240:0x7fc076910b80], connection: __wt_cache_destroy, 364: cache server: exiting with 76189 bytes dirty and 1 pages dirty\n2021-03-31T10:02:00.530+0900 W STORAGE [initandlisten] Failed to start up WiredTiger under any compatibility version.\n2021-03-31T10:02:00.530+0900 F - [initandlisten] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 68\n2021-03-31T10:02:00.530+0900 F - [initandlisten] \\n\\n***aborting after fassert() failure\\n\\n\n2021-03-31T10:02:00.538+0900 F - [initandlisten] Got signal: 6 (Aborted).\n0x559457b712f1 0x559457b70509 0x559457b709ed 0x7fc0750965d0 0x7fc074cf0207 0x7fc074cf18f8 0x5594560c258c 0x5594561d0467 0x5594561a22e8 0x5594561a75a2 0x559456185240 0x5594568d314a 0x559456133af7 0x559456137675 0x5594560c4259 0x7fc074cdc3d5 0x559456131bbfThis is my mongodb configuration file.#. mongod.conf, Percona Server for MongoDB\n#. for documentation of all options, see:\n#. http://docs.mongodb.org/manual/reference/configuration-options/#. Where and how to store data.\nstorage:\ndbPath: /home/test/secondary/data\ndirectoryPerDB: true\nsyncPeriodSecs: 10\njournal:\nenabled: true\nwiredTiger:\nengineConfig:\n#. cacheSizeGB: 14\nconfigString: “log=(archive=true,enabled=true,file_max=300MB,path=/home/test/secondary/log)”#. Two options below can be used for wiredTiger and inMemory storage engines\nsetParameter:\nreplWriterThreadCount: 2\n#. wiredTigerConcurrentReadTransactions: 128\n#. wiredTigerConcurrentWriteTransactions: 128#. where to write logging data.\nsystemLog:\ndestination: file\nlogAppend: true\npath: /home/test/secondary/log/mongod.logprocessManagement:\nfork: true\npidFilePath: /home/test/secondary/mongod.pid#. network interfaces\nnet:\nport: 27017\nbindIp: 0.0.0.0#.security:\n#. authorization: enabled\n#. clusterAuthMode : keyFile\n#. keyFile: /home/test/secondary/mongodb.keyreplication:\nreplSetName: replica\noplogSizeMB: 1000※ I typed “#”. Then the font style was applied in the post, so I used “#.”",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "/home/test/secondary/datahad you checked if anything there in datadir /home/test/secondary/data ?If it’s first time, then delete all from this path and try to start",
"username": "ROHIT_KHURANA"
},
{
"code": "",
"text": "hi. @ROHIT_KHURANA.\nFirst of all, thank you for your reply.Unfortunately, the problem has not been resolved.",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Try a different dirpath and see if it works or not\nor\ndrop the directory and recreate instead deleting the contents of it",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "any change in error logs?",
"username": "ROHIT_KHURANA"
}
] | Mongod start error question | 2021-03-31T01:25:33.178Z | Mongod start error question | 3,451 |
null | [
"production",
"golang"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to announce the release of 1.5.1 of the MongoDB Go Driver.This release contains several bug fixes. Due to a bson related security issue, we recommend all users upgrade to this version of the driver. For more information please see the release notes.You can obtain the driver source from GitHub under the v1.5.1 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team",
"username": "Isabella_Siu"
},
{
"code": "",
"text": "it is by far the most difficult and complicated mongo library to use.\nIt is too prone to errors. Makes you have to write too much repeated code every time you want to do a simple query",
"username": "Hector_Oliveros"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Go Driver 1.5.1 Released | 2021-03-30T19:03:08.048Z | MongoDB Go Driver 1.5.1 Released | 2,265 |
[
"dot-net",
"beta"
] | [
{
"code": "estimatedDocumentCount()$collStatscount",
"text": "This is a beta release for the 2.13.0 version of the driver.The main new features in 2.13.0-beta1 include:The full list of JIRA issues that are currently scheduled to be resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.13.0%20ORDER%20BY%20key%20ASCThe list may change as we approach the release date.Documentation on the .NET driver can be found at:",
"username": "Boris_Dogadov"
},
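The truncated feature list above references estimatedDocumentCount(), $collStats and count, which presumably concern how the estimated count is computed server-side. As background, a minimal sketch of how the two count APIs are typically called from the .NET driver; the connection string and collection name are placeholders:

```csharp
using System;
using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://localhost:27017");
var collection = client.GetDatabase("test").GetCollection<BsonDocument>("items");

// Fast, metadata-based estimate; takes no filter.
long estimate = collection.EstimatedDocumentCount();

// Exact count for a filter; scans documents or index entries as needed.
long exact = collection.CountDocuments(Builders<BsonDocument>.Filter.Empty);

Console.WriteLine($"estimated: {estimate}, exact: {exact}");
```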
{
"code": "",
"text": "",
"username": "system"
}
] | .NET Driver 2.13.0-beta1 Released | 2021-03-31T22:20:41.005Z | .NET Driver 2.13.0-beta1 Released | 3,069 |
|
null | [
"atlas-device-sync",
"xamarin"
] | [
{
"code": "",
"text": "I have problem with simple sync configuration for Xamarin app. I guess I’m missed something but after many hours I still can’t find solution.I need simple collection all can read and add documents, but only owners can edit their documents.How to configure it?",
"username": "Radoslaw_Kubas"
},
{
"code": "",
"text": "That question is overly vague and the best we can do is point you to the docs. Can you clarify what is it that you tried and how it didn’t work? Did you try the quick start example?",
"username": "nirinchev"
},
{
"code": "",
"text": "Thanks for the quick reply and sorry for the little detailed question. Yes, I looked through these documents before but found no answer there (maybe I missed something).I’m trying to migrate app from Realm Cloud to MongoDB, everything works very nice, except permissions.It is little more complicated in real life but to explain simpler I have collection of 20 000 documents. Each document has different owner, everyone can read all documents, and everyone can add new document, but only owners can edit their documents.In Realm Cloud app I could manage it with object level permissions, but now permissions works different.First I tried to put all documents into one Partition and create “owner” and “non-owner” role with permissions, but it seems when Sync is enabled permissions are managed “on the synced cluster”.Then I tried to setup permissions there, but I can’t find a way how to configure it correctly. If I set different Partition for each user, users can see only own documents because when I openvar realm = Realm.GetInstance(new SyncConfiguration($\"{RealmApp.CurrentUser.Id}\", RealmApp.CurrentUser));only documents from one partition are synchronised, but if I put all data in one Partition , everyone have full access.Do I need to redesign everything and use for example functions to add/update documents or there is a way to set object level permissions?",
"username": "Radoslaw_Kubas"
},
{
"code": "",
"text": "@Radoslaw_Kubas There are no object level permissions with new Realm Sync - only partition/realm level permissions of read or read/write. To accomplish what you’re suggesting I’d have two realms - one for each user where they are the owner of the documents and one global realm - which would have a read-only copy of all documents. You could use Realm Triggers - https://docs.mongodb.com/realm/triggers/database-triggers/\nto replicate any changes an owner makes from their private-per-user realm to the global realm.I hope this helps",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward Thank you very much for the hint ",
"username": "Radoslaw_Kubas"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Problem with simple Sync config | 2021-03-31T19:34:05.530Z | Problem with simple Sync config | 4,021 |
null | [
"installation"
] | [
{
"code": "",
"text": "HI im working on a Windows 2019 VM on this machine I’ve installed MongoDB Community I opend the TCP Port 27017 and 27018 as well as the mongod.exe and mongos.exe but I’m still not able to connect to my MongoDB remotly.I added a admin user this “mechanisms” : [\n“SCRAM-SHA-1”,\n“SCRAM-SHA-256”\n]not sure if this is important.Im not 100% sure if i started MongoDB with Authentication but well…When im on my VM I can only connect to the IP 127.0.0.1:27017 even with name and passwort but I cant connect to the external IP even tho im on the VM, remotly nothing worked so far.Is there any way to check what is wrong?Thank you for your time.Kind regards Björn",
"username": "Bjorn_K"
},
{
"code": "",
"text": "Hi @Bjorn_Kmongod will only bind to 127.0.0.1 by default as a secure option as no authentication method is configured by default.Check out https://docs.mongodb.com/manual/administration/security-checklist/\nand\nhttps://docs.mongodb.com/manual/administration/production-notes/\nas well as\nhttps://docs.mongodb.com/manual/administration/production-checklist-operations/MongoDB Atlas is a good choice for DBaaS to remove this Administrative overhead. They do a really good job with it. (Says the person with self hosted)",
"username": "chris"
},
{
"code": "",
"text": "change bind ip from localhost to IP of your VMs. so mongodb will be accessible remotely.",
"username": "ROHIT_KHURANA"
},
{
"code": "",
"text": "Thanks that helped a lot, at least now I can connect as long as authentication is off, for some reason if I turn authentication on it cant find my user and autehntication failed even tho I use the same login from remote pc as I use in mongo on the VM.",
"username": "Bjorn_K"
},
{
"code": "",
"text": "Hope, you are creating one admin user before enabling Auth. Please follow all steps as per below link and share your input if it works or not.",
"username": "ROHIT_KHURANA"
},
{
"code": "",
"text": "You can create that first admin user with authentication enabled, as long as it is done via a connection on 127.0.0.1, this is the localhost exception.",
"username": "chris"
},
{
"code": "",
"text": "Yes so first Installed the msi 4.4.4 from the MongoDB site and started it through the console with mongod, then I started mongo through console and entereduse admin\ndb.createUser(\n{\nuser: “root”,\npwd: “root”,\nroles: [ { role: “userAdminAnyDatabase”, db: “admin” }, “readWriteAnyDatabase” ]\n}\n)This is the Output:Successfully added user: {\n“user” : “root”,\n“roles” : [\n{\n“role” : “userAdminAnyDatabase”,\n“db” : “admin”\n},\n“readWriteAnyDatabase”\n]\n}then I restarted mongod with mondog --auth --bind_ip_allwhen im on the VM I can connect with my credentials with mongo -u root -p rootI tried to connect with Compass and with the Visual Studio Code Plugin from my remote pc and it doesnt work I only get the message in mongod in the console:{“t”:{\"$date\":“2021-03-31T05:19:52.856-07:00”},“s”:“I”, “c”:“ACCESS”, “id”:xxxxx, “ctx”:“conn2”,“msg”:“Authentication failed”,“attr”:{“mechanism”:“SCRAM-SHA-256”,“principalName”:“root”,“authenticationDatabase”:“admin”,“client”:“xx.xxx.xxx.xxx:xxxx”,“result”:“UserNotFound: Could not find user “root” for db “admin””}}If im on the VM and I try to connect to localhost or 127.0.0.1 with compas and the credentials it works fine.\nIm pretty sure it cant be the Firewall, because If i set up noauth I can connect from my remote pc",
"username": "Bjorn_K"
},
{
"code": "",
"text": "Thanks for your answer I descriped my problem a little bit more detailed as you can see, maybe you know an answer. I dont know how this forum works if everybody get informed as soon as somebody answeres but thanks to you both so far i really appreciate it.",
"username": "Bjorn_K"
},
{
"code": "use admindb.createUser()",
"text": "Hi @Bjorn_KLooks pretty straight forward. The only thing I can think of is perhaps use admin was not performed before the db.createUser()when im on the VM I can connect with my credentials with mongo -u root -p rootBy default mongo connects to the test database. So given that this works I think you created the user in the test db.",
"username": "chris"
},
{
"code": "",
"text": "i first wrote use admin -> enter, then i entered the rest is that wrong does it need to be all at once?",
"username": "Bjorn_K"
},
{
"code": "mongo -u root -p rootmongo -u root -p root test",
"text": "By default mongo connects to the test database. So given that this works I think you created the user in the test db.I take this back. mongo -u root -p root will connect to the localhost and use admin as the authentication database and use the test db.mongo -u root -p root test will connect to localhost and use test as the db and the authentication.i first wrote use admin -> enter, then i entered the rest is that wrong does it need to be all at once?That is correct. I’m a little stumped, if the only thing you change is the --auth flag. And auth is working on the when on the mongod host.",
"username": "chris"
},
{
"code": "",
"text": "Yes it is kinda weird.\nSo basicly what I did wrong first was to create a user in my individual database with only reading rights and only after that i created a admin user on admin db.Then I tried to deinstall everything, deleted everyfolder, registry everything and installed it new.\nThis time I created the admin user first in the admin database as mentioned before.I am able to login from remote pc when --auth is off.\nI am able to login on the VM if auth is off or on.\nIm not able to connect when auth if on and I try to connect to the IP adress of the server, then mongod tells me the user doesnt exist. It doesnt matter if I try it on the VM or from my remote pc. Is it maybe because im Using mongo community?",
"username": "Bjorn_K"
},
{
"code": "",
"text": "I did a little more try and error maybe I did something in gernal wrong but after I did the following it worked:Thanks for your help guys propably i need to setup more now but im happy i got this far.//but now if i connect with compas it tells me in mongod connection accepted but nothing is hown in compas on my remote pc. if i do it in visual studio code i at least get a message that it was successfully",
"username": "Bjorn_K"
},
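For anyone else debugging the same setup: once the admin user exists in the admin database and mongod is started with --auth and a reachable bindIp, the remote connection can also be verified programmatically. A minimal sketch with the .NET driver, where <vm-ip> is a placeholder for the VM's address:

```csharp
using System;
using MongoDB.Bson;
using MongoDB.Driver;

// authSource=admin tells the driver to authenticate against the admin
// database, which is where the user was created in this thread.
var client = new MongoClient(
    "mongodb://root:root@<vm-ip>:27017/?authSource=admin");

// Connections are lazy, so force authentication with a ping.
var pong = client.GetDatabase("admin")
    .RunCommand<BsonDocument>(new BsonDocument("ping", 1));
Console.WriteLine(pong);
```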
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can't connect to MongoDB on WinServer 2019 | 2021-03-30T23:51:49.831Z | Can’t connect to MongoDB on WinServer 2019 | 5,299 |
null | [] | [
{
"code": "[hunt.hosts, hunt.hunters].sorted(\"score\", ascending: true)",
"text": "I’m using Realm Swift and have lists of embedded objects that I want to combine and sort.Specifically, I have an embedded object called Hunter in a scavenger hunt app.At least one Hunter is hosting the scavenger hunt, so they are in a hunt.hosts list. (other hunters can be added as hosts as well, the reason this is a list).There is also a list of type Hunter in a hunt.hunters list.I’d like to show a list sorted by score that includes both. Kind of like:[hunt.hosts, hunt.hunters].sorted(\"score\", ascending: true)I could do this by mapping all of the objects into a new array, but then I lose the live updates. Is there a way to combine multiple lists of the same type of EmbeddedObjects?If not, I can rethink my structure, but it would be awesome if this was possible.Thanks.–Kurt",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "Well, I didn’t get any hits on here so I went another route.If there is a way to do this, I would love to know.But what I did was change the way that the embedded object model was set up.I added a type property with a RealmEnum and then just put them all in one list with different types that can be filtered in queries to show the right lists of hunters. This way I can omit the queries if I want them all in one list rather than attempting to combine them later.–Kurt",
"username": "Kurt_Libby1"
}
] | Combine lists of embedded objects | 2021-03-29T15:25:54.651Z | Combine lists of embedded objects | 1,463 |
null | [] | [
{
"code": "",
"text": "These are some things we discovered during a POC. Maybe ‘everyone knows, nobody tells?’\nAny corrections and suggestions are welcome.",
"username": "Suresh_Batta"
},
{
"code": "MemberSerialization.OptIn",
"text": "Hey, thank you so much for this feedback. I’ll try to address these points one by one:Again, thanks for the feedback here - even if some of this information exists spread over multiple venues, we should definitely try to consolidate it and make it more discoverable so that new users have an easier time getting started and don’t run into weird behavior.",
"username": "nirinchev"
},
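A minimal sketch of the MemberSerialization.OptIn approach referenced above, for anyone hitting Json.NET errors when returning Realm models from an API; the class and property names are hypothetical:

```csharp
using Newtonsoft.Json;
using Realms;

// Only properties explicitly marked [JsonProperty] are serialized;
// Realm's internal state and any unmarked properties are ignored,
// which avoids the serialization errors described above.
[JsonObject(MemberSerialization.OptIn)]
public class Item : RealmObject
{
    [JsonProperty]
    public string Name { get; set; }

    [JsonProperty]
    public int Quantity { get; set; }

    // Not opted in: never serialized by Json.NET.
    public string InternalNotes { get; set; }
}
```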
{
"code": "",
"text": "@nirinchev You’re welcome.You don’t need to have all RealmObject inheritors in the same assembly but you do need to reference something from those assemblies to ensure that the assembly is loaded, otherwise Realm will not discover them the first time it’s opened.This has not been our experience. The models were declared (when defined in another assembly) and we received the same error. There is fody weaver in the mix and I think this is an issue with transitive dependencies. Realm isn’t able to reach the RealmObject related types in another assembly through the referenced project.What is the issue you were facing when using RealmObject inheritors?We will try out the OpIn attribute. We were trying to use the RealmObject derived model in a HttpResponseMessage and got errors. Will post the exact exception later.That is somewhat usage-specificAll usage, at least how MongoDB Sync has been advertised and to which we were attracted to is Offline-first. IS there any other way Realm can be used to achieve this Offline-first functionality other than having a Realm open on the main thread? I wager, no.\nThis is why I think the documentation needs to reflect this key fact so developers are aware.",
"username": "Suresh_Batta"
}
] | Gotchas we wish we knew (or were documented) | 2021-03-30T22:23:52.469Z | Gotchas we wish we knew (or were documented) | 2,011 |
null | [
"dot-net"
] | [
{
"code": "var result = db.collection(\"A\")\n .Aggregate()\n .Match(Builders<T>.Filter.Eq(\"<ArrayName>.<PropertyName>\", \"name\"))\n .ToList();\n",
"text": "Hello,\nI have a performance issue that I can’t ignore.\nI have a collection with 1000 document. This collection will not grow so the number is stable.\nEach document weight 1kb. A single document has one array of subdocument and some other properties.What I do is:The result of the query operation in MongoDbCompass is 1ms for 200 document and its super ok.\nThe result of the ToList() of the documents is 10 seconds.This happen even with property not inside an array.What can I do ?\nThanksEdit\nI have indexes in place for the properties I need to search for.\nAlso this result are from localhost:27017, so all in local.I tested with the FindSync() and FindAsync() and I get the same results.\nI tried .Skip(n) and Limit(n + 20) and the performance for 20 documents are 1.5 second. A lot.",
"username": "Andrea_Zanini"
},
{
"code": "mongoexplain()explain",
"text": "Can you run the same query in the Mongo shell, mongo and see if you get similar results? You should also try and run the explain() for the query and post it here.the explain docs are here.",
"username": "Joe_Drumgoole"
},
{
"code": "{\nqueryPlanner: {\nplannerVersion: 1,\nnamespace: 'FirstCollection.Pokemon',\nindexFilterSet: false,\nparsedQuery: { 'moves.name': { '$eq': 'thunder' } },\nwinningPlan: {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { 'moves.name': 1 },\n indexName: 'MovesName',\n isMultiKey: true,\n multiKeyPaths: { 'moves.name': [ 'moves' ] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { 'moves.name': [ '[\"thunder\", \"thunder\"]' ] }\n }\n},\n rejectedPlans: []\n},\nexecutionStats: {\nexecutionSuccess: true,\nnReturned: 232,\nexecutionTimeMillis: 0,\ntotalKeysExamined: 232,\ntotalDocsExamined: 232,\nexecutionStages: {\n stage: 'FETCH',\n nReturned: 232,\n executionTimeMillisEstimate: 0,\n works: 233,\n advanced: 232,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n docsExamined: 232,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 232,\n executionTimeMillisEstimate: 0,\n works: 233,\n advanced: 232,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n keyPattern: { 'moves.name': 1 },\n indexName: 'MovesName',\n isMultiKey: true,\n multiKeyPaths: { 'moves.name': [ 'moves' ] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { 'moves.name': [ '[\"thunder\", \"thunder\"]' ] },\n keysExamined: 232,\n seeks: 1,\n dupsTested: 232,\n dupsDropped: 0\n }\n }\n },\nserverInfo: {\nhost: 'DESKTOP-CS3UEH8',\nport: 27017,\nversion: '4.4.0',\ngitVersion: '563487e100c4215e2dce98d0af2a6a5a2d67c5cf'\n },\n ok: 1\n}",
"text": "Hello Joe,\nthanks for the reply.I run ‘db.collection.find(…)’ and got the results. The performance was strictly bettere than the ToList()I also run the ‘db.collection.find().explain()’ and this is the result:",
"username": "Andrea_Zanini"
},
{
"code": ".explain(\"executionStats\")stage: 'IXSCAN'executionStats",
"text": "Can you post the output of .explain(\"executionStats\"). This will give a more detailed report. I can see you are using an index stage: 'IXSCAN' which is good. What we are trying to do here is eliminate the possibility that the database is the bottleneck. the executionStats parameter will give us a lot more detail about where the time is spent.",
"username": "Joe_Drumgoole"
},
{
"code": ".explain(\"executionStats\")List<BsonDocument>",
"text": "In the previous post if you scroll down the section code you can see the “executionStats”.\nSorry I didn’t specify that I used the .explain(\"executionStats\")What I see slowing down it is the actual manifestation of the data.\nI know I am trying to retrieve a List<BsonDocument> where each document has all the properties, array etc.\nI can of course set a $projection to limit what to retrieve (and I will do eventually) but because I want to learn how mongodb works I am trying to understand why this is happening.",
"username": "Andrea_Zanini"
},
{
"code": "",
"text": "I see it now. So the actual query time is effectively 0. So your delay is not in the query or the database engine. Could it be n/w or client delays?",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "I don’t think so. I am running all locally (mongodb is hosted in localhost:27017). All C# in a basic Console Application project with no client side.\nI am doing like this just to understand the MongoDb C# Driver.So I don’t really know.",
"username": "Andrea_Zanini"
},
{
"code": "",
"text": "Put a time stamp either side of the query on the client. It may just take a while to start your program. How long does it take from the program side to run the query.",
"username": "Joe_Drumgoole"
},
{
"code": "FindSync()Aggregate().Match()",
"text": "So, as I wrote I am doing this in a ConsoleApplication NetcoreApp 3.1.\nI executed 2 different ways of querying, one with FindSync() the other with Aggregate().Match().The first one (FindSync):\nCapture1672×494 10.4 KBThe second one (Aggregate):\nCapture11628×485 10.8 KB",
"username": "Andrea_Zanini"
},
{
"code": "",
"text": "I am going to ask my C# colleague @yo_adrienne to take a look. She is based in Nevada so it will be a few hours.",
"username": "Joe_Drumgoole"
},
{
"code": "ToListAsync()",
"text": "Hi @Andrea_Zanini!Can you try your queries but with ToListAsync()?And if possible, can you also log out the execution stats?",
"username": "yo_adrienne"
},
{
"code": "await query.ToListAsync();ToListAsync()",
"text": "Hello, sorry for the late answer.I tried the same thing with the await query.ToListAsync(); but even with that the time it takes for manifest the data is the same.For the execution stats do you mean what @Joe_Drumgoole asked me? Or I can, with C# code, somehow use the execution stats for the ToListAsync() operation?",
"username": "Andrea_Zanini"
},
{
"code": "",
"text": "Hi All,I am experiencing the same issue. It’s not a matter of a big number of documents retrieved by the query but when doing the tolist(), it takes around 20 seconds to materialize them. Do you have any workaround or estimation for the fix? We also tried with the newest mongoDb driver version, but the issue is still there either using tolist or toListAsync. Please help",
"username": "Sergio_Fiorillo"
},
{
"code": "public class Example {\n [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public Guid Id {get; set; }\n \n [BsonRepresentation(BsonType.String)]\n public string Name {get; set; }\n}\n",
"text": "Hello Sergio,\nI didn’t found the cause of the problem and how to solve it unfortunately.What I did to go on with the project was start from the scratch and rebuild my document class with all the BsonAttribute to define via code what type are the property of the document (even if, when declared, have already the type)\nExample:I changed a lot, more that I wanted to my document, but now the manifestation time it’s what I expected to be.I didn’t close the issue because I think there is something spooky around it.",
"username": "Andrea_Zanini"
},
{
"code": " var query = collection.Aggregate(new AggregateOptions { AllowDiskUse = true })\n .Match(matching_criteria)\n .Group(grouping_criteria)\n .Sort(sorting_criteria)\n .Project(projection)\n .As<DOCUMENT>();\n\n var result = query.ToList();\n",
"text": "Hi Andrea,Thanks for your response. I tried that but still having the same issue toList is extremely slow:Any other ideas on this?",
"username": "Sergio_Fiorillo"
},
{
"code": "",
"text": "Hi,I’m experiencing a similar performance overhead with the client for 1667 results.\nTime in Compass : Actual Query Execution Time (ms):120\nC# driver: around 6000 msAny update on this?",
"username": "Joao_Passos"
},
{
"code": "",
"text": "I am going to get some C# experts to check this one out.",
"username": "Joe_Drumgoole"
},
{
"code": " public class Entity\n{\n\tpublic Guid Id { get; set; }\n\tpublic List<Bar> BarValues { get; set; }\n\tpublic List<Foo> FooValues { get; set; }\n}\n\npublic class Bar\n{\n\tpublic Guid Id { get; set; }\n\tpublic Guid B { get; set; }\n\tpublic Guid C { get; set; }\n}\n\npublic class Foo\n{\n\tpublic Guid Id { get; set; }\n\tpublic Guid B { get; set; }\n\tpublic object C { get; set; }\n}",
"text": "In my case, the time seems to be in the serialization\nIt’s returning 1667 instances of Entity, each one with 500 instances of Bar and 150 instances of Foo",
"username": "Joao_Passos"
},
{
"code": "db.GetCollection()query.ToList();",
"text": "@Andrea_Zanini, @Sergio_Fiorillo, @Joao_Passos:There are still many factors that could be affecting these times, so I’d like to ask a few more questions regarding your respective scenarios:The object models collectively posted here seem straightforward, so the deserialization shouldn’t be an issue.Additionally, regarding these observations:The result of the query operation in MongoDbCompass is 1ms for 200 document and its super ok.\nThe result of the ToList() of the documents is 10 seconds .Time in Compass : Actual Query Execution Time (ms):120\nC# driver: around 6000 msI’d like to quickly explain why this may be:When making a call to db.GetCollection(), this is usually ~8ms, as @Andrea_Zanini’s screenshots show:The first one (FindSync):The second one (Aggregate):When creating the Aggregate queries, it’s important to note that this doesn’t result in the first batch being retrieved; it’s only in-memory and doesn’t go out on the wire yet. Calling query.ToList(); takes the longest now because it has to retrieve the first batch (~600ms based on the FindSync log in Andrea’s first screenshot), PLUS all subsequent batches. This is why the total times are 8.5 seconds or higher.Compass reports query execution time is only 120ms because that only considers the query time on the server. It doesn’t include the time on the wire nor the deserialization from BSON to C# objects.So, some options:If you find that ping time is your bottleneck, your wait time is probably due to the amount of roundtrips that are required to retrieve the batches. Based on the earlier example of 9.1 seconds grand total execution time and ~600ms per batch, that’s about 15 round trips to the server (9.1s / 600ms).Possible solution to ping issue is to locate your app servers closer to the cluster that you’re querying.If you find that bandwidth is your bottleneck, your wait time is probably due to the time it takes to transmit each 15mb batch across the wire, again, around 15 times.Possible solution to bandwidth issue is to reduce the amount of data being retrieved, via a projection (like @Sergio_Fiorillo appears to be doing), and/or enabling wire protocol compression in your connection string or via code.Finally, if all else fails and it still appears to be a deserialization issue, we’d love to see a reproduction of it happening, ideally with some test data. I know it might be additional work, but even a small repo with this information helps us immensely in being able to debug the exact issue you may be having!Thanks for your patience! Looking forward to hearing from you and seeing if these possible solutions help.",
"username": "yo_adrienne"
},
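A minimal sketch of the two mitigations suggested above (wire-protocol compression and a projection), using the .NET driver. The collection and field names are taken from the explain output earlier in the thread; the connection string is a placeholder, and the server must support the listed compressors for them to take effect:

```csharp
using System;
using MongoDB.Bson;
using MongoDB.Driver;

// Ask the driver to compress wire traffic; unsupported compressors are
// simply dropped during connection negotiation.
var client = new MongoClient(
    "mongodb://localhost:27017/?compressors=zstd,snappy,zlib");
var collection = client.GetDatabase("FirstCollection")
    .GetCollection<BsonDocument>("Pokemon");

// Project only the fields the caller needs, so each batch on the wire
// (and the per-document deserialization work) is much smaller.
var projection = Builders<BsonDocument>.Projection
    .Include("name")
    .Include("moves.name")
    .Exclude("_id");

var results = collection
    .Find(Builders<BsonDocument>.Filter.Eq("moves.name", "thunder"))
    .Project(projection)
    .ToList();

Console.WriteLine(results.Count);
```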
{
"code": "",
"text": "Hi,Thank you for the prompt support.\nThe object models are more complex than the simplified version I posted\nI have MongoDB running on my local machine so bandwidth is not an issue.Here’s a repo with a working example: https://github.com/jpdlpassos/mongodb5000 raw values took 675 ms\n5000 typed values took 14461 msHopefully, you can help me spot where i can improve my code.Thanks again!",
"username": "Joao_Passos"
}
] | C# .NET Core 3.1 - Driver (2.11.1) - Slow ToList() data manifestation | 2020-09-05T20:30:50.158Z | C# .NET Core 3.1 - Driver (2.11.1) - Slow ToList() data manifestation | 13,755 |
null | [] | [
{
"code": "",
"text": "Hi, I’m working on running static analysis tools on MongoDB, but the tools I want to use, clang-tidy & Clang Static Analyzer, accepts compile_commands.json. I’m struggling to get such a json due to the scons build system this project uses.Do you have any recommendations on how to create the compile_commands.json using scons?I’ve found the pinetr2e/scons-compiledb: SCons support for compile_commands.json generation (github.com) tool, but I could not get it working.\nI also tried the CodeChecker’s build logger functionality to LD_PRELOAD some hooks for the exec-like function catching any compiler invocation that the scons build system triggers - without success.",
"username": "Balazs_Benics"
},
{
"code": "compile_commands.jsonscons compiledbscons compile_commands.jsonAliasclang-tidyclang-tidymasterbugprone-unused-raiibugprone-use-after-movereadability-const-return-typereadability-avoid-const-params-in-decls",
"text": "Hi -You can build a compile_commands.json for MongoDB on all branches back to and including v3.2 by running scons compiledb or scons compile_commands.json, the former being an Alias for the latter.The implementation you linked to is actually derived from our original implementation: GitHub - pinetr2e/scons-compiledb: SCons support for compile_commands.json genearationOur implementation is built in to the MongoDB sources, and lives in mongo/compilation_db.py at master · mongodb/mongo · GitHub, and a more or less identical version wih some improvements has been merged into SCons itself as of SCons 4.0: scons/compilation_db.py at master · SCons/scons · GitHubI’d be interested to know more about what you are planning to do with clang-tidy and the MongoDB sources. We started running clang-tidy as part of MongoDB v4.4 development, but with a rather small set of checks. We have been gradually expanding the set of those checks as we identify specific checks that we find useful. On the master branch right now, the list of enabled checks is:Thanks,\nAndrew",
"username": "Andrew_Morrow"
},
{
"code": "",
"text": "Awesome! I’m gonna test it as soon as I have time.I’m working on developing static analysis checks and MongoDB seemed to be large and mature enough to have a look at the reports of my experimental check. I’m just diffing the reports to see what changes.You might be interested in our open-source tool enabling us to do so. Have a look at CodeChecker! Let us know if you find it useful.",
"username": "Balazs_Benics"
}
] | Acquire compile_commands.json for the build targets | 2021-03-31T10:08:23.341Z | Acquire compile_commands.json for the build targets | 4,136 |
[
"performance"
] | [
{
"code": "",
"text": "Hi AllI am trying to design a queue with Mongo DB as its persistent storage for a highly scalable application. This would involve a lot of delete and slice queries. Cassandra has a blog that says it will impact the performance of the database due to its data model. Can someone point me to an article that explains how the Mongo DB stores the data and the impact of update/delete on read performance?Read the latest announcements, product updates, community activities and more. Subscribe now to the DataStax blog!Thanks\nGuru",
"username": "Guru_Prashanth_Thana"
},
{
"code": "",
"text": "Hi @Guru_Prashanth_Thana,I suggest you to read the following blog series regarding mongodb performance :Having said that, it sounds like if the queue is only intend to persist and tunnel data, using a capped collection might make sense:Thanks,\nPavel",
"username": "Pavel_Duchovny"
}
] | Impact of delete and update on read performance | 2021-03-31T11:47:02.671Z | Impact of delete and update on read performance | 3,742 |
|
null | [] | [
{
"code": "",
"text": "my current setup is, I have a single replica set with a single master and two more replicas. Currently, it has a large collection and I need to add an unique index. I estimate this would take couple of hours and I will be following the instructions given here - Build Indexes on Replica Sets — MongoDB Manualnow my question is, lets say I add the unique index in follower 2 and add it back to the replica set. In the meantime, what if primary has added a document which conflicts with this index.",
"username": "V_N_A"
},
{
"code": "db.collection.createIndex()",
"text": "Hi @V_N_A,Please note that unique indexes have a specific section in that guide:To create unique indexes using the following procedure, you must stop all writes to the collection during this procedure.If you cannot stop all writes to the collection during this procedure, do not use the procedure on this page. Instead, build your unique index on the collection by issuing db.collection.createIndex() on the primary for a replica set.Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "To create unique indexes using the following procedure, you must stop all writes to the collection during this procedure.oh! I got confused by this line. I thought I’d have to stop writes to the collection in the replica. Does this mean, I would have stop the writes in primary too?",
"username": "V_N_A"
},
{
"code": "",
"text": "Yes. Stop all writes throughout the replica… Meaning on Primary which is the one who doing writes.",
"username": "Pavel_Duchovny"
}
] | What happens in case of a conflict during replication? | 2021-03-30T13:30:58.435Z | What happens in case of a conflict during replication? | 2,598 |
null | [
"database-tools"
] | [
{
"code": "",
"text": "Hi folks, while working with a customer that is hitting this Mongo bug ( mongodump fails when capped collection position lost), I run out of options trying a workaround for the problem, which happens sporadically (very likely depending on variable latency issues)\nbut makes automation of backups not possible.Given the bug is open for quite a while, we are considering working on a fix, but my concern is that we\nmight start working in something and end up in a rabbit hole with a solution way to complex or that require architectural changes.Also, I’m posting in this mongo tools channel, but there’s also a bug related to the mongo core tools, which increases, even more, my suspicions that this\nmight required to change things that the mongo team might not be willing to accept.So, what do you guys think? Does anyone here have the expertise or knows who could give a good direction on how to fix that problem?",
"username": "Erlon_Cruz"
},
{
"code": "mongodumpmongodumpmongodumpmongodumpmongodumpmongodumpmongodumpmongodump",
"text": "Welcome to the MongoDB Community forums @Erlon_Cruz!!The issues you’ve highlighted were originally reported for MongoDB 3.4 server and MongoDB 3.2 mongodump, so it would be good to confirm if any behaviour has changed in recent versions of MongoDB.To help understand the bug affecting your mongodump backups, can you please:confirm the exact MongoDB server and mongodump versions you are using in your environmentconfirm the type of deployment you are backing up (standalone, replica set, or sharded cluster)share more context on the capped collection(s) that cause your mongodump backups to periodically fail (for example, is that the oplog system collection or a user collection?)There have been improvements to the initial sync process and mongodump since MongoDB 3.2, but you may need to consider a different backup method if some of your capped collections are rolling over faster than mongodump can complete.Since all data dumped via mongodump has to be read into memory by the MongoDB server, it is not an ideal backup approach for deployments that are highly active or have uncompressed data significantly larger than the WiredTiger cache. Backup approaches like filesystem snapshots and agent-based backup (eg MongoDB Ops Manager or Cloud Manager) are more common for production deployments that have outgrown mongodump.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie,Thanks for answering, so the points you asked:One of the collections I mentioned, the one used for logs, seems to have no more errors after I increase the collection size in 10x, but doing the same in the transaction collection didn’t work the same.About the improvements you mention, can you tell what versions of mongo they were added? And if they improved anything on that behavior, shouldn’t the bugs I mentioned in Jira be updated or closed?Erlon[1] Bug #1852502 “Juju backups failing Executor error: CappedPositio...” : Bugs : juju\n[2] mongodump.log · GitHub",
"username": "Erlon_Cruz"
},
{
"code": "system.profilemongodumpmongodumpmongodumpmongodumpmongodumpdb.fsyncLock()db.fsyncUnlock()mongodump",
"text": "Hi @Erlon_Cruz,Thanks for the extra details. Since the issues you mentioned were originally reported against older versions, context on the actual versions used (and whether this affects system vs user collections) helps eliminate the possibility that you are running into some related bugs that have since been fixed.In particular I was thinking of issues with small but very active system capped collections (for example, system.profile is 1MB by default) that should be excluded from mongodump by default.One of the collections I mentioned, the one used for logs, seems to have no more errors after I increase the collection size in 10x, but doing the same in the transaction collection didn’t work the same.The underlying problem is twofold: capped collections have a fixed size and can roll over while the mongodump is in progress, and deletes to capped collections are currently implicit rather than replicated.Implicit deletion from capped collections is an implementation detail inherited from the legacy MMAPv1 storage engine. The MMAP storage engine was removed in MongoDB 4.2, and there is currently work in progress to address this behaviour in future server versions. SERVER-55156 (Move capped collection responsibilities to the collection layer) and related issues will unblock being able to address the initial sync issue (SERVER-32827) you mentioned in the first post in this thread. I don’t expect there will be a straightforward fix for mongodump, but I’ll defer to the database tools team to comment on the Jira ticket.I can think of some possible workarounds without any changes to server or mongodump :Increase the size of affected capped collections (which can still be a challenge to mongodump depending on workload).Use an alternative backup strategy: filesystem snapshots or backup agent would be best.If you have a replica set deployment you could also consider configuring a hidden member for backup purposes and using db.fsyncLock() / db.fsyncUnlock() to quiesce writes while the mongodump backup is running. Stopping writes on a secondary in order to take a backup does presume the backup can be completed before the secondary’s oplog becomes too stale to sync, so this is a less recommendable backup approach.Consider using a TTL index to limit data size instead of a capped collection.About the improvements you mention, can you tell what versions of mongo they were added? And if they improved anything on that behavior, shouldn’t the bugs I mentioned in Jira be updated or closed?Issues will be definitely be closed if there is an associated commit, but with 50K+ issues in the SERVER project sometimes there are older issues that are either duplicates or indirectly addressed via other code changes.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie, thanks a lot for your help, support, and detailed information.\nI really appreciated it!",
"username": "Erlon_Cruz"
}
] | Assessing complexity to fix mongodump behaviour for capped collections | 2021-03-14T16:16:17.464Z | Assessing complexity to fix mongodump behaviour for capped collections | 2,924 |
null | [
"atlas-triggers"
] | [
{
"code": "updatedFields{\n \"_id\": \"someId\"\n \"fullName\": {\n \"first\": \"matt\"\n \"last\": \"hope\"\n }\n...\n}\nfullName.firstupdateDescription.updatedFields\"fullName\": {\n \"first\": \"will\"\n \"last\": \"hope\"\n }\nupdateDescription.updatedFields\"fullName\": {\n \"first\": \"will\"\n }\n",
"text": "Hi,\nI am using mongo update triggers.\nWhen I update one field inside an object, in the trigger message updatedFields I see the full object, even though only 1 field inside this got updated, Is it possible to see just that updated field and not the whole object?Example:Initial Doc:Lets say I update fullName.first to be “will”. (I am using $merge to do this from one collection to the triggerCollection, not sure if that makes any difference).\nIn the trigger message details, in the updateDescription.updatedFields I see:Is it possible in the updateDescription.updatedFields to see just the updated field and not the whole object?\nSo the following:Thanks,\nMatt",
"username": "HopeM"
},
{
"code": "triggerCollection",
"text": "Hi @HopeM welcome to the community!Could you please share the aggregation that’s writing to triggerCollection?",
"username": "Andrew_Morgan"
},
{
"code": "db.getCollection('students')..aggregate([\n {some match query},\n {\"$merge\": \"triggerCollection\"}\n])\nfullName.first",
"text": "Hopefully this is enough… (the match query has no relevance here)So lets say I update the fullName.first in the doc in the students collection… then run that aggregation\nthus updating the corresponding doc in the triggerCollection. Then causing the update trigger change event to occur.",
"username": "HopeM"
},
{
"code": "on$mergetriggerCollection",
"text": "Have you tried using the on option for $merge to ensure that it’s updating any existing doc in triggerCollection rather than creating a new one?",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "I am using the default of “_id”, It is definitely updating the correct records and not creating new ones though.",
"username": "HopeM"
},
{
"code": "changeEvent{\\\"_id\\\":{\\\"_data\\\":\\\"82605E074B0000009C2B022C0100296E5A1004BF0C5A1CEA7D4178AF7F4F58A773FDCE46645F69640064605E04A993D628CC1DC7FA740004\\\"},\\\"operationType\\\":\\\"update\\\",\\\"clusterTime\\\":{\\\"$timestamp\\\":{\\\"t\\\":1616774987,\\\"i\\\":156}},\\\"fullDocument\\\":{\\\"_id\\\":\\\"605e04a993d628cc1dc7fa74\\\",\\\"first\\\":\\\"cecil\\\",\\\"last\\\":\\\"morgan\\\"},\\\"ns\\\":{\\\"db\\\":\\\"forum\\\",\\\"coll\\\":\\\"triggerCollection\\\"},\\\"documentKey\\\":{\\\"_id\\\":\\\"605e04a993d628cc1dc7fa74\\\"},\\\"updateDescription\\\":{\\\"updatedFields\\\":{\\\"first\\\":\\\"cecil\\\"},\\\"removedFields\\\":[]}}firstupdateDescriptionstudents",
"text": "I’m seeing a different behaviour. I set up a similar trigger, and when I update the first name, this is what I see in the trigger’s log for the changeEvent:\n{\\\"_id\\\":{\\\"_data\\\":\\\"82605E074B0000009C2B022C0100296E5A1004BF0C5A1CEA7D4178AF7F4F58A773FDCE46645F69640064605E04A993D628CC1DC7FA740004\\\"},\\\"operationType\\\":\\\"update\\\",\\\"clusterTime\\\":{\\\"$timestamp\\\":{\\\"t\\\":1616774987,\\\"i\\\":156}},\\\"fullDocument\\\":{\\\"_id\\\":\\\"605e04a993d628cc1dc7fa74\\\",\\\"first\\\":\\\"cecil\\\",\\\"last\\\":\\\"morgan\\\"},\\\"ns\\\":{\\\"db\\\":\\\"forum\\\",\\\"coll\\\":\\\"triggerCollection\\\"},\\\"documentKey\\\":{\\\"_id\\\":\\\"605e04a993d628cc1dc7fa74\\\"},\\\"updateDescription\\\":{\\\"updatedFields\\\":{\\\"first\\\":\\\"cecil\\\"},\\\"removedFields\\\":[]}}Only first is appearing in updateDescription.What command are you using to update the students collection?",
"username": "Andrew_Morgan"
},
{
"code": "triggerCollectionstudentsstudents$mergetriggerCollectionupdateDescription.updatedFields\"fullName\": {\"first\": \"will\", \"last\": \"hope\"}fullName.first",
"text": "I am using $merge to update the triggerCollection collection (collection upon which I have the triggers listening).Sorry if I am not being clear… hopefully the below may help:Hope that makes sense.",
"username": "HopeM"
},
{
"code": "firstupdatedFields[{\n $match: {}\n}, {\n $merge: {\n into: 'triggerCollection'\n }\n}]\nconsole.log(JSON.stringify(changeEvent.updatedFields));\"{\\\"updatedFields\\\":{\\\"first\\\":\\\"ted\\\"},\\\"removedFields\\\":[]}\"\nlast",
"text": "Hi @HopeM that makes sense. I’ve gone through similar steps but see only first appearing in updatedFields and so I’m trying to figure out where we’re diverging. This is my pipleline…In my function, I print to the console: console.log(JSON.stringify(changeEvent.updatedFields)); and this is the outut:The last attribute (which didn’t change) isn’t included.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Hi @Andrew_Morgan that looks like what I am trying… I am going to spend some time today having a play around, if I notice anything that is different in what I am doing or find the problem, I will update you.Thank you for your help so far.",
"username": "HopeM"
},
{
"code": "$mergefullName\"updateDescription\": {\n \"updatedFields\": {\n \"fullName\": {\"first\": \"will\", \"last\": \"hope\"}\n }\n}\n4.2.12",
"text": "Hi @Andrew_Morgan,I spent a while yesterday playing around trying various things… suffice to say nothing seemed to work or shed any light on this issue.Your aggregation pipeline is pretty much exactly what i’m doing, with the $merge step.\nAnd like I said previously I am getting both fields under fullName come through in updated fields even though I only update one.So I am not sure what the difference could potentially be.\nCould it possibly be to do with a specific mongo version? I am using version 4.2.12",
"username": "HopeM"
},
{
"code": "4.4.4",
"text": "I’m using 4.4.4 and so it might be worth upgrading to see if that changes things.",
"username": "Andrew_Morgan"
},
{
"code": "4.4.4",
"text": "Okay, I have tested on 4.4.4 now and it’s the same result.Not sure where to go from here.",
"username": "HopeM"
},
{
"code": "firstlastfullName",
"text": "@HopeM Just spotted what we’re doing differently. I had first and last as top-level fields. If I embed them within fullName then I get the same behavior as you do.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Aha okay at least we’re on the same page now…So my question now is - Is this expected behaviour for embedded fields?",
"username": "HopeM"
},
{
"code": "update$merge$merge{\"$set\":{\"fullName\":{\"first\":\"Asya\",\"last\":\"Kamsky\"}}}\nfullName$merge",
"text": "This has to do with the way update operation works as $merge basically turns into an update internally.By default the behavior of $merge is to merge fields at the top level. So your update basically becomes:If the entire subdocument matched, then it would be a noop, but when any of its subfields don’t match it becomes an update of the fullName field. That’s the way $merge currently works, so there’s no way for you to only get the subfield that was changed at the moment.Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Okay, that’s exactly what I wanted to get to the bottom of!Thanks Asya.",
"username": "HopeM"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Stitch update Triggers (updatedFields in trigger message) | 2021-03-26T11:13:14.898Z | Stitch update Triggers (updatedFields in trigger message) | 3,663 |
null | [
"dot-net",
"crud"
] | [
{
"code": ".findOneAndUpdate({_id: id},[{$set:{present:{$eq:[false,\"$present\"]}}}]);var pipeline = new EmptyPipelineDefinition<T>()\n .AppendStage(\"{$set:{Present:{$eq:[false,\\\"$Present\\\"]}}}\", \n BsonSerializer.LookupSerializer<T>());\n\ncollection.FindOneAndUpdateAsync(x => x.Id == id, Builders<T>.Update.Pipeline(pipeline));\n",
"text": "Hi, I’m trying to replicate this mongo query using the C# driver:\n.findOneAndUpdate({_id: id},[{$set:{present:{$eq:[false,\"$present\"]}}}]);What would be the recommended way to write this query? So far the best solution I’ve come up with involves magic strings:Thanks!",
"username": "Hej_manes"
},
{
"code": "",
"text": "I am using the following piece of code in order to toggle a field.\nAlthough the field has to be an integer.// the field type is ‘Expression<Func<TEntity, int>>’ in my case\nvar updateDef = Builders.Update.BitwiseXor(field, 1);\nreturn await _collection.UpdateOneAsync(filter, updateDef, null, cancellationToken);Hope this helps ",
"username": "Ivan_Povazan"
},
{
"code": "",
"text": "Thanks for sharing a solution @Ivan_Povazan and welcome to the MongoDB Community!",
"username": "yo_adrienne"
}
] | Writing a boolean toggle query with the C# driver | 2021-02-23T18:39:00.182Z | Writing a boolean toggle query with the C# driver | 3,529 |
null | [
"aggregation"
] | [
{
"code": "AggregateAsync()using(var cursor = await collection.AggregateAsync(pipeline, options))\n{\n while (await cursor.MoveNextAsync())\n {\n // doing nothing here...\n }\n}\n",
"text": "I have a materialized view aggregation query (last stage is $merge into another collection) which works well. I don’t actually need the call to AggregateAsync() to return the results as all I care about is that it updates the collection. Do I need to iterate over the resulting cursor to ensure that all updates occur or can I just ignore the returned cursor?Right now I have the following which seems unnecessary but I’m not sure how to prove that it is necessary without setting up a query that guarantees it will update/return more than some batch size that I don’t explicitly set:Is the while loop necessary or has the database already processed the full update prior to returning any results?",
"username": "ttutko"
},
{
"code": "collection.AggregateAsync",
"text": "Hi Thomas, thank you for your good question.\nYour understanding is correct, after the collection.AggregateAsync method is executed, the pipeline is already executed on the server, and the last merge state is executed as well. Therefore the reading of the pipeline results will not affect the database updates, and can be skipped in this case.",
"username": "Boris_Dogadov"
},
{
"code": "await collection.AggregateAsync(pipeline, options));",
"text": "Thanks! For one additional point of clarification to make sure that resources are properly cleaned up… Do I still need a using statement here or is it safe to simply have await collection.AggregateAsync(pipeline, options)); on a line?",
"username": "ttutko"
},
{
"code": "AggregateAsynccollection.AggregateToCollectioncollection.AggregateToCollectionAsync",
"text": "Yes, in case of AggregateAsync method the result cursor needs to be disposed. More suitable method in this case would be collection.AggregateToCollection or collection.AggregateToCollectionAsync.",
"username": "Boris_Dogadov"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | C# Driver Materialized View - Iteration Required? | 2021-03-29T23:51:45.990Z | C# Driver Materialized View - Iteration Required? | 3,480 |
null | [
"java"
] | [
{
"code": "",
"text": "For the LIFE of me there is nothing out there about connecting from a Standard OR flexible Google GCP App engine ?Atlas can be reached from a Standard app engine using older driver 3.4 or less - java /or node … BUT that was 2018I think ONLY a google Flexible (non free tier) app engine can reach Atlas mongodb … is this true ? we are using the latest streams driver for java for java based google GCP hosted App.ANYONE know anything… it seems so obvious… but so little documentation on why not… my java GCP App on standard engine just returns objects… no errors… SEEMS to connect ok via the driver BUT cant insert documents OR read from… even changethe string to bas password… ZIPPO expceptions… and even opened up standard engine firewall … to no avail… for 27017",
"username": "John_Allen"
},
{
"code": "",
"text": "IMO this should work - in my mongodb world talk in 2019 I used Google App Engine’s free tier with the Node driver to build a web and mobile app. It was really easy to get set up, no issues. I had to look back at my notes from the talk, but basically I had everything default (standard runtime, us Central), I set up my Atlas instance on GCP also in the free tier us central region, created database user via Atlas UI, set IP access list in Atlas UI, set client connection URI in app.yaml file, and was off to the races.",
"username": "Rachelle"
},
{
"code": "",
"text": "Did you find out if this is actually the case? It might explain the connection problems i’m having.",
"username": "Yezzer"
},
{
"code": "",
"text": "yes thanks Rachelle… we upgraded our drive… as we also use Morphia… I do have a question on atlas connection timeouts… it happens on my clients machine and not on my machine… diff IPs… BUT in my account I have 0.0.0.0/32 etc and his public ip set and mine of course… I just refeshed it to see if that helps ie if permissions were dropped over time - since customer had’nt connected from his ip for a couple of months - timeout after 30000ms… the usual msg in the stack.",
"username": "John_Allen"
}
] | Connection from GCP Standard OR flexible Application Engine | 2020-07-19T21:21:36.011Z | Connection from GCP Standard OR flexible Application Engine | 1,815 |
[] | [
{
"code": "",
"text": "I have a frustrating crash occurring in my App.\nI would welcome any help or advice.\nI feel it must be something fundamental i have wrong as i cannot believe my use case is unique.\nThe App is built in SwiftUI and uses Realm Sync.\nXcode 12.4. latest releases of IOSAll was going good until I added a local realm in memory DB (i have also tried to file)\nThere is nothing special about the code.\nIt just renders a Scroll View of data items and allows the use to select individual items - toggle a status field.\nI have the following crash occurringThread 1: EXC_BAD_ACCESS (code=1, address=0x3f)\nAttributeID, unsigned Int - see attached\nScreenshot 2021-03-29 at 15.37.421774×54 25.8 KBThe crash only occurs when I logout of the App and re-login in via the biometrics option.\nWhen the following default login screen appears.\nIt is a consistent crash.\nScreenshot 2021-03-29 at 15.45.40508×566 57.5 KBI have narrowed down the unique situation the crash occurs.The crash only occurs when i present and update the data from the local Realm DB in a SwiftUI View.\n(ie toggle a data field) The update is rendered ok and all seems fine.I have tried Modals and Nav links - all have the same result.if i do not update the data from within the View - all is ok.If i update the data via Realm Studio - the update is rendered ok within the App and it does not cause a crash.If I attach the Realm data manipulation View to the main App Tab bar (whether i enter the Tabar View or not) - All is OK! … I can update the local Realm data via any App View and it does not cause and crash on re-login.Any help or thoughts appreciated.\nthanks",
"username": "Damian_Raffell"
},
{
"code": "",
"text": "@Damian_Raffell It sounds like you might be blocking the main thread with your Realm write. Have you tried to move the writes to a serial queue?",
"username": "Lee_Maguire"
},
{
"code": "",
"text": "Thanks for responding …I don’t think i am blocking the main thread. The App all works fine and responds well.\nThe local Realm is loaded off the main thread on a serial queue. But the App does not even get that far.Its only after i logout and try to log in again (which logs out of Sync, but i don’t think its anything to do with that, data updates all work fine and do not cause this crash) - its only after updating my local Realm data.It happens right here in the code. While waiting for User response. It never executes the login closure.Screenshot 2021-03-30 at 12.03.081202×255 42.5 KBI am seeing this also2021-03-30 11:57:57.517931+0100 NaturalBritain[15199:642006] [error] precondition failure: accessing attribute in a different namespace: 2777808CoreSimulator 732.18.6 - Device: iPhone 12 Pro (E8B7D200-AC93-4962-B9AA-45873FCF2993) - Runtime: iOS 14.4 (18D46) - DeviceType: iPhone 12 ProAttributeGraph precondition failure: accessing attribute in a different namespace: 2777808.",
"username": "Damian_Raffell"
},
{
"code": "",
"text": "Are you using the Realm SwiftUI property wrappers? Could you give us a full stack trace?",
"username": "Lee_Maguire"
},
{
"code": "",
"text": "Hi OK … i think i have tracked this down.\nNo i wasn’t using the wrappers.\nI was loading some of the data into an Oberservable Object with an @Published array for rendering within a Scroll View. Sometimes this array could be empty. Nothing complained until i logged out. I guess this array address got flushed further down the Nav stack, and the View couldn’t handle it.\nI will put this issue on hold for now while i test further.thanks for your support\nI will have a look at the wrappers.\ncheers",
"username": "Damian_Raffell"
},
{
"code": "",
"text": "Final ending to this…\nIt wasn’t the array, it was the underlying List View (not a Scroll View) - i was loading into.\nI guess it wasn’t preserving an accurate state somewhere.\nThe bio login was a misleading entry point.\nI guess this is where SwiftUI tries to re-render the Views and it couldn’t do it.\nI have changed to a LazyVS and all is good in the hood. - cheers",
"username": "Damian_Raffell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Crash on IOS Bio Security Login with local Realm data | 2021-03-29T15:22:41.561Z | Crash on IOS Bio Security Login with local Realm data | 2,782 |
|
[
"database-tools",
"monitoring"
] | [
{
"code": "",
"text": "As per mongo documentation lrw column shows read|write lock percentage. Based on which data, this field is getting populated ? The mongotop output in the specified time window does not have any write operations. The db.currentOp() output also does not show any operation waiting for lock.So, in this context how does one explain the 100% the write lock column.System ConfigurationOs Version: CentOS 7\ncat /etc/redhat-release\nCentOS Linux release 7.6.1810 (Core)\nMongo Version : 3.4.16\nStorage Engine: mmapV1\nData Path is on tmpfs[serverxx] out: run: df -hP -x iso9660\n[serverxx] out: Filesystem Size Used Avail Use% Mounted on\n[serverxx] out: /dev/sda2 95G 19G 71G 22% /\n[serverxx] out: devtmpfs 50G 0 50G 0% /dev\n[serverxx] out: tmpfs 50G 28K 50G 1% /dev/shm\n[serverxx] out: tmpfs 50G 2.1G 48G 5% /run\n[serverxx] out: tmpfs 50G 0 50G 0% /sys/fs/cgroup\n[serverxx] out: tmpfs 59G 0 59G 0% /var/data/sessions.1\n[serverxx] out: tmpfs 9.9G 0 9.9G 0% /run/user/0mongostat1856×438 38.6 KB mongotop1009×796 109 KB",
"username": "Kokila_Soumi"
},
{
"code": "insert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn time\n *0 *0 *0 *0 0 2|0 0.0% 0.0% 0 5.33G 25.0M 0|0 1|0 253b 40.5k 3 Mar 30 15:11:14.056\n *0 *0 *0 *0 0 1|0 0.0% 0.0% 0 5.33G 25.0M 0|0 1|0 112b 40.3k 3 Mar 30 15:11:15.056\n *0 *0 *0 *0 0 0|0 0.0% 0.0% 0 5.33G 25.0M 0|0 1|0 111b 40.2k 3 Mar 30 15:11:16.057\n *0 *0 *0 *0 0 1|0 0.0% 0.0% 0 5.33G 25.0M 0|0 1|0 112b 40.2k 3 Mar 30 15:11:17.056\n *0 *0 *0 *0 0 1|0 0.0% 0.0% 0 5.33G 25.0M 0|0 1|0 112b 40.3k 3 Mar 30 15:11:18.055\n *0 *0 *0 *0 0 0|0 0.0% 0.0% 0 5.33G 25.0M 0|0 1|0 111b 40.2k 3 Mar 30 15:11:19.056\n *0 *0 *0 *0 0 0|0 0.0% 0.0% 0 5.33G 25.0M 0|0 1|0 111b 40.2k 3 Mar 30 15:11:20.056\n *0 *0 *0 *0 0 1|0 0.0% 0.0% 0 5.33G 25.0M 0|0 1|0 112b 40.3k 3 Mar 30 15:11:21.056\n",
"text": "Hi @Kokila_Soumi,Welcome to MongoDB community.I am honestly not familiar with the lrw or lrwt columns as I don’t get them in the latest mongostat tool:Can you explain how do you get this output ?Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi Pavel,Thanks for your update !Running below command on Mongo DB 3.4.16 version displays lrw columns in results:mongostat -h serverxx:port --all --discover 5sample1741×125 6.58 KBPlease let me know if you need any other data !Thanks,\nKokila S",
"username": "Kokila_Soumi"
},
{
"code": "locks.Collectiondb.currentOp()db.currentOp()--all",
"text": "Hi @Kokila_Soumi, welcome to the forum!mongostat calculates its data from the serverStatus command. It uses the locks.Collection subdocument to calculate lrw and lrwt: serverStatus — MongoDB Manual14 commands ran during that time period, it’s possible one of those acquired an exclusive lock. I’ll note that running db.currentOp() will only give you a snapshot of the current running operations. You might not catch the op that is acquiring a lock if you run db.currentOp(). Additonally, it looks like the time period of mongotop that you showed doesn’t match up with the time period from mongostat (1:32:06 vs 1:35:43)?@Pavel_Duchovny these are MMAPv1 specific stats and won’t be shown unless running against a mongod that is using MMAPv1 and you use the --all option. @Kokila_Soumi by the way, MongoDB 3.4 is no longer supported and MMAPv1 is deprecated. We would strongly recommend that you upgrade to a newer version of MongoDB with WiredTiger.",
"username": "Tim_Fogarty"
},
{
"code": "",
"text": "Hi Tim,Thanks a lot for the inputs !Please help us to get clarify on below queries:As per this mongostat — MongoDB Manual link, the command column in the mongostat output indicates the commands run on the secondary, local|replicated. But the mongostat output is collected from the primary member. so i could not find any documentation on what is explanation for the data on primary member.Is there any system configuration for the number of locks(similar to file decriptor) which is used to calculate lock percentage ?As mentioned earlier, in the row when lrw shows 100 %, the equivalent mongotop output before and after the time does not show any write operation. Please refer newly attached screenshot(mongotop)3.1 Also, in same rows(of mongostat output) we could see only query operations count is greater than zero. If so, 14 commands will be for read operations. Will Exclusive Lock be acquired for Mongo Read operation and how it is affecting write lock percentage ?5.If not lrw, is there any mechanism to monitor the lock % on the system ?\nmongotop848×460 12.8 KB\n",
"username": "Kokila_Soumi"
},
{
"code": "local|replicatedserverStatus.locks.W.acquireWaitCount / serverStatus.locks.W.acquireCount01:32:4101:32:3601:32:4101:32:3801:32:4301:32:3601:32:4101:32:3801:32:41findAndModifyinsertlrwlrw",
"text": "That looks like a mistake in the documentation. mongostat shows local|replicated for both primary and secondary nodes. I will open an internal ticket to fix that. Sorry for the confusion.The lock percentage is calculated as serverStatus.locks.W.acquireWaitCount / serverStatus.locks.W.acquireCount. This means that you could see 100% even if only 1 Write lock was acquired and it had to wait because of a conflicting lock. Therefore a value of 100% should not be alarming. It does not necessarily mean that there is high lock contention.The mongostat output for 01:32:41 is showing what happened in the past 5 second interval (01:32:36 to 01:32:41). Mongotop is also showing what happened in the last 5 seconds, but it’s happening at a different 5-second interval. The mongotop interval 01:32:38 to 01:32:43 shows 1ms of write locks. That interval overlaps with the mongostat interval 01:32:36 to 01:32:41. So I think this all makes sense if a command acquired the write lock between 01:32:38 and 01:32:41.Also you said:we could see only query operations count is greater than zero. If so, 14 commands will be for read operations.This is not correct, the command might not be an insert, update, or delete, but there are other commands that can take an exclusive lock that won’t show up in any of the other fields. For example the findAndModify command would take an exclusive lock but doesn’t show up under insert.I don’t think lrw is a great parameter to use to monitor the health of a mongod. As I explained above, a high lrw value doesn’t necessarily indicate that there’s anything wrong.There are dozens of other parameters you could monitor. Memory, CPU, and disk usage are probably the most useful things to monitor. If you want a more full monitoring solution, I would recommend either upgrading to a newer version so that you can use Free Cloud Monitoring, or using MongoDB Atlas which includes even more sophisticated monitoring.I’m sorry, I’m not sure what you mean exactly by “lock % on the system”.I will mention that locking is generally a much lower concern in WiredTiger than it is in MMAPv1. For most operations WiredTiger locks at the document level and uses optimistic concurrency control. So lock contention is less frequent. Again, I’d strongly recommend using a newer version of MongoDB with WiredTiger if you can. If you need some pointers on how to do that, let me know.",
"username": "Tim_Fogarty"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongostat lrw column shows 100 % when there are no write operations performed on a mongodb | 2021-03-29T19:57:14.751Z | Mongostat lrw column shows 100 % when there are no write operations performed on a mongodb | 3,083 |
|
[
"dot-net",
"atlas-device-sync",
"app-services-user-auth"
] | [
{
"code": "Unknown: no rule exists for namespace UserData.CustomUserData{\n \"title\": \"CustomUserDatum\",\n \"properties\": {\n \"Company\": {\n \"bsonType\": \"string\"\n },\n \"FirstName\": {\n \"bsonType\": \"string\"\n },\n \"LastName\": {\n \"bsonType\": \"string\"\n },\n \"_id\": {\n \"bsonType\": \"string\"\n }\n }\n}\nschema for namespace (UserData.CustomUserData) must include partition key \"ownerId\" schema for namespace (UserData.CustomUserData) must have consistent types of partition key \"ownerId\" (expected: string, required: true; actual: string, required: false)",
"text": "Hi there,Perhaps someone can help me, I am trying to set up custom user data but keep running into problems and I’m not sure the documentation is that clear. I am using the .NET SDK.I have a series of RealmObjects which form my normal user data (such as Book, Person etc…). They are using a property _id as their primary key and a partition key of ownerId which is the user’s ID, all as suggested in the documentation, working very nicely.Then following the documentation further I am adding a CustomUserData class. This is not a subclass of RealmObject. This class has a property _id which is the user’s ID and identifies it as belonging to that user. There is no ownerId property.This is how I have sync set up in Realm:\n\nimage1447×362 16.9 KB\nAnd this is how I have Custom User Data set up:\n\nimage1454×536 28.5 KB\nAs far as I can tell I have set everything up correctly as per the documentation. I get an error in the app when trying to access the data: Unknown: no rule exists for namespace UserData.CustomUserData. OK fair enough, so on Realm, in Data Access=>Rules I find the newly added CustomUserData and select the ‘Users can only read and write their own data’ template’. This required a ‘Field Name For User ID’. This is where I get a bit confused, it’s not really in the docs how this should be configured for custom user data. Is this then the _id property? After I have done this I get a schema error:\n\nimage940×337 19.3 KB\nOK so I can go back to the app now, and custom user data works. I can then generate the schema in Realm and it looks like this:So far so good. When I try to save the schema it’s not possible as I get the error schema for namespace (UserData.CustomUserData) must include partition key \"ownerId\". Oh dear. I can add it to keep it quiet but then I get the error: schema for namespace (UserData.CustomUserData) must have consistent types of partition key \"ownerId\" (expected: string, required: true; actual: string, required: false). If I would make it required I would need that property in the class too and… basically I am going around in a lot of circles and not sure exactly what I am supposed to be doing.Could anyone tell me where I am going wrong here? I would suggest the documentation could be a little clearer too for people who aren’t the brightest such as me Many thanks!!Will",
"username": "varyamereon"
},
{
"code": "await user.RefreshCustomDataAsync();\n\n// Tip: define a class that represents the custom data\n// and use the gerneic overload of GetCustomData<>()\nvar cud = user.GetCustomData<CustomUserData>();\n\nConsole.WriteLine($\"User is cool: {cud.IsCool}\");\nConsole.WriteLine($\"User's favorite color is {cud.FavoriteColor}\");\nConsole.WriteLine($\"User's timezone is {cud.LocalTimeZone}\");\nCustomUserDataCustomUserDataRealmObjectCustomUserData",
"text": "Hi Will, thanks for the detailed explanation!The first thing that I’d mention is that the collection for your custom user data doesn’t need to be in a different database (I’ve always used the same database as I’m using for synced data – I don’t think that would cause the problem you’re seeing though).You shouldn’t need to add a schema or any rules in order for the custom user data to be readable from your app. Also, that data shouldn’t need to include the partition key.I’ve not used the .NET SDK but from the docs, this is how to read the custom user data from your app:From some of the errors you’re seeing, it sounds as though your app may be trying to read directly from your CustomUserData collection and/or sync it (e.g. by having a CustomUserData class that inherits from RealmObject) – you don’t need to do that. Messages saying that no rule is set up hints that you’re trying to read from the collection directly, messages that the partitioning key is missing hints that you’re trying to sync it.As CustomUserData is not in the database that you set up sync for, I wouldn’t expect that to work (but you shouldn’t need to do it).",
"username": "Andrew_Morgan"
},
{
"code": "CustomUserDataRealmObjectError:\n\nAction on service 'mongodb-atlas' forbidden: arguments to 'findOneAndReplace' don't match rule criteria\nStack Trace:\n\nFunctionError: no rule exists for namespace 'LogbookData.CustomUserData' at <eval>:16:4(4)\nDetails:\n{\n \"action\": \"findOneAndReplace\",\n \"reason\": \"arguments to 'findOneAndReplace' don't match rule criteria\",\n \"serviceName\": \"mongodb-atlas\",\n \"serviceType\": \"mongodb-atlas\"\n}\n{\n \"arguments\": [\n {\n \"database\": \"LogbookData\",\n \"collection\": \"CustomUserData\",\n \"filter\": {\n \"_id\": \"604a3ba17d7ff51c8c69f15e\"\n },\n \"update\": {\n \"_id\": \"604a3ba17d7ff51c8c69f15e\",\n...\n },\n \"upsert\": true\n }\n ],\n \"name\": \"findOneAndReplace\",\n \"service\": \"mongodb-atlas\"\n}\n",
"text": "Hi Andrew,Thanks for your reply. As far as I can tell I have implemented everything as you suggested and is suggested in the documents, the app is reading and writing data only as suggested and the CustomUserData class does not implement RealmObject. The only thing I am unsure of is what you mean when you say trying to read the collection directly?Many thanksWillEDIT: This is the log entry:Thanks",
"username": "varyamereon"
},
{
"code": "UserData.CustomUserData",
"text": "Hi Will, by reading the collection directly, I mean using the SDK to read or write from the UserData.CustomUserData collection explicitly (rather than vis the custom user data feature) – in which case, the Realm data access rules would be applied. https://docs.mongodb.com/realm/sdk/dotnet/examples/mongodb-remote-access/",
"username": "Andrew_Morgan"
},
{
"code": "app = App.Create(myRealmAppId);\nuser = await app.LogInAsync(Credentials.Anonymous());\n\nmongoClient = user.GetMongoClient(\"mongodb-atlas\");\ndbTracker = mongoClient.GetDatabase(\"tracker\");\ncudCollection = dbTracker.GetCollection<CustomUserData>(\"user_data\");\n\nvar cud = new CustomUserData(user.Id)\n{\n FavoriteColor = \"pink\",\n LocalTimeZone = \"+8\",\n IsCool = true\n};\n\nvar insertResult = await cudCollection.InsertOneAsync(cud);\n",
"text": "Thanks Andrew,The problem is occurring when trying to add data for the first time. I am following the instructions which seem to suggest adding data directly:This is the point where the exception is thrown.Will",
"username": "varyamereon"
},
{
"code": "CustomUserDatamongodb-atlasCustomUserDataUserData.CustomUserDataatlas-custom-user-dataapp = App.Create(myRealmAppId);\nuser = await app.LogInAsync(Credentials.Anonymous());\n\nmongoClient = user.GetMongoClient(\"atlas-custom-user-data\");\ndbUser = mongoClient.GetDatabase(\"UserData\");\ncudCollection = dbUser.GetCollection<CustomUserData>(\"user_data\");\n\nvar cud = new CustomUserData(user.Id)\n{\n FavoriteColor = \"pink\",\n LocalTimeZone = \"+8\",\n IsCool = true\n};\n\nvar insertResult = await cudCollection.InsertOneAsync(cud);",
"text": "Hi Will,that makes sense now.The issue is that Realm Sync is expecting every collection that’s configured in your Atlas service to be synced (and so require the partition key).There’s no reason to sync CustomUserData and so you shouldn’t include it in your mongodb-atlas service. However, you need to configure the CustomUserData collection in your Realm app – I can see why this feels like you’re going round in circles!The solution is to create a second MongoDB service in your Realm app that’s linked to the same Atlas cluster…\nimage1183×798 94.7 KBFrom your Realm app’s perspective, you now have two MongoDB services, but they’re actually working on the same Atlas cluster (so can access the same data).You can now add a rule for the UserData.CustomUserData to your atlas-custom-user-data service…image958×309 29.9 KBYour code for writing to that collection would then look something like this (using the new service name):",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Thanks Andrew!That works great, no sync issues and no errors showing on the Realm portal. The only thing that is slightly annoying is all classes are displayed here still:image219×577 14.9 KBbut I think there is no way around that.Many thanks for your help, all the bestWill",
"username": "varyamereon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Setting up Custom User Data | 2021-03-12T12:10:31.573Z | Setting up Custom User Data | 6,061 |
|
null | [
"security"
] | [
{
"code": "",
"text": "I use MongoDB Realm which is linked to an Atlas cluster. I store sensible user data in this cluster.Do MongoDB employees have access (read or write) to the data in the collections in my cluster?",
"username": "Jean-Baptiste_Beau"
},
{
"code": "",
"text": "Hi Jean-Baptiste,Thanks for the great question. The short answer is no, role based access control and the principle of least privilege prevent MongoDB employees from having read or write access to the data in the collections in your cluster.However, a more complete answer would point out that certain “break glass” scenarios exist in which appropriate MongoDB Production Support employees could leverage metadata or logs in context of recovery from a failure scenario that could in turn contain snippets of sensitive customer data: this is where governance comes in. MongoDB Cloud is a mature cloud platform operated with a governance philosophy in line with our information security management system which in turn adopts the best practices of the ISO-27001, PCI-DSS, SOC-2, and HIPAA standards, as validated by third party auditors.For an in-depth review of our Technical and Operational Security Controls, please review this resource:Technical and Organizational Security MeasuresFor further reading on MongoDB’s industry-leading security capabilities built for financial services, healthcare, and government use cases, I recommend the whitepaper available a Trust Center — MongoDB Cloud Services | MongoDB and in particular would point out the Client-Side Field Level Encryption capability which allows you to configure subsets of your schemas (namely for the data of highest classification level where you’re willing to trade off reduced queryability for guaranteed confidentiality) which ensures that only ciphertext ever enters the MongoDB Cloud trust boundary for those schema subsets.Cheers\n-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "@Andrew_Davidson Thank you for the great, complete answer!",
"username": "Jean-Baptiste_Beau"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Do MongoDB employees have access to the data in my collections? | 2021-03-30T10:04:53.640Z | Do MongoDB employees have access to the data in my collections? | 3,124 |
null | [
"atlas-cluster",
"atlas"
] | [
{
"code": "",
"text": "I’m having a high consumption of system memory on my server, but I can’t quite figure out what is using so much memory. Can someone help me with the command to visualize what is being used in memory?I currently have 4GB of RAM, 3.6GB of RAM is in constant use.",
"username": "Saulo_Lago"
},
{
"code": "",
"text": "Hi @Saulo_Lago,To start to answer this question we need the following information:If not then:Joe.",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "Hi, I am also facing same issue, i am using a atlas cluster M40 with 16G of memory and i am getting that 15 G memory is used while checking with realtime.",
"username": "Ashish_Tiwari1"
},
{
"code": "",
"text": "Hi @Ashish_Tiwari1,How big is the data set in your cluster? What queries are you running on it?Joe",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "Hi @Ashish_Tiwari1 and @Saulo_Lago,Having servers utilising most memory does not necessary means there is a memory bottleneck. If your application operations and overall SLA is not showing any bad signs I would not worry about it.Having said that, since our support are the personal who can analyse specific Atlas performance issues I suggest you consider opening a support request on the Atlas support tab.Best regards,\nPavel",
"username": "Pavel_Duchovny"
}
] | High system memory consumption | 2021-03-14T14:18:37.633Z | High system memory consumption | 4,892 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "",
"text": "For normal mongodb query we can use $gte and it wont check the type but for atlas query it only accept BSON and number.{name:{$gte:“Sample”}}o/p :- the data we get where name start with \" S \" and beyound (ex:name starting with s,t,u-----z)",
"username": "Sagar942150"
},
{
"code": "gte$",
"text": "today, for gte (without $) we require number or date types.The way to accomplish today is to convert the string to numbers for now. Does this make sense?",
"username": "Marcus"
},
{
"code": "",
"text": "No we can’t convert the string to number as it is text",
"username": "Sagar942150"
}
] | $search pipeline to get result between range for string | 2021-03-26T13:33:06.205Z | $search pipeline to get result between range for string | 2,485 |
null | [
"queries",
"data-modeling"
] | [
{
"code": "",
"text": "I my collection there are multiple formats of date in the different documents. i.e. some with “MM/DD/YYY” and some with “MM/DD/YY HH:MM:SS AM” how do I filter out those that match only MM/DD/YYYY format.",
"username": "Vivek_02262"
},
{
"code": "",
"text": "Hello @Vivek_02262, include sample documents showing the actual data - the date field and its different formats. Also, tell if the date field is stored as a date type field or as a string?",
"username": "Prasad_Saya"
}
] | Filter Results based on Data Format | 2021-03-29T20:21:59.267Z | Filter Results based on Data Format | 1,858 |
null | [
"replication",
"monitoring"
] | [
{
"code": "{\n \"name\" : \"mmapv1\",\n \"supportsCommittedReads\" : false,\n \"readOnly\" : false,\n \"persistent\" : true\n}\n[root@DAP1SM04 ~]# mongo DEP1SM03:27727\nMongoDB shell version v3.6.17\nconnecting to: mongodb://DEP1SM03:27727/test?gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"fedcd09f-8a25-45d2-8d0c-c5fa779c9588\") }\nMongoDB server version: 3.6.17\nServer has startup warnings:\n2021-03-16T09:50:49.459+0000 I CONTROL [initandlisten]\n2021-03-16T09:50:49.459+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2021-03-16T09:50:49.459+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2021-03-16T09:50:49.459+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.\n2021-03-16T09:50:49.459+0000 I CONTROL [initandlisten]\n2021-03-16T09:50:50.163+0000 I REPL [replexec-0]\n2021-03-16T09:50:50.163+0000 I REPL [replexec-0] ** WARNING: This replica set was configured with protocol version 0.\n2021-03-16T09:50:50.163+0000 I REPL [replexec-0] ** This protocol version is deprecated and subject to be removed\n2021-03-16T09:50:50.163+0000 I REPL [replexec-0] ** in a future version.\nset02b:PRIMARY> exit\nbye\n[root@DAP1SM04 ~]# mongo DEP1SM04:27727\nMongoDB shell version v3.6.17\nconnecting to: mongodb://DEP1SM04:27727/test?gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"c3392728-2a49-419c-9c6f-7d052b8e63ba\") }\nMongoDB server version: 3.6.17\nServer has startup warnings:\n2021-03-16T09:23:51.388+0000 I CONTROL [initandlisten]\n2021-03-16T09:23:51.388+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2021-03-16T09:23:51.388+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2021-03-16T09:23:51.388+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.\n2021-03-16T09:23:51.388+0000 I CONTROL [initandlisten]\n2021-03-16T09:23:51.547+0000 I REPL [replexec-0]\n2021-03-16T09:23:51.547+0000 I REPL [replexec-0] ** WARNING: This replica set uses arbiters, but readConcern:majority is enabled\n2021-03-16T09:23:51.547+0000 I REPL [replexec-0] ** for this node. This is not a recommended configuration. 
Please see\n2021-03-16T09:23:51.547+0000 I REPL [replexec-0] ** https://dochub.mongodb.org/core/psa-disable-rc-majority-3.6\n2021-03-16T09:23:51.547+0000 I REPL [replexec-0]\n2021-03-16T09:23:51.547+0000 I REPL [replexec-0]\n2021-03-16T09:23:51.547+0000 I REPL [replexec-0] ** WARNING: This replica set was configured with protocol version 0.\n2021-03-16T09:23:51.547+0000 I REPL [replexec-0] ** This protocol version is deprecated and subject to be removed\n2021-03-16T09:23:51.547+0000 I REPL [replexec-0] ** in a future version.\nset02b:SECONDARY> exit\nbye\n[root@DAP1SM04 ~]# mongo DAP1SM03:27727\nMongoDB shell version v3.6.17\nconnecting to: mongodb://DAP1SM03:27727/test?gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"54e9cc35-0043-4272-803e-dd3f90f89f5c\") }\nMongoDB server version: 3.6.17\nServer has startup warnings:\n2021-03-16T09:50:48.430+0000 I CONTROL [initandlisten]\n2021-03-16T09:50:48.430+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2021-03-16T09:50:48.430+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2021-03-16T09:50:48.430+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.\n2021-03-16T09:50:48.430+0000 I CONTROL [initandlisten]\n2021-03-16T09:50:48.630+0000 I REPL [replexec-0]\n2021-03-16T09:50:48.630+0000 I REPL [replexec-0] ** WARNING: This replica set was configured with protocol version 0.\n2021-03-16T09:50:48.630+0000 I REPL [replexec-0] ** This protocol version is deprecated and subject to be removed\n2021-03-16T09:50:48.630+0000 I REPL [replexec-0] ** in a future version.\nset02b:SECONDARY> exit\nbye\n[root@DAP1SM04 ~]# mongo DAP1SM04:27727\nMongoDB shell version v3.6.17\nconnecting to: mongodb://DAP1SM04:27727/test?gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"2e3d35e3-d001-49e5-86e8-5c69da316521\") }\nMongoDB server version: 3.6.17\nServer has startup warnings:\n2021-03-16T09:24:48.417+0000 I CONTROL [initandlisten]\n2021-03-16T09:24:48.417+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2021-03-16T09:24:48.417+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2021-03-16T09:24:48.417+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.\n2021-03-16T09:24:48.417+0000 I CONTROL [initandlisten]\n2021-03-16T09:24:48.886+0000 I REPL [replexec-0]\n2021-03-16T09:24:48.886+0000 I REPL [replexec-0] ** WARNING: This replica set was configured with protocol version 0.\n2021-03-16T09:24:48.886+0000 I REPL [replexec-0] ** This protocol version is deprecated and subject to be removed\n2021-03-16T09:24:48.886+0000 I REPL [replexec-0] ** in a future version.\nset02b:SECONDARY> DAP1SM04:27727\n",
"text": "Hi Team,I have a replica-set running with mmapv1 storage engine and mongo Version 3.6.17. Its a 4Member + 1 arbiter architecture. Please see the ps output from all the VMs for this replica-set. You can see the command args are same.DEP1SM03\nroot 10080 1 31 Mar16 ? 05:32:14 /usr/bin/mongod --ipv6 --nojournal --noprealloc --smallfiles --slowms 500 --storageEngine mmapv1 --bind_ip_all --port 27727 --dbpath=/var/data/sessions.1/set02b --replSet set02b --fork --pidfilepath /var/run/sessionmgr-27727.pid --oplogSize 5120 --logpath /var/log/mongodb-27727.log --logappend --quietDEP1SM04\nroot 17705 1 26 Mar16 ? 04:46:38 /usr/bin/mongod --ipv6 --nojournal --noprealloc --smallfiles --slowms 500 --storageEngine mmapv1 --bind_ip_all --port 27727 --dbpath=/var/data/sessions.1/set02b --replSet set02b --fork --pidfilepath /var/run/sessionmgr-27727.pid --oplogSize 5120 --logpath /var/log/mongodb-27727.log --logappend --quietDAP1SM03\nroot 10053 1 11 Mar16 ? 01:58:58 /usr/bin/mongod --ipv6 --nojournal --noprealloc --smallfiles --slowms 500 --storageEngine mmapv1 --bind_ip_all --port 27727 --dbpath=/var/data/sessions.1/set02b --replSet set02b --fork --pidfilepath /var/run/sessionmgr-27727.pid --oplogSize 5120 --logpath /var/log/mongodb-27727.log --logappend --quietDAP1SM04\nroot 10053 1 1 Mar16 ? 00:18:19 /usr/bin/mongod --ipv6 --nojournal --noprealloc --smallfiles --slowms 500 --storageEngine mmapv1 --bind_ip_all --port 27727 --dbpath=/var/data/sessions.1/set02b --replSet set02b --fork --pidfilepath /var/run/sessionmgr-27727.pid --oplogSize 5120 --logpath /var/log/mongodb-27727.log --logappend --quietBut when i connect to mongo replicas individually only for one Secondary the startup warning shows a warning that readConcern:majority is enabled. So im really confused if the readConcern is really enabled in this replica-set or not.The db.serverStatus().storageEngine shows supportsCommittedReads is false.Is it a bug? or the readConcern:majority is really enabled?",
"username": "venkataraman_r"
},
{
"code": "readConcern",
"text": "readConcern majority is not supported in MMAPv1 engine. Please have a look on below link",
"username": "ROHIT_KHURANA"
},
{
"code": "",
"text": "Ok Then why the mmapv1 replica-set showing this warning.?",
"username": "venkataraman_r"
},
{
"code": "",
"text": "@ROHIT_KHURANA, Do you have a reason why mmap replica-set showing this warning then?",
"username": "venkataraman_r"
},
{
"code": "DEP1SM04",
"text": "DEP1SM04@venkataraman_rAre you geeting this warning for only instance running on DEP1SM04?\nPlease share output of below command\nrs.conf();Thanks\nBraj Mohan",
"username": "BM_Sharma"
},
{
"code": "DEP1SM04",
"text": "DEP1SM04this warning occurrs on arbiter node.",
"username": "ROHIT_KHURANA"
},
{
"code": "This replica set uses arbiters, but readConcern:majority is enabled\n2021-03-16T09:23:51.547+0000 I REPL [replexec-0] ** for this node.\n",
"text": "@venkataraman_r\nPlease have a look on below jira which indicates mongo put this warning wherever we use arbiter nodeWe want to add a startup warning that alerts users if enableReadConcernMajority is set to true on a node that is part of a replica set that contains an arbiterhttps://jira.mongodb.org/browse/SERVER-37557",
"username": "ROHIT_KHURANA"
},
{
"code": "bind_ip_all",
"text": "Hi @venkataraman_r welcome to the community!I believe @ROHIT_KHURANA is correct in this case. It is a startup warning, designed to inform you of any suboptimal settings for production instances.Having said that, I would like to point out some things for your consideration:Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "This replica set uses arbiters, but rea",
"text": "This replica set uses arbiters, but reaThe output i shared is from PRIMARY and SECONDARY. As per your statement, im using arbiter in the RS so it should show for all the SECONDARY. But it shows only for one SEC member.By point is if the warning is correct, then it should show for all 4 members of a RS. but it shown only for one SEC.@kevinadi,Thanks for your reply. Yes we undetstood mongo 3.6 is EOL by next month. I’m in the process of migrating to the latest one. But we are looking this discepency in the existing deployment with 3.6. So raised this question.",
"username": "venkataraman_r"
},
{
"code": "rs.conf()rs.status()mongod",
"text": "Hi @venkataraman_rThe message about readConcern majority should only show up in a PSA (Primary-Secondary-Arbiter) setup as per SERVER-42573. This was implemented in MongoDB 3.6.1, thus you are correct that it should not show up in a 5-node PSSSA configuration like yours.There may be some other underlying issues, like whether the replica set is seeing itself properly (from the output of rs.conf() and rs.status(), for example), whether all mongod binaries involved are verified to be 3.6.17 and not an accidental mix of versions, or other issues. However if everything is verifiably correct and the message still shows up in your deployment, especially only in one node as you observed, it may be a new issue.Having said that, there are some points against this:In short, in this specific case I would not worry about what appear to be an erroneous startup warning if it doesn’t affect the operations of the replica set. I would recommend you to instead migrate from MMAPv1 to the WiredTiger storage engine and upgrade to the latest MongoDB version (currently 4.4.4) so you’ll receive the most up to date version with lots of bugfixes and improvements. Also, please consider turning on Auth for your deployment.Best regards,\nKevin",
"username": "kevinadi"
},
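{
"text": "A hedged diagnostic sketch (editorial addition, not part of the original thread): the shell checks below, run against each member, cross-check the facts discussed above — how many arbiters the configuration actually contains, whether the storage engine supports committed (majority) reads, and the exact binary version of the member. All three calls are standard mongo shell helpers.",
"code": "// count arbiters in the replica set configuration\nvar arbiters = rs.conf().members.filter(function (m) { return m.arbiterOnly; });\nprint('arbiters: ' + arbiters.length);\n// MMAPv1 cannot support committed (majority) reads, so this should print false\nprint('supportsCommittedReads: ' + db.serverStatus().storageEngine.supportsCommittedReads);\n// verify the exact binary version of this member\nprint('version: ' + db.version());",
"username": "editorial_note"
},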
{
"code": "",
"text": "setup have one arbiter node with 4 data nodes.",
"username": "ROHIT_KHURANA"
}
] | readConcern:majority warning shown for mmapV1 engine | 2021-03-17T03:50:22.196Z | readConcern:majority warning shown for mmapV1 engine | 2,857 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Publish produces file MongoDB.Libmongocrypt.dllbut requires MongoDB.LibMongocryptOn runError:\nAn assembly specified in the application dependencies manifest (WebApplication3.deps.json) was not found:\npackage: ‘MongoDB.LibMongocrypt’, version: ‘1.2.0’\npath: ‘lib/netstandard2.0/MongoDB.LibMongocrypt.dll’\n[root@ip-10-0-10-30 publishARM]# vim WebApplication3.deps.json",
"username": "Graeme_Henderson"
},
{
"code": "",
"text": "Hello @Graeme_Henderson, thanks for your report, we see the issue and will fix it soon.\nBest regards, Dima",
"username": "Dmitry_Lukyanov"
},
{
"code": "",
"text": "@Graeme_Henderson, can you please also provide the following details:",
"username": "Dmitry_Lukyanov"
},
{
"code": "",
"text": "environment\nOS Linux running AWS linux 2 instance on t4g ARM machine\nFramework asp dotnet core 2.2 mvc razor pages\nsteps\npublish for linux-arm64\ntransfer files to machine\nset app to executable\n./appname",
"username": "Graeme_Henderson"
},
{
"code": "",
"text": "Hey @Graeme_Henderson ,\nCan you please confirm that your code uses client-side encryption (MongoDB.Libmongocrypt)?\nAlso, had this code worked for you with the previous driver versions?\nBest regards, Dima",
"username": "Dmitry_Lukyanov"
},
{
"code": "",
"text": "No we don’t use client side encryption.",
"username": "Graeme_Henderson"
},
{
"code": "",
"text": "can you please also say whether you were able to do these steps with the previous driver versions (not beta)?",
"username": "Dmitry_Lukyanov"
},
{
"code": "",
"text": "I’m having the same issue on Linux at deployment, I had to go back to 1.11.6 to have it working again. Looking up for a resolution ",
"username": "Eugenio_Blabla"
},
{
"code": "",
"text": "Hey @Graeme_Henderson, can you please check the latest patch release 2.12.1?\nThanks in advance, Dima",
"username": "Dmitry_Lukyanov"
},
{
"code": "",
"text": "On my side I did try the update and release in test environment and it did fail like the last time.",
"username": "Eugenio_Blabla"
},
{
"code": "",
"text": "@Eugenio_Blabla, can you provide the error message? Also, it would be good to have detailed repro steps(as much as possible detailed).",
"username": "Dmitry_Lukyanov"
}
] | Publish issue with .NET 2.12 driver: MongoDB.Libmongocrypt.dll | 2021-03-03T21:38:23.449Z | Publish issue with .NET 2.12 driver: MongoDB.Libmongocrypt.dll | 5,149 |
null | [] | [
{
"code": "",
"text": "Hey Folks!I truly hope you’re all doing well and are staying safe. This is a difficult time for many and I’m proud to say that we at MongoDB are working hard to help those hardest hit and also to empower those fighting the virus. The Covid-19 Help Project is still accepting applications. If you’re interested check out the blog post.I also want to let you know that we have so many ways we can help get the message out about your projects, companies and brands - but we rarely get the opportunity to do it… I know everyone is so busy… but I’d love to help you generate some awareness for your projects.I’m preparing a short survey to get feedback from Startups and Developers. It will also have an invitation to participate in some community activities such as blogging, videos, podcast appearances, AMA’s, Twitch Live coding, Speaking at a virtual event. I encourage you to answer YES to the question about participating in promotional activities for your brand and project. This will enable me to get the ball rolling on some these great activities.In the meantime - a question for you - would any of you actually participate in a Twitch Live Coding Session? I did one with @nraboy and had a blast. I think it might be cool to spotlight a startup developer working on something cool. Or, would you prefer to do a brief Startup Spotlight Zoom session where we chat about your project and let the world know about it. These are just a couple of the options available - let me know what you would find most valuable.Thank you and please be safe and stay well.Regards,\nMike",
"username": "Michael_Lynn"
},
{
"code": "",
"text": "In the meantime - a question for you - would any of you actually participate in a Twitch Live Coding Session? I did one with @nraboy and had a blast. I think it might be cool to spotlight a startup developer working on something cool. Or, would you prefer to do a brief Startup Spotlight Zoom session where we chat about your project and let the world know about it. These are just a couple of the options available - let me know what you would find most valuable.I don’t consider myself to be good enough (or entertaining enough) to do Twitch Live Coding Session. Maybe some day. At some point I could be interested to have some Zoom session and talk about project(s) my startup has been working on.Once project/product is released, I don’t think I will turn down any opportunity to promote it. These times have put things on hold, but I try to look at it as opportunity to prepare even more for better times that are coming. ",
"username": "kerbe"
},
{
"code": "",
"text": "Thanks Kerbe! We’ll be here when you’re ready to promote. Keep me in the loop on progress.Regards,\nMike",
"username": "Michael_Lynn"
},
{
"code": "",
"text": "Hey @Michael_Lynn,community activities such as blogging , videos , podcast appearances , AMA’s , Twitch Live coding , Speaking at a virtual eventThis sounds like a great idea for startups.In the meantime - a question for you - would any of you actually participate in a Twitch Live Coding Session? I did one with @nraboy and had a blast. I think it might be cool to spotlight a startup developer working on something cool. Or, would you prefer to do a brief Startup Spotlight Zoom session where we chat about your project and let the world know about it. These are just a couple of the options available - let me know what you would find most valuable.I think I would prefer the zoom/podcast interview session. I will admit, I would be a bit nervous about a live stream; as well as the zoom/podcast interview (as it would be my first), but less so.A question I have is as follows.\nI am not sure at what level you would want the individual or startup before participation. I am a one-man startup that built a training center management webapp. I have released a version 1 and my first client has been using it since Jan 2020. The client is my electrical union hall (I am a licensed electrician as well). I am in the process of a version 2 which changes from a classic server setup to utilizing serverless technology with AWS and the Serverless Framework. The current version runs as a Docker container on AWS Fargate as a node.js server, using Apollo server to implement the graphql endpoint, with everything stored in MongoDB Atlas (with Charts to display metrics to the client) and React.js on the client.The app allows the organization the ability to manage the data with creating skills and safety courses, which members will be able to sign up for online and attend in class sessions to achieve their certifications. Which can be share electronically (via qrcode, link, email) to employers for verification (employer records) rather than using physical cards!I encourage you to answer YES to the question about participating in promotional activities for your brand and project.Will do then.",
"username": "Natac13"
},
{
"code": "",
"text": "YES! Perfect… Sean - I will reach out via email to get something on the calendar. Check out the previous podcast episodes to get an idea about the flow and content of the episodes. We’re really just looking to have interesting convos with interesting people about interesting technology and it sounds like all three of those things apply! I look forward to chatting with you.",
"username": "Michael_Lynn"
},
{
"code": "",
"text": "would any of you actually participate in a Twitch Live Coding Session?Unfortunately we’re handling private data that includes the DNA of our customer. So even doing a simple video about our app in action requires a lot of masking to not reveal the names of these living people.Sorry have to pass but thanks for askingAndreas",
"username": "Andreas_West"
},
{
"code": "",
"text": "Andreas - thanks for letting me know… Would love to be able to help you amplify your success if we figure out some way of doing that securely.",
"username": "Michael_Lynn"
},
{
"code": "",
"text": "I’m preparing a short survey to get feedback from Startups and Developers. It will also have an invitation to participate in some community activities such as blogging , videos , podcast appearances , AMA’s , Twitch Live coding , Speaking at a virtual event. I encourage you to answer YES to the question about participating in promotional activities for your brand and project. This will enable me to get the ball rolling on some these great activities.Hey Michael, just wanted to find out if this offer still ongoing, it sounds like a great idea, and would love to participate in it.I am currently working on a Cybersecurity solution that helps smart home owners find network vulnerabilities before hackers do.Thanks.",
"username": "Neil_Okikiolu"
},
{
"code": "",
"text": "Is this still running?Tradis.ai (member of MongoDB Startup Program) it’`s building a cryptocurrency trading platform, powered by AI. We are still in Beta/Early access. We are using MongoDB as our principal data store and data platform aka BI. Maybe would be cool thing to have talk ",
"username": "Mario_TradisAI"
}
] | Checking in - Do you want to promote your project, brand or company? | 2020-04-09T18:37:32.343Z | Checking in - Do you want to promote your project, brand or company? | 5,916 |
null | [
"atlas-triggers"
] | [
{
"code": "JSON.stringifyconsole.log’âexports = async (changeEvent) => {\n const property = changeEvent && changeEvent.fullDocument && changeEvent.fullDocument.name\n console.log('the string', property)\n console.log('the json', JSON.stringify({ name: property }))\n}\nname{ \"name\": \"Test’s\" }the string Test’s\nthe json {\"name\":\"Test’s\"}\nthe string Testâs\nthe json {\"name\":\"Testâs\"}\nJSON.stringifyTestâs",
"text": "Inside an Atlas Triggered function, when I use JSON.stringify or console.log a property containing the curly quote character, e.g. ’, the output is changed to â.Configure a Trigger function on a collection and log a property from a document, for example:Create a document in the collection with a curly brace in the name property, e.g. { \"name\": \"Test’s\" }View the log entry for the create.The log entries should show the text using the curly quote, e.g. the output should be:The log entries show some other encoding mechanism:This is not just a problem of the function log itself being encoded incorrectly: our trigger function pushes the JSON.stringify of the document to SQS, and the encoding shows the incorrect Testâs as well for the message on the SQS queue.",
"username": "Timmy_Pandahouse"
},
{
"code": "",
"text": "As of now, this behavior has been corrected, e.g. curly quotes now are left as curly quotes and so on.I initially posted this to the MongoDB Jira board, before it was suggested by support that I post here. https://jira.mongodb.org/browse/SERVER-55450Since I was able to isolate this behavior to the MongoDB Atlas triggered function, I would still like follow-up on what occurred on the MongoDB side, e.g. an incident report, and will comment here if I receive it.",
"username": "Timmy_Pandahouse"
},
{
"code": "",
"text": "This issue has come back again, and I have yet to hear any reply from a MongoDB representative.",
"username": "Timmy_Pandahouse"
},
{
"code": "’â\\u0080\\u0099",
"text": "For reference, the exact encoding transform is from ’ to â\\u0080\\u0099",
"username": "Timmy_Pandahouse"
},
{
"code": "name{ \"name\": \"Test’s\" }name{ \"name\": \"Test’s\" }console.log",
"text": "Create a document in the collection with a curly brace in the name property, e.g. { \"name\": \"Test’s\" }View the log entry for the create.To be more clear here, the exact steps are:",
"username": "Timmy_Pandahouse"
},
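{
"text": "A hedged repro sketch (editorial addition): logging the UTF-16 code units of the property makes the mangling visible, because the single curly quote turns into the three code units of the mis-decoded sequence. This reuses the trigger shape from the original report; nothing else is assumed.",
"code": "exports = async (changeEvent) => {\n  const name = changeEvent && changeEvent.fullDocument && changeEvent.fullDocument.name;\n  console.log('value:', name);\n  // print each code unit in hex to spot e2/80/99-style mangling\n  const codes = Array.from(name || '').map(c => c.charCodeAt(0).toString(16));\n  console.log('codes:', codes.join(' '));\n};",
"username": "editorial_note"
},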
{
"code": "const { get, set } = require('lodash')\nconst stringifyKeys = require('stringify-keys')\n\nfunction fixMongoDbMangling(document) {\n stringifyKeys(document)\n .forEach(key => {\n if (typeof get(document, key) === 'string') {\n set(document, key, get(document, key).replace(/â\\u0080\\u0099/g, '’'))\n }\n })\n}\n",
"text": "This appears to be an intermittent problem. The two times it happened so far are:I’ve been able to make a stopgap fix which takes care of the most visible+ugly character encoding problem, using this (not in the Realm function, it’s in an AWS post-process function):",
"username": "Timmy_Pandahouse"
},
{
"code": "",
"text": "Update on 2021-03-26 13:01:45It has been confirmed that this is an issue on our end and we are actively working on a fix.I have passed this information along to our Realm Engineering Team to help in their efforts in resolving this issue.Then a later update 2021-03-29 00:47:21My colleagues on the Realm Engineering Team believe that they fixed the issue",
"username": "Timmy_Pandahouse"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Strings are encoded incorrectly in Atlas Triggered function | 2021-03-23T18:39:09.404Z | Strings are encoded incorrectly in Atlas Triggered function | 3,160 |
null | [
"aggregation",
"queries",
"data-modeling",
"one-to-one-relationship"
] | [
{
"code": "",
"text": "I have a one to many relationship of two Collections say A to B. How can I i show the desired output in one document for each id.\nFor example, I have/Collection A/\n{\n“a_Id”: “abc”,\n“name”: “xyz”,\n“age”: 5\n}/Collection B/\n{\n“b_id”: “abc”,\n“FeeAmount”: 800000,\n“invoiceNumber”: “A10”,\n“Date”: “2021-10-29T00:00:00.000+04:00”,\n“PaidAmount”: 200000\n},\n{\n“b_id”: “abc”,\n“FeeAmount”: 90,\n“invoiceNumber”: “A20”,\n“Date”: “2021-10-29T00:00:00.000+04:00”,\n“PaidAmount”: 20\n}How can I achieve the following output after lookup on base of id?\nThis is 1 document per id/Desired OutPut/\n{\n“name”: “xyz”,\n“age”: 5\n“availableLimitAmount”: 800000,\n“FeeAmount”: 800000,\n“invoiceNumber”: “A10”,\n“Date”: “2021-10-29T00:00:00.000+04:00”,\n“PaidAmount”: 200000\n},\n{\n“name”: “xyz”,\n“age”: 5\n“FeeAmount”: 90,\n“invoiceNumber”: “A20”,\n“Date”: “2021-10-29T00:00:00.000+04:00”,\n“PaidAmount”: 20\n}",
"username": "MWD_Wajih_N_A"
},
{
"code": "db.collectionB.aggregate([{$lookup: {\n from: 'collectionA',\n localField: 'a_id',\n foreignField: 'b_id',\n as: 'lookup'\n}}, {$project: {\n \"name\" : { $first : \"$lookup.name\" },\n \"age\" : { $first : \"$lookup.age\" },\n FeeAmount : 1,\n Date : 1,\n PaidAmount : 1,\n availableLimitAmount : 1\n}}])\n{ _id: ObjectId(\"6061d50144567725448f109b\"),\n FeeAmount: 800000,\n Date: '2021-10-29T00:00:00.000+04:00',\n PaidAmount: 200000,\n name: 'xyz',\n age: 5 }\n{ _id: ObjectId(\"6061d50144567725448f109c\"),\n FeeAmount: 90,\n Date: '2021-10-29T00:00:00.000+04:00',\n PaidAmount: 20,\n name: 'xyz',\n age: 5 }\n",
"text": "Hi @MWD_Wajih_N_A,The following aggregation lookup can do the requested… Its important to note that using a lookup can potentially have performance overhead and we suggest to design the schema to avoid lookups as much as possible.RESULT:Best regards,\nPavel",
"username": "Pavel_Duchovny"
}
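,
{
"text": "A hedged alternative sketch (editorial addition), following the advice above to avoid lookups where possible: if the invoice documents are always read together with their parent, embedding them as a bounded array removes the join entirely, and $unwind reproduces the one-document-per-invoice shape. Collection and field names follow the examples in this thread.",
"code": "db.collectionA.insertOne({\n  _id: 'abc',\n  name: 'xyz',\n  age: 5,\n  invoices: [ // bounded array of invoice sub-documents\n    { FeeAmount: 800000, invoiceNumber: 'A10', Date: ISODate('2021-10-29'), PaidAmount: 200000 },\n    { FeeAmount: 90, invoiceNumber: 'A20', Date: ISODate('2021-10-29'), PaidAmount: 20 }\n  ]\n});\n// one output document per invoice, without a $lookup\ndb.collectionA.aggregate([ { $unwind: '$invoices' } ]);",
"username": "editorial_note"
}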
] | One to Many relationship join and project | 2021-03-26T14:01:41.488Z | One to Many relationship join and project | 6,889 |
[
"data-modeling"
] | [
{
"code": "",
"text": "I design a MongoDB collection where I am planning to use mongo ObjectID as a Document name. The ObjectID is a PostID, I want to track all the userID who liked or disliked the post.eo63rd0ixwp61960×592 116 KB",
"username": "Kellen"
},
{
"code": "{\n\"_id\" : ...,\n\"title\" : ...,\nlikes : 100,\ncomments: 10\n...\n}\n{\n _id : ... ,\n postId : ... ,\nuserId : ... ,\nuserName : ... ,\navatarLink ...\n}\n{\n _id : ... ,\n postId : ... ,\narraySize : 50,\n users : [ {\n userId : ... ,\n userName ... ,\n avatarLink ...\n ],\nhasNext : true\n}\n",
"text": "Hi @Kellen,Welcome to MongoDB Community,If I understood correctly you are looking for a design pattern to store those likes per post?If my assumption is correct I think that storing all users that might like a post in the post document is risky , as you might have an unbounded array and potentially reach 16MB limit.What you should keep in the post document the number of likes/comments that the post have:Those should be updated as your likes are inserted.What I think you should consider is 2 options:I suggest to read the following blogs:Please let me know if that makes sense.Best regards,\nPavel",
"username": "Pavel_Duchovny"
}
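,
{
"text": "A hedged sketch (editorial addition) of the bucket option described above: upsert a like into a bucket that still has room, bump its counter atomically, and keep the denormalized like count on the post in sync. The postId/userId/userName/avatarLink variables are placeholders, not a fixed API.",
"code": "db.likes.updateOne(\n  { postId: postId, arraySize: { $lt: 50 } }, // find a bucket with room\n  {\n    $push: { users: { userId: userId, userName: userName, avatarLink: avatarLink } },\n    $inc: { arraySize: 1 }\n  },\n  { upsert: true } // start a new bucket when all existing ones are full\n);\n// keep the counter stored on the post document in sync\ndb.posts.updateOne({ _id: postId }, { $inc: { likes: 1 } });",
"username": "editorial_note"
}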
] | Is ObjectID a bad practice as field name | 2021-03-29T07:02:03.475Z | Is ObjectID a bad practice as field name | 2,783 |
|
null | [
"mongodb-shell",
"server",
"installation"
] | [
{
"code": "",
"text": "I have been working on getting connected to my db for ages but I can’t seem to basically (I am on mac) I installed homebrew which worked then installed mongodb but when I type mongo at all I get dyld: Symbol not found: _clock_getres\nReferenced from: /usr/local/bin/mongo (which was built for Mac OS X 10.12)\nExpected in: /usr/lib/libSystem.B.dylib\nThis is because i am on 10.10 mac so how can I fix this because i need mongodb!",
"username": "Areze_F"
},
{
"code": "",
"text": "What is the version you are trying to install?Please check this jira tickethttps://jira.mongodb.org/browse/SERVER-43249",
"username": "Ramachandra_Tummala"
},
{
"code": "mongomongodump",
"text": "Hi @Areze_F,MongoDB 3.6 was the last server & shell release to support Mac OS X 10.10 (Yosemite).Since you are using Homebrew to install, try:brew uninstall mongodb-community\nbrew install [email protected] first command removes the latest version of MongoDB (which requires a newer macOS) and the second should install MongoDB 3.6 server and command-line tools (mongo, mongodump, etc).Hope that helps!Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to install on low version of mac | 2021-03-23T20:15:25.601Z | How to install on low version of mac | 3,568 |
null | [] | [
{
"code": "",
"text": "Hello people,I’m Mohammad-Ali, doing MongoDB since 2018. I would be glad to help with MongoDB- and Mongoose-related questions.",
"username": "aerabi"
},
{
"code": "",
"text": "Hi Mohammad,Welcome to the MongoDB community! That’s awesome to hear, we have a lot of people using Mongoose and your help will be greatly appreciated Looking forward to seeing you around.",
"username": "ado"
},
{
"code": "",
"text": " Welcome to the MongoDB Community @aerabi and g’day from Sydney, Australia!Are you able to share a bit more about what you’re currently working on with MongoDB?We have community members from around the world, but Germany is home to some very keen community contributors including:@michael_hoeller from Tübingen: 🌟 Moin moin from Michael. Michael is a MongoDB Champion and also co-organises the DACH Virtual Community.@Arkadiusz_Borucki from Munich: 🌟 Hello from Arkadiusz Borucki (also a MongoDB Champion!).@Philipp_Wuerfel from Berlin: Hello - Hello and happy new year from Berlin (future MongoDB Champion? )Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hello @ado and @Stennie_X, and thanks for the kind words.So, I started working in my current company in mid-2019, when I introduced MongoDB there (as I had previous experiences with it, e.g. at Picnic in Holland).My interaction with MongoDB can be classified into 3 categories:P.S: This Champions program is very sexy. I’ll start working out to attend the tournament. Liebe Grüße,\nMohammad-Ali",
"username": "aerabi"
},
{
"code": "",
"text": "Hello @aerabi, welcome to the forum your set of experience reads interesting, for sure this can help others here in the forum. Good to know that you seem to have deeper knowledge in mongoose. Recently I come in every second project across the question if we need mongoose any more, though I have not seen this question in the forum. This will become an interesting communication.Regards, and warm greetings further south \nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "welcome to the forum @aerabi ! ",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "Hello @michael_hoeller and @Arkadiusz_Borucki, and thanks for the warm welcomes.Recently I come in every second project across the question if we need mongoose any more, though I have not seen this question in the forum. This will become an interesting communication.I agree, @michael_hoeller, Mongoose is now so much the default option people usually won’t ask themselves if they really need it. @Michael_Lynn has done an interesting interview with the developer of Mongoose and has opened a thread on it here. The link to the original interview is also presented. We could probably discuss there when to use Mongoose and when not to.",
"username": "aerabi"
}
] | Greetings from Germany | 2021-03-17T16:17:12.025Z | Greetings from Germany | 5,237 |
[
"atlas-functions",
"stitch"
] | [
{
"code": "usersUser ID FielduserIdidusersdb.users.insertOne({userId: ObjectId('copied_id'), description: \"something\"})userId",
"text": "Hi!\nI followed steps in stitch documentation:\nCutom User DataIn the Users section of Mongo Stitch panel I enabled custom data. I chose database, collection ( users ) and stated User ID Field to be userId . Then I deployed changes.\n\nCustomUserData1243×563 30.2 KB\nI have a created and confirmed user.I copied the user’s id and inserted the following document to the users collection db.users.insertOne({userId: ObjectId('copied_id'), description: \"something\"}).The document exists and I can see it when querying the collection. The userId field has correct ObjectId() value (in the correct ObjectId format).Nonetheless, when I log in with the user (I log in via Node.js SDK) the customData property of the userObject is empty. It should contain at least the description field.What can be the reason? Has anyone had a similar problem?\nCheers,\nEryk",
"username": "Eryk_Czajkowski"
},
{
"code": "",
"text": "+1 having this problem too. I’ve reached out to support and will post their reply here also.",
"username": "Kieran_Peppiatt"
},
{
"code": "const speaksEnglish = user.profile.customData.primaryLanguage === \"English\";const speaksEnglish = user.customData.primaryLanguage === \"English\";",
"text": "Hi Folks – Sorry to hear that you’re having trouble accessing custom user data within Stitch. After scanning the documentation looks like there might be a typo in the sample code, we believe –const speaksEnglish = user.profile.customData.primaryLanguage === \"English\";Should be –const speaksEnglish = user.customData.primaryLanguage === \"English\";We’ve filed a ticket to update our documentation. If that doesn’t resolve your issue, would you mind either posting or sending me directly an example document from your users collection (with any PII removed).",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "same issue here… pebcak I fixed it by setting userId as String, not ObjectId.\n(not seen in doc btw)Hoping it’s not too late ",
"username": "erwan_oger"
},
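{
"text": "A hedged sketch (editorial addition) of the fix reported above: insert the custom-data document with the user id stored as a plain string (the id exposed by the SDK is already a string), instead of wrapping it in ObjectId.",
"code": "// e.g. from the SDK: const userId = Stitch.defaultAppClient.auth.user.id; // a string\ndb.users.insertOne({ userId: 'copied_id', description: 'something' }); // string, not ObjectId('copied_id')",
"username": "editorial_note"
},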
{
"code": "",
"text": "here is what I did to retrieve the custom data \n1- create a function in stitch functions.\nthe function you create looks like this:\nexports = function(arg){return context.user.custom_data;\n};2- you give it a name example: myFunction3.you call the function inside your code like so:\nconst client = Stitch.defaultAppClient;\nclient.callFunction(‘myFunction’).then(result=>result).catch(err=>err)I hope this answers your questions, good luck ",
"username": "Ahlam_bey"
},
{
"code": "",
"text": "Changing from ObjectId to String worked for me as well, thanks!",
"username": "Renato_Campos"
}
] | Stitch: Can't make custom user data to appear in userObject.customData | 2020-03-17T18:03:15.720Z | Stitch: Can’t make custom user data to appear in userObject.customData | 4,924 |
|
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hello guys, iam working on the design of a LIMS database . I should implement a simplified version, which is more common with a Booking System. So every user should be able to Book a resource ( Classroom, Waffer Machine etc). Every user should be able to see its Bookings and the admin should be able to see all bookings per user. The admin should also be able to see Bookings per Resource, and of course the web app should be able to retrieve booked dates of a specific resource in order to prevent overlaped bookings for the same resource. So my schema is this:Users {\n_Id: ObjectId\nname: String\nemail: String\nhashed_password: String\nphone: String\nrole: String, enum:[“admin”,“user”]\n}Bookings{\n_Id: ObjectId\nuserID: reference User\ndate_started:Date\ndate_finished:Date\nProject_title: String\nProject_description: String\ntotal_cost: Number\n}Resources{\n_Id: ObjectId\nname: String\nstatus: String, enum [“available”,“reserved”]\ncost_per_day: Number\nphotoURL: String\nreservation: [\nBookingID: reference Booking\ndate_started: Date\ndate_finished: Date\n}I would like to hear your opinions, if this schema satisfies both the functional requirements and anti-patterns hints, like unbounded arrays etc",
"username": "petridis_panagiotis"
},
{
"code": "usersresourcesusers: \n _Id: ObjectId,\n name: String,\n ..., // other info\n role: String, enum: [ “admin”, “user” ],\n bookings: [\n { booking_id: <>, booking_date: <>, from_date: <>, to_date: <>, resource_id: <>, resource_name: <>, project_details: { ... }, total_cost: <> },\n { ... },\n ...\n ]\nbookingsbooking_idresource_idusersusersresourcesresource_idresource_nameresources:\n _Id: ObjectId,\n name: String,\n status: String, enum [ “available”, “reserved” ] ,\n cost_per_day: Number,\n photoURL: String,\n reservations: [\n { user_id: <user reference>, booking_id: <booking reference> },\n { ... },\n ...\n ]\nreservationsbookingsusersresources$lookupresourcesusersusersresourcesbookings",
"text": "Hello @petridis_panagiotis, welcome to the MongoDB Community forum!In general, I considered these two factors - the amount of data and the kind of queries. These specify how easily you can access the data using simple queries.My suggestions are based upon:I am going with two collections - the users and the resources.As you see I have a bookings array field with booking information. Each booking has its own unique id (the booking_id), the resource_id, the from and to booking dates, and other details.Your queries on users are:This model will allow query the users collection to perform both the queries - without accessing another collection. Also, note that we are including the resources collection document fields resource_id and resource_name for each booking; this is some data duplication and I think it is tolerable.Note the reservations array field - it stores each bookings reference and its users reference. This is to fetch the relevant user and booking details for queries.Your queries on resources are:Both the queries will do an aggregation “join” operation (uses $lookup stage) on the resources and users collections.With this model, when a new booking is created you will be creating booking data and updating both the users and resources collections.The bookings as a separate collection can be useful where you have specific queries on this collection only - for example, get all the bookings in the last one month.",
"username": "Prasad_Saya"
}
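,
{
"text": "A hedged sketch (editorial addition) for the overlap check raised in the question: two date ranges overlap exactly when each one starts before the other ends. This uses the reservation shape from the original schema (dates inside the resource's reservation array); resourceId, newStart and newEnd are placeholders.",
"code": "var clash = db.resources.findOne({\n  _id: resourceId,\n  reservation: {\n    $elemMatch: {\n      date_started: { $lt: newEnd },    // existing booking starts before the new one ends\n      date_finished: { $gt: newStart }  // and ends after the new one starts\n    }\n  }\n});\nif (clash) { print('resource already booked in that period'); }",
"username": "editorial_note"
}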
] | Schema Design Laboratory Information Management System | 2021-03-23T22:14:28.622Z | Schema Design Laboratory Information Management System | 3,185 |
null | [] | [
{
"code": "",
"text": "Many customers are currently reviewing mongodb, and most of them are considering the following requirements from the RDB perspective:[Requirement]Is there anything we can consider or refer to regarding the relevant contents?",
"username": "Kim_Hakseon"
},
{
"code": "storage",
"text": "Hi @Kim_Hakseon,most of them are considering the following requirements from the RDB perspectiveI’d recommend SQL to MongoDB: An RDBMS Migration Guide as a starting point. It is difficult to suggest other resources without more information on your requirements.There are many different resources for data modelling. @michael_hoeller’s post has a handy round-up to get you started: How do I model relationships with MongoDB? - #2 by michael_hoeller.There aren’t a lot of configuration options to affect file-level options or data locality, but I think the general categories would be:If you’re looking for other details can you provide more information on the DB system and features you are trying to compare to?Do you have any more specific details? DB design could include a broad range of topics such as schema design, indexes, production tuning, etc. Efficiency depends on your use case requirements (for example, you may prefer efficient reads over efficient writes or some other interpretation).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | References to requirements | 2021-03-29T01:39:25.233Z | References to requirements | 2,535 |
null | [
"configuration"
] | [
{
"code": "",
"text": "I’m looking at a config file.storage:wiredTiger:configString: <.String>\ncheckpointSizeMB: <.Int>\nstatisticsLogDelaySecs: <.Int>I saw these three options.\nI looked up the manual for exactly what this option says, but it didn’t come out, so I’d like to know the description of these options.\nOr are these options real?",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "One of the param is superseded by a new param and it is made hidden as per this ticket\nOthers not surehttps://jira.mongodb.org/browse/DOCS-7581",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Kim_Hakseon,Those options are not listed as per the official documentation in storage options for MongoDB 4.4. If you don’t mind, could you tell us where did you find those options, and for what MongoDB version?Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "I asked because it was in the sample config file that I received somewhere to study.",
"username": "Kim_Hakseon"
}
] | About config file options | 2021-03-25T01:20:14.270Z | About config file options | 2,736 |
null | [
"aggregation",
"queries",
"python"
] | [
{
"code": "# [\n# {\n# \"$match\": {\n# \"timestamp1\": {\"$gte\": datetime.strptime(\"2020-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n# \"$lte\" :datetime.strptime(\"2020-01-01 01:05:00\", \"%Y-%m-%d %H:%M:%S\")}\n# }\n# },\n# {\n# \"$group\": {\n#\n# \"_id\": {\"$dateToString\": { \"format\": \"%Y-%m-%d %H\", \"date\": \"$timestamp1\" }},\n# \"max_id13\": {\n# \"$max\": \"$id13\"\n# }\n# }\n# },\n#\n# {\n# \"$project\": {\n# \"_id\":0,\n# \"day\":\"$_id\",\n# \"max_id13\":1\n# }\n# },\n# {\"$sort\": {\"day\": 1}}\n# ]\n# ).explain()['executionStats'])\nAttributeError: 'CommandCursor' object has no attribute 'explain'",
"text": "I am trying to explain but it seems i do something wrong.\nMy query:Output: AttributeError: 'CommandCursor' object has no attribute 'explain'\nAny help?Thanks in advance!",
"username": "harris"
},
{
"code": "explainexplainagg_pipeline = [\n {\n \"$match\": {\n \"timestamp1\": {\"$gte\": datetime.strptime(\"2020-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\" :datetime.strptime(\"2020-01-01 01:05:00\", \"%Y-%m-%d %H:%M:%S\")}\n }\n },\n {\n \"$group\": {\n \"_id\": {\"$dateToString\": { \"format\": \"%Y-%m-%d %H\", \"date\": \"$timestamp1\" }},\n \"max_id13\": {\n \"$max\": \"$id13\"\n }\n }\n },\n {\n \"$project\": {\n \"_id\":0,\n \"day\":\"$_id\",\n \"max_id13\":1\n }\n },\n { \"$sort\": {\"day\": 1} }\n]\n\nexplain_output = db.command('aggregate', 'collection_name', pipeline=agg_pipeline, explain=True)\n\npprint.pprint(explain_output)\n",
"text": "Hello @harris,The explain method on the aggregation pipeline can be used as follows. Note that the command collection.aggregate doesn’t support the explain in PyMongo - you need to use the db.command syntax as shown in the example.",
"username": "Prasad_Saya"
},
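{
"text": "A hedged note (editorial addition): in the mongo shell the equivalent is the explain() helper on the collection, which accepts the same pipeline — shown here with illustrative ISODate bounds.",
"code": "db.collection.explain('executionStats').aggregate([\n  { $match: { timestamp1: { $gte: ISODate('2020-01-01T00:05:00Z'),\n                            $lte: ISODate('2020-01-01T01:05:00Z') } } },\n  { $group: { _id: { $dateToString: { format: '%Y-%m-%d %H', date: '$timestamp1' } },\n              max_id13: { $max: '$id13' } } },\n  { $project: { _id: 0, day: '$_id', max_id13: 1 } },\n  { $sort: { day: 1 } }\n]);",
"username": "editorial_note"
},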
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Explain on aggregate function with python pymongo | 2021-03-28T15:54:02.725Z | Explain on aggregate function with python pymongo | 8,709 |
null | [
"kubernetes-operator"
] | [
{
"code": "\"level\":\"info\",\"ts\":1616526722.9571285,\"caller\":\"pem/secret.go:42\",\"msg\":\"secret abecorn-openshift-cert doesn't exist yet\",\"ReplicaSet\":\"trade-services-core/abecorn-openshift\"}\n{\"level\":\"info\",\"ts\":1616526723.1438038,\"caller\":\"operator/mongodbreplicaset_controller.go:149\",\"msg\":\"Updated StatefulSet for replica set\",\"ReplicaSet\":\"trade-services-core/abecorn-openshift\"}\n{\"level\":\"info\",\"ts\":1616526723.143847,\"caller\":\"agents/agents.go:80\",\"msg\":\"Waiting for agents to register with OM\",\"ReplicaSet\":\"trade-services-core/abecorn-openshift\",\"statefulset\":\"abecorn-openshift\",\"agent hosts\":[\"abecorn-openshift-0.abecorn-openshift-svc.trade-services-core.svc.cluster.local\",\"abecorn-openshift-1.abecorn-openshift-svc.trade-services-core.svc.cluster.local\",\"abecorn-openshift-2.abecorn-openshift-svc.trade-services-core.svc.cluster.local\"]}\n{\"level\":\"info\",\"ts\":1616526724.675965,\"caller\":\"om/automation_status.go:40\",\"msg\":\"Waiting for MongoDB agents to reach READY state...\",\"ReplicaSet\":\"trade-services-core/abecorn-openshift\",\"processes\":[\"abecorn-openshift-0\",\"abecorn-openshift-1\",\"abecorn-openshift-2\"]}\n{\"level\":\"info\",\"ts\":1616526749.488708,\"caller\":\"om/automation_status.go:57\",\"msg\":\"MongoDB agents have reached READY state\",\"ReplicaSet\":\"trade-services-core/abecorn-openshift\"}\n{\"level\":\"error\",\"ts\":1616526749.6907418,\"caller\":\"workflow/failed.go:67\",\"msg\":\"Failed to create/update (Ops Manager reconciliation phase): Status: 409 (Conflict), ErrorCode: CANNOT_STOP_BACKUP_INVALID_STATE, Detail: Cannot stop backup unless the cluster is in the STARTED state.\",\"ReplicaSet\":\"trade-services-core/abecorn-openshift\",\"stacktrace\":\"github.com/10gen/ops-manager-kubernetes/pkg/controller/operator/workflow.failedStatus.Log\\n\\t/go/src/github.com/10gen/ops-manager-kubernetes/pkg/controller/operator/workflow/failed.go:67\\ngithub.com/10gen/ops-manager-kubernetes/pkg/controller/operator.(*ReconcileCommonController).updateStatus\\n\\t/go/src/github.com/10gen/ops-manager-kubernetes/pkg/controller/operator/common_controller.go:132\\ngithub.com/10gen/ops-manager-kubernetes/pkg/controller/operator.(*ReconcileMongoDbReplicaSet).Reconcile\\n\\t/go/src/github.com/10gen/ops-manager-kubernetes/pkg/controller/operator/mongodbreplicaset_controller.go:155\\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Cont...\n{\"level\":\"info\",\"ts\":1616526759.7401872,\"caller\":\"operator/mongodbreplicaset_controller.go:72\",\"msg\":\"-> ReplicaSet.Reconcile\",\"ReplicaSet\":\"trade-services-core/abecorn-openshift\"}\n{\"level\":\"info\",\"ts\":1616526759.740251,\"caller\":\"operator/mongodbreplicaset_controller.go:73\",\"msg\":\"ReplicaSet.Spec\",\"ReplicaSet\":\"trade-services-core/abecorn-openshift\",\"spec\":{\"version\":\"3.6.22\",\"exposedExternally\":true,\"credentials\":\"mongo-keys\",\"opsManager\":{\"configMapRef\":{\"name\":\"mongo-config\"}},\"cloudManager\":{\"configMapRef\":{}},\"persistent\":false,\"type\":\"ReplicaSet\",\"backup\":{\"mode\":\"disabled\"},\"agent\":{\"startupOptions\":null},\"members\":3,\"podSpec\":{},\"security\":{\"tls\":{\"secretRef\":{}}},\"connectivity\":{}},\"desiredReplicas\":3,\"isScaling\":false}\n{\"level\":\"info\",\"ts\":1616526759.7403138,\"caller\":\"operator/mongodbreplicaset_controller.go:74\",\"msg\":\"ReplicaSet.Status\",\"ReplicaSet\":\"trade-services-core/abecorn-openshift\",\"status\":{\"phase\":\"Reconciling\",\"message\":\"Failed to 
create/update (Ops Manager reconciliation phase): Status: 409 (Conflict), ErrorCode: CANNOT_STOP_BACKUP_INVALID_STATE, Detail: Cannot stop backup unless the cluster is in the STARTED state.\",\"lastTransition\":\"2021-03-23T19:12:39Z\",\"observedGeneration\":1,\"version\":\"\"}}\n{\"level\":\"info\",\"ts\":1616526759.9033709,\"caller\":\"connection/opsmanager_connection.go:26\",\"msg\":\"Using Ops Manager version v20210309\",\"ReplicaSet\":\"trade-services-core/abecorn-openshift\"}",
"text": "I see the following in the operator pod:\nIt seems I’m missing some step that isn’t in the documentation about getting the agents to come up and allow the mongodb resource to finish reconciling. The mongodb custom resource stays in the “reconciling” phase forever.",
"username": "Dean_Peterson"
},
{
"code": "",
"text": "I would open a support ticket.",
"username": "Albert_Wong"
}
] | Mongodb enterprise kubernetes operator keeps reconciling | 2021-03-23T19:36:23.085Z | Mongodb enterprise kubernetes operator keeps reconciling | 3,239 |
null | [
"containers",
"ops-manager",
"kubernetes-operator"
] | [
{
"code": "",
"text": "I’ve been trying to deploy the MongoDB Kubernetes Operator on Openshift 4.4 in the Azure cloud.\nWhen the operator creates the Ops Manager POD from the MongoDBOpsManager CRD, the pod quits with the error:failed to create symbolic link ‘/data/journal’: Permission deniedI’ve googled a solution, which is to to create Multiple PVCs to bind the “Data, Journal and Logs” directories. Even by doing so, the solution didn’t work and had the Ops pod had the same error.",
"username": "Amitai_Gz"
},
{
"code": "",
"text": "Can you open a ticket with MongoDB and post what is the ticket name? I personally don’t have an OCP environment in Azure but it works fine with AWS.",
"username": "Albert_Wong"
},
{
"code": "",
"text": "Same issue, need help !!",
"username": "YUDI_TATA"
},
{
"code": "WARNING\n\nGrant your containers permission to write to your Persistent Volume. The Kubernetes Operator sets fsGroup = 2000 in securityContext This makes Kubernetes try to fix write permissions for the Persistent Volume. If redeploying the resource does not fix issues with your Persistent Volumes, contact MongoDB support.\n",
"text": "same issue for me, docs say it, but i cant solve it, i am using digitalocean cluster",
"username": "David_Pestana"
}
] | Mongodb Kubernetes OpsManager Pod Permissions Error | 2020-06-29T14:36:14.551Z | Mongodb Kubernetes OpsManager Pod Permissions Error | 3,755 |
null | [] | [
{
"code": "",
"text": "Is there any way to be inform about new release of mongo db with email ?",
"username": "naeimeh_mhm"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @naeimeh_mhm!What MongoDB product releases are you interested in?You have a few options to follow new releasees at the moment:Subscribe to notifications for the Product & Driver Announcements category in the forum. This includes server, driver, and other product releases. Some announcements will be for beta or release candidate versions, which is helpful if you want to track upcoming releases.Subscribe to MongoDB Release Announcements. This includes new announcements of MongoDB Enterprise server (which is released concurrently with the same version of MongoDB Community server) as well as enterprise tools like the MongoDB Connector for BI and Ops Manager.Regards,\nStennie",
"username": "Stennie_X"
}
] | Mongo DB Release Info | 2021-03-23T17:58:35.065Z | Mongo DB Release Info | 4,013 |
null | [
"monitoring"
] | [
{
"code": "",
"text": "Hi,We recently had a problem where after performing a resize operation on a EBS volume, the volume completely stopped responding to MongoDB queries for some minutes. We recovered from this state by forcing a restart on the MongoDB primary host, which triggered the failover to a secondary. We did try to execute a stepdown on the primary, but it did not have any effect, which forced us to move to the restart the server option.There was no automatic failover (i.e. the primary stepping down on its own) because even though the data volume was not responding, the mongo process was still up and running and responding to health checks from the secondaries.So, to summarise, the volume was not responding, no query was being successfully executed, the CPU on the host was showing more than 50% in io-wait, and the manual stepdown did not work, only the host restart.While this of course is a failure in the underlying hardware, is there a way to configure Mongo to failover in case the data volume shows this type of behaviour/failures?Thanks",
"username": "Joao_Santos"
},
{
"code": "mongodmongodwatchdogPeriodSecondswatchdogPeriodSecondsmongodmongod",
"text": "While this of course is a failure in the underlying hardware, is there a way to configure Mongo to failover in case the data volume shows this type of behaviour/failures?Welcome to the MongoDB Community @Joao_Santos!There is a Storage Node Watchdog feature you can enable to detect filesystem unresponsiveness and terminate the mongod process if a critical directory path is unresponsive:By default, the Storage Node Watchdog is disabled. You can only enable the Storage Node Watchdog on a mongod at startup time by setting the watchdogPeriodSeconds parameter to an integer greater than or equal to 60. However, once enabled, you can pause the Storage Node Watchdog and restart during runtime. See watchdogPeriodSeconds parameter for details.I would only use this wth a replica set member. If mongod is terminated by the watchdog process due to unresponsive I/O, mongod may not be able to cleanly restart. The documentation page (linked above) has more details.Regards,\nStennie",
"username": "Stennie_X"
},
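{
"text": "A hedged sketch (editorial addition) of enabling the Storage Node Watchdog described above: it can only be enabled at mongod startup, and once enabled it can be paused and adjusted at runtime.",
"code": "// at startup (value must be an integer >= 60):\n//   mongod --setParameter watchdogPeriodSeconds=60 ...\n\n// at runtime, once enabled, adjust the period:\ndb.adminCommand({ setParameter: 1, watchdogPeriodSeconds: 120 });\n// or pause the watchdog without restarting:\ndb.adminCommand({ setParameter: 1, watchdogPeriodSeconds: -1 });",
"username": "editorial_note"
},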
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | No stepdown on EBS Volume failure | 2021-03-24T19:37:16.529Z | No stepdown on EBS Volume failure | 1,956 |
null | [] | [
{
"code": "",
"text": "protobuf is so interesting, can mongodb use protobuf?",
"username": "anlex_N"
},
{
"code": ".proto",
"text": "Hi @anlex_N,There’s a brief mention on the BSON spec homepage:BSON can be compared to binary interchange formats, like Protocol Buffers. BSON is more “schema-less” than Protocol Buffers, which can give it an advantage in flexibility but also a slight disadvantage in space efficiency (BSON has overhead for field names within the serialized data).The protobuf (Protocol Buffers) serialisation format uses a .proto description of the data structure you want to work with. BSON uses schema-on-read: field names and data types are embedded in the BSON document.The schema-on-read aspect provides an advantage in distributed data systems: there is no central schema catalog to maintain or refer to, and a document contains all the information needed for deserialisation.You could use protobuf (or other serialisation formants) in your client applications, but data ultimately has to be serialised to/from BSON for the current MongoDB Wire Protocol.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What do you think of protobuf? | 2021-03-23T07:29:51.922Z | What do you think of protobuf? | 7,864 |
null | [] | [
{
"code": "",
"text": "Hi\nwe had an issue with one of our legacy MongoDB server\nthat was running on single instance ,\nThe server had unexpected shut down (power issue) and fail to start after\nis there a way we can recover the data from the files , as we cannot bring back the server\nwe try reover etc … no success[initandlisten] Assertion: 28595:-31803: WT_NOTFOUND: item not found\nSTORAGE [initandlisten] exception in initAndListen: 28595 -31803: WT_NOTFOUND: item not found, terminatingany idea / help / suggestion / support will be appreciated",
"username": "Rami_Avital"
},
{
"code": "--repair",
"text": "Hi @Rami_AvitalYou can try mongod’s --repair option. It is very likely that the files are now corrupted and your only option is to restore from a backup.This is one of the scenarios that running a replicaset protects against as well as regular tested backups.",
"username": "chris"
},
{
"code": "",
"text": "we tried but no luck …\nwe need some expert to try restore the data from files or some other magic",
"username": "Rami_Avital"
},
{
"code": "mongod --repairmongodumpmongorestore--repair--repair",
"text": "we need some expert to try restore the data from files or some other magicHi @Rami_Avital,Recovering from data file corruption is unfortunately challenging and often unsuccessful.However, I noticed you are running a very old version of MongoDB (3.0.12 was released in May, 2016).The repair functionality for WiredTiger has been improved in more recent server versions (4.0.3+) per SERVER-19815.This isn’t guaranteed magic, but I would:download the latest version of MongoDB 4.0 (currently 4.0.23)try running mongod --repair (4.0.23 binary) against a copy of your data files following the procedure to Recover a Standalone after an Unexpected Shutdown.If the repair procedure results in a usable deployment, I expect you will either have to remain on MongoDB 4.0 or mongodump and mongorestore into your 3.0 deployment.Also to be clear: the --repair option is a last resort for data recovery and is definitely not a substitute for proper backups. The repair process will salvage data structures that can be read and skip those that cannot. This does not guarantee you can recover all of your data, and it is likely that some data will be missing depending on the nature of the data file corruption. However, if you don’t have any recent backups this has a chance of getting your deployment back online.If you have at least one known good backup (even if not recent), I would compare data that is expected to be present in both your backup and the repaired database.If the newer --repair process fails, this unfortunately is the stage at which you may need to accept that you have to return to your last good backup.A final approach would be to try heroic efforts to salvage raw data. This is rarely successful and can involve a lot of deep dive effort. @alexbevi’s article on Recovering a WiredTiger collection from a corrupt MongoDB installation may be a useful guide, but note that this article predated the enhanced repair process I suggested above.Regards,\nStennie",
"username": "Stennie_X"
},
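{
"text": "A hedged sketch (editorial addition) of the repair procedure described above as concrete commands; the paths and the 4.0.23 install location are illustrative, and per the advice above the repair should always be run against a copy of the data files.",
"code": "cp -a /var/lib/mongodb /var/lib/mongodb-backup            # always repair a copy\n/opt/mongodb-4.0.23/bin/mongod --repair --dbpath /var/lib/mongodb\n/opt/mongodb-4.0.23/bin/mongod --dbpath /var/lib/mongodb  # then start and verify the data",
"username": "editorial_note"
},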
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Recover corrupted files (MongoDB 3.0.12) .wt | 2021-03-25T10:15:56.946Z | Recover corrupted files (MongoDB 3.0.12) .wt | 7,551 |
null | [
"queries",
"dot-net"
] | [
{
"code": "SortByDescendingvar latestPriceChange = await _appDbContext.OfferChanges.Find(o =>\n o.OfferId == offerListingDto.Id && o.ChangeType == OfferChangeType.PriceChange && o.IsCommited)\n .SortByDescending(o => o.ChangeTime) // is this even needed? \n .FirstOrDefaultAsync();\nIFindFluentAsQueryableIQueryable var latestPriceChange = await _appDbContext.OfferChanges\n .AsQueryable()\n .OrderByDescending(o => o.ChangeTime)\n .FirstOrDefaultAsync(o => o.OfferId == offerListingDto.Id && o.ChangeType == OfferChangeType.PriceChange && o.IsCommited);",
"text": "Do I need to do the sorting explicitly or can I safely remove SortByDescending and it will do the same?Also, I’m trying to use async methods where possible when dealing with the I/O bound operations. Is IFindFluent interface generally preferred over AsQueryable?IQueryable version:",
"username": "Konrad_Kogut"
},
{
"code": "filter -> sort -> limitdb.collection.aggregate([\n { $match: { OfferId: offerListingDto.Id, ChangeType: OfferChangeType.PriceChange, IsCommited: true } },\n { $sort: { ChangeTime: -1 } },\n { $limit: 1 }\n])\n",
"text": "Hello @Konrad_Kogut, welcome to the MongoDB Community forum!Do I need to do the sorting explicitly or can I safely remove SortByDescending and it will do the same?Yes, the explicit sort (descending) on the date field is required - the first document after the sort would be the latest document.Also, I’m trying to use async methods where possible when dealing with the I/O bound operations. Is IFindFluent interface generally preferred over AsQueryable?I am not familiar with these methods. But, the order of the operations in the query can matter for making the query efficient. The operations should be: filter -> sort -> limitAlso, the query can be built using an Aggregation pipeline, for example (substitute appropriate field names):This query can be optimized by creating a compound index, e.g., on the filter+sort fields (see Aggregation Pipeline Optimization).",
"username": "Prasad_Saya"
}
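,
{
"text": "A hedged sketch (editorial addition) of the compound index suggested above, matching the filter fields followed by the sort field so the latest-document query can be served efficiently; the field names follow the C# example.",
"code": "db.OfferChanges.createIndex(\n  { OfferId: 1, ChangeType: 1, IsCommited: 1, ChangeTime: -1 }\n);",
"username": "editorial_note"
}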
] | Get the latest document from collection | 2021-03-24T14:32:03.091Z | Get the latest document from collection | 1,981 |
null | [
"replication",
"configuration"
] | [
{
"code": "getLastError",
"text": "settings.getLastErrorDefaults\nA document that specifies the write concern for the replica set.\nThe replica set will use this write concern only when write operations or getLastError specify no other write concern.Does the “settings.getLastErrorDefaults” setting change the Write Conern option by default for all write queries for mongodb server?And what is the value that has changed like this, and if the Write Conern applied to the query comes in? Are you saying that if the w option exists in operation, you ignore the setting value of settings.getLastErrorDefaults?",
"username": "Kim_Hakseon"
},
{
"code": "db.collection.insertOne(\n { someField: \"some value\" }, \n { writeConcern: { w : \"majority\", wtimeout : 100 } } \n)\n",
"text": "Hello @Kim_Hakseon, here is the explanation:Does the “settings.getLastErrorDefaults” setting change the Write Concern option by default for all write queries for mongodb server?“settings.getLastErrorDefaults” is an optional document specified in the replica set configuration - rsconf. It specifies the default write concern for the replica set.But, you can override this default setting for example, for a specific write operation:And what is the value that has changed like this, and if the Write Concern applied to the query comes in? Are you saying that if the w option exists in operation, you ignore the setting value of settings.getLastErrorDefaults?Yes. The write concern specified for a write operation like db.collection.updateOne( { //… }, { //… }, { // write concern…} ), overrides the default setting specified in the “settings.getLastErrorDefaults” of the replica set configuration.",
"username": "Prasad_Saya"
},
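{
"text": "A hedged sketch (editorial addition) of how the replica-set default itself is set, via settings.getLastErrorDefaults in the replica set configuration; as explained above, a write concern given on an individual operation still overrides this default. (Note that this setting was deprecated in later server versions.)",
"code": "var cfg = rs.conf();\ncfg.settings = cfg.settings || {};\ncfg.settings.getLastErrorDefaults = { w: 'majority', wtimeout: 5000 };\nrs.reconfig(cfg);",
"username": "editorial_note"
},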
{
"code": "",
"text": "So if you want to use full sync in a three-node replica set (P-S-S) structure, can I give you w:3?",
"username": "Kim_Hakseon"
},
{
"code": "1n'majority'",
"text": "The write concern for a replica set describes the number of data-bearing members (i.e. the primary and secondaries, but not arbiters) that must acknowledge a write operation before the operation returns as successful.In a replica set, data gets in sync on all nodes. The write concern specifies that an acknowledgement is received after the data is written to 1, n (2 or 3 in your case) or 'majority' of the nodes. So, whatever the value you specify, the data is always syncd on all data bearing nodes. The write concern determines the acknowledgement level only.See the write concern’s ‘w’ option for more details.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to set up Write Concern on a server? | 2021-03-26T12:56:00.825Z | How to set up Write Concern on a server? | 3,482 |
null | [] | [
{
"code": "",
"text": "I am trying to use $text search in realm react native client and it returns, code 12, message “$text is not allowed in this context”.\nhere is the code:\nconst mongodb = user.mongoClient(‘mongodb-atlas’);\nconst productCollection = mongodb.db(‘database’).collection(‘products’);\nproductCollection.find({$text: {$search: search}})",
"username": "prakhar_tomar"
},
{
"code": "",
"text": "@prakhar_tomar Doesn’t look like $text is supported from the client because you must be a system user to call it, see here - https://docs.mongodb.com/realm/mongodb/crud-and-aggregation-apis/#evaluation-operator-availabilityI believe what you could do is call a remote function that has system user access however",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks for the reply.\nIf text is not supported then, what would be the best practice for creating a search bar with realm?",
"username": "prakhar_tomar"
},
{
"code": "$text",
"text": "Create a (backend) function that performs the $text search.",
"username": "Ted_Hayes"
},
{
"code": "",
"text": "I found a solution :-\nconst mongodb = user.mongoClient(‘mongodb-atlas’);\nconst collection = mongodb.db('db).collection(‘collection’);\ncollection.aggregate( [ { $search: { autocomplete: { path: ‘’, query: ‘’, }, }, }, ] )",
"username": "prakhar_tomar"
},
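Worth noting for anyone copying this: the $search stage only works on Atlas, and only if an Atlas Search index exists on the collection. A minimal sketch of an autocomplete index definition follows; the field name "name" is an assumption and must match the path used in the query:

{
  "mappings": {
    "dynamic": false,
    "fields": {
      "name": { "type": "autocomplete" }
    }
  }
}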
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $text not working in realm mongodb data access | 2021-03-23T07:24:51.499Z | $text not working in realm mongodb data access | 3,243 |
null | [
"queries",
"change-streams"
] | [
{
"code": "",
"text": "We have a mongo collection in which we store only the exceptions occurring in the application. Is it possible to send some type notification or alert the someone/team that new entry is created in that collection. I know it can be done by writing some code in the application but just wanted to know if there is a way to do it directly from MongoDB.Thank you.\nJW",
"username": "Jason_Widener1"
},
{
"code": "",
"text": "Hello @Jason_Widener1,You can use Change Streams - these work with replica set and sharded clusters only:Change streams allow applications to access real-time data changes without the complexity and risk of tailing the oplog. Applications can use change streams to subscribe to all data changes on a single collection, a database, or an entire deployment, and immediately react to them.",
"username": "Prasad_Saya"
},
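As a sketch with the Node.js driver (the collection name and the notifyTeam helper are hypothetical; plug in your own alerting integration such as email or Slack), a change stream that reacts only to inserts could look like:

// Watch only insert events on the exceptions collection
const changeStream = db.collection("exceptions").watch([
  { $match: { operationType: "insert" } }
]);
changeStream.on("change", event => {
  // notifyTeam is a hypothetical helper for your alerting channel
  notifyTeam("New exception logged: " + JSON.stringify(event.fullDocument));
});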
{
"code": "",
"text": "Thank you so much. I will try it over the weekend.",
"username": "Jason_Widener1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Notification from MongoDB on data changes | 2021-03-24T15:39:52.280Z | Notification from MongoDB on data changes | 7,118 |
null | [
"realm-web"
] | [
{
"code": "",
"text": "The realm-web package (1.2.1) is causing my web application to hang on load because of a generator inside the MongoDBCollection class. I’ve tested my app on Microsoft Edge, Firefox, and Chrome with no issues whatsoever. Please help me here.The line of code that safari doesn’t like is below:async *watch({ ids, filter, } = {}) {\n…\n}Is there a way around this? Do I need to polyfill generators for safari using webpack or something. I’m kinda desperate here so any help will do. Thanks",
"username": "Karl_Ducille-Jones"
},
{
"code": "",
"text": "Ok I believe I’ve found the issue. I was testing my web app on an older computer to find performance issues easier. The computer has Safari version 11.1.2 installed and I don’t believe realm-web works on that browser. I just tested it on my new Macbook pro and everything is fine. I will use polyfills to address this issue.For anybody else seeing this error, update your browser ",
"username": "Karl_Ducille-Jones"
},
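If older Safari versions must be supported, one option is transpiling and polyfilling at build time. Here is a minimal Babel sketch; it assumes Babel 7 and core-js 3 are installed, and whether this fully unblocks realm-web on Safari 11 is untested here:

// babel.config.js
module.exports = {
  presets: [
    ["@babel/preset-env", {
      targets: { safari: "11" },  // compile away async generators for old Safari
      useBuiltIns: "usage",       // inject only the polyfills the code actually uses
      corejs: 3
    }]
  ]
};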
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Breaking error in Safari - SyntaxError: Unexpected token '*' | 2021-03-26T23:20:29.658Z | Breaking error in Safari - SyntaxError: Unexpected token ‘*’ | 5,195 |
null | [] | [
{
"code": "",
"text": "Hi,I use mongocxx 3.6.1 for mongodb access in C++ app. Due to the db administration rules, the app connects to mongodb without the permission to admin db. The app needs to get the replica set info such as which server is primary and which is secondary. When the app calls { replSetGetStatus : 1 } by database::run_command(), the call fails with the error “not authorized on admin to execute command”.I wonder if there is a way to get the replica set status without admin permission.Thanks.",
"username": "Yudong_Sun"
},
{
"code": "isMaster",
"text": "Hi @Yudong_SunThe driver should handle this automatically for you. I have never used the C++ driver but this is a common idiom from the few drivers I have used.Indeed the tutorial creates a connection using a mongodb uri:\nhttp://mongocxx.org/mongocxx-v3/tutorial/#make-a-connectionThe database command to get the topology(primary, members, some settings) is isMasterMongoDB drivers and clients use isMaster to determine the state of the replica set members and to discover additional members of a replica set.",
"username": "chris"
},
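To make this concrete: isMaster can be run against any database without admin privileges, and the same command document can be passed to database::run_command() in mongocxx. In the mongo shell the equivalent is:

db.runCommand({ isMaster: 1 })
// The reply includes fields such as "ismaster", "primary", and "hosts",
// which describe the replica set topology. On recent server versions
// the equivalent "hello" command is preferred.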
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is it possible to get replica set status without admin permission | 2021-03-26T17:33:32.421Z | Is it possible to get replica set status without admin permission | 2,617 |