image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | []
| [
{
"code": "",
"text": "While trying to download MongoDB Community Server, I come across this error.“Error Writing to file:\nC:\\WINDOWS\\system32\\msvcp140.dll. Verify that you have access to that directory.”I have attempted to troubleshoot and resolve this issue on my own. I watched a youtube video that proposed 2 different solutions, one involved downloading the 32 bit version and 64 bit version of this file and putting it into their appropriate folders. I came across an error that stated “The action can’t be completed because the folder or a file in it is open in another program” (I could not determine what program was using it, and if i could kill it)\nThe second proposed solution that didn’t work for me was downloading Visual C++ Redistributable, no luck there.Any insights would be really greatly appreciated, thank you!",
"username": "Liz_Wilson"
},
{
"code": "",
"text": "do not blindly trust dll-only download sites.140 means files came with Visual Studio v14 2015. if you want to make sure it is installed, you need the same version of C++ redistributable:\nDownload Visual C++ Redistributable for Visual Studio 2015 from Official Microsoft Download Centernow for the access problem, there are few reasons:",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Resolved, thank you very much!",
"username": "Liz_Wilson"
}
]
| Issue Installing MongoDB Community Server - | 2023-01-17T21:32:34.644Z | Issue Installing MongoDB Community Server - | 977 |
null | [
"connecting",
"golang"
]
| [
{
"code": "opts := options.Client().ApplyURI(\"mongodb://\" + ipPort).SetConnectTimeout(5 * time.Second).SetCompressors([]string{\"zstd\"})\n\n",
"text": "Hello everyone,\nI am trying to use network compression feature of mongoDb using golang,\nI am trying to use Zstd compression, and I am using the following method at the time of db connection,While checking the db logs, I found an message with the db connection logs,Compression negotiation not requested by clientCan you please help me, why this message is shown?\nIs there any problem with the db connection?\nWill it make any impact on MongoDb queries?",
"username": "sahil_garg1"
},
{
"code": "client, err := mongo.Connect(context.TODO(), options.Client().ApplyURI(uri).SetAppName(\"===GO APP===\"))\n\"msg\":\"supported compressor\",\"attr\":{\"compressor\":\"zstd\"}client, err := mongo.Connect(context.TODO(), options.Client().ApplyURI(uri).SetCompressors([]string{\"zstd\"}).SetAppName(\"=== GO APP ===\"))\n",
"text": "Compression negotiation not requested by clientI got this message too when I used a connection with no compressors set. Which matches the server code.This one generated: \"msg\":\"supported compressor\",\"attr\":{\"compressor\":\"zstd\"}",
"username": "chris"
}
]
| Getting message "Compression negotiation not requested by client" in db connection logs | 2023-01-13T03:40:57.869Z | Getting message “Compression negotiation not requested by client” in db connection logs | 1,031 |
null | [
"aggregation",
"cluster-to-cluster-sync"
]
| [
{
"code": "",
"text": "hello guys hope you are doing well , iam using mongosync utility to sync two cluster which i achieved successfully ,but i want this sync to happen everytime i have installed utility in ec2 ubuntu instance how do i run the commands all the time and since today iam not able to intiate the syncing\niam getting the following error\n“error”:\"(ChangeStreamHistoryLost) PlanExecutor error during aggregation :: caused by :: Resume of change stream was not possible, as the resume point may no longer be in the oplog.\",“time”:“2022-12-15T11:31:02.443635973+03:00”,“message”:“Error during replication”}would be very happy if someone could share some suggestion and resolution, thanks",
"username": "mohamed_aslam"
},
{
"code": "replSetResizeOplog",
"text": "Hi @mohamed_aslamThe error indicates the sync is attempting to resume an existing sync. To do that the oplog must be large enough to cope with the time the sync is not running.You can use replSetResizeOplog to resize the oplog.",
"username": "chris"
},
{
"code": "",
"text": "my source cluster has oplog window of 6 hours and destination 24hrs which is default, how to do i keep the sync 24/7 without interruption",
"username": "mohamed_aslam"
},
{
"code": "",
"text": "any suggestion,please i would be glad becuase we want to implement the syncing for reporting and analytic purpose ,please any help would be very highly appreciated and iam struck with this error",
"username": "mohamed_aslam"
},
{
"code": "",
"text": "You need to make sure that the source oplog window is large enough to cover the initial data copy and whatever pause/resume intervals you have. If you check the mongosync logs, you will see when it started the data copy and when it finished - most likely it took longer than 6 hours. In general, your oplog window should be larger than 6 hours (best practice is 24-48 hours)",
"username": "Alexander_Komyagin"
}
]
| Mongosync utility oplog time resume is not possible | 2022-12-15T08:38:48.512Z | Mongosync utility oplog time resume is not possible | 2,163 |
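A minimal mongosh sketch of the oplog resize suggested in this thread. The 51200 MB value is only a placeholder; the actual size should be chosen so the oplog window is longer than the initial data copy plus any mongosync pause/resume interval:

```javascript
// Run against the *source* replica set (on each data-bearing member as needed).
// replSetResizeOplog takes the new size in megabytes; 51200 MB (~50 GB) is illustrative only.
db.adminCommand({ replSetResizeOplog: 1, size: 51200 });

// Verify the resulting oplog window afterwards:
rs.printReplicationInfo();
```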
[]
| [
{
"code": "",
"text": "\nScreenshot_20230117_081732720×1544 162 KB\nI am hosting my bot on repl.it bot working completely fine I don’t know what happen to mongo db it’s raising this error not only for this bot others bots also what could be the reason?from motor.motor_asyncio import AsyncIOMotorClient as MongoClientmongo = MongoClient(config.MONGO_DB_URI, serverSelectionTimeoutMS=60000)And yes my mongo db url is correct",
"username": "Suraj_Gupta_N_A"
},
{
"code": "",
"text": "DNS got broken somewhere along the way. I saw another post with similar issue.Resolves here(Canada) okay now. Nothing reported on the Status Page",
"username": "chris"
}
]
| Getting this error in my mongo db | 2023-01-17T03:35:52.325Z | Getting this error in my mongo db | 778 |
null | [
"rust"
]
| [
{
"code": " let id = match mongo_client.database(\"database\").collection(\"collection\")\n .insert_one(a_document, None).await {\n Ok(insert_one_result) => insert_one_result.inserted_id.as_object_id().unwrap(),\n Err(e) => return Err(e)\n };\n",
"text": "Hello!When inserting a document, the Rust driver returns the inserted _id as an Option.\nI was wondering why that is and if it is safe to unwrap. Below is a small example of what I mean.Is it possible, on a successful insert, that the Option is ever None and the code above panics?",
"username": "alemandev"
},
{
"code": "_id_idObjectId_id: None_id",
"text": "_id is the only field that must be present in a document and it must be unique.The official drivers will automatically create an _id field if one is not present when inserting a document. This is usually a generated ObjectId.Is it possible, on a successful insert, that the Option is ever None and the code above panics?Not for your code(assuming no _id: None in the document). However it is possible to explicitly insert a None/null in to the _id field and thus get a None/null back for the inserted_id.Disclaimer: Not a rust user.",
"username": "chris"
},
{
"code": "Bson::as_object_idNone_id",
"text": "Hi, I’m one of the maintainers of the Rust driver. Bson::as_object_id returns None if the value you call it on is not an ObjectId: Bson in bson - Rust\nso if you are using non-ObjectId values for the _id field you could experience a panic here.",
"username": "kmahar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Is it safe to unwrap the _id of an InsertOneResult? | 2023-01-15T06:36:43.732Z | Is it safe to unwrap the _id of an InsertOneResult? | 1,476 |
null | [
"security"
]
| [
{
"code": "",
"text": "Hi, I am a sql DBA but supporting mongodb too. I have enabled authentication in config file and created logins separately for user databases and admin database for developers to use in the application. What kind of security is it called? Any name for it? Thanks for your help.Another question: TEST and prod servers has 2012 windows operating system. Because of this I couldn’t upgrade mongo past 4.2. Only way to upgrade mongo is to first upgrade operating system, correct?",
"username": "Ana"
},
{
"code": "",
"text": "The default authentication method is Salted Challenge Response Authentication Mechanism, SCRAM.Only way to upgrade mongo is to first upgrade operating system, correct?Correct. 4.4 requires Server 2016+",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Security and operatingSystem question | 2023-01-17T12:53:57.010Z | Security and operatingSystem question | 1,147 |
null | [
"aggregation",
"compass",
"next-js"
]
| [
{
"code": "_id: \"ABC123\"\ndate: \"2022-10-10 01:00:00\"\npowerIn: 30\npowerOut: 50\n_id: \"ABC123\"\ndate: \"2022-10-10 01:00:00\"\npowerIn: 30\npowerOut: 50\nnetzero: -20\ncumulativePowerIn: 80\ncumulativePowerOut: 120\ncumulativeNetzero: -40\n",
"text": "Hi, I’m working on mongoDB and Next.js. Before I call API, I want to aggregate in Mongo Compass and call the aggregated API. And I’d like to synchronize the 2 collections when data is added to original collection. The data is added through MongoDB, not my Next.js website (server side). So I guess trigger is not the case. How to make aggregate collection to have new added data based on insert of original collection? The data is added by an hour FYI.\nThis is the sample of my data.Original CollectionAggregated Collection",
"username": "Chloe_Gwon"
},
{
"code": "$merge{ $merge: {\n into: <collection> -or- { db: <db>, coll: <collection> },\n on: <identifier field> -or- [ <identifier field1>, ...], // Optional\n let: <variables>, // Optional\n whenMatched: <replace|keepExisting|merge|fail|pipeline>, // Optional\n whenNotMatched: <insert|discard|fail> // Optional\n} }\n",
"text": "Hello @Chloe_Gwon !If I am understanding your need correctly …How to make aggregate collection to have new added data based on insert of original collection?You should be able to use a $merge stage (at the end of your aggregation, to just add new documents.Make sure to check out the options for various different matching strategies you can configure:",
"username": "Justin_Jenkins"
},
{
"code": "",
"text": "Hi! Thanks for your reply, Justin. I just tried merge, and this is the output that I was hoping. Just to clarify tho, my main issue is make new data automatically aggregate without me merging it.While I was waiting for the response, I found out scheduled trigger. Will it work in this case?",
"username": "Chloe_Gwon"
},
{
"code": "",
"text": "I think creating replicas will help you in this scenario. This will be an efficient way",
"username": "Prasanna_Sasne"
},
{
"code": "",
"text": "Oh yeah this makes more sense. Thanks so much, I’ll try it right now",
"username": "Chloe_Gwon"
},
{
"code": "",
"text": "But can you do aggregation in replicas tho?",
"username": "Chloe_Gwon"
},
{
"code": "",
"text": "I think aggregation will only work with replicas or shards. The database should be connected to the same mongo instance. Aggregation can not be used on standalone databases. If we can add aggregation on two standalone databases, please let me know how to do it?",
"username": "Prasanna_Sasne"
},
{
"code": "$merge",
"text": "As I understand it you have:Are those “original” inserts happening at any time, or at the hourly mark you mentioned? Or is the hourly when you planned to do this sync?I found out scheduled trigger. Will it work in this case?The exact performance really depends on your application as a whole, etc. so it is a little hard to say.However you could use these concepts to guide your decision:If you are using Atlas you could use either Database Triggers or Scheduled Triggers.The main difference being the Database Triggers will fire when a document is added or in some way modified … so performance here will mostly be around what your trigger code does, and if doing that per document is best. This has the advantages of keeping things up-to-date with the original collection automatically, but the downside of it might run very often, depending on how many modifications you have.The Scheduled Triggers can let you batch together an operation, so for example you can run a single aggregation with a $merge at the end to mass insert any new data, and run that on an hourly schedule.If you aren’t using Atlas you could still trigger something (like a cron job) to run the aggregation pipeline every hour as well.Or, you can as part of your application code do this aggregation and insertion into the your aggregated collection.",
"username": "Justin_Jenkins"
},
{
"code": "",
"text": "Aggregation can not be used on standalone databasesThis is false. Please verify your sources.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks so much! This works very well : ) I appreciate your reply!",
"username": "Chloe_Gwon"
},
{
"code": "",
"text": "I have mentioned in the question itself, if we can do aggregation with two standalone databases(passing collections from one database to another), please suggest. You must provide the answer before saying some concept is wrong. I will definitely appreciate your answer in this case.",
"username": "Prasanna_Sasne"
},
{
"code": "",
"text": "You must provide the answer before saying some concept is wrong.Writing that what you wrote is wrong is the answer.I think aggregation will only work with replicas or shards.It might be true that your think aggregation will only work with replicas or shards. But what you think is wrong. It is false that aggregation only work with replicas or shards. Aggregation works even if you do not run a replica set or shards.And you repeat the same wrong affirmation.Aggregation can not be used on standalone databases.Aggregation can be used on standalone databases. You do not need replica set or shard to run an aggregation.But you cannot use the aggregation framework to $out or $merge from one standalone to another standalone.Just to be clear, aggregation can be used on standalone (that is no replica set and no shard) mongod instances but it cannot be used to synchronized the database of 1 instance into the other instance.If Atlas triggers are not available to you. Change stream, which requires running a replica set of at least one mongod instance, can be used to synchronized databases from one database system to another database system. And just to be clear a database system can be a replica set of 1 mongod instance or a normal replica set. The target database system does not need to be a replica set.",
"username": "steevej"
},
{
"code": "",
"text": "One thing to add in here for future readers. Triggers are a great way to detect a change in a collection. And if you need to then write data into another collection using the Aggregation language, you can do this using the $merge or $out syntax.Additionally, if you need to write data not just into a different Database and Collection, but also into an entirely different Cluster or Serverless instance, you can use our Data Federation service. With Data Federation you can use $out and $merge, reading data from one Cluster or Instance and writing it to another Cluster or Instance as long as it is in the same Atlas Project.",
"username": "Benjamin_Flast"
}
]
| How to synchronize 2 collections after aggregation when data is added to original collection? | 2023-01-13T21:39:17.552Z | How to synchronize 2 collections after aggregation when data is added to original collection? | 2,071 |
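A rough sketch of the hourly batch approach described in this thread, run from a Scheduled Trigger or a cron-driven mongosh script. The collection names ("original", "aggregated") and the one-hour window are placeholders, and it assumes the date field is stored as a BSON Date (a string date would need converting first):

```javascript
// Hypothetical hourly sync: pick up documents added since the last run
// and upsert them into the aggregated collection.
const since = new Date(Date.now() - 60 * 60 * 1000);

db.original.aggregate([
  { $match: { date: { $gte: since } } },
  // ... netzero / cumulative calculations would go here ...
  { $merge: {
      into: "aggregated",
      on: "_id",
      whenMatched: "replace",
      whenNotMatched: "insert"
  } }
]);
```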
null | [
"queries"
]
| [
{
"code": "{\n \"hotelId\": \"H00000\",\n \"hotelName\": \"Hotel Spa Elia\",\n}\nhotelName",
"text": "I have a collection of following format.I want to build a autocomplete feature on hotelNameIs atlas text search with autocomplete is suitable for this task or this would be a over kill?",
"username": "Nazmus_Sakib"
},
{
"code": "",
"text": "If trying to do autocomplete, Atlas Search is the way to go!Check out this blog to see an in depth comparison: A Decisioning Framework for MongoDB $regex and $text vs Atlas Search | MongoDB",
"username": "Elle_Shwer"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"hotelName\": {\n \"foldDiacritics\": false,\n \"type\": \"autocomplete\"\n }\n }\n }\n}\n[\n {\n \"$search\": {\n \"index\": \"index_1\",\n \"autocomplete\": {\n \"path\": \"hotelName\",\n \"query\": \"rad\",\n \"fuzzy\": {\n \"maxEdits\": 1,\n \"prefixLength\": 3,\n \"maxExpansions\": 10\n }\n }\n }\n }\n]\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"hotelName\": {\n \"type\": \"string\"\n }\n }\n }\n}\n[\n {\n '$search': {\n 'index': 'index_2',\n 'text': {\n \"query\": \"rad\",\n 'path': {\n 'wildcard': '*'\n },\n \"fuzzy\": {\n \"maxEdits\": 1,\n \"prefixLength\": 3,\n \"maxExpansions\": 10\n }\n }\n }\n }\n]\nfuzzyautocomplete",
"text": "Thanks,I made following two indices and respective queriesI find accepted results in both case. I wonder what are the difference in using fuzzy search with autocomplete and not?",
"username": "Nazmus_Sakib"
},
{
"code": "",
"text": "Fuzzy creates space for spelling errors or 1 letter being off from the results returned. Autocomplete just returning results as they match the fragment being typed. Combined allows for 1 letter off and fragments to return results.",
"username": "Elle_Shwer"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| When should we use text search | 2023-01-12T12:30:39.461Z | When should we use text search | 1,314 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "[\n {\n \"test\": 1,\n \"invoices\": [\n {\n \"nu\": 1,\n \"statuss\": \"E0006\"\n },\n {\n \"nu\": 1,\n \"statuss\": \"A0001\"\n }\n ]\n },\n {\n \"test\": 2,\n \"invoices\": [\n {\n \"nu\": 2,\n \"statuss\": \"E0007\"\n }\n ]\n },\n {\n \"test\": 3,\n \"invoices\": [\n {\n \"nu\": 2,\n \"statuss\": \"Z0007\"\n }\n ]\n },\n {\n \"test\": 3,\n \"invoices\": [\n {\n \"nu\": 123,\n \"statuss\": \"A0007\"\n },\n {\n \"nu\": 22,\n \"statuss\": \"A0001\"\n }\n ]\n },\n {\n \"test\": 3,\n \"invoices\": [\n {\n \"nu\": 2,\n \"statuss\": \"B0007\"\n }\n ]\n }\n]\ndb.collection.aggregate([\n {\n $match: {\n \"test\": 3\n }\n },\n {\n $sort: {\n \"invoices.statuss\": 1\n }\n },\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 10\n }\n])\n[\n {\n \"_id\": ObjectId(\"5a934e000102030405000003\"),\n \"invoices\": [\n {\n \"nu\": 123,\n \"statuss\": \"A0007\"\n },\n {\n \"nu\": 22,\n \"statuss\": \"A0001\"\n }\n ],\n \"test\": 3\n },\n {\n \"_id\": ObjectId(\"5a934e000102030405000004\"),\n \"invoices\": [\n {\n \"nu\": 2,\n \"statuss\": \"B0007\"\n }\n ],\n \"test\": 3\n },\n {\n \"_id\": ObjectId(\"5a934e000102030405000002\"),\n \"invoices\": [\n {\n \"nu\": 2,\n \"statuss\": \"Z0007\"\n }\n ],\n \"test\": 3\n }\n]\n",
"text": "Requirement: Sort the below collection with invoices/statuss.\nCollection:Query:Output:Question:\nAs per my understanding in “_id”: ObjectId(“5a934e000102030405000003”) the invoices collection should also be sorted, but in the result it’s not the same. What we are missing?",
"username": "Tuhin_Sarkar"
},
{
"code": "",
"text": "If you look at the $sort documentation you will see that this stage sorts top level documents.What I understand is that you want the sort the array invoices within the top level documents. If this is the case you have to use $sortArray.",
"username": "steevej"
}
]
| Mongodb sorting on nested field | 2023-01-17T12:36:55.210Z | Mongodb sorting on nested field | 4,258 |
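A minimal sketch of the $sortArray suggestion above (available from MongoDB 5.2), applied to the sample documents in this thread; the trailing stages are kept from the original query:

```javascript
db.collection.aggregate([
  { $match: { test: 3 } },
  // Sort the invoices array *inside* each document by statuss.
  { $set: {
      invoices: { $sortArray: { input: "$invoices", sortBy: { statuss: 1 } } }
  } },
  // Then order the documents themselves, as in the original query.
  { $sort: { "invoices.statuss": 1 } },
  { $skip: 0 },
  { $limit: 10 }
]);
```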
null | [
"queries",
"crud",
"transactions"
]
| [
{
"code": "const session = db.getMongo().startSession()\nsession.startTransaction()\nconst account = session.getDatabase('sample_analytics').getCollection('accounts')\nconst customer = session.getDatabase('sample_analytics').getCollection('customers')\naccount.insertOne({\n account_id: 00009,\n limit: 88888\n})\naccount.updateOne( { account_id:00009 }, {$inc: { limit: -100.00 }})\n**customer.insertyOne({ name:\"vinci5\"})**\n\nsession.commitTransaction()\n",
"text": "Below code snippet on sample_anlytic dbI purpously written wrong statement higlighted but after commit i query\ndb.accounts.find({account_id:9})\ni got limit: 88788",
"username": "VIKASH_RANJAN1"
},
{
"code": "customersaccount_id: 9uri = <your_string_here>\nclient = pymongo.MongoClient(uri)\ndb = client.sample_analytics\naccount = db.accounts\ncustomer = db.customers\nwith client.start_session() as session:\n with session.start_transaction():\n account.insert_one({'account_id': 9,'limit': 88888 }, session=session)\n account.update_one( { 'account_id':9 }, {'$inc': { 'limit': -100.00 }},session=session)\n customer.inserty_one({'name':\"vincy5\"},session=session) //change to insert_one to make the transaction work\n",
"text": "Hey @VIKASH_RANJAN1,Welcome to the MongoDB Community Forums! Can you please share the full code, along with the error message and result of the query you are running? I tried the code on my end with the python driver and everything works as one expects in case of a transaction, ie. when an error occurred in between the transaction, the customers collection did not have any document account_id: 9. This is the code I ran in pymongo and everything worked as expected, maybe you can try this in your environment:Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "\nMongodb3720×1404 290 KB\n\nPlease find screenshot .It ignore syntax error and commit transaction",
"username": "VIKASH_RANJAN1"
},
{
"code": "",
"text": "It looks like you are missing a parameter to do your updateOne inside the transaction.See https://www.mongodb.com/docs/manual/core/transactions/#std-label-transactions-write-concern to see how to do updateOne withing a transaction.",
"username": "steevej"
},
{
"code": "",
"text": "I am simulating if txn failed in between the session it should rollback but it still hold true for syntax error(insertyOne). though for logical error it behaves differently\nmango21977×808 335 KB\n",
"username": "VIKASH_RANJAN1"
},
{
"code": "updateOne(){ session }updateOne()updateOne()updateOne()account.updateOne( { account_id:00009 }, {$inc: { limit: -100.00 }}, { session })",
"text": "Hi @VIKASH_RANJAN1What @Satyam and @steevej alluded to is that your updateOne() statement is performed outside of the transaction. This is why it appears to do the wrong thing.In the first example, you need to add the { session } variable into the updateOne() as seen in this Node driver transaction example to signify that this statement is part of the session and thus the transaction.In the second example, the updateOne() statement contains an error and thus it was not executed. Transaction or not, it makes no difference in this case.Try to modify your updateOne() to be like account.updateOne( { account_id:00009 }, {$inc: { limit: -100.00 }}, { session }) and it should behave as expected.Hope this clears things up.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "It’s inside session via below command\nconst session = db.getMongo().startSession()\nsession.startTransaction()\nconst account = session.getDatabase(‘sample_analytics’).getCollection(‘account’)\nconst customer = session.getDatabase(‘sample_analytics’).getCollection(‘customer’)\n…\n…\nsession.commitTransaction()",
"username": "VIKASH_RANJAN1"
},
{
"code": "sessionupdateOneupdateOne()account.updateOne( { account_id:00009 }, {$inc: { limit: -100.00 }}, { session })",
"text": "It’s inside session via below commandIt is not. Unlike most SQL-style interactive session, a statement starting a session does not mean that all the subsequent statements are inside that session. In MongoDB, you’ll need to include the session object in the CRUD statement.As I have mentioned in my earlier reply, you need to modify your updateOne statement to include the session object:Try to modify your updateOne() to be like account.updateOne( { account_id:00009 }, {$inc: { limit: -100.00 }}, { session }) and it should behave as expected.I encourage you to read through the link I posted earlier, which explains in detail how sessions and transactions work in MongoDB: https://www.mongodb.com/docs/drivers/node/current/fundamentals/transactions/#core-api-implementation since transaction CRUD syntax in MongoDB is a bit different from what SQL does. The concept of the transaction itself are identical, thoughBest regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kelvin,\nSorry for bothering again then why did my scenario2 where I written same way aborted and rollbacked to original value\n\nimage1920×785 133 KB\n",
"username": "VIKASH_RANJAN1"
},
{
"code": "{session}mongoshSessionMongoServerErrorMongoServerErrorMongoServerErrorinsertyOneTypeErrorMongoServerError",
"text": "Ah I see where the confusion is I apologize to exacerbate the confused situation. I was using the example of a Node program (where {session} is required to make the CRUD operation work inside the transaction), and you’re using mongosh and grabbed the database from the Session object as per https://www.mongodb.com/docs/manual/reference/method/Session/#mongodb-method-SessionIn scenario 1, I think everything behaved like they should:In scenario 2, there’s a MongoServerError:Am I following your thought so far correctly?Basically scenario 2 showed that a MongoServerError would automatically abort a transaction. This is in contrast with your earlier example where you called insertyOne: it resulted in TypeError instead of MongoServerError, so the transaction didn’t auto abort.Is this the information you’re looking for? Sorry again for the confusion.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "So we can conclude that\nScenario 1: MongoDB Perspective its valid since Account :0 don’t exists. But taking a case financial matter were Debit=Credit will fail the business validation. Hence Application has to be intelligent enough to catch modiifiedCount:0 and do a rollback instead of commit.MongoDB only rollback MongoServerError not TypeError .There also application need to be intellegent enough to do rollback instead of commit",
"username": "VIKASH_RANJAN1"
},
{
"code": "",
"text": "In my opinion, MongoDB can only rollback what it can detect. The database is not aware of what else you are doing in your code.A TypeError is a bug in your code.application need to be intellegent enough to do rollback instead of commitYes, your application should be catching all the errors that may happen in your code and ask for a rollback.",
"username": "steevej"
}
]
| Transaction failed but earlier statement hold true | 2023-01-05T09:01:45.368Z | Transaction failed but earlier statement hold true | 1,070 |
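A rough mongosh sketch of the pattern this thread converges on: open the collections through the session, let application code abort on any thrown error, and add a business-rule check on modifiedCount. It reuses the sample_analytics names from the thread and is only an illustration, not the official driver example:

```javascript
const session = db.getMongo().startSession();
const accounts = session.getDatabase("sample_analytics").getCollection("accounts");

session.startTransaction();
try {
  accounts.insertOne({ account_id: 9, limit: 88888 });
  const res = accounts.updateOne({ account_id: 9 }, { $inc: { limit: -100 } });
  if (res.modifiedCount !== 1) {
    // Business-rule failure (e.g. debit/credit mismatch): treat it as an error.
    throw new Error("update did not modify exactly one document");
  }
  session.commitTransaction();
} catch (e) {
  // Any thrown error, including TypeError-style bugs, triggers an explicit abort.
  session.abortTransaction();
  throw e;
} finally {
  session.endSession();
}
```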
null | [
"aggregation",
"queries",
"data-modeling"
]
| [
{
"code": "",
"text": "Hello, I have a need of a requirement in that I don’t want to create to many collections. Can anyone let me know what is the threshold limit of max size of a collection in mongo?",
"username": "dharmvijay_Patel"
},
{
"code": "",
"text": "Hi @dharmvijay_Patel ,There is no such limit for the collection size unless it is capped collection. or collection creation is explicit with the defined size like this.\nFor more details of Mongodb limits please follow this document MongoDB Limits and ThresholdsThis forum post can also be refered",
"username": "Aayushi_Mangal"
}
]
| What can be the threshold limit of collection size? | 2023-01-17T11:04:13.321Z | What can be the threshold limit of collection size? | 1,176 |
null | [
"queries",
"node-js",
"mongoose-odm"
]
| [
{
"code": "",
"text": "Hi guys, I am facing problem with mongoose fuzzy search. It is completely useless because I wanted to search results like tickets title and description, But it is not returning the best results. It is giving me the less rows. Like If I search “Ticket” and in DB there are ten rows with “One Ticket”, “Two Ticket” like that it is returning only three to four rest of six are missing, Strange thing. Kindly guide me I am in middle of it. Sorry I forgot to tell you that I am using “mongoose-fuzzy-searching” module.",
"username": "Wasim_Akmal"
},
{
"code": "mongoose-fuzzy-searching",
"text": "Welcome to the MongoDB community @Wasim_Akmal !mongoose-fuzzy-searching is an open source community plugin for Mongoose, so you may want to try discussing questions directly with the maintainer at Issues · VassilisPallas/mongoose-fuzzy-searching · GitHub. Unfortunately there doesn’t seem to have been any commits or replies in this repo for a few years, so you may have to investigate and fix the relevancy issue yourself.If you are looking for a more robust (and officially supported) search solution, MongoDB Atlas Search is available for all Atlas clusters (including free tier).For some articles and examples see MongoDB Developer Centre: Atlas Search and the MongoDB Atlas Search unit at MongoDB University.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "@Stennie_X Thanks, I was just confirming this because I did all research on this. Fuzzy search is a key component so I guess mongoose should provide it.",
"username": "Wasim_Akmal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Fuzzy Search with mongoose | 2023-01-15T19:49:58.561Z | Fuzzy Search with mongoose | 1,903 |
null | [
"compass"
]
| [
{
"code": "",
"text": "Hello,I´m trying to install MongoDB Compass silently.\nBecause I´m using the MSI file this is no problem I can use the MSI parameter like /qn and this works fine.But I want to know if there are furthermore parameters which I can use.\nI´m planning to include a Server during the installation process.\nDoes anybody know if this is possible?I´m looking forward to get all the possible parameters.\nAs I know there should be some like “INSTALLLOCATION”, are there furthermore?I appreciate your help",
"username": "Matthias_Lob"
},
{
"code": "",
"text": "Which msi are you using?\nYou have to use server msi\nCheck this link for more params",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Ramachandra_TummalaI´m using “mongodb-compass-1.35.0-win32-x64.msi” for the installation.\nIf I understand it correctly, this is a GUI for working with the Mongo databases.\nWhen I start the MongoDBCompass I can add a Server to which I want to connect to.\nAnd I want to implement this server during the installation process, so that the user doesn´t have to implement the server himself.The Link you posted didn´t give me the correct parameters, or I missunderstood them.Are there furthermore parameters?Kind regards.",
"username": "Matthias_Lob"
}
]
| MongoDB Compass Installation Parameter | 2023-01-16T08:06:06.706Z | MongoDB Compass Installation Parameter | 1,103 |
null | [
"queries"
]
| [
{
"code": "collection().find({\n $and: [\n {\"tags\": {$in: [1, 2]}},\n {\"tags\": {$size: 2}}\n ]\n})\ncollection().find({\n $and: [\n {\"tag1\": true},\n {\"tag3\": true},\n ....\n ]\n})\n",
"text": "Hello everyone,I have a document that can be flagged with different tags. These tags are limited in number, meaning that a document can have at most 9 tags. Over those tags, I run queries like: Give me all documents that contains exactly the given tags. The question is, which of the two following approaches would be more performant in terms of index size and speed:Thank you",
"username": "Green"
},
{
"code": "db.collection.find( { tags: { $all: [1,2 ]} , tags : { $size : 2 }} )\n",
"text": "Hi @Green ,It sounds like for tagging its more natural to use an array with tags values.Thats a common pattern with MongoDB . Obviously index the array field and any combined fields in your queries…Now to verify that tags 1,2 are the only tags in the query you need to run something like that:With $all and $size you can verify that all tags exist in the same document and the size is 2 so they are the only tags.See this docs for more:MongoDB Manual - How to query an array: query on the array field as a whole, check if element in array, query for array element, query if field in array, query by array size.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hello @Pavel_Duchovny ,\nthank you for your answer, this is what I did!Thank you!",
"username": "Green"
}
]
| Fixed length array vs fields | 2022-11-27T20:21:55.816Z | Fixed length array vs fields | 1,364 |
null | [
"replication",
"transactions",
"cxx"
]
| [
{
"code": "mongocxx::instance instance {};\nmongocxx::uri mongouri(\"mongodb://localhost:27017/?replSet=myReplicaSet\");\nmongocxx::pool mongopool(mongouri);\n\nint main(int argc, char *argv[])\n{\n QCoreApplication a(argc, argv);\n\n auto client = mongopool.acquire();\n auto session = client->start_session();\n\n\n mongocxx::options::transaction txn_opts;\n mongocxx::read_concern rc;\n mongocxx::write_concern wc;\n mongocxx::read_preference rp;\n\n rp.mode(mongocxx::read_preference::read_mode::k_primary);\n rc.acknowledge_level(mongocxx::read_concern::level::k_snapshot);\n wc.acknowledge_level(mongocxx::write_concern::level::k_majority);\n txn_opts.read_concern(rc);\n txn_opts.write_concern(wc);\n\n session.start_transaction(txn_opts);\n\n auto coll = session.client().database(\"MongoLibTest\")[\"transaction\"];\n\n coll.insert_one(make_document(kvp(\"test\",\"testTransaction\")));\n\n session.abort_transaction();\n\n return a.exec();\n}\n",
"text": "Hello everyone,Sorry if I’m clear enough, it’s my first post on this forum. Here’s my situation : I try to understand how the transactions works with the mongocxx driver. I try the following code :When I execute this, I have a document that is inserted in my collection. Is it normal as I abort the collection ?\nI’m running on a Mongo 5.0.14 server. The server is configured as a replica set with a single node.\nI use the driver mongocxx 3.6.Thanks a lot for your help,",
"username": "Alex_Stalmans"
},
{
"code": "sessionsessioncoll.insert_one",
"text": "The session needs to be passed to each operation in the transaction. Try passing session as an argument to coll.insert_one.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "Thank you ! I wasn’t attentive enough …\nIt solved my issue directly.",
"username": "Alex_Stalmans"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongocxx : Understanding transactions | 2023-01-16T15:26:35.143Z | Mongocxx : Understanding transactions | 1,184 |
null | [
"replication",
"containers"
]
| [
{
"code": "\"stateStr\" : \"(not reachable/healthy)\",\"_id\" : 1,\n\"name\" : \"localhost:27020\",\n\"health\" : 0,\n\"state\" : 8,\n\"stateStr\" : \"(not reachable/healthy)\",\n\"uptime\" : 0,\n\"optime\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n},\n\"optimeDurable\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n},\n\"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\"lastAppliedWallTime\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\"lastDurableWallTime\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\"lastHeartbeat\" : ISODate(\"2022-12-01T07:10:45.285Z\"),\n\"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\"pingMs\" : NumberLong(0),\n\"lastHeartbeatMessage\" : \"Error connecting to localhost:27020 (127.0.0.1:27020) :: caused by :: Connection refused\",\n\"syncSourceHost\" : \"\",\n\"syncSourceId\" : -1,\n\"infoMessage\" : \"\",\n\"configVersion\" : -1,\n\"configTerm\" : -1\n",
"text": "Hi I’m having difficult seting up replicaset.I have a docker compose file where I config 1 container as a primary note and two as replica set but when I used rs.add(\"\") ,for example rs.add(“localhost:27020”) in primary node it gave me the\n\"stateStr\" : \"(not reachable/healthy)\",\nHere is the full error:Am I missing any configs here, please tell me as I’m kinda new to Mongodb\nThanks",
"username": "Huy_Hoang1"
},
{
"code": "",
"text": "Is mongod on the node you are trying to add is up?\nCan you connect to it?\nCan the nodes commnicate with each other in your network",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "@Ramachandra_Tummala",
"username": "Huy_Hoang1"
},
{
"code": "",
"text": "If you are using docker compose file i think it is automated process\nSo why you are adding manually from shell?\nThe error may be due to bindIp param\nNode connectivity means assuming they are on separate machines you should be able to connect giving address/port from each node to others",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "yeah, I think I would try to use the bash script to automate the process.\nI think I will have more and more bugs with mongodb, so do you have skype or any social media that I can connect with you.\nThank you very much.",
"username": "Huy_Hoang1"
},
{
"code": "",
"text": "I am not docker savvy but I think that you cannot use localhost for your replica set.I think that each docker instance has a separate address space and that localhost on one docker instance refers to itself not to the host localhost. You would need to setup the replica set using the IP address of each docker instance.You may access localhost:27017 from the main host because the port 27017 is redirected to a given docker instance.",
"username": "steevej"
},
{
"code": "/etc/hosts/etc/mongo.confnet.binIprs.add",
"text": "I am not docker savvy but I think that you cannot use localhost for your replica set.whether docker or not, if members are not running on the same machine, you cannot use localhost on any of them.there are 3 places you need to check. /etc/hosts to give each member’s host machine a name in other members’ host machine, config file (usually /etc/mongo.conf) for net.binIp to allow from other members, and rs.add to add members with their respective IP addresses or DNS names (can be set in hosts file)this is not required if you start all instances on a single host on different ports.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "actually all memeber run in one server, so I think using localhost is OK in this case",
"username": "Huy_Hoang1"
},
{
"code": "",
"text": "I’m start all instances in single host",
"username": "Huy_Hoang1"
},
{
"code": "Error connecting to localhost:27020 (127.0.0.1:27020) :: caused by :: Connection refused\",",
"text": "actually all memeber run in one servercontradictsI have a docker compose file where I config 1 container as a primary note and two as replica setIt is a contradiction because each container is an isolated virtual host. (Italicized because it is a little bit different than a real virtual machine)I think using localhost is OK in this caseIt is clearly not OK because you getError connecting to localhost:27020 (127.0.0.1:27020) :: caused by :: Connection refused\",So despite the fact that I am not a docker user, I am pretty sure one container cannot access another container using localhost. And each member of a replica set has to connect to every member of the replica set.I think a little bit of reading about docker and networking might help you:When working with Docker, you usually containerize the services that form your stack and use inter-container networking to communicate between them. Sometimes you might need a container to talk to a service on your host that hasn’t been...\nEst. Reading Time: 3 minutes\nHow to reach localhost on host from docker container? Please help a docker newbie Situation: I run a NodeJS app with the monero-javascript library to connect to a localhost monero-wallet-rpc running on my host OS. Problem: I can not connect! My...\nReading time: 3 mins 🕑\nLikes: 4 ❤\nOverview of Docker networks and networking conceptsI also think that you should stick with running non-docker version on your system.I’m start all instances in single hostYou start all the docker instances from the same host but each docker is a separate entity.Running all the instances of a replica set on the same physical host using 3 dockers or 3 VMs is foolish and useless because when you loose your physical host you loose your data.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks, I see the problem here, I will try to setup into 3 physical serversI also think that you should stick with running non-docker version on your system.why? I think deploy mongodb on docker container is a great way to bring into production",
"username": "Huy_Hoang1"
},
{
"code": "net.bindIpmongod --config mongod_X.conf",
"text": "actually all memeber run in one server, so I think using localhost is OK in this caseI’m start all instances in single hostYou misunderstood the concept of how docker and compose work. Check those recommended links @steevej gave above. but in simple terms it goes like this:You can still run 3 “mongod” instances in you host machine without polluting anything.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "From my above answer, you can use the exact same steps, of running 3 instances in a host machine, to run 3 instances in a “single” container, provided that it has resources to handle them",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "you can use the exact same steps, of running 3 instances in a host machineWhile you can do the above for experimentation it is not advised to do that for production becausewhen you lose your physical host you lose your dataA good way to start multiple instances on the same host is to use mlaunch.",
"username": "steevej"
},
{
"code": "",
"text": "to run 3 instances in a “single” container, provided that it has resources to handle themThanks for your reply but to be clear I’m not running 3 instances in a single container ( What I meant above “single host” I meant single server sorry for misunderstanding ), I have 3 containers, one for primary node and the other 2 containers for replica sets",
"username": "Huy_Hoang1"
},
{
"code": "",
"text": "From my above answer, you can use the exact same steps, of running 3 instances in a host machinePlease, read my other answer above the one you quoted, for this plus how you run 3 containers.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thanks I will read them carefully, once I found solutions for this I’ll let you guys know",
"username": "Huy_Hoang1"
},
{
"code": "",
"text": "I have one very beginner question : is it good way to start a mongodb like this:\nI wrote a docker compose file for mongodb and the image is mongo:latest, once I complete setup the mongodb container , I then go inside mongodb container ,installed and cloned a flask application into it and this container contains:",
"username": "Huy_Hoang1"
},
{
"code": "",
"text": "is it good way to start a mongodb like thisit is good for starters to understand. but you tie the app and db together, thus you will not be able to scale it in the future.\ndb needs lots of space and should run at stable ports when you want a replica set. app does not need to store anything and should be independent so you can use load balancers (hundreds of them, for example). that is the basics of using multiple containers. you need to also learn networking so you can tie containers together.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi everyone,Does someone have the answer to this question? I have the same problem and I fix “Connection refused” by ensuring that:But I still have error “stateStr” : “(not reachable/healthy)” and I check my mongod.log show error code 18 AuthenticationFailed. Details in the topic:\n[MongoDB replicaSet error AuthenticationFailed (code 18)]Many thanks!!",
"username": "Khiem_Nguy_n"
}
]
| Can't add replica set to PRIMARY node | 2022-12-01T07:14:46.056Z | Can’t add replica set to PRIMARY node | 5,250 |
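A minimal sketch of initiating the replica set with container hostnames instead of localhost, run once from mongosh on one member. The service names (mongo1, mongo2, mongo3) and set name are hypothetical compose names on a shared Docker network, and each mongod must also bind to an address the other containers can reach (for example via net.bindIp or --bind_ip_all):

```javascript
// Hypothetical docker-compose service names; each must resolve on the shared network.
rs.initiate({
  _id: "myReplicaSet",
  members: [
    { _id: 0, host: "mongo1:27017" },
    { _id: 1, host: "mongo2:27017" },
    { _id: 2, host: "mongo3:27017" }
  ]
});

// Check member state afterwards:
rs.status();
```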
null | [
"aggregation",
"queries",
"data-modeling"
]
| [
{
"code": "",
"text": "i have 2 collection of user with 50 fields and user_details with 50 fields and i want to save user_details inside user collection as an Array how can i do that ?",
"username": "Aniket_Zapatkar"
},
{
"code": "db.user.aggregate([\n {\n $lookup:\n {\n from: \"user_details\",\n localField: \"_id\",\n foreignField: \"userId\",\n as: \"user_details\"\n }\n },\n {\n $merge: {\n into: \"user\",\n on: \"_id\",\n whenMatched: \"replace\",\n whenNotMatched: \"discard\"\n }\n }\n])\n\n",
"text": "Hi @Aniket_Zapatkar ,What is the connecting field between the collections? If its for example _id in user and userId in user_details here is an aggregation to do it :Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "both collection have user_id in comman",
"username": "Aniket_Zapatkar"
}
]
| Collection inside collection as an array | 2023-01-16T10:13:14.614Z | Collection inside collection as an array | 821 |
null | [
"replication",
"connecting",
"atlas-cluster"
]
| [
{
"code": "MongoServerSelectionError: getaddrinfo ENOTFOUND ac-8q8dm3y-shard-00-00.5ldtt3z.mongodb.net\n at Timeout._onTimeout (/home/runner/Bojji/node_modules/mongodb/lib/sdam/topology.js:292:38)\n at listOnTimeout (node:internal/timers:557:17) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'primary' => [ServerDescription],\n 'secondary' => [ServerDescription],\n 'secondary' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-q5c7wq-shard-0',\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\nmongodb+srv://myusername:[email protected]/mydb?retryWrites=true&w=majority\n",
"text": "For some reason, I’m receiving this error when connecting to my MongoDB in replit. It was all working fine like 8 hours ago, and been connecting with it for 2 weeks now.Error:Here’s my connection string:Here are the things I tried but to no avail:Here are other information:\nMongodb Cluster version: 5.0.14\nMongodb Cluster Tier: M0 Sandbox (General)\nMongodb type: Replica Set - 3 nodes\nMongodb Package version: 4.13.0\nWindows OS: Windows 11More information:\nI can connect and query just fine in MongoDB Compass. It happens in replit.",
"username": "Philip_Kristoffer_Tosing"
},
{
"code": "",
"text": "It’s been 12 hours. It still isn’t working.In addition:What’s happening?",
"username": "Philip_Kristoffer_Tosing"
},
{
"code": "",
"text": "Hi @Philip_Kristoffer_Tosing,I can connect and query just fine in MongoDB Compass. It happens in replit.I’m not too familiar with replit but I have found the following on our forums regarding replit connectivity to Atlas with “no changes made” as well.It could possibly be related to MongoDB atlas stopped working on replit (Python, pymongo) - #15 by thedankboi - Bug Reports - Replit AskThe only reason I speculate this is because you’re able to connect with Compass which shows to me there isn’t any issues with the cluster. In addition to that, I have not found any issues reported on https://status.cloud.mongodb.com/ recently.Might be worth double checking with replit as well in this case as your troubleshooting looks okay to me at this stage. If you want, you could also contact the atlas in-app chat support team to check if there are any issues with your cluster although since you have connected with Compass it may not be the case. Good to may be double check though.Regards,\nJason",
"username": "Jason_Tran"
}
]
| MongoDB suddenly stopped working (MongoServerSelectionError, ReplicaSetNoPrimary) | 2023-01-16T21:50:52.030Z | MongoDB suddenly stopped working (MongoServerSelectionError, ReplicaSetNoPrimary) | 1,439 |
null | [
"queries",
"spring-data-odm"
]
| [
{
"code": "@Query(\"{'admission_date': {$gte: ?0, $lte: ?1}}\")\npublic List<Patient> findByAdmissionDateBetween(String minDate, String maxDate)\n_id: 1015\npatientName: \"Hemant Srivastav\"\ngender: \"male\"\ndate_of_birth: 2006-07-18T18:30:00.000+00:00\nadmission_date: 2022-07-01T18:30:00.000+00:00\ndiagnosis: \"broken arm\"\n{\n \"errorMessage\": \"Patients between dates not found.\",\n \"errorCode\": 400\n}\n@GetMapping(\"/admissiondates\")\npublic ResponseEntity<List<Patient>> getPatientsBtwnAdmissionDates(\n @RequestParam @DateTimeFormat(iso = DateTimeFormat.ISO.DATE) String minDate,\n @RequestParam @DateTimeFormat(iso = DateTimeFormat.ISO.DATE) String maxDate)\n throws PatientAdmissionException {\nList<Patient>patientsBtwnDatesList = patientService.getPatientsBtwnAdmissionDates(minDate,maxDate);\nreturn new ResponseEntity<>(patientsBtwnDatesList, HttpStatus.OK);\n}\npublic List<Patient> getPatientsBtwnAdmissionDates(String minDate, String maxDate)\n throws PatientAdmissionException {\nList<Patient> patientBtwnDatesList = patientRepository.getPatientsBtwnAdmissionDates(minDate, maxDate);\nif (patientBtwnDatesList.isEmpty() == true) {\n throw new PatientAdmissionException(\"Patients between dates not found.\");\n}\nreturn patientBtwnDatesList;\n}\n",
"text": "I am using @Query annotation for getting documents between specified dates.Sample Document:but I am getting an exception:My Service and Controller method implementation is this.\nController:Service:I am not able to understand what mistake am I making pls HELP!",
"username": "Abhishek_Mathur"
},
{
"code": "findByAdmissionDateBetween(String minDate, String maxDate)minDate and maxDate",
"text": "Hi @Abhishek_Mathur and welcome to the MongoDB community forum!!findByAdmissionDateBetween(String minDate, String maxDate)From the above statement, it seems you are trying to compare the ISODate for the date stored in document to the String in the parameter of the function.\nCould you confirm that my understanding is correct and the datatype in the documents are string instead of ISODate? You may be able to use Compass Schema analyzer for thisIf my understanding is correct, the MongoDB documentation states:For most data types, comparison operators only perform comparisons on fields where the BSON type matches the query value’s type. MongoDB supports limited cross-BSON comparison through Type Bracketing.The recommendation here would be to convert the minDate and maxDate to ISO date format and then make the comparison.Best Regards\nAasawari",
"username": "Aasawari"
}
]
| Question: How to use @Query annotation for MongoDB for fetching documents between dates? | 2023-01-13T05:35:19.162Z | Question: How to use @Query annotation for MongoDB for fetching documents between dates? | 1,614 |
[
"crud"
]
| [
{
"code": " const addArrDecibelHistory = async (body) => {\n const userToFetch = await user.findOneAndUpdate(\n { username: 'lior', password:'lior' },\n \n { $push: {'decibelHistory.$.test': {number : 6} } },\n \n );\n };\n",
"text": "hello\ni am trying to push a item to my nasted array and its not working!\ni am trying with the $ operator but this dont work as well\nif i put decibelHistory.$.number i get error that says :\n“MongoServerError: Plan executor error during findAndModify :: caused by :: The positional operator did not find the match needed from the query.”but if i will write : decibelHistory$.test\nif will not give me this error but it will not update as well\ni will upload also my image of the db\n",
"username": "Lior_aharon"
},
{
"code": "Atlas atlas-b8d6l3-shard-0 [primary] test> db.collection.find()\n[\n {\n _id: ObjectId(\"63c5af9244361187c0ee7b37\"),\n username: 'lior',\n password: 'lior.a98',\n decibleHistory: [ { test: [ '' ] } ],\n timelapse: Long(\"1200\"),\n __v: 0\n }\n]\nAtlas atlas-b8d6l3-shard-0 [primary] test> db.collection.updateOne( { username: 'lior', \"decibleHistory.test\": { $exists: true }}, { $push: { \"decibleHistory.$.test\": { $each: [ { number: 6}], $position: 0}} } )\nAtlas atlas-b8d6l3-shard-0 [primary] test> db.collection.find()\n[\n {\n _id: ObjectId(\"63c5af9244361187c0ee7b37\"),\n username: 'lior',\n password: 'lior.a98',\n decibleHistory: [\n { test: [ { number: 6 }, '' ] }\n ],\n timelapse: Long(\"1200\"),\n __v: 0\n }\n]\n",
"text": "Hi @Lior_aharon and welcome to the MongoDB community forum!!Based on the above dataset shared, I tried to replicate the query in my local environment.Here is how my dataset and the update query looks like:The output for the above query would be the following:Could you check if the above query works for you? If not, could you show us what’s the end result should be?Let us know if you have any further queries.Best Regards\nAasawari",
"username": "Aasawari"
}
]
| $push to specific field | 2023-01-12T23:46:55.694Z | $push to specific field | 1,016 |
null | [
"replication",
"python"
]
| [
{
"code": "",
"text": "Hi,I have an issue with connection to the mongodb using python code. Appreciate if got any solution to this error. Had tried alot of options but did not work.Error below:\nraise ServerSelectionTimeoutError(\npymongo.errors.ServerSelectionTimeoutError: Could not reach any servers in [(‘demo-db-0’, 27017)]. Replica set is configured with internal hostnames or IPs?, Timeout: 30s, Topology Description: <TopologyDescription id: 63c4cb5a9c09397560936c60, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription (‘demo-db-0’, 27017) server_type: Unknown, rtt: None, error=AutoReconnect(‘demo-db-0:27017: [Errno 11001] getaddrinfo failed’)>]>",
"username": "Valerie_Yeo"
},
{
"code": "demo-db-0demo-db-0.example.com",
"text": "‘demo-db-0:27017: [Errno 11001] getaddrinfo failed’)>The client is unable to resolve demo-db-0 to an ip address.Some possible resolutions:",
"username": "chris"
},
{
"code": "",
"text": "The “demo-db-0” is mongo database name.",
"username": "Valerie_Yeo"
}
]
| Could not reach any servers - python | 2023-01-16T04:25:28.347Z | Could not reach any servers - python | 1,200 |
null | [
"aggregation",
"queries",
"node-js",
"atlas-search"
]
| [
{
"code": "[{\n \"sktextsdata\": [\n {\n \"id\": {\n \"name\": \"Tarkasangraha\",\n \"poetry\": \"\",\n \"presentation\": \"SC1:1\",\n \"audioUrl\": [\"SC1:1\", \"SC1:2\", \"SC1:3\", \"SC1:4\", \"SC1:5\"],\n \"content\": [\n {\n \"words\": [\n {\n \"begin\": \"0.000\",\n \"end\": \"13.280\",\n \"id\": \"f000001\",\n \"word\": \"The author of Tarkasangraha is Annambhatta.\"\n }\n ],\n \"line\": \"SC1:1-SC2:0_PCID:0_CID:1\"\n },\n {\n \"words\": [\n {\n \"begin\": \"13.280\",\n \"end\": \"22.240\",\n \"id\": \"f000002\",\n \"word\": \"Tarkasangraha deals with defining the elements of the world.\"\n }\n ],\n \"line\": \"SC1:1-SC2:0_PCID:0_CID:2\"\n },\n {\n \"words\": [\n {\n \"begin\": \"22.240\",\n \"end\": \"31.920\",\n \"id\": \"f000003\",\n \"word\": \"It has 7 chapters.\"\n }\n ],\n \"line\": \"SC1:1-SC2:0_PCID:0_CID:3\"\n }\n ]\n }\n }\n ]\n}]\nlet pipeline = [\n {\n '$search': {\n \"index\": \"skgallery\",\n 'text': {\n 'query': query,\n 'path': 'sktextsdata.content.words.word',\n 'fuzzy': {\n 'maxEdits': 2,\n 'prefixLength': 4,\n 'maxExpansions': 1000\n },\n },\n 'highlight': {\n 'path':'sktextsdata.content.words.word',\n \"maxNumPassages\": 1000\n }\n }\n },\n {\n '$project': {\n '_id': 1,\n 'resHighlights': {\n '$meta': 'searchHighlights'\n },\n 'score': {\n '$meta': 'searchScore'\n },\n 'sktextsdata.name': 1\n }\n }\n ]\n",
"text": "Here, I created Atlas search index for the path “sktextsdata.content.words.word”. When there is a search hit of a certain word, how can I retrieve its parent node that is “sktextsdata.content.line” which is not included in path for not having any relevance? So far my code only retrieves the search results with “highlight” in the pipeline.The document:The pipleline:Any help would be appreciated.",
"username": "Keshav_Melnad"
},
{
"code": "\"“sktextsdata.content.line”$project\"sktextsdata.name\"\"sktextsdata.id.name\"",
"text": "Hi @Keshav_Melnad - Welcome to the community!I created Atlas search index for the path “sktextsdata.content.words.word”Would you be able to provide the Atlas search index definition used in this search?When there is a search hit of a certain word, how can I retrieve its parent node that is “sktextsdata.content.line” which is not included in path for not having any relevance? So far my code only retrieves the search results with “highlight” in the pipeline.Apologies as I am a bit confused here. However, to try get a better understanding - Are you successfully retrieving the document(s) you want but have the \"“sktextsdata.content.line” field missing?On an additional note, I noticed your $project stage attempts to project the field \"sktextsdata.name\" which I cannot see exists (although I do see \"sktextsdata.id.name\" instead). Is this expected?If possible, could you advise what you are expecting as the output based off what search term(s)?Looking forward to your reply.Regards,\nJason",
"username": "Jason_Tran"
}
]
| How to retrieve a parent field when search hit occurs at its child field? | 2023-01-11T09:38:33.380Z | How to retrieve a parent field when search hit occurs at its child field? | 899 |
null | [
"aggregation"
]
| [
{
"code": "$map$getField$map{ $literal: \"$$field.field_id\"}$literal",
"text": "Hi people,New with MongoDB so still learning the ropes. I am currently writing an aggregation pipeline. Not to get into specifics (I can write them later if needed) but I am using $map and inside the map I am using $getField which uses the variable set in the $map step of the pipeline.\nI use { $literal: \"$$field.field_id\"} for field which doesn’t seem to work.\nAm I using it wrong and can $literal accept a variable from map and is there a workaround ?Thanks",
"username": "Petar_Labetic"
},
{
"code": "",
"text": "Here is an example where I use it Mongo playground. It’s a bit of a mess but the las step is where I use it",
"username": "Petar_Labetic"
},
{
"code": "",
"text": "It seems that my issue isn’t that I am using variable in $literal but instead that I am trying to use $getField with a variable field name. Still not sure what the workaround for this is.",
"username": "Petar_Labetic"
},
{
"code": "\"field\": {\n \"$literal\": \"$$content_field.field_id\"\n }\n\"field\": \"$$content_field.field_id\"\n",
"text": "I am not sure what you expect when using $literal but $literal means that what ever follows IS NOT an expression so none of the $ or $$ will be interpreted or evaluated.This means that in your case:will set field literally to the string $$content_field.field_id. It will not try to evaluate the variable.I am not sure of your intent but just usingmight bring you closer to what you want.",
"username": "steevej"
}
]
| Using a $map variable in $literal | 2023-01-16T00:51:48.813Z | Using a $map variable in $literal | 667 |
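For readers who hit the same limitation ($getField has historically required a constant field name), a hedged sketch of the usual workaround: turn the object into k/v pairs with $objectToArray and pick the entry whose key equals the $map variable. The names $content and $$field.field_id are placeholders, not taken from the playground above, and newer server versions may accept an expression for $getField's field directly.

```javascript
// Expression usable inside the $map's "in", assuming an object field "$content"
// and an outer $map variable "field" whose field_id names the key to read.
{
  $let: {
    vars: {
      hit: {
        $first: {
          $filter: {
            input: { $objectToArray: "$content" },
            cond: { $eq: ["$$this.k", "$$field.field_id"] }
          }
        }
      }
    },
    in: "$$hit.v" // the value stored under that dynamic key (missing key -> null)
  }
}
```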
[]
| [
{
"code": "",
"text": "With a series of Google searches, I came across a series of objections about MongoDB for the online store For exampleI am soon to start working on a personal project and am considering to use MongoDB over mysql as I will be using the MEAN stack, any points on why I should use MYSQL instead?And now I have a project that is an online store with nodejs and mongoose In your opinion, according to update 4 that acid and multi doc is supported, is there a problem for such a project (because I am almost finish it :()The MongoDB 4.0 update means that multi-document ACID transactions are general availability, simplifying logic for devs and making it more straightforward to add them to any application that needs them.This Store has about 5,000 products and 1500 visitors per a day now use WooCommerce!",
"username": "Arash_Soft"
},
{
"code": "",
"text": "With a series of Google searches, I came across a series of objections about MongoDB for the online storeWelcome to the MongoDB community @Arash_Soft!When doing research, it is important to check the dates on sources as information can be outdated quickly with a rapidly evolving product like MongoDB. The original discussion you found from mid-2016 predated the addition of some relevant features like a Decimal Data Type in MongoDB 3.4 (released in Nov 2016), multi-document transactions (added for replica sets in MongoDB 4.0 in Aug 2018 and expanded to sharded clusters in MongoDB 4.2 a year later), and Client-Side Field Level Encryption in MongoDB 4.2.Some of the 2016 discussion also presumed that multi-document transactions are a requirement for ecommerce use cases, which is not strictly true for MongoDB depending on your schema design. As noted in the MongoDB 4.0 update you referenced:In MongoDB, the data model is fundamentally different. The document model encourages users to store related data together in a single document. MongoDB, has always supported ACID transactions in a single document and, when leveraging the document model appropriately, many applications don’t need ACID guarantees across multiple documents.MongoDB has been used for ecommerce applications for years prior to the introduction of Decimal types and multi-document transactions, but modern server features significantly improve performance and ease of development.I highly recommend reading through some of the schema design resources including the series on patterns & anti-patterns:You may also be interested in MongoDB Atlas for managed hosting. Atlas undergoes independent verification of platform security, privacy, and compliance controls (see the Trust Center for more info) and has integrated features like a rich full-text search based on Apache Lucene and data visualisations using MongoDB Charts.Regards,\nStennie",
"username": "Stennie_X"
}
]
| Mongodb for ecommerce shop | 2021-01-07T19:35:06.094Z | Mongodb for ecommerce shop | 4,673 |
|
null | []
| [
{
"code": "",
"text": "Hi everyone, I created a word cloud chart and it works fine, the thing is when I use the filters to try and filter some words the chart disappears and stops working until I remove the filters, any ideas?",
"username": "Juan_DIego_Arango"
},
{
"code": "",
"text": "Can you share a screenshot of the chart builder? If the chart is disappearing it generally means you are filtering out all of the data.",
"username": "tomhollander"
}
]
| Word filters in word cloud in mongo charts | 2023-01-16T15:55:04.809Z | Word filters in word cloud in mongo charts | 956 |
null | []
| [
{
"code": "lae0901@DESKTOP-CHT933T:~$ sudo apt-get update\nHit:1 http://archive.ubuntu.com/ubuntu focal InRelease\nHit:2 http://archive.ubuntu.com/ubuntu focal-updates InRelease\nGet:3 http://archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]\nGet:4 https://cli.github.com/packages stable InRelease [3917 B]\nHit:5 https://dl.google.com/linux/chrome/deb stable InRelease\nErr:4 https://cli.github.com/packages stable InRelease\n The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 23F3D4EA75716059\nHit:6 https://deb.nodesource.com/node_16.x focal InRelease\nGet:7 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]\nIgn:8 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 InRelease\nHit:9 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 Release\nFetched 222 kB in 2s (119 kB/s)\nReading package lists... Done\nW: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://cli.github.com/packages stable InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 23F3D4EA75716059\nW: Failed to fetch https://cli.github.com/packages/dists/stable/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 23F3D4EA75716059\nW: Some index files failed to download. They have been ignored, or old ones used instead.\n",
"text": "Running the steps to install mongo community on WSL Ubuntu 20.04 . The “sudo apt-get update” step fails with error “signature could not be verified because public key is not available”.\nI did run “sudo apt-key adv --keyserver”, but I am not sure which keyserver to use on that command.\nWhat to do? I did the same install steps on a different system yesterday and it worked.",
"username": "Stephen_Richter"
},
{
"code": "",
"text": "You need to download the key directly, the instruction pipe it directly to apt-key: Import the public key used by the package management system",
"username": "chris"
},
{
"code": "lae0901@DESKTOP-CHT933T:~$ sudo apt-get install gnupg\nReading package lists... Done\nBuilding dependency tree\nReading state information... Done\ngnupg is already the newest version (2.2.19-3ubuntu2.2).\nThe following package was automatically installed and is no longer required:\n libc-ares2\nUse 'sudo apt autoremove' to remove it.\n0 upgraded, 0 newly installed, 0 to remove and 106 not upgraded.\nae0901@DESKTOP-CHT933T:~$ wget -qO - https://www.mongodb.org/static/pgp/server-6.0.asc | sudo apt-key add -\nOK\nlae0901@DESKTOP-CHT933T:~$ echo \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list\ndeb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse\nlae0901@DESKTOP-CHT933T:~$ sudo apt-get update\nGet:1 https://cli.github.com/packages stable InRelease [3917 B]\nErr:1 https://cli.github.com/packages stable InRelease\n The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 23F3D4EA75716059\nHit:2 https://deb.nodesource.com/node_16.x focal InRelease\nHit:3 https://dl.google.com/linux/chrome/deb stable InRelease\nHit:4 http://archive.ubuntu.com/ubuntu focal InRelease\nHit:5 http://archive.ubuntu.com/ubuntu focal-updates InRelease\nHit:6 http://security.ubuntu.com/ubuntu focal-security InRelease\nHit:7 http://archive.ubuntu.com/ubuntu focal-backports InRelease\nIgn:8 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 InRelease\nHit:9 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 Release\nReading package lists... Done\nW: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://cli.github.com/packages stable InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 23F3D4EA75716059\nW: Failed to fetch https://cli.github.com/packages/dists/stable/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 23F3D4EA75716059\nW: Some index files failed to download. They have been ignored, or old ones used instead.\n",
"text": "I am running the steps to install MongoDB community edition.\nStill getting the public key not available error.",
"username": "Stephen_Richter"
},
{
"code": "Err:1 https://cli.github.com/packages stable InRelease\n The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 23F3D4EA75716059\nW: Failed to fetch https://cli.github.com/packages/dists/stable/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 23F3D4EA75716059",
"text": "W: Failed to fetch https://cli.github.com/packages/dists/stable/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 23F3D4EA75716059Unrelated to MongoDB, I should have picked that up in the first post.Something github related, check that one.",
"username": "chris"
},
{
"code": "",
"text": "thank you. Got the answer here:",
"username": "Stephen_Richter"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Install error. apt-get update. error during signature verification | 2023-01-16T19:24:22.101Z | Install error. apt-get update. error during signature verification | 3,845 |
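For anyone landing here with the same warning: the failing repository is the GitHub CLI one, not MongoDB's. A hedged sketch of one common fix, importing the missing key by the ID shown in the error (note that apt-key is deprecated on newer Ubuntu releases, where a keyring-based setup is preferred):

```bash
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 23F3D4EA75716059
sudo apt-get update
```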
null | []
| [
{
"code": "{\n id: \"docId-1\",\n array1: [\n {id: \"asset-01\"}\n ],\n array2: [\n {prizeId: \"prize1\", image: \"asset-01\"}\n ]\n}\n{\n id: \"docId-1\",\n array1: [\n {id: \"asset-01\", prizeId: \"prize1\"}\n ],\n array2: [\n {prizeId: \"prize1\", image: \"asset-01\"}\n ]\n}\n{\n id: \"docId-1\",\n array1: [\n {id: \"asset-02\", prizeId: \"prize1\"}\n ],\n array2: [\n {prizeId: \"prize1\", image: \"asset-02\"}\n ]\n}\n",
"text": "Hi,I know a document exists for an update operation I want to perform. Also I know one of the array elements exists. I want to update that known array element and an element in another array that potentially might require different match criteria. Both arrays contain objects.Can I use a single update operation to update both arrays when the match criteria for the 2 arrays might be different?\nDoes “arrayFilters” allow you to get an array index for 2 different arrays?Sample documents BEFORE updateWhen array1 does not have match criteria (no prizeId = prize1) matching element for array2When array1 does have match criteria (prizeId = prize1) matching array2Sample document AFTER update (asset-01 → asset-02)For both of the sample documents above, I would like to update the element in array2, identified by “prizeId = prize1”, as well as make sure an element matching “prizeId = prize1” in array1 is updated, or array1 is appended if array1 does not contain a matching element.Thank you for any help!",
"username": "John_Grant1"
},
{
"code": "> db.array.find().pretty()\n{\n \"_id\" : ObjectId(\"63c18e1727fe9408c37b0258\"),\n \"id\" : \"docId-1\",\n \"array1\" : [ ],\n \"array2\" : [\n {\n \"prizeId\" : \"prize1\",\n \"image\" : \"asset-01\"\n }\n ]\n}\n{\n \"_id\" : ObjectId(\"63c18e1727fe9408c37b0259\"),\n \"id\" : \"docId-1\",\n \"array1\" : [\n {\n \"id\" : \"asset-01\",\n \"prizeId\" : \"prize1\"\n }\n ],\n \"array2\" : [\n {\n \"prizeId\" : \"prize1\",\n \"image\" : \"asset-01\"\n }\n ]\n}\n> db.array.updateMany({\"array1\":{$exists:true,$ne:[]},\"array1.prizeId\":\"prize1\",\"array2.prizeId\":\"prize1\"},{$set:{\"array1.$.id\":\"asset-02\",\"array2.$.image\":\"asset-02\"}})\n{ \"acknowledged\" : true, \"matchedCount\" : 1, \"modifiedCount\" : 1 }\n> db.array.find().pretty()\n{\n \"_id\" : ObjectId(\"63c18e1727fe9408c37b0258\"),\n \"id\" : \"docId-1\",\n \"array1\" : [ ],\n \"array2\" : [\n {\n \"prizeId\" : \"prize1\",\n \"image\" : \"asset-01\"\n }\n ]\n}\n{\n \"_id\" : ObjectId(\"63c18e1727fe9408c37b0259\"),\n \"id\" : \"docId-1\",\n \"array1\" : [\n {\n \"id\" : \"asset-02\",\n \"prizeId\" : \"prize1\"\n }\n ],\n \"array2\" : [\n {\n \"prizeId\" : \"prize1\",\n \"image\" : \"asset-02\"\n }\n ]\n}\n",
"text": "Hi @John_Grant1,\nI hope i’ ve understand in the best way your question. below i’ ve simulate your request:And the query for do that:So in this way i’ ve updated only the field asset of the second document from 01 to 02 And the first document remained unchanged.Hoping is useful!!Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Hi @Fabio_Ramohitaj ,Nice to e-meet you and thank you for trying to help!Unfortunately I don’t think you understood the challenge. I want to update a single document. I provided 2 examples of the state of that single document. BOTH examples should have the SAME RESULT after a single update operation. The “first document remained unchanged” is NOT desired.Also please note I made an edit to array1 in example 1. array1 is not empty, but lacks the same criteria as array2.Best,\nJohn",
"username": "John_Grant1"
},
{
"code": "// Given:\nconst params = {\n docId: \"docId-1\",\n prizeId: \"prize1\",\n image: \"asset-id-after-update\",\n oldImage: \"asset-01\",\n}\n\n// Solution:\n\nconst filter1 = {\n id: params.docId,\n \"array1\": {$elemMatch: {$or: [\n {id: params.oldImage},\n {prizeId: params.prizeId}\n ]}},\n // \"array2\": {$elemMatch: {prizeId: params.prizeId}},\n}\n\nconst filter2 = {\n id: params.docId,\n// \"array1\": {$elemMatch: {$or: [\n// {id: params.oldImage},\n// {prizeId: params.prizeId}\n// ]}},\n \"array2\": {$elemMatch: {prizeId: params.prizeId}},\n}\n\nconst update1 = {\n $set: {\n \"array1.$\": {id: params.image, prizeId: params.prizeId},\n // \"array2.$.image\": params.image,\n },\n}\n\nconst update2 = {\n $set: {\n //\"array1.$\": {id: params.image, prizeId: params.prizeId},\n \"array2.$.image\": params.image,\n },\n}\n\ndb.getCollection('my-docs').updateOne(filter1, update1)\ndb.getCollection('my-docs').updateOne(filter2, update2)\n",
"text": "Here is what I’ve tried",
"username": "John_Grant1"
},
{
"code": "{ _id: ObjectId(\"63c18e1727fe9408c37b0259\"),\n id: 'docId-1',\n array1: [ { id: 'asset-02', prizeId: 'prize1' } ],\n array2: [ { prizeId: 'prize1', image: 'asset-03' } ] }\narrayFilters = [\n { \"filter1.id\" : \"asset-02\" } ,\n { \"filter2.image\" : \"asset-03\" }\n]\ndbécollection.updateOne( \n { \"id\" : \"docId-1\" } ,\n { \"$set\" : {\n \"array1.$[filter1].result\" : 123 ,\n \"array2.$[filter2].result\" : 456\n }} ,\n { arrayFilters }\n)\n{ _id: ObjectId(\"63c18e1727fe9408c37b0259\"),\n id: 'docId-1',\n array1: [ { id: 'asset-02', prizeId: 'prize1', result: 123 } ],\n array2: [ { prizeId: 'prize1', image: 'asset-03', result: 456 } ] }\n",
"text": "Does “arrayFilters” allow you to get an array index for 2 different arrays?Yes, for example, starting with documents:The following update:Should update the document to:",
"username": "steevej"
},
{
"code": "",
"text": "@steevej Your answer helped me arrive at a final solution. I use your 2-filter technique to get an index to each array for $set, and one of the filters uses the $or operator to handle the 2 scenarios array1 can be in.I will mark this as solved. Thank you!",
"username": "John_Grant1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to update elements of 2 arrays in the same document | 2023-01-13T16:49:48.556Z | How to update elements of 2 arrays in the same document | 973 |
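A sketch reconstructing the final approach described in the last replies: two arrayFilters identifiers, one of which uses $or so it matches whichever state array1 is in. Field names and values are taken from the examples earlier in the thread; the exact filter shape should be verified against real data, and a separate step is still needed to append to array1 when no element matches at all.

```javascript
db.collection.updateOne(
  { id: "docId-1" },
  {
    $set: {
      // overwrite the matching element of array1 with the new asset
      "array1.$[a1]": { id: "asset-02", prizeId: "prize1" },
      // update the image of the matching element of array2
      "array2.$[a2].image": "asset-02"
    }
  },
  {
    arrayFilters: [
      { $or: [{ "a1.id": "asset-01" }, { "a1.prizeId": "prize1" }] },
      { "a2.prizeId": "prize1" }
    ]
  }
);
```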
null | [
"aggregation",
"queries",
"node-js"
]
| [
{
"code": "",
"text": "I’m doing a lookup from another collection to bring over an array of _ids, from which I’ll then $match a given property of my document. If I just $project:1 it, I’m seeing an array, clear as day. But when I try to use it as a filter with match, for MongoDB it is not an array. What gives?",
"username": "Gonzalo_Munoz"
},
{
"code": "",
"text": "Hello @Gonzalo_Munoz ,Welcome to The MongoDB Community Forums! To understand your use case better, could you please share below details:Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "I’m sorry, thanks for your reply but I had to move on and I ended up filtering the data server side.",
"username": "Gonzalo_Munoz"
}
]
| "$in needs an array" but the argument is clearly an array | 2023-01-16T16:15:57.158Z | “$in needs an array” but the argument is clearly an array | 723 |
null | [
"aggregation"
]
| [
{
"code": "{\nname: \"bicycle\"\ncategory: [ \"transport\", \"sport\" ],\nstatus: \"present\" or \"deleted\"\n}\nconst data = await Product.aggregate([\n {$unwind: \"$category\"},\n {$group: {_id: \"$category\", quantity: {$sum: 1}}}\n ])\n const data = await Product.aggregate([\n {$unwind: \"$category\"},\n {$group: {_id: \"$category\", quantity: {\n $sum: { $cond: { if: { \"status\": \"present\" }, then: 1, else: 0}}\n }}}\n ])\n",
"text": "DB item:I need result - categoryName: items quantityBut I tried to get only “present” items.result is the same !\nWhy my condition doesn’t work ?",
"username": "Aleksander_Podmazko"
},
{
"code": "const getCategories = async(req, res) => {\n const data = await Product.aggregate([\n {$unwind: \"$category\"},\n {$group: {_id: \"$category\", quantity: {$sum: {\n $cond: [ {$eq: [\"$status\", \"present\"]}, 1, 0]\n }}}}\n ])\n",
"text": "Thats work",
"username": "Aleksander_Podmazko"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Aggregation unwind & group by condition | 2023-01-16T15:42:42.070Z | Aggregation unwind & group by condition | 828 |
null | [
"node-js",
"python",
"golang"
]
| [
{
"code": "",
"text": "Hello everyone,\nMyself Kailash Kejriwal, a CS student from IIIT Gwalior, India, currently in my pre-final year. I love to contribute to open-source and have also been a contributor in the Google Summer of Code 2022. I am proficient with NodeJS, Golang, Python and also good with blockchain. More of me at kailashk.meI have been using MongoDB for more than a year and now want to contribute to the MongoDB community. Hope to learn and contribute in this journey.Thank you",
"username": "Kailash_Kejriwal"
},
{
"code": "",
"text": "Hey @Kailash_Kejriwal,\nWelcome to MongoDB Community!We are excited to have you here and we love your enthusiasm to learn and give back to the community.Let us know if there is something specific you are looking at and we will guide you in the right direction Thanks\nHarshit",
"username": "Harshit"
},
{
"code": "",
"text": "Hello Harshit,\nI am willing to contribute to MongoDB and want to know how should I start with the contributions?",
"username": "Kailash_Kejriwal"
},
{
"code": "",
"text": "Hey there, Kailash! Great to meet you! I’m also a Google Summer of Code alumnus. high fives",
"username": "webchick"
},
{
"code": "",
"text": "Hey @Kailash_Kejriwal,Sorry for the delayed response here!Would love to understand your interests and in which area are you looking to contribute (for ex: Docs, Drivers, Server, Testing, Q&A, Example Apps) to then help detail the right contribution opportunity.In the meantime below are some links and project links that you can explore:Each project will have information about how to contribute. We recommend starting with a small issue that solves a problem for you and having some discussion with the project collaborators (for example, on the relevant issue or discussion list) to try to ensure this is a desired change.You can also participate in some of our community programs:Contribute to the community by answering questions in the forums: We have some amazing forum badges that help you elevate to becoming a Forum Elder based on your activity. You can start by looking at the following discussion categories.Participating in our #100daysofcode Challenge: Join, learn and motivate other community members to learn with you in your learning journey.Contribute to your local MongoDB Community by volunteering, speaking, or leading your local MongoDB User Groups(MUG)Let us know if this is helpful and please share your interests so that we can help you better.",
"username": "Harshit"
}
]
| Hello, I'm Kailash | 2023-01-09T20:04:57.897Z | Hello, I’m Kailash | 2,068 |
null | [
"node-js",
"atlas-functions"
]
| [
{
"code": "\nError: Unable to open a realm at path '/workspace/mongodb-realm/products-gjstr/server-utility/metadata/sync_metadata.realm.lock'. Please use a path where your app has read-write permissions.\n",
"text": "I have google cloud function with mongo node js sdk where I call realm function to get the data.\nDuring function execution sdk tries to create Mongodb-realm folder and some files there for sync purposes (why it was done in this way? I do not want it happen at all, for me it looks like security issue\n)As I understood it’s not possible to prevent this folder being created. I need somehow give permissions for function to create this folder. Can not find how to do it. Any recommendations?Also is it possible to call the realm function via http? I just need to get the data and I do not care about sync sdk capabilities and so on",
"username": "Denys_Medvediev"
},
{
"code": "https://realm.mongodb.com/api/client/v2.0/app/<app-id>/locationhttps://<hostname>/api/client/v2.0/app/<app-id>/functions/call{\n \"name\": \"<function name>\",\n \"arguments\": [<argument list>]\n}\n",
"text": "Hi @Denys_Medvediev,why it was done in this way? I do not want it happen at all, for me it looks like security issueThe SDK is supposed to work with clients (especially mobile), not from a backend function, and it keeps a local copy of the DB to enable the offline functionality.You can try to use the Realm Web SDK, that doesn’t, at this time, support the offline functionality, and allows you to call functions.Also is it possible to call the realm function via http?Yes, it is: you must of course do what the SDK does behind the scenes first:",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongodb-realm folder is not created on google cloud function | 2023-01-15T12:23:08.893Z | Mongodb-realm folder is not created on google cloud function | 1,015 |
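Putting the two endpoints from the answer together, a hedged curl sketch of calling an App Services function over plain HTTP. Anonymous authentication is assumed and must be enabled on the app; other providers work the same way through their own login endpoint, and the exact URLs should be checked against the current App Services documentation.

```bash
# 1. Find the app's regional hostname
curl -s "https://realm.mongodb.com/api/client/v2.0/app/<app-id>/location"

# 2. Log in (anonymous provider shown) and keep the access_token from the response
curl -s -X POST "https://<hostname>/api/client/v2.0/app/<app-id>/auth/providers/anon-user/login"

# 3. Call the function with that token
curl -s -X POST "https://<hostname>/api/client/v2.0/app/<app-id>/functions/call" \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{"name": "<function name>", "arguments": []}'
```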
null | [
"queries"
]
| [
{
"code": "{\n \"other_info\" : \"foo\",\n \"table\": {\n \"name1\": 1,\n \"name2\": 1,\n \"name3\": 1,\n \"name4\": 0,\n \"name5\": 1,\n \"name6\": 1,\n \"name7\": 1\n }\n}\n{\n \"other_info\" : \"bar\",\n \"table\": {\n \"name1\": 0,\n \"name2\": 1,\n \"name3\": 4,\n \"name4\": 1\n }\n}\n",
"text": "Hello!\nI have many documents like this in the collectionI have an array NAMES= [“name4”,“name5”,“name7”]i want to query it so that i find all documents that have any of the values in the list as a key where its value is greater than 0Match ANY item in NAMES where value of found NAME $gt 0How can i accomplish this?",
"username": "Doge_Lee"
},
{
"code": "\"tables.v\" : { \"$gt\" : 0 }\n{ \"$or\" : [\n { \"table.name4\" : { \"$gt\" : 0 } } ,\n { \"table.name5\" : { \"$gt\" : 0 } } ,\n { \"table.name7\" : { \"$gt\" : 0 } }\n] }\n",
"text": "The main reason having dynamic field names is not recommended is that it complicates simple things like this.With the attribute pattern this is easy to solve. A simple query likewould suffice.One way to do it with your current schema would be to use aggregation with a $set stage that uses $objectToArray to essentially dynamically convert your table into the attribute pattern. The a $match that uses the simple query above should work.Another way is to use map on NAMES to create a big “$or” query that would end up looking like:",
"username": "steevej"
}
]
| Match any item in array to key in object greater than 0 | 2023-01-15T07:32:11.946Z | Match any item in array to key in object greater than 0 | 1,637 |
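A sketch of the $objectToArray approach described in the answer, using the NAMES list from the question; the collection name is a placeholder.

```javascript
const NAMES = ["name4", "name5", "name7"];

db.collection.aggregate([
  // turn the dynamic keys of "table" into an array of { k, v } pairs
  { $set: { _tableArr: { $objectToArray: "$table" } } },
  // keep documents where any listed key has a value greater than 0
  { $match: { _tableArr: { $elemMatch: { k: { $in: NAMES }, v: { $gt: 0 } } } } },
  // drop the helper field from the output
  { $unset: "_tableArr" }
]);
```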
null | [
"student-developer-pack"
]
| [
{
"code": "",
"text": "Hi @Liker_Boon,I have recieved the coupon code from the github developer package but whenever I apply the coupon it gives out an error \" The coupon is either applied before start date or after end date.\"Thank you",
"username": "luongdinhduc0000_N_A"
},
{
"code": "",
"text": "Same here. Got any way to fix it?",
"username": "Soren_Blank"
},
{
"code": "",
"text": "Hi and welcome to the forums!It sounds like your code might be expired. When were you given the code?",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "I received it yesterday on github",
"username": "luongdinhduc0000_N_A"
},
{
"code": "",
"text": "I have the same issue as the above 3 students!",
"username": "Rach_Pra"
},
{
"code": "",
"text": "@luongdinhduc0000_N_A @Soren_Blank @Rach_PraHi all and again, apologies for issue you’re experiencing.I’ve responded to each of you individually via DM. Please review your DMs and follow-up so that we can help troubleshoot.",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "@Aiyana_McConnell I am also getting the same error. Can you please help me on this.",
"username": "koushikpuppala"
},
{
"code": "",
"text": "Hi there, like many of those in this thread I believe that your code has expired. I’ve reached out to you via DM to collect some additional information so that I can help you.",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "@Aiyana_McConnell I am also getting the same error. can you please help ?",
"username": "afd-aus_N_A"
},
{
"code": "",
"text": "Hi there and welcome to the forums!I just followed-up with you via DM ",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "Hi,\nThe same thing happened to me I also received a coupon code from the GitHub Developer package but the code is already in use, “which means the coupon code was already expired”",
"username": "Rooney_Roc"
},
{
"code": "",
"text": "@Rooney_RocHi there and welcome to the forums! I’ve reached out to you via DM to try to troubleshoot. Did you apply your Atlas credit code to a MongoDB instance? Are you trying to apply the credit code to a different MongoDB instance? If so, please be aware that you cannot transfer free credits between MongoDB instances. Once you’ve applied the credit code, you can only use those free credits in the MongoDB instance you applied them to.",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "If you’re having issues with your Atlas code and reply to this thread, please be aware that the MongoDB offices will be closed for the holidays from December 23rd to December 27th. Expect a response on or after the 27th. Happy Holidays! ",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "Same Problems here.\nplease help",
"username": "_N_A73"
},
{
"code": "",
"text": "Hi @_N_A73, welcome to the forums! I will reach out to you via DM to help resolve your issue.",
"username": "Aiyana_McConnell"
}
]
| Getting error with the github student coupon code | 2022-11-20T11:17:53.665Z | Getting error with the github student coupon code | 3,709 |
null | [
"queries"
]
| [
{
"code": "",
"text": "mongodump --uri “mongodb+srv://m001-student:[email protected]/sample_supplies”uncaught exception: SyntaxError: unexpected token: identifier :\n@(shell):1:12\nthis is the error i am getting and it is frustating pllzzz help me",
"username": "Ganesh_Kulkarni"
},
{
"code": "",
"text": "mongodump --uri “mongodb+srv://m001-student:[email protected]/sample_supplies”I suspect that you are running this command in mongo shell rather than a command terminal.Post a screenshot of what you are doing that shows the issue you are having. A screenshot will help us help you better.",
"username": "steevej"
},
{
"code": "",
"text": "@Ganesh_Kulkarni, still awaiting fora screenshot of what you are doing that shows the issue you are havingWe cannot help you without further information. However, if your issue is resolve please share the solution as a courtesy to other users of this forum.",
"username": "steevej"
},
{
"code": "",
"text": "\nScreenshot_20230115_1150201790×183 7.95 KB\n\ni am trying to import a json file using this command , but it is showing this error , what can i do",
"username": "_MR_ARyA_N_A"
},
{
"code": "",
"text": "Hi @_MR_ARyA_N_A,\nAs have suggested @steevej, you’re runnig the command mongoimport from the mongoshell and it is the wrong way!!\nYou must run the mongoimport command from the bash shell or CMD so that the mongoimport binary is retrieved from the machine.Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "showing the same error",
"username": "_MR_ARyA_N_A"
},
{
"code": "",
"text": "Please show us a screenshot of latest run",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "problem solved !! the issue was that i didn’t had mongoimport file in my bin folder .",
"username": "_MR_ARyA_N_A"
}
]
| Unexpected token: identifier : | 2021-12-15T17:03:47.776Z | Unexpected token: identifier : | 8,592 |
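For reference, running the import from an operating-system shell (not from mongosh) looks roughly like this; the URI, collection and file name are placeholders, and --jsonArray is only needed when the file contains a single JSON array.

```bash
mongoimport --uri "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/<database>" \
  --collection <collection> \
  --file data.json \
  --jsonArray
```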
null | [
"atlas-cli"
]
| [
{
"code": "atlas config init ",
"text": "Hello !I’m want to whitelist the IP of my CircleCI runner then delete the IP in the whitelist. The goal is to dump locally my database to use it during test.The issue is that if I want to use API authentication in CLI I have to use atlas config init and this command will be executed in a script, I can’t prompt the keys during the script. How can I authenticate to Atlas using Atlas CLI without have to manually prompt some credentials?",
"username": "Tom_Boscher"
},
{
"code": "--projectId--orgIdconfig.toml",
"text": "Hey @Tom_Boscher!How can I authenticate to Atlas using Atlas CLI without have to manually prompt some credentials?You should be able to do this via a profile.You can save your frequently-used connection settings as profiles. Profiles store the project IDs, organization IDs, and, optionally, API keys to use in future Atlas CLI sessions. To save time, you can specify a profile instead of using the --projectId and --orgId flags with each command. The Atlas CLI stores your profiles in a configuration file called config.toml .",
"username": "Justin_Jenkins"
},
{
"code": "",
"text": "Thank you for your answer, I saw about profile but isn’t there a solution without having to store a file?",
"username": "Tom_Boscher"
},
{
"code": "",
"text": "I saw about profile but isn’t there a solution without having to store a file?Maybe I’m not following … but how could you write a script that doesn’t prompt for a key, but also somehow has the key without writing that key somewhere (either to a config, or in a script)?With the right permissions writing the key to a file shouldn’t be a problem, if that is the concern.",
"username": "Justin_Jenkins"
},
{
"code": "",
"text": "My issue is that my CircleCI runner data won’t be persistent so I will have to store the file in my repo and I prefer to avoid this solution for security reason.EDIT: I found that there is environment variables for Atlas CLI",
"username": "Tom_Boscher"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How can I login using atlas config init during a script | 2023-01-13T19:42:53.271Z | How can I login using atlas config init during a script | 1,453 |
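A sketch of the environment-variable approach mentioned in the last reply, suitable for a CI step. The variable names follow the Atlas CLI documentation, but the access-list commands and flags are an assumption and should be checked against the installed CLI version.

```bash
# Credentials injected as CircleCI project environment variables
export MONGODB_ATLAS_PUBLIC_API_KEY="$ATLAS_PUBLIC_KEY"
export MONGODB_ATLAS_PRIVATE_API_KEY="$ATLAS_PRIVATE_KEY"
export MONGODB_ATLAS_PROJECT_ID="$ATLAS_PROJECT_ID"

# Allow the runner's current IP for the duration of the job...
atlas accessLists create --currentIp

# ...run mongodump / tests here...

# ...then remove the entry again
atlas accessLists delete "$(curl -s https://checkip.amazonaws.com)" --force
```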
null | [
"aggregation",
"charts"
]
| [
{
"code": " const sdk = new ChartsEmbedSDK({\n baseUrl: 'This is where I've placed my Chart url'\n });\n\n const chart = sdk.createChart({ chartId: 'This is where I've placed my chart id',\n height: \"400px\",\n width:\"100%\",\n showAttribution:false,\n background:\"transparent\",\n labels:false,\n autoRefresh:true,\n filter: { \"user\": { \"$match\": '63c18143c9379120a15b935e' } }\n\n}); \n\nasync function refresh(){\n await chart.render(document.getElementById('chart'));\n await chart.setAutoRefresh(true);\n await chart.setMaxDataAge(30);\n}\n\nrefresh()\n",
"text": "Hi there,I’m new to MongoDB charts and have done a search for answers via the community but cannot find the one I am looking for.I have set up my chart as unauthenticated and using 2 fields to display data:I also want to filter via the field ‘user’ which contains the user id for the logged-in user.If I filter this via the MongoDB chart dashboard it works fine. But if I set this via the SDK, it doesn’t show the chart at all on my web application.I’ve tried to test this using:*filter: { “user”: { “$match”: ‘63c18143c9379120a15b935e’ } }\n*chart.setFilter({“user”: { “$match”: ‘63c18143c9379120a15b935e’ }}.I’ve tried without the $match for both also {“user”: “63c18143c9379120a15b935e”} and nothing is displayed.It works fine without setting the filters.Can someone help and explain what I’m doing wrong, do I need a token?Many thanksKirstieHere’s the code:",
"username": "Kirstie_Hayes"
},
{
"code": " {“user”: { $oid: “63c18143c9379120a15b935e”} }\n",
"text": "Hi @Kirstie_Hayes -You don’t want to include the $match in the query.\nAssuming the value you are searing on is an ObjectID, you’ll either need to decorate it like this:Or I think you can use a BSON library to instantiate an ObjectID instance directly.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Hi @tomhollanderIt is an ObjectID and that worked, thanks so much for your help.Kirstie",
"username": "Kirstie_Hayes"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongodb chart filter not working via the SDK | 2023-01-14T12:35:38.416Z | Mongodb chart filter not working via the SDK | 1,300 |
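A sketch of the accepted answer in SDK form, using the bson package to build a real ObjectId instead of the extended-JSON literal; the package import style may differ depending on the bundler, and the ids below come from the thread.

```javascript
import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";
import { ObjectId } from "bson";

const sdk = new ChartsEmbedSDK({ baseUrl: "<your charts base URL>" });

const chart = sdk.createChart({
  chartId: "<your chart id>",
  // filter on the ObjectId stored in the "user" field
  filter: { user: new ObjectId("63c18143c9379120a15b935e") }
});

await chart.render(document.getElementById("chart"));
```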
null | [
"aggregation"
]
| [
{
"code": "{\n $dateDiff:{startDate:{$toDate:\"1976-01-18\"}, endDate:\"$$NOW\",unit:\"year\"}\n}\n$dateDiffstartDateendDateunitsyearyears",
"text": "Hello,I think there is an error in the implementation of $dateDiff aggregator operator. If I write something like this:it return 47 and not 46.\nThe documentation says what I am expecting:The $dateDiff expression returns the integer difference between the startDate and endDate measured in the specified units . Durations are measured by counting the number of times a unit boundary is passed. For example, two dates that are 18 months apart would return 1 year difference instead of 1.5 years .Anyone else seeing this?Thanks in advance",
"username": "Roberto_Trapletti"
},
{
"code": "",
"text": "I think it’s correct - think of both dates truncating to January 1st of the same year and then subtract them. I get 47 years, don’t you?Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Thank you Asya! But I cannot find where it says that truncates the dates before subtract. The example listed says only: Difference 18 months → 1 year difference (and not 2). This way I understood that it does the subtract and then get the duration in unit (with floor)",
"username": "Roberto_Trapletti"
},
{
"code": "",
"text": "I think the key is the phrase “Durations are measured by counting the number of times a unit boundary is passed.” so 1.5 years is only 1 year if the year boundary (Jan 1) is passed once, if it’s passed twice then it’s 2 years. So from December 28th 2022 to today is one year (because we pass year boundary), same as January 2nd 2022 to today.Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Ok but if it is really implemented this way, I would update the documentation because the example show something else. I would like to use this feature for age calculation but this way is unusable… Thanks anyway",
"username": "Roberto_Trapletti"
}
]
| $dateDiff (aggregation) behavior mismatch | 2023-01-12T16:10:14.192Z | $dateDiff (aggregation) behavior mismatch | 1,131 |
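For readers who want an age calculation despite the boundary-counting behaviour, a hedged sketch: take the year difference and subtract one when the birthday has not yet occurred this year. The birthDate field is hypothetical and the $dayOfYear comparison ignores the leap-year edge case.

```javascript
db.people.aggregate([
  {
    $set: {
      age: {
        $subtract: [
          { $dateDiff: { startDate: "$birthDate", endDate: "$$NOW", unit: "year" } },
          {
            $cond: [
              // birthday still ahead this year: one boundary too many was counted
              { $gt: [{ $dayOfYear: "$birthDate" }, { $dayOfYear: "$$NOW" }] },
              1,
              0
            ]
          }
        ]
      }
    }
  }
]);
```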
null | [
"node-js"
]
| [
{
"code": "",
"text": "i face this error when i connect cloud db :- MongoServerSelectionError: connect ETIMEDOUT 13.234.58.23:27017",
"username": "Irfan_Ansari"
},
{
"code": "",
"text": "Have you whitelisted your IP?",
"username": "Ramachandra_Tummala"
},
{
"code": "Server selection timed out after ${serverSelectionTimeoutMS} ms",
"text": "yes i whitelisted my IP But not work itit show me this error:-Express server is up and running on port 5000\nwe encountered MongoServerSelectionError: connect ETIMEDOUT 35.154.119.205:27017\nC:\\Users\\Irfan Ansari\\Desktop\\IssueTracker\\node_modules\\mongodb\\lib\\sdam\\topology.js:293\nconst timeoutError = new error_1.MongoServerSelectionError(Server selection timed out after ${serverSelectionTimeoutMS} ms, this.description);\n^MongoServerSelectionError: connect ETIMEDOUT 35.154.119.205:27017\nat Timeout._onTimeout (C:\\Users\\Irfan Ansari\\Desktop\\IssueTracker\\node_modules\\mongodb\\lib\\sdam\\topology.js:293:38)\nat listOnTimeout (node:internal/timers:559:17)\nat processTimers (node:internal/timers:502:7) {\nreason: TopologyDescription {\ntype: ‘ReplicaSetNoPrimary’,\nservers: Map(3) {\n‘ac-uy1whk5-shard-00-00.tytajh9.mongodb.net:27017’ => ServerDescription {\naddress: ‘ac-uy1whk5-shard-00-00.tytajh9.mongodb.net:27017’,\ntype: ‘Unknown’,\nhosts: ,\npassives: ,\narbiters: ,\ntags: {},\nminWireVersion: 0,\nmaxWireVersion: 0,\nroundTripTime: -1,\nlastUpdateTime: 619568,\nlastWriteDate: 0,",
"username": "Irfan_Ansari"
},
{
"code": "",
"text": "Then it could be ISP issue\nAre you using VPN/antivirus/proxy/firewall?\nDisable them and see if you can connect\nor\nTry from a different network",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "i use windows 10 and in its already set VPN and firewall on and proxy off .since you say turn off VPN, proxy, and firewall so i did it but not connected till now.and you say it could ISP issue but yes i think so it would be untill not understand how to solve it.",
"username": "Irfan_Ansari"
},
{
"code": "",
"text": "pls visit this link . his quest. like my quest. and i don’t understand how they solve it.",
"username": "Irfan_Ansari"
}
]
| MongoServerSelectionError: connect ETIMEDOUT 13.234.58.23:27017 | 2023-01-13T05:12:00.413Z | MongoServerSelectionError: connect ETIMEDOUT 13.234.58.23:27017 | 6,611 |
[]
| [
{
"code": "",
"text": "charts are not rendering this has been happening very frequently . please help me fix this\n\nScreenshot 2023-01-05 at 2.30.32 PM1238×582 41.8 KB\n",
"username": "Sasidhar_Reddy_Balu"
},
{
"code": "",
"text": "Hi @Sasidhar_Reddy_Balu - Welcome to the community.Please contact the Atlas support team via the in-app chat providing this screenshot. You can additionally raise a support case if you have a support subscription. The Atlas chat support team will have more insight into the Atlas cluster associated with this chart.Best Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| Mongo Charts Not Rendering | 2023-01-05T09:01:09.148Z | Mongo Charts Not Rendering | 644 |
|
[
"aggregation",
"atlas-search"
]
| [
{
"code": "",
"text": "Hi,\nI am trying the Atlas Search and working on \" Lesson 4: Using $search and compound operators/ Practice\". While trying to work on sample_supplies.sales collection with the example provided, The atlas aggregation UI is not returning the preview/ result.Please find the Screen shot Attached. In ss1, I tried using “path”: “items” and in ss2 I tried using “path”: “items.name”. I also tried to check the query by putting the condition in match stage, which worked.\nss1953×395 49 KB\n\n\nss2953×328 35.6 KB\nLooking forward for your help here as I am unable to figure out the issue.",
"username": "Harsh_Chauhan1"
},
{
"code": "$search",
"text": "Hi @Harsh_Chauhan1,Can you provide the search index definition associated with this $search?Regards,\nJason",
"username": "Jason_Tran"
}
]
| MongoDB Atlas Search Lesson 4: Using $search and compound operators/ Practice | 2023-01-13T11:56:18.888Z | MongoDB Atlas Search Lesson 4: Using $search and compound operators/ Practice | 1,196 |
|
null | []
| [
{
"code": "",
"text": "Hello,\nI am using Mongo to house multi-tenant data with each tenant having their own unique set of fields. I have to support searching and sorting on all fields. At this point it seems like Atlas Search would help me meet most of my requirements (with the current & upcoming releases).Having said that, Would I be able to configure alerts that would let me know when the index is approaching a possible explosion scenario? What should I do to avoid this?Thanks much,\nPrasad",
"username": "Prasad_Kini"
},
{
"code": "",
"text": "Hi Prasad,There are several Atlas Search alerts which can be configured. More information on the Fix Atlas Search Issues documentation.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Jason,\nI don’t see any events that would alert me of possible index explosions. Is it named something else?Thanks,\nPrasad",
"username": "Prasad_Kini"
},
{
"code": "",
"text": "index explosionsWhat should I do to avoid this?Could you clarify what you mean by index explosions? Are you referring to Document Mapping Explosions? If so, as the documentation suggests, you can upgrade your cluster or use a static mapping that does not index all fields in your data.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Jason,\nYes, I am referring to Document Mapping Explosions.I am aware that upgrading the cluster will help in avoiding such explosions. I am trying to understand if this can be done proactively so that we don’t run into such a scenario where the explosion has happened and we are scrambling to upgrade the cluster to get the application back up.Please note that my application requires indexing all the fields in my data.Thanks,\nPrasad",
"username": "Prasad_Kini"
},
{
"code": "mongot",
"text": "I am trying to understand if this can be done proactively so that we don’t run into such a scenario where the explosion has happened and we are scrambling to upgrade the cluster to get the application back up.Thank you for confirming Prasad. There are multiple search memory alerts you could configure. Perhaps you can have those configured in addition to monitoring the metrics utilised by the mongot process on the metrics over time.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Jason,\nThere are numerous alerts that can be configured on Atlas. Could you please give some pointers on which ones would help me catch such mongot crashes well before they happen?Thanks for the quick responses!Prasad",
"username": "Prasad_Kini"
}
]
| Mongo Atlas Search Index Explosion | 2023-01-12T22:59:30.559Z | Mongo Atlas Search Index Explosion | 775 |
null | [
"graphql",
"schema-validation"
]
| [
{
"code": "",
"text": "Hi,I am trying to create documents with a GeoJSON Polygon via Graphql but I cannot create the correct JSON schema to support this.I have GeoJSON Points working as it is an array of numbers but the Polygon requires nested arrays and Mongo cannot generate the graphql schema even though my JSON schema is valid.any help appreciated",
"username": "Sebastian_Cook"
},
{
"code": "{\n \"title\": \"Field\",\n \"properties\": {\n \n \"_appId\": {\n \"bsonType\": \"string\"\n },\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"active\": {\n \"bsonType\": \"bool\"\n },\n \n \"name\": {\n \"bsonType\": \"string\"\n },\n\n \"polygons\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"array\",\n \"minItems\": {\n \"$numberInt\": \"2\"\n },\n \"maxItems\": {\n \"$numberInt\": \"2\"\n },\n \"items\": {\n \"type\": \"number\"\n }\n }\n },\n \n\n \"required\": [\n \"_appId\"\n ]\n }",
"text": "I have the same problem here, but with realmDB. When I add these nested arrays in my Realm Schema it throw the following message error:array property “polygons” cannot be of type array to be sync-compatible",
"username": "Virmerson"
},
{
"code": "\"location\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"coordinates\": {\n \"type\": \"array\",\n \"items\": {\n \"type\": \"array\",\n \"items\": {\n \"type\": \"array\",\n \"items\": {\n \"bsonType\": \"double\"\n }\n }\n }\n },\n \"type\": {\n \"type\": \"string\"\n }\n }\n},",
"text": "Hi Virmerson,I have just got it working using the below JSON-schema and the real-web SDK I can create/upsert documents with a geojson polygon, however grapql is unable to generate the schema so I cannot use graphql until I figure out how to ignore a specific field validation.Ultimately I want to find documents with geoIntersect between the user location polygon and the document location polygon so I will test this next ",
"username": "Sebastian_Cook"
},
{
"code": "{\n \"$schema\": \"http://json-schema.org/draft-04/schema#\",\n \"id\": \"http://json-schema.org/geojson/geometry.json#\",\n \"title\": \"geometry\",\n \"description\": \"One geometry as defined by GeoJSON\",\n \"type\": \"object\",\n \"required\": [ \"type\", \"coordinates\" ],\n \"oneOf\": [\n {\n \"title\": \"Point\",\n \"properties\": {\n \"type\": { \"enum\": [ \"Point\" ] },\n \"coordinates\": { \"$ref\": \"#/definitions/position\" }\n }\n },\n {\n \"title\": \"MultiPoint\",\n \"properties\": {\n \"type\": { \"enum\": [ \"MultiPoint\" ] },\n \"coordinates\": { \"$ref\": \"#/definitions/positionArray\" }\n }\n },\n {\n \"title\": \"LineString\",\n \"properties\": {\n \"type\": { \"enum\": [ \"LineString\" ] },\n \"coordinates\": { \"$ref\": \"#/definitions/lineString\" }\n }\n },\n {\n \"title\": \"MultiLineString\",\n \"properties\": {\n \"type\": { \"enum\": [ \"MultiLineString\" ] },\n \"coordinates\": {\n \"type\": \"array\",\n \"items\": { \"$ref\": \"#/definitions/lineString\" }\n }\n }\n },\n {\n \"title\": \"Polygon\",\n \"properties\": {\n \"type\": { \"enum\": [ \"Polygon\" ] },\n \"coordinates\": { \"$ref\": \"#/definitions/polygon\" }\n }\n },\n {\n \"title\": \"MultiPolygon\",\n \"properties\": {\n \"type\": { \"enum\": [ \"MultiPolygon\" ] },\n \"coordinates\": {\n \"type\": \"array\",\n \"items\": { \"$ref\": \"#/definitions/polygon\" }\n }\n }\n }\n ],\n \"definitions\": {\n \"position\": {\n \"description\": \"A single position\",\n \"type\": \"array\",\n \"minItems\": 2,\n \"items\": [ { \"type\": \"number\" }, { \"type\": \"number\" } ],\n \"additionalItems\": false\n },\n \"positionArray\": {\n \"description\": \"An array of positions\",\n \"type\": \"array\",\n \"items\": { \"$ref\": \"#/definitions/position\" }\n },\n \"lineString\": {\n \"description\": \"An array of two or more positions\",\n \"allOf\": [\n { \"$ref\": \"#/definitions/positionArray\" },\n { \"minItems\": 2 }\n ]\n },\n \"linearRing\": {\n \"description\": \"An array of four positions where the first equals the last\",\n \"allOf\": [\n { \"$ref\": \"#/definitions/positionArray\" },\n { \"minItems\": 4 }\n ]\n },\n \"polygon\": {\n \"description\": \"An array of linear rings\",\n \"type\": \"array\",\n \"items\": { \"$ref\": \"#/definitions/linearRing\" }\n }\n }\n}\n",
"text": "I found this, which is super useful:",
"username": "Rob_Elliott"
},
{
"code": "",
"text": "I too would like to leverage a json-schema to validate geojson objects within a document.This project provides ‘The overall GeoJSON schema’, but the resulting json file is a bit long and it doesn’t appear that MongoDB supports the ability to include another schema file reference.",
"username": "Nicholas_Haas"
}
]
| What is the correct JSON schema to support a GeoJSON polygon? | 2021-02-27T12:15:05.274Z | What is the correct JSON schema to support a GeoJSON polygon? | 6,855 |
null | [
"java"
]
| [
{
"code": "One of the EventListeners had an uncaught exception\njava.lang.NullPointerException: Cannot invoke \"java.util.Date.getTime()\" because \"lastjoin\" is null\n\tat de.arbeeco.statcord.util.Data.awardVcPoints(Data.java:257)\n\tat de.arbeeco.statcord.events.GuildVoiceEvents.onGuildVoiceUpdate(GuildVoiceEvents.java:25)\n\tat net.dv8tion.jda.api.hooks.ListenerAdapter.onEvent(ListenerAdapter.java:424)\n\tat net.dv8tion.jda.api.hooks.InterfacedEventManager.handle(InterfacedEventManager.java:96)\n\tat net.dv8tion.jda.internal.hooks.EventManagerProxy.handleInternally(EventManagerProxy.java:88)\n\tat net.dv8tion.jda.internal.hooks.EventManagerProxy.handle(EventManagerProxy.java:70)\n\tat net.dv8tion.jda.internal.JDAImpl.handleEvent(JDAImpl.java:171)\n\tat net.dv8tion.jda.internal.handle.VoiceStateUpdateHandler.handleGuildVoiceState(VoiceStateUpdateHandler.java:215)\n\tat net.dv8tion.jda.internal.handle.VoiceStateUpdateHandler.handleInternally(VoiceStateUpdateHandler.java:58)\n\tat net.dv8tion.jda.internal.handle.SocketHandler.handle(SocketHandler.java:39)\n\tat net.dv8tion.jda.internal.requests.WebSocketClient.onDispatch(WebSocketClient.java:984)\n\tat net.dv8tion.jda.internal.requests.WebSocketClient.onEvent(WebSocketClient.java:870)\n\tat net.dv8tion.jda.internal.requests.WebSocketClient.handleEvent(WebSocketClient.java:848)\n\tat net.dv8tion.jda.internal.requests.WebSocketClient.onBinaryMessage(WebSocketClient.java:1023)\n\tat com.neovisionaries.ws.client.ListenerManager.callOnBinaryMessage(ListenerManager.java:385)\n\tat com.neovisionaries.ws.client.ReadingThread.callOnBinaryMessage(ReadingThread.java:276)\n\tat com.neovisionaries.ws.client.ReadingThread.handleBinaryFrame(ReadingThread.java:996)\n\tat com.neovisionaries.ws.client.ReadingThread.handleFrame(ReadingThread.java:755)\n\tat com.neovisionaries.ws.client.ReadingThread.main(ReadingThread.java:108)\n\tat com.neovisionaries.ws.client.ReadingThread.runMain(ReadingThread.java:64)\n\tat com.neovisionaries.ws.client.WebSocketThread.run(WebSocketThread.java:45)\n",
"text": "I made a Discord leveling bot but for some Reason it sometimes fails to set a Value properly… The code where it should set it is here: Statcord/Data.java at d7d1c50ae0b8422089c2ae8ac44eef0a124f75e5 · Arbee4ever/Statcord · GitHub … I haven’t been able to reproduce it on my testing Instance, but on the public instance, I see this error:quite often… Any idea why?",
"username": "Arbee_N_A"
},
{
"code": "",
"text": "I changed the behaviour of it, so it doesnt delete it anymore, and something fixed it not being set… but idk why",
"username": "Arbee_N_A"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| A value doesn't get set sometimes | 2023-01-05T16:08:54.661Z | A value doesn’t get set sometimes | 731 |
null | [
"queries",
"node-js",
"next-js"
]
| [
{
"code": "\tlet totalJobs = 0;\n\n\ttotalJobs = await mongoDb\n\t\t.collection(\"jobs\")\n\t\t.find(query)\n\t\t.sort({ publicationDate: -1 })\n\t\t.countDocuments();\n",
"text": "Hello there!I see that cursor.count is deprecated.\nHowever, I cannot seem to find what can replace it when we are using it with the type ‘FindCursor<WithId>’.\nIn other words, collection.estimatedDocumentCount and collection.countDocuments is not usable in this context.I am trying to get the count of documents from a collection with a specific sort and query (find):What should I used isntead of .count() /.countDocuments();?Thanks in advanced!",
"username": "Marving_Moreton"
},
{
"code": "db.collection.aggregate([\n { $match: <query> },\n { $group: { _id: null, n: { $sum: 1 } } }\n])\n",
"text": "Hi @Marving_Moreton,\nYou can use this way to perform the same operation:As mentioned in the documentation.Hoping It help you!Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "totalJobs = await mongoDb\n\t\t.collection(\"jobs\")\n\t\t.countDocuments(query);\n",
"text": "with the type ‘FindCursor’.I really do not understand by the above and why it could matter when counting documents.I really do not understand why you are sorting when counting documents. It could potentially be slow for no added benefits. The count will be the same with or without sorting.You may call countDocuments with the query. So your statement should be",
"username": "steevej"
},
{
"code": "",
"text": "Hello,Correct. Apologies, still very junior here!I have refactored my db calls and has not realized that I did not require to sort for the count Thanks for your help guys!",
"username": "Marving_Moreton"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB replacement cursor.count on type collection.countDocuments | 2023-01-14T23:09:56.299Z | MongoDB replacement cursor.count on type collection.countDocuments | 1,747 |
[
"aggregation",
"replication",
"python",
"performance",
"change-streams"
]
| [
{
"code": "pipeline = [{\"$addFields\": {\n \"tmpfields\": {\n \"$objectToArray\": \"$updateDescription.updatedFields\"}\n }},\n {\"$match\": {\"tmpfields.k\": {\n \"$nin\": [\"updated_at\"] if cfg.CONF.is_resource == \"True\" else [\n \"\"]}}}\n ]\ncursor = client[collection].watch(full_document='updateLookup',\n pipeline=audit_filter if cfg.CONF.is_audit == \"True\" else pipeline, start_after=resume_token_new)\n",
"text": "We are running Change Stream for some time but have started seeing behaviour where the CPU Spikes for long duration of time.\nSetup: We are using MongoDB 4.4 version Replica Set, primary and two replicas\n12 cpu , 252GB RAM and 2TB data disk each.\nand using Python to open a changestream.App Dynamics\n\nimage1477×167 26.1 KB\nKindly Assist.",
"username": "Anupam_Vashisth"
},
{
"code": "",
"text": "I’m experiencing the same thing on our production deployment. It seems to use an entire core spinning, while also generating a ton of “slow query” log messages.",
"username": "Geoffrey_Challen"
}
]
| High CPU Usage 90% with Change stream and 20% without Change stream | 2022-12-21T04:34:43.826Z | High CPU Usage 90% with Change stream and 20% without Change stream | 1,811 |
|
null | [
"aggregation",
"atlas-search"
]
| [
{
"code": "",
"text": "Hello,\nWhat is the best way to run grouping aggregations (sum, average etc) on the documents returned by $search? Adding a $group stage is not very performant and I have been unable to find an alternative.Thanks,\nPrasad",
"username": "Prasad_Kini"
},
{
"code": "",
"text": "You can use facets and count to get totals/sum. But Atlas Search does not have an out of the box solution for average. Feel free to vote on this feedback item as it relates.",
"username": "Elle_Shwer"
},
{
"code": "",
"text": "Hi Elle,\nCould you please share some documentation on getting sums in Atlas search? I was able to find it for counts, but not for sums.Thanks,\nPrasad",
"username": "Prasad_Kini"
}
]
| Month Atlas Search & Grouping | 2023-01-10T19:24:11.380Z | Month Atlas Search & Grouping | 945 |
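A sketch of the facet/count suggestion, using $searchMeta to return per-bucket counts in a single pass. The collection, index and field names are placeholders, and the faceted field must be indexed with a facet-capable type (for example stringFacet).

```javascript
db.listings.aggregate([
  {
    $searchMeta: {
      index: "default",
      facet: {
        operator: { text: { query: "beach", path: "description" } },
        facets: {
          propertyType: { type: "string", path: "property_type" }
        }
      }
    }
  }
]);
```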
null | []
| [
{
"code": "",
"text": "Hi,Are the questions from the exam prep the same as the ones from exam? Or are just similar?",
"username": "Mugurel_Frumuselu"
},
{
"code": "",
"text": "Hi @Mugurel_Frumuselu,The certification study guide contains the exam objectives and the questions will be based on these objectives, assessing your understanding of MongoDB. It will help you to prepare for the exam.It is recommended to take practice exams in order to familiarize yourself with the pattern before taking the actual exam.Please let me know if you have any follow-up questions on this.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
]
| Is the questions from the exam prep are same? | 2023-01-10T14:43:59.846Z | Is the questions from the exam prep are same? | 1,619 |
null | [
"mongodb-shell",
"spring-data-odm"
]
| [
{
"code": "",
"text": "I want to fetch exactly matching array nested document only which tag_name is “Testnew”.{\n“_id”: {\n“$oid”: “63c142fe7d89cf0a303e70f9”\n},\n“organization_id”: {\n“$numberLong”: “200020”\n},\n“tags”: [\n{\n“tag_name”: “Testnew”,\n“status”: true,\n},\n{\n“tag_name”: “Testnew”,\n“status”: true,\n}\t,\n{\n“tag_name”: “Test”,\n“status”: true,\n}\n],\n}{ “tags” : { “$elemMatch” : { “tag_name” : “Testnew”, “status” : true}}}\nit returns full document with all nested array those are also not matched.My expect result is:\n“tags”: [\n{\n“tag_name”: “Testnew”,\n“status”: true,\n},\n{\n“tag_name”: “Testnew”,\n“status”: true,\n}Please share any query so that I can find exact matches data,",
"username": "susantakumar_pradhan"
},
{
"code": "{ \"tags\" : { \"$filter\" : {\n \"input\" : \"$tags\" ,\n \"cond\" : {\n \"$and\" : [\n { \"$eq\" : [ \"$$this.tag_name\" , \"Testnew\" ] } ,\n { \"$eq\" : [ \"$$this.status\" , true ] }\n ]\n }\n} } }\n",
"text": "Please read Formatting code and log snippets in posts and update your sample documents, code and expected result.If you were only interested in the first element you could use a projection that repeats your $elemMatch{ “tags” : { “$elemMatch” : { “tag_name” : “Testnew”, “status” : true}}}If you are interested in all elements that match, which your expect result shows, it is a little bit more complicated. You need a $filter projection such as:",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Fetch only matching nested Array document only | 2023-01-14T16:46:40.305Z | Fetch only matching nested Array document only | 1,933 |
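A hypothetical end-to-end use of steevej's $filter projection inside a find(); the collection name is a placeholder, and expression-based projections like this require MongoDB 4.4 or newer.

```javascript
// Return only documents that contain a matching element, and trim the tags
// array down to the matching elements in the projection.
db.collection.find(
  { tags: { $elemMatch: { tag_name: "Testnew", status: true } } },
  { tags: { $filter: {
      input: "$tags",
      cond: { $and: [
        { $eq: [ "$$this.tag_name", "Testnew" ] },
        { $eq: [ "$$this.status", true ] }
      ] }
  } } }
)
```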
null | [
"data-modeling"
]
| [
{
"code": "{\n_id: Object_id,\nlang: \"english\"\ntitle: \"song title\",\nartistid: \"artistid\",\nsongURL: \"songURL\",\n}\n{\n_id: Object_id,\nrefid: Object_id,\nref: \"songs\",\nlang: \"spanish\",\ntitle: \"titulo de la cancion\",\n}\n",
"text": "Using a “songs” collection, and a generic “translations” collection, which can contain translations of documents from other collections.If I need to recreate some of the “songs” documents (as I am retrieving them from an external api), then the Object_id would change.In all documentation I see that by design, it’s better to use the Object_id of the document as a reference, but it is volatile. (I have also read that Sharding might change the Object_id).Why is not a best practice to have our own internal unique ID?\nAfter recreating the songs, I will lose the reference to their translations.Song Document example:Translation Document example:",
"username": "Mnt_Bl"
},
{
"code": "",
"text": "Hi @Mnt_Bl,\nI will try to answer to some of your questione:Why is not a best practice to have our own internal unique ID?I raccomend you to read: https://www.mongodb.com/docs/manual/reference/method/ObjectId/After recreating the songs, I will lose the reference to their translationsIt’s normal because the 4 byte of timestamp will change surely when you create a new documento songHoping Is It useful for you!Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "thanks @Fabio_Ramohitaj,My question is more about the need of creating and maintaining a different unique id to identify documents.I think there has to be some considerations when best practices are defined, such as:Another case can be when dealing with countries ISO 3166. Parts of a country document can change (an alpha id, a country code, or even the country name). But a change can have a major impact to the application.In this case, I am also thinking to create my own internal id, so in case a country document changes, I can easily update the document and the impact to the overall application would be low.But I am quite new in MongoDB, and not sure if this approach is correct, and other smart & experienced people have a well-founded solution.",
"username": "Mnt_Bl"
}
]
| Problem with referenced documents after re-creating collection documents | 2023-01-15T07:55:26.291Z | Problem with referenced documents after re-creating collection documents | 607 |
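A sketch of the "own internal id" idea discussed above, not taken from the thread. The field names songKey/refKey are invented for illustration; the point is that references use a stable application-level key backed by a unique index, so regenerated _id values do not break links.

```javascript
// Unique index on the stable key.
db.songs.createIndex({ songKey: 1 }, { unique: true })

db.songs.insertOne({
  songKey: "ext-12345",          // stable key, e.g. the id from the external API
  lang: "english",
  title: "song title",
  artistid: "artistid"
})

// Translations reference the stable key instead of the volatile _id,
// so re-importing songs (which creates new _id values) keeps the link intact.
db.translations.insertOne({
  refKey: "ext-12345",
  ref: "songs",
  lang: "spanish",
  title: "titulo de la cancion"
})
```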
null | [
"python",
"field-encryption"
]
| [
{
"code": "",
"text": "I am starting to explore CSFLE and currently using community version 4.4\nWe are open to moving to enterprise version or Atlas.Our application is an enterprise application. We have implemented multi-tenancy with one DB per tenant model. From our application (Python) we maintain a pool of DB connections (rather driver does) and we just switch the MongoDB db to use based on the tenant-ID. That means, we can just keep one connection pool and use it for any tenant.With CSFLE, I am wondering the following:\nHow can we keep tenant specific encryption keys under tenant DB and still maintain connection pool and easily switch DBs?The automatic encryption/decryption parameter (which hold info about key namespace, db+collection) needs to be passed to MongoClient consturctor, that is when MongoDB connection is created.If we do keep tenant specific keys under tenant db, it seems, we have to create individual MongoDB connection per tenant. Which seems wrong to me.Any suggestion? Am I missing anything?\nHelp will be greatly appreciated.",
"username": "WootCloud_Admin"
},
{
"code": "",
"text": "Yes, your understanding that you would need one connection for each tenant is correct. As the FLE keys are stored under the tenant database, you would have to initialize the connection with the data key id specific for each tenant. MongoDB should have had better support for such use cases.",
"username": "Anvesh_Reddy_Patlolla"
},
{
"code": "",
"text": "In my case, I already have one connection for each tenant. But the connections are created by request for the tenant in the token.\nMy problem is that I’ll need do get each tenant key for each time I want to connect to the database.\nBut I realize that when you use a cloud KMS, you don’t have to put key data in the connection, the database itself will do this. (It’s right?)",
"username": "Gabriel_Anderson"
}
]
| CSFLE and Multi-tenancy, encryption key per tenant | 2022-08-10T21:35:51.015Z | CSFLE and Multi-tenancy, encryption key per tenant | 2,586 |
null | [
"sharding"
]
| [
{
"code": "",
"text": "",
"username": "1111003"
},
{
"code": "",
"text": "hi,",
"username": "Serkan_Ozdemir"
}
]
| What is the official recommendation regarding usage of sharded cluster balancer in production environment? | 2022-05-24T17:21:36.600Z | What is the official recommendation regarding usage of sharded cluster balancer in production environment? | 2,211 |
null | [
"java",
"connecting"
]
| [
{
"code": "",
"text": "Hi,I am currently trying to create a automated test. One step is to verify in mongoDB. I have the SRV connection string and was able to connect to the cluster using Robo3T so i guess the credentials are fine.\nI tried creating a function in my java project to connect to mongoDB but was having error:INFO org.mongodb.driver.cluster - Exception in monitor thread while connecting to server XXXXXXX.mongodb.net:1058\ncom.mongodb.MongoSocketWriteException: Exception sending message\nat com.mongodb.internal.connection.InternalStreamConnection.translateWriteException(InternalStreamConnection.java:550)\nat com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:432)\nat com.mongodb.internal.connection.InternalStreamConnection.sendCommandMessage(InternalStreamConnection.java:272)\nat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:256)\nat com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)\nat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)\nat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:105)\nat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:62)\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:129)\nat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\nat java.base/java.lang.Thread.run(Thread.java:834)\nCaused by: javax.net.ssl.SSLException: Connection reset\nat java.base/sun.security.ssl.Alert.createSSLException(Alert.java:127)\nat java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:350)\nat java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:293)\nat java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:288)\nat java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:144)\nat java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1408)\nat java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1314)\nat java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:440)\nat java.base/sun.security.ssl.SSLSocketImpl.ensureNegotiated(SSLSocketImpl.java:819)\nat java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1189)\nat com.mongodb.internal.connection.SocketStream.write(SocketStream.java:99)\nat com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:429)\n… 9 more\nSuppressed: java.net.SocketException: Connection reset by peer: socket write error\nat java.base/java.net.SocketOutputStream.socketWrite0(Native Method)\nat java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)\nat java.base/java.net.SocketOutputStream.write(SocketOutputStream.java:150)\nat java.base/sun.security.ssl.SSLSocketOutputRecord.encodeAlert(SSLSocketOutputRecord.java:81)\nat java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:381)\n… 19 more\nCaused by: java.net.SocketException: Connection reset\nat java.base/java.net.SocketInputStream.read(SocketInputStream.java:186)\nat java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)\nat java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:476)\nat 
java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:470)\nat java.base/sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:160)\nat java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:110)\n… 16 more",
"username": "Jimbo_Peji"
},
{
"code": "",
"text": "Hi Jimbo,I am also facing same issue . how did u fixed this issue?Thanks\nPrabhu",
"username": "prabhu_padala"
}
]
| MongoDB connection reset using java driver | 2021-11-27T01:40:18.265Z | MongoDB connection reset using java driver | 2,907 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "",
"text": "Hi experts,I want to have “$or” query to combine multiple clauses. Each clause returns a recordset, and for each record set, only the first record should be returned and merged into final result.Is it possible to do it in mql?Best regards,\nJennifer",
"username": "Yinhua_Zhao"
},
{
"code": "",
"text": "You can likely do this using the Aggregation Pipeline. If you can share a couple of sample documents and an example of what the expected output would be it would be easier to provide further assistance.",
"username": "alexbevi"
},
{
"code": "`db[\"mydata\"].find({\n $or:[\n { \"device\": \"device1\",\n \"reader\": \"x\", \n \"measurement\": \"temperature\", \n \"SourceTimeUtc\": { \"$lte\":ISODate(\"2023-01-11T06:07:47.280Z\") },\n $sort: { SourceTimeUtc:-1},\n $limit: 1\n },\n { \"device\": \"device1\",\n \"reader\": \"y\", \n \"measurement\": \"temperature\", \n \"SourceTimeUtc\": { \"$lte\":ISODate(\"2023-01-11T06:07:47.280Z\") },\n $sort: { SourceTimeUtc:-1},\n $limit: 1\n },\n { \"device\": \"device1\",\n \"reader\": \"x\", \n \"measurement\": \"humidity\", \n \"SourceTimeUtc\": { \"$lte\":ISODate(\"2023-01-11T06:07:47.280Z\") },\n $sort: { SourceTimeUtc:-1},\n $limit: 1\n }\n ]\n }\n )`\n",
"text": "Hi ,\nHere is the fake mql I need. But it has grammar error.",
"username": "Yinhua_Zhao"
},
{
"code": "$facettest.foodb.foo.drop();\ndb.foo.insertMany([\n { device: \"device1\", reader: \"x\", measurement: \"temperature\", sourceTimeUtc: ISODate(\"2023-01-11T06:07:47.280Z\") },\n { device: \"device1\", reader: \"x\", measurement: \"temperature\", sourceTimeUtc: ISODate(\"2023-01-12T06:07:47.280Z\") },\n { device: \"device2\", reader: \"x\", measurement: \"temperature\", sourceTimeUtc: ISODate(\"2023-01-11T06:07:47.280Z\") },\n { device: \"device1\", reader: \"y\", measurement: \"temperature\", sourceTimeUtc: ISODate(\"2023-01-11T06:07:47.280Z\") },\n { device: \"device1\", reader: \"y\", measurement: \"temperature\", sourceTimeUtc: ISODate(\"2023-01-12T06:07:47.280Z\") },\n { device: \"device2\", reader: \"y\", measurement: \"temperature\", sourceTimeUtc: ISODate(\"2023-01-11T06:07:47.280Z\") },\n { device: \"device1\", reader: \"x\", measurement: \"humidity\", sourceTimeUtc: ISODate(\"2023-01-11T06:07:47.280Z\") },\n { device: \"device1\", reader: \"x\", measurement: \"humidity\", sourceTimeUtc: ISODate(\"2023-01-12T06:07:47.280Z\") },\n { device: \"device2\", reader: \"x\", measurement: \"humidity\", sourceTimeUtc: ISODate(\"2023-01-11T06:07:47.280Z\") }\n]);\ndb.foo.createIndex({ device: 1, sourceTimeUtc: -1 });\n{ device: \"device1\", sourceTimeUtc: { $lte: ISODate(\"2023-01-11T06:07:47.280Z\") }$facet$setUnion$replaceRootdb.foo.aggregate([\n { $match: { device: \"device1\", sourceTimeUtc: { $lte: ISODate(\"2023-01-11T06:07:47.280Z\") } } },\n { $facet: {\n \"results1\": [\n { $match: { reader: \"x\", measurement: \"temperature\"} },\n { $sort: { sourceTimeUtc: -1 } },\n { $limit: 1 }\n ],\n \"results2\": [\n { $match: { reader: \"y\", measurement: \"temperature\"} },\n { $sort: { sourceTimeUtc: -1 } },\n { $limit: 1 }\n ],\n \"results3\": [\n { $match: { reader: \"x\", measurement: \"humidity\"} },\n { $sort: { sourceTimeUtc: -1 } },\n { $limit: 1 }\n ],\n }},\n { $project: { results: { $setUnion: [ \"$results1\", \"$results2\", \"$results3\" ] } } },\n { $unwind: \"$results\" },\n { $replaceRoot: { newRoot: \"$results\" } }\n])\n[\n {\n \"_id\": {\n \"$oid\": \"63c17b47415a20047f425a07\"\n },\n \"device\": \"device1\",\n \"reader\": \"x\",\n \"measurement\": \"temperature\",\n \"sourceTimeUtc\": {\n \"$date\": \"2023-01-11T06:07:47.280Z\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"63c17b47415a20047f425a0a\"\n },\n \"device\": \"device1\",\n \"reader\": \"y\",\n \"measurement\": \"temperature\",\n \"sourceTimeUtc\": {\n \"$date\": \"2023-01-11T06:07:47.280Z\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"63c17b47415a20047f425a0d\"\n },\n \"device\": \"device1\",\n \"reader\": \"x\",\n \"measurement\": \"humidity\",\n \"sourceTimeUtc\": {\n \"$date\": \"2023-01-11T06:07:47.280Z\"\n }\n }\n]\n",
"text": "Hi @Yinhua_Zhao,It looks like what you’re looking to do is effectively merge the results of 3 different filters into a single result set. As it appears you are targeting a single collection you could do something like the following using $facet.First, we’re just going to setup a collection (name test.foo) with some sample data and create an index the pipeline can use to more efficiently retrieve the results.Next we’ll filter the collection for common documents that all filter permutations can use ({ device: \"device1\", sourceTimeUtc: { $lte: ISODate(\"2023-01-11T06:07:47.280Z\") }). The $facet stage allows you to define 3 new filters that can be applied to the results from the previous stage, which we’ll then combine into a single array (using $setUnion) and return as the result of the pipeline by unwinding the resulting array and replacing the pipeline’s output (using $replaceRoot).For the sample documents above the result should be:",
"username": "alexbevi"
},
{
"code": "({ device: 1, reader: 1, measurement :1, sourceTimeUtc: -1}) db[\"mydata\"].find({ \"device\": \"device1\",\n \"reader\": \"x\", \n \"measurement\": \"temperature\", \n \"SourceTimeUtc\": { \"$lte\":ISODate(\"2023-01-11T06:07:47.280Z\") \n }).sort(SourceTimeUtc:-1).limit(1)\n }\n",
"text": "Hi Alex,Thank you so much! Now I know the $facet should functionally work.But I have a concern about the performance. Each device can contains 100, 000 rows to 10,000,000 of data. I just learned that $facet stage don’t use index. Given the query normally query the latest data, there will be too much data flowing into $facet stage because typically the $match stage filter out all the data belong to current device at that time.In current data model, the index is ({ device: 1, reader: 1, measurement :1, sourceTimeUtc: -1}). So below query is very fast because keys scanned is 1 in the execution plan.In this new data model, the index is ({ device: 1, sourceTimeUtc: -1}). The keys scanned could be 10,000,000 rows. The performance will be a major concern.Best regards,Jennifer Zhao",
"username": "Yinhua_Zhao"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Merge first record into the final result of $or query | 2023-01-11T13:05:45.986Z | Merge first record into the final result of $or query | 562 |
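One possible way to address the index concern raised in the last reply, not discussed in the thread: on MongoDB 4.4+ each $unionWith sub-pipeline starts with its own $match and can use the existing compound index, avoiding the large scan that feeds $facet. Collection and field names follow the thread's examples.

```javascript
db.mydata.aggregate([
  { $match: { device: "device1", reader: "x", measurement: "temperature",
              SourceTimeUtc: { $lte: ISODate("2023-01-11T06:07:47.280Z") } } },
  { $sort: { SourceTimeUtc: -1 } },
  { $limit: 1 },
  { $unionWith: { coll: "mydata", pipeline: [
      { $match: { device: "device1", reader: "y", measurement: "temperature",
                  SourceTimeUtc: { $lte: ISODate("2023-01-11T06:07:47.280Z") } } },
      { $sort: { SourceTimeUtc: -1 } },
      { $limit: 1 }
  ] } },
  { $unionWith: { coll: "mydata", pipeline: [
      { $match: { device: "device1", reader: "x", measurement: "humidity",
                  SourceTimeUtc: { $lte: ISODate("2023-01-11T06:07:47.280Z") } } },
      { $sort: { SourceTimeUtc: -1 } },
      { $limit: 1 }
  ] } }
])
// Each branch can use the { device, reader, measurement, SourceTimeUtc } index,
// so every branch scans only a few keys instead of all data for the device.
```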
null | []
| [
{
"code": "",
"text": "I have a single server with an open connection to the DB, and let’s say at most 6 connections running on CI builds at any given time.Lately I’ve been getting this email from mongo atlas twice a day.What gives?Could I be missing something in the atlas config?Also, where can I see all the open connections in Atlas gui?",
"username": "Adam_Goldman"
},
{
"code": "",
"text": "Hi @Adam_Goldman - Welcome to the community There isn’t a feature at the moment in the Atlas UI that will show you all the specific active connections to your MongoDB instance. You can however, review the available metrics (specifically Connections) to see if the connections surge up instantaneously or are gradually building up.If you haven’t already done so, perhaps going over the Fix Connection Issues documentation may be of use. As mentioned in the docs:Connection alerts are generally a symptom of a larger problem. Employing one of the strategies outlined above will fix the immediate problem, but a permanent solution usually requires either:When you close all application / instances connecting to the cluster, does the connections go back down to 0 within a few minutes? Also, vice versa, do the connections go up to ~500 after starting the application(s)?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks Jason.I see I have at night time, when there are no builds running and I should have only one connection open (the 1 production server I have), but I have 20 on one shard, and 6 and 6 on the other two.Any ideas why is that?On the docs it says when you restart your application all connections automatically close, but that doesn’t seem to be the case.Is there a protocol for debugging this?\nimage2528×750 81.5 KB\n",
"username": "Adam_Goldman"
},
{
"code": "",
"text": "Hi @Adam_Goldman,I see I have at night time, when there are no builds running and I should have only one connection open (the 1 production server I have), but I have 20 on one shard, and 6 and 6 on the other two.Any ideas why is that?On the docs it says when you restart your application all connections automatically close, but that doesn’t seem to be the case.Are you aware of any other servers connecting to the Atlas instance? You can try removing all network access list entries temporarily, wait a few minutes, and then see if the connection counts have dropped on the Atlas metrics page for your cluster.Just to clarify, is this an M0 tier cluster? I assumed based off the number of connections alert you’ve prompted but would like to confirm.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "In idle time I have 4 actually:1 Production server\n2 Production Admin server\n3 Dev server\n4 Dev server AdminAnd rarely more than 2-3 PRs getting merged in parallel, so it should peak around 7.Is there a “mongo”/“atlas” protocol of debugging this issue? I can imagine this happens daily to many users.Here’s what I found on my end:Only production server up: 10 connectionsThat’s as of writing these lines right nowAnything else I can do to give insight here?",
"username": "Adam_Goldman"
},
{
"code": "",
"text": "In idle time I have 4 actually:1 Production server\n2 Production Admin server\n3 Dev server\n4 Dev server AdminAnd rarely more than 2-3 PRs getting merged in parallel, so it should peak around 7.Since we have no visibility into the actual inner workings of your deployment and CI system, it’s difficult to say what’s the “proper” number of connections. However, connections from the various driver(s) can differ but the connection counts can be affected depending on certain settings. For e.g. (from the Connection Monitoring and Pooling specs):If minPoolSize is set, the Connection Pool MUST be populated until it has at least minPoolSize total Connections. This MUST occur only while the pool is “ready”. If the pool implements a background thread, it can be used for this. If the pool does not implement a background thread, the checkOut method is responsible for ensuring this requirement is met.Is there a “mongo”/“atlas” protocol of debugging this issue? I can imagine this happens daily to many users.Assuming this is an M0 tier cluster, if you have removed all entries from the network access entries list and still believe the connections remain abnormally high then you can contact the in-app chat support team to check if there are any issues with the cluster. This troubleshooting step was to try and identify if the connections were coming from the application(s) for the M0 tier cluster.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Yes this is the free tier M0 cluster\nI have removed the network access list and indeed the connections dropped to 0.Since we have no visibility into the actual inner workings of your deployment and CI systemNot sure what you mean here Deployment shouldn’t affect connections AFAIK, I have only one production server and one admin production server, and when they get deployed it should be just one more connection while we replace the old server with the new one (unless I’m missing something?).And even when the CI isn’t active eg outside working hours, I still have sometimes 60+ connections some how.So given that information, what would you suggest should be my next step?",
"username": "Adam_Goldman"
},
{
"code": "mongoshminPoolSize500mongosh",
"text": "Yes this is the free tier M0 cluster\nI have removed the network access list and indeed the connections dropped to 0.Thank you for confirming Adam.Deployment shouldn’t affect connections AFAIK, I have only one production server and one admin production server, and when they get deployed it should be just one more connection while we replace the old server with the new one (unless I’m missing something?).And even when the CI isn’t active eg outside working hours, I still have sometimes 60+ connections some how.Connections do not necessarily work in this manner with specific regards to MongoDB and connection count. The following example is not exactly the same but may shed some more light onto this. In this example, I have a particular cluster which I connect to with mongosh from a single “server”. I connect with minPoolSize value of 500. After connecting successfully, we can see the connections increase by ~500 without even running a single operation from the mongosh connection:\nimage6166×644 127 KB\nIn this example, I have a “single server” (or single client) connecting to my MongoDB replica set yet it alone has increased the connection count by ~500 without performing any operations.So given that information, what would you suggest should be my next step?Note that since this is a shared instance, what you’re seeing here may not reflect what you’ll see in a dedicated instance, and thus what normally happens in a dedicated instance may not apply in this case. There are certain limitations of shared instances, thus it may not fit all use cases. Anecdotally, I have seen M0 clusters connection count having a delay to reflect the actual connection into the servers, so it’s possible that if your CI system have multiple connect/disconnect routines, M0’s connection counting lagged behind and thus do not reflect the true connection count. This discrepancy adds up, and thus you’re seeing the warning.With regards to your particular set up, you may wish to investigate how the CI/CD system(s) are connecting to the MongoDB instance. As mentioned previously, there also may be slight variation in each of the driver(s) and how they handle connections to the MongoDB instance(s). The root cause may be as simple as connections not being closed properly.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "mongoose.createConnection",
"text": "Thanks for the detailed responses @Jason_Tranyou may wish to investigate how the CI/CD system(s) are connecting to the MongoDB instanceNot sure what you mean by that, I literally do mongoose.createConnection once, and then I close the server, which according to mongodb docs should close all connections as well.The root cause may be as simple as connections not being closed properly.So again I’m not sure how this could happen since in the docs, at least as far as I understand, all connections are closed once the server closes, which happens multiple times a day every time we deploy.",
"username": "Adam_Goldman"
},
{
"code": "client.close()Connections to your cluster have exceeded 500",
"text": "Hi @Adam_GoldmanIf I understand correctly, this is your issue:Am I following this correctly so far?If yes, then I think this may be an artifact of an M0 cluster and how it counts connections. It’s a shared cluster, meaning that the telemetry on it may be lagging behind the actual numbers. This was alluded to by @Jason_Tran earlier (emphasis mine):Anecdotally, I have seen M0 clusters connection count having a delay to reflect the actual connection into the servers, so it’s possible that if your CI system have multiple connect/disconnect routines, M0’s connection counting lagged behind and thus do not reflect the true connection count. This discrepancy adds up, and thus you’re seeing the warning.In my mind there are a couple of ways to investigate that theory further:Please let us know if you have further questions.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "So as I said I have a few apps:\n1 Production server\n2 Production Admin server\n3 Dev server\n4 Dev server AdminI do NOT call client.close, since according to the docs connections get closed on server shut off, and in anyway my connectinos jump to 60 on the first time I open my DB to connections.",
"username": "Adam_Goldman"
}
]
| Connections to your cluster(s) have exceeded 500 - With a single server | 2023-01-04T13:50:10.156Z | Connections to your cluster(s) have exceeded 500 - With a single server | 2,747 |
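A sketch, not from the thread, of the pool settings that influence how many connections a single Node.js process holds open against Atlas; the URI is a placeholder. Each MongoClient keeps pooled connections plus monitoring connections per host, which is why even idle applications contribute to the Atlas connection count.

```javascript
const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net", {
  minPoolSize: 0,        // do not pre-populate the pool with idle connections
  maxPoolSize: 10,       // cap concurrent connections from this process
  maxIdleTimeMS: 60000   // close pooled connections that sit idle for 60s
});

async function main() {
  await client.connect();   // one connect per process lifetime
  // ... reuse client.db(...).collection(...) throughout the application ...
}

main().catch(console.error);
```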
null | [
"aggregation",
"flexible-sync",
"mongopush"
]
| [
{
"code": "",
"text": "I want to push not matching documents and collections from the local database to a remote database, what could be the best way to do that? Please note that both databases are standalone databases, not replicas or shards.Requirement:How can I push only selected documents to a remote database once the difference has been calculated? I read about aggregation and pipeline but it does not work with two standalone databases. Please suggest to me the best way to do this case.Or if we can use aggregation with pipeline when replicas are not there, please suggest this way too.\nOr if with replicas can I achieve this condition? like updating a few documents remotely instead of updating everything automatically. I want to update some documents from local to remote only on some API calls. Can someone help me to get this case correctly?",
"username": "Prasanna_Sasne"
},
{
"code": "$mergemongodumpmongoexportmongorestoremongoimport",
"text": "Hello @Prasanna_Sasne,I want to show the documents from the local database which does not match with the remote database and then I want to push only a few documents from the resulting documents. I do not want to push all documents automatically from the local to the remote database.I read about aggregation and pipeline but it does not work with two standalone databases.I believe if you have two separate servers you cannot (without some custom code) do something like a $merge in one operation, to aggregate and then output the results to another collection, on another server.Even if you output your aggregated documents to another database on your local server, if you used replication, you’d need to replicate the whole server (so all databases).Besides custom code, you could script (on your local database server) something to:Regularly run a mongodump or mongoexport to export documents of whatever collection your aggregation output goes to on Server A into a file or to standard outThen use mongorestore or mongoimport and a connection string to connect to, and insert documents into your Server BIn both cases you apply a query to define what data you dump/export or import/export via options to those programs. Also, if you are “restoring” to an existing collection MongoDB will be smart enough to use the _id to avoid any duplication.",
"username": "Justin_Jenkins"
},
{
"code": "",
"text": "Thank you for your response. I am more interested to push local database collections to a remote database on API calls. And I do not want to use the command line interface so, mongodump will not be useful.I tried creating a replica and writing to the secondary node (Ideally I should not write to the secondary node) but collections from the secondary node also were automatically synchronizing with a remote database. Can you please tell me, what the correct behavior of the replica is? I set the high priority for the primary node and low priority for the secondary node, but still, every write on the secondary node was reflected in the primary node. So my question is",
"username": "Prasanna_Sasne"
},
{
"code": "rs.status()\nrs.isMaster()\n",
"text": "Can you please tell me, what the correct behavior of the replica is?Great question! A replica set copies (replicates) the contents of one sever node to another node. This action isn’t on the database or collection level. There are various technical resons for this, but suffice to say it is meant to work on the server level.I set the high priority for the primary node and low priority for the secondary nodeThere is an election process when a Replica Set starts up and based off various factors a “primary” is “elected”. While the “priority” effects which node will be the “primary” (or the server you can write to and have its contents replicated to the other nodes) it is merely a weighting. You can set a priority from 1-1000, or 0. A 0 is the only way to make sure a node won’t be a primary.but still, every write on the secondary node was reflected in the primary node.Unless you overwrite the default this isn’t happening. You can only write to the primary. Perhaps the server you don’t consider to be the primary is actually the primary? You should be able to determine this by logging into the server. Here are some commands you can try:Again, the priority doesn’t guarantee which node will be the primary, it just makes it more likely. The reason this doesn’t really matter is each node will have the same data.What configuration shall I make in the secondary node so that it won’t sync automatically with the primary node?That particular configuration is impossible. The idea of replica set is the same data is replicated across all the nodes so if any one node goes down you can still provide all the same data from one of the other nodes. This is also why when you connect to a replica set you don’t connect directly to a particular server, but rather allow MongoDB to direct you to the primary.Can I really push only a few documents from secondary to primary? Not synchronizing everything with the primary? Or everything which is written to the secondary node, will automatically sync with the primary node?All the data on a node will be replicated to the other nodes in the set, by design. If you want subsets of your data on another server that requires a programmatic solution.Replica Sets are meant to copy everything in an idempotent manner between servers so any one server can become the primary at any time.P.S.There is one caveat, although I don’t recommend it … technically anything in the local database isn’t replicated to other nodes. You could (in theory) store the data you don’t want replicated in that database and then and then aggregate out into some other database documents you want replicated (i.e. data in any other database).This is basically circumventing how replication is supposed to work though, so again … not recommended for most uses cases.",
"username": "Justin_Jenkins"
}
]
| Push not matching documents and collections from local database to remote database | 2023-01-13T23:29:37.096Z | Push not matching documents and collections from local database to remote database | 1,399 |
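A sketch, not from the thread, of one way to push a selected subset of documents from a local standalone server to a remote one using the Node.js driver (no replication and no CLI tools, which matches the "only on some API calls" requirement). Both URIs, the database/collection names and the filter are placeholders.

```javascript
const { MongoClient } = require("mongodb");

async function pushSelected(filter) {
  const local = new MongoClient("mongodb://localhost:27017");
  const remote = new MongoClient("mongodb://remote.example.com:27017");
  try {
    const docs = await local.db("mydb").collection("items").find(filter).toArray();
    for (const doc of docs) {
      // Upsert by _id so pushing the same document twice does not duplicate it.
      await remote.db("mydb").collection("items")
        .replaceOne({ _id: doc._id }, doc, { upsert: true });
    }
    return docs.length;
  } finally {
    await local.close();
    await remote.close();
  }
}

// Example: push only documents flagged for export, e.g. from an API handler.
// pushSelected({ needsExport: true }).then(n => console.log(`pushed ${n} docs`));
```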
[]
| [
{
"code": "",
"text": "\nScreenshot 2023-01-11 at 03.13.101920×1148 116 KB\n\nThe window does not respond while trying to import json data to collection. The data volume is not as issue as I tried adding the data individually to get the same result. Please help me resolve this asap.Thanks!",
"username": "Anupam_Ghosal"
},
{
"code": "",
"text": "@Anupam_Ghosal when you inserted the documents individually, were you using the default ObjectID on the document, or the same ObjectIDs shown in the screenshot? I was able to reproduce your error using invalid ObjectIDs, but this functionality worked as expected when I used valid ObjectIDs.We’ll work to make our error messaging here more clear but in the meantime, you may consider uploading bulk data from a JSON file using either MongoDB Compass or mongoimport. Please do post here if you are able to otherwise resolve this issue.",
"username": "Julia_Oppenheim"
}
]
| Atlas freezes while importing json data to collection | 2023-01-11T03:29:21.046Z | Atlas freezes while importing json data to collection | 623 |
|
null | [
"aggregation",
"queries",
"node-js",
"crud",
"views"
]
| [
{
"code": "const local_database_name = ... ;\nconst local_collection_name = ... ;\nconst local_client = new MongoClient( local_uri ) ;\nconst local_database = client.db( local_database_name ) ;\nconst local_collection = database.collection( local_collection_name ) ;\nconst local_query = { ... }\nconst local_documents = local_collection.find( local_query ).toArray() ;\n\nconst net_database_name = ... ;\nconst net_collection_name = ... ;\nconst net_client = new MongoClient( net_uri ) ;\nconst net_database = client.db( net_database_name ) ;\nconst net_collection = database.collection( net_collection_name ) ;\nconst net_query = { ... }\nconst net_documents = net_collection.find( net_query ).toArray() ;\n\n/* magic_function is the function that compares metadata and do what ever it needs\n to update the local_documents from the net_documents\n*/\nconst modified_documents = magic_function( local_documents , net_documents ) ;\n\nconst temp_collection_name = \"_temp_collection\" ;\nconst temp_collection = local_database.collection( temp_collection_name ) ;\n\n/* insert the modified documents into the temporary collection\n*/\ntemp_collection.insertMany( modified_documents ) ;\n\n/* merge modified documents into the original local collection\n*/\ntemp_collection.aggregate( { \"$merge\" : {\n \"into\" : local_collection_name ,\n \"on\" : _id ,\n \"whenMatched\" : \"merge\" ,\n \"whenNotMatched\" : \"discard\" } } )\n",
"text": "Follow up on Aggregate multiple databases (one DB is placed locally on PC, second - on host in net) - #4 by Andrei by @AndreiSome incomplete js code for you scenario.You might need some await here and there but I am not fluent enough with JS to know exactly where it is needed.You might want to use different values for whenMatched and whenNotMatched.",
"username": "steevej"
},
{
"code": "",
"text": "Merci bien vôtre aide!",
"username": "Andrei"
},
{
"code": "",
"text": "Are you using two standalone databases? or these are replicas? Does aggregate is working for two standalone databases if they are not replicas or shards?",
"username": "Prasanna_Sasne"
}
]
| Followup: Aggregate multiple databases (one DB is placed locally on PC, second - on host in net) | 2022-11-01T16:36:03.856Z | Followup: Aggregate multiple databases (one DB is placed locally on PC, second - on host in net) | 1,867 |
null | [
"serverless"
]
| [
{
"code": "",
"text": "I have created a MongoDB Atlas serverless db in the AWS region of eu-west-1.\nThen I was trying to write data into this db from a AWS lambda function. This function is also in the AWS region of eu-west-1.The connection between my db and my lambda function was built via Private Endpoint.The average data writing speed was 35 KB/s, which was so slow. I wonder what may have caused that?",
"username": "Guo_Js"
},
{
"code": "",
"text": "I recommend opening a support case",
"username": "Andrew_Davidson"
}
]
| Writing data into serverless db from AWS lambda is extremely slow | 2023-01-06T16:58:18.536Z | Writing data into serverless db from AWS lambda is extremely slow | 1,383 |
null | [
"queries"
]
| [
{
"code": "{ _id:\"1\", a: \"xx\", b: \"yy\", createTime:ISODate(\"2022-05-16T06:07:47.280Z\"), lastUpdateTime:ISODate(\"2022-05-16T06:07:47.280Z\"), v: [\"v1\"] }\n { _id:\"2\", a: \"xx\", b: \"yy\", createTime:ISODate(\"2022-05-16T06:07:47.280Z\"), lastUpdateTime:ISODate(\"2022-05-17T07:07:47.280Z\"), v: [\"v1\", \"v2\"] }\n db[\"data\"].find({a: \"xx\", \"b\": \"yy\", \"createTime\": { \"$lte\":ISODate(\"2022-05-16T06:07:47.280Z\") }}).sort({lastUpdateTime :-1}).limit(1)\n",
"text": "In a mongo collection, createTime and lastUpdateTime are defined as timestamps and keeps increasing. User might query frequently about the latest updated value. Given the condition this collection might contain 100 millions of data, index shall be created.Sample documents:Typical query pattern:There’re two options about the index creation based on ESR rule.Both index can help this query. I don’t assume there’re obvious difference while I doubt there’re different performance impact during insert/update/delete. Since timestamp is increasing by nature, I guess the “asc” index should be created. But not sure how to verify it.",
"username": "Yinhua_Zhao"
},
{
"code": "sort({lastUpdateTime :-1})\n",
"text": "Hi @Yinhua_Zhao ,Actually if the sort is usually desc like the following:I suggest to createCreateIndex({ a: 1, b: 1, lastUpdateTime :-1, createTime: 1}).This is a more appropriate query shape for the index an should result in better scanning.The write maintenance overhead on performance would be the same with -1 or 1…Thanks\nPavel",
"username": "Pavel_Duchovny"
}
]
| Performance impact of an ascending or a descending index during CUD | 2023-01-13T14:44:39.622Z | Performance impact of an ascending or a descending index during CUD | 1,110 |
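A small sketch of how the suggested index order can be verified; the collection name follows the thread's example, and the exact numbers will depend on your data.

```javascript
db.data.createIndex({ a: 1, b: 1, lastUpdateTime: -1, createTime: 1 })

db.data.find({
    a: "xx",
    b: "yy",
    createTime: { $lte: ISODate("2022-05-16T06:07:47.280Z") }
  })
  .sort({ lastUpdateTime: -1 })
  .limit(1)
  .explain("executionStats")
// In the winning plan, look for an IXSCAN on the new index and a low
// totalKeysExamined; the sort should be satisfied by the index, not in memory.
```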
null | [
"dot-net",
"replication",
"connecting"
]
| [
{
"code": "",
"text": "Exception raised : System.TimeoutException: A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = WritableServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : “1”, ConnectionMode : “ReplicaSet”, Type : “ReplicaSet”, State : “Connected”, Servers : [{ ServerId: “{ ClusterId : 1, EndPoint : “[10.0.1.212]:27017” }”, EndPoint: “10.0.1.212:27017”, ReasonChanged: “Heartbeat”, State: “Disconnected”, ServerVersion: , TopologyVersion: , Type: “Unknown”, HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. —> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond",
"username": "Marco_Aerlic"
},
{
"code": "ping/// example\nping server1.mongodb.net\ntelnet27017/// example\ntelnet server1.mongodb.net 27017\n",
"text": "Hi @Marco_Aerlic - Welcome to the community.I am not sure if you’ve managed to resolve this yet but in saying so, the timeout exception error you’ve provided generally indicates that the driver in use is unable to connect to the MongoDB instance.Would you be able to share the following details if you’re still having issues connecting:I would also recommend you trying to run the following tool to try to help checking Atlas connections from the host server environment: GitHub - pkdone/mongo-connection-check: Tool to check the connectivity to a remote MongoDB deployment, providing advice if connectivity is not achieved.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Jason, I know this is late but I stumbled on your response when looking for an answer to a similar problem I am having. Before I get into it any further, I’d like to know more about the tool that you mentioned in your response. I am trying to connect from Windows Server 2022 and the tool lists only uses for Linux, Windows 10 and MacOSX… will it run on the 2022 Server?",
"username": "Mark_Linebarger"
},
{
"code": "",
"text": "Hi Mark - Welcome to the community Unfortunately i’m not sure if this works on Windows Server 2022. However, in saying so, the tool is based on the following blog : Paul Done's Technical Blog: Some Tips for Diagnosing Client Connection Issues for MongoDB Atlas. The blog contains details on a few diagnostic tools (some mentioned in my previous reply). If you encounter any errors with those specifically you could go from there supplying any response / errors here on the forums.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks Jason, I’ll give it a look-see. It looks like it has some good information and I’m hoping that it will help. I appreciate your response.",
"username": "Mark_Linebarger"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| System.TimeoutException: A timeout occurred after 30000ms selecting a server | 2022-09-30T07:04:36.072Z | System.TimeoutException: A timeout occurred after 30000ms selecting a server | 10,000 |
null | [
"server",
"release-candidate"
]
| [
{
"code": "",
"text": "MongoDB 6.0.4-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 6.0.3. The next stable release 6.0.4 will be a recommended upgrade for all 6.0 users.\nFixed in this release:",
"username": "Aaron_Morand"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB 6.0.4-rc0 is released | 2023-01-13T20:50:53.593Z | MongoDB 6.0.4-rc0 is released | 1,201 |
null | [
"crud"
]
| [
{
"code": "await client.db(\"VsteamEdu\").collection(\"leads\").updateMany( \n { },\n {\n $pull:\n {\n bookings:\n {\n allEventDates: { $elemMatch: { \"dayBookingRef\": { $eq: \"000117-D4\"} } }\n }\n }\n })\n {\n bookings: [ {\n allEventDates: [\n {\n dayBookingRef: \"000117-D3\"\n },\n {\n dayBookingRef: \"000117-D4\"\n }\n ]\n } ]\n }\n",
"text": "I have followed the online documentation to pull an item from an array and looked through the threads but I can’t find the solution to this problem. I’m trying to pull an item from a nested array, but it removes the nested arrays instead of just the single element. Can some one please explain this behaviour? and the potential solution. Thanks in advance.Small sample of data:",
"username": "VSTEAM_Education"
},
{
"code": " await client.db(\"VsteamEdu\").collection(\"LeadsCopy\").updateOne( \n { \"bookings.allEventDates.dayBookingRef\": \"000117-D6\" },\n {\n $pull:\n {\n \"bookings.$.allEventDates\": {\n \"dayBookingRef\": \"000117-D6\"\n } \n } \n })\n",
"text": "I found a work around using the code below. I’m still not sure why the code earlier removes the whole array. It would be great if some can explain.",
"username": "VSTEAM_Education"
},
{
"code": "",
"text": "Hi @VSTEAM_EducationCan you add a reproducible example for the original (first) query https://mongoplayground.net so we can run it?",
"username": "santimir"
},
{
"code": "[\n {\n bookings: [\n {\n allEventDates: [\n {\n dayBookingRef: \"000127-D6\"\n },\n {\n dayBookingRef: \"000127-D5\"\n }\n ]\n }\n ]\n },\n {\n \"key\": 2\n }\n]\ndb.collection.update({\n \"_id\": ObjectId(\"5a934e000102030405000000\")\n},\n{\n $pull: {\n bookings: {\n allEventDates: {\n $elemMatch: {\n \"dayBookingRef\": {\n $eq: \"000127-D6\"\n }\n }\n }\n }\n }\n})\n[\n {\n \"_id\": ObjectId(\"5a934e000102030405000000\"),\n \"bookings\": []\n },\n {\n \"_id\": ObjectId(\"5a934e000102030405000001\"),\n \"key\": 2\n }\n]\n",
"text": "HiI have modelled the same behaviour on the playground. Please see the link below. As you can see it returns an empty array. Please let me know what is wrong with my original code. Thanks again.Mongo playground: a simple sandbox to test and share MongoDB queries onlineCode in play ground is below:Configuration:Query:Result:Thanks,Vidura",
"username": "VSTEAM_Education"
},
{
"code": "$pullbookings{ $pull: { <field1>: <value|condition>, ... } }\nfield1field1a.bba.0: \"hello\"a.0.ba.0ba.0.ba.$.ba.$[].b$$",
"text": "I think the reason here is that $pull is removing an array element from bookings field:The signature of this update operator isfield1 is where it removes from, and it must point to an array.See the Docs, at this section. for an example of what I just said (why It didn’t work.)So for internal fields, we use dot notation, in place of the field1 of the notation above.It is important to note how MDB interprets dot notation, which I still get wrong sometimes.It follows that if we want to update a field with an array of documents we would end up with a.0.b for updating the first element (document) in the array a.0 with a field bIf we want to loop, we can specify the “magic index $” and replace a.0.b for a.$.b.If we want to update not just one array item but all that match, we use a.$[].b:Using $ will stop after removing a single element matching the condition, this one does it for all.I didn’t really expect or really knew this, but $ will fail if the query does not store the field we search for.",
"username": "santimir"
},
{
"code": "",
"text": "HiSorry for the delayed reply and thank you for the detailed explanation. Much appreciated.Regards,Vidura",
"username": "VSTEAM_Education"
}
]
| $pulll one item from array removes the whole nested array | 2022-12-30T14:39:56.279Z | $pulll one item from array removes the whole nested array | 2,778 |
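A sketch of the same fix using the all-positional operator (MongoDB 3.6+) instead of the single positional $, so the matching sub-documents are removed from every bookings element in one statement; the collection name and value follow the earlier posts.

```javascript
db.leads.updateMany(
  { "bookings.allEventDates.dayBookingRef": "000117-D4" },
  { $pull: { "bookings.$[].allEventDates": { dayBookingRef: "000117-D4" } } }
)
// "bookings.$[].allEventDates" points $pull at the nested allEventDates array
// inside every bookings element, instead of pulling whole bookings entries.
```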
null | [
"node-js",
"crud"
]
| [
{
"code": "function delete_game(game) {\n games = games.filter(item => item !== game);\n db.update_listing(game.game_id, {\n p1_score: game.p1.score,\n p2_score: game.p2.score,\n moves: game.moves,\n winner_id: game.winner_id\n }, \"game\");\n // Update player statistics based on last game (MMR, FILLED CELLS etc)\n db.update_listing([game.p1, game.p2], null, \"player_game\");\n game = null;\n console.log(`Game deleted. New active game count is: ${games.length}`)\n}\nasync update_listing(_id_listing, _new_listing, _type) {\n let result = null;\n let update_completed = false\n try {\n // Connect the client to the server\n await client.connect();\n // Establish and verify connection\n await client.db(\"admin\").command({ ping: 1 });\n\n switch(_type) {\n case \"player\":\n result = await client.db(\"game_data\").collection(\"players\").updateOne({ _id: ObjectId(_id_listing) }, { $set: _new_listing });\n console.log(`Sono stati aggiornati ${result.modifiedCount} documenti.`);\n update_completed = result.modifiedCount > 0 ? true : false;\n break;\n case \"game\":\n result = await client.db(\"game_data\").collection(\"games\").updateOne({ _id: ObjectId(_id_listing) }, { $set: _new_listing });\n console.log(`Sono stati aggiornati ${result.modifiedCount} documenti.`);\n update_completed = result.modifiedCount > 0 ? true : false;\n break;\n case \"player_game\":\n for (let i = 0; i < _id_listing.length; ++i) {\n result = await client.db(\"game_data\").collection(\"players\").updateOne({ _id: ObjectId(_id_listing[i].id) }, { \n $inc: {\n filled_cells: _id_listing[i].filled_cells,\n stolen_cells: _id_listing[i].stolen_cells,\n closed_lines: _id_listing[i].closed_lines\n },\n $set: {\n mmr: _id_listing[i].mmr\n }\n });\n console.log(`Sono stati aggiornati ${result.modifiedCount} documenti.`);\n update_completed = result.modifiedCount > 0 ? true : false;\n }\n break;\n default:\n break;\n }\n }\n catch (e) {\n console.log(e);\n return null;\n }\n finally {\n // Ensures that the client will close when you finish/error\n await client.close();\n console.log(\"DB disconnected successfully to server\");\n }\n return update_completed;\n }\n",
"text": "I am having some problems making data updates of different documents in different collections.\nMy code runs from a NodeJS-based server.This is the part of the code that calls the two document update functions:While this is the part of the code that deals with performing the functions inherent in the database:Unfortunately, if I run the code the document updates are not executed, and in my server console I get the error “PoolClosedError [MongoPoolClosedError]: Attempted to check out a connection from closed connection pool.” However, the connection to the db is opened and closed correctly.Can anyone tell me where I am going wrong? Unfortunately, I am a neophyte in using MongoDB and surely this problem is caused by my error in handling the communication with the db.",
"username": "Vallo"
},
{
"code": "",
"text": "Open/create a client at the start of your application and don’t close it until the application exits, use this client throughout the application.The driver itself will maintain a connection pool.",
"username": "chris"
},
{
"code": "",
"text": "My application runs a multiplayer online game. Are you advising me to start a database connection when I start the server, and leave it always on until I stop the server one day to close the game?I am new to using Mongo DB, but I thought it was a good idea to open and close the connection to the db only when needed so as to avoid “intrusions.”",
"username": "Vallo"
}
]
| Problem updating multiple documents on different collections | 2023-01-10T12:05:03.539Z | Problem updating multiple documents on different collections | 1,103 |
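A sketch of the "one client for the whole process" pattern suggested in this thread; the URI, file layout and field names are placeholders. With recent Node.js drivers the client connects lazily on first use, and the driver's pool handles concurrency, so there is no per-request connect/close.

```javascript
// db.js, created once when the server starts and reused by every handler.
const { MongoClient, ObjectId } = require("mongodb");

const client = new MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net");

async function updateGame(gameId, fields) {
  // No client.connect()/close() per call; the pool is managed by the driver.
  return client.db("game_data").collection("games")
    .updateOne({ _id: new ObjectId(gameId) }, { $set: fields });
}

// Close the pool only when the whole process shuts down.
process.on("SIGINT", async () => {
  await client.close();
  process.exit(0);
});

module.exports = { client, updateGame };
```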
null | [
"dot-net"
]
| [
{
"code": "",
"text": "Hello Mongo Community. This is my first post here because I am STUCK. I’m trying to complete the lab for LESSON 2: INSERTING A DOCUMENT IN C# APPLICATIONS in the ‘MongoDB CRUD operations in C#’ course.I know I have the code correct. I ended up looking at the hints to confirm. When I test the solution I receive the following error:Incorrect solution 1/1\nThe document were not found in the database. Please try againSeems like there must be a cluster connection issue? Do I need to whitelist my IP address?Thank you for the help.",
"username": "Chris_Scharf"
},
{
"code": "",
"text": "No need of whitelist of IP if you are using IDE\nDid your insert succed?\nYou can check from shell the record you inserted",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Still no luck. The code only needs one line added to be correct.The instructions say: Before you begin, please note that you are now connected to an Atlas cluster.Which cluster? What is the connection string to check (CLI or Atlas) if the record was inserted?Still getting this error:1 ATTEMPTS\nIncorrect solution 1/1\nThe document were not found in the database. Please try again",
"username": "Chris_Scharf"
},
{
"code": "",
"text": "I found the problem. It ended up being ID-10-T error. I missed the step of running dotnet in the CLI. Thank you for the help.",
"username": "Chris_Scharf"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Error when submitting lab from Instuqt IDE | 2023-01-13T03:54:12.297Z | Error when submitting lab from Instuqt IDE | 1,420 |
null | [
"queries",
"atlas-search"
]
| [
{
"code": "",
"text": "Hello,\nFew questions regarding Atlas:Thanks!",
"username": "Prasad_Kini"
},
{
"code": "M0M2M5M10+durationMillisdb.collection.getIndexes()db.collection.explain(\"executionStats\").find(...)COLLSCANcollStats$search$search",
"text": "Hi @Prasad_Kini,Currently, as per the Create an Atlas Search Index documentation, you cannot create more than:There are no limits to the number of indexes you can create on M10+ clusters.These are two separate instances, if the data is not the same, more than likely the data being returned is not the same. Do you have more information regarding this?durationMillis is the total time the query took to complete. I’m not too familiar with Studio3T, it is not an officially supported MongoDB product. Do you have more information regarding these fields or have a sample document from each that you could provide?Could you provide sample document(s) from each environment as well as:This could depend on various factors in addition to the fact that one is a COLLSCAN whilst the other uses indexes. However, there is more information needed as requested above. In addition to all the above questions and information, can you provide:I understand you have mentioned $search, indexes and local instance of MongoDB however please note that Atlas Search’s $search stage is only available for Atlas instances. Details on my following post response may be useful here.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Jason,\nThank you so much for the detailed reply.I am on an M20 cluster with 3 nodes. I don’t always see all my queries in the profiler on the portal, even the recent ones. Few questions:The overall goal of this exercise is to be able to properly understand how my queries are performing. Hence I need to know what I be looking at.Thanks again,\nPrasad",
"username": "Prasad_Kini"
},
{
"code": "",
"text": "Hi @Prasad_Kini,These are great questions!Hope this helps!Thanks,\nFrank",
"username": "Frank_Sun"
},
{
"code": "",
"text": "Thanks Frank. At present, I am using the profiler just to get some performance benchmarks and am not worried about it not capturing some queries as there is not any load on the cluster.Is there a profiler view that will show me the queries from all the nodes?How does the cluster decide which node to send a given query to? When using a tool such as Studio 3T, once connected, will the query be always sent to one specific node?Thanks again,\nPrasad",
"username": "Prasad_Kini"
},
{
"code": "",
"text": "Unfortunately, there’s no Profiler view that will show you queries across all nodes in a replica set. However, this is a feature enhancement my team is aware of that we are looking to prioritize.I’m not too familiar with Studio 3T, as @Jason_Tran mentioned its not an officially MongoDB product. However, MongoDB provides the ability to set read preference, which can allow you to specify where you want to route read operations to. You can find more information about read preference here.Thanks,\nFrank",
"username": "Frank_Sun"
},
{
"code": "",
"text": "Hi Frank,\nCould you please share the link for the feature enhancement so that I can vote on it?Thanks much,\nPrasad",
"username": "Prasad_Kini"
},
{
"code": "",
"text": "Hi @Prasad_Kini,Here’s the link for the feature enhancement.Thanks,\nFrank",
"username": "Frank_Sun"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| MongoDB Atlas Search Indexes & Performance (as compared to a local Mongo instance) | 2023-01-06T00:18:26.633Z | MongoDB Atlas Search Indexes & Performance (as compared to a local Mongo instance) | 1,746 |
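A sketch (placeholder URI and names) of the read-preference routing Frank mentions, using the Node.js driver; reads can be routed at the client level via the connection string or overridden per operation.

```javascript
const { MongoClient } = require("mongodb");

// Client-level: prefer secondaries for reads from this client.
const client = new MongoClient(
  "mongodb+srv://user:pass@cluster0.example.mongodb.net/?readPreference=secondaryPreferred"
);

async function readFromSecondary() {
  // Per-operation override.
  return client.db("test").collection("foo")
    .find({}, { readPreference: "secondary" })
    .toArray();
}
```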
null | [
"aggregation",
"java",
"crud"
]
| [
{
"code": " @Query(\"{'_id': ?0, 'lockStatus' : false}\")\n @Update(pipeline = {\"{$set: { counter: {$add: [ '$counter', 1 ] }} }\",\n \"{$set: { lockStatus: { $in: ['$counter', ?1 ]} }}\",\n \"{$set: { lockUpdatedOn : ?2, 'metadata.updatedUuid': ?5, 'metadata.updatedTime': ?2}}\",\n \"{ $set:{'completedDetail':{$cond:[{$eq:['$lockStatus',true]},{$concatArrays:[{$ifNull:['$completedDetail',[]]},\"\n + \"[{'card':?3, 'transactionId': 'NOT SET', 'transNo': '$counter', 'redeemStatus': false,\"\n + \" 'metadata': ?4 }]]},'$completedDetail']}}}\"\n })\n Bson filter = Filters.and(Filters.eq(ID, generateCounterId(request)), Filters.eq(LOCK_STATUS, false));\n\n List<Bson> updates = new ArrayList<>();\n updates.add( Updates.inc(COUNTER, 1));\n updates.add(Updates.set(LOCK_STATUS, Filters.in(COUNTER, getIWTransactionNumbers(request))));\n updates.add(Updates.combine(\n Updates.set(LOCK_UPDATED_ON, new Date()),\n Updates.set(METADATA_UPDATED_UUID, updatedUuid),\n Updates.set(METADATA_UPDATED_TIME, new Date())\n\n ));\n\n// few more updates for the $cond operation\n\n collection.updateOne(filter, updates);\n",
"text": "I have a complex native query which I want to convert to java DSL for future maintainability.Now the problem is I am not sure how to convert the $cond operation to DSLI have tried something like thisAny help will be really appreciated",
"username": "Sanjay_Kumar6"
},
{
"code": "",
"text": "This is an update as aggregation pipeline, right? You are trying to use regular update builders it looks like - you cannot mix the two, the update either has to use all regular update modifiers, or it has to use all aggregation pipeline syntax (that’s not a Java restriction, that’s an update restriction in general).Asya",
"username": "Asya_Kamsky"
},
{
"code": " AggregationExpression aggExpression = context ->\n Document.parse(\"{'card':'342134', 'transactionId': 'NOT SET', 'rStatus': false }\");\n \n AggregationUpdate aggUpdate = AggregationUpdate\n .update()\n .set(COUNTER).toValue(ArithmeticOperators.valueOf(COUNTER).add(1))\n .set(LOCK_STATUS).toValue(ArrayOperators.In.arrayOf(Arrays.asList(1, 2)).containsValue(REF_COUNTER))\n .set(\n SetOperation.builder()\n .set(LOCK_UPDATED_ON).toValue(new Date())\n .and()\n .set(METADATA_UPDATED_UUID).toValue(\"updatedUuid\")\n .and()\n .set(METADATA_UPDATED_TIME).toValue(new Date())\n )\n .set(COMPLETED).toValue(\n ConditionalOperators\n .when(ComparisonOperators.valueOf(LOCK_STATUS).equalToValue(true))\n .then(ArrayOperators.arrayOf(REF_COMPLETED).concat(aggExpression))\n .otherwiseValueOf(REF_COMPLETED)); \n\nmongoTemplate\n .update(Counter.class)\n .matching(Query.query(Criteria.where(ID).is(counterId).and(LOCK_STATUS).is(false)))\n .apply(aggUpdate).all();\nWrite operation error on server localhost:12345. Write error: WriteError{code=28664, message='$concatArrays only supports arrays, not object', details={}}.; nested exception is com.mongodb.MongoWriteException: Write operation error on server localhost:12345. Write error: WriteError{code=28664, message='$concatArrays only supports arrays, not object', details={}}.\norg.springframework.dao.DataIntegrityViolationException: Write operation error on server localhost:12345. Write error: WriteError{code=28664, message='$concatArrays only supports arrays, not object', details={}}.; nested exception is com.mongodb.MongoWriteException: Write operation error on server localhost:12345. Write error: WriteError{code=28664, message='$concatArrays only supports arrays, not object', details={}}.\n\tat app//org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:117)\n\tat app//org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:3044)\n\n",
"text": "Thanks for the quick reply.I was trying to build a solution around your suggestion.I have used below AggregationUpdate logic for converting the Native query to Java DSLThis looks like the sloution for my problem. But when I am running this I am getting below error:The COMPLETED field is a array [ ] of documents which initially is empty.I am not sure how to fix it. Any suggestion are highly appreciated.",
"username": "Sanjay_Kumar6"
},
{
"code": "",
"text": "I would debug this by trying to run the same update in the shell against a sample data - if the error doesn’t appear then the issue is something in how the update is expressed in Java, if you get the same error then some field that’s expected to be an array maybe isn’t?Asya",
"username": "Asya_Kamsky"
},
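Following the suggestion above to try the update in the shell, here is a hedged sketch of the same pipeline-style update run from mongosh against sample data; the collection name, _id value and array literal mirror the Java snippets in this thread but are assumptions:

```javascript
// Pipeline-style update: every stage uses aggregation syntax
// (it cannot be mixed with regular modifiers such as $inc or $push).
db.counter.updateOne(
  { _id: "my-counter-id", lockStatus: false },        // placeholder _id
  [
    { $set: { counter: { $add: ["$counter", 1] } } },
    { $set: { lockStatus: { $in: ["$counter", [1, 2]] } } },
    { $set: {
        completedDetail: {
          $cond: [
            { $eq: ["$lockStatus", true] },
            { $concatArrays: [
                { $ifNull: ["$completedDetail", []] },   // must resolve to an array
                [ { card: "342134", transNo: "$counter", redeemStatus: false } ]
            ] },
            "$completedDetail"
          ]
        }
    } }
  ]
)
```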
{
"code": "AggregationExpression aggExpression = context ->\n Document.parse(\"{'card':'342134', 'transactionId': 'NOT SET', 'rStatus': false }\");\n \n AggregationUpdate aggUpdate = AggregationUpdate\n .update()\n .set(COUNTER).toValue(ArithmeticOperators.valueOf(COUNTER).add(1))\n .set(LOCK_STATUS).toValue(ArrayOperators.In.arrayOf(Arrays.asList(1, 2)).containsValue(REF_COUNTER))\n .set(\n SetOperation.builder()\n .set(LOCK_UPDATED_ON).toValue(new Date())\n .and()\n .set(METADATA_UPDATED_UUID).toValue(\"updatedUuid\")\n .and()\n .set(METADATA_UPDATED_TIME).toValue(new Date())\n )\n .set(COMPLETED).toValue(\n ConditionalOperators\n .when(ComparisonOperators.valueOf(LOCK_STATUS).equalToValue(true))\n .then(ArrayOperators.arrayOf(REF_IW_COMPLETED).concat(ObjectOperators.ObjectToArray.valueOfToArray(aggExpression)))\n .otherwiseValueOf(REF_COMPLETED)); \n\nmongoTemplate\n .update(Counter.class)\n .matching(Query.query(Criteria.where(ID).is(counterId).and(LOCK_STATUS).is(false)))\n .apply(aggUpdate).all();\n'$concatArrays only supports arrays, not object'",
"text": "This is very generic answer to my query.Also, the query works in the shell and as native query.Anyways, was able to progress by updating the code as below:This fix the above problem of'$concatArrays only supports arrays, not object'But now I am getting another problem, in the field COMPLETED array, now four new entries are getting created with default values.Now sure about why?",
"username": "Sanjay_Kumar6"
}
]
| Converting a $cond Native query to Java DSL | 2023-01-11T17:17:25.871Z | Converting a $cond Native query to Java DSL | 1,177 |
null | [
"data-modeling"
]
| [
{
"code": "{\n A: [el_1, el_2, etc.]\n}\n{\n A: {\n el_1: true,\n el_2: true,\n etc...\n }\n}\nel_x$in",
"text": "Instead of:I want to create this schema:The reason is because in order to find if el_x is $in A, if it is an array, the time complexity is O(N).However, as I understand it, the time complexity to find a key value pair is either O(1) or O(logN).Can someone confirm this? Can someone confirm this?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "I am specifically referring to an object stored in a MongoDB database. So how does MongoDB implement it?",
"username": "Big_Cat_Public_Safety_Act"
}
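For comparison, these are the two query shapes implied by the question, written for mongosh; the collection and element names are illustrative, and in practice the cost depends mainly on which indexes exist rather than on the in-document representation:

```javascript
// Array form: a multikey index on A makes the membership test an index lookup.
db.coll.createIndex({ A: 1 })
db.coll.find({ A: "el_1" })

// Object form: each key is its own path, so it needs an index per path
// or a wildcard index to avoid scanning.
db.coll.createIndex({ "A.$**": 1 })
db.coll.find({ "A.el_1": true })
```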
]
| What is the time complexity to find the value of a property of a field that is an object | 2023-01-10T05:33:55.804Z | What is the time complexity to find the value of a property of a field that is an object | 1,022 |
null | [
"api"
]
| [
{
"code": "",
"text": "Is there any SDKs (node?) or at a minimum an OpenAPI, swagger, or other schema document published with the API or available somewhere? I would like to automate some interactions with the admin API but right now that involves browsing the API docs for each endpoint and figuring out the various options/etc vs just generating from the api schema. I’d rather not have to write a scraper that retrieves what I need from the docs site.Also, if there was a published API schema then the Atlas portion of the mongocli and things like the Atlas Terraform provider could be kept current more easily through code generation. As it is now, things like the TF provider are missing a lot of API endpoints.Thanks for a great product!",
"username": "Daniel_Shepherd"
},
{
"code": "",
"text": "yes, the download URL is on the spec page:\nhttps://www.mongodb.com/docs/atlas/reference/api-resources-spec/",
"username": "andresil"
}
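For anyone wanting to generate a client from that spec, a hedged sketch using the openapi-generator CLI; the local file name, generator choice and output directory below are placeholders, not part of the answer above:

```sh
# Assuming the OpenAPI document linked from the spec page has been saved locally
# as atlas-admin-api.json (file name is a placeholder):
openapi-generator-cli generate \
  -i atlas-admin-api.json \
  -g typescript-axios \
  -o ./atlas-admin-client
```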
]
| Atlas admin API SDK, OpenAPI, swagger, or other schema? | 2021-12-16T20:48:41.872Z | Atlas admin API SDK, OpenAPI, swagger, or other schema? | 4,270 |
[]
| [
{
"code": " app.login(credentials: Credentials.emailPassword(email: email, password: password)) { [weak self](result) in\n \n DispatchQueue.main.async {\n // self!.setLoading();\n switch result {\n case .failure(let error):\n Logger.shared.log(\"Login failed: \\(error)\");\n failCallback(error, nil)\n return\n \n case .success(let user):\n print(\"Login succeeded!\");\n guard let `self` = self else { return }\n \n // some code....\n \n }\nLogin failed: Error Domain=realm::app::HttpError Code=431 \"http error code considered fatal\" UserInfo={realm::app::HttpError=Client Error: 431, NSLocalizedDescription=http error code considered fatal}\n#Failed to login: http error code considered fatal\nhttp error code considered fatal\n",
"text": "Hello.I’m using MongoDB Realm Sync in a production environment.\nRecently, some users fail to log in.\nLooking at the server log at that time, the status of the login request is OK and it looks like there is no problem.\nHowever, the client SDK is returning an error and the user is unable to log in.Client code:Client log:I looked up the error code and it seems that the header of the request is too long.The HTTP 431 Request Header Fields Too Large response status code\n indicates that the server refuses to process the request because the request's\n HTTP headers are too long.\n The request may be resubmitted after reducing the size of the request...Does anyone have any knowledge about this?RealmSwift Version : 10.19.0\nMongo DB Atlas Version : 4.4.10",
"username": "Enoooo"
},
{
"code": "",
"text": "Do you have an example app that you can share that generates this error? It’s much easier to help if there’s something that we can reproduce.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "It seems that your User data is too big to be sent.",
"username": "Ruben_Moha"
}
]
| Login fails with error code 431 | 2021-12-13T11:54:35.978Z | Login fails with error code 431 | 3,116 |
|
[]
| [
{
"code": "",
"text": "This question was covered under this query once… Disable: QUERY RESULTS 2-2 OF MANYJust like this…\n\nimage1156×460 17.4 KB\nBut there is no further update. I have requested in the chat, but if anyone has a solution, it would be helpful.Thanks in advance ",
"username": "Abhishek_Gaonkar"
},
{
"code": "",
"text": "Have you tried the NEXT button?Can you post the whole data pane? So that we can see COLLECTION SIZE, … and the query you used.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @Abhishek_Gaonkar ,\nThe response was given in this thread\nIf you have any questions, feel free to ask ",
"username": "vgmda"
}
]
| Instead of showing all (5) of my documents, it shows as 1 of many, 2 of many | 2021-10-23T15:59:30.588Z | Instead of showing all (5) of my documents, it shows as 1 of many, 2 of many | 2,610 |
|
[]
| [
{
"code": "",
"text": "Hello,I tried to learn MongoDB and I don’t understand why on Atlas website, on a collection I have:\nQUERY RESULTS 1-1 OF MANY\nQUERY RESULTS 2-2 OF MANY(next page)\nQUERY RESULTS 3-3 OF MANY(next page)I put here screenshots\ntop-1-of-many1825×648 93.4 KB\nbottom-1-of-many1756×658 72.2 KBI want to have, for each collection, all data on one page, not on 20 pages, like is now.\nLike this:\nright-way1400×689 83.6 KBPlese someone tell me, how to fix this?",
"username": "Ionel_Antohe"
},
{
"code": "",
"text": "Hi @Ionel_Antohe,This definitely looks likes a possible bug in the data explorer.Please contact support or atlas chat to report this.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you very much @Pavel_Duchovny for answer. I will contact them.",
"username": "Ionel_Antohe"
},
{
"code": "",
"text": "Hi @Ionel_Antohe ,\nUnfortunately, this is a known and intended behaviour of Data Explorer for collections with very large documents. We limit the maximum number of bytes we send down to the client to below a safe limit for browsers by reducing down to 1 document when 20 documents would exceed the limit. If the sum of all of the documents being returned by the server is greater than 800000 bytes, then we will only display 1 document per page.I can see that you have an image object that might increase each document’s size. As a workaround, if you use MongoDB Compass you should be able to view all documents.I hope this helps! ",
"username": "vgmda"
}
]
| Disable: QUERY RESULTS 2-2 OF MANY | 2020-08-22T08:45:22.645Z | Disable: QUERY RESULTS 2-2 OF MANY | 3,442 |
|
null | [
"atlas-search",
"text-search"
]
| [
{
"code": "filter{\n \"$search\":{\n \"index\":\"my_search\",\n \"compound\":{\n \"filter\":[{\n \"text\":{\n \"path\":\"address.area\",\n \"query\":[\"Vale\",\"St. Paul\"]\n }\n }]\n }\n }\n}\n",
"text": "I’m using Atlas Search and the $search pipeline stage and using a filter with the compound operator. The relevant section of my pipeline looks like this:When testing the query the results are correct when I use one or more string values containing a single word, such as “Vale”. However, when I use a string containing a whitespace such as “St. Paul” instead of filtering results I get ALL items from all counties returned. It’s like it’s either ignoring it or doing some kind of fuzzy matching even though it is a filter and no fuzzy search is set.Can anyone provide any insight as to why that is the case and how I may resolve it?Thanks.",
"username": "Ian"
},
{
"code": "",
"text": "I managed to resolve the issue. It was down to the search analyzer. By default I was using the Standard analyzer. This analyzer “divides text into terms based on word boundaries”.So when it saw a word boundary such as “St. Paul” it was breaking the words up into separate “tokens”: “St.” and “Paul”. As I have a few area names starting with “St.” it was returning all of those that matched. This explains why it was working with single-word terms but not multi-word.To resolve the issue I set the Keyword analyzer only on the field in question. The Keyword analyzer “accepts a string or array of strings as a parameter and indexes them as single terms. Only exact matches on the field are returned.” This is the behaviour I needed as I was passing the exact phrase.With the Keyword analyzer applied to the field the search returned the correct results.",
"username": "Ian"
},
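A hedged sketch of what such an index definition could look like, with the keyword analyzer applied only to the field from the question; the surrounding mapping options are assumptions and may need adjusting to the rest of the collection:

```json
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "address": {
        "type": "document",
        "fields": {
          "area": {
            "type": "string",
            "analyzer": "lucene.keyword",
            "searchAnalyzer": "lucene.keyword"
          }
        }
      }
    }
  }
}
```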
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| $search filter not working for words with spaces in them | 2023-01-12T18:00:58.148Z | $search filter not working for words with spaces in them | 1,724 |
null | [
"java"
]
| [
{
"code": "TimeManagerSessionTimeIntervaltimeIntervalsRealmListSessionSessionSessionTimeIntervalsession.timeIntervals.deleteAllFromRealm()session.timeIntervalsTimbertimeIntervalsRealmListrealm.refresh()onSuccessobject TimeManager {\n\n private var sessions: RealmResults<Session>? = null\n\n private val sessionIdsToProcess = mutableSetOf<String>()\n\n init {\n\n val realm = Realm.getDefaultInstance()\n\n sessions = realm.where(Session::class.java).findAllAsync()\n sessions?.addChangeListener { _, _ ->\n\n if (sessionIdsToProcess.isNotEmpty()) {\n\n realm.executeTransactionAsync { asyncRealm ->\n\n val sessions = sessionIdsToProcess.mapNotNull { asyncRealm.findById<Session>(it) }\n\n for (session in sessions) {\n Timber.d(\"TIME INTERVAL COUNT = ${session.timeIntervals.size}\")\n session.timeIntervals.deleteAllFromRealm()\n val fti = TimeInterval()\n session.timeIntervals.add(fti)\n asyncRealm.insertOrUpdate(session)\n }\n sessionIdsToProcess.clear()\n }\n }\n }\n realm.close()\n }\n}\nsession.timeIntervals",
"text": "I have a singleton TimeManager that permanently listens to changes in my Session to perform backoffice stuff.In the code below, I’m adding a TimeInterval object to the timeIntervals RealmList of my Session object after deleting the existing ones, each time a Session changes.On a first change triggered by the UI, everything is fine and my Session has one TimeInterval object.On a second change triggered by the UI, what’s going wrong is that nothing is deleted when I do session.timeIntervals.deleteAllFromRealm() as session.timeIntervals is empty, as the Timber log shows.So it looks like the timeIntervals RealmList is not up to date, but calling realm.refresh() does not solve the issue. If I restart my app between the two changes, everything is fine.I’ve tried to add an onSuccess listener just to see if it was called and it does. There is pretty much no lag as I’m doing this on an empty Realm.Here is the code:How to make sure that session.timeIntervals is up to date?",
"username": "Laurent_Morvillier"
},
{
"code": "",
"text": "I’ve created a github repository to make it easily testable:\nhttps://github.com/laurentmorvillier/realmtest",
"username": "Laurent_Morvillier"
}
]
| Realm object is not up to date | 2023-01-10T09:54:46.891Z | Realm object is not up to date | 679 |
null | [
"java",
"kafka-connector"
]
| [
{
"code": "Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: org/bson/internal/CodecRegistryHelper (org.apache.kafka.connect.runtime.WorkerSinkTask:616)\njava.lang.NoClassDefFoundError: org/bson/internal/CodecRegistryHelper\nat com.mongodb.client.internal.MongoClientImpl.<init>(MongoClientImpl.java:73)\nat com.mongodb.client.internal.MongoClientImpl.<init>(MongoClientImpl.java:63)\nat com.mongodb.client.MongoClients.create(MongoClients.java:108)\nat com.mongodb.kafka.connect.sink.MongoSinkTask.getMongoClient(MongoSinkTask.java:193)\nat com.mongodb.kafka.connect.sink.MongoSinkTask.bulkWriteBatch(MongoSinkTask.java:229)\nat java.base/java.util.ArrayList.forEach(ArrayList.java:1541)\nat com.mongodb.kafka.connect.sink.MongoSinkTask.put(MongoSinkTask.java:131)\nat org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:584)\nat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:334)\nat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:235)\nat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:204)\nat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:201)\nat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:256)\nat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\nat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\nat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\nat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\nat java.base/java.lang.Thread.run(Thread.java:829)\n",
"text": "we are getting follwing error for both sink and source connectors. we are using 1.8.1 mongo connect. we dont know if more tools needs to be installed or any jars needs to be added.",
"username": "Krishna_Agrawal"
},
{
"code": "",
"text": "Hi @Krishna_Agrawal,A java.lang.NoClassDefFoundError indicates that you are missing required classes to use the Kafka connector.Its not clear from your post how you have installed the connector, please review the installation documentation. It appears you are not using the uber jar and therefore are missing classes.Ross",
"username": "Ross_Lawley"
}
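A hedged sketch of the usual fix: put the connector's uber ("all") JAR on the Connect worker's plugin path, or install it through Confluent Hub; the directory and the exact JAR file name below are placeholders and may differ by version:

```properties
# connect-distributed.properties (or connect-standalone.properties)
# The directory should contain the uber JAR, e.g. mongo-kafka-connect-1.8.1-all.jar
# downloaded from Maven Central (file name is an assumption here).
plugin.path=/usr/local/share/kafka/plugins

# Alternative on Confluent Platform:
#   confluent-hub install mongodb/kafka-connect-mongodb:1.8.1
```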
]
| java.lang.NoClassDefFoundError: org/bson/internal/CodecRegistryHelper | 2023-01-13T08:13:26.846Z | java.lang.NoClassDefFoundError: org/bson/internal/CodecRegistryHelper | 2,726 |
null | [
"aggregation",
"queries",
"node-js"
]
| [
{
"code": "{\n\t\"_id\" : ObjectId(\"63baa42ae4d7dbffcea784a1\"),\n\t\"items\" : [\n\t\t{\n\t\t\t\"product\" : ObjectId(\"637f223fe8d959c7a381ea6c\"),\n\t\t\t\"variant\" : \"637f223fe8d959c7a381ea70\",\n\t\t\t\"defaultPrice\" : 103,\n\t\t\t\"quantity\" : 5,\n\t\t},\n\t\t{\n\t\t\t\"product\" : ObjectId(\"637f223fe8d959c7a381ea6c\"),\n\t\t\t\"variant\" : \"637f223fe8d959c7a381ea6d\",\n\t\t\t\"defaultPrice\" : 3,\n\t\t\t\"quantity\" : 6,\n\t\t},\n\t\t{\n\t\t\t\"product\" : ObjectId(\"63808202b212793c115b8bed\"),\n\t\t\t\"variant\" : \"\",\n\t\t\t\"defaultPrice\" : 2,\n\t\t\t\"quantity\" : 10,\n\t\t}\n\t]\n}\n\n",
"text": "Hello there, so what i need to do is match the sum of the quantity inside the documents\nthis is the schema of the documentAnd for exp i need to filter all documents that has 21 as the sum of the “items.quantity”.What to do here ?\nThanks in advance",
"username": "med_amine_fh"
},
{
"code": "db.collection.find({$expr: {\n $gte: [{$sum: \"$items.quantity\"}, 21]\n}})\n",
"text": "Hi,You can execute this find request.you can use $expr to use aggregation expression in find commandthis allows you to use the $sum expression with a field in the document as a parameter",
"username": "Samuel_LEMAITRE"
},
{
"code": "COLLSCANtotaltotaltriggertotaltotaldb.collection.find({total: {$gte: 21}});\n",
"text": "To go further,My the previous request implies a COLLSCAN.\nIf you plan to executed that kind of request regularly on a large volume of data.I recommand you to do something to improve your request respond time.Create an index on a total field.You can precalculated your total field.\nOr if you use mongodb atlas use trigger, to calculate the total field on update/insertOr by using changestream on insert/update to update the total field.MongoDB triggers, change streams, database triggers, real timeThen you can simply run a request likeThen you exploit your newly created index and have a faster respond time.",
"username": "Samuel_LEMAITRE"
}
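A hedged sketch of the precomputation idea above, run once from mongosh; the collection name is illustrative:

```javascript
// One-off backfill of the precomputed field using an update with an aggregation pipeline:
db.orders.updateMany({}, [ { $set: { total: { $sum: "$items.quantity" } } } ])

// Index it so filters on the sum no longer need a collection scan:
db.orders.createIndex({ total: 1 })

// The original question asked for documents whose quantities sum to exactly 21:
db.orders.find({ total: 21 })
```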
]
| How to use some sort of $sum inside $match | 2023-01-12T08:40:14.313Z | How to use some sort of $sum inside $match | 856 |
null | []
| [
{
"code": " _app = App.Create(new AppConfiguration(myRealmAppId) {\n BaseFilePath = FileService.AppDataDirectory,\n });\n _user = await _app.LogInAsync(Credentials.Anonymous());\n _config = new FlexibleSyncConfiguration(_user)\n {\n PopulateInitialSubscriptions = (realm) =>\n {\n realm.Subscriptions.Add(realm.All<LanguageEntity>());\n }\n };\n var _realm = Realm.GetInstance(_config);\n await _realm.Subscriptions.WaitForSynchronizationAsync();\n\n var _localRealm = (new OfflineDataAccess()).GetRealmInstance();\n\n _realm.Write(() =>\n {\n var languages = _localRealm.All<LanguageEntity>();\n foreach (var language in languages)\n {\n _realm.Add(new LanguageEntity() { Code = language.Code, _id = language._id });\n }\n });\n\n await _realm.Subscriptions.WaitForSynchronizationAsync();\nnamespace Data.Entities\n{\n [MapTo(\"Language\")]\n public class LanguageEntity : RealmObject\n {\n [PrimaryKey]\n public ObjectId _id { get; set; } = ObjectId.GenerateNewId();\n public string Code { get; set; } = string.Empty;\n public DateTimeOffset CreatedAt { get; set; } = DateTimeOffset.Now;\n }\n}\n",
"text": "I have this code:And it throwsRealmException: Cannot write to class Language when no flexible sync subscription has been created.If I change Realm.GetInstance() to await Realm.GetInstanceAsync() - it just hangs there indefinitely with these logs (after which it’s just ping-pong logs:APP: Realm : 2023-01-13 04:19:35.526 Debug: WebSocket::initiate_client_handshake()\nAPP: Realm : 2023-01-13 04:19:35.758 Debug: WebSocket::handle_http_response_received()\nAPP: Realm : 2023-01-13 04:19:35.760 Detail: Connection[1]: Negotiated protocol version: 7\nAPP: Realm : 2023-01-13 04:19:35.761 Debug: Connection[1]: Will emit a ping in 12788 milliseconds\nAPP: Realm : 2023-01-13 04:19:35.763 Debug: Connection[1]: Session[1]: Sending: IDENT(client_file_ident=18, client_file_ident_salt=576499042362797227, scan_server_version=12, scan_client_version=5, latest_server_version=12, latest_server_version_salt=4481538362546054629, query_version: 0 query_size: 2, query: “{}”)\nAPP: Realm : 2023-01-13 04:19:35.765 Debug: Connection[1]: Session[1]: Sending: MARK(request_ident=2)\nAPP: Realm : 2023-01-13 04:19:36.140 Debug: Connection[1]: Session[1]: Received: MARK(request_ident=2)\nAPP: Realm : 2023-01-13 04:19:36.142 Debug: Connection[1]: Session[1]: Sending: UPLOAD(progress_client_version=13, progress_server_version=12, locked_server_version=12, num_changesets=0)\nAPP: Realm : 2023-01-13 04:19:49.259 Debug: Connection[1]: Sending: PING(timestamp=133122415, rtt=0)\nAPP: Realm : 2023-01-13 04:19:49.321 Debug: Connection[1]: Received: PONG(timestamp=133122415)\nAPP: Realm : 2023-01-13 04:19:49.326 Debug: Connection[1]: Round trip time was 67 milliseconds\nAPP: Realm : 2023-01-13 04:19:49.327 Debug: Connection[1]: Will emit a ping in 55078 milliseconds\nThe thread 0x759c has exited with code 0 (0x0).\nThe thread 0x7560 has exited with code 0 (0x0).\nThe thread 0x7004 has exited with code 0 (0x0).\nThe thread 0x29cc has exited with code 0 (0x0).\nThe thread 0x675c has exited with code 0 (0x0).\nAPP: Realm : 2023-01-13 04:20:44.900 Debug: Connection[1]: Sending: PING(timestamp=133178056, rtt=67)\nAPP: Realm : 2023-01-13 04:20:44.962 Debug: Connection[1]: Received: PONG(timestamp=133178056)\nAPP: Realm : 2023-01-13 04:20:44.964 Debug: Connection[1]: Round trip time was 64 milliseconds\nAPP: Realm : 2023-01-13 04:20:44.965 Debug: Connection[1]: Will emit a ping in 55907 millisecondsHere’s my LanguageEntity:I was starting with no local / remote data, so it’s a fresh start.My Atlas App Id: “dosham-lxwuu”, please take a look.",
"username": "Movsar_Bekaev"
},
{
"code": "PopulateInitialSubscriptionsPopulateInitialSubscriptionsPopulateInitialSubscriptionsDebugLogger.LogLevel = LogLevel.DebugApp.CreateGetInstanceGetInstanceAsync2023-01-13 07:21:50.355 Debug: App: log_in_with_credentials: app_id: dosham-lxwuu\n2023-01-13 07:21:50.359 Debug: App: version info: platform: Realm .NET version: Realm .NET - sdk version: 10.19.0 - core version: 12.13.0\n2023-01-13 07:21:50.639 Debug: App: update_hostname: https://westeurope.azure.realm.mongodb.com | wss://ws.westeurope.azure.realm.mongodb.com\n2023-01-13 07:21:51.094 Debug: App: do_authenticated_request: GET https://westeurope.azure.realm.mongodb.com/api/client/v2.0/auth/profile\n2023-01-13 07:21:51.178 Debug: Realm sync client ([realm-core-12.13.0])\n2023-01-13 07:21:51.178 Debug: Supported protocol versions: 2-7\n2023-01-13 07:21:51.178 Debug: Platform: macOS Darwin 21.6.0 Darwin Kernel Version 21.6.0: Sun Nov 6 23:31:13 PST 2022; root:xnu-8020.240.14~1/RELEASE_ARM64_T6000 arm64\n2023-01-13 07:21:51.178 Debug: Build mode: Release\n2023-01-13 07:21:51.178 Debug: Config param: one_connection_per_session = true\n2023-01-13 07:21:51.178 Debug: Config param: connect_timeout = 120000 ms\n2023-01-13 07:21:51.178 Debug: Config param: connection_linger_time = 30000 ms\n2023-01-13 07:21:51.178 Debug: Config param: ping_keepalive_period = 60000 ms\n2023-01-13 07:21:51.178 Debug: Config param: pong_keepalive_timeout = 120000 ms\n2023-01-13 07:21:51.178 Debug: Config param: fast_reconnect_limit = 60000 ms\n2023-01-13 07:21:51.178 Debug: Config param: disable_upload_compaction = false\n2023-01-13 07:21:51.178 Debug: Config param: disable_sync_to_disk = false\n2023-01-13 07:21:51.178 Debug: User agent string: 'RealmSync/12.13.0 (macOS Darwin 21.6.0 Darwin Kernel Version 21.6.0: Sun Nov 6 23:31:13 PST 2022; root:xnu-8020.240.14~1/RELEASE_ARM64_T6000 arm64) '\n2023-01-13 07:21:51.197 Detail: Connection[1]: Session[1]: Binding '/Users/nikola.irinchev/mongodb-realm/dosham-lxwuu/63c1068fc865975c79e72031/default.realm' to ''\n2023-01-13 07:21:51.197 Debug: Connection[1]: Session[1]: Activating\n2023-01-13 07:21:51.197 Info: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, client reset = false\n2023-01-13 07:21:51.197 Debug: Connection[1]: Session[1]: client_file_ident = 0, client_file_ident_salt = 0\n2023-01-13 07:21:51.197 Debug: Connection[1]: Session[1]: last_version_available = 6\n2023-01-13 07:21:51.197 Debug: Connection[1]: Session[1]: progress_server_version = 0\n2023-01-13 07:21:51.197 Debug: Connection[1]: Session[1]: progress_client_version = 0\n2023-01-13 07:21:51.197 Debug: Connection[1]: Session[1]: Progress handler called, downloaded = 0, downloadable(total) = 0, uploaded = 0, uploadable = 57, reliable_download_progress = false, snapshot version = 6\n2023-01-13 07:21:51.197 Debug: Connection[1]: Session[1]: Progress handler called, downloaded = 0, downloadable(total) = 0, uploaded = 0, uploadable = 57, reliable_download_progress = false, snapshot version = 6\n2023-01-13 07:21:51.197 Debug: Connection[1]: Session[1]: Progress handler called, downloaded = 0, downloadable(total) = 0, uploaded = 0, uploadable = 57, reliable_download_progress = false, snapshot version = 6\n2023-01-13 07:21:51.198 Debug: WebSocket::Websocket()\n2023-01-13 07:21:51.198 Detail: Resolving 'ws.westeurope.azure.realm.mongodb.com:443'\n2023-01-13 07:21:51.244 Detail: Connecting to endpoint '40.74.36.35:443' (1/1)\n2023-01-13 07:21:51.269 Info: Connected to endpoint '40.74.36.35:443' (from 
'10.46.0.5:58799')\n2023-01-13 07:21:51.411 Debug: WebSocket::initiate_client_handshake()\n2023-01-13 07:21:51.512 Debug: WebSocket::handle_http_response_received()\n2023-01-13 07:21:51.513 Detail: Connection[1]: Negotiated protocol version: 7\n2023-01-13 07:21:51.513 Debug: Connection[1]: Will emit a ping in 9396 milliseconds\n2023-01-13 07:21:51.856 Debug: Connection[1]: Session[1]: Received: IDENT(client_file_ident=19, client_file_ident_salt=3673400305230201167)\n2023-01-13 07:21:51.858 Debug: Connection[1]: Session[1]: Sending: IDENT(client_file_ident=19, client_file_ident_salt=3673400305230201167, scan_server_version=0, scan_client_version=0, latest_server_version=0, latest_server_version_salt=0, query_version: 0 query_size: 2, query: \"{}\")\n2023-01-13 07:21:51.859 Debug: Connection[1]: Session[1]: Sending: MARK(request_ident=1)\n2023-01-13 07:21:51.952 Debug: Connection[1]: Received: DOWNLOAD CHANGESET(server_version=18, client_version=0, origin_timestamp=253524112007, origin_file_ident=1, original_changeset_size=1113, changeset_size=1113)\n2023-01-13 07:21:51.952 Debug: Connection[1]: Session[1]: Received: DOWNLOAD(download_server_version=18, download_client_version=0, latest_server_version=18, latest_server_version_salt=8495948514551732795, upload_client_version=0, upload_server_version=0, downloadable_bytes=0, last_in_batch=true, query_version=0, num_changesets=1, ...)\n2023-01-13 07:21:51.957 Info: Connection[1]: Session[1]: Begin processing pending FLX bootstrap for query version 0. (changesets: 1, original total changeset size: 1113)\n2023-01-13 07:21:51.957 Debug: Connection[1]: Session[1]: Finished changeset indexing (incoming: 1 changeset(s) / 95 instructions, local: 1 changeset(s) / 3 instructions, conflict group(s): 3)\n2023-01-13 07:21:51.957 Debug: Connection[1]: Session[1]: Finished transforming 1 local changesets through 1 incoming changesets (3 vs 95 instructions, in 3 conflict groups)\n2023-01-13 07:21:51.968 Debug: Connection[1]: Session[1]: Integrated 1 changesets out of 1\n2023-01-13 07:21:51.968 Info: Connection[1]: Session[1]: Integrated 1 changesets from pending bootstrap for query version 0, producing client version 10 in 11 ms. 
0 changesets remaining in bootstrap\n2023-01-13 07:21:51.968 Debug: Connection[1]: Session[1]: Progress handler called, downloaded = 1113, downloadable(total) = 1113, uploaded = 0, uploadable = 57, reliable_download_progress = true, snapshot version = 10\n2023-01-13 07:21:51.970 Debug: Connection[1]: Session[1]: Received: MARK(request_ident=1)\n2023-01-13 07:21:51.970 Debug: Connection[1]: Session[1]: Sending: UPLOAD(progress_client_version=6, progress_server_version=0, locked_server_version=18, num_changesets=1)\n2023-01-13 07:21:51.970 Debug: Connection[1]: Session[1]: Fetching changeset for upload (client_version=5, server_version=0, changeset_size=57, origin_timestamp=253524111194, origin_file_ident=0)\n2023-01-13 07:21:51.970 Debug: Connection[1]: Session[1]: Sending: QUERY(query_version=1, query_size=30, query=\"{\"Language\":\"(TRUEPREDICATE)\"}\"\n2023-01-13 07:21:51.970 Debug: Connection[1]: Session[1]: Sending: MARK(request_ident=2)\n2023-01-13 07:21:51.970 Debug: Connection[1]: Session[1]: Sending: UPLOAD(progress_client_version=10, progress_server_version=18, locked_server_version=18, num_changesets=0)\n2023-01-13 07:21:51.970 Debug: Connection[1]: Session[1]: Progress handler called, downloaded = 1113, downloadable(total) = 1113, uploaded = 0, uploadable = 57, reliable_download_progress = true, snapshot version = 11\n2023-01-13 07:21:52.104 Debug: Connection[1]: Session[1]: Received: DOWNLOAD(download_server_version=19, download_client_version=5, latest_server_version=19, latest_server_version_salt=8560354801592955379, upload_client_version=6, upload_server_version=0, downloadable_bytes=0, last_in_batch=true, query_version=0, num_changesets=0, ...)\n2023-01-13 07:21:52.106 Debug: Connection[1]: Session[1]: Sending: UPLOAD(progress_client_version=12, progress_server_version=19, locked_server_version=19, num_changesets=0)\n2023-01-13 07:21:52.106 Debug: Connection[1]: Session[1]: Progress handler called, downloaded = 1113, downloadable(total) = 1113, uploaded = 57, uploadable = 57, reliable_download_progress = true, snapshot version = 12\n2023-01-13 07:21:52.170 Debug: Connection[1]: Received: DOWNLOAD CHANGESET(server_version=20, client_version=5, origin_timestamp=253524112215, origin_file_ident=1, original_changeset_size=0, changeset_size=0)\n2023-01-13 07:21:52.170 Debug: Connection[1]: Session[1]: Received: DOWNLOAD(download_server_version=20, download_client_version=5, latest_server_version=20, latest_server_version_salt=0, upload_client_version=6, upload_server_version=0, downloadable_bytes=0, last_in_batch=true, query_version=1, num_changesets=1, ...)\n2023-01-13 07:21:52.171 Info: Connection[1]: Session[1]: Begin processing pending FLX bootstrap for query version 1. (changesets: 1, original total changeset size: 0)\n2023-01-13 07:21:52.172 Debug: Connection[1]: Session[1]: Integrated 1 changesets out of 1\n2023-01-13 07:21:52.172 Info: Connection[1]: Session[1]: Integrated 1 changesets from pending bootstrap for query version 1, producing client version 14 in 1 ms. 
0 changesets remaining in bootstrap\n2023-01-13 07:21:52.172 Debug: Connection[1]: Session[1]: Sending: UPLOAD(progress_client_version=14, progress_server_version=20, locked_server_version=20, num_changesets=0)\n2023-01-13 07:21:52.173 Debug: Connection[1]: Session[1]: Progress handler called, downloaded = 1113, downloadable(total) = 1113, uploaded = 57, uploadable = 57, reliable_download_progress = true, snapshot version = 14\n2023-01-13 07:21:52.174 Debug: Connection[1]: Session[1]: Progress handler called, downloaded = 1113, downloadable(total) = 1113, uploaded = 57, uploadable = 57, reliable_download_progress = true, snapshot version = 15\n2023-01-13 07:21:52.175 Debug: Connection[1]: Session[1]: Received: MARK(request_ident=2)\n2023-01-13 07:21:52.175 Debug: Connection[1]: Session[1]: Marking query version 1 as complete after receiving MARK message\n",
"text": "Regarding the error you’re getting - considering you’re using PopulateInitialSubscriptions, my guess would be that you had opened the Realm before, which means that PopulateInitialSubscriptions wouldn’t have run - it may not be obvious but it runs only once for the lifetime of the Realm. You can try deleting the local realm file and running your app again. Might be worth also logging something inside PopulateInitialSubscriptions just to verify it actually runs.Regarding GetInstanceAsync not completing, you increase the log level to Debug? You can do that by calling Logger.LogLevel = LogLevel.Debug before you call App.Create.I tried plugging your code in a simple console application and it seemed to have worked for me - I didn’t do any writes as I don’t have the local realm with the language entities, but it could successfully synchronize and both GetInstance and GetInstanceAsync complete. Here are the debug logs from when I tried it:",
"username": "nirinchev"
},
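A hedged sketch, in line with the explanation above, of how to confirm whether PopulateInitialSubscriptions actually runs (it only runs the first time the local realm file is created); the logging call and the commented DeleteRealm line are illustrative additions, not part of the original app:

```csharp
var config = new FlexibleSyncConfiguration(user)
{
    PopulateInitialSubscriptions = (realm) =>
    {
        // Runs only once per local realm file.
        Console.WriteLine("PopulateInitialSubscriptions running");
        realm.Subscriptions.Add(realm.All<LanguageEntity>());
    }
};

// During debugging, removing the local file forces the callback to run again:
// Realm.DeleteRealm(config);
```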
{
"code": "",
"text": "It turns out that error was due to my class names, I got curious why the error mentions class “Language” - which didn’t exist and not the LanguageEntity, looks like MapTo() - doesn’t work with FlexibleSync, I renamed my entities to correspond to database schema and it doesn’t give me that error any more, and it looks like, this error only happens when using PopulateInitialSubscriptions, because yesterday I used with Subscriptions.Update the LanguageEntity class name and it didn’t produce that error.I still have issues though, the app hangs after first couple of inserts, my hair are getting white with all these issues, but I hope you’ll make it stable one day One of the other issues btw, is that I always have to run the app twice for it just to pass first GetInstance, i.e. when there is no .realm file, but still it’s like moving one line at a day.",
"username": "Movsar_Bekaev"
},
{
"code": "MapToLanguageMapTo(\"Language\")",
"text": "MapTo should work with flexible sync. Just to be clear what it does is it configures the name of the class/property in the database. And it needs to match the schema on the server - i.e. if your Atlas collection is called Language, then MapTo(\"Language\") is the right attribute to apply. The reason why the error is mentioning the database name rather than the C# class is because it’s being thrown by the database and it doesn’t know what the public name is (we have an issue to fix this though).Regarding your other issues, if you come across unexpected behavior, the fastest way to get help would be to file a Github issue where one of the engineers will try and assist you. And of course, if you have feedback about how to make things more intuitive, we’d be more than happy to receive that as well.",
"username": "nirinchev"
},
{
"code": "",
"text": "I understood that, but there was nothing wrong with database and server - both contained Language schema, but when I renamed the classes and removed the MapTo - everything started to work, first time in a week! Yes, I’ll file an issue on github, I think this is a bug, thank you",
"username": "Movsar_Bekaev"
}
]
| PopulateInitialSubscriptions doesn't work | 2023-01-13T04:25:28.058Z | PopulateInitialSubscriptions doesn’t work | 893 |
null | []
| [
{
"code": "",
"text": "What is the passing score/percent for Dev/DBA certification exams? it’s not mentioned anywhere…Thanks",
"username": "Essam_El-Sherif"
},
{
"code": "pass/fail",
"text": "Hi @Essam_El-Sherif,Welcome to the MongoDB Community Forums In keeping with certification industry best practices, MongoDB has opted not to publish exam scores or passing scores. We will continue to offer examinees a pass/fail result and topic-level performance percentages.Please reach out to [email protected] if you have further questions.Thank you,\nKushagra Kesav",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "2 posts were split to a new topic: Is the questions from the exam prep are same?",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
]
| Certification Exam Passing Score | 2023-01-05T22:22:48.402Z | Certification Exam Passing Score | 3,217 |
[
"app-services-user-auth",
"android"
]
| [
{
"code": "{\"code\": 47, \"message\": \"invalid id token: 'aud' must be a string containing the client_id\"}",
"text": "Just follow Apple ID Authentication. But, we still can’t use sign with apple natively (IOS) and other platforms.In Client ID, if I set App ID then IOS will work, and Android will get this error:\n{\"code\": 47, \"message\": \"invalid id token: 'aud' must be a string containing the client_id\"}\nIf I set the Service ID then Android works, and IOS will get the above error.Because apple natively is using clientId as App ID, but other is using Service ID.\nAnd we can’t put both in the Client ID\nimage2326×1220 215 KB\nI also found an old post, but unfortunately it’s still unresolved!",
"username": "Nyan"
},
{
"code": "",
"text": "Hi, @Ian_Ward Currently I am working on a React native project.\nCan you help me with a solution to this problem?I also refer firebase, they only require Service ID as Client Id for other platform (except apple).\n\nimage2208×832 63.2 KB\n",
"username": "Nyan"
},
{
"code": "",
"text": "I’m in need of a solution to this also. Surely there are lots of people coming across this situation when developing both native iOS and Android apps?",
"username": "BenJ"
},
{
"code": "",
"text": "Apple ID AuthenticationDo we have any solution for this, please?",
"username": "Nhan_Nguyen_Dinh"
}
]
| Apple Sign In | Issue with Client ID | 2022-05-27T14:53:44.828Z | Apple Sign In | Issue with Client ID | 3,846 |
|
null | [
"dot-net"
]
| [
{
"code": " FlexibleSyncConfiguration = new FlexibleSyncConfiguration(RealmUser)\n {\n PopulateInitialSubscriptions = (realm) =>\n {\n IQueryable<Player> player = realm.All<Player>().Where(n => n.OwnerId =RealmUser.Id);\n IQueryable<AgeGroup> ageGroups = realm.All<AgeGroup>();\n IQueryable<Game> games = realm.All<Game>();\n realm.Subscriptions.Add(player);\n realm.Subscriptions.Add(ageGroups);\n realm.Subscriptions.Add(games);\n }\n };\nRealm realmInstance = await Realm.GetInstanceAsync(flexibleSyncConfiguration); public class Player: RealmObject\n {\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(\"owner_id\")]\n public string OwnerId { get; set; }\n\n [MapTo(\"fullName\")]\n public string FullName { get; set; }\n\n [MapTo(\"eMail\")]\n public string Email { get; set; }\n\n [MapTo(\"country\")]\n public string Country { get; set; } \n\n [MapTo(\"mobileNumber\")]\n public string MobileNumber { get; set; }\n\n [MapTo(\"province\")]\n public string Province { get; set; }\n\n [MapTo(\"city\")]\n public string City { get; set; }\n\n [MapTo(\"profileImage\")]\n public byte[] ProfileImage { get; set; } \n\n [MapTo(\"ageGroup\")]\n public AgeGroup AgeGroup { get; set; }\n\n\n [MapTo(\"favouriteGames\")]\n public IList<Game> FavouriteGames { get; }\n }\npublic class Game : RealmObject\n {\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(\"name\")]\n public string Name { get; set; }\n\n\n }\n public class AgeGroup: RealmObject\n {\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n public string AgeGroupName { get; set; }\n }\n",
"text": "I am developing .NET Maui app with MongoDB App Services and FlexibleSync. I am getting issue in the following code:Before the above subscriptions are added the below line is called and it does not return and the app hangs in there.Realm realmInstance = await Realm.GetInstanceAsync(flexibleSyncConfiguration);I am using this same code in App.cs and LoginViewModel becuase I need the data from MongoDB in both of these classes before deciding the next Page for Navigation.The Models are:Please not that the Game and AgeGroup are capped collections with just 5 to 6 documents which is read only and client does not write to these collections. My questions are:What correction in this code will enable data downloadWhy the GetInstanceAsync does not return exceptionHow to set timeout on GetInstanceAsync()",
"username": "Paramjit_Singh"
},
{
"code": "GetInstanceAsyncGetInstanceAsync",
"text": "GetInstanceAsync takes a cancellation token argument - you can use that to set a timeout (see this article for examples).Regarding why GetInstanceAsync doesn’t throw an exception - it’s due to this bug - the original design of the API was assuming that eventually a connection will be established, so it’ll keep retrying the connection forever. We are now aware of certain conditions which will prevent sync from ever working (such as a schema mismatch), but haven’t gotten around to updating the API to communicate those errors.Finally, I’m not sure why your code doesn’t work, but I’m guessing there’s an issue with the communication between the client and the server. This should be surfaced at least in the client logs, but also possibly in the server logs. Can you run your app and share the client logs from an attempted connection?",
"username": "nirinchev"
},
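A hedged sketch of the cancellation-token approach described above; the 30-second timeout and the fallback to the local copy are illustrative choices, and the usual using directives (Realms, System, System.Threading) are assumed:

```csharp
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
Realm realm;
try
{
    // Waits for the initial download, but gives up when the token fires.
    realm = await Realm.GetInstanceAsync(flexibleSyncConfiguration, cts.Token);
}
catch (OperationCanceledException)
{
    // Could not finish the initial sync in time; open whatever is on disk
    // and let the SDK keep syncing in the background.
    realm = Realm.GetInstance(flexibleSyncConfiguration);
}
```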
{
"code": "{\n \"rules\": {\n \"AgeGroup\": [\n {\n \"name\": \"user\",\n \"applyWhen\": {},\n \"read\": true,\n \"write\": false\n }\n ],\n \"Game\": [\n {\n \"name\": \"user\",\n \"applyWhen\": {},\n \"read\": true,\n \"write\": false\n }\n ]\n },\n \"defaultRoles\": [\n {\n \"name\": \"admin\",\n \"applyWhen\": {\n \"%%user.custom_data.isGlobalAdmin\": true\n },\n \"read\": true,\n \"write\": true\n },\n {\n \"name\": \"user\",\n \"applyWhen\": {},\n \"read\": {\n \"owner_id\": \"%%user.id\"\n },\n \"write\": {\n \"owner_id\": \"%%user.id\"\n }\n }\n ]\n}\n",
"text": "I believe there is some issue on Device Sync. When I disable sync, delete all schemas and try to re-enable the sync I get the following error:permissions contain rule for table “AgeGroup” which does not exist in schemaAgain clicking on Re-Enable Sync produce this error:permissions contain rule for table “Game” which does not exist in schemaThe permission json is as follows:",
"username": "Paramjit_Singh"
},
{
"code": "",
"text": "Sync is Enabled after generating schemas for the AgeGroup and Game Table from the Sample data. The original error still exists. You asked for the client logs. But I am not logging in the app. Can you please explain what info you want . The server logs sometimes show:InitialSyncNotCompleted Error\nError:\nattempted to start a session while initial sync is in progress (ProtocolErrorCode=229)and sometimesOK\nLogs:\n[\n“Connection was active for: 1s”\n]Also this message at the top:Enabling Sync …approximately 0/11 (0.00%) documents copiedI have 6 docs in Game and 5 docs in AgeGroup. Does the above message is about them",
"username": "Paramjit_Singh"
},
{
"code": "Logger.Default = Logger.File(\"/usr/realm.log\");\n",
"text": "The Realm SDK will automatically log operations to the console. You can configure that to also log to a file if you’re not able to capture the console logs. Here’s the docs about it and the file logger would be something like:Make sure to call this before you create an App instance though.",
"username": "nirinchev"
},
{
"code": "[DOTNET] 2023-01-12 13:57:14.441 Info: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, client reset = false\n[DOTNET] 2023-01-12 13:57:14.898 Info: Connected to endpoint '3.210.32.164:443' (from '10.0.2.16:45886')\n[EGL_emulation] app_time_stats: avg=1.95ms min=1.19ms max=4.43ms count=60\n[DOTNET] 2023-01-12 13:57:15.313 Info: Verifying server SSL certificate using 155 root certificates\n[EGL_emulation] app_time_stats: avg=1.83ms min=1.16ms max=3.88ms count=61\n[DOTNET] 2023-01-12 13:57:16.492 Info: Connection[1]: Session[1]: Received: ERROR \"Client tried to connect using flexible sync before initial sync is complete\" (error_code=229, try_again=true, error_action=Transient)\n[DOTNET] 2023-01-12 13:57:16.494 Info: Connection[1]: Disconnected\n[EGL_emulation] app_time_stats: avg=1.88ms min=1.13ms max=3.98ms count=60\n[JavaBinder] !!! FAILED BINDER TRANSACTION !!! (parcel size = 316)\n[GmsClient] IGmsServiceBroker.getService failed\n[GmsClient] android.os.DeadObjectException: Transaction failed on small parcel; remote process probably died, but this could also be caused by running out of binder buffe\n[GmsClient] \tat android.os.BinderProxy.transactNative(Native Method)\n[GmsClient] \tat android.os.BinderProxy.transact(BinderProxy.java:584)\n[GmsClient] \tat com.google.android.gms.common.internal.zzac.getService(com.google.android.gms:play-services-basement@@18.1.0:8)\n[GmsClient] \tat com.google.android.gms.common.internal.BaseGmsClient.getRemoteService(com.google.android.gms:play-services-basement@@18.1.0:14)\n[GmsClient] \tat com.google.android.gms.common.api.internal.zabt.run(com.google.android.gms:play-services-base@@18.1.0:7)\n[GmsClient] \tat android.os.Handler.handleCallback(Handler.java:942)\n[GmsClient] \tat android.os.Handler.dispatchMessage(Handler.java:99)\n[GmsClient] \tat android.os.Looper.loopOnce(Looper.java:201)\n[GmsClient] \tat android.os.Looper.loop(Looper.java:288)\n[GmsClient] \tat android.os.HandlerThread.run(HandlerThread.java:67)\n[EGL_emulation] app_time_stats: avg=1.86ms min=1.18ms max=4.24ms count=60\n[DOTNET] 2023-01-12 13:57:18.885 Info: Connected to endpoint '3.210.32.164:443' (from '10.0.2.16:45888')\n[EGL_emulation] app_time_stats: avg=1.78ms min=1.09ms max=3.66ms count=60\n[DOTNET] 2023-01-12 13:57:19.276 Info: Verifying server SSL certificate using 155 root certificates\n[EGL_emulation] app_time_stats: avg=1.82ms min=1.18ms max=3.29ms count=60\n[DOTNET] 2023-01-12 13:57:20.654 Info: Connection[1]: Session[1]: Received: ERROR \"Client tried to connect using flexible sync before initial sync is complete\" (error_code=229, try_again=true, error_action=Transient)\n[DOTNET] 2023-01-12 13:57:20.655 Info: Connection[1]: Disconnected\n[EGL_emulation] app_time_stats: avg=1.92ms min=1.11ms max=3.93ms count=61\n[EGL_emulation] app_time_stats: avg=1.83ms min=1.22ms max=5.71ms count=60\n[JavaBinder] !!! FAILED BINDER TRANSACTION !!! (parcel size = 316)\n",
"text": "This is the copy of the output window in Visual Studio. The connection is logged multiple times until cancelled by the Cancellation Token:",
"username": "Paramjit_Singh"
},
{
"code": "Client tried to connect using flexible sync before initial sync is complete",
"text": "The error you’re getting - Client tried to connect using flexible sync before initial sync is complete indicates that the server is still initializing sync. Generally that shouldn’t take a whole lot of time and only happens the first time you enable sync. If it doesn’t go away after a while, it likely indicates some issue with the server and/or your cluster (e.g. maybe you have a whole lot of data on a very small cluster).You could check the server logs for any errors that could provide more insight into why initial sync is taking so long. If there’s nothing useful there, your best bet would be to open a support ticket and someone from the server team will investigate further.",
"username": "nirinchev"
}
]
| Data download on Initialization with FlexibleSyncConfiguration | 2023-01-01T16:18:30.448Z | Data download on Initialization with FlexibleSyncConfiguration | 1,643 |
[]
| [
{
"code": "",
"text": "Hello everyone. I have just started learning MongoDb. and I have a kinda problem.I created db ,created collection and now I want to import my log file but I’m encountering this error: Cannot read property ‘_id’ of undefined\nthe content of my files are like that.\nall records have same format.I dont understand why I’m encountering this error?If you help me ı would be grateful\n\n61902×830 99.6 KB\n",
"username": "furkan_eren"
},
{
"code": "mongoimport",
"text": "Welcome to the MongoDB Community Forums @furkan_eren !It looks like you are importing a JSON file. Can you provide some more details on how you are importing this:JSON files are often imported using a GUI like MongoDB Compass (which has a feature to Import and Export Data) or the command-line mongoimport tool.Regards,\nStennie",
"username": "Stennie_X"
},
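For reference, a hedged mongoimport sketch for a file like the one in this thread (one JSON document per line, which is mongoimport's default mode); the URI, database, collection and file names are placeholders:

```sh
mongoimport --uri "mongodb://localhost:27017" \
  --db logsdb --collection crash_reports \
  --file acra_reports.json
# Use --jsonArray only if the file is a single JSON array instead of one document per line.
```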
{
"code": "",
"text": "Thanks for your answer. I use MongoDB 4.4.6 Community edition . I also use MongoDbCompass for importing process but there is a problem here . I can’t send more photo in here because I’m new user and system does not allow me to share a photo.",
"username": "furkan_eren"
},
{
"code": "",
"text": "Hi @furkan_eren ,It is preferable to share a text snippet (for example, a JSON document to import if the data isn’t confidential) so that someone is able to try to reproduce the issue.The specific version of Compass you are using as well as the steps you are taking would also be helpful information.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "{“DATE”:“2021-05-01 02:20:57.9671”,“REPORT_ID”:“674a82c5-200a-497a-9c01-0c951c991c6a”,“APP_VERSION_CODE”:1,“APP_VERSION_NAME”:“3.64”,“PACKAGE_NAME”:“tr.gov.iski.ortaksayacokuma”,“FILE_PATH”:\"/data/user/0/tr.gov.iski.ortaksayacokuma/files\",“PHONE_MODEL”:“ASUS_X00ID”,“BRAND”:“asus”,“PRODUCT”:“WW_Phone”,“ANDROID_VERSION”:“8.1.0”,“BUILD”:{“ASUSCID”:“ASUS”,“ASUSSKU”:“WW”,“AUTO_START”:true,“BOARD”:“ASUS_X00ID”,“BOOTLOADER”:“unknown”,“BRAND”:“asus”,“CHARACTERISTICS”:“nosdcard”,“COUNTRYCODE”:“WW”,“CPU_ABI”:“arm64-v8a”,“CPU_ABI2”:\"\",“CTA”:false,“CTA_APP”:false,“CTA_IMAGE”:false,“CTA_OS”:false,“DEVICE”:“ASUS_X00IDB”,“DISPLAY”:“OPM1.171019.011.WW_Phone-15.2016.1907.519-0”,“FINGERPRINT”:“asus/WW_Phone/ASUS_X00IDB:8.1.0/OPM1.171019.011/15.2016.1907.519-0:user/release-keys”,“HARDWARE”:“qcom”,“HOST”:“ubuntu”,“ID”:“OPM1.171019.011”,“ISASUSCNSKU”:false,“ISASUSVERMAX”:false,“IS_CONTAINER”:false,“IS_DEBUGGABLE”:false,“IS_EMULATOR”:false,“IS_ENG”:false,“IS_TREBLE_ENABLED”:false,“IS_USER”:true,“IS_USERDEBUG”:false,“MANUFACTURER”:“asus”,“MODEL”:“ASUS_X00ID”,“PERMISSIONS_REVIEW_REQUIRED”:false,“PRODUCT”:“WW_Phone”,“RADIO”:“unknown”,“SERIAL”:“HBAXB7615040RNJ”,“START_TRACKER”:false,“SUPPORTED_32_BIT_ABIS”:[“armeabi-v7a”,“armeabi”],“SUPPORTED_64_BIT_ABIS”:[“arm64-v8a”],“SUPPORTED_ABIS”:[“arm64-v8a”,“armeabi-v7a”,“armeabi”],“TAGS”:“release-keys”,“TIME”:1563959888000,“TYPE”:“user”,“UNKNOWN”:“unknown”,“USER”:“jenkins”,“VERSION”:{“ACTIVE_CODENAMES”:[],“BASE_OS”:\"\",“CODENAME”:“REL”,“INCREMENTAL”:“15.2016.1907.519-0”,“PREVIEW_SDK_INT”:0,“RELEASE”:“8.1.0”,“RESOURCES_SDK_INT”:27,“SDK”:“27”,“SDK_INT”:27,“SECURITY_PATCH”:“2019-07-05”}},“TOTAL_MEM_SIZE”:25028554752,“AVAILABLE_MEM_SIZE”:19363155968,“BUILD_CONFIG”:{“APPLICATION_ID”:“org.acra”,“BUILD_TYPE”:“release”,“DEBUG”:false,“FLAVOR”:\"\",“VERSION_CODE”:-1,“VERSION_NAME”:“5.4.0”},“CUSTOM_DATA”:{},“IS_SILENT”:false,“STACK_TRACE”:“java.lang.NullPointerException: FileDescriptor must not be null\\n\\tat android.os.ParcelFileDescriptor.(ParcelFileDescriptor.java:187)\\n\\tat android.os.ParcelFileDescriptor$1.createFromParcel(ParcelFileDescriptor.java:1045)\\n\\tat android.os.ParcelFileDescriptor$1.createFromParcel(ParcelFileDescriptor.java:1037)\\n\\tat android.bluetooth.IBluetooth$Stub$Proxy.connectSocket(IBluetooth.java:1996)\\n\\tat android.bluetooth.BluetoothSocket.connect(BluetoothSocket.java:363)\\n\\tat io.palaima.smoothbluetooth.BluetoothService$ConnectThread.run(BluetoothService.java:2)\\n”,“INITIAL_CONFIGURATION”:{“FlipFont”:0,“appBounds”:“Rect(0, 0 - 720, 1280)”,“assetsSeq”:0,“colorMode”:5,“compatScreenHeightDp”:547,“compatScreenWidthDp”:320,“compatSmallestScreenWidthDp”:320,“densityDpi”:320,“fontScale”:0.85,“hardKeyboardHidden”:2,“keyboard”:“KEYBOARD_NOKEYS”,“keyboardHidden”:1,“locale”:“tr_TR”,“mcc”:286,“mnc”:1,“navigation”:1,“navigationHidden”:2,“orientation”:1,“screenHeightDp”:616,“screenLayout”:“SCREENLAYOUT_SIZE_NORMAL+SCREENLAYOUT_LONG_YES+SCREENLAYOUT_LAYOUTDIR_LTR+SCREENLAYOUT_ROUND_NO”,“screenWidthDp”:360,“seq”:10,“smallestScreenWidthDp”:360,“touchscreen”:“TOUCHSCREEN_FINGER”,“uiMode”:“UI_MODE_TYPE_NORMAL+UI_MODE_NIGHT_NO”,“userSetLocale”:false},“CRASH_CONFIGURATION”:{“FlipFont”:0,“appBounds”:“Rect(0, 0 - 720, 
1280)”,“assetsSeq”:0,“colorMode”:5,“compatScreenHeightDp”:547,“compatScreenWidthDp”:320,“compatSmallestScreenWidthDp”:320,“densityDpi”:320,“fontScale”:0.85,“hardKeyboardHidden”:2,“keyboard”:“KEYBOARD_NOKEYS”,“keyboardHidden”:1,“locale”:“tr_TR”,“mcc”:286,“mnc”:1,“navigation”:1,“navigationHidden”:2,“orientation”:1,“screenHeightDp”:616,“screenLayout”:“SCREENLAYOUT_SIZE_NORMAL+SCREENLAYOUT_LONG_YES+SCREENLAYOUT_LAYOUTDIR_LTR+SCREENLAYOUT_ROUND_NO”,“screenWidthDp”:360,“seq”:10,“smallestScreenWidthDp”:360,“touchscreen”:“TOUCHSCREEN_FINGER”,“uiMode”:“UI_MODE_TYPE_NORMAL+UI_MODE_NIGHT_NO”,“userSetLocale”:false},“DISPLAY”:{“0”:{“currentSizeRange”:{“smallest”:[720,672],“largest”:[1280,1232]},“flags”:“FLAG_SUPPORTS_PROTECTED_BUFFERS+FLAG_SECURE”,“metrics”:{“density”:2,“densityDpi”:320,“scaledDensity”:“x2.0”,“widthPixels”:720,“heightPixels”:1280,“xdpi”:268.9410095214844,“ydpi”:268.6940002441406},“realMetrics”:{“density”:2,“densityDpi”:320,“scaledDensity”:“x2.0”,“widthPixels”:720,“heightPixels”:1280,“xdpi”:268.9410095214844,“ydpi”:268.6940002441406},“name”:“Yerleşik Ekran”,“realSize”:[720,1280],“rectSize”:[0,0,720,1280],“size”:[720,1280],“rotation”:“ROTATION_0”,“isValid”:true,“orientation”:0,“refreshRate”:60.000003814697266,“height”:1280,“width”:720,“pixelFormat”:1}},“USER_COMMENT”:null,“USER_EMAIL”:“N/A”,“USER_APP_START_DATE”:“2021-04-30T09:02:25.084+03:00”,“USER_CRASH_DATE”:“2021-04-30T13:54:58.232+03:00”,“DUMPSYS_MEMINFO”:“N/A”,“LOGCAT”:“N/A”,“INSTALLATION_ID”:“c542afaf-73c1-4a4a-96d8-e513bacfd55c”,“DEVICE_FEATURES”:{“android.hardware.sensor.proximity”:true,“asus.software.zenui.zentv”:true,“asus.hardware.touchgesture.swipe_up”:true,“asus.hardware.touchgesture.double_tap”:true,“android.hardware.sensor.accelerometer”:true,“asus.software.lockscreen.cmweather”:true,“android.hardware.faketouch”:true,“android.hardware.usb.accessory”:true,“android.hardware.telephony.cdma”:true,“android.software.backup”:true,“asus.software.onehand”:true,“android.hardware.touchscreen”:true,“android.hardware.touchscreen.multitouch”:true,“asus.software.theme.animated_theme”:true,“android.software.print”:true,“asus.software.sku.WW”:true,“android.software.activities_on_secondary_displays”:true,“android.software.voice_recognizers”:true,“android.software.picture_in_picture”:true,“android.hardware.fingerprint”:true,“android.hardware.sensor.gyroscope”:true,“asus.software.themes_store”:true,“android.hardware.opengles.aep”:true,“android.hardware.bluetooth”:true,“android.hardware.camera.autofocus”:true,“android.hardware.telephony.gsm”:true,“android.software.sip.voip”:true,“asus.software.preload”:true,“asus.software.presafe”:true,“android.hardware.usb.host”:true,“asus.software.twinapps”:true,“android.hardware.audio.output”:true,“android.software.verified_boot”:true,“android.hardware.camera.flash”:true,“android.hardware.camera.front”:true,“android.hardware.screen.portrait”:true,“asus.software.gamewidget.zenui45”:true,“android.hardware.sensor.stepdetector”:true,“android.software.home_screen”:true,“asus.software.sensor_service”:true,“android.hardware.microphone”:true,“asus.hardware.display.bluelight.reading_mode”:true,“asus.hardware.display.bluelight”:true,“android.software.autofill”:true,“android.hardware.bluetooth_le”:true,“android.hardware.sensor.compass”:true,“android.hardware.touchscreen.multitouch.jazzhand”:true,“android.software.app_widgets”:true,“android.software.input_methods”:true,“android.hardware.sensor.light”:true,“android.hardware.vulkan.version”:true,“android.software.companion_device_setup”:true,“asus.hardware.display.splend
id”:true,“android.software.device_admin”:true,“android.hardware.camera”:true,“asus.software.whole_system_onehand”:true,“android.hardware.screen.landscape”:true,“android.hardware.ram.normal”:true,“android.software.managed_users”:true,“android.software.webview”:true,“android.hardware.sensor.stepcounter”:true,“asus.software.zenui”:true,“asus.software.sensor_service.terminal”:true,“android.hardware.camera.any”:true,“android.hardware.vulkan.compute”:true,“android.software.connectionservice”:true,“android.hardware.touchscreen.multitouch.distinct”:true,“android.hardware.location.network”:true,“android.software.cts”:true,“android.software.sip”:true,“asus.software.sensor_service.eartouch”:true,“android.hardware.wifi.direct”:true,“android.software.live_wallpaper”:true,“asus.software.theme.living_theme”:true,“asus.software.zenui.five”:true,“android.hardware.location.gps”:true,“asus.software.pagemarker”:true,“android.software.midi”:true,“asus.software.marketapp”:true,“asus.software.project.ZC554KL”:true,“android.hardware.wifi”:true,“android.hardware.location”:true,“android.hardware.vulkan.level”:true,“asus.hardware.display.splendid.reading_mode”:true,“android.hardware.telephony”:true,“asus.hardware.touchgesture.launch_app”:true,“glEsVersion”:“3.2”},“ENVIRONMENT”:{“getDataDirectory”:\"/data\",“getDataMiscCeDirectory”:\"/data/misc_ce\",“getDataMiscDirectory”:\"/data/misc\",“getDataPreloadsAppsDirectory”:\"/data/preloads/apps\",“getDataPreloadsDemoDirectory”:\"/data/preloads/demo\",“getDataPreloadsDirectory”:\"/data/preloads\",“getDataPreloadsFileCacheDirectory”:\"/data/preloads/file_cache\",“getDataPreloadsMediaDirectory”:\"/data/preloads/media\",“getDataSystemCeDirectory”:\"/data/system_ce\",“getDataSystemDeDirectory”:\"/data/system_de\",“getDataSystemDirectory”:\"/data/system\",“getDownloadCacheDirectory”:\"/data/cache\",“getExpandDirectory”:\"/mnt/expand\",“getExternalStorageDirectory”:\"/storage/emulated/0\",“getExternalStorageState”:“mounted”,“getLegacyExternalStorageDirectory”:\"/sdcard\",“getLegacyExternalStorageObbDirectory”:\"/sdcard/Android/obb\",“getOdmDirectory”:\"/odm\",“getOemDirectory”:\"/oem\",“getRootDirectory”:\"/system\",“getStorageDirectory”:\"/storage\",“getVendorDirectory”:\"/vendor\",“isExternalStorageEmulated”:true,“isExternalStorageRemovable”:false},“SHARED_PREFERENCES”:{“default”:{“acra.legacyAlreadyConvertedTo4.8.0”:true,“acra.legacyAlreadyConvertedToJson”:true,“acra.lastVersionNr”:1}}}",
"username": "furkan_eren"
},
{
"code": "",
"text": "It is a log file and I want to create database and put the file into the database.But I encounter with error",
"username": "furkan_eren"
},
{
"code": "",
"text": "I have the same problem im trying to import a json file with this format{“user_id”:“q-kq52Hm8a5ajWJ_dUlOoA”,“name”:“Minji”,“review_count”:2,“yelping_since”:“2010-07-16 15:37:15”,“useful”:8,“funny”:1,“cool”:2,“elite”:\"\",“friends”:“3iYVBhusw8WzL-fpHfzSJg, AHYWPHP0A_liXRQ1vwOTOQ, DnNCUxnLZJLKchU7ghBHEA, aOaBLef9O9WEYOOu_Ag0jg, byLkv1cyexwDrEEucXRl6A, W0VtBrATFgbrh5-dlT9enA, S5sN_XnLI7JLd07a03xf1w, sxO4cKLCHgPWDwLkTFm_xA, 5A_mUZytIYM0325LK52ghg, BLoIigmmcL–C89jCt75aA, AnZEey8ewxg2bOJwHwIpdQ, wDEhZdJRvC3RCE9unKKPgA, qtI1d1FEsyXPLF4ZeAs5Zg, gJPLWZ02gps2Z-aDtrg5pA, OZ2-ketYLpnad_HFnKwZ4g, q90aHWhAArWMNDIPMx3nww, jcQhpVQFpj8os1DPGkbQ3A, 9m3zKiIVD_oRWErUm8W92A, 2AQjYOwmO1VMv0pxMFgmFQ, hJZdG_TBMzpkSNAQzKP2pQ, nETnIaD_oxfdm7BCpWPQjg, lvIgUIjKTGDTVhHJt3uAfw”,“fans”:0,“average_stars”:1.0,“compliment_hot”:0,“compliment_more”:0,“compliment_profile”:0,“compliment_cute”:0,“compliment_list”:0,“compliment_note”:0,“compliment_plain”:0,“compliment_cool”:0,“compliment_funny”:0,“compliment_writer”:0,“compliment_photos”:0}I did import a lot of files exactly like this but for some reason in this one im getting this:Failed to import with the following error:0 / ~1535625.9217676679",
"username": "WeaChris_N_A"
},
{
"code": "",
"text": "Do you fix that? I’m also facing the same problem",
"username": "Sea_Farer"
}
]
| Cannot read property '_id' of undefined | 2021-07-08T07:40:40.795Z | Cannot read property ‘_id’ of undefined | 9,827 |
|
null | []
| [
{
"code": "{\"t\":{\"$date\":\"2023-01-03T06:03:26.735+01:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"CurlConnPool-195\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"ocsp.<host>:80\"}}{\"t\":{\"$date\":\"2023-01-03T06:05:26.763+01:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22572, \"ctx\":\"CurlConnPool-198\",\"msg\":\"Dropping all pooled connections\",\"attr\":{\"hostAndPort\":\"ocsp.<host>:80\",\"error\":\"ConnectionPoolExpired: Pool for ocsp.<host>:80 has expired.\"}}",
"text": "Hi.On one of our MongoDB servers I’m seeing this error every ~2 days is logs:{\"t\":{\"$date\":\"2023-01-03T06:03:26.735+01:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"CurlConnPool-195\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"ocsp.<host>:80\"}}{\"t\":{\"$date\":\"2023-01-03T06:05:26.763+01:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22572, \"ctx\":\"CurlConnPool-198\",\"msg\":\"Dropping all pooled connections\",\"attr\":{\"hostAndPort\":\"ocsp.<host>:80\",\"error\":\"ConnectionPoolExpired: Pool for ocsp.<host>:80 has expired.\"}}Is this something that we should worry about? What might be causing these errors?",
"username": "AM_88"
},
{
"code": "",
"text": "Hello @AM_88 ,Welcome to The MongoDB Community Forums! As this is an informational message hence I don’t think this should interfere with any other ongoing operations. Are you facing any other issues or just wanted to know more about this log message? If you are, please share below details:Regards,\nTarun",
"username": "Tarun_Gaur"
}
]
| ConnectionPoolExpired error - Dropping all pooled connections | 2023-01-05T08:52:16.942Z | ConnectionPoolExpired error - Dropping all pooled connections | 1,235 |
null | []
| [
{
"code": "app.use(bodyParser.json({ limit: \"30mb\", extended: true }));\napp.use(bodyParser.urlencoded({ limit: \"30mb\", extended: true }));\napp.use(cors());\napp.use(express.static('public')); \napp.use('/assets', express.static('assets'));\n\n/* FILE STORAGE */\nconst storage = multer.diskStorage({\n destination: function (req, file, cb) {\n cb(null, \"public/assets\");\n },\n filename: function (req, file, cb) {\n cb(null, file.originalname);\n },\n});\nconst upload = multer({ storage });\n\n/* ROUTES WITH FILES */\napp.post(\"/auth/register\", upload.single(\"picture\"), register);\napp.post(\"/posts\", verifyToken, upload.single(\"picture\"), createPost);\n\n/* ROUTES */\napp.use(\"/auth\", authRoutes);\napp.use(\"/users\", userRoutes);\napp.use(\"/posts\", postRoutes);\n",
"text": "I’ve deployed a basic social app and i can fetch all the data submitted by users except images, because they do not get stored, hence I receive a Failed to load resource: the server responded with a status of 404 () cannot find myappname.onrender.com/assets/img1.jpgI installed multer gridfs and set it up but it still doesn’t work:",
"username": "Pitar_Petrov"
},
{
"code": "",
"text": "Hi @Pitar_Petrov and welcome to the MongoDB community forum!!Ideally, there are three basic ways to work with images in MongoDB. The forum post gives you a detailed description on how to use the recommended methods while working with images.I installed multer gridfsSince we do not have enough expertise on using multer with gridfs, I would recommend you see if this blog post Uploading Files to MongoDB with GridFS and Multer Using NodeJS is useful to you. Please note that this blog post is external to MongoDB, so we cannot guarantee the correctness of it but perhaps may be able to point you toward the right direction.\nFurther, you could visit the multer community for a more detailed response.Finally, the official MongoDB documentation for GridFS with MongoDB would also be a recommended read.Best Regards\nAasawari",
"username": "Aasawari"
}
]
| Storing images in the database | 2023-01-05T23:45:39.334Z | Storing images in the database | 1,621 |
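A minimal sketch for the image-storage thread above, showing how an uploaded file could be streamed into GridFS with the Node.js driver instead of being written to an ephemeral public/assets folder. The database name, bucket name, environment variable and file paths are illustrative assumptions, not part of the original posts.

  const { MongoClient, GridFSBucket } = require('mongodb');
  const fs = require('fs');

  async function uploadImage(localPath, filename) {
    const client = new MongoClient(process.env.MONGODB_URI); // assumed connection string
    await client.connect();
    const bucket = new GridFSBucket(client.db('app'), { bucketName: 'images' });
    await new Promise((resolve, reject) => {
      fs.createReadStream(localPath)
        .pipe(bucket.openUploadStream(filename)) // chunks land in images.files / images.chunks
        .on('finish', resolve)
        .on('error', reject);
    });
    await client.close();
  }

Reading the file back is the mirror image: bucket.openDownloadStreamByName(filename) piped to the HTTP response.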
null | [
"crud"
]
| [
{
"code": "{\n \"_id\": {\n \"$oid\": \"63b6e12d650c56ca2e720c86\"\n },\n \"userName\": \"Harshit Gupta \",\n \"email\": \"[email protected]\",\n \"password\": \"$2b$10$Ys0AfxbtQKX3XytM1n85dO0dUL2bFAMQiD3w5nSw6zFmda4W4yRn6\",\n \"tasks\": [\n {\n \"habbitName\": \"Jogging\",\n \"Description\": \"defended\",\n \"Sunday\": false,\n \"Monday\": true,\n \"Tuesday\": false,\n \"Wednesday\": true,\n \"Thursday\": false,\n \"Friday\": true,\n \"Saturday\": false,\n \"Month\": [\n {\n \"date\": 0,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c8d\"\n }\n },\n {\n \"date\": 1,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c8e\"\n }\n },\n {\n \"date\": 2,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c8f\"\n }\n },\n {\n \"date\": 3,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c90\"\n }\n },\n {\n \"date\": 4,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c91\"\n }\n },\n {\n \"date\": 5,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c92\"\n }\n },\n {\n \"date\": 6,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c93\"\n }\n },\n {\n \"date\": 7,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c94\"\n }\n },\n {\n \"date\": 8,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c95\"\n }\n },\n {\n \"date\": 9,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c96\"\n }\n },\n {\n \"date\": 10,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c97\"\n }\n },\n {\n \"date\": 11,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c98\"\n }\n },\n {\n \"date\": 12,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c99\"\n }\n },\n {\n \"date\": 13,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c9a\"\n }\n },\n {\n \"date\": 14,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c9b\"\n }\n },\n {\n \"date\": 15,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c9c\"\n }\n },\n {\n \"date\": 16,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c9d\"\n }\n },\n {\n \"date\": 17,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c9e\"\n }\n },\n {\n \"date\": 18,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c9f\"\n }\n },\n {\n \"date\": 19,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720ca0\"\n }\n },\n {\n \"date\": 20,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720ca1\"\n }\n },\n {\n \"date\": 21,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720ca2\"\n }\n },\n {\n \"date\": 22,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720ca3\"\n }\n },\n {\n \"date\": 23,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720ca4\"\n }\n },\n {\n \"date\": 24,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720ca5\"\n }\n },\n {\n \"date\": 25,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720ca6\"\n }\n },\n {\n \"date\": 26,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720ca7\"\n }\n },\n {\n \"date\": 27,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720ca8\"\n }\n },\n {\n \"date\": 28,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720ca9\"\n }\n },\n {\n \"date\": 29,\n \"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720caa\"\n }\n },\n {\n \"date\": 30,\n 
\"done\": false,\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720cab\"\n }\n }\n ],\n \"timeRemind\": \"23:12\",\n \"_id\": {\n \"$oid\": \"63b6e14c650c56ca2e720c8c\"\n },\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1672929612245\"\n }\n },\n \"updatedAt\": {\n \"$date\": {\n \"$numberLong\": \"1672929612245\"\n }\n }\n } ],\n \"timeRemind\": \"03:04\",\n \"_id\": {\n \"$oid\": \"63b7098a4b7294dcdae71128\"\n },\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1672939914561\"\n }\n },\n \"updatedAt\": {\n \"$date\": {\n \"$numberLong\": \"1672939914561\"\n }\n }\n }\n ],\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1672929581689\"\n }\n },\n \"updatedAt\": {\n \"$date\": {\n \"$numberLong\": \"1672939914562\"\n }\n },\n \"__v\": 2\n}\n\nUser.findOneAndUpdate({\n _id: req.user.id,\n tasks:{\n $elemMatch:{\n _id:req.params.taskId,\n Month:{\n $elemMatch:{\n date: dateIndice,\n }\n }\n }\n }\n }\n ,{\n \"tasks.$.done\": true\n },{\n projection:{\n tasks:{\n $elemMatch:{\n _id: req.params.taskId,\n Month:{\n $elemMatch:{\n date: dateIndice,\n }\n }\n }\n }\n } \n }\n );\n",
"text": "I have this Mongo DB document structure…I want to first select user id then specific the task id and the document which gets selected then I want to update specific date of the month field can you please suggest the specific query…first select select user by user id then a task by task id then in Month field i want to update a n element(done attribute to true) whose date is 4.I tried to apply the following operation.",
"username": "HARSHIT_GUPTA2"
},
{
"code": "taskusernametimeremind",
"text": "Hi @HARSHIT_GUPTA2 and welcome to the MongoDB community forum!!Based on the document structure shred above, it seems it is not well formed and thus it makes it difficult for us to reproduce in local environment.\nCould you share an updated document which would help to provide the working query?Adding to that, the example document contains a lot of information in a single document. If this is subjected to grow indefinitely in future, this might make the querying on data difficult and slow. On top of that, MongoDB has a hard limit of 16MB per document that is not configurable, so a document that grows indefinitely can hit this limit.If your use case allows it, the entire document structure could be split into three different collections as:Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
}
]
| Mongo DB query update nested document | 2023-01-05T19:14:49.901Z | Mongo DB query update nested document | 732 |
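One hedged way to express the update asked about in the thread above is positional array filters, since the positional $ operator only reaches one array level deep. The collection name users, the userId/taskId placeholders and the filter identifiers t and m are assumptions for illustration; field names follow the document shown in the question.

  db.users.updateOne(
    { _id: userId, "tasks._id": taskId },
    { $set: { "tasks.$[t].Month.$[m].done": true } },            // flip done on one Month entry
    { arrayFilters: [ { "t._id": taskId }, { "m.date": 4 } ] }   // pick the task, then the date
  )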
null | [
"configuration",
"storage"
]
| [
{
"code": "storage:\n wiredTiger:\n engineConfig:\n cacheSizeGB: 9\nJan 4 09:27:26 s kernel: [21037324.308350] conn315770 invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0\n\nJan 4 09:27:26 s kernel: [21037324.308627] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=/,mems_allowed=0,oom_memcg=/system.slice/mongod.service,task_memcg=/system.slice/mongod.service,task=mongod,pid=237767,uid=107\nJan 4 09:27:26 s kernel: [21037324.308731] Memory cgroup out of memory: Killed process 237767 (mongod) total-vm:22557452kB, anon-rss:21047624kB, file-rss:4536kB, shmem-rss:0kB, UID:107 pgtables:41772kB oom_score_adj:0\n total used free shared buff/cache available\nMem: 24048 16848 2020 0 5179 6807\nSwap: 0 0 0\n\n",
"text": "Hello,I use WiredTiger engine and ave storage configuration doneServer RAM: 24 GB.\nInformation from systemctl status mongod\nMemory: 19.8 GB (limit 20.1GB).Sometimes without any reason I haveHow possible to identify why MongoDB uses more RAM than possible to use?free -m output",
"username": "Staff_IT"
},
{
"code": "",
"text": "Hello @Staff_IT ,Welcome to The MongoDB Community Forums! The most common reason for OOMkilled process is that the process is using more RAM than what the server has, and the server has no swap configured. Anecdotally, this also typically mean that the hardware is under-provisioned for the workload.Setting the WT cache does not mean that the whole mongod will adhere to that much memory. MongoDB uses memory on top of WT cache for other database purposes e.g. query processing, incoming connections, etc. Currently there is no method to limit this memory usage.One straightforward way to prevent this OOMkill is to provision a swap space. However if the hardware is actually underprovisioned for the workload, it will make it very slow due to swapping. However there’s less chance of MongoDB getting OOMkilled by the kernel.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Linux oom-killer | 2023-01-09T09:31:46.117Z | Linux oom-killer | 2,625 |
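When chasing an OOM kill like the one in the thread above, it can help to compare WiredTiger cache usage with the overall resident memory of the mongod process, since the cache setting only bounds the former. A quick mongosh sketch; the field names are standard serverStatus output:

  const s = db.serverStatus()
  print("resident MB:", s.mem.resident)                                            // RSS seen by the OS
  print("WT cache in use:", s.wiredTiger.cache["bytes currently in the cache"])
  print("WT cache max:", s.wiredTiger.cache["maximum bytes configured"])
  print("connections:", s.connections.current)                                     // each connection costs memory outside the cache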
null | [
"aggregation",
"java",
"scala"
]
| [
{
"code": "db.products.aggregate([\n {\n $project: {\n \"resulted_size\": { $sum: { $bsonSize: \"$$ROOT\" } }\n }\n }\n])\ncollection.\n aggregate(\n Seq(\n Aggregates.group(\n null,\n Accumulators.sum(\"resulted_size\", BsonDocument(\"$bsonSize\" -> BsonString(\"$$ROOT\")))\n )\n )\n )\n",
"text": "Hello everyone, I try to represent next mongo queryin my Scala code, I use mongo-java-driver and when I try to use MQL I see the next error ‘Unrecognized expression $bsonSize’Code:How can I need to represent this aggregation in my code?",
"username": "devsol_pwn3d"
},
{
"code": "$bsonSize",
"text": "@devsol_pwn3d, the $bsonSize operator was introduced in MongoDB 4.4.If you’re connecting to a MongoDB 4.2 or older cluster the operator would not be available.",
"username": "alexbevi"
},
{
"code": "",
"text": "Thank you, yes the version is 4.2.18(",
"username": "devsol_pwn3d"
},
{
"code": "$sum$project$group",
"text": "Note that you cannot use $sum this way in $project - used a a regular expression it expects an array (to sum its elements). The only time you can give it a number rather than an array of numbers is when you are using it as an accumulator in $group.Asya",
"username": "Asya_Kamsky"
},
{
"code": "group",
"text": "I see you’re using group in your Scala code, so it’s just your shell aggregate that’s using the wrong stage name most likely.",
"username": "Asya_Kamsky"
}
]
| Mongo aggregation $bsonSize unrecognized | 2023-01-12T18:45:21.841Z | Mongo aggregation $bsonSize unrecognized | 1,558 |
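To make the two points in the thread above concrete, a mongosh sketch against a 4.4+ server: $bsonSize per document works directly in $project, while summing sizes across documents belongs in a $group accumulator.

  // per-document size, no accumulator needed
  db.products.aggregate([
    { $project: { resulted_size: { $bsonSize: "$$ROOT" } } }
  ])

  // total size over the whole collection
  db.products.aggregate([
    { $group: { _id: null, resulted_size: { $sum: { $bsonSize: "$$ROOT" } } } }
  ])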
null | []
| [
{
"code": "",
"text": "Hello,\nI have set the profiling level to 2 using the command db.setProfilingLevel(2), but it doesn’t seem like all the queries are being captured. The profiler GUI on Atlas does not display some of the queries.Any pointers on how to fix this?Thanks!",
"username": "Prasad_Kini"
},
{
"code": "",
"text": "This has been answered on another topic.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| Mongo Atlas Profiler | 2023-01-06T02:02:07.907Z | Mongo Atlas Profiler | 791 |
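Two hedged mongosh checks that are often useful when profiler output looks incomplete, as in the thread above: confirm the level actually in effect on the database being queried (profiling is set per database), then read the capped system.profile collection directly, since older entries roll off as it fills.

  db.getProfilingStatus()                               // e.g. { was: 2, slowms: 100, ... }
  db.system.profile.find().sort({ ts: -1 }).limit(5)    // most recent profiled operations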
null | []
| [
{
"code": "",
"text": "Hello All,Is there a rule of thumb for how long an index removal and replacement on a large collection takes? I need to rebuild an index on a collection with 86 billion documents in it and need to scope out downtime. Any help would be appreciated. I have a 9 shard cluster hosted on r5.xlarge AWS EC2’s. Each server utilizes a 4TB EBS volume rated for 12000 IOPS.",
"username": "Ian_Beck"
},
{
"code": "",
"text": "Hi @Ian_Beck,\nas far as I know for the drop there isn’ t effort time, but when you restore the index, depends on how many index you’ ve to restoring and in how many data you’ re restoring this index. So is difficulty to estimate how many time you need for create the index!Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "I figured that might be the answer, thanks for the help Fabio.",
"username": "Ian_Beck"
}
]
| Index Drop and Rebuild Execution Time | 2023-01-11T22:37:20.173Z | Index Drop and Rebuild Execution Time | 662 |
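For the index-rebuild question above, one common pattern to avoid a window with no usable index is to build the replacement first and drop the old definition only once the build completes. The collection and index names below are placeholders, not taken from the thread.

  db.events.createIndex({ accountId: 1, ts: -1 }, { name: "accountId_ts_v2" })  // 4.2+ index builds avoid long blocking locks for most of the build
  db.events.dropIndex("accountId_ts_v1")                                        // remove the old definition afterwards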
null | [
"java",
"production"
]
| [
{
"code": "",
"text": "The 4.8.2 MongoDB Java & JVM Drivers release is a patch to the 4.8.1 release.The documentation hub includes extensive documentation of the 4.8 driver.You can find a full list of bug fixes here.",
"username": "Valentin_Kovalenko"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Java Driver 4.8.2 Released | 2023-01-12T22:43:59.966Z | MongoDB Java Driver 4.8.2 Released | 2,245 |
null | [
"change-streams"
]
| [
{
"code": "Model.watch().on('change', data => console.log(data));\n",
"text": "Hello everyone,I’m trying my hands on change streams for the first time.\nI have something like this.But 1. is it possible to access the properties of fullDocument? and 2. how would I do this?",
"username": "CHH_N_A"
},
{
"code": "",
"text": "Additional parameters will be required to get the full document on updates.MongoDB triggers, change streams, database triggers, real time",
"username": "chris"
},
{
"code": "async function getDocumentsFromChangeStream() {\n const changeStream = Model.watch([], { fullDocument: 'updateLookup' }) as ChangeStream<Model>;\n\n changeStream.on('change', data => {\n if (data.operationType === 'insert') {\n return data.fullDocument;\n }\n });\n}\n",
"text": "For now I have something like:So the use case would be to post some data (in my case testcases) to mongodb and then return each inserted document to running some tests with it.\nthis function should run insde a cron so that it listens every each x second on changes and returns the documents from the function. If you have any suggestions to it, it would be very welcome.",
"username": "CHH_N_A"
},
{
"code": "{ fullDocument: 'updateLookup' }",
"text": "The full document will be available with inserts anyway, the { fullDocument: 'updateLookup' } is if you want the full document returned when updates are made too.I would suggest that you keep the app running and processing the change stream full time rather than executing it periodically.You will need to capture the resume token and persist it if you want the app to pick up from where it finished at program exit.MongoDB triggers, change streams, database triggers, real time",
"username": "chris"
}
]
| How to access fullDocument properties from a change stream? | 2023-01-09T13:59:37.076Z | How to access fullDocument properties from a change stream? | 1,578 |
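A minimal Node.js sketch of the approach described in the thread above: process the change stream continuously and persist the resume token so a restarted process can continue where it stopped. The collection names, the stream_state store and the runTests handler are assumptions for illustration.

  const changeStream = db.collection('models').watch([], { fullDocument: 'updateLookup' });

  for await (const event of changeStream) {
    if (event.operationType === 'insert') {
      await runTests(event.fullDocument);          // hypothetical test runner
    }
    await db.collection('stream_state').updateOne( // remember how far we got
      { _id: 'models' },
      { $set: { resumeToken: event._id } },        // event._id is the resume token
      { upsert: true }
    );
  }
  // on startup, pass { resumeAfter: savedToken } to watch() to continue from the saved point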
null | [
"mongodb-shell"
]
| [
{
"code": "",
"text": "Hi there!\nI can connect remote MongoDB server with authority by mongosh.\nbut I can’t connect by intellij idea database with authority.",
"username": "_N_A72"
},
{
"code": "",
"text": "I resolve this problem by change intellij idea database authentication from ‘user & password’ to ‘SCRAM-SHA-1’!!! thank you~~\nthis is my checklist:\nremote host: centos7.6\nmongodb: 6.0.2\nintellij idea: 2023.3",
"username": "_N_A72"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How can I connect remote MongoDB server with Intellij Idea database? | 2023-01-12T19:11:36.674Z | How can I connect remote MongoDB server with Intellij Idea database? | 1,434 |
null | [
"queries",
"swift",
"graphql"
]
| [
{
"code": "query {\n event {\n action\n }\n}\n{\n \"data\": {\n \"event\": {\n \"action\": \"create\"\n }\n }\n}\nquery {\n event {\n something: action\n }\n}\n{\n \"data\": {\n \"event\": {\n \"something\": null\n }\n }\n}\nquery {\n __typename\n eventsquery__kpaue1a96yh4: events {\n __typename\n actionevent__1tnikdq0kcc8m: action\n }\n}\n{\n \"data\": {\n \"__typename\": \"Query\",\n \"eventsquery__kpaue1a96yh4\": [\n {\n \"__typename\": \"Event\",\n \"actionevent__1tnikdq0kcc8m\": null\n },\n {\n \"__typename\": \"Event\",\n \"actionevent__1tnikdq0kcc8m\": null\n },\n {\n \"__typename\": \"Event\",\n \"actionevent__1tnikdq0kcc8m\": null\n },\n ...\n ]\n }\n}\n",
"text": "Hello,I am new to MongoDB and was trying to make a small project using the Atlas App Service GraphQL API. I am trying to have an iOS app interface with my database using SwiftGraphQL which, after testing on other GraphQL APIs, is a functional library. The issue I am having is with GraphQL aliases. Currently I can run the following query with this result:Query:Result:Although when I try to use an alias for one of the “action” field, it gives me a null value:\nQuery:Response:What is happening with my Swift GraphQL library is that it is making this query to the API and returning null values\nQuery:Response:I am currently trying to use and test the syntax from here.\nIs GraphQL aliasing not available for the Realm GraphQL API, or am I doing something wrong on my end?If it is any help here is some information on my environment:I have tried running this query on both the GraphiQL client on the MongoDB website, as well as the Apollo Sandbox. Both have had the same response to the queries above.Thank you",
"username": "Teddy_Bersentes"
},
{
"code": "",
"text": "Hello @Teddy_Bersentes ,I have the same issue.\nDid you find a solution for this?I have create a request to the realm feedback page\nPlease vote for it Realm GraphQL returning “null” when using a GraphQL alias – MongoDB Feedback Engine",
"username": "cyril_moreau"
}
]
| Realm GraphQL returning "null" when using a GraphQL alias | 2022-06-14T21:23:59.113Z | Realm GraphQL returning “null” when using a GraphQL alias | 2,852 |
null | [
"graphql"
]
| [
{
"code": "\"message\": \"runtime error: slice bounds out of range [-1:]\"\n",
"text": "Reference: GraphQLCurrently Realm GraphQL only allows field aliases on top level queries, and not on keys. An error will get returned stating:Are there any plans on compliance with this GraphQL specification? It would be very much appreciated.",
"username": "ajedgarcraft"
},
{
"code": "",
"text": "Can I get an official response on this? A lot of the tooling that we use at the agency I work with have fully compliant GraphQL specs which are causing errors when queries are being sent to the Realm GraphQL service. Thanks.",
"username": "ajedgarcraft"
},
{
"code": "",
"text": "Hey @ajedgarcraft - I’m sorry for such a late response. This isn’t an available feature at the moment - can you provide more insight into why you want to use @alias and the general need for full GraphQL compliance? While this isn’t on our roadmap right now, Adding it here would help us gauge interest - Realm: Top (0 ideas) – MongoDB Feedback Engine",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Field aliasing is an integral feature of the GraphQL spec with very clear use cases documented elsewhere – https://graphql.org/learn/queries/#aliases – I would argue not being able to use it limits the usage of the GraphQL API in a myriad of situations.",
"username": "Nikolaj_Selvik"
},
{
"code": "",
"text": "As @Nikolaj_Selvik stated field aliasing is integral and the spec compliance has nested aliasing. Many tools/libraries for various language and frameworks are compliant with GraphQL specs which therefore makes them unable to work with MongoDB Realm GraphQL. Realm GraphQL is a great time saver but it would be much more useful if it was fully spec compliant.If you desire further specifics - something like GitHub - dillonkearns/elm-graphql: Autogenerate type-safe GraphQL queries in Elm. is incapable of working with MongoDB Realm GraphQL because it utilizes nested field aliases when generating type-safe code for interacting with GraphQL schema. Dillon Kearns package uses GraphQL specification.Spec compliance shouldn’t be a feature request, it should be available out the box to assure there aren’t unnecessary hinderances for anyone naturally expecting spec compliance. It would strengthen use cases.",
"username": "ajedgarcraft"
},
{
"code": "",
"text": "Frankly I am shocked that this feature is not included. The idea that any large project would not need to transform / map names on some data in this case via the use of an alias seems extremely short sighted. In my 20 years of working in hundreds of projects I have never seen or heard of a project that did not need some kind of mapping. In an ideal world we would all follow some common naming schema, reducing the bureaucracy and sending many programmers to the unemployment office. Fortunately we don’t live such a technocratic utopia, and the people consuming that data can’t seem to agree, hence the need to change name here and there.",
"username": "Scott_Johnston"
},
{
"code": "",
"text": "Hello,Any update about this alias spec?\nI have create a request to the realm feedback page\nPlease vote for it Realm GraphQL returning “null” when using a GraphQL alias – MongoDB Feedback Engine",
"username": "cyril_moreau"
}
]
| GraphQL Field Alias Spec Compliance | 2021-04-21T16:06:27.296Z | GraphQL Field Alias Spec Compliance | 6,225 |
[]
| [
{
"code": "",
"text": "Hi guys,I joined a MERN stack project where Mongo is deployed on AWS by using MongoDB Atlas. Mongo collections have less than 150k records. I tried to test a user flow that generates ~110 mongo queries. I estimate ~110 queries based on all mongo spans tracked by DataDog.All these mongo spans have duration between 50ms-500ms in production/development environment. I created a test suite in JMeter where I test this flow with 500 virtual users where ramp-up period is 60s. When I run this test, mongo spans have extremely long duration >30s and they cause request timeout errors on the server.I tried to upgrade the Mongo Atlas environment to M200 (I tried both General option and Local NVMe SSD) and I tried M300 as well. It didn’t help, mongo’s spans duration is too long. When the test was running, I didn’t notice any spikes in Mongo Atlas → Real Time monitor. CPU with Disk Util were under 5%. When I run the test and I see it’s failing, I stop the test and check traces in DataDog, there is not more 1000 mongo spans(queries) in DataDog.When I open Mongo Atlas Profiling View, I can see that queries execution time is a bit slower when test is running, but most of them are missing. Do you know why profiling view is missing some queries and doesn’t show slow queries >30s I can see in DataDog?How is it possible that such as a strong environment M200/M300 is not able to process <50k queries with collections <150k records within one minute?Do you have any idea how I can identify what’s the issue with Mongo server? I attached screenshots from Metrics view where you can see some spikes when tests were running on M200 configuration.\nM200_general_metrics1920×13020 863 KB\nThere are 3 recommendations in Performance Advisor to add an index to 3 collections. Do you think this can be the issue why Mongo server is so slow?",
"username": "And_Ga"
},
{
"code": "",
"text": "IfCPU with Disk Util were under 5%then the following conclusion is wrongHow is it possible that such as a strong environment M200/M300 is not able to process <50k queries with collections <150k records within one minute?A slow response time on a client does not imply slow queries as indicated bydoesn’t show slow queries >30sThe corollary is that the bottleneck is elsewhere.Do you download the documents from the query? You client simulating 500 virtual users might not have the bandwidth to download that much data.What is the load on the machine running the 500 virtual users? Your client might not be powerful enough to make the query fast enough.If you increase the capacities on one side and it is not faster then you increased the capacities on the wrong side.My conclusion is that you are too fast to conclude thatthe issue with Mongo serverAdding index will definitively help, usually, but if CPU and Disk is under 5% I do not think it will.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks! I am running the test on Mac M1 machine that is suppose to simulate “browser”. All queries are run by Heroku server that is running on Performance L-dyno with enabled auto scaling. Do you still think this can be the issue?",
"username": "And_Ga"
},
{
"code": "",
"text": "I know nothing aboutPerformance L-dynoso I cannot comment.The only thing I can add is that if you increase the capacity on one side and it is not faster then the bottleneck is on the other side. Starting from this observation, I would try to decrease the capacity on the MongoDB side until performances degrade. That would give you a baseline of what your current test setup is able to load the server.",
"username": "steevej"
},
{
"code": "",
"text": "Just find out the issue is with the Heroku server. Even if you use autoscaling with performance dynos, Heroku server is not able to process a lot of requests. You just need to use more dynos. The Heroku server was blocked on load tests. I just don’t understand why DataDog shows long mongo spans if mongo queries weren’t executed. When I checked mongo logs, the queries are missing.",
"username": "And_Ga"
}
]
| Mongo Atlas performance issues | 2023-01-08T14:25:08.233Z | Mongo Atlas performance issues | 1,172 |
|
null | [
"kotlin"
]
| [
{
"code": "",
"text": "Hello, I have question related to new Realm for Kotlin where Builder require specify list of objects (e.g. schema). On iOS it seems that this is not required. I wonder if this is something that is really required or and whenever same requirement will come on iOS. For us it would be better if we could add objects dynamically as long as we have one codebase with multiple configuration using different sets of objects. But if this is required and important we will find a way to handle it.",
"username": "Vladimir_Belohradsky"
},
{
"code": "",
"text": "Currently, it is required due to the way the Kotlin Compiler works. I.e. setting it manually allow us to enable incremental compilation, which is quite significant when compiling the project.We have some ideas for fixing it, but it will require some experimentation. You can follow Support default schema creation · Issue #90 · realm/realm-kotlin · GitHub for this issue.",
"username": "ChristanMelchior"
}
]
| Schema enforcement in Kotlin | 2023-01-12T15:44:39.987Z | Schema enforcement in Kotlin | 1,160 |
null | [
"database-tools",
"backup"
]
| [
{
"code": "",
"text": "What is the alternative to mongodump in version 5? I don’t see mongodump in bin folder. Please help.",
"username": "Ana"
},
{
"code": "",
"text": "You can still use mongodump, they just removed it from the mongod download and put it in a download called “Database tools” that way they can be developed separately.",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Thanks so much.\nNow I have upgraded to 5.0…is it okay to upgrade to 6 next or better go from 5.0 to 5.1 to 5.2 etc??",
"username": "Ana"
},
{
"code": "",
"text": "You can use this doc to understand the whole process. But you can go to 6.0. It’s Major release you need to do in order not minor releases.",
"username": "tapiocaPENGUIN"
}
]
| MongoUpgrade from 4.4 to 5.014 - Backups | 2023-01-12T14:31:40.430Z | MongoUpgrade from 4.4 to 5.014 - Backups | 883 |
null | [
"aggregation"
]
| [
{
"code": "[\n {\n profileId: 123,\n datePosted: '2022-01-01',\n content: 'this is post content'\n },\n {\n profileId: 123,\n datePosted: '2022-01-09',\n content: 'this is a different posts content'\n },\n {\n profileId: 456,\n datePosted: '2022-02-03',\n content: 'this is some other posts content'\n },\n]\n[\n {\n profileId: 123,\n postsByMonth: [\n {\n month: '2022-01-01',\n posts: [\n {\n datePosted: '2022-01-01',\n content: 'this is post content'\n },\n {\n datePosted: '2022-01-09',\n content: 'this is a different posts content'\n },\n ]\n },\n ]\n }, \n {\n profileId: 456,\n postsByMonth: [\n {\n month: '2022-02-01',\n posts: [\n {\n datePosted: '2022-02-03',\n content: 'this is some other posts content'\n }\n ]\n }\n ]\n }\n]\n",
"text": "I would like to group documents based on a reference field (profileID), and then group posts for each month of the past year into an array of documents. Here is the current schema:Here is the desired schema:What is the best way to accomplish this using aggregation? Thank you in advance!!",
"username": "Alexander_Miller"
},
{
"code": "$groupprofileIddatePostedprofileId",
"text": "Best would be two $group stages, first one groups by profileId and month of datePosted, and then the second groups by profileId pushing posts into an array…Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Thank you! Grouping by two fields during the first stage was just what I needed.",
"username": "Alexander_Miller"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Group documents by common field and bucket by month | 2023-01-11T21:49:27.557Z | Group documents by common field and bucket by month | 599 |
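A concrete form of the two-stage pipeline suggested in the thread above, assuming datePosted is stored as a Date (with the string dates shown in the question, a $dateFromString step would be needed first); $dateTrunc requires MongoDB 5.0+.

  db.posts.aggregate([
    { $group: {
        _id: { profileId: "$profileId", month: { $dateTrunc: { date: "$datePosted", unit: "month" } } },
        posts: { $push: { datePosted: "$datePosted", content: "$content" } }
    } },
    { $group: {
        _id: "$_id.profileId",
        postsByMonth: { $push: { month: "$_id.month", posts: "$posts" } }
    } },
    { $project: { _id: 0, profileId: "$_id", postsByMonth: 1 } }
  ])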
null | [
"dot-net",
"field-encryption"
]
| [
{
"code": "",
"text": "Hello everyone,I’m preparing a mechanism to automatically encrypt some fields that contains sensitive personal information in our system. I’ve been following the guideline to setup but I’m struggling with MongoDB.Driver.Encryption.MongoEncryptionException exception being raised with the message ‘Encryption related exception: command not supported for auto encryption: buildinfo.’\nI’m using the C# driver 2.18.0 which uses libmongocrypt 1.6.0 and I’ve tried with the mongrocryptd versions 5.0.14 and 6.0.3, everything in Windows.\nI’ve been searching regarding this issue and it seems it was fixed in a previous libmongocrypt version, but it is still happening to me. I also read that this happens because the health check uses this command but I haven’t found a way to disable it in the C# driver.\nDoes someone else have this problem or know how to solve it?Thank you very much!",
"username": "Jorge_Luis_Calderin_Garcia"
},
{
"code": "buildInfo",
"text": "hey @Jorge_Luis_Calderin_Garcia . I’m not sure how you can see this exception. One of the reasons against it that we don’t call buildInfo starting from 2.15 driver. Can you please provide a repro?",
"username": "Dmitry_Lukyanov"
},
{
"code": "",
"text": "Hi Dimitry! I checked and it turned to be a library called Hangfire, I was using the same encrypted Mongo Client, and they seems to be using the command for some internal stuff. I used a raw client for that library and it started to work fine. Thank you very much!",
"username": "Jorge_Luis_Calderin_Garcia"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Field Auto Encryption failing because of buildinfo field not supported | 2023-01-12T09:49:17.581Z | Field Auto Encryption failing because of buildinfo field not supported | 1,045 |
[
"atlas-device-sync"
]
| [
{
"code": "",
"text": "Realm sync stopped working, asking to Restart Syncing.\nWhen attempting to restart, it did not work.\nSo we tried to Terminate Sync.\nNow the app is stuck with the following message:Sync is currently terminating…Please wait for sync to finish terminating before enabling again.The Enable Sync button is also disabled.\n\nmongo1009×165 13.1 KB\nThis status has been there for about 1:30 hours by now. Since this is a live application, we would like to resolve this problem soon.\nHow can we start syncing the application again?",
"username": "Sandeepani_Senevirathna"
},
{
"code": "",
"text": "Hi, can you share your app_id / the URL in the browser?",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "This problem appears to be solved for now after notifying the MongoDB support.\nBut some data is not available according to the GraphQL results. The data is missing from the date when we had a sync failure last week when we were trying to create a flexible sync app and it overloaded the cluster which is in the M10 tier. But earlier we could see the relevant data in the mobile devices which were syncing but not in the MongoDB.\nWe wish to upgrade to the M20 tier now since there are some sync errors as raised here: https://www.mongodb.com/community/forums/t/error-starting-sync/202872.\nAny help is much appreciated.",
"username": "Sandeepani_Senevirathna"
},
{
"code": "",
"text": "Hi,\nIm currently facing the same issue, some help will be appreciated.\nApp id: ocla-prd-taxms\n\nScreen Shot 2023-01-11 at 16.58.571070×548 23.8 KB\nThanks",
"username": "Ivan_Koop"
},
{
"code": "",
"text": "Hi, looks like you already got into a good place. Unfortunately, terminating flexible sync can take a while since we need to delete a lot of metadata. One of the big improvements of flexible sync is that terminating sync should be almost instantaneous Let me know if I can help in any way, but seems like this auto-resolved and the issue is just that it took a bit of time to delete all of the metadata.",
"username": "Tyler_Kaye"
}
]
| Stuck on terminate sync - Sync is currently terminating | 2022-12-05T08:13:48.605Z | Stuck on terminate sync - Sync is currently terminating | 2,461 |
|
null | [
"data-modeling"
]
| [
{
"code": "{\n \"UserName\": \"The only one user\",\n \"Posts\": [\n {\n \"Date\": \"01.01.2023 23:40:02\",\n \"Type\": 3\n \"Message\": \"Hello world\"\n },\n{\n \"Date\": \"02.01.2023 22:42:02\",\n \"Type\": 2\n \"Message\": \"Hello earth\"\n }\n ]\n}\n",
"text": "Dear Mongo Community,I’m look back to 25 years of relational database and are now diving into document-based-mongodb. And I have a question how to model the following case:Assuming I have I forum with posts:Imaging there are a lot of messages per user and a lot of user.\nIf I want to display the last 20 messages across all users what would the query look like?\nOr do i have to reorganize the posts into a seperate collection?Thank you, for your input.\nLukas",
"username": "Lukas_Bauhaus"
},
{
"code": "{ \"Posts\" : { \"$slice\" : [ -1 , 20 ] }\n",
"text": "Please do not store your date as string.Dates as strings take more space, are slower to compare and in your specific format cannot be sorted.You could use a projection that uses something:",
"username": "steevej"
}
]
| Querying nested objects | 2023-01-12T12:17:29.019Z | Querying nested objects | 635 |
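The $slice projection above returns the last posts of a single user; for the other half of the question (the latest 20 messages across all users) one hedged sketch with the embedded design is to unwind and sort, which assumes Date-typed values as recommended above, since DD.MM.YYYY strings do not sort chronologically. Past a certain volume, moving posts into their own collection keeps this query cheaper.

  db.users.aggregate([
    { $unwind: "$Posts" },
    { $sort: { "Posts.Date": -1 } },   // newest first; needs a real Date type
    { $limit: 20 },
    { $project: { _id: 0, UserName: 1, Date: "$Posts.Date", Message: "$Posts.Message" } }
  ])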
null | [
"database-tools",
"c-driver"
]
| [
{
"code": "cd",
"text": "I would like to announce mongovi v2.0.0: Release v2.0.0 · timkuijsten/mongovi · GitHubmongovi is a command line interface for MongoDB that I’ve built and been using for the past 6 years. It has the following features:Version 2.0.0 adds support for UTF-8 and includes many, many performance improvements, code cleanups and simplifications.I’m currently looking for someone that can help with getting it uploaded as a package to Debian:\nhttps://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1028411\nand https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1028418",
"username": "Tim_Kuijsten"
},
{
"code": "cd",
"text": "I was struggling at the second step but was finally able to make it happen.",
"username": "Juniper_Scott"
},
{
"code": "",
"text": "Thanks for the feedback!If you have any suggestions for better wording/documentation or like to elaborate on what was confusing, please let me know.",
"username": "Tim_Kuijsten"
}
]
| Announcing mongovi v2.0.0 | 2023-01-10T19:41:33.575Z | Announcing mongovi v2.0.0 | 1,239 |