image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"node-js"
] | [
{
"code": "import mongodb from 'mongodb';\n\nconst { ObjectId } = mongodb;\n\nnew ObjectId();\n\nOR\n\nnew ObjectId(stringHash);\n\n {\n \"keyword\": \"bsonType\",\n \"params\": {\n \"bsonType\": \"objectId\"\n },\n \"message\": \"should be objectId got 641adae6a6e5c8128965ae9a\",\n \"instancePath\": \"/widgets/116/parentId\",\n \"schemaPath\": \"#/properties/widgets/items/properties/parentId/bsonType\",\n \"schema\": \"objectId\",\n \"data\": \"641adae6a6e5c8128965ae9a\"\n }\n .... etc\n",
"text": "Since I upgraded to mongodb driver 5 for node js, I can not create custom object id.This is the way it works with the driver 4.x:But when I use now this approach I get validation errors:",
"username": "Anton_Tonchev"
},
{
"code": "const MongoClient = require('mongodb').MongoClient;\nconst ObjectId = require('mongodb').ObjectId;\nconst assert = require('assert');\n\nconst url = 'mongodb+srv://<username>:<password>@cluster0.sqm88.mongodb.net/?retryWrites=true&w=majority';\nconst dbName = 'Test1';\nconst client = new MongoClient(url, { useNewUrlParser: true });\nrun()\n.catch(console.dir)\n.finally(async () => {await client.close();})\n\nasync function run() {\n\n await client.connect();\n console.log(\"Connected correctly to server\");\n\n const db = client.db(dbName);\n\n const coll = db.collection(\"sample\");\n \n let res = await coll.insertOne({_id: new ObjectId('6097a1c12714b94348359a2c'), a: 1});\n assert(res.insertedId);\n console.log(`Inserted Document _id: ${res.insertedId}`);\n}\nConnected correctly to server\nInserted Document _id: 6097a1c12714b94348359a2c\nconst ObjectId = require('mongodb').ObjectId;",
"text": "Hi @Anton_Tonchev and welcome to MongoDB community forums!!I tried to insert a custom objectId for _id using the below code snippet and I was able to insert the data successfully into the collection.Output:Please make sure, you have imported\nconst ObjectId = require('mongodb').ObjectId;\nto the code.\nThe forum post has more details on the new features in the latest release. See release notes for more details.NodeJs version: 5.1.0\nMongoDB version: 6.0.5(Atlas Deployment)Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi @Anton_Tonchev could you provide an actual stack trace for the underlying error? What exactly is trying to “validate” the ObjectId? This string is perfectly fine when constructing and ObjectId directly from the BSON library or importing it from the Driver.",
"username": "Durran_Jordan"
},
{
"code": "",
"text": "Hi @Durran_Jordan and @AasawariThanks for the reply, the problem seems to be at AJV, which I use to validate the documents before delivering them to the mongodbAJV compared if the bsonType of the document is ObjectID but now it changed to ObjectId After knowing this I fixed the validation check, and now it works fine.",
"username": "Anton_Tonchev"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can not use custom ObjectId since mongodb driver 5 | 2023-04-17T08:30:40.792Z | Can not use custom ObjectId since mongodb driver 5 | 1,017 |
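For readers hitting the same validation failure: the root cause above is that AJV compared against the constructor name, which changed from ObjectID in driver 4.x to ObjectId in driver 5. A version-agnostic check can validate against the driver's own ObjectId type instead. This is a minimal Node.js sketch, assuming AJV v8 and the official mongodb package; the bsonType keyword and the parentId schema mirror the error in the thread but are otherwise hypothetical.

const Ajv = require('ajv');
const { ObjectId } = require('mongodb');

const ajv = new Ajv();

// Custom "bsonType" keyword: for "objectId" accept either a real ObjectId
// instance or a 24-character hex string, so the check no longer depends on
// the constructor's name ("ObjectID" in 4.x vs "ObjectId" in 5.x).
ajv.addKeyword({
  keyword: 'bsonType',
  validate: (schema, data) => {
    if (schema === 'objectId') {
      return data instanceof ObjectId || ObjectId.isValid(String(data));
    }
    return true; // other bsonType values are not handled in this sketch
  },
});

const validate = ajv.compile({
  type: 'object',
  properties: {
    parentId: { bsonType: 'objectId' },
  },
});

console.log(validate({ parentId: new ObjectId() }));             // true
console.log(validate({ parentId: '641adae6a6e5c8128965ae9a' })); // true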
null | [
"dot-net",
"unity"
] | [
{
"code": "MongoCommandException: Command saslContinue failed: bad auth : authentication failed.\nMongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1[TCommandResult].ProcessReply (MongoDB.Driver.Core.Connections.ConnectionId connectionId, MongoDB.Driver.Core.WireProtocol.Messages.ReplyMessage`1[TDocument] reply) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1[TCommandResult].ExecuteAsync (MongoDB.Driver.Core.Connections.IConnection connection, System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.Authentication.SaslAuthenticator.AuthenticateAsync (MongoDB.Driver.Core.Connections.IConnection connection, MongoDB.Driver.Core.Connections.ConnectionDescription description, System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nRethrow as MongoAuthenticationException: Unable to authenticate using sasl protocol mechanism SCRAM-SHA-1.\nMongoDB.Driver.Core.Authentication.SaslAuthenticator.AuthenticateAsync (MongoDB.Driver.Core.Connections.IConnection connection, MongoDB.Driver.Core.Connections.ConnectionDescription description, System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.Authentication.DefaultAuthenticator.AuthenticateAsync (MongoDB.Driver.Core.Connections.IConnection connection, MongoDB.Driver.Core.Connections.ConnectionDescription description, System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.Authentication.AuthenticationHelper.AuthenticateAsync (MongoDB.Driver.Core.Connections.IConnection connection, MongoDB.Driver.Core.Connections.ConnectionDescription description, System.Collections.Generic.IReadOnlyList`1[T] authenticators, System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.Connections.ConnectionInitializer.AuthenticateAsync (MongoDB.Driver.Core.Connections.IConnection connection, MongoDB.Driver.Core.Connections.ConnectionInitializerContext connectionInitializerContext, System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync (System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool+PooledConnection.OpenAsync (System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool+ConnectionCreator.CreateOpenedInternalAsync (System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool+ConnectionCreator.CreateOpenedOrReuseAsync (System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool+AcquireConnectionHelper.AcquireConnectionAsync (System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquireConnectionAsync (System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.Servers.Server.GetChannelAsync (System.Threading.CancellationToken cancellationToken) (at 
<471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.Operations.RetryableWriteContext.InitializeAsync (System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.Operations.RetryableWriteContext.CreateAsync (MongoDB.Driver.Core.Bindings.IWriteBinding binding, System.Boolean retryRequested, System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.Core.Operations.BulkMixedWriteOperation.ExecuteAsync (MongoDB.Driver.Core.Bindings.IWriteBinding binding, System.Threading.CancellationToken cancellationToken) (at <471405d605184dfebce84a25a9bd22a1>:0)\nMongoDB.Driver.OperationExecutor.ExecuteWriteOperationAsync[TResult] (MongoDB.Driver.Core.Bindings.IWriteBinding binding, MongoDB.Driver.Core.Operations.IWriteOperation`1[TResult] operation, System.Threading.CancellationToken cancellationToken) (at <49b0ee9e114a4b9a88d894384d069a5a>:0)\nMongoDB.Driver.MongoCollectionImpl`1[TDocument].ExecuteWriteOperationAsync[TResult] (MongoDB.Driver.IClientSessionHandle session, MongoDB.Driver.Core.Operations.IWriteOperation`1[TResult] operation, System.Threading.CancellationToken cancellationToken) (at <49b0ee9e114a4b9a88d894384d069a5a>:0)\nMongoDB.Driver.MongoCollectionImpl`1[TDocument].BulkWriteAsync (MongoDB.Driver.IClientSessionHandle session, System.Collections.Generic.IEnumerable`1[T] requests, MongoDB.Driver.BulkWriteOptions options, System.Threading.CancellationToken cancellationToken) (at <49b0ee9e114a4b9a88d894384d069a5a>:0)\nMongoDB.Driver.MongoCollectionImpl`1[TDocument].UsingImplicitSessionAsync[TResult] (System.Func`2[T,TResult] funcAsync, System.Threading.CancellationToken cancellationToken) (at <49b0ee9e114a4b9a88d894384d069a5a>:0)\nMongoDB.Driver.MongoCollectionBase`1[TDocument].InsertOneAsync (TDocument document, MongoDB.Driver.InsertOneOptions options, System.Func`3[T1,T2,TResult] bulkWriteAsync) (at <49b0ee9e114a4b9a88d894384d069a5a>:0)\nSignUp.SignUpFunction () (at Assets/Scripts/SignUp.cs:52)\nSystem.Runtime.CompilerServices.AsyncMethodBuilderCore+<>c.<ThrowAsync>b__7_0 (System.Object state) (at <1f66344f2f89470293d8b67d71308c07>:0)\nUnityEngine.UnitySynchronizationContext+WorkRequest.Invoke () (at <4014a86cbefb4944b2b6c9211c8fd2fc>:0)\nUnityEngine.UnitySynchronizationContext.Exec () (at <4014a86cbefb4944b2b6c9211c8fd2fc>:0)\nUnityEngine.UnitySynchronizationContext.ExecuteTasks () (at <4014a86cbefb4944b2b6c9211c8fd2fc>:0)\n",
"text": "Hi! I build a mobile app in Unity and I can`t connect to cloud Atlas MongoDB. I downloaded all libraries(driver etc) and have this error:",
"username": "Mariana_Khamuda"
},
{
"code": "",
"text": "Can you connect by shell with same id/pwd?\nbad authentication means wrong combination of user & password",
"username": "Ramachandra_Tummala"
},
{
"code": "t have shell. So I canve added drivers to my Unity project and added drivers in NuGet. I am beginner in that and don",
"text": "I dont have shell. So I cant connect to cloud Atlas MongoDB without shell?\nIve added drivers to my Unity project and added drivers in NuGet. I am beginner in that and dont understand how to connect right to cloud mongo\nThanks for your answers!",
"username": "Mariana_Khamuda"
},
{
"code": "",
"text": "Sorry for interrupting, I missed my correct password, so after I’ve created new user - I`ve connected to Atlas.",
"username": "Mariana_Khamuda"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Authentication failed Atlas MongoDB | 2023-04-13T12:24:04.298Z | Authentication failed Atlas MongoDB | 2,306 |
[
"serverless",
"kolkata-mug"
] | [
{
"code": "Lead Full Stack Developer @ FinarbSoftware Engineer @ Redhat",
"text": "\nKolkata MUG1920×1080 226 KB\nKolkata MongoDB User Group is excited to announce its first meetup on Saturday, April 22, 2023, at 10:00 AM at Blob Co-Working Space. The event will include two sessions with demos, some games, lunch, and swag to win! This is a meet-up for everyone, experienced developers, architects, and startup founders to come, share ideas and case studies, and meet new people in the community. Come join us for a day of learning lunch and fun…! To RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you are RSVPed. You need to be signed in to access the button.The door will open at 10:00. Please be on time to hang out with the team.Event Type: In-Person\nLocation: Blob CoWorking Space, 7th floor, Yamuna Building, 86, Golaghata Rd, Dakshindari, Kolkata, West Bengal 700048Lead Full Stack Developer @ Finarb–Software Engineer @ Redhat",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "",
"text": "Looks like an awesome first MUG!!",
"username": "Veronica_Cooley-Perry"
},
{
"code": "",
"text": "I think the year is wrong in the first line below the banner image.",
"username": "jeyaraj"
},
{
"code": "",
"text": "Thanks for the catch @jeyaraj ",
"username": "Harshit"
},
{
"code": "",
"text": "Waiting for such an awesome meetup in Jaipur😁",
"username": "Khushi_Agarwal"
}
] | Kolkata MUG: MongoDB Inaugural Kickoff Meetup | 2023-04-02T19:02:23.955Z | Kolkata MUG: MongoDB Inaugural Kickoff Meetup | 2,363 |
null | [
"connector-for-bi"
] | [
{
"code": "2021-11-30T11:47:05.711-0700 W SCHEMA [manager] error initializing schema: error inserting document(s): (Unauthorized) not authorized on mongosqld_data to execute command { insert: \"schemas\", writeConcern: { w: \"majority\" }, $db: \"mongosqld_data\" }H:\\> \"C:\\Program Files\\MongoDB\\Connector for BI\\2.14\\bin\\mongosqld.exe\" --config \"C:\\Users\\johnson.steve\\mongodb\\mongosqld.conf\"\n2021-11-30T11:47:04.518-0700 I CONTROL [initandlisten] mongosqld starting: version=v2.14.4 pid=29052 host=US-LAPTOP-XXX\n2021-11-30T11:47:04.522-0700 I CONTROL [initandlisten] git version: df0cf0b57e9aac0ab6d545eee0d4451d11d0c6e9\n2021-11-30T11:47:04.522-0700 I CONTROL [initandlisten] OpenSSL version OpenSSL 1.0.2n-fips 7 Dec 2017 (built with OpenSSL 1.0.2s 28 May 2019)\n2021-11-30T11:47:04.522-0700 I CONTROL [initandlisten] options: {config: \"C:\\\\Users\\\\johnson.steve\\\\mongodb\\\\mongosqld.conf\", systemLog: {verbosity: 1}, schema: {sample: {namespaces: [gamotdb.CUSTID0016*]}, stored: {mode: \"auto\", source: \"mongosqld_data\", name: \"mySchema\"}}, security: {enabled: true}, mongodb: {net: {uri: \"mongodb://targetsystem:27017\", auth: {username: \"my_username\", password: \"<protected>\", source: \"sourcedb\"}}}}\n2021-11-30T11:47:04.528-0700 I SCHEMA [manager] attempting to initialize schema\n2021-11-30T11:47:04.528-0700 I SCHEMA [manager] sampling schema\n2021-11-30T11:47:04.528-0700 I NETWORK [initandlisten] waiting for connections at 127.0.0.1:3307\n2021-11-30T11:47:04.645-0700 I SCHEMA [sampler] sampling MongoDB for schema...\n2021-11-30T11:47:05.680-0700 I SCHEMA [sampler] mapped schema for 2 namespaces: \"sourcedb\" (2): [\"CUSTID0016MasterTrackerOrders\", \"CUSTID0016SiteDetailsData\"]\n2021-11-30T11:47:05.680-0700 I SCHEMA [manager] persisting schema\n\n2021-11-30T11:47:05.711-0700 W SCHEMA [manager] error initializing schema: error inserting document(s): (Unauthorized) not authorized on mongosqld_data to execute command { insert: \"schemas\", writeConcern: { w: \"majority\" }, $db: \"mongosqld_data\" }\n",
"text": "Hello community,We’ve installed Connector for BI (Mongosqld) and are running into what I hope is a simple issue.We are connecting to target MongoDB and are successfully sampling data. The process fails while trying to insert schema to local mongosqld_data. the error message is:2021-11-30T11:47:05.711-0700 W SCHEMA [manager] error initializing schema: error inserting document(s): (Unauthorized) not authorized on mongosqld_data to execute command { insert: \"schemas\", writeConcern: { w: \"majority\" }, $db: \"mongosqld_data\" }Does anyone have guidance on how to correct this issue?Thanks,\nSteveFull log file:",
"username": "Steve_Johnson1"
},
{
"code": "",
"text": "I have the same issue.",
"username": "L_Z"
},
{
"code": "security:\n enabled: true\n \nmongodb:\n net:\n uri: localhost:27017\n auth:\n username: \"db_user\"\n password: \"...\"\n source: \"my_auth_db\"\n \nnet:\n bindIp: 0.0.0.0\n port: 3307\n\n \nprocessManagement:\n service:\n name: mongosqld\n displayName: mongosqld\n description: \"BI Connector SQL proxy server\"\nmongosqld.exe --config \"C:\\Program Files\\MongoDB\\Connector for BI\\2.14\\example-mongosqld-config.yml\"",
"text": "Got it working with this config:And executing the mongosqld.exe with this command:\nmongosqld.exe --config \"C:\\Program Files\\MongoDB\\Connector for BI\\2.14\\example-mongosqld-config.yml\"",
"username": "L_Z"
}
] | Connector for BI / mongosqld authentication issue | 2021-11-30T18:58:46.040Z | Connector for BI / mongosqld authentication issue | 3,791 |
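A note for readers with the same mongosqld error: the "(Unauthorized) not authorized on mongosqld_data" message means the user mongosqld authenticates with can sample the source collections but cannot write its persisted schema into the mongosqld_data database. A minimal mongosh sketch of granting that access, assuming the user and auth database names from the posted config; adjust them to your own deployment.

// Run as a user with userAdmin privileges on the target deployment.
db.getSiblingDB('sourcedb').grantRolesToUser('my_username', [
  { role: 'readWrite', db: 'mongosqld_data' }, // lets mongosqld persist its sampled schema
]);

// Verify the grant took effect.
db.getSiblingDB('sourcedb').getUser('my_username');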
null | [
"queries",
"react-native"
] | [
{
"code": "",
"text": "Please I have been unable for about a month to get a running production build when I use Realm with React Native. The app is running fine on a development build, but it won’t run on a production build, and crash instantly. Please are there some specific versions I must use to get a running production build?\nI have read about the topic in many places, and it is actually an issue, but no working solution for me yet.Versions::\n“expo”: “^48.0.10”,\n“react-native”: “^0.71.6”,\n“realm”: “^11.7.0”,@henna.s",
"username": "Oben_Tabiayuk"
},
{
"code": "# React Native and Expo compatibility\n\n\n| Realm JavaScript | React Native | Expo | Hermes | npm | node |\n|------------------------|--------------------|----------|--------|--------|--------|\n| 11.8.0 | >= 0.71.4 | N/A | ✅ | >= 7 | >= 13 |\n| 11.7.0 | >= 0.71.4 | N/A | ✅ | >= 7 | >= 13 |\n| 11.6.0 | >= 0.71.4 | N/A | ✅ | >= 7 | >= 13 |\n| 11.5.2 | >= 0.71.4 | N/A | ✅ | >= 7 | >= 13 |\n| 11.5.1 | = 0.71.0 | N/A | ✅ | >= 7 | >= 13 (but not 19) |\n| 11.5.0 | = 0.71.0 | N/A | ✅ | >= 7 | >= 13 (but not 19) |\n| 11.4.0 | = 0.71.0 | N/A | ✅ | >= 7 | >= 13 (but not 19) |\n| 11.3.1 | >= 0.70.0 | 47 | ✅ | >= 7 | >= 13 (but not 19) |\n| 11.2.0 | >= 0.70.0 | 47 | ✅ | >= 7 | >= 13 (but not 19) |\n| 11.1.0 | >= 0.70.0 | 47 | ✅ | >= 7 | >= 13 |\n| 11.0.0 | >= 0.70.0 | 47 | ✅ | >= 7 | >= 13 |\n| 11.0.0-rc.2 | >= 0.70.0 | N/A | ✅ | >= 7 | >= 13 |\n| 11.0.0-rc.1 | >= 0.69.0 < 0.70.0 | 46 | ✅ | >= 7 | >= 13 |\n| 11.0.0-rc.0 | >= 0.66.0 < 0.69.0 | 45 | ✅ | >= 7 | >= 13 |\n| 10.24.0 | >= 0.64.0 | 44 | ❌ | >= 7 | >= 13 |\n",
"text": "Hello @Oben_Tabiayuk,Thank you for reaching out and raising your concerns. I was on a break last week so couldn’t reach out sooner.The app is running fine on a development build, but it won’t run on a production build, and crash instantly.Could you please share more information on the crash report or a stack trace from the production build?Are you using the same versions in your development build that is working?You can find out about compatible versions on React Github. Expo 48 version will be added soon, so keep a lookout.I look forward to your response.Cheers, \nHenna",
"username": "henna.s"
}
] | Realm react-native production build crashing | 2023-04-10T10:56:45.550Z | Realm react-native production build crashing | 988 |
null | [
"node-js",
"replication"
] | [
{
"code": "",
"text": "My server died randomly when I was using it. Its status is now unavailable. and I just got a notification that the replica set has no primary. Is anyone else having trouble or know how to fix this issue? (I also signed up for 24/7 support but it has been syncing for 45 minutes.",
"username": "Aidan_Ford"
},
{
"code": "",
"text": "Hello @Aidan_Ford,Welcome to the MongoDB Community forums I would recommend contacting Atlas in-app chat support regarding this. Please provide the chat support with a cluster link.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Server Died Randomly | 2023-04-16T00:45:49.329Z | Server Died Randomly | 592 |
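While waiting on Atlas support as advised in the thread above, you can check from mongosh whether the replica set has elected a primary again. A small sketch, assuming you can still connect with your usual connection string; db.hello() works for any authenticated user.

const hello = db.hello();
print('primary elected:      ' + (hello.primary !== undefined)); // false while there is no primary
print('connected to primary: ' + hello.isWritablePrimary);
print('hosts:                ' + hello.hosts.join(', '));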
[
"kotlin"
] | [
{
"code": "",
"text": "I want to create database collection name as “car_details” instead of “CarDetails” (which is model class name)\nIs there any annotation which will internally map to my own custom name instead of model class name?class CarDetails : RealmObject {\nvar make: String = “”\nvar model: String = “”\nvar miles: Int = 0\n}",
"username": "Raviteja_Aketi"
},
{
"code": "",
"text": "@Raviteja_Aketi: Sadly that’s not possible as of now. You can follow and upvote this issue for more updates.",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "Is it possible to create collections from console and insert data from mobile application (kotlin)?\nThere will not be any schema initialization from kotlin code. It has to get list of collections and insert data into the right collectionThis way i can use my own collections names instead of model class name\nimage1284×466 31.3 KB\n",
"username": "Raviteja_Aketi"
},
{
"code": "",
"text": "Hi, @Raviteja_Aketi has the right idea here. I think this documentation page should do a good job of explaining it: https://www.mongodb.com/docs/atlas/app-services/sync/data-model/data-model-map/The TLDR is that MongoDB has a Namespace (database.collection) and Realm has a TableName. The Schemas tab is where you can define the mapping between these two. By default we use the collection name to be the table name, but if you add a “title” to the json schema mapping then you can effectively map which table maps to which collection. Then you can have your car_details collection have a schema with the title “CarDetails” and I think that will get you what you want.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "But practically this is not possible with Kotlin SDK. Kotlin sdk always deals with data classes. I dont find any code to get list of collections and insert data into specific collection",
"username": "Raviteja_Aketi"
},
{
"code": "",
"text": "I am not sure I understand your concerns / what you are trying to do then. Can you clarify:",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi @Tyler_KayeHere is the example. I need an example in Kotlin to map this model class to my custom collection namesModel class\nclass CarDetails : RealmObject {\nvar make: String = “”\nvar model: String = “”\nvar miles: Int = 0\n}I want to map this to car_details collection",
"username": "Raviteja_Aketi"
},
{
"code": "DB_NAME.Car_Details",
"text": "Hi, like I linked above, if you want to map that class to a synced collection car_details, you can add a new schema in the “schemas” tab for the car_details collection and give the json schema a “title” field of CarDetails. Then you would have a Realm Class with the title CarDetails that maps to a MongoDB namespace DB_NAME.Car_Details.Does this make sense?",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi @Tyler_KayeThere is no way to edit “title” field from MongoDB UI console and also there is no specific annotation in kotlin to edit this (Extend `@PersistedName` annotation to apply to classes · Issue #1138 · realm/realm-kotlin · GitHub).Kotlin is default considering model class name as schema title but we don’t have provision to modify this\nScreenshot 2023-04-17 at 12.33.11 PM2388×1088 196 KB\n",
"username": "Raviteja_Aketi"
},
{
"code": "",
"text": "You can edit it right in that page if you want. Just note that updating the title is a “breaking” change since it is functionally “removing” the old table.Please try the following:Thanks,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "@PersistedName",
"text": "Here is the issue:Extend @PersistedName annotation to apply to classes · Issue #1138 · realm/realm-kotlin · GitHubIn Kotlin, we can create like this. There is no way to put our custom collection names.\nWhat ever you suggest that is not possible in Kotlin language.class CarDetails : RealmObject {\nvar make: String = “”\nvar model: String = “”\nvar miles: Int = 0\n}",
"username": "Raviteja_Aketi"
},
{
"code": "",
"text": "Yes, I understand. Your model doesnt need to even know about what the collection name is though. It can just know that its table name is “CarDetails” and Device Sync knows the mapping it has to the “car_details” field through the “title” field in the JSON Schema.Do you mind reading through https://www.mongodb.com/docs/atlas/app-services/sync/data-model/data-model-map/ and letting me know clearly what it is you are trying to accomplish and what you cannot do?",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "My model class name is CarDetails but i want to see it as a car_details as a collection name under mognogdb console.I had gone through this but there is no way to define or modify schema “title” using kotlin code\nhttps://www.mongodb.com/docs/atlas/app-services/sync/data-model/data-model-map/It would be great if i get any kotlin examples for this mapping",
"username": "Raviteja_Aketi"
},
{
"code": "",
"text": "Yes, this is not possible from the Kotlin side I do not think, but my point is that it should not matter if you setup the backend like I said above. You do not need to modify the “title” in the Kotlin code, you want to define it in the JSON Schema in the backend.",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "As per my understanding,But I want to maintain local database and sync data based on online/offline conditions.\nI should maintain the same schema which is there at server level.\nIt would be great if i get any example kotlin code to understad this",
"username": "Raviteja_Aketi"
},
{
"code": "",
"text": "Hi @Mohit_SharmaI guess we have a provision to give custom schema titles using java annotations.\nCan we create models in java language use it in kotlin realm? Is this feasible ?",
"username": "Raviteja_Aketi"
},
{
"code": "",
"text": "@Raviteja_Aketi: No, that wouldn’t make sense as you are using Realm Kotlin SDK which doesn’t have implementation for this annotation but should be out very soon.On a separate note, why do you want the collection name precisely like car_detail? Camel is normally used in MongoDB naming for collections as shown in the sample dataset if this is the primary concern.\nScreenshot 2023-04-18 at 09.41.28590×1098 42 KB\n",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "I just gave an example. I need my custom collection name instead of model class name.\nI just came across below example from Java SDK but don’t know whether same kotlin annotation works with kotlin SDK or not. Do you have any idea?\nimage2230×1424 334 KB\n",
"username": "Raviteja_Aketi"
},
{
"code": "",
"text": "As mentioned earlier, No that wouldn’t work as Kotlin doesn’t have implementation “@RealmClass”.",
"username": "Mohit_Sharma"
}
] | How do i have my own custom database collection names in Kotlin - Realm? | 2023-04-14T10:11:27.203Z | How do i have my own custom database collection names in Kotlin - Realm? | 1,235 |
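For readers looking for the concrete mapping Tyler describes: it lives in the App Services JSON schema for the collection, not in the Kotlin code. A hedged sketch of what that schema could look like for a DB_NAME.car_details collection whose Realm table should be called CarDetails; field names follow the model class in the thread, and an _id property is added because Device Sync requires one. Your actual schema will differ.

{
  "title": "CarDetails",
  "bsonType": "object",
  "required": ["_id"],
  "properties": {
    "_id": { "bsonType": "objectId" },
    "make": { "bsonType": "string" },
    "model": { "bsonType": "string" },
    "miles": { "bsonType": "int" }
  }
}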
null | [
"text-search"
] | [
{
"code": "",
"text": "Hi MongoDB community,\nI am using the db.collection.find{‘$text’:{‘$search’:‘\"\"works at company1\" \"Adam\" \"’}} on a collection to extract documents which either have the phrase “works at company1” or have they keyword “Adam” or both. After a lot of debugging, i found out that mongosb breaks the phrase “works at company1” into words based on spaces and searches for “works” “at” “company1” and also gets me documents where i have “works at company2” in the value. The concerned documet doesnt have the keyword “Adam” in it so i am sure that the phrase “works at company1” is being split. Is this an error on part of mongodb creator team or is this done intentionally?",
"username": "Abdul_Mateen1"
},
{
"code": "$textworks at company1Adam",
"text": "Hey @Abdul_Mateen1,Welcome to the MongoDB Community Forums! Based on what you described and from the provided query, it seems you’re using $text operator for self-managed deployments and not the text operator for MongoDB Atlas search. There’s a way for you to include phrases in your find query as included in the documentation: Phrases using textSince you have posted in the MongoDB Atlas category of the foums, I’m also recommending you the search operator that you can use, so do check and see if you can use the phrase operator of MongoDB Atlas. This way, you should be able to search for the works at company1 phrase. You can also add multiple phrases together so you can make your search include Adam too. But keep in mind, you’ll have to use aggregation in order to use this operator.I have tried to list down all possible options for you to explore. I hope this helps. If you have any further questions or require additional assistance, please provide sample documents along with your search definition(if any) and your expected output, along with letting us know whether you’re using MongoDB Atlas or running a self-hosted, on-prem deployment.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "works at company1Adam",
"text": "ou can use, so do check and see if you can use the phrase operator of MongoDB Atlas. This way, you should be able to search for the works at company1 phrase. You can also add multiple phrases together so you can make your search include Adam too. But keep in mind, you’ll have to use aggregation in order to use this operator.Hi Satyam,I accidentally posted my question in the wrong forum but when i went to change the category, it was too late and i wasn’t allowed to edit the categories anymore.This is a self-managed deployment. If you look at the reference page of $textPhrases documentation provided by mongoDB, it doesnt talk about the OR condition on Phrases i am discussing above. it doesn’t talk about what to do if to want to find two or more phrases with an OR condition between them. for example how to do an OR with : “Works at company1” , “IBM” . i think the documentation is either incomplete or mongodb hasn’t implemented this functionality yet for the $text search.",
"username": "Abdul_Mateen1"
}
] | $text command for OR conditions on phrases | 2023-04-14T10:35:41.539Z | $text command for OR conditions on phrases | 871 |
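For readers on Atlas, the phrase-OR-keyword requirement in this thread maps onto the $search compound operator that Satyam mentions. A minimal aggregation sketch, assuming an Atlas Search index exists on the collection and a hypothetical field name description holding the text:

db.collection.aggregate([
  {
    $search: {
      compound: {
        should: [
          { phrase: { query: 'works at company1', path: 'description' } }, // exact phrase
          { text: { query: 'Adam', path: 'description' } },                // single keyword
        ],
        minimumShouldMatch: 1, // OR semantics: at least one clause must match
      },
    },
  },
]);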
null | [
"queries"
] | [
{
"code": "",
"text": "I try to delete register but i used this command deleteMany({“fechaultimoevento”:{$lt : new Date (“14/02/2021 00:00:00”)}, and don`t workI saw the register the field, have this format:\nfechaultimoevento:2023-03-23T03:05:26.000+00:00I try with that format of date and don´t work, and i don´t know how change the format beacuse i use the new Date.How can give me any advise?And i need the query to consult (select) i check this register exist really",
"username": "Diana_Fernandes"
},
{
"code": "",
"text": "Hello @Diana_Fernandes ,Welcome to The MongoDB Community Forums! I try to delete registerI’m not certain I understand what you mean by “register” in this context. Could you provide some examples?To understand your use case better, can you please share below details?Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Delete Register in Mongo | 2023-04-14T12:53:43.790Z | Delete Register in Mongo | 621 |
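For completeness on the question above: "14/02/2021 00:00:00" is not a format that new Date() parses reliably, so the filter matches nothing. The usual pattern is to build the date from an ISO-8601 string and to run the same filter as a count before deleting. A mongosh sketch, using the field name from the post and a hypothetical collection name:

// Unambiguous ISO-8601 date (UTC).
const cutoff = new Date('2021-02-14T00:00:00Z');

// 1. The "select" the poster asks for: see how many documents would match.
db.getCollection('mycollection').countDocuments({ fechaultimoevento: { $lt: cutoff } });

// 2. Delete them once the count looks right.
db.getCollection('mycollection').deleteMany({ fechaultimoevento: { $lt: cutoff } });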
null | [
"flutter"
] | [
{
"code": "",
"text": "I have developed a flutter app and I have managed to implement flexible sync but I have no idea when the app has finished the initial sync or when it’s uploading data to Atlas. As a developer I can see the logs but the users have no way of seeing if all the data is now downloaded or if there’s a pending sync.Any help would be appreciated.",
"username": "Mfundo_Sydwell"
},
{
"code": "Manage a Sync Session - Flutter SDK",
"text": "Hello @Mfundo_Sydwell ,Are you looking for this - Monitor sync upload progress?When you use Atlas Device Sync, the Realm Flutter SDK syncs data with Atlas in the background using a sync session. The sync session starts whenever you open a synced realm.The sync session manages the following:You can access the Session of any synced realm through the Realm.syncSession property.For more details, kindly go through below link on Manage a Sync Session - Flutter SDKRegards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is there a way to track the syncing process on the Flutter SDK? | 2023-04-17T20:56:53.574Z | Is there a way to track the syncing process on the Flutter SDK? | 785 |
null | [
"aggregation",
"node-js"
] | [
{
"code": "result = await Customer.aggregate([\n { $match: { storeId: { $in: stores } } },\n {\n $project: {\n ...selectedFields,\n customerName: { $concat: [\"$first_name\", \" \", \"$last_name\"] },\n },\n },\n {\n $facet: {\n groupedDocs: [\n {\n $match: {\n \"default_address.phone\": { $exists: true, $ne: \"\" },\n },\n },\n {\n $match: {\n $expr: {\n $gte: [\n { $strLenCP: { $ifNull: [\"$default_address.phone\", \"\"] } },\n 9,\n ],\n },\n },\n },\n {\n $sort: { _id: 1 },\n },\n {\n $group: {\n _id: {\n $substrCP: [\n \"$default_address.phone\",\n { $subtract: [{ $strLenCP: \"$default_address.phone\" }, 9] },\n 9,\n ],\n },\n orders_count: { $sum: \"$orders_count\" },\n total_spent: { $sum: { $toDouble: \"$total_spent\" } },\n last_order_name: { $last: \"$last_order_name\" },\n last_order_id: { $last: \"$last_order_id\" },\n ...groupProjection,\n },\n },\n {\n $set: {\n _id: \"$docId\",\n },\n },\n {\n $unset: \"docId\",\n },\n ],\n filteredDocs: [\n // {\n // $match: {\n // \"default_address.phone\": { $exists: false, $eq: \"\" },\n // },\n // },\n {\n $match: {\n $expr: {\n $lt: [\n { $strLenCP: { $ifNull: [\"$default_address.phone\", \"\"] } },\n 9,\n ],\n },\n },\n },\n ],\n },\n },\n {\n $project: {\n docs: { $concatArrays: [\"$filteredDocs\", \"$groupedDocs\"] },\n },\n },\n { $unwind: \"$docs\" },\n { $replaceRoot: { newRoot: \"$docs\" } },\n\n { $match: filter },\n {\n $unset: \"customerName\",\n },\n {\n $sort: { _id: -1 },\n },\n {\n $facet: {\n data: [{ $skip: startIndex }, { $limit: limit }],\n count: [{ $count: \"total\" }],\n },\n },\n ])``",
"text": "i want to merge some fields who has same number and number i want to match last 9 digits and and i want validations = separate docs which has invalid address of ‘default_address.phone’ or has null or undefined and after group i want also this docs and grouped dos code provided",
"username": "Rajpoot_Safee"
},
{
"code": "validations = separate docs",
"text": "Hi @Rajpoot_Safee,Welcome to the MongoDB Community forum I want to merge some fields that have the same number and number I want to match the last 9 digits and I want validations = separate docs which have the invalid address of ‘default_address.phone’ or have null or undefined and after group, I want also this doc and grouped dos code providedWhich fields do you want to merge fields after matching the last 9 digits of a phone number? Can you please elaborate on it further to help us understand the conditions?Also, it would be helpful if you could help us with further information:Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "orders_count, total_spent($sum)\nlast_order_name, last_order_id (replace by last)\nall others fields (keep first)\nalso want get docs without changing data and grouping that have null or “” or less than 9 digits default_address.phone\ngrouping docs must sorted by older create first before groupingoutput = {data: , count: Number}\ncollection = Customer\n“mongodb”: “^4.9.0”,\n“mongoose”: “^5.12.15”,",
"username": "Rajpoot_Safee"
},
{
"code": "MongoDB sample documentdesired output",
"text": "Hello @Rajpoot_Safee,Apologies for the late response.orders_count, total_spent($sum)\nlast_order_name, last_order_id (replace by last)Based on the shared information, it’s difficult to comprehend the document structure. Please share the MongoDB sample document from your collection if the issue still persists with the desired output.all other fields (keep first)\ngrouping docs must be sorted by older create first before groupingCan you please elaborate on what you mean by the above line?Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Merge docs fields who has same phone number | 2023-03-04T11:46:13.990Z | Merge docs fields who has same phone number | 801 |
null | [] | [
{
"code": " mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: failed (Result: core-dump) since Sun 2023-03-19 21:05:35 UTC; 7s ago\n Docs: https://docs.mongodb.org/manual\n Process: 1870 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=dumped, signal=ILL)\n Main PID: 1870 (code=dumped, signal=ILL)\n CPU: 5ms\n \nmar 19 21:05:35 database systemd[1]: Started MongoDB Database Server.\nmar 19 21:05:35 database systemd[1]: mongod.service: Main process exited, code=dumped, > status=4/ILL\nmar 19 21:05:35 database systemd[1]: mongod.service: Failed with result 'core-dump'.\nLinux database 5.15.0-67-generic #74-Ubuntu SMP Wed Feb 22 14:14:39 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux.",
"text": "Hello MongoDB Community,I have been experiencing an issue when trying to run MongoDB 6.0.5 on my Ubuntu system with an AMD Ryzen 5 5500U processor (x86_64 architecture). When attempting to start the MongoDB service, I receive an “Illegal Instruction” error (signal=ILL), and the service fails to start. I have tried multiple troubleshooting steps, such as reinstalling MongoDB, checking the configuration file, and ensuring my system is up to date, but the issue persists.Here is the error log from my system:My system information is as follows:\nLinux database 5.15.0-67-generic #74-Ubuntu SMP Wed Feb 22 14:14:39 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux.I understand that MongoDB requires an AMD Bulldozer or later processor for x86_64 systems, and the Ryzen 5 5500U is based on the Zen 2 architecture, which should be compatible. However, I am still encountering issues.I would appreciate any assistance or insights from the community on how to resolve this problem or any suggestions for further troubleshooting steps.Thank you in advance for your help!P.s.: Mongo doesn’t even generate a log file.",
"username": "Psycrow_N_A"
},
{
"code": "",
"text": "Hello @Psycrow_N_A,Welcome to the MongoDB Community forums I have been experiencing an issue when trying to run MongoDB 6.0.5 on my Ubuntu system with an AMD Ryzen 5 5500U processor (x86_64 architecture)Sorry, you experienced these crashes. To further understand the problem, could you share the following information:Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "wget -qO - https://www.mongodb.org/static/pgp/server-6.0.asc | sudo apt-key add -",
"text": "Hi,I’ll try to answer as much as i can to help me and others with same issues.It’s basically those steps. Later i’ve tried with 6.04 and a 5.0 version as well, none of them worked. Them i’ve tried the Turnkey LXC Container for Mongodb (it uses the 4.4.4 version) and it worked really well. But i’m sad because i was willing to use latest version since i’m developing a new app and making a virtual machine for the database only for that purpose.Glad those infos can help!",
"username": "Psycrow_N_A"
},
{
"code": "",
"text": "Hello @Psycrow_N_A,Upon looking I found that Proxmox VM, uses KVM64 as the default CPU, and KVM64 doesn’t support full AVX instruction. As a result, it is not compatible with binaries compiled for newer microarchitectures, such as MongoDB 5.0+.This issue has also been reported by other members of the Proxmox community on forums and GitHub.Since you are intending to use the latest MongoDB 6.0, your current options include:Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "A post was split to a new topic: Facing Issue while installing MongoDB on RedHat",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | MongoDB 6.0.5 Compatibility Issue with Ryzen 5 5500U on Ubuntu | 2023-03-19T21:26:48.872Z | MongoDB 6.0.5 Compatibility Issue with Ryzen 5 5500U on Ubuntu | 1,336 |
null | [
"connecting"
] | [
{
"code": "",
"text": "Hello, I set0.0.0.0/0 (includes your current IP address)IP white list as 0.0.0.0/0But I can not access the DB while using my VPNPlease check this kindlyThank you",
"username": "hoyuen_kim"
},
{
"code": "",
"text": "Hi @hoyuen_kim,Welcome to the MongoDB Community forums But I can not access the DB while using my VPNCan you please provide any error messages you are receiving when attempting to connect to the database? Also, are you able to access it without the VPN?Additionally, can you specify from where you are trying to connect, such as your application, mongo shell, MongoDB Compass etc.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you’re trying to access the database from an IP that isn’t whitelisted. Make sure your current IP address is on your Atlas cluster’s IP whitelist: https://www.mongodb.com/docs/atlas/security-whitelist/And when if I turn off the VPN it works, But using VPN is mandatory for me on company laptop environment, also my localhost is located in company laptop…this is the error log that I got, Please check this kindly",
"username": "hoyuen_kim"
},
{
"code": "0.0.0.0/0",
"text": "Hello @hoyuen_kim,MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you’re trying to access the database from an IP that isn’t whitelisted. Make sure your current IP address is on your Atlas cluster’s IP whitelist: https://www.mongodb.com/docs/atlas/security-whitelist/And when if I turn off the VPN it works, But using VPN is mandatory for me on the company laptop environment, also my localhost is located in the company laptopAs you are already using the 0.0.0.0/0 which allows the connection from anywhere.I suspect the VPN is blocking the connection to your MongoDB Cluster. I’ll suggest contacting your IT department to see if there is a solution that allows you to connect to MongoDB Atlas using the VPN.Let us know if you have any further queries.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "net.bindIp0.0.0.0",
"text": "@hoyuen_kimYou need to specify the URI for the mongodb server and verify configs.MongoDB Connection scripts for Mongoose I posted in this post here.Otherwise, your MongoDB configs are off./etc/mongod.conf and make sure you update the net.bindIp setting to 0.0.0.0 . and ensure you have firewall with port 27017 open, and MongoDB user with the necessary permissions to access the database over VPN, you need access perms. You also need to make sure you add a route to the VPN configuration to route traffic to the MongoDB server’s IP address.",
"username": "Brock"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Cannot access cloud DB with VPN | 2023-04-12T01:30:47.362Z | Cannot access cloud DB with VPN | 1,876 |
null | [] | [
{
"code": "",
"text": "Hello there,I have a new question regarding this project.\nSince I work a lot with GCP for my actual job, I would like to host my app using GCP products as much as possible.\nI would like to know if’s possible to interact with a MongoDB M0 cluster from GCP Cloud Function ?\nIf it is possible, do you have any tutorial available ?\nThank you in advance.",
"username": "Gregory_Desprez"
},
{
"code": "M0M2/M5",
"text": "Hey @Gregory_Desprez,Since your new question does not pertain to data modeling but MongoDB Atlas, I have made this into a new topic.You can deploy M0 free clusters and M2/M5 shared clusters on Google Cloud in several regions. There is a similar forum post regarding connecting to MongoDB from GCP that you can read:Hope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
}
] | Connecting MongoDB with GCP | 2023-04-17T14:13:27.443Z | Connecting MongoDB with GCP | 640 |
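It is indeed possible to reach an M0 cluster from a Cloud Function, as long as the function's egress IP (or 0.0.0.0/0) is on the Atlas network access list. A minimal Node.js sketch of an HTTP function, assuming the @google-cloud/functions-framework and mongodb packages and a MONGODB_URI environment variable; the database, collection, and function names are illustrative.

const functions = require('@google-cloud/functions-framework');
const { MongoClient } = require('mongodb');

// Create the client at module scope so warm invocations reuse the connection pool.
const client = new MongoClient(process.env.MONGODB_URI);

functions.http('listMovies', async (req, res) => {
  try {
    const docs = await client
      .db('sample_mflix')      // hypothetical database
      .collection('movies')    // hypothetical collection
      .find({})
      .limit(5)
      .toArray();
    res.status(200).json(docs);
  } catch (err) {
    console.error(err);
    res.status(500).send('Database error');
  }
});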
null | [
"charts"
] | [
{
"code": "",
"text": "Hi, we are able to embed the charts created in MongoDB Atlas in our application. While creating the charts in Atlas we can use a number of filters, which is great. But what we are not able to get out of this feature is that after we embed the MongoDB chart in our application, we would like our customers / clients to continue to use the filters as well. So we wanted to know if there’s any way we can include the MongoDB’s own filters to also get embedded within the iFrame.\nI know there’s a way we can pass filters through the iFrame URL, but that will require us to develop the filters for each report. We would like to create charts in atlas and just embed the iframe in our apps and let users control what kind of data they want to see and filter them as well. We want to remove any dependency on coding and development and give full control to our support team as well as the end users and clients.",
"username": "Shreyas_Gombi"
},
{
"code": "",
"text": "Hi @Shreyas_Gombi -While the embedded dashboards do not include the filtering UI, you can get this UI if you use the option to share a dashboard through a public link. This URL can be rendered in an iframe if desired.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Atlast Charts iFrame Embed with Filters | 2023-04-17T23:54:47.868Z | MongoDB Atlast Charts iFrame Embed with Filters | 913 |
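If the public-link iframe suggested above is not an option, the Charts Embedding SDK lets the host application pass filters to an embedded chart programmatically, so your own UI controls can drive the chart. A small sketch, assuming the @mongodb-js/charts-embed-dom package; the baseUrl, chartId, element id, and filter fields are hypothetical.

import ChartsEmbedSDK from '@mongodb-js/charts-embed-dom';

const sdk = new ChartsEmbedSDK({
  baseUrl: 'https://charts.mongodb.com/charts-project-xxxxx', // hypothetical
});

const chart = sdk.createChart({
  chartId: '00000000-0000-0000-0000-000000000000',            // hypothetical
  filter: { status: 'active' },                               // MQL filter chosen by your own UI
});

document.addEventListener('DOMContentLoaded', async () => {
  await chart.render(document.getElementById('chart'));
  // Later, when the user changes a control in your own UI:
  await chart.setFilter({ status: 'archived' });
});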
null | [] | [
{
"code": "{\n \"collection\":\"Events\",\n \"database\":\"PDC\",\n \"dataSource\":\"PDCCluster\",\n \"filter\":\n [{\"pxCommitDateTime\":{<:ISODate(\"2020-04-17T08:50:15.000+00:00\")}}]\n}\n",
"text": "I am using Data API (Postman tool) to query records from my collection. But when I am trying to add datetime filter for the same, I am not able to figure out the correct syntax, I tried multiple syntaxes but nothing seems to work.my Input Query:-I have to do something likeSelect * from Events where pxCommitDateTime < “” And pxCommitDateTime > “”Please help.",
"username": "Priyabrata_Nath"
},
{
"code": "",
"text": "Hello @Priyabrata_Nath ,Welcome to The MongoDB Community Forums! Please check below thread as a similar issue has been resolved there.Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | How to query data from collection using datetime as filter, I am using DataAPI for the same | 2023-04-17T11:54:48.583Z | How to query data from collection using datetime as filter, I am using DataAPI for the same | 1,247 |
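Since the linked thread may be hard to find, here is the gist for the question above: the Data API accepts Extended JSON, so a date range goes into the filter as $date values rather than ISODate(...). A hedged Node.js sketch (Node 18+ for global fetch), using the data source, database, and collection names from the post; the endpoint URL, API key, and date range are placeholders, and the exact EJSON flavour accepted depends on the Content-Type you send.

const url = 'https://data.mongodb-api.com/app/<app-id>/endpoint/data/v1/action/find'; // hypothetical

const body = {
  dataSource: 'PDCCluster',
  database: 'PDC',
  collection: 'Events',
  filter: {
    pxCommitDateTime: {
      $gte: { $date: '2020-04-01T00:00:00.000Z' }, // placeholder lower bound
      $lt: { $date: '2020-04-17T08:50:15.000Z' },  // placeholder upper bound
    },
  },
};

async function main() {
  const response = await fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'api-key': '<api-key>', // hypothetical
    },
    body: JSON.stringify(body),
  });
  console.log(await response.json());
}

main();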
null | [] | [
{
"code": "",
"text": "Hi, is there any daily request or read limit for MongoDB Atlas database? Like, how many reads per day I can make?",
"username": "Marco_Ivan"
},
{
"code": "M0M2/M5M0M2M5",
"text": "Hello @Marco_Ivan ,Welcome to The MongoDB Community Forums! As per Atlas M0 (Free Cluster), M2, and M5 LimitationsM0 free clusters and M2/M5 shared clusters limit the number of read and write operations per second. The rate limits vary by cluster tier as follows:Atlas handles clusters that exceed the operations per second rate limit as follows:Please go through below Limitations documentation for additional detailsRegards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Atlas Free | 2023-04-17T10:14:36.605Z | MongoDB Atlas Free | 694 |
null | [
"node-js"
] | [
{
"code": "",
"text": "when i run my nodejs project i got this error on monitor processCannot parse config file: ‘/home/YDAPI/config/development.json’: SyntaxError: Expected double-quoted property name in JSON at position 191so what should i correct for it ?",
"username": "Mohamed_Shawky1"
},
{
"code": "",
"text": "Hi dear I’m feel happy to help you don’t worry man\nThe error message you received indicates that there is a syntax error in your JSON configuration file, specifically in the file ‘/home/YDAPI/config/development.json’. The error message suggests that there is an issue with a property name that is not double-quoted.To resolve this error, you should open the ‘development.json’ file and look for the property name at position 191. Make sure that the property name is enclosed in double-quotes (\" \") as JSON requires all property names to be double-quoted.For example, if you have a property named “name” in your JSON file, it should be written as:{\n“name”: “value”,\n“property2”: “value2”,\n“property3”: “value3”\n}If the issue is not with the property name at position 191, then you should check the rest of the file for other syntax errors. Once you have corrected the syntax errors, save the file and try running your Node.js project again.",
"username": "Marinipaving_Andmasonry"
},
{
"code": "",
"text": "thanks for your fast reply,\nbut i have strange issue my file not has 191 line it’s all about 121 line only so I don’t understand",
"username": "Mohamed_Shawky1"
},
{
"code": "",
"text": "Hi bro, now hope you can fix this problem\nThe error message suggests that there is a problem with the syntax of your JSON configuration file located at /home/YDAPI/config/development.json. Specifically, the error message states that it expected a double-quoted property name at position 191.To fix the issue, you should open the configuration file and carefully review the contents of the file around position 191, looking for any property names that may be missing double-quotes.Make sure that all property names in the JSON file are surrounded by double-quotes, like this:\n{\n“propertyName”: “propertyValue”,\n“anotherPropertyName”: “anotherPropertyValue”\n}\nIf you find any property names that are not surrounded by double-quotes, add them in and save the file. Then try running your Node.js project again. If the problem persists, carefully review the file for any other syntax errors or formatting issues.",
"username": "Marinipaving_Andmasonry"
},
{
"code": "",
"text": "@Marinipaving_Andmasonry, how is your second post any different from the first one?Let seeThe error message you received indicates that there is a syntax error in your JSON configuration file, specifically in the file ‘/home/YDAPI/config/development.json’.vsThe error message suggests that there is a problem with the syntax of your JSON configuration file located at /home/YDAPI/config/development.json.AlsoThe error message suggests that there is an issue with a property name that is not double-quoted.compared toSpecifically, the error message states that it expected a double-quoted property name at position 191.And more withTo resolve this error, you should open the ‘development.json’ file and look for the property name at position 191. Make sure that the property name is enclosed in double-quotes (\" \") as JSON requires all property names to be double-quoted.in comparison toTo fix the issue, you should open the configuration file and carefully review the contents of the file around position 191, looking for any property names that may be missing double-quotes.Please do not rephrase the same answer again and again. This make us losing time as we have open and read the thread that does not bring any new information. It is starting to look as if you fed ChatGPT the same question and you got a slightly different answer but the same explanation but different words.my file not has 191 line it’s all about 121 line onlyYou assumed that position 191 meant line 191. It could be 191 characters or token. Open the file with firefox it might give you a better error message. You could also share the file and we will see. But read Formatting code and log snippets in posts first as it needs to be marked down correctly for us to really see the raw content.",
"username": "steevej"
},
{
"code": "",
"text": "i upload file that has an issue but I removed some sensitive keys for API1 file sent via WeTransfer, the simplest way to send your files around the worldplease check and tell me where the double quoted issue ?",
"username": "Mohamed_Shawky1"
},
{
"code": "",
"text": "The error is wrong, running it through JSON parser in VSCode you’re missing the keys for:Should look like this throughout, anything that doesn’t have an actual required value, you need to go through and setup.After that, I’d rerun it and see what the next error is.",
"username": "Brock"
},
{
"code": "",
"text": "yes i already removed key for security issue that’s all but I have",
"username": "Mohamed_Shawky1"
}
] | I got error on parse conf file nodejs | 2023-04-15T11:29:25.776Z | I got error on parse conf file nodejs | 1,547 |
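One source of confusion at the end of this thread is that "position 191" in the parser error is a character offset into the file, not a line number, which is why a 121-line file can still report position 191. A small Node.js sketch that converts the offset into a line and column, assuming the config path from the error message:

const fs = require('fs');

const path = '/home/YDAPI/config/development.json'; // path from the error message
const raw = fs.readFileSync(path, 'utf8');

try {
  JSON.parse(raw);
  console.log('JSON is valid');
} catch (err) {
  // Messages look like: "Expected double-quoted property name in JSON at position 191"
  const match = /position (\d+)/.exec(err.message);
  if (match) {
    const pos = Number(match[1]);
    const before = raw.slice(0, pos);
    const line = before.split('\n').length;        // 1-based line number
    const column = pos - before.lastIndexOf('\n'); // 1-based column
    console.log(`${err.message} (line ${line}, column ${column})`);
  } else {
    console.log(err.message);
  }
}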
null | [
"crud",
"swift"
] | [
{
"code": "func fInsertSome(_ sCollection: String, documents: [some Codable]) throws {\n\tlet zCollection = zDb.collection(sCollection.lowercased()) // returns actual collection from collection name\n\tlet azDocuments = try documents.map { try BSONEncoder().encode($0) }\n\tdo {\n\t\ttry zCollection.insertMany(azDocuments)\n\t}\n\tcatch {\n\t\tif let errorCode = (error as? MongoSwift.MongoError.BulkWriteError)?.writeFailures?.first?.code, errorCode == 11000 {\n\t\t\t// one of the documents has a duplicate key. If so, switch to writing them one by one, and handle each one individually:\n\t\t\tfor document in documents {\n\t\t\t\tdo {\n\t\t\t\t\tlet zDocument = try BSONEncoder().encode(document)\n\t\t\t\t\ttry zCollection.insertOne(zDocument)\n\t\t\t\t}\n\t\t\t\tcatch {\n\t\t\t\t\tif let errorCode = (error as? MongoSwift.MongoError.WriteError)?.writeFailure?.code, errorCode == 11000 {\n\t\t\t\t\t}\n\t\t\t\t\telse {\n\t\t\t\t\t\tprint(\"This error occurred when trying to insertOne: \", error)\n\t\t\t\t\t\tthrow error\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\telse {\n\t\t\tprint(\"This error occurred when trying to insertMany: \", error)\n\t\t\tthrow error\n\t\t}\n\t}\n}\nif let errorCode ... errorCode == 11000",
"text": "Hi all. I’m in Swift 5.8, in Xcode 14.3, on macOS 13.3.1, MacBook Pro M1 Max. Locally hosted MongoDb Community Server 6.\nI have this function:I’m getting a warning on the two if let errorCode ... errorCode == 11000 lines:\n“Cast from ‘any Error’ to unrelated type ‘MongoError.BulkWriteError’ always fails”I’ve read the documentation (here) that says these two functions throws four types of errors each (including the two I’m using: MongoError.BulkWriteError, MongoError.WriteError), but it seems I’m not casting it correctly, or something…?I’ve tried a few things, including some suggestions from ChatGPT, but neither it nor I can figure it out, so far.Can anyone here help please?Thanks in advance!",
"username": "David_Thorp"
},
{
"code": "do {\n let result = try collection.bulkWrite(writes, options: options)\n print(\"Bulk write result: \\(result)\")\n} catch let error {\n print(\"Error during bulk write: \\(error)\")\n}\n let options = BulkWriteOptions()do {\n let result = try collection.bulkWrite(writes, options: options, session: session)\n print(\"Bulk write result: \\(BulkWriteResult)\")\n} catch let error {\n print(\"Error during bulk write: \\(error)\")\n}\ndo {\n let result = try zCollection.bulkWrite(requests)\n // BulkWrit result goes here\n} catch let error as MongoError.BulkWriteError {\n // This handles the error\n print(\"Bulk write error: \\(error)\")\n} catch {\n // This is for other errors.\n print(\"Error: \\(error)\")\n}\n",
"text": "To be honest, I didn’t even think ChatGPT would even know MongoSH, let alone the drivers.Those are very specific technologies that really aren’t as broad/common.But I think the problem is your errors, what is supposed to be first? and code? You have it in both your BulkWriteError, and your WriteError, what are you trying to achieve with this?What are you trying to define exactly?@David_ThorpThe way that should be executed should be aBecause you want to make sure you’re defining your options properly, and in what session.Because you’re supposed to define what exactly it’s supposed to do based on what has happened, so then you’d go in line for the insertOne or insertMany issue.You also should be invoking write options for bulk writes. You can do this by putting in: let options = BulkWriteOptions() before you put in the bulkwriteerror.@David_Thorp Does this make sense?Oh, and if you want to include sessions you can do:@David_Thorp I’m going to play with this for a few, I need a break from my project anyway, this is bugging me now…@David_Thorp Yeah, you have to throw it in a Do Catch Block, otherwise it shouldn’t work, Xcode has validated syntax and it deploys.This is how Do…Catch… works.https://docs.swift.org/swift-book/documentation/the-swift-programming-language/errorhandling/",
"username": "Brock"
}
] | MongoSwiftSync driver. MongoCollection.insertMany(). Cast from 'any Error' to unrelated type 'MongoError.BulkWriteError' always fails | 2023-04-14T00:19:21.282Z | MongoSwiftSync driver. MongoCollection.insertMany(). Cast from ‘any Error’ to unrelated type ‘MongoError.BulkWriteError’ always fails | 924 |
null | [
"queries",
"node-js",
"mongoose-odm"
] | [
{
"code": "AsubDocument.propertyX < subDocument.propertyY'A': {\n $elemMatch: {\n propertyX: {\n $lt: '$propertyY',\n },\n },\n},\n",
"text": "How can I match a document whose property A - an array of embedded documents - includes one such as subDocument.propertyX < subDocument.propertyY? Tried:(mongoose)",
"username": "Joao_Teixeira1"
},
{
"code": "",
"text": "Hello @Joao_Teixeira1 ,Welcome to The MongoDB Community Forums! Can you please share additional details for me to understand your use case better?Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | How to compare two fields in subdocuments within an array | 2023-04-14T14:43:59.048Z | How to compare two fields in subdocuments within an array | 619 |
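For readers with the same question: the $elemMatch attempt above cannot work because query operators like $lt compare a field against a constant, not against another field. One way to express the element-to-element comparison is $expr over the array, sketched below in mongosh syntax; the collection name is hypothetical and the field names follow the question.

db.getCollection('mycollection').find({
  $expr: {
    // True if at least one element of A has propertyX < propertyY.
    $anyElementTrue: {
      $map: {
        input: { $ifNull: ['$A', []] }, // tolerate documents without the array
        as: 'sub',
        in: { $lt: ['$$sub.propertyX', '$$sub.propertyY'] },
      },
    },
  },
});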
null | [
"node-js"
] | [
{
"code": "",
"text": "Hi guys, I have one question please… Is it possible to detect read error in MongoDB using Node.JS library?\nI think write errors could be detected with “acknowledged” field, but is there a chance to detect read errors?\nIf yes, how please?\nThis is very important for me, my app is working on many concurrent connections, around 120, and I’ve found not all writes are saved correctly, I will improve this by tracking acknowledged field, and retrying… but I’m very curious about read errors.Thank you for your help,\nDavid",
"username": "David_David2"
},
{
"code": "retryReads",
"text": "Hello @David_David2 ,I’m very curious about read errors.MongoDB supports retryable reads, which means that if a read operation fails, MongoDB will automatically retry the operation a configurable number of times. You can enable retryable reads by setting the retryReads option to true when you create your MongoDB client. Please refer to Retryable Reads docs to learn more.is there a chance to detect read errors?\nIf yes, how please?Furthermore, you can use the callback function or Promises in Nodejs to detect read errors using the Node.js MongoDB driver.This will help you in better understanding of the overall error handling while read operations.\nPlease feel free to ask any additional questions, would be happy to help! Regards,\nTarun",
"username": "Tarun_Gaur"
},
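A minimal sketch of detecting a read error with the Node.js driver in async/await style (the URI, database, collection, and query below are placeholders, not taken from the thread). Retryable reads are enabled by default in recent drivers, and any read that still fails ends up in the catch block.

    const { MongoClient } = require('mongodb');

    async function readWithErrorHandling(uri) {
      const client = new MongoClient(uri, { retryReads: true }); // on by default, shown for clarity
      try {
        await client.connect();
        // A failed read (network error, timeout, server error) throws and lands in catch.
        return await client.db('test').collection('items').findOne({ name: 'example' });
      } catch (err) {
        console.error('Read failed:', err);
        throw err;
      } finally {
        await client.close();
      }
    }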
{
"code": "",
"text": "Thank you very much for your help!",
"username": "David_David2"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Read and Write Errors Detection | 2023-04-13T17:28:49.112Z | Read and Write Errors Detection | 475 |
null | [
"atlas-cluster"
] | [
{
"code": " const { MongoClient, ServerApiVersion } = require('mongodb');\n const uri = \"mongodb+srv://QuackDev:\" + password + \"@rocketer.iolndfg.mongodb.net/?retryWrites=true&w=majority\";\n const client = new MongoClient(uri, {\n serverApi: {\n version: ServerApiVersion.v1,\n strict: true,\n deprecationErrors: true,\n }\n });\n",
"text": "Im new to mongodb, can someone explain to me how can i authenticate using SCRAM?I tried writing to the database, but it returned this error: UnhandledPromiseRejectionWarning: MongoServerError: (Unauthorized) not authorized on admin to execute commandThis is my code for the client:Am i doing something wrongly?",
"username": "Jayden_Yeo"
},
{
"code": "",
"text": "Am i doing something wrongly?It does not work, so yes you are doing something wrong. But the issue is simply because you are trying to read or write in the admin database rather than your database.Look at https://www.mongodb.com/docs/manual/reference/connection-string/ to see how to specify a different database.",
"username": "steevej"
},
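For illustration, a hedged sketch of the two usual ways to target a database other than admin (the database and collection names are placeholders; the URI is the one from the question above):

    // Option 1: put the database name in the URI path, before the query string
    const uri = "mongodb+srv://QuackDev:" + password +
      "@rocketer.iolndfg.mongodb.net/myAppDb?retryWrites=true&w=majority";

    // Option 2: select the database explicitly from the client (inside an async function)
    const db = client.db("myAppDb");
    await db.collection("users").insertOne({ createdAt: new Date() });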
{
"code": "",
"text": "That resolved my issue \nThanks for your help!",
"username": "Jayden_Yeo"
}
] | Cant read or write to atlas database | 2023-04-17T12:19:43.104Z | Cant read or write to atlas database | 835 |
[
"data-modeling"
] | [
{
"code": "[\n {\n \"timeStamp\":1451649600511,\n \"card\":\"Teferi's Protection\",\n \"sets\": [\n {\n \"set\": \"Double Masters 2022\",\n \"prices\": [\n {\"idPrice\": 1, \"price\": 12},\n {\"idPrice\": 2, \"price\": 13},\n {\"idPrice\": 3, \"price\": 14}\n ],\n \"meanPrice\":13\n },\n {\n \"set\": \"Commander 2017\",\n \"prices\": [\n {\"idPrice\": 1, \"price\": 11},\n {\"idPrice\": 2, \"price\": 12},\n {\"idPrice\": 3, \"price\": 13}\n ],\n \"meanPrice\": 12\n },\n {\n \"timeStamp\":1451649600512,\n \"card\":\"Teferi's Protection\",\n \"sets\": [\n {\n \"set\": \"Double Masters 2022\",\n \"prices\": [\n {\"idPrice\": 1, \"price\": 9},\n {\"idPrice\": 2, \"price\": 10},\n {\"idPrice\": 3, \"price\": 11}\n ],\n \"meanPrice\":10\n },\n {\n \"set\": \"Commander 2017\",\n \"prices\": [\n {\"idPrice\": 1, \"price\": 10},\n {\"idPrice\": 2, \"price\": 11},\n {\"idPrice\": 3, \"price\": 12}\n ],\n \"meanPrice\": 11\n },\n ...\n]\n",
"text": " → First (long) post in the Mongo DB community. I’m a french 31 yo Data Analyst and engineer working mainly with SQL (Bigquery/Teradata at the moment), I’ve been working for 6 years now and I didn’t specifically study “Data” before that.\nI’m more and more enthusiastic about data related technologies but, as you know, there is a lot to learn. To force myself into that (and shine at work) I’m trying to incrementally build an app/website that I would enjoy developping and that would be used for a portfolio.Another (last?) thing about me, one of my hobbies is the trading card game Magic The Gathering (MTG). One important aspect of this game is the secondary market, where A LOT of cards are sold/bought by professionals or people just like me, everyday, hours, minutes… (you got the idea…)Scrap cardmarket’s website regularly (How often ?) to extract prices/quantities and store them in a database, then transform the data and compute it in a meaningful way. (mean price at different intervals, quantity sold since previous day/week/month, compare to the entire set trend …)MVP : The user just select the cards he wants to follow and then the app will display tables and plots to help make decisions (Should I buy this card or should I sell this card ? When ?) I could also build an alerting system and/or predict prices.Here is a draft schema :Excalidraw is a virtual collaborative whiteboard tool that lets you easily sketch diagrams that have a hand-drawn feel to them.“Data that is access together should be stored together”This example shows the relative complexity of the subject.\nMainly : One card can be printed in X sets and I need to follow the prices for each set.\nAlso, a selled card can have a lot of specificities/filters (condition, location of seller, reputation of seller) :\nfilters202×652 20.2 KB\nAs I said, the data landscape is so big, it’s really difficult to chose a solution design, but here are some …Free / Open source : Can’t afford to put the app in the cloudAdopted by the data community (remember I want to use this project for a portfolio)Scalable (I will start by one card (with multiple sets) and one user (myself ) but I would like to have the option to grow and remain free or at least really cheap)Elegant : Concerning the visualization, I need it to be pretty I think Panel + Plotly is a good option. But I don’t know if it covers all my needs.Quite simple AND with Python : I know that if I make it too hard for myself, I won’t succeed to deliver a first version of the app (at least at the beginning), because I can’t spend too much time. (work, family, sport, friends, … you certainly have the same constraints ) Also, I want to use Python as much as possible.And here some …Also, for now, I did some tests locally on my Windows computer but what if I want to push it online for free or really cheap ? (less than 5 € a month for 10 users max. for example)The Web Scraping part, especially because I know I will have to use proxies if I don’t want my IP blocked when I will need to fetch a lot of cards quite often (And How often is one of my question as I explained above)… But at least I found this link for the basics : MongoDB Data Scraping & Storage Tutorial | MongoDB | MongoDBI will edit this post if you need some clarifications or to update it with the progression.\nThank you if you made it that far, and thank you in advance for your help.",
"username": "Gregory_Desprez"
},
{
"code": "",
"text": "Hey @Gregory_Desprez,Welcome to the MongoDB Community Forums! Give me some insights about the model.You are correct that in general one should design their schema according to how the data will be used instead of how the data will be stored. Thus, it may be beneficial to work from the required queries first, making it as simple as possible, and let the schema design follow the query pattern.\nI would suggest you to experiment with multiple schema design ideas. You can use mgeneratejs to create sample documents quickly in any number, so the design can be tested easily.Help me on the global solution.Given the prerequisites you mentioned, I believe you can make use of MongoDB Atlas to start and make your App. You can use Charts for visualization, and Pymongo driver to work with Python easily. For the pricing part, you can refer to Atlas Pricing.I would advise you to start with your data modeling, start working with a shared cluster first(which is free by the way), and once you have gained the necessary skills and feel the app is working as you expect, then you can start to think of scalability, adoption, etc.Additionally, since you mentioned you’re new to MongoDB, I am sharing some courses that you can do from our University Platform. They should really help you get started quickly.\nIntroduction to MongoDB\nData Modelling in MongoDB\nMongoDB for SQL ProsPlease let us know if you have any additional questions. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
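As a hedged illustration of the mgeneratejs suggestion above (the template fields are invented for the card-price example, and the connection string is a placeholder), a template can be piped straight into mongoimport to fill a test collection:

    npm install -g mgeneratejs
    echo '{ "card": "$string", "meanPrice": { "$number": { "min": 1, "max": 100 } }, "timeStamp": "$date" }' \
      | mgeneratejs -n 1000 \
      | mongoimport --uri "mongodb+srv://<user>:<password>@<cluster>/mtg" --collection prices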
{
"code": "",
"text": "Hello @Satyam ,Thanks for the answers/tips.\nI will read and try all the webpages you linked.\nFor now, I’m struggling with the webscrapping part and the fact there is a “Load more button” with Javascript and an API call towards a different endpoint. (not covered in the tutorial I linked above)Regards,\nGrégory",
"username": "Gregory_Desprez"
},
{
"code": "",
"text": "A post was split to a new topic: Connecting MongoDB with GCP",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Modeling collection(s) to store prices evolution | 2023-02-23T15:26:43.575Z | Modeling collection(s) to store prices evolution | 1,241 |
|
null | [] | [
{
"code": "{\n \"_id\" : ObjectId(\"6419ccd559e822b43c3e7dce\"),\n \"articleid\" : \"155052228\",\n \"headline\" : \" the largest airline in India, unveiled a revolutionary three-point\",\n \n \"fulltext\" : \"#IndiGo, the largest airline in India, unveiled a revolutionary three-point disembarkation procedure that will let… https://t.co/f5Qa4rDWFh\",\n \"pubdate\" : \"2022-08-06 02:31:41\",\n \"article_type\" : \"online\",\n \"clientidArray\" : [\n \"I0027\",\n \"V0035\",\n \"A0218\",\n \"A0260\",\n \"B0177\"\n ],\n \"pubdateRange\" : ISODate(\"2022-08-06T02:31:41.000+0000\")\n}\n",
"text": "I have a complex searchs like AND, OR, NOT. I came across the documentation and find compound operators like should, must, mustnot which is equal to AND, OR, NOT.Now I got to know about queryString can anybody tells me what’s the major difference between these two.Also, I have these fields in the collection:So I need to fetch data by - daterange using “pubdateRange” field then “clientidArray” and then I will use must and should to search a keyword or text in 2 fields i.e. - healine and fulltext.According to you what should be ideal solution queryString or this compound operators.",
"username": "Utsav_Upadhyay2"
},
{
"code": "",
"text": "Hi @Utsav_Upadhyay2Now I got to know about queryString can anybody tells me what’s the major difference between these two.There is no logical difference between the compound operators and the queryString operators in the MongoDB Atlas Search.\nThe difference lies in the way the queries are written in both the operators, where the compound operators map to the AND OR and NOT logical operators, the queryString makes direct use of these operators.The examples in the documentation clearly mentions the usage of the queryString operators in the Altas search query.So I need to fetch data by - daterange using “pubdateRange” field then “clientidArray” and then I will use must and should to search a keyword or text in 2 fields i.e. - healine and fulltext.Could you help me with the query that you have tried using both the operators with the response that you have received?Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "{\n \"_id\":\"ObjectId(\"\"6368ca3fcb0c042cbc5b198a\"\")\",\n \"headline\":\"T20 World Cup: Zomato’s Epic Response To Careem Pakistan’s ‘Cheat Day’ Remark Is Unmissable\",\n \"subtitle\":\"\",\n \"fulltext\":\"The trade began on October 21</p><p>Ever for the reason that starting of the T20 world cup, Twitter has been flooded with memes and commentary. Now, Zomato and Careem Pakistan’s tongue-in-cheek trade on Twitter.\",\n \"clientidArray\":[\n \"D0382\",\n \"G0068\"\n ],\n \"pubdateRange\":\"ISODate(\"\"2022-11-07T08:55:38.000+0000\"\")\"\n}{\n \"_id\":\"ObjectId(\"\"6368ca3fcb0c042cbc5b199f\"\")\",\n \"headline\":\"Apple Wants to Drop ‘Hey' From the ‘Hey Siri' Wake Phrase for Voice Commands: Report\",\n \"subtitle\":\"\",\n \"fulltext\":\"Apple is said to be working on changing the wake phrase for its Siri voice assistant, and will change the phrase from ‘Hey Siri' to simply ‘Siri'.\",\n \"clientidArray\":[\n \"D0242\",\n \"M0104\"\n ],\n \"pubdateRange\":\"ISODate(\"\"2022-11-07T08:55:40.000+0000\"\")\"\n}\n",
"text": "I don’t know how to make a query with range and filter using queryString!So, above is the data sample doc.by Using queryString, I need to find the data in the field - fulltext, by filtering clientArray and with range pubdateRange.\nHow will I do this with queryString?",
"username": "Utsav_Upadhyay2"
},
{
"code": "",
"text": "Hi @Utsav_Upadhyay2Could you please clarify if the examples provided in the documentation are not clear enough for you to understand? If not, do you have any suggestions for more detailed examples to help you better comprehend the topic?Additionally, regarding the sample document you shared, could you please provide more information about the query you attempted to run and any errors that occurred during its execution?Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "db.movies.aggregate([\n {\n $search: {\n \"queryString\": {\n \"defaultPath\": \"fullplot\",\n \"query\": \"plot:(captain OR kirk) AND enterprise\"\n }\n }\n }\n])\n",
"text": "There is no example in the document of range and filter with queryString. so, I am looking for a solution where I can apply range, filter and queryString in Atlas search query.Could you please guide me how can I add filter and range (daterange) in below query ?",
"username": "Utsav_Upadhyay2"
}
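For completeness, a hedged sketch of one way to combine the pieces discussed in this thread: queryString for the keyword logic inside compound.must, with the date range and client filter inside compound.filter. The dates and client id are placeholders, and pubdateRange must be mapped as a date type in the search index for range to work.

    db.collection.aggregate([
      {
        $search: {
          compound: {
            must: [
              {
                queryString: {
                  defaultPath: "fulltext",
                  query: "headline:(apple OR zomato) AND fulltext:twitter"
                }
              }
            ],
            filter: [
              {
                range: {
                  path: "pubdateRange",
                  gte: ISODate("2022-11-01T00:00:00Z"),
                  lte: ISODate("2022-11-30T23:59:59Z")
                }
              },
              { text: { path: "clientidArray", query: "D0242" } }
            ]
          }
        }
      }
    ])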
] | How to use queryString with date range and filter? | 2023-04-07T08:19:41.904Z | How to use queryString with date range and filter? | 949 |
[
"dot-net"
] | [
{
"code": " public List<string>? Genres { get; set; } var distinctItems = movieCollection\n .Distinct(x => x.Title.Genres, filter)\n .ToList();\n",
"text": "Hi all\ncan someone explains the next issueI try to get data and have the error 'Cannot deserialize a ‘List’ from BsonType ‘String’\nMy data model has\n public List<string>? Genres { get; set; }\nMongo shows data as an array\n\nimage862×729 57.8 KB\n\nwhen I take data as BsonDocument I show an arraybut when I try to use poco-class I have the error 'Cannot deserialize a ‘List’ from BsonType ‘String’it is my query (but doesn’t work totally when I use poco class)I have tried to add [BsonRepresentation(BsonType.Array)] but I have the same errorPlease help to fix that. Thanks",
"username": "andreyshiryaev13"
},
{
"code": " var dataCount = movieCollection\n .Aggregate()\n .Group(\n x => x.Title.Genres.Select(g => g),\n x => new\n {\n Genre = x.Key,\n Count = x.LongCount()\n }\n ).ToList();\n",
"text": "I have added the next GroupBy query works correctly without error",
"username": "andreyshiryaev13"
},
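A hedged note on the error itself: it normally means at least one document stores the field as a single string rather than an array, so deserialization into List<string> fails only for those documents. A sketch of a mongosh query to locate them follows; the element names "title.genres" are an assumption based on the C# mapping shown above, so adjust them to the real field names.

    // Find documents where the field is a plain string instead of an array.
    db.movies.find({ "title.genres": { $type: "string" } })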
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | 'Cannot deserialize a 'List<String>' from BsonType 'String' | 2023-04-17T09:47:27.671Z | ‘Cannot deserialize a ‘List<String>’ from BsonType ‘String’ | 1,601 |
|
null | [
"aggregation"
] | [
{
"code": "db.getCollection(collection).aggregate([{$lookup: {\n from: 'Price',\n localField: 'id',\n foreignField: 'id',\n as: 'result'\n }},{$unwind: '$result'\n },{$addFields: {\n price: '$result.onsiterate',\n priceDiscount:'$result.discount',\n discount:'$result.discount',\n rateType:'$result.ratetype',\n isRatePerStay:'$result.israteperstay',\n mealType:'$result.mealinclusiontype',\n taxType:'$result.taxtype',\n }},{$project: {\n id:1,\n hotelcode:1,\n roomamenities:1,\n roomtype:1,\n ratedescription:1,\n price:1,\n priceDiscount:1,\n discount:1,\n rateType:1,\n isRatePerStay:1,\n mealType:1,\n taxType:1,\n }},{$out: 'Combine'}]);\n",
"text": "error 'operation exceeded time limit ’ helpme",
"username": "Anh_Tu_n_Hu_nh_Van"
},
{
"code": "$addFields$projectdb.getCollection(collection).aggregate([\n {\n $lookup: {\n from: 'Price',\n localField: 'id',\n foreignField: 'id',\n as: 'result'\n }\n },\n {\n $unwind: '$result'\n },\n {\n $project: {\n id: 1,\n hotelcode: 1,\n roomamenities: 1,\n roomtype: 1,\n ratedescription: 1,\n price: '$result.onsiterate',\n priceDiscount: '$result.discount',\n discount: '$result.discount',\n rateType: '$result.ratetype',\n isRatePerStay: '$result.israteperstay',\n mealType: '$result.mealinclusiontype',\n taxType: '$result.taxtype'\n }\n },\n {\n $out: 'Combine'\n }\n ]);\n$project",
"text": "Hi @Anh_Tu_n_Hu_nh_Van and welcome to the MongoDB community forum!!Based on your shared aggregation pipeline, I think you can optimise it by removing the $addFields stage since it duplicates the existing $project stage and may not be necessary.It simplifies the $project stage by directly mapping the fields to the output.error 'operation exceeded time limit ’ helpmeThis error occurs when the if the time limit is reached before the operation completes.\nFor more details you can visit the documentation for Adjust Maximum Time for Query Operations.However, if the issue still persist please share the following information to help you with appropriate query.Regards\nAasawari",
"username": "Aasawari"
},
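One hedged addition: when a $lookup like the one above exceeds the time limit, the usual first step is to index the joined field on the foreign collection so each input document does not force a collection scan of Price (the field name is taken from the pipeline above):

    // Run once; lets the localField/foreignField join on "id" use an index.
    db.Price.createIndex({ id: 1 })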
{
"code": "db.getCollection('users').aggregate([\n { $match: { name: 'Giles.Welch' } },\n {\n $lookup: {\n from: 'Hotels',\n localField: '_id',\n foreignField: 'userId',\n as: 'hotels',\n },\n },\n { $unwind: '$hotels' },\n {\n $lookup: {\n from: 'RoomTypes',\n localField: 'hotels.roomTypeIds',\n foreignField: '_id',\n as: 'roomTypes',\n },\n },\n { $addFields: { 'hotels.roomTypeIds': '$roomTypes' } },\n { $project: { roomTypes: 0, password: 0, balance: 0 } },\n]);\n",
"text": "Thank you very much ,i use this to combine 3 three collectionsAny other way ??",
"username": "Anh_Tu_n_Hu_nh_Van"
},
{
"code": "",
"text": "Hi @Anh_Tu_n_Hu_nh_VanCan you help me with a sample data for all the collections and the desired output which would help me to reproduce in my local environment.Also, mention the MongoDB version you are on.Ragards\nAasawari",
"username": "Aasawari"
},
{
"code": " {\n \"_id\": {\n \"$oid\": \"642e9659793554fd9f944ec0\"\n },\n \"name\": \"Vernice90\",\n \"email\": \"[email protected]\",\n \"verify\": true,\n \"avatar\": \"https://cloudflare-ipfs.com/ipfs/Qmd3W5DuhgHirLHGVixi6V76LhCkZUz6pnFt5AJBiyvHye/avatar/997.jpg\",\n \"position\": \"HOTELIER\",\n \"isActive\": true,\n \"createAt\": {\n \"$date\": \"2023-04-06T09:52:17.612Z\"\n },\n \"updateAt\": {\n \"$date\": \"2023-04-06T09:52:17.612Z\"\n },\n \"hotels\": {\n \"_id\": {\n \"$oid\": \"642c63166c0f6bdd3b02927d\"\n },\n \"address\": \"121 Southgate Street\",\n \"city\": \"Gloucester\",\n \"country\": \"United Kingdom\",\n \"latitude\": 51.86043167,\n \"longitude\": -2.250770092,\n \"roomTypeIds\": [\n {\n \"_id\": {\n \"$oid\": \"642c63766c0f6bdd3b046092\"\n },\n \"price\": 49.42,\n \"priceDiscount\": 0,\n \"discount\": 0,\n \"mealType\": \"Free Breakfast\",\n \"images\": [],\n \"rateDescription\": \"Room size: 25 m²/269 ft², Shared bathroom, 1 single bed\",\n \"roomAmenities\": [\n \"Air conditioning\",\n \"Carpeting\",\n \"Clothes rack\",\n \"Fan\",\n \"Free Wi-Fi in all rooms!\",\n \"Hair dryer\",\n \"Heating\",\n \"In-room safe box\",\n \"Linens\",\n \"Shower\",\n \"Smoke detector\",\n \"Toiletries\",\n \"Towels\",\n \"TV [flat screen]\"\n ],\n \"updatedAt\": {\n \"$date\": \"2023-04-05T18:03:48.767Z\"\n },\n \"numberOfRoom\": 5,\n \"nameOfRoom\": \"Single Room with Shared Bathroom\"\n },\n {\n \"_id\": {\n \"$oid\": \"642c63766c0f6bdd3b046093\"\n },\n \"price\": 57.02,\n \"priceDiscount\": 0,\n \"discount\": 0,\n \"mealType\": \"Free Breakfast\",\n \"images\": [],\n \"rateDescription\": \"Room size: 36 m²/388 ft², Shared bathroom, 2 single beds\",\n \"roomAmenities\": [\n \"Air conditioning\",\n \"Carpeting\",\n \"Clothes rack\",\n \"Fan\",\n \"Free Wi-Fi in all rooms!\",\n \"Hair dryer\",\n \"Heating\",\n \"In-room safe box\",\n \"Linens\",\n \"Shower\",\n \"Smoke detector\",\n \"Toiletries\",\n \"Towels\",\n \"TV [flat screen]\"\n ],\n \"updatedAt\": {\n \"$date\": \"2023-04-05T18:03:48.767Z\"\n },\n \"numberOfRoom\": 1,\n \"nameOfRoom\": \"Twin Room with Shared Bathroom\"\n },\n {\n \"_id\": {\n \"$oid\": \"642c63766c0f6bdd3b046094\"\n },\n \"price\": 82.36,\n \"priceDiscount\": 0,\n \"discount\": 0,\n \"mealType\": \"Free breakfast for {2}\",\n \"images\": [],\n \"rateDescription\": \"Room size: 36 m²/388 ft², Shared bathroom, 2 single beds\",\n \"roomAmenities\": [\n \"Air conditioning\",\n \"Carpeting\",\n \"Clothes rack\",\n \"Fan\",\n \"Free Wi-Fi in all rooms!\",\n \"Hair dryer\",\n \"Heating\",\n \"In-room safe box\",\n \"Linens\",\n \"Shower\",\n \"Smoke detector\",\n \"Toiletries\",\n \"Towels\",\n \"TV [flat screen]\"\n ],\n \"updatedAt\": {\n \"$date\": \"2023-04-05T18:03:48.767Z\"\n },\n \"numberOfRoom\": 8,\n \"nameOfRoom\": \"Twin Room with Shared Bathroom\"\n }\n ],\n \"hotelName\": \"Spalite Hotel\",\n \"images\": [],\n \"starRating\": 5,\n \"start\": 3,\n \"propertyType\": \"Hotels\",\n \"userId\": {\n \"$oid\": \"642e9659793554fd9f944ec0\"\n },\n \"currency\": \"GBP\",\n \"createAt\": {\n \"$date\": \"2023-04-06T08:43:49.703Z\"\n },\n \"package\": \"YEAR\"\n }\n }\n]\n",
"text": "my version version v3.6.8\nMy data :",
"username": "Anh_Tu_n_Hu_nh_Van"
},
{
"code": "hotelRoomTypesusersHotelsRoomTypes",
"text": "Hello @Anh_Tu_n_Hu_nh_VanThank you for sharing the sample document.I use this to combine 3 three collectionsI believe you have shared a user’s sample document that contains an embedded hotel and RoomTypes document. Could you please share three distinct sample documents from your respective collections, which, as I understand from the above-shared document, are users, Hotels , and RoomTypes ?Any other way ??Based on the sample data shared,It would be helpful for us to assist you in a better way if you could help me with the specific details.my version v3.6.8Furthermore, the mentioned version has reached the end of life (refer Legacy Support Policy), and would recommend you upgrade to the latest version for new features and bug fixes.Regards\nAasawari",
"username": "Aasawari"
}
] | Combine two collection to one use $loopup in vsc | 2023-04-05T07:51:57.250Z | Combine two collection to one use $loopup in vsc | 858 |
null | [
"dot-net",
"flexible-sync"
] | [
{
"code": "public partial class TrainingsSession : IEmbeddedObject\n{\n [MapTo(\"_id\")] public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n \n [MapTo(\"exercisesForDay\")] [BsonIgnore] public IList<ExerciseForDay> ExercisesForDay { get; }\n}\npublic partial class ExerciseForDay : IRealmObject, ISortable, IEntity<ObjectId>\n{\n [PrimaryKey]\n [MapTo(\"_id\")]\n [BsonElement(\"_id\")]\n [BsonId]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(\"partition\")]\n [BsonElement(\"partition\")]\n [BsonRequired]\n public string Partition { get; set; } = string.Empty;\n}\npublic partial class DiaryEntry : IRealmObject, IEntity<ObjectId>\n{\n [PrimaryKey]\n [MapTo(\"_id\")]\n [BsonId]\n [BsonElement(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(\"partition\")]\n [BsonElement(\"partition\")]\n [BsonRequired]\n public string Partition { get; set; } = string.Empty;\n\n [MapTo(\"sessions\")]\n [BsonElement(\"sessions\")]\n public IList<TrainingsSession> Sessions { get; }\n}\npublic partial class TrainingsDay : IEmbeddedObject, IGroupable\n{\n [MapTo(\"_id\")]\n [BsonId]\n [BsonElement(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(\"sessions\")]\n [BsonElement(\"sessions\")]\n public IList<TrainingsSession> Sessions { get; }\n}\npublic partial class Week : IEmbeddedObject, IGroupable\n{\n [MapTo(\"_id\")]\n [BsonElement(\"_id\")]\n [BsonId]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(\"days\")] [BsonElement(\"days\")] public IList<TrainingsDay> Days { get; }\n}\npublic partial class Trainingsplan : IRealmObject, IEntity<ObjectId>\n{\n [PrimaryKey]\n [MapTo(\"_id\")]\n [BsonId]\n [BsonElement(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n \n [MapTo(\"partition\")]\n [BsonElement(\"partition\")]\n public string Partition { get; set; } = string.Empty;\n\n\n [MapTo(\"weeks\")]\n [BsonElement(\"weeks\")]\n public IList<Week> Weeks { get; }\n}\n",
"text": "I’m Currently trying to migrate my app from Partitionbased Sync to flexible Sync and ran into an Error without changing my Schema. ( I created a new App for this with flexible Sync in Atlas App Services and wanted to play around with it first). Tried to establish the Schema with Development Mode.Currently, it says:\nfailed to update schema: error checking for queryable fields change: error fetching schema provider for schemas: at least two embedded schemas with different properties have the same title - “TrainingsSession” - property “exerciseForDay” has linking type “cross-collection link” in one schema and “none” in another: please terminate sync first if you wish to make changes to the embedded schemas that share a title (ProtocolErrorCode=225)Trainingssession is an EmbeddObject that contains a list of RealmObjects (ExerciseForDay). Trainingssession itself is used in two different Places. A RealmObject (DiaryEntry ) and a EmbeddObject (TrainingsDay). Below are the relevant Classes (Only relevant Properties are displayed)Do i miss something with the Relationships ? The given schema worked perfectly fine with the Partitionbased sync.",
"username": "Jannis_N_A"
},
{
"code": "",
"text": "Hi @Jannis_N_A,Your schema seems correct, so it should work. Let me investigate a little and come back to you.",
"username": "papafe"
},
{
"code": "",
"text": "Hey @Jannis_N_A,We found the cause of the error and we will try to fix it as soon as possible. In the meanwhile, as a temporary workaround, you should be able to progress by specifying the JSON schema directly in the Atlas interface. Probably the easiest way for you would be simply to copy it from your previous partition based application.",
"username": "papafe"
},
{
"code": "",
"text": "Thank you very much for the fast reply and the solution. I will try it out!!\nIs there a GitHub issue I can follow to get notified when the issue is fixed? That would be really great\nKeep up the good work ",
"username": "Jannis_N_A"
},
{
"code": "",
"text": "Sorry for the very late reply. We have an internal bug report for that, but there is no public link for it.\nI can tell you though that the issue is still not fixed.\nI hope that in the meanwhile you were able to proceed and this didn’t cause much troubles!",
"username": "papafe"
},
{
"code": "",
"text": "@papafe No Problem ",
"username": "Jannis_N_A"
},
{
"code": "",
"text": "Hi. We believe we have fixed this issue and will be deploying it this coming Thursday. If you can try to reproduce this and let us know if it is no longer an issue that would be great! If not, no worries at all.Thanks,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi Tyler,\nI will try it out and check if everything work as expected \nI will let you know if I find any issues.\nThanks for notifying me, that the issue is fixed. Keep up the good Work \nThanks,\nJannis",
"username": "Jannis_N_A"
},
{
"code": "",
"text": "@Tyler_Kaye @papafe\nTried it out today and everything works fine \nThanks for fixing the issue and notifying \nThanks,\nJannis",
"username": "Jannis_N_A"
},
{
"code": "",
"text": "Thats great! I am glad to hear it.",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Flexible Sync dotnet Relationships EmbeddedObjects | 2023-03-17T09:04:17.355Z | Flexible Sync dotnet Relationships EmbeddedObjects | 1,220 |
[
"dot-net",
"data-modeling"
] | [
{
"code": "",
"text": "Could anyone give me some guidance over the following query?I am fairly new to Mongo having worked through the MongoDB University courses in October last year, and I started using it on a personal project at the beginning of the team. I am really enjoying it but still feel I have a fair way to go to understand some of these basics. As I’m sure most of your newbies are, my main focus so far has been in SQL data stores.My subject matter is football tournaments. The schema I have captured below is of a snapshot of a tournament in the planning phase. I will mention that my data access layer is .Net so if anyone has examples relating to this it would be good, but I can translate it over if not.My query is centred around the phaseGroups Object array. This maps back to a <int, TournamentPhase> dictionary in code and models the Group and Elimination phases of the tournament.My code initialises the phases and fixtures based on the selected TournamentFormat ruleset during Tournament creation and persists this to a document. The fixtures are generated up front to allow Tournament Planners to arrange them into a schedule prior to the teams signing up.I’m now in a position where I need to start updating these items.Looking at a specific example,\nphaseGroups[0].phases.teams - I need to access this array to insert and remove teams.Any feedback or tips on my approach would also be very welcome.\nTournament_Schema475×727 11.2 KB\n",
"username": "Glenn_Moseley1"
},
{
"code": "",
"text": "I’ve managed to put together a solution for this using elemmatch to step through the nested arrays, then using set to update the contents.",
"username": "Glenn_Moseley1"
},
{
"code": "",
"text": "For the benefit to all users please share you solution so that we all know.",
"username": "steevej"
},
{
"code": " MongoClient dbClient = new MongoClient(_connectionSettingsProvider.GetDataConnection());\n var database = dbClient.GetDatabase(_connectionSettingsProvider.GetDatabaseName());\n\n var collection = database.GetCollection<Models.Tournament>(\"tournaments\");\n\n var mappedPhase = _tournamentPhaseMapper.CreateNew(tournamentPhaseToSave);\n\n var tournamentFilter = Builders<Models.Tournament>.Filter.And(\n Builders<Models.Tournament>.Filter.Eq(\"_id\", new ObjectId(tournamentId)),\n Builders<Models.Tournament>.Filter.ElemMatch(t => t.PhaseGroups, g => g.Order == 1),\n Builders<Models.Tournament>.Filter.ElemMatch(t => t.PhaseGroups[0].Phases, g => g.Id == tournamentPhaseToSave.Id)\n );\n\n var tournamentResult = await collection.Find(tournamentFilter).FirstOrDefaultAsync();\n\n var update = Builders<Models.Tournament>.Update;\n var setter = update.Set(g => g.PhaseGroups[0].Phases[0], mappedPhase);\n \n await collection.UpdateOneAsync(tournamentFilter, setter); \n",
"text": "Yes of course. This is what I settled on. From looking at other solutions I thought that I could use [-1] in place of the positional operator but that didn’t work for me. No doubt I will be revisiting this, as this is very much a happy path scenario so I’ll try some things then.",
"username": "Glenn_Moseley1"
},
{
"code": "MongoClient dbClient = new MongoClient(_connectionSettingsProvider.GetDataConnection());\n var database = dbClient.GetDatabase(_connectionSettingsProvider.GetDatabaseName());\n\n var collection = database.GetCollection<Models.Tournament>(\"tournaments\");\n\n var tournamentFilter = Builders<Models.Tournament>.Filter.And(\n Builders<Models.Tournament>.Filter.Eq(\"_id\", new ObjectId(tournamentId)),\n Builders<Models.Tournament>.Filter.ElemMatch(t => t.PhaseGroups, g => g.Order == phaseGroupOrder));\n\n var projectionDefinition = Builders<Models.Tournament>.Projection.Include(t => t.PhaseGroups).Exclude(\"_id\");\n\n var tournamentResult = await collection\n .Aggregate()\n .Match(tournamentFilter)\n .Project<Models.Tournament>(projectionDefinition)\n .FirstOrDefaultAsync();\n \n var phaseGroup = tournamentResult.PhaseGroups.Where(g => g.Order == phaseGroupOrder).FirstOrDefault();\n var groupIndex = tournamentResult.PhaseGroups.ToList().IndexOf(phaseGroup);\n \n var phase = phaseGroup.Phases.FirstOrDefault(p => p.Id == tournamentPhaseToSave.Id);\n var phaseIndex = phaseGroup.Phases.ToList().IndexOf(phase);\n\n var mappedPhase = _tournamentPhaseMapper.CreateNew(tournamentPhaseToSave);\n\n var update = Builders<Models.Tournament>.Update;\n var setter = update.Set(g => g.PhaseGroups[groupIndex].Phases[phaseIndex], mappedPhase);\n \n await collection.UpdateOneAsync(tournamentFilter, setter); \n",
"text": "What I did ended up being incorrect.I ended up Projecting the collection I needed, finding the indexes of each nested level and using these in my Setter. I haven’t found a better way to update nested Arrays yet but always happy to take suggestions.",
"username": "Glenn_Moseley1"
},
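Since the post above invites suggestions: a commonly used alternative to computing the indexes client-side is the filtered positional operator with arrayFilters, which lets the server locate the nested elements. A hedged mongosh sketch follows; the element names and values are assumptions based on the schema discussed above, and the C# driver exposes the same feature through UpdateOptions.ArrayFilters.

    db.tournaments.updateOne(
      { _id: ObjectId("...") },
      { $set: { "phaseGroups.$[g].phases.$[p].teams": updatedTeams } },
      {
        arrayFilters: [
          { "g.order": 1 },       // pick the phase group by its order
          { "p._id": phaseId }    // pick the phase by its id
        ]
      }
    )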
{
"code": "",
"text": "Please publish sample document in text JSON format so that we can cut-n-paste into our systems.Also what determines which element of phaseGroups you want to update.And which element of phases you want to update.",
"username": "steevej"
},
{
"code": "",
"text": "I will come back to this. I’ve made some changes to my design and the new design doesn’t quite address this in the same way, but I am interested in best practices.",
"username": "Glenn_Moseley1"
}
] | Updating a document in a nested array | 2023-03-25T12:54:05.411Z | Updating a document in a nested array | 1,128 |
|
null | [] | [
{
"code": "ACA.B.C.D.",
"text": "Hello,\nI’m preparing for the Associate Developer Certification and searching for possible questions I found an interesting case.If the answer offers a deprecated but effective way to do something, is it a valid answer?For example in this question, which A and C are correct :Which of the following is a valid way to delete a document in MongoDB?\nA. db.users.remove({name: “John”})\nB. db.users.delete({name: “John”})\nC. db.users.deleteOne({name: “John”})\nD. All of the aboveAnd the other question is:Do I also have to study the old and deprecated methods for the exam?Thank you for your time ",
"username": "Pau_Trepat_Segura"
},
{
"code": "",
"text": "Hey @Pau_Trepat_Segura,Welcome to the MongoDB Community Forums! If the answer offers a deprecated but effective way to do something, is it a valid answer?The MongoDB Certification team regularly checks and updates its question bank to make sure that the questions related to any deprecated methods are not asked. So, you can stay assured that you will not find yourself in a situation as you mentioned.Do I also have to study the old and deprecated methods for the exam?The syllabus ie. the breakdown of the topics and what to prepare for those topics are all listed in the Developer Exam Study Guide. Kindly prepare using this guide only.Wishing you the best in your certification preparation! Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "Practicing with case studies can be a helpful way to prepare for certification exams, as they provide a realistic scenario to which you can apply your knowledge. Be sure to review the official exam guide and take practice exams to gauge your readiness.",
"username": "Harsh_Seo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Developer certification correct answer of deprecated methods | 2023-04-15T14:09:46.980Z | Developer certification correct answer of deprecated methods | 952 |
null | [
"node-js"
] | [
{
"code": "",
"text": "doing the script-along and having an issue with carrying on when needing to use the mgenerate mtool to import the mgenerate_u\nserdata.json - output = mgenerate is no longer included with mtools. Please use https://www.npmjs.com/package/mgeneratejs instead. but when installing nodejs and npm - then trying npm install mgeneratejs - npm ERR! occur - any assistance would be kindly appreciated.",
"username": "Gareth_Furnell"
},
{
"code": "",
"text": "Did you try npm install - g mgeneratejs?\nCheck this thread",
"username": "Ramachandra_Tummala"
},
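A hedged example of how the replacement tool is typically used once the global install succeeds; the template file name comes from the post above, while the database and collection names are placeholders:

    npm install -g mgeneratejs
    cat mgenerate_userdata.json | mgeneratejs -n 1000 | mongoimport --db m312 --collection users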
{
"code": "",
"text": "I had tried that initially on the day of testing but it had the same ERR! - Now that I have come back on a Monday it has rectified the issue, Thanks for your response :).",
"username": "Gareth_Furnell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mgeneratejs in m312 | 2023-04-14T14:07:41.826Z | Mgeneratejs in m312 | 1,060 |
null | [
"node-js",
"mongoose-odm",
"connecting",
"atlas-cluster"
] | [
{
"code": "const serverSelectionError = new ServerSelectionError();\n ^\n\nMongooseServerSelectionError: getaddrinfo ENOTFOUND ac-otj2fk4-shard-00-01.jqf3ndr.mongodb.net\n at NativeConnection.Connection.openUri (/home/runner/boilerplate-project-exercisetracker/node_modules/mongoose/lib/connection.js:824:32)\n at /home/runner/boilerplate-project-exercisetracker/node_modules/mongoose/lib/index.js:412:10\n at /home/runner/boilerplate-project-exercisetracker/node_modules/mongoose/lib/helpers/promiseOrCallback.js:41:5\n at new Promise (<anonymous>)\n at promiseOrCallback (/home/runner/boilerplate-project-exercisetracker/node_modules/mongoose/lib/helpers/promiseOrCallback.js:40:10)\n at Mongoose._promiseOrCallback (/home/runner/boilerplate-project-exercisetracker/node_modules/mongoose/lib/index.js:1265:10)\n at Mongoose.connect (/home/runner/boilerplate-project-exercisetracker/node_modules/mongoose/lib/index.js:411:20)\n at Object.<anonymous> (/home/runner/boilerplate-project-exercisetracker/index.js:9:10)\n at Module._compile (node:internal/modules/cjs/loader:1105:14)\n at Object.Module._extensions..js (node:internal/modules/cjs/loader:1159:10) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'ac-otj2fk4-shard-00-00.jqf3ndr.mongodb.net:27017' => ServerDescription {\n address: 'ac-otj2fk4-shard-00-00.jqf3ndr.mongodb.net:27017',\n type: 'RSSecondary',\n hosts: [\n 'ac-otj2fk4-shard-00-00.jqf3ndr.mongodb.net:27017',\n 'ac-otj2fk4-shard-00-01.jqf3ndr.mongodb.net:27017',\n 'ac-otj2fk4-shard-00-02.jqf3ndr.mongodb.net:27017'\n ],\n passives: [],\n arbiters: [],\n tags: {\n provider: 'AWS',\n workloadType: 'OPERATIONAL',\n nodeType: 'ELECTABLE',\n region: 'US_EAST_1'\n },\n minWireVersion: 0,\n maxWireVersion: 13,\n roundTripTime: 217.72000000000003,\n lastUpdateTime: 6177723,\n lastWriteDate: 2022-11-29T08:19:43.000Z,\n error: null,\n topologyVersion: {\n processId: ObjectId { [Symbol(id)]: [Buffer [Uint8Array]] },\n counter: 5\n },\n setName: 'atlas-u7gv3z-shard-0',\n setVersion: 11,\n electionId: null,\n logicalSessionTimeoutMinutes: 30,\n primary: 'ac-otj2fk4-shard-00-01.jqf3ndr.mongodb.net:27017',\n me: 'ac-otj2fk4-shard-00-00.jqf3ndr.mongodb.net:27017',\n '$clusterTime': {\n clusterTime: Timestamp { low: 10, high: 1669709983, unsigned: true },\n signature: { hash: [Binary], keyId: [Long] }\n }\n },\n 'ac-otj2fk4-shard-00-01.jqf3ndr.mongodb.net:27017' => ServerDescription {\n address: 'ac-otj2fk4-shard-00-01.jqf3ndr.mongodb.net:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 6186787,\n lastWriteDate: 0,\n error: MongoNetworkError: getaddrinfo ENOTFOUND ac-otj2fk4-shard-00-01.jqf3ndr.mongodb.net\n at connectionFailureError (/home/runner/boilerplate-project-exercisetracker/node_modules/mongoose/node_modules/mongodb/lib/cmap/connect.js:387:20)\n at TLSSocket.<anonymous> (/home/runner/boilerplate-project-exercisetracker/node_modules/mongoose/node_modules/mongodb/lib/cmap/connect.js:310:22)\n at Object.onceWrapper (node:events:642:26)\n at TLSSocket.emit (node:events:527:28)\n at emitErrorNT (node:internal/streams/destroy:157:8)\n at emitErrorCloseNT (node:internal/streams/destroy:122:3)\n at processTicksAndRejections (node:internal/process/task_queues:83:21) {\n cause: Error: getaddrinfo ENOTFOUND ac-otj2fk4-shard-00-01.jqf3ndr.mongodb.net\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26) {\n errno: -3008,\n code: 'ENOTFOUND',\n 
syscall: 'getaddrinfo',\n hostname: 'ac-otj2fk4-shard-00-01.jqf3ndr.mongodb.net'\n },\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n },\n topologyVersion: null,\n setName: null,\n setVersion: null,\n electionId: null,\n logicalSessionTimeoutMinutes: null,\n primary: null,\n me: null,\n '$clusterTime': null\n },\n 'ac-otj2fk4-shard-00-02.jqf3ndr.mongodb.net:27017' => ServerDescription {\n address: 'ac-otj2fk4-shard-00-02.jqf3ndr.mongodb.net:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 6186661,\n lastWriteDate: 0,\n error: MongoNetworkError: getaddrinfo ENOTFOUND ac-otj2fk4-shard-00-02.jqf3ndr.mongodb.net\n at connectionFailureError (/home/runner/boilerplate-project-exercisetracker/node_modules/mongoose/node_modules/mongodb/lib/cmap/connect.js:387:20)\n at TLSSocket.<anonymous> (/home/runner/boilerplate-project-exercisetracker/node_modules/mongoose/node_modules/mongodb/lib/cmap/connect.js:310:22)\n at Object.onceWrapper (node:events:642:26)\n at TLSSocket.emit (node:events:527:28)\n at emitErrorNT (node:internal/streams/destroy:157:8)\n at emitErrorCloseNT (node:internal/streams/destroy:122:3)\n at processTicksAndRejections (node:internal/process/task_queues:83:21) {\n cause: Error: getaddrinfo ENOTFOUND ac-otj2fk4-shard-00-02.jqf3ndr.mongodb.net\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26) {\n errno: -3008,\n code: 'ENOTFOUND',\n syscall: 'getaddrinfo',\n hostname: 'ac-otj2fk4-shard-00-02.jqf3ndr.mongodb.net'\n },\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n",
"text": "I am getting this connection error from replit since weekend.",
"username": "Valikhan_Dumshebayev"
},
{
"code": "MongooseServerSelectionError: getaddrinfo ENOTFOUNDMongooseServerSelectionErrorgetaddrinfo ENOTFOUNDcurl http://portquiz.net:27017/",
"text": "MongooseServerSelectionError: getaddrinfo ENOTFOUNDI am glad you made this separate post with error details. Although the error family is the same, MongooseServerSelectionError, your is different than the one in the other post: getaddrinfo ENOTFOUND. This error comes up when there is a problem in the DNS server your app’s host uses.However, I suspect the cause is the same: The container in which your app starts has the problem. Assuming your app is also a free one you may try the temporary solution I offered there: 8th answer in that postalso, check the given IP address along with the port test with this: curl http://portquiz.net:27017/. by the way, use the command in your repl’s shell.I also suggest sending a bug report from within the repl (help button on bottom-left) about this problem so to make them replit team aware of the situation.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I’ve sent a bug report yesterday as you suggested. It looks like the problem has gone, at least for now.\nThank you for your swift response and helping out.",
"username": "Valikhan_Dumshebayev"
},
{
"code": "",
"text": "they have responded today saying they are aware of the problem Hey there,\nSorry to hear you’re having issues with MongoDB!\nWe are aware of this issue and are working on a fix. We will follow up as soon as we have an update!",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Is there a bug tracker issue we can follow or any announcement about the downtime? This would seem to violate the SLA as my hosting provider cannot connect, but nothing is being reported https://status.cloud.mongodb.com/.I did some debugging and it seems like a firewall issue on your end. All versions of MongoDB’s NodeJs driver are affected.",
"username": "Dave_Powers"
},
{
"code": "",
"text": "@Dave_Powers , this problem is not on MongoDB side. It is some new bug on some replit containers, probably arose on some cloud providers they use. Unfortunately, there is no estimate on how long this will bug us.\nIf it a free one, you can try your luck reloading your repl as many times and hope it starts into a container that can connect to MongoDB. I can’t say the same for powered repls as I don’t know how to stop/restart one (free ones stops when you leave the page).",
"username": "Yilmaz_Durmaz"
},
{
"code": "kill 1",
"text": "They seem to think it’s a bug on your end, based on Discord discussions. What exactly is happening?From the error it sounds like MongoDB is looking for a specific DNS record but can’t find it. It doesn’t seem to be looking for an A record, is it perhaps a AAAA/TXT? Could this be caused by an overly optimized DNS server like Cloudflare’s 1.1.1.1? Are there any known workarounds to get our apps up again?ReplIt uses Google Cloud Platform for hardware, with NixOS containers. NixOS is not the issue, I run the same version on my desktop. You can restart always-on containers with kill 1.",
"username": "Ray_Foss"
},
{
"code": "",
"text": "Hi @Ray_Foss, It is not about just restarting the repl, you need to do that until you hit a working IP range. I am not aware of discussions on discord. can you link us there?by the way, we opened this post after another discussion about a very similar problem (just a small difference in the error message). And replit teams had a response to my bug report saying they are aware. You can find the link to the other discussion, and their response in my above answers.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Found the workaround thank you. The connection is actively being blocked based on IP address, as every ReplIt container runs the same software… Therefore the likely primary source of the issue is the Atlas firewall/fail2ban configuration. To test this story, we could ssh from ReplIt to an SSH server with a known working IPv4 address and reverse port forward port 22. We can then use that as a dynamic proxy for a local MongoDB connection.Kubernetes nodes rarely have stable IP addresses in general, this should also sporadically affect Kubernetes users on GCP… Or anyone unfortunate enough to get a banned IP.Long term, giving Atlas IPv6 support should make it easier to avoid this situation… That way one rogue customer plugin/container/wasm module/function on your Kubernetes Node doesn’t get the whole Node banned.Would moving our Atlas cluster from AWS to GCP help?",
"username": "Ray_Foss"
},
{
"code": "node mongoconnets.jsconst mongoose = require('mongoose');\nconst mySecret = process.env['mongoUrl']\nconst intialDbConnection = async () => {\n try {\n console.log(\"connecting?\")\n await mongoose.connect(mySecret, {\n useNewUrlParser: true,\n useUnifiedTopology: true\n })\n console.log(\"db connected\")\n // await mongoose.disconnect()\n // console.log(\"test complete. db disconnected\")\n }\n catch (error) {\n console.error(error);\n }\n}\nconsole.log(new Date(Date.now()))\nintialDbConnection()\nsetInterval(()=>console.log(new Date(Date.now())),5000)\n",
"text": "@Ray_Foss I am not sure if that is true after using “allow access everywhere”. do you have time to test your theory? especially if you hit bugging IP addresses frequently. mine was (un)-lucky shot to get one of those addresses.PS: tunneling would work no matter what you do because that is what tunneling does it is like trying to connect from your own pc (remember I tried side-by-side with Compass), it would just work. so, not a good test/prod method.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "That seems like the most efficient way to potentially deal with this problem. I was using the MongoDB driver directly. This test would only rule out the cloud provider’s firewall, not any additional Atlas DoS attack protection, fail2ban config or firewall. I’ll still give this a try as it’s a quick test.By reverse port forwarding port 22, I meant tunneling to the Replit container so that connections from Compass would go through the container using a socks proxy.",
"username": "Ray_Foss"
},
{
"code": "",
"text": "Good news, it has been fixed in the last 12 hours or so\nReplit reproduction that checks AWS and GCP\nBad news, I have no idea what happened. The nature of it lends credence to it not being a DoS issue, but a staged release issue where by 50% of the nodes had a bad DNS config.",
"username": "Ray_Foss"
},
{
"code": "",
"text": "It appears the problem is still occurring sporadically, it’s as if there is a maximum number of connections per IP on a global basis? I certainly have enough connections available for my database.Can Atlas handle 100 projects connecting to different databases from the same IP?",
"username": "Ray_Foss"
},
{
"code": "",
"text": "Can Atlas handle 100 projects connecting to different databases from the same IP?Absolutely, but if you are using the free or on of the shared tier you get what you pay for, a cluster that is affected by what the other people/applications are doing.I have seen some post about people doing performance assessment on the free/shared tiers. It sure slows you down if you are on the same shared tier.But so far, nothing shows the issue is on Atlas side. It might be replit. The error ENOTFOUND is strictly DNS related. The resolution is not made by Atlas. It is highly distributed and cached. I did not looked at the TTL values but if one side received ENOTFOUND then it looks like the resolver on this erring side is the culprit.By the way, are you using the SRV style URI or the old style where replica set hosts are individually specified.The followingFrom the error it sounds like MongoDB is looking for a specific DNS record but can’t find it.is a big misunderstanding of DNS. Atlas is supplying the DNS information. Your application, using the driver, is not able to find the DNS information. Your side is not able to find the information. DNS is the pillar of the internet that allows everything to work with names rather than numbers. When we query on our side with a reliable resolver it works.One quick fix is to use google’s 8.8.8.8 and 8.8.4.4 free DNS servers. I just do not know how to bypass replit DNS resolvers to use google’s.The other fix, if you really think that Atlas is the issue, is to have manage your own mongod servers.",
"username": "steevej"
},
{
"code": "",
"text": "I always used paid stuff.I’ve traced the source of the issue… bad DNS servers. If your container was created with the bad DNS server, there is no remedy but to delete it and start over.A quick test is checking what happens when you run dig google.com. If you only get one response like Cloudflare likes to give, you have the bad unreliable DNS configuration. I’m very familiar with DNS… DNS can be used as a database and for auto configuration. The mongodb driver actually gets its replica information from a TXT record.",
"username": "Ray_Foss"
},
{
"code": "mongodb://mongodb+srv://",
"text": "@Ray_Foss, how often does your app fall into this problem? As I said, mine was just a bit of luck to get one of those to see this error. may I ask if you can try this: when you hit a host with the problem, can you try to connect with mongodb:// URL format instead of mongodb+srv://? I wonder if this has any relevance.\n(from the same page on Atlas where you get the srv string, select oldest driver version)",
"username": "Yilmaz_Durmaz"
},
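For anyone trying that suggestion, a hedged illustration of the difference: the +srv form needs SRV/TXT DNS lookups for a single cluster hostname, while the old form lists the replica set hosts directly. The hosts and replica set name below are the ones printed in the error output earlier in this thread; user, password, and database are placeholders.

    # SRV form: one hostname, resolved via DNS SRV/TXT records
    mongodb+srv://<user>:<password>@<cluster>.jqf3ndr.mongodb.net/<db>?retryWrites=true&w=majority

    # Old form: hosts listed explicitly, no SRV lookup required
    mongodb://<user>:<password>@ac-otj2fk4-shard-00-00.jqf3ndr.mongodb.net:27017,ac-otj2fk4-shard-00-01.jqf3ndr.mongodb.net:27017,ac-otj2fk4-shard-00-02.jqf3ndr.mongodb.net:27017/<db>?ssl=true&replicaSet=atlas-u7gv3z-shard-0&authSource=admin&retryWrites=true&w=majority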
{
"code": "",
"text": "Nice to know the following.I’ve traced the source of the issue… bad DNS servers. If your container was created with the bad DNS server, there is no remedy but to delete it and start over.It actually confirmsif one side received ENOTFOUND then it looks like the resolver on this erring side is the culprit.The mongodb driver actually gets its replica information from a TXT record.It gets part of it. The list of hosts is actually a SRV records.",
"username": "steevej"
},
{
"code": "",
"text": "I paid for a hacker plan and after two months it’s still not working.\nIt’s unbelievable…",
"username": "Hacking_Robot"
},
{
"code": "",
"text": "i had the problem 2 times already , i use replit hack plan , the first time i chage the new mongo account , i thougt the problems was from too many connection ,more than 500.\nthis time i dont know how to do ?\ni read the up messages , if the problem is DNS record , how to fix it ?",
"username": "hunhun_nuan"
},
{
"code": "",
"text": "Getting this again, seems like a problem with replit and not mongo",
"username": "Amir_Angel"
}
] | No connection to MongoDB from Replit | 2022-11-29T08:37:56.620Z | No connection to MongoDB from Replit | 7,018 |
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "I’m looking to move across from Firebase to MongoDB (for a number of reasons). But user authentication is a little more “involved”. Based on the Realm UI, I’m required to enter an “email confirmation URL” that should point to a page that contains an email confirmation script.I’ve tried to look for some tutorials or guides on how to configure this as I’m new to programming. Can anyone point me in the right direction please?",
"username": "Anthony_CJ"
},
{
"code": "",
"text": "Hi @Anthony_CJ,The idea is you should set a link pointing to your application url endpoint where you would confirm the received token.Now the user is sended with an email by Realm system after registration the link he clicks should point to a place where you complete your registration with your sdk language.Here is the section for user confirmation flow for JS sdk (you should navigate to a similar docs for your sdk):Please let me know if this helps.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks. I understand the process, what I don’t know how to do is generate the email configuration script that I will locate at the email confirmation URL. I am building an iOS app so there’s so web app.",
"username": "Anthony_CJ"
},
{
"code": "",
"text": "Hi @Anthony_CJThis is more related to language specific redirect, you can also avoid email confirmation or do a function confirmation for your app.I would recommend looking into our IOS tutorial which shows email signup logic:\nhttps://docs.mongodb.com/realm/tutorial/ios-swift#enable-authenticationBest\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny. Thanks but I’ve got that bit sorted. That’s not what I’m asking about. I’m asking about the email confirmation URL and generating an email confirmation script.",
"username": "Anthony_CJ"
},
{
"code": "",
"text": "I have exactly the same question mark. It seems like you have to set up you own website with scripts to handle the confirmation link. Apparently in previous versions (Realm Cloud or Stitch) they provided this service for the clients but not anymore. I’m trying to develop a desktop app with Realm, so I don’t have a webserver, which could do that job. Thus this feature is not really useable for me.\nUsing custom functions to might be the answer, but because of a lack of examples I don’t see how I still can validate that users provided a valid email (for password resets) .",
"username": "David_Funk"
},
{
"code": "",
"text": "Hi @Anthony_CJ,I think its definitely possible to create a link which redirect to your ios app instead of a web page:iphone - ios url redirect from mail to app - Stack OverflowThis is a bit out of scope for realm per say.The code to confirm is:\nhttps://docs.mongodb.com/realm/ios/manage-email-password-users#confirm-a-new-user-s-email-addressAs I said if you don’t want to use an email you can send the user SMS via twilio or email with a code and have the confirmation function input this code from email.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "<html>\n<head>\n<script src=\"https://unpkg.com/[email protected]/dist/bundle.iife.js\"></script>\n<script>\nconst APP_ID = \"< YOUR APP_ID >\";\nconst app = new Realm.App({ id: APP_ID});\n//Grab Tokens\nconst params = new URLSearchParams(window.location.search);\nconst token = params.get('token');\nconst tokenId = params.get('tokenId');\n//Confirm client\napp.emailPasswordAuth\n .confirmUser(token, tokenId)\n .then(() => displayResult('success'))\n .catch(err => displayResult('error', err))\n//Display Message depending on result\nfunction displayResult(result, err) {\n const message = document.getElementById(\"message\");\n if (result === \"success\") {\n message.innerText = \"Your E-mail address has been verified.\\n\\n You may close this page. Thank you.\";\n }\n else if (result === \"error\") {\n message.innerText = \"Unable to register this user. Please try again to register.\" + err;\n }\n}\n</script>\n\n<title>E-mail Confirmation</title>\n<style>\n h1 {text-align: center;}\n h3 {text-align: center;}\n h5 {text-align: center; background-color: whitesmoke;}\n p {text-align: center;}\n div {text-align: center;}\n</style>\n</head>\n<body>\n <h1>My Apps Name</h1>\n <h3>Thanks for choosing our app!</h1>\n <h5 id = \"message\"></h5>\n</body>\n</html>\n",
"text": "Hello, I was in the same boat as you and here are the steps I did to add email confirmation to my application. I am going to assume you have some knowledge with the MongoDB Realm website.Click on the “Email/Password” provider.\n\nStep 12880×1458 311 KB\nEnsure it is ON.\n\nStep 22880×1458 363 KB\nInsert this code where you make the user sign up.\n\nStep 32880×1458 567 KB\nThis is the super simple code that confirms your user when clicks on the “confirm email” link that they get in their confirmation email.This line here “” allows us to use Realm as a global variable, and the rest is pretty self explanitory.\n** Ensure to put your own APP_ID **Now you have to host the script, this is super simple. Go to your Realm Apps home page and on the left hand side you should see a tab called “Hosting”, click on it.\n\nStep 42880×1458 528 KB\nThen put the code from the previous step into a file and upload that file by clicking “Upload Files”, and selecting that file.\n\nStep 52880×1458 272 KB\nThis is the last step to this process, and the easiest, simple copy the link of the script,\n\nStep 62880×1458 314 KB\n\nand add it to the email.\n\nStep 72880×1458 368 KB\nThats all you need to do to have email confirmation with your iOS application using MongoDB Realm and Swift. The process for password reseting is very identical as well.",
"username": "Sebastian_Gadzinski"
},
{
"code": "",
"text": "I would like to add here that copying the link and pasting it into the Email Confirmaion URL did not work for me because there was extra information included in the copied link. What did work was just adding the file name as a path to the web address like below:https://myappname.mongodbstitch.com/emailconfirmation.htmlI also wanted to note here that if you’re using a React application you don’t need to use vanilla javascript to create an email confirmation page. For example if you had an emailconfirmation route that made use of an emailconfirmation component you could simply paste the address of that route in your Email Confirmation URL like:https://myappname.mongodbstitch.com/emailconfirmationLastly I would like to mention that if the route uses capital letters it won’t load the page in the browser. I also used the single page application option in the settings to run my app.",
"username": "thecyrusj13_N_A"
},
{
"code": "",
"text": "The process for password reseting is very identical as well.@Sebastian_Gadzinski @thecyrusj13_N_A @Anthony_CJCan anyone please help me figure out how I can adapt Sebastian_Gadzinski’s code for password reset?Would be very happy. Too bad that the documentation is so very lacking.",
"username": "Nilom_N_A"
},
{
"code": "app.emailPasswordAuth\n .resetPassword(token, tokenId, passwordTextField.value)\n .then(() => displayResult('success'))\n .catch(err => displayResult('error', err))\n",
"text": "@Nilom_N_AHi there, I got the solution:step 1: put a password-type text field on the website\nstep 2: add a confirm button for the user to click after he inputted the new password\nstep 3: when user clicks the confirm button callthats it. Show result like “Change was successful. You may close this page now.”",
"username": "SirSwagon_N_A"
},
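A minimal JavaScript sketch of the reset-page logic described in the three steps above, assembled from the snippets already posted in this thread; the element ids are placeholders, app is the Realm.App instance created as in the confirmation script, and depending on your Realm Web SDK version resetPassword may take a single object instead of positional arguments:

// Grab the tokens that App Services appends to the password-reset link
const params = new URLSearchParams(window.location.search);
const token = params.get('token');
const tokenId = params.get('tokenId');

// When the user confirms, send the new password together with the tokens
document.getElementById('confirmButton').onclick = () => {
  const newPassword = document.getElementById('passwordField').value; // placeholder element ids
  app.emailPasswordAuth
    .resetPassword(token, tokenId, newPassword)
    .then(() => displayResult('success'))
    .catch(err => displayResult('error', err));
};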
{
"code": "",
"text": "Thank you for the reply.But I mean if the user forgot the password. Then a password confirmation mail has to be sent to his email adress. And the link inside the mail will lead to the site with the change password field.That will most likely need a script like the registration mail example above.",
"username": "Nilom_N_A"
},
{
"code": "app.emailPasswordAuth\n .confirmUser(token, tokenId)\n .then(() => displayResult('success'))\n .catch(err => displayResult('error', err))\n",
"text": "@Nilom_N_A Yeah of cource, you use exactly the same script (HTML file) as posted above and then apply the 3 changes to it as I wrote. And remove the email confirmation content:from the script, obviously",
"username": "SirSwagon_N_A"
},
{
"code": "",
"text": "thanks this really helped!I only had to change the link, like someone else said:\"What did work was just adding the file name as a path to the web address like below:https://myappname.mongodbstitch.com/emailconfirmation.html\"you saved me a lot of time Greetings Harriët",
"username": "Harriet_Waninge"
},
{
"code": "<html>\n<head>\n<script src=\"https://unpkg.com/realm-web/dist/bundle.iife.js\"></script>\n<script>\nconst APP_ID = \"< YOUR APP_ID >\";\nconst app = new Realm.App({ id: APP_ID});\n//Grab Tokens\nconst params = new URLSearchParams(window.location.search);\nconst token = params.get('token');\nconst tokenId = params.get('tokenId');\n//Confirm client\nif (token && tokenId) {\n app.emailPasswordAuth\n .confirmUser({token, tokenId})\n .then(() => displayResult('success'))\n .catch(err => displayResult('error', err))\n} else {\n displayResult('error', 'Missing token or tokenId in URL parameters')\n}\n//Display Message depending on result\nfunction displayResult(result, err) {\n const message = document.getElementById(\"message\");\n if (result === \"success\") {\n message.innerText = \"Your E-mail address has been verified.\\n\\n You may close this page. Thank you.\";\n }\n else if (result === \"error\") {\n message.innerText = \"Unable to register this user. Please try again to register. \" + err;\n }\n}\n</script>\n\n<title>E-mail Confirmation</title>\n<style>\n h1 {text-align: center;}\n h3 {text-align: center;}\n h5 {text-align: center; background-color: whitesmoke;}\n p {text-align: center;}\n div {text-align: center;}\n</style>\n</head>\n<body>\n <h1>My Apps Name</h1>\n <h3>Thanks for choosing our app!</h1>\n <h5 id = \"message\"></h5>\n</body>\n</html>\n\n",
"text": "I updated the script above to work with the newest Realm Web SDK.",
"username": "Nilom_N_A"
},
{
"code": "",
"text": "Thanks for the tutorial! \nDo I have to pay for Dedicated Clusters for hosting to be enabled and host the confirmation page?",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Hey Ciprian, no you do not have to pay for dedicated clusters for hosting to be enabled, I was using a shared cluster. You do have to have a Realm application as the hosting options are on the AppService tab.\nimage779×163 6.01 KB\nThere you can create a Realm Application and connect it to your cluster. Once you complete that process you should see the hosting option on the sidebar tab:\nimage605×907 33.8 KB\n",
"username": "Sebastian_Gadzinski"
},
{
"code": "",
"text": "its says that I have to upgrade it\n\nimage2661×2026 155 KB\n",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "When I use this with the most up to date realm web SDK version, I get an invalid JSON error. Does this still work for you?",
"username": "Campbell_Affleck"
}
] | Email Confirmation Script for User Authentication via Email Address | 2021-01-06T05:44:39.903Z | Email Confirmation Script for User Authentication via Email Address | 11,190 |
null | [
"storage"
] | [
{
"code": "db.shutdown();{\"t\":{$date\":\"2023-04-11T08:49:48.577+02:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1681195788:577487][2009:0x7f84349a7cc0], txn rollback_to_stable: [WT_VERB_RECOVERY_PROGRESS] Rollback to stable has been running for 340 seconds and has inspected 426639 files. For more detailed logging, enable WT_VERB_RTS}}\n{\"t\":{\"$date\":\"2023-04-11T08:50:23.765+02:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":23,\"message\":\"[1681195823:765078][2009:0x7f84349a7cc0], file:index-32660--\n5356062899923388618.wt, txn rollback_to_stable: __posix_open_file, 808: /var/lib/mongodb/index-32660--5356062899923388618.wt: handle-open: open: Too many open files in system\"}}\nulimit -an\ncore file size (blocks, -c) 0\ndata seg size (kbytes, -d) unlimited\nscheduling priority (-e) 0\nfile size (blocks, -f) unlimited\npending signals (-i) 515430\nmax locked memory (kbytes, -l) 64\nmax memory size (kbytes, -m) unlimited\nopen files (-n) 999999\npipe size (512 bytes, -p) 8\nPOSIX message queues (bytes, -q) 819200\nreal-time priority (-r) 0\nstack size (kbytes, -s) 8192\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 515430\nvirtual memory (kbytes, -v) unlimited\nfile locks (-x) unlimited\ncat /proc/2474/limits\nLimit Soft Limit Hard Limit Units\nMax cpu time unlimited unlimited seconds\nMax file size unlimited unlimited bytes\nMax data size unlimited unlimited bytes\nMax stack size 8388608 unlimited bytes\nMax core file size 0 unlimited bytes\nMax resident set unlimited unlimited bytes\nMax processes 140000 140000 processes\nMax open files 750000 750000 files\nMax locked memory unlimited unlimited bytes\nMax address space unlimited unlimited bytes\nMax file locks unlimited unlimited locks\nMax pending signals 515430 515430 signals\nMax msgqueue size 819200 819200 bytes\nMax nice priority 0 0\nMax realtime priority 0 0\nMax realtime timeout unlimited unlimited us\nroot@mongo lib/mongodb# ls -l | wc -l\n262131\n",
"text": "I have upgraded from version 4.2 to 4.4 on Debian Buster. It worked fine after the first start, but then I shut down the Mongo server using db.shutdown(); which was also confirmed as successful. I had to restart the container. The whole system runs in an LXC under Proxmox.Now the instance doesn’t start anymore. I am using a standalone version. Mongo seems to think that I did not shut down the server cleanly, so it is attempting a recovery.Unfortunately, all my attempts to recover have failed with the following error:This doesn’t make sense, since there should be enough resources available.We have a fairly large installation with ~1.5TB data and ~260K files.The recovery always restarts at around ~200K open files.I have checked the forums for a solution but other than increasing the open files limit I did not find anything.I´d be very thankful for any hints!Thank you\nFabio",
"username": "Fabio_Bacigalupo"
},
{
"code": "",
"text": "Hello @Fabio_Bacigalupo ,Welcome to The Community Forums! I saw that you haven’t had any response to this topic yet, were you able to find a solution to this?\nIf not, as mentioned by youIt worked fine after the first startDo you mean that after successfully updating the server, you were able to use it without any issues and later when you shut it down and restarted it, the server started giving the error?I have checked the forums for a solution but other than increasing the open files limit I did not find anything.There is a similar issue resolved, can you take a look at this?Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Server does not start: "Too many files" | 2023-04-11T09:30:08.105Z | Server does not start: “Too many files” | 1,151 |
null | [] | [
{
"code": "https://cloud.mongodb.com/api/atlas/v1.0/groups/{groupId}/databaseUsers/{databaseName}/{username}",
"text": "Hello,we encountered a bug in the MongoDB Administration API with the endpoint: https://cloud.mongodb.com/api/atlas/v1.0/groups/{groupId}/databaseUsers/{databaseName}/{username}For us this happened when deleting IAM ROLE users.We imported the official OpenAPI spec into Postman (MongoDB Atlas Administration API). In Postman we created and deleted IAM role based database users. While creating database user works fine, deleting a database user (that for sure exists and can be found and deleted through the Atlas UI), always returns a 404 error: \"“Cannot find resource /api/atlas/v1.0/groups/GROUP_ID_MASKED/databaseUsers/$external/arn:aws:iam::AWS_ACCOUNT_MASKED:role/development-mongo-full-read-write-access.”You can see from the error message, that the resource identifier matches exactly the API documentation. Therefore we deem this to be a bug.",
"username": "Florian_Bischoff"
},
{
"code": "",
"text": "Okay, this is not a bug in the API itself, but problematic documentation.In the deprecated docs, it shows that you must url-encode the / in AWS arns. This piece is missing from the new API docs. In the new API docs, in the examples, ARNs are not url encoded…",
"username": "Florian_Bischoff"
},
{
"code": "",
"text": "This piece is missing from the new API docs. In the new API docs, in the examples, ARNs are not url encoded…Thanks for providing these details. I’ll take a look and make an internal ticket accordingly.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | [Solved] BUG Report MongoDB Administration API / Delete IAM Role database user | 2023-04-14T09:04:40.118Z | [Solved] BUG Report MongoDB Administration API / Delete IAM Role database user | 484 |
null | [] | [
{
"code": "",
"text": "Overnight, Atlas to v6.0.5 and broke\nour app completely (its not usable without db connection, even if all errors handled properly).It seems latest mongodb Node client v5.2.0 is not supporting the breaking changes:https://www.mongodb.com/docs/manual/legacy-opcodes/#op_queryWe cannot change cluster version as we are using M2 instances.",
"username": "Elis"
},
{
"code": "",
"text": "Hi @Elis,Could you contact the Atlas in-app chat support team regarding this? Please provide them with the errors you’re receiving as well.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Same exact issue here since we’re using some op_query as well with mongoose version 5.X.X. Are you able to solve the problem? I tried to upgrade to mongoose 6.x.x but too many errors to deal with.",
"username": "Jimmy_Cheng"
},
{
"code": "",
"text": "It would be best to create a new post with the exact details/errors. However, in saying so, there were some email(s) sent out several months ago advising to test the new version and included some details on what to do with regards to the upgrade. Refer to also the relevant change notes for 6.0 and Legacy Opcodes.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Atlas M2 Cluster updated over night to v6 broke running | 2023-04-13T10:19:39.441Z | Atlas M2 Cluster updated over night to v6 broke running | 654 |
null | [
"containers"
] | [
{
"code": "",
"text": "I am trying to update the mongo version for my local machine. I noticed that when I update the image from version (from 3.4 to 3.6) then although the mongo image starts, no clients (my app and studio3T) can connect to the new image (get connection timeouts).Another method I tried is creating two new clean containers (one version 3.4 and the other 3.6) and getting a dump from 3.4 and copying it to the 3.6 container and doing a restore. The restore worked however when I tried to connect to the 3.6 database using studio3T, I got a connection issue again.My dump contained one small document.I am quite lost. How come something so simple isn’t possible?",
"username": "SwaggyDoggy_N_A"
},
{
"code": "command: mongo-new:\n image: mongo:3.6\n ports:\n - 27021:27017\n volumes:\n - mongo-test-new:/data/db\n command: --bind_ip_all\n",
"text": "I had to add --bind_ip_all when starting the container.So I changed my docker-compose.yml file to (Added the command: ):This was a change in version 3.6 [ref]",
"username": "SwaggyDoggy_N_A"
}
] | Upgrading mongo locally using docker not working | 2023-04-16T11:16:08.954Z | Upgrading mongo locally using docker not working | 873 |
null | [] | [
{
"code": "config.collectionsconfig.collectionsconfig.collections{\n \"_id\": \"src.data\",\n \"lastmodEpoch\": {\n \"$oid\": \"64340cfd942da8390e3ca588\"\n },\n \"lastmod\": {\n \"$date\": \"2023-04-10T13:19:57.427Z\"\n },\n \"timestamp\": {\n \"$timestamp\": {\n \"t\": 1681132797,\n \"i\": 6\n }\n },\n \"uuid\": {\n \"$binary\": {\n \"base64\": \"HLO3ebKGT4ONcEFkdwWRqw==\",\n \"subType\": \"04\"\n }\n },\n \"key\": {\n \"student.name\": 1,\n \"city\": \"hashed\"\n },\n \"unique\": false,\n \"chunksAlreadySplitForDowngrade\": false,\n \"noBalance\": false\n}\nconfig.collections{\n \"_id\": \"src.data\",\n \"lastmodEpoch\": {\n \"$oid\": \"643674a8ccaf8db708026956\"\n },\n \"lastmod\": {\n \"$date\": \"2023-04-12T09:06:48.649Z\"\n },\n \"timestamp\": {\n \"$timestamp\": {\n \"t\": 1681290408,\n \"i\": 3\n }\n },\n \"uuid\": {\n \"$binary\": {\n \"base64\": \"YzCh1albRzaJ8i3vtCoDLg==\",\n \"subType\": \"04\"\n }\n },\n \"key\": {\n \"student.name\": 1,\n \"city\": 1,\n \"order_id\": \"hashed\"\n },\n \"unique\": false,\n \"chunksAlreadySplitForDowngrade\": false,\n \"noBalance\": false\n}\nconfig.collections",
"text": "Hello,Querying config.collections returns data that includes the current shard keys of a collection. Resharding a collection with a new shard key changes the corresponding entry in config.collections. Is there a way to know the old shard keys of the collection i.e. the shard keys before resharding was performed?Entry in config.collections before reshardingEntry in config.collections after reshardingI can see that config.collections stores the timestamp at which resharding was performed. But does MongoDB store the historic data on shard keys? If yes, where do we find it?",
"username": "Ajay_Mathias1"
},
{
"code": "config.changelogconfig.changelog",
"text": "Hello @Ajay_Mathias1 ,Welcome to The MongoDB Community Forums! You can try to have a look at config.changelog collection in the config server. As this collection is capped in size, thus if you had a lot of DDL operation in the cluster since you originally sharded the collection there is the possibility that the original shard collection event was already deleted from config.changelogNote: Internal MongoDB MetadataThe config database is internal: applications and administrators should not modify or depend upon its content in the course of normal operation.Apart from that, MongoDB does not store historical data on shard keys. Once a collection is resharded with a new shard key, the old shard key information is removed and cannot be retrieved.For future, If you need to retain historical information on shard keys, you can include the old shard key information as part of the data stored in the collection itself, such as in a separate field or document, to ensure it is not lost when the collection is resharded.Regards,\nTarun",
"username": "Tarun_Gaur"
},
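To illustrate the config.changelog suggestion above, a minimal mongosh sketch; the namespace src.data is taken from the question, and the events will only be present if they have not yet been rotated out of the capped collection:

// Run with mongosh against a mongos: list the shardCollection events recorded for the namespace.
// The details of these events typically include the options (such as the shard key) used at the time.
db.getSiblingDB("config").changelog
  .find({ ns: "src.data", what: { $in: ["shardCollection.start", "shardCollection.end"] } })
  .sort({ time: 1 })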
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Getting old shard keys before resharding | 2023-04-12T09:23:17.827Z | Getting old shard keys before resharding | 1,011 |
null | [
"node-js",
"data-modeling",
"indexes"
] | [
{
"code": "db.coll.explain('executionStats').findAndModify({query: {a: 'a', b: 'b', c: 'active', rand: { $lte: 0.444 }}, update: {$set: { c: 'done' }}, fields: { d: 1 }, new: true })\n\nEnterprise mydb> db.coll.findOne()\n{\n _id: ObjectId(\"64284cfd94ce0a6a77acbcdd\"),\n d: '8afa3c50-d496-4c34-b686-33f4fc7c96bf',\n c: 'active',\n b: 'abc',\n a: 'def',\n rand: 0.7887215406852792\n}\n\n\nEnterprise mydb> db.coll.getIndexes()\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n {\n v: 2,\n key: { a: 1, b: 1, c: 1, rand: 1 },\n name: 'a_1_b_1_c_1_rand_1',\n partialFilterExpression: { c: 'active' }\n }\n]\n{\"t\":{\"$date\":\"2023-04-02T09:01:46.723+05:30\"},\"s\":\"W\", \"c\":\"COMMAND\", \"id\":23802, \"ctx\":\"conn536\",\"msg\":\"Plan executor error during findAndModify\",\"attr\":{\"error\":{\"code\":112,\"codeName\":\"WriteConflict\",\"errmsg\":\"WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.\"},\"stats\":{\"stage\":\"PROJECTION_DEFAULT\",\"nReturned\":0,\"executionTimeMillisEstimate\":0,\"works\":2,\"advanced\":0,\"needTime\":0,\"needYield\":1,\"saveState\":1,\"restoreState\":1,\"failed\":true,\"isEOF\":1,\"transformBy\":{},\"inputStage\":{\"stage\":\"UPDATE\",\"nReturned\":0,\"executionTimeMillisEstimate\":0,\"works\":2,\"advanced\":0,\"needTime\":0,\"needYield\":1,\"saveState\":1,\"restoreState\":1,\"failed\":true,\"isEOF\":1,\"nMatched\":0,\"nWouldModify\":0,\"nWouldUpsert\":0,\"inputStage\":{\"stage\":\"LIMIT\",\"nReturned\":1,\"executionTimeMillisEstimate\":0,\"works\":1,\"advanced\":1,\"needTime\":0,\"needYield\":0,\"saveState\":2,\"restoreState\":1,\"isEOF\":1,\"limitAmount\":1,\"inputStage\":{\"stage\":\"FETCH\",\"nReturned\":1,\"executionTimeMillisEstimate\":0,\"works\":1,\"advanced\":1,\"needTime\":0,\"needYield\":0,\"saveState\":2,\"restoreState\":1,\"isEOF\":0,\"docsExamined\":1,\"alreadyHasObj\":0,\"inputStage\":{\"stage\":\"IXSCAN\",\"nReturned\":1,\"executionTimeMillisEstimate\":0,\"works\":1,\"advanced\":1,\"needTime\":0,\"needYield\":0,\"saveState\":2,\"restoreState\":1,\"isEOF\":0,\n",
"text": "This is the schema, query and db index used. Is the db index correct? Sometime it is fast, taking 20ms but sometime it is taking about 200ms. This duration is taken from db slow query log.What I observed is, this slowness is due to write conflicts. I added one rand field to minimize it. It lies between zero and one. But it is sometime causes write conflicts also. I think in production it will cause more write conflicts when collection has 20 concurrent threads picking one doc and updating it. Collection will have data in billions.When I used the sort in findAndModify, it is giving sometime this error message.",
"username": "ironman"
},
{
"code": "",
"text": "You want to get in a habit of enhancing the “rand” with sort, etc. because you got a lot of concurrent activity going on with your original index. I didn’t read your error admittedly, I’m just going by your index alone.db.coll.findAndModify({\nquery: {a: ‘a’, b: ‘b’, c: ‘active’, rand: { $lte: 0.444 }},\nsort: { _id: 1 },\nupdate: {$set: { c: ‘done’ }},\nfields: { d: 1 },\nnew: true\n})Rand helps with preventing writing conflicts, but to expand on why adding sort helps, is you’re essentially forcing MDB to create an order of operation which will help keep things from going into conflict. You can also (I’d highly recommend) shard your cluster so the operations are split between shards, which in combination with my above index for you should cut off a lot of write conflicts you’re experiencing.",
"username": "Brock"
}
] | Db index for findAndModify | 2023-04-02T03:18:20.062Z | Db index for findAndModify | 999 |
null | [
"data-modeling",
"indexes",
"transactions"
] | [
{
"code": "db.coll1.explain('executionStats').findAndModify({query: {status: 0, key1: 'VALUE1', rand: {$lte: 0.34234324234234234 }}, update: {$inc: {status: 1}}, fields: {key2: 1}, sort: {rand: 1}, new: true})\n{\n v: 2,\n key: { key1: 1, rand: 1, key2: 1 },\n name: 'key1_1_rand_1_key2_1',\n partialFilterExpression: { status: 0 }\n },\n\n{\n \"t\": {\n \"$date\": \"2023-04-04T13:48:31.818+05:30\"\n },\n \"s\": \"W\",\n \"c\": \"COMMAND\",\n \"id\": 23802,\n \"ctx\": \"conn239\",\n \"msg\": \"Plan executor error during findAndModify\",\n \"attr\": {\n \"error\": {\n \"code\": 112,\n \"codeName\": \"WriteConflict\",\n \"errmsg\": \"WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.\"\n },\n \"stats\": {\n \"stage\": \"PROJECTION_DEFAULT\",\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 2,\n \"advanced\": 0,\n \"needTime\": 0,\n \"needYield\": 1,\n \"saveState\": 1,\n \"restoreState\": 1,\n \"failed\": true,\n \"isEOF\": 1,\n \"transformBy\": {},\n \"inputStage\": {\n \"stage\": \"UPDATE\",\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 2,\n \"advanced\": 0,\n \"needTime\": 0,\n \"needYield\": 1,\n \"saveState\": 1,\n \"restoreState\": 1,\n \"failed\": true,\n \"isEOF\": 1,\n \"nMatched\": 0,\n \"nWouldModify\": 0,\n \"nWouldUpsert\": 0,\n \"inputStage\": {\n \"stage\": \"CACHED_PLAN\",\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 1,\n \"advanced\": 1,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 2,\n \"restoreState\": 1,\n \"isEOF\": 1,\n \"inputStage\": {\n \"stage\": \"LIMIT\",\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 1,\n \"advanced\": 1,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 2,\n \"restoreState\": 1,\n \"isEOF\": 1,\n \"limitAmount\": 1,\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"filter\": {\n \"status\": {\n \"$eq\": 0\n }\n },\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 1,\n \"advanced\": 1,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 2,\n \"restoreState\": 1,\n \"isEOF\": 0,\n \"docsExamined\": 1,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 1,\n \"advanced\": 1,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 2,\n \"restoreState\": 1,\n \"isEOF\": 0,\n \"keyPattern\": {\n \"key1\": 1,\n \"rand\": 1,\n \"key2\": 1\n },\n \"indexName\": \"key1_1_rand_1_key2_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"key1\": [],\n \"rand\": [],\n \"key2\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": true,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"key1\": [\n \"[\\\"VALUE1\\\", \\\"VALUE1\\\"]\"\n ],\n \"rand\": [\n \"[-inf.0, 0.1602877584416872]\"\n ],\n \"key2\": [\n \"[MinKey, MaxKey]\"\n ]\n },\n \"keysExamined\": 1,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n }\n }\n }\n }\n },\n \"cmd\": {\n \"findAndModify\": \"coll1\",\n \"query\": {\n \"status\": 0,\n \"key1\": \"VALUE1\",\n \"rand\": {\n \"$lte\": 0.16028775844168722\n }\n },\n \"fields\": {\n \"key2\": 1\n },\n \"sort\": {\n \"rand\": 1\n },\n \"remove\": false,\n \"update\": {\n \"$inc\": {\n \"status\": 1\n }\n },\n \"upsert\": false,\n \"new\": true\n }\n }\n}\n",
"text": "This is the query.This is the index.Following is the error I am getting in mongo log. How to resolve this conflict. I am using sort and added rand field so that write conflict is minimised when multiple thread is trying to pick one document.",
"username": "ironman"
},
{
"code": "session = db.getMongo().startSession();\nsession.startTransaction();\ntry {\n const coll1 = session.getDatabase(\"dbname\").coll1;\n const doc = coll1.findOneAndUpdate(\n { status: 0, key1: 'VALUE1', rand: { $lte: 0.34234324234234234 } },\n { $inc: { status: 1 } },\n { returnNewDocument: true }\n );\n // Do some work with the updated document\n session.commitTransaction();\n} catch (error) {\n session.abortTransaction();\n throw error;\n} finally {\n session.endSession();\n}\n\n",
"text": "Rand isn’t working how you want it to.Try this:I put in transactions to isolate everything, and then I put in findOne and new document parameters when wrapped in transactions will fight the conflict you’re having, and hopefully prevent it.I hope this helps.",
"username": "Brock"
}
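As a complement to the answer above, a hedged Node.js sketch of an application-side retry for the WriteConflict error (code 112); the field names come from the question and the retry count is an arbitrary choice:

// Hypothetical retry wrapper. Most single-document write conflicts are retried by the server
// itself, so a conflict usually only reaches the application inside a multi-document transaction.
async function claimOneDocument(coll) {
  for (let attempt = 1; attempt <= 3; attempt++) {
    try {
      return await coll.findOneAndUpdate(
        { status: 0, key1: 'VALUE1', rand: { $lte: Math.random() } }, // random threshold, as in the question
        { $inc: { status: 1 } },
        { projection: { key2: 1 }, sort: { rand: 1 }, returnDocument: 'after' }
      );
    } catch (err) {
      if (err.code !== 112 || attempt === 3) throw err; // 112 = WriteConflict; rethrow anything else
    }
  }
}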
] | Write conflict in findAndModify when sort is used | 2023-04-04T08:36:06.037Z | Write conflict in findAndModify when sort is used | 1,170 |
null | [
"queries"
] | [
{
"code": "failed to execute source for 'node_modules/argon2/argon2.js': TypeError: Value is not an object: undefined\n\tat node_modules/@mapbox/node-pre-gyp/lib/pre-binding.js:29:18(4)\n\tat node_modules/argon2/argon2.js:34:30(93)\nexports = async (loginPayload) => {\n const argon2 = require('argon2');\n \n let response;\n\n \n const user = await context.services.get(\"mongodb-atlas\").db(\"database\").collection(\"collection\").find({email: loginPayload.email}).toArray();\n \n \n if(!user) {\n response = {\n message: 'User not found',\n status: 400\n }\n \n return JSON.stringify(response);\n } \n \n const verify = await argon2.verify(loginPayload.password, user.password);\n if(!verify) {\n response = {\n message: \"Invalid username or password\",\n status: 400\n }\n \n return JSON.stringify(response);\n }\n \n \n return {message: 'authenticated', user: JSON.stringify(user)};\n };\n",
"text": "I have tried every stitch of documentation I could find on this issue, and the resolution still evades me. I’m trying to write a simple function that verifies user login information based on an existing collection.In all instances you can see the dependency on the dependency tab, but the function refuses to find it. I just keep getting this error:super simple function that is trying to use it, with sensitive information removed of course:I could really use some good juju here if anyone has any… that is of course actually working.",
"username": "Jonathan_Emmett"
},
{
"code": "",
"text": "Curious whether you ended up resolving this? I’m having the exact same issue (different dependency though) where the dependency is added on the dependency tab but I get that error when the function is run",
"username": "Campbell_Affleck"
},
{
"code": "",
"text": "hey @Campbell_Affleck , No I didn’t sorry man. I ended up ditching it and going a different direction all together. I’m using supabase now.",
"username": "Jonathan_Emmett"
},
{
"code": "",
"text": "No worries, thanks for the quick response! I’ll give supabase a look.For anyone stumbling on this, I think the error was due to the dependency requiring a more recent Node.js version. As far as I can tell, the Atlas function environment is still stuck using v10, which the current versions of a lot of dependencies are no longer compatible with (argon2 requires v14). In my case (using mailgun.js), I was able to get past this specific error by specifying an outdated mailgun version, but with newer dependencies you might not have that option.",
"username": "Campbell_Affleck"
}
] | Function is unable to find dependency | 2023-03-12T21:40:01.217Z | Function is unable to find dependency | 804 |
null | [
"aggregation",
"data-modeling",
"python"
] | [
{
"code": "",
"text": "hi i need how to decrypt in aggregation when I’m encrypted in field level",
"username": "Harish_Kumar3"
},
{
"code": "",
"text": "Could you give an example of what you’d like to do?",
"username": "Shane"
},
{
"code": "",
"text": "actually i encrypted the email field and stored in database and in aggregation i need to decrypt\nlike this\npipeline = [\n{\n‘$project’: {\n‘email’: {\n‘$decrypt’: {\n‘input’: ‘$email’,\n‘key’: {\n‘provider’: ‘local’,\n‘key’: encryption_key\n}\n}\n}\n}\n}\n]result = DataEmployee.objects().aggregate(*pipeline)\ni need to decrypt the email filed in aggregationbut mongocompass show the stage is invalid",
"username": "Harish_Kumar3"
},
{
"code": "",
"text": "The server is intentionally not able to decrypt the field within an aggregation stage. Only the application is able to decrypt the data. This is the purpose of client side field level encryption. Why do you want to decrypt the data within the aggregation pipeline?",
"username": "Shane"
},
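To illustrate the point above, a hypothetical Node.js sketch: with client-side field level encryption, decryption is configured on the client, so a normal find returns plaintext and there is no $decrypt aggregation stage; the key-vault namespace, database/collection names, and local master key below are placeholders:

const { MongoClient } = require('mongodb');

async function readDecrypted(uri, localMasterKey) {
  const client = new MongoClient(uri, {
    autoEncryption: {
      keyVaultNamespace: 'encryption.__keyVault',        // placeholder namespace
      kmsProviders: { local: { key: localMasterKey } },  // placeholder 96-byte local master key
    },
  });
  // Reads through this client return the encrypted field already decrypted by the driver.
  return client.db('mydb').collection('DataEmployee').findOne({});
}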
{
"code": "",
"text": "@Harish_Kumar3This isn’t something that would be done in an aggregation but by application, also, if you’re doing the Luxlin hacking tutorial, MongoDB patched all of that already months ago in the last security patch.The last update to the Realm React Native SDK also cut off the ability to intercept the in client aggregations in this same manner as well.The Luxlin hacking tutorial is no longer a working guide, if that’s what you’re following to do this. Just an FYI.BrnP4LMs and TVCOD4U’s hacking guide for intercepting aggregate data to decrypt aren’t valid anymore either, and haven’t been for some time.MongoDB accepted the pushes from C1PH3R Group and others that patched a lot of those issues.",
"username": "Brock"
}
] | How to decypt in aggreagation | 2023-04-13T08:58:16.151Z | How to decypt in aggreagation | 1,260 |
null | [] | [
{
"code": "",
"text": "Hi all, the agenda of this post is to discuss about how choosing to use MongoDB (existing infrastructure) GridFS features compared to setting up (purchasing) a possible new file server like samba as a file storage system.Key points we could discuss.\nCarbon efficiency.\nNetwork efficiency.\nOptimization.\nSecurity.",
"username": "Ke_Wei_Tan"
},
{
"code": "",
"text": "@Ke_Wei_Tan If you’re relying on Samba when you have GridFS you’re wasting resources…If you’re trying to sell GridFS via MongoDB, going by your points in order:You don’t need to use other services or servers when it’s all self-contained in MongoDBs GridFS. This means there’s a lot less power needing to be used vs making other fileserver like Samba.It connects to all MongoDB infrastructure organically, meaning you don’t need to be concerned about additional SDN or other networking solutions to implement.Access to data and speed of data being already able to store large files etc, this makes it very efficient organically.Authorization and authentication control integrations, as well as at rest encryptions paired with in transit encryptions all come together by default, so there’s a lot less work involved securing data going to and from, and in GridFS.",
"username": "Brock"
},
{
"code": "",
"text": "This topic was automatically closed after 180 days. New replies are no longer allowed.",
"username": "system"
}
] | Let discuss Sustainable Software Engineering, MongoDB GridFS V.S File Server (SAMBA) | 2023-03-23T09:48:35.946Z | Let discuss Sustainable Software Engineering, MongoDB GridFS V.S File Server (SAMBA) | 1,069 |
[
"serverless",
"app-services-data-access"
] | [
{
"code": "{\n \"tenant_id\": \"%%user.data.tenant_id\"\n}\n{\n \"%%database.name\": \"%%user.data.tenant_id\"\n}\n",
"text": "Hi,We are building a multi-tenant application and we would like to store data related to different end users in the same cluster. The data for each user would be stored in their respective databases or collections in the same cluster. The data needs to be queried from a browser, each end user should only be allowed to access their own data.We are using the custom JWT authentication because it allows us to pass an end-user id in the token. We configured the authentication to grab this id and store it in the user object.\nimage1621×743 72.1 KB\n\n\nimage1538×469 42.7 KB\nThen we created a rule to allow each user to access documents that have this id\n\nimage942×292 20.3 KB\nWith this approach, we have to store the id in each document. This seems unnecessary, it will bloat out data and increase our data transfer.Ideally, we would like to set a rule based on the database name or collection name. We would like to create this database or collection with the id and match it with one included in the JWT token.Instead of our rule:we would have something like:We combed the documentation many times but I could not find such a feature. Did we miss something? Is there a better way to implement this?Note: we are using serverless instances and the Web SDK.",
"username": "MattB"
},
{
"code": "",
"text": "Hi @MattB,You are correct that there’s no way to represent this with the existing rules expansions. Feel free to file a feature request here: Atlas App Services: Top (238 ideas) – MongoDB Feedback Engine.In the meantime, the scheme you are currently using is what we would recommend.",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "Thanks for the quick answer!I filed the feature request: Add expansion for database and collection in Rules Expressions – MongoDB Feedback Engine",
"username": "MattB"
},
{
"code": "tenant_id",
"text": "Actually, I forgot to ask - are you trying to represent this in a default rule, or collection rule?If you use specify roles for each collection independently (as opposed to default roles), you could instead hardcode the respective tenant_id in the role for each collection, and then you won’t need the field in your documents. This may be cumbersome to maintain, but you could come up with an automated workflow to generate the correct rules and deploy them to your app via the CLI, github integration, or admin API.",
"username": "Kiro_Morkos"
},
{
"code": "{\n \"%%user.data.tenant_id\": \"640c4af0-e748-4766-8bd9-b4b7e2fadcae\"\n}\n",
"text": "That’s a much better idea! Thanks!I was able to use this rule:If I can ask a follow-up question. We may have a case where a user has access to multiple tenants. So we tried to pass an array of tenant ids in the JWT token claims.\nIs there a way to check if “640c4af0-e748-4766-8bd9-b4b7e2fadcae” is in the array stored in the user’s provider data? My attempts were unsuccessful. I think the user’s provider data is interpreted as a string and not an array.",
"username": "MattB"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Database or collection-level data access rules with custom JWT authentication | 2023-04-11T10:06:43.251Z | Database or collection-level data access rules with custom JWT authentication | 1,319 |
|
null | [] | [
{
"code": "",
"text": "Please how do I start a mongoldb atlas cli",
"username": "Poise_Paul"
},
{
"code": "mongodb-atlas-cli",
"text": "Hi,\nYou can start from this tutorial (install mongodb-atlas-cli) and connect to an existing Atlas account",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "Hi I’m trying to open the atlas cli shell to log into my previously made cluster",
"username": "Poise_Paul"
},
{
"code": "",
"text": "try to follow those steps",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "I have the atlas cli installed already but I’m unable to open the atlas shell cli",
"username": "Poise_Paul"
},
{
"code": "",
"text": "Hello @Poise_Paul\ncan you please elaborate on the steps how you try to open the shell and, in case, which error you get.\nWhen you post code/log messages please have a look on this quick guide from @Stennie_X Formatting code and log snippets in posts\nRegards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "It should be said that if you’re under the impression that you can access brew, npm, or install of mongo-atlas-cli from the IDE. This is the the source of your error. You will have to go into your terminal or native command line for your device, and use either homebrew or some equivalent to download the mongodb atlas CLI on your device.Thereafter, you will perform the steps in the instructions whereby it is recommended that you create a new email and password to manage this account. Upon which, you will run the first line of code from the 1st step and when you press on Mongo Authentication, and login. It will prompt you to enter the registration code that it provided from running the first line of code. After which, the IDE will check that you have connected your native CLI to the Atlas CLI, and you can click next because you’ve completed this stage of the practice.",
"username": "Ivan_Spence"
}
] | Mongo Db Atlas CLI | 2022-07-11T22:04:27.233Z | Mongo Db Atlas CLI | 3,817 |
null | [] | [
{
"code": "",
"text": "Hey everyone! I have been lurking the forums for a bit of time and am curious what some of the best ways to go about troubleshooting Device sync or triggers are because I’ve been trying some new things at my job using device sync and I get weird errors or problems and I’d like to know some more wholesale ways to tackle and go about troubleshooting problems. The documentation doesn’t have anything for troubleshooting problems and errors so any advice for me to learn to be more independent with this product line would be greatly appreciated! Thank you in advance!!!",
"username": "UrDataGirl"
},
{
"code": "",
"text": "EDIT NOTE: - Everyone is going to have a different approach, these are just some approaches that worked for me, also note that some of the “extra” I always did, is something that just I did, if others in TS do it too, that’s awesome and amazing. But this shouldn’t be an expectation put on the TS department, I just liked to go the extra mile when I troubleshoot and resolve issues.There’s a lot more detail in my blog for troubleshooting problems with Realm/Device Sync/App Services, as well as more details in something else I’m writing. But the below is just stuff I do and did to aid the process.@UrDataGirl is there any specific type of issue you’d like addressed in how to troubleshoot?So this comes from 2 years on Technical Services side, this is how I personally went about troubleshooting various issues:Logs are amazing, the more logs the better.\n– My coworkers would use Splunk and other services to look at peoples stuff, but I usually cheated, because constantly doing all of the queries over and over just felt stupid to me, so I automated queries and outputs. I would route the customer splunk logs (as a customer you can route your logs with the data api) to a dashboard I built that funneled results to an Atlas test cluster I had, and from there I just put in the customers Realm App ID from the URL of the app and populate their logs. And then index the error messages and then just do a quick sort to separate all the errors that were the same thing etc. Then forward to another tab in my dashboard that would show the data trends for the errors and see what exactly things were looking like, then compare to the Atlas system metrics.– For you on the above you can do the following:\n— Run the Data API to forward logs to a central collection in your Atlas cluster, then from there connect the BI connector to Tableau or Flask etc. and run it to a dashboard and run the data models etc. the sorts and so on. for very similar effect, the only issue is Splunk Logs support sees may have better details than what you may see, so any errors that are unclear or not making sense definitely do open a ticket for support to look at.Device Sync Issues\n– Pending on the error message and behavior, I would look at the core issue, for say iOS issues, I would always ask for TestFlight logs whenever possible. etc. Crashalytics and TestFlight were so much better in details of what is happening in an app than the Device Sync logs are for anything client side. Those are almost always gold.\n– Having language specific knowledge for your SDK choice is a must, (Like me, I know 11 languages and very fluent in 5 of them) otherwise you’re going to just look a fool when the issue can be a functional problem with the code itself, and you’re not seeing it because you don’t know the language.\n— Easy case example:\n---- I had a Swift SDK issue that was at the point the customer was about to abandon the product because they couldn’t make it work without corruption issues, etc. And odd errors coming up, the way I solved/resolved a lot of the issues was by identifying it wasn’t any one issue, but multiple issues. Resource conflicts for dependencies, mismatching dependencies needed from one package to the next etc. And then threading issues with the pointers.\n---- Knocking things out one at a time, was far above and beyond what anyone in TS is required to do, don’t mention this on any surveys etc. TS doesn’t care for the recognition or needs it, they just want to help you be successful with your app. 
(And they’ll get scolded for it. not like I care tbh, because I did my job and the customer is still a customer and is happy with Realm, I still talk to them as they found me on LinkedIn) But the resolution just came down to threading and timing services to share and engage the resources as they were needed and isolating dependency versions between services that needed one version vs another version. Then after that all the problems associated with Realm crashing etc. (Device Sync today) went away. But this was all because of my knowledge of Swift it was even possible to help the customer in this example.Look at the WHOLE environment.\n– Don’t be a fool and waste time with tunnel vision on one thing, because that issue can just be a symptom of other issues.\n— Great case example, I worked with a customer who had issues with Axios for the better part of almost 2 years that no one had actually addressed root cause. Was found to be performance issues with Axios, by moving to a different service they gained the functionality and speeds they were desiring. That wouldn’t have been possible to determine, or find had I not took a step back and walked through their entire environment with services to map out what is supposed to do what, and how everything was connected in their environment.\n– Look at all dependencies, all SDKs and all APIs in use for the ecosystem that Atlas or Realm is interacting with, in fact the issue may not even be related to Device Sync, and may be what’s up in Atlas. Or it could be an issue with something not even related to either.\n— Great case example:\n----I handled a customer who was using PostGRE SQL that was routing data to production systems running CNC machines etc. It would then forward data to a middleware translation service that converted JSON to the appropriate file types the CNC machines were using and then would convert back the results to JSON and so on. Without knowing this service existed, it would have been impossible to determine why the Realm data on the clients were getting corrupted data displays, I spent 2 weeks with the customer walking them through rebuilding the middleware service so that the Device Sync App collecting the data and controlling the CNC machines was properly taking in the data from the middleware service.\n----- This goes back to knowing the language of the SDK, and taking a step back to see the WHOLE environment, not just focusing on the one part. The amount of engineering resources that would have been wasted on something that would have been enormous had the middleware not been given a deeper look.@UrDataGirl Another thing is USE CASEAssessing whether the use case fits the product is a big deal, this is actually the very first thing I consider when I look at a Relam issue, is whether or not it fits Device Syncs use case criteria, or if it should be a Core/Atlas use case situation. You’d be surprised how many people mix the two for the incorrect use case. Several times I’ve worked with customers for instance to migrate from a MongoDB Driver, to a Realm SDK, and vice-versa because they are trying to use the service for the wrong thing, and education on which for what is ambiguous at best in the literatures. So always, always verify the use case meets the criteria associated to client vs backend, and whether or not SDK or Driver is in play.These are just some examples and case examples, but these are main things that I personally do and have done to troubleshoot Realm(Device Sync). 
If you would like, you’re welcome to present specific issues you’re experiencing and I am more than happy to walk you through how we can troubleshoot it.",
"username": "Brock"
},
{
"code": "",
"text": "GRAPHQLIs its own troubleshooting and support category on its own, you’re going to have to do a lot of Wack O Mole tactics to determine what will make your GraphQL work, or whether you need to spin up an Apollo GraphQL server and have that navigate your GraphQL stuff.For instance GeoJSON is supported by MongoDB, and it’s supported by Apollo GraphQL, but it’s not support in Atlas GraphQL, or Realm.And Atlas GraphQL doesn’t support custom scalars, so you can’t use enum scalars either. So when you use GraohQL in Atlas you need to not only understand GraphQL, but you need to take the time and get acquainted with the limitations posed with Atlas GraphQL services and what needs to be implemented by third party services.Device Sync, Realm, and the Apollo GraphQL mobile clients can all work together on the mobile device just fine, and interface between Atlas and an Apollo GraphQL server all together very, very well. And the GraphQL Client is very performant so you’re not really causing much tech debt with it at all if you know how to use GraphQL.But that’s something to consider too based on whatever you’re troubleshooting, is what’s a limitation and what isn’t.@UrDataGirl In response to your below questions.I’m always down for extra work, and GeoJSON is common for graphic coordinate data.The main reason of using and choosing MongoDB, particularly 5.0 and above is the time-series data support it offers, having Realm not support GeoJSON is crippling for people who want to use Atlas for the mobile apps and use GeoJSON for all sorts of use cases. Transportation apps, delivery apps, having to plot coordinates for getting to a particular location or geottracking of an asset.Lots of use cases, but generally the typical work around is just implementing the Apollo GraphQL client into your app, and translating whatever you need to React.Native if not already a React app etc. And just connecting MongoDB to an Apollo GraphQL server and you get all of the time series data and the link to your mobile app.I actually ran this through in an interview a few weeks ago for a trucking fleet to prove concept. I failed the test due to running out of time, but still finished it 38 minutes pass the timeline (interviewer wanted to see how it works and how to do this.)Realm handled everything else in the app except for the time series data and the GeoJSON. That was all routed from Atlas to Apollo spun up in AWS. If I didn’t have to setup that extra stuff I would have made time, but anyways, that’s the general thing to it.As far as JWT SSO troubleshooting the most common issue lately, is the fact Functions are Node 10 like I mentioned above, make sure you’re not using JWT 9 because it’s not supported right now, otherwise verify your certificate and headers with your other services.You can search my account on here and find my AAD and IAM tutorials for interfacing Realm with them and see if those help. If you still have problems definitely let me know and I don’t mind jumping on a discord call or something, you’re always welcome to ping me.Regarding the dashboard stuff, yeah definitely it’ll all work. The Data API makes log forwarding really easy and simple, I especially like routing via Python as I have a bunch of things built with Python that goes to TKinter etc. 
with a lot of things handled with Pandas and Tensorflow to make stuff much easier to find and determine.If you want I can give you a general breakdown anytime how to implement a dashboard like that, the Data API can also interface with Splunk if your company or agency uses it, too.@UrDataGirl In response to How to troubleshoot Device Sync? - #7 by UrDataGirlThey didn’t ask for it, I mentioned it several times in meetings and no one cared to want it. Even in screen sharing showing how easy it was to just click buttons and get the outputs from then, nobody wanted the dashboard for themselves. -Insert shrugs here-@UrDataGirl regarding How to troubleshoot Device Sync? - #8 by UrDataGirl\nIt’s a way of getting around post limits but that’s fine, if you want later we can probably just build an example on out tonight if you’d like? JLMK. But yeah, if you’d like to move this over there for a more fluid conversation you’re welcome to message anytime.",
"username": "Brock"
},
{
"code": "",
"text": "Thank you for such a great response! Hey do you do consulting per chance? Can I message you on the other place? (^_^) and one error I have is a trigger keeps failing and turning off that handles my sso using jwt for my mobile app. How would you troubleshoot that and what would you use geojson for? Thx in advance!",
"username": "UrDataGirl"
},
{
"code": "",
"text": "Can you also walk thru more of this dashboard to troubleshoot like can we run it to a NOC or SOC?",
"username": "UrDataGirl"
},
{
"code": "",
"text": "Ok yeah that might be it I’m on jwt 9 so I just have to downgrade it I guess? I’ll work on that and maybe reach out to you but thank you so much for your information because it’s hard to find straight forward stuff like this.",
"username": "UrDataGirl"
},
{
"code": "",
"text": "Hey wait why didn’t your coworkers use the dashboard too? @Brock",
"username": "UrDataGirl"
},
{
"code": "",
"text": "Editing posts instead of replying is weird but I like weird haha. I’m baffled that nobody else wants something like that but I’m going to reach out in discord and move this over thank you again! @Brock",
"username": "UrDataGirl"
}
] | How to troubleshoot Device Sync? | 2023-04-15T19:07:04.002Z | How to troubleshoot Device Sync? | 943 |
null | [
"node-js",
"java",
"python"
] | [
{
"code": "",
"text": "Does the problem known as slow train only apply to Nodejs mongodb driver? The problem description goes like this:While Node.js is asynchronous, MongoDB is not. Currently, MongoDB uses a single execution thread per socket. This means that it will only execute a single operation on a socket at any given point in time. Any other operations sent to that socket will have to wait until the current operation is finished.The description clearly states that this is a mongodb limitation, but I have only scene this problem mentioned in nodejs community and in mongodb driver for Nodejs, that is why I am asking this question, so maybe a better way to rephrase this question is that, why is the problem only seems to be a concern in nodejs? Do other programming languages like python or java have a way to bypass the limitation of which I am not aware?",
"username": "Sadegh_Hosseini"
},
{
"code": "maxPoolSize=5",
"text": "Hey @Sadegh_Hosseini,The description clearly states that this is a mongodb limitation, but I have only scene this problem mentioned in nodejs community and in mongodb driver for Nodejs, that is why I am asking this question, so maybe a better way to rephrase this question is that, why is the problem only seems to be a concern in nodejs? Do other programming languages like python or java have a way to bypass the limitation of which I am not aware?The link you shared is to a much older version of the Node.js driver, however the docs for recent versions of the driver contain similar information. Each driver contains an FAQ section that addresses common concerns, however as you’ve surmised this isn’t really a Node-specific challenge.The MongoDB server doesn’t currently support socket-level multiplexing which results in operations blocking as a request is processed and a response is returned. For this reason drivers support connection pools to address this.For example, setting a maxPoolSize=5 will ensure up to 5 connections can be checked out to service concurrent operations before subsequent operations begin to queue. Most operations complete extremely quickly and return their checked out connections to the pool so the size of 5 here is just meant to be illustrative - likely having a pool of 5 would only result in 1-2 connections ever being checked out at once.",
"username": "alexbevi"
},
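As a small illustration of the pooling behaviour described above; the URI is a placeholder, and other drivers (PyMongo, the Java driver, etc.) expose the same maxPoolSize setting:

const { MongoClient } = require('mongodb');

// Up to 5 operations can run concurrently, each on its own pooled connection;
// additional operations wait until a connection is returned to the pool.
const client = new MongoClient('mongodb://localhost:27017', { maxPoolSize: 5 });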
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Does the "slow train" problem only apply to Nodejs Mongodb driver? | 2023-04-15T08:10:59.724Z | Does the “slow train” problem only apply to Nodejs Mongodb driver? | 815 |
null | [
"aggregation",
"queries",
"node-js",
"mongoose-odm",
"compass"
] | [
{
"code": "const connectToMongo = require('./db');\nconst express = require('express')\nconnectToMongo();\n\nconst app = express()\nconst port = 3000\n\napp.use(express.json())\n\n//Available Routes\napp.use('/api/auth', require('./routes/auth'))\napp.use('/api/auth', require('./routes/notes'))\n\napp.listen(port, () => {\n console.log(`Example app listening at http://localhost:${port}`)\n})\nconst mongoose = require('mongoose');\nconst { Schema } = mongoose;\n\nconst UserSchema = new Schema({\n name:{\n type: String,\n require: true\n },\n email:{\n type:String,\n require:true,\n unique: true\n },\n password:{\n type:String,\n require:true\n },\n timestamp:{\n type:Date,\n default:Date.now\n }\n });\n\n module.exports = mongoose.model('user', UserSchema)\nconst express=require('express');\nconst User = require('../models/User');\nconst router=express.Router()\n\n\nrouter.get('/', (req, res)=>{\n console.log(req.body)\n const user = User(req.body)\n user.save()\n res.send(req.body)\n})\n\n\nmodule.exports = router\nconst mongoose = require('mongoose')\nconst mongoURI = \"mongodb://localhost:27017/\"\n\nconst connectToMongo=()=>{\n mongoose.set(\"strictQuery\", false);\n mongoose.connect(mongoURI,()=>{\n console.log(\"Connected to Mongo Successfully\")\n })\n}\n\nmodule.exports = connectToMongo;\n{\n \"name\":\"pratik\",\n \"email\":\"[email protected]\",\n \"password\":\"6626\"\n}\nusers.insertOne()",
"text": "I’m new in using MERN Stack & I’m trying to connect Mongo and Node but facing this issue while inserting Data into Database, using MongoDb CompassIndex.jsUser.jsauth.jsdb.jsThunderClient Request:Error: const err = new MongooseError(message); ^MongooseError: Operation users.insertOne() buffering timed out after 10000ms at Timeout. (D:\\Study\\React\\MERN\\inotebook\\backend\\node_modules\\mongoose\\lib\\drivers\\node-mongodb-native\\collection.js:175:23) at listOnTimeout (node:internal/timers:564:17) at process.processTimers (node:internal/timers:507:7)I guess the problem is because of the newer version, I’m trying to read the Docs and StackOverFlow but unable to Solve this Error what should I Do",
"username": "Pratik_Chitte"
},
{
"code": "",
"text": "You can try lowering the node.js version.",
"username": "Stowekr_clydefs"
},
{
"code": "",
"text": "same issue …solution anyone???",
"username": "Akshat_Kumar2"
},
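For anyone hitting this, a minimal sketch of a db.js that awaits the connection instead of passing a callback (newer Mongoose versions no longer accept a callback here), so model calls don't buffer and time out before the connection exists. The database name in the URI is an assumption; using 127.0.0.1 instead of localhost also avoids the IPv6 resolution issue on newer Node.js versions:

```javascript
const mongoose = require('mongoose');
// 127.0.0.1 avoids `localhost` resolving to ::1 on Node 17+; the db name is illustrative
const mongoURI = 'mongodb://127.0.0.1:27017/inotebook';

const connectToMongo = async () => {
  // connect() returns a promise; awaiting it surfaces connection errors instead of
  // letting later queries buffer and time out after 10000 ms
  await mongoose.connect(mongoURI);
  console.log('Connected to Mongo successfully');
};

module.exports = connectToMongo;
```

index.js would then call connectToMongo().catch(console.error), or await it, before handling requests.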
{
"code": "",
"text": "This topic was automatically closed after 180 days. New replies are no longer allowed.",
"username": "system"
}
] | Error While Inserting Data Through ThunderClient Into Mongodb | 2023-01-23T13:39:37.822Z | Error While Inserting Data Through ThunderClient Into Mongodb | 1,538 |
null | [
"replication",
"compass",
"mongodb-shell"
] | [
{
"code": "mongoshsystemLog:\n destination: file\n path: /opt/homebrew/var/log/mongodb/mongo.log\n logAppend: true\nstorage:\n dbPath: /opt/homebrew/var/mongodb\nnet:\n bindIp: 127.0.0.1\n ipv6: true\nreplication:\n replSetName: rs0\n oplogSizeMB: 100\n",
"text": "Hello, I installed mongo locally using homebrew and added a replicaset and everything seems working fine when I use mongosh. then I installed MongoCompass and whenever I try to connect the DB the service killed I have to restart the mongo service from homebrew.\nand here is my mongod.conf filecan you please advise me?Edit: I tried to remove the replication from mongod.conf and I was able to connect from MongoCompass normally, yet I need the replication as my app working with sessions",
"username": "Hani_Ghazi"
},
{
"code": "",
"text": "Why mongod gets killed when you attempt to connect from Compass?\nWhat connect string you used when connecting to your replica\nPlease show output of rs.status() when you connected from mongosh",
"username": "Ramachandra_Tummala"
}
] | Mongo service killed when connect from MongoCompass | 2023-04-14T13:07:42.971Z | Mongo service killed when connect from MongoCompass | 529 |
null | [
"data-modeling"
] | [
{
"code": "{\n name: {\n type: String,\n required: true,\n trim: true,\n },\n email: {\n type: String,\n required: true,\n index: { unique: true },\n lowercase: true,\n trim: true,\n },\n events: [{ type: Schema.Types.ObjectId, ref: 'Event' }],\n formattedAddress: String,\n password: {\n type: String,\n required: true,\n trim: true,\n minlength: [6, 'Password must be 6 characters long'],\n },\n }\n{\n name: {\n type: String,\n required: true,\n trim: true,\n },\n startsAt: {\n type: Date,\n required: true,\n },\n endsAt: {\n type: Date,\n required: true,\n },\n prizeMoney: {\n type: Number,\n required: true,\n },\n subscriptionCategory: [\n {\n name: String,\n price: Number,\n tasks: [\n {\n name: String,\n description: String,\n scores: Number,\n },\n ],\n }\n ]\n subscribers: [{ type: Schema.Types.ObjectId, ref: 'User' }],\n },\n",
"text": "I’m creating an app for which users can register and join different events (by paying specified amount). An event can be subscribed in three different ways i.eLet’s name this as SubscriptionCategory. There would be different set of tasks in each SubscriptionCategory.Now to complete event users will have to complete the given tasks. Each task will have some scores which user can gain after completion of each task. At the finishing of event a user or a group which will have maximum scores will be declared as winner and would be eligible for the prize money.Schema I have been thinking is below:— DATEBASE NAME ( Social-Events )\n— — User ( collection )— — Event ( collection )Now the issues I’m facing are:Should I suppose to make kind of bridge collection similar to Bridge table we use to make in relational database?I’m sorry I’m new to NO-SQL and I can’t figure that on my own. I’ve followed basic guideline to model a database from attached link.",
"username": "Ali_Waqar"
},
{
"code": "approvedthings that are queried together should stay together",
"text": "Hey @Ali_Waqar,Welcome to the MongoDB Community Forums! I’m not able to store how a user subscribed to the event?You can do this in your event collection itself. You can add a new field in that collection that specifies how a user subscribed to that event.The Tasks user have completed in an eventFor this, I would recommend creating a new collection. In this, you can create references to both user and event collections for which the task was completed, as well as the details of the task completion. This should also allow you to store the tasks that each user has completed for each event, along with whether or not they have been approved by the admin by having a field named approved which can be of type boolean which you can set to true once the task has been approved by the admin.Additionally, I would like to point out a few things about data modeling in MongoDB. A general thumb rule to follow when designing schema in MongoDB is that things that are queried together should stay together. Thus, it may be beneficial to work from the required queries first, making it as simple as possible, and let the schema design follow the query pattern. You can use mgeneratejs to create sample documents quickly in any number, so the design can be tested easily.I’m also attaching some more resources that you should find useful:\nMongoDB Data Modelling Course\nMongoDB Schema DesignHope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
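A rough sketch of the task-completion collection suggested above, expressed as a Mongoose schema; the model and field names are assumptions for illustration, not part of the original answer:

```javascript
const mongoose = require('mongoose');
const { Schema } = mongoose;

const TaskCompletionSchema = new Schema({
  user: { type: Schema.Types.ObjectId, ref: 'User', required: true },
  event: { type: Schema.Types.ObjectId, ref: 'Event', required: true },
  taskName: { type: String, required: true },
  scores: Number,
  approved: { type: Boolean, default: false }, // flipped to true once an admin approves
  completedAt: { type: Date, default: Date.now },
});

module.exports = mongoose.model('TaskCompletion', TaskCompletionSchema);
```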
{
"code": "",
"text": "mgeneratejsThanks for the help @Satyam. I feel I’m on right track then ",
"username": "Ali_Waqar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | I want to create a schema for an application which includes users, events and event tasks | 2023-04-03T15:25:59.635Z | I want to create a schema for an application which includes users, events and event tasks | 873 |
null | [
"spring-data-odm"
] | [
{
"code": " @Override\n public Optional<Budget> updateBudgetContent(BudgetId budgetId,\n Optional<String> title,\n Optional<BigDecimal> limit,\n Optional<TypeOfBudget> typeOfBudget,\n Optional<BigDecimal> maxSingleExpense,\n String userId\n ) {\n budgetRepository.findBudgetByBudgetIdAndUserId(budgetId, userId).map(\n budgetFromRepository -> new Budget(budgetId,\n title.orElseGet(budgetFromRepository::title),\n limit.orElseGet(budgetFromRepository::limit),\n typeOfBudget.orElseGet(budgetFromRepository::typeOfBudget),\n maxSingleExpense.orElseGet(budgetFromRepository::maxSingleExpense),\n userId\n )).ifPresent(budgetRepository::save);\n return budgetRepository.findBudgetByBudgetIdAndUserId(budgetId, userId);\n }\n",
"text": "Hey, i have an issue when i want to patch update existing object in Spring Rest API project. I know that in database i have one object with unique ID and when i want to patch it i get duplicate key error. I know that mongoDB “save” method should update object when it exists by id and put new object when it doesn’t.here is some code:the “BudgetId” object has just String value field.",
"username": "Szymon_Kecik"
},
{
"code": "",
"text": "Ok i found the solution. I had to replace @Id to @MongoId in domain class. eot",
"username": "Szymon_Kecik"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Duplicate key error when Patching | 2023-04-14T19:24:31.453Z | Duplicate key error when Patching | 1,087 |
[
"flutter"
] | [
{
"code": "",
"text": "Can we get a passage further at the top near the beginning like a badged warning indicating that you need to have Client Reset Logic in place in your application BEFORE you resync your clients?And then at the bottom can we get the Flutter SDK added to the list of SDKs with the guides how to implement Client Reset Logic please?Failing to have Client Reset Logic in your application from the beginning is a major source of bad changeset issues when apps that were offline for a period of time (usually more than 10 to 20 days is all it takes), as well as a big cause to global app outages that can’t be recovered after a termination of sync.It would be amazing to push more proactive approaches to make this feature more known to developers implementing Realm, and this would be a great first step in that direction.",
"username": "Brock"
},
{
"code": "",
"text": "\nScreenshot 2023-03-31 at 7.15.22 PM1794×1054 79.2 KB\n\nThis box, to the very top with a more clear warning client reset logic is needed before you even try terminating and resync. Even should be mentioned before even talking about pausing sync, this is a critical feature.This needs to be the very first thing someone sees opening that page, this is critical and will literal cause a global outage for anyones app after a term and resync if it’s not in place.Flutter here please.\nScreenshot 2023-03-31 at 7.18.21 PM2196×1382 326 KB\n",
"username": "Brock"
},
{
"code": "",
"text": "KOTLIN SDK is needed in both the warning box, and the very bottom of the page.",
"username": "Brock"
},
{
"code": "",
"text": "Hello @Brock ,Thank you for your interest in helping the community and for sharing your feedback for Kotlin SDK. The docs team is aware of this and is working on the client reset for Kotlin SDK and it will be reflected in the documentation soon.For anyone, trying to understand “Terminate and Re-enable Sync”, I have written some notes as per my findings as a Mobile Byte 2- Handling Sync Errors and MongoDB docs explain client-reset in more detail in Handle Errors section.I hope the provided information is helpful.Cheers, \nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "",
"username": "henna.s"
},
{
"code": "",
"text": "",
"username": "Dave_Nielsen"
}
] | Request for modifying Documentation for Client Reset Logic/Termination of Sync | 2023-04-01T02:08:23.716Z | Request for modifying Documentation for Client Reset Logic/Termination of Sync | 1,303 |
null | [] | [
{
"code": "",
"text": "Unable to connect AppService from UI. It redirects me back to DataService tab. Is mongodb experiencing any issue.?",
"username": "Faisal_Ansari"
},
{
"code": "",
"text": "Hello @Faisal_Ansari ,Could you contact the Atlas in-app chat support team regarding this? Please provide them with the errors you’re receiving as well.Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Unable to connect to mongodb AppService | 2023-04-14T05:28:19.435Z | Unable to connect to mongodb AppService | 397 |
null | [] | [
{
"code": "",
"text": "Hello,\nover the last week-end we saw the Oplog GB/h going from ~30MB/h to over 2GB/h in 24h. This caused some disk space issues and then changed the cluster’s tier from M10 to M20.\nWe do not understand why this Oplog rate increased to drastically, any idea what could trigger such a Oplog rate ?\nThanks for your help,\nLuc",
"username": "Luc_Juggery"
},
{
"code": "",
"text": "Hi @Luc_Juggery - Welcome to the community.We do not understand why this Oplog rate increased to drastically, any idea what could trigger such a Oplog rate ?If you’re certain there were no other changes on the application side I would advise contacting the Atlas in-app chat support team regarding this as they further insight into your cluster.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks @Jason_Tran\nNo other change in the app, I’ll do it ",
"username": "Luc_Juggery"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Huge increase in Oplog GB/h | 2023-04-11T08:31:19.005Z | Huge increase in Oplog GB/h | 443 |
null | [
"swift"
] | [
{
"code": "",
"text": "Since updating the RealmSwift SDK to v12, I can no longer do live editing of the realm data in Realm Studio 11.1.2.\nI get this error displayedRealm file is currently open in another process which cannot share access with this process. All processes sharing a single file must be the same architecture.I am using an M1 Mac. Is this the issue?",
"username": "Stewart_Lynch"
},
{
"code": "",
"text": "Ahh, yeah, I see how that error message could be confusing. This actually isn’t related to the system architecture - this is the error message when there’s a mismatch between the realm file version produced by the SDK and the realm files that Realm Studio can use. There will be an updated version of Realm Studio out shortly - probably later this week. We’re currently pushing a bunch of updates ahead of MongoDB World next week, and our coordination could have been better around this. Sorry about that!",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "@Dachary_CareyAny updates on this as we are running into the issue and it’s really slowing development.We have Realm Studio 11.2.1, SDK 10.28.2 and a macOS 12.4 AppIf RealmStudio is opened prior to opening the app, we get this error in XCodeError!Realm file is currently open in another process which cannot share access with this process. All processes sharing a single file must be the same architecture. For sharing files between the Realm Browser and an iOS simulator, this means that you must use a 64-bit simulator. Path: /Users/…If RealmStudio is opened after the app is opened, RealmStudio throws this errorFailed to open RealmThe file is already opened by another process, with an incompatible lock file format. Try up- or downgrading Realm Studio or SDK to match their versions of Realm Core.See Realm Studio changelog on GitHub for details on compatibility between versions.The Realm File(s) have been deleted and re-created to ensure they are fresh and up-to-date. (and yes, the .lock files all deleted etc)Is this corrected with RealmStudio 12.0 ( Jun 7 2022 release date)?Jay",
"username": "Jay"
},
{
"code": "",
"text": "This looks like a slightly different issue, @Jay . But these similar questions make me think I should add something to the docs to help folks troubleshoot these types of errors, so I’ll do that in the next week or two. In the meantime, the places where you can get info are the release notes for the SDK and Realm Studio.For example, in the Swift SDK, v 10.28.2 says it is compatible with Realm Studio v 12.0.0. The other clue is at the bottom of the release notes where it says Internal; there is a realm-core version bump. I don’t think the one in 10.28.2 is an issue, but if you look at the 10.27.0 version of the Swift SDK, that has a note about a Realm Studio limitation:Notably this includes Realm Studio, and v11.1.2 (the latest at the time of this release) cannot open Realm files which are simultaneously open in the simulator.On the Realm Studio side, the release notes for v12.0.0 have a small compatibility table that lists Swift SDK 10.27.0 and newer, and describes this simultaneous open limitation in more detail.Based on these notes, I’d say v 12.0.0 of Realm Studio should solve your problem.",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "@Dachary_CareyAh! We are not using iOS, nor any simulators so at first glance the issue noted in the release notes was mostly interacting with the simulator, not a live working Realm file. I now see (upon second glance) the “opened by another process” meaning any process.Thanks for the heads upJay",
"username": "Jay"
},
{
"code": "",
"text": "I just installed latest versions of realm swift and realm studio for the first time. I am getting the same issue if I try to have realm studio with the database open when running in the simulator for iOS. If realm studio isn’t open it runs fine. Exact same error message as above. This is a one year old thread. Did anyone ever find out the issue?",
"username": "Jon_Bones_Jones"
},
{
"code": "",
"text": "@Dachary_CareyCan you please include what the specific versions are?RealmSwift:\nRealm Studio:\nCocoaPods (if that’s what’s being used):\nXCode:\nmacOS/iOS:",
"username": "Jay"
},
{
"code": "",
"text": "I have the same problem. I can’t use RealmStudio while the application is running.RealmSwift: 10.38.0\nRealm Studio: 13.0.2\nXCode: 14.3\nmacOS/iOS: 13.2.1/16.4Installed through SwiftPM.PS:\nI have also tried installing different versions of RealmStudio - it didn’t help. When I delete the default.realm.lock file, RealmStudio opens, but the data is not updated live.",
"username": "Dan_Vitalievich"
},
{
"code": "",
"text": "@Dan_Vitalievich and @Jon_Bones_JonesYep. I just created a fresh project and duplicated the issue.It appears the original Git #8057 was closed so I created a new Git #8206",
"username": "Jay"
},
{
"code": "",
"text": "This appears to have been resolved with Realm Studio release 14 - ensuring both the SDK and Studio are all updated to the most current versions so they all have the same format.",
"username": "Jay"
}
] | Realm Studio and Realm 12 (Mac) | 2022-05-29T00:34:59.639Z | Realm Studio and Realm 12 (Mac) | 4,925 |
null | [
"replication"
] | [
{
"code": "rs.status()",
"text": "I resynchronized the data of a node through the internal mechanism of the MongoDB replica set. After the synchronization was completed, I found that the total number of data in some collections was not equal. I checked the node status through rs.status(). The status is already SECONDARY, and the optime is also consistent with the PRIMARY node.I have newly synchronized a SECONDARY node, and the total number of data in the collection of this node is consistent with that of the PRIMARY node.But after a long time, the node data synchronized for the first time, the total number of data in some collections is still inconsistent with the PRIMARY node.I haven’t found any similar phenomena in the official documents.I would like to know what are the possible reasons for this problem?",
"username": "Yanfei_Wu"
},
{
"code": "rs.printSecondaryReplicationInfo()\nrs.printReplicationInfo()\n",
"text": "What is the result if you do the commands from the primary",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Thank you for your reply, the image below is the result of executing the command.\n1681184549305715×224 58.3 KB\nFor this node at the end of IP 249, data is missing in some collections.",
"username": "Yanfei_Wu"
},
{
"code": "",
"text": "Hello, can you help me analyze what may be the cause?\nThanks so much.",
"username": "Yanfei_Wu"
},
{
"code": "",
"text": "How are you checking that data is missing in some collections ?",
"username": "chris"
},
{
"code": "db.stats()\ndb.count()\n",
"text": "",
"username": "Yanfei_Wu"
},
{
"code": "",
"text": "for example:The A collection of the PRIMARY node, through db.stats(), there are 1000 recordsAnd I checked the A collection through db.stats() on the node for the first synchronization to see that there are only 900 records",
"username": "Yanfei_Wu"
},
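One caveat when comparing counts this way: db.stats() and the fast count variants read collection metadata, which can drift (for example after unclean shutdowns), while countDocuments() actually scans the collection. A small mongosh sketch for checking collection A (per the example) directly on each member:

```javascript
// connect to one member at a time, e.g. mongosh "mongodb://host:port/?directConnection=true"
db.getMongo().setReadPref('secondaryPreferred'); // allow reads on a secondary
db.A.countDocuments({});        // accurate count, scans the collection
db.A.estimatedDocumentCount();  // fast metadata-based count, may be stale
```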
{
"code": "",
"text": "Did you do an initial sync or was the MongoDB process down and then you brought it back up and it synced the data?",
"username": "tapiocaPENGUIN"
},
{
"code": "{w: \"majority\"}{w:1}",
"text": "I notice your oplog is ~24GB but you only have 3.39 hours of headroom.Are you doing a lot of inserts to this cluster? It seems busy.Could a few seconds equal a discrepancy of 1000 documents?If the write concern is {w: \"majority\"} it could be the other members that are more up to date, if the write concern is {w:1} then both secondaries could be behind if there is a corresponding high insert rate.If you think this is unlikely then it might be prudent to create a bug report on https://jira.mongodb.org",
"username": "chris"
},
{
"code": "",
"text": "I was thinking the same thing the oplog headroom is very small, I wonder if the oplog was too small and that’s why it didn’t get all the documents when he was doing the sync? But",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Yes, our MongoDB replica cluster writes a lot of data.The write level of the program is: {w: “majority”}Initially, this MongoDB replica cluster had three nodes, but two of them went down, and I removed the two down nodes from the cluster.\nThen I newly deployed a node of the same version, added it to the original replica set, and synchronized data through the internal synchronization mechanism of the MongoDB replica set. When the node role is normal and the optime is consistent through rs.status(), I observe the newly synchronized node There are missing data in the collection.\nI later deployed another node to join the cluster. After the latest node synchronization is completed, this node has the same data as the master node. So far, among the three nodes, the node data for the first synchronization is still missing collection records.So it is very strange why this phenomenon occurs.",
"username": "Yanfei_Wu"
},
{
"code": "",
"text": "During the synchronization of the first node, business reading and writing are suspended.\nOnly the new node is synchronizing the data of the primary node through the internal mechanism of the replica set.",
"username": "Yanfei_Wu"
}
] | MongoDB replica set SECONDARY node missing data | 2023-04-10T15:44:14.471Z | MongoDB replica set SECONDARY node missing data | 1,133 |
null | [
"aggregation",
"java"
] | [
{
"code": "ObjectId idString titleLinguaSetObjectId idString titleList<Category>List<Category>CategoryCategoryList<LinguaSet>LinguaSetCategoryList<Category>LinguaSets List<Category>LinguaSetSet<ObjectId>",
"text": "Hello\nI need help building a database query to return specific data\nI have a Category document, it is stored in the database with the name categories which has fields ObjectId id, String title\nThere is also a LinguaSet document, it is stored in the database with the name lingua_sets, which has the fields ObjectId id, String title, List<Category> categories and other fields\nThe List<Category> is annotated with the @DBRef annotation to refer to the Category document. The Category document knows nothing about the LinguaSet document\nThe question is how to build a database query to return a List<LinguaSet> where will the LinguaSet document be located, which has a certain Category in the List<Category>?\nThere should be no more than three LinguaSets for each category.\nThat is, if there is a Category with id 1 and 5 LinguaSet whose List<Category> contains this Category with id 1, then only 3 LinguaSet should be returned.\nYou need to get it by Set<ObjectId> ids which will come to the function.",
"username": "Yevhen"
},
{
"code": "Set<ObjectId> categoryIds = ... // set of category IDs to match\nint limit = 3; // limit the number of LinguaSets per category to 3\n\nList<Bson> pipeline = Arrays.asList(\n Aggregates.match(Filters.in(\"categories._id\", categoryIds)),\n Aggregates.lookup(\"categories\", \"categories._id\", \"_id\", \"categories\"),\n Aggregates.match(Filters.in(\"categories._id\", categoryIds)),\n Aggregates.group(\"$_id\", Accumulators.first(\"title\", \"$title\"),\n Accumulators.first(\"categories\", \"$categories\"),\n Accumulators.push(\"$$ROOT\"), Accumulators.limit(limit)),\n Aggregates.project(Projections.fields(\n Projections.excludeId(), Projections.include(\"title\", \"categories\"),\n Projections.computed(\"lingua_sets\", \"$$ROOT\"))),\n Aggregates.unwind(\"$lingua_sets\"),\n Aggregates.replaceRoot(\"$lingua_sets\")\n);\n\nList<LinguaSet> result = linguaSetsCollection.aggregate(pipeline, LinguaSet.class).into(new ArrayList<>());\n$lookupcategories_idcategories._id_idtitlecategorieslimitlingua_setslingua_setslimit",
"text": "Hey Yevhen,Nice to see ya!To build a database query to return a List where the LinguaSet document has a certain Category in the List, you can use the MongoDB aggregation pipeline with the $lookup and $match stages. Here is an example query:This query first matches LinguaSets that have at least one Category in the input set of category IDs. It then performs a $lookup stage to join the categories collection to the LinguaSet documents based on the _id and categories._id fields, and filters the results to only include Categories that match the input set of category IDs.Next, the query groups the results by LinguaSet _id, and for each group, it keeps the title and categories fields from the first document, pushes the entire document onto an array, and limits the array to limit elements.Then, the query projects the desired fields and renames the array to lingua_sets. Finally, it unwinds the lingua_sets array and replaces the root document with the array elements.This query returns a list of up to limit LinguaSet documents per Category in the input set of category IDs that match the query criteria.",
"username": "Brock"
},
{
"code": "",
"text": "Thanks for the answer\nBut as I can see Accumulators.push takes 2 parameters and Accumulators doesn’t have a limit method",
"username": "Yevhen"
},
{
"code": "",
"text": "I am using spring boot 3.0.1 because some of the features are not available\nHow can they be replaced?",
"username": "Yevhen"
},
{
"code": "Set<ObjectId> categoryIds = ... // set of category IDs to match\nint limit = 3; // limit the number of LinguaSets per category to 3\n\nList<Bson> pipeline = Arrays.asList(\n Aggregates.match(Filters.in(\"categories._id\", categoryIds)),\n Aggregates.lookup(\"categories\", \"categories._id\", \"_id\", \"categories\"),\n Aggregates.match(Filters.in(\"categories._id\", categoryIds)),\n Aggregates.group(\"$_id\", Accumulators.first(\"title\", \"$title\"),\n Accumulators.first(\"categories\", \"$categories\"),\n Accumulators.push(\"$$ROOT\"), Accumulators.first(\"count\", \"$count\")),\n Aggregates.project(Projections.fields(\n Projections.excludeId(), Projections.include(\"title\", \"categories\"),\n Projections.computed(\"lingua_sets\", new Document(\"$slice\", Arrays.asList(\"$lingua_sets\", limit))))),\n Aggregates.unwind(\"$lingua_sets\"),\n Aggregates.replaceRoot(\"$lingua_sets\")\n);\n\nList<LinguaSet> result = linguaSetsCollection.aggregate(pipeline, LinguaSet.class).into(new ArrayList<>());\n",
"text": "My apologies for the mistake in my previous message. You are correct that the Accumulators.push method does not have a limit method, and it takes only one parameter. To limit the number of LinguaSets per category, which is what I had confused the Accumulators.push for, you can use the $slice operator instead. Here is the corrected query:I used this on my local MBD in a similar way and it worked.In this query, the $slice operator limits the lingua_sets array to contain only the first limit elements. The rest of the query remains the same as the previous example.If this doesn’t work, please send whatever errors you’re getting etc.**HOLD UP, I haven’t tested this with other versions of springboot, if it works let me know. If not, let me know but I’ll try it later with the version of boot you’re using.",
"username": "Brock"
},
{
"code": "Accumulators.push(\"$$ROOT\")public static <TExpression> BsonField push(String fieldName, TExpression expression) {\n return accumulatorOperator(\"$push\", fieldName, expression);\n }\n",
"text": "Thanks for the correction, but like I said earlier, Accumulators.push(\"$$ROOT\") should have 2 parameters, not just one.\nThis is how it looks like in spring 3.0.1:",
"username": "Yevhen"
}
] | Get a certain amount of data from the database by criteria | 2023-04-12T22:26:27.788Z | Get a certain amount of data from the database by criteria | 603 |
null | [
"replication",
"sharding"
] | [
{
"code": "{\"t\":{\"$date\":\"2023-03-29T07:11:58.639+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"establishCursors cleanup\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"hostname:port\"}}\n\n{\"t\":{\"$date\":\"2023-03-29T07:12:51.493+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4712102, \"ctx\":\"conn29626580\",\"msg\":\"Host failed in replica set\",\"attr\":{\"replicaSet\":\"rs1\",\"host\":\"hostname:port\",\"error\":{\"code\":202,\"codeName\":\"NetworkInterfaceExceededTimeLimit\",\"errmsg\":\"Couldn't get a connection within the time limit of 8ms\"},\"action\":{\"dropConnections\":false,\"requestImmediateCheck\":false,\"outcome\":{\"host\":\"hostname:port\",\"success\":false,\"errorMessage\":\"NetworkInterfaceExceededTimeLimit: Couldn't get a connection within the time limit of 8ms\"}}}}\n",
"text": "We faced an issue in our prod env, where one Primary VM stopped accepting any connections, and all prod API s started failing during the period.Attempt to manual Mongo login to the primary member was not successful. Manual try to start up MongoDB in the VM was also unsuccessful.Since , The problemed VM was still showing as “primary” according to rs.status(), No election happened among the available secondaries.We checked mongos logs, connection was getting accepted and no error was not present till 07:11 UTC. But the error suddenly started exact at 07:12 utc.Additional details:\nWe are using Mongo 4.4.7 community version. And we are having below configuration:\nconfig replicaset - 1 Primary, 2 Secondary\nshard1 - 1 Primary, 2 Secondary\nshard2 - 1 Primary, 2 Secondary\nshard3 - 1 Primary, 2 Secondary\n2 query router.We checked our query router available connection, during the issue period.\nQR1\n“current” : 7654,\n“available” : 43546,\n“totalCreated” : 134309236,\n“active” : 2890,\n“exhaustIsMaster” : 487,\n“exhaustHello” : 229,\n“awaitingTopologyChanges” : 716QR2\n“current” : 7746,\n“available” : 43454,\n“totalCreated” : 134299931,\n“active” : 2997,\n“exhaustIsMaster” : 487,\n“exhaustHello” : 229,\n“awaitingTopologyChanges” : 716",
"username": "Debalina_Saha"
},
{
"code": "",
"text": "Hi @Debalina_Saha and welcome to the MongoDB community forum!!The error message that you are facing could occur due to couple of reason and hence there could me multiple resolutions to the same error.Primarily, the issue could be resolved by increasing the connection pooling size as mentioned by user on the community post.Also, please upgrade to the latest 4.4.x version available(4.4.19) for bug fixes and new features introduced.Further, the documentation to add maxpool size in the connection string, could also be a good start.Also share whole output of below from all shards, including config server replica setLet us know if you have any further questions.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi @Aasawari ,\nThank you for your suggestions and response.\nI will try to check and analyse further about connection Pooling size.We are planning for version upgrade to Mongo 5.x.x . Kindly confirm the stable release for 5.x version.I have attached output of sh.status, rs.conf and rs.status from all shards rs1, rs2, rs3 and config. Kindly check and suggest.\nreqd_details.txt (42.0 KB)",
"username": "Debalina_Saha"
}
] | Mongo primary stopped accepting connections - No election happened | 2023-04-04T10:50:05.274Z | Mongo primary stopped accepting connections - No election happened | 1,100 |
null | [] | [
{
"code": "",
"text": "I think Mongo covers this problem, but I’d like to get confirmation.I have an app that books Warbird flights. A passenger will be displayed the number of seats available and can choose any number up to the displayed limit. The passenger selects one or more, then clicks pay.Before the seats are booked, I need to make sure that someone else did not buy the seats. If I was coding an SQL database, I would have to lock a flight record, read it and make sure the seats are still available, then if the seats were still available, record the purchase then write the flight record back and release the lock.According to this stackoverflow: node.js - MongoDB: Lock and unlock collection manually - Stack Overflow MongoDB handles this for me.Here’s a link to the beta web site https://indycaf-warbirds.onrender.com/.Thanks",
"username": "Jim_Olivi"
},
{
"code": "",
"text": "Hi @Jim_Olivi and welcome to the MongoDB community forum!!Can you confirm if you are seeking a confirmation regarding the post on stackoverflow about how locking works in MongoDB? More specifically, is it confirmation regarding the Answer (posted by Wan B.)?Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Thanks for the reply.Yes, I am looking for a confirmation of the stackoverflow post.If MongoDB does serialize reads/writes then I don’t have to worry about the integrity of the records.Lastly: if a record is changed before the initial transaction is completed, what error is raised?",
"username": "Jim_Olivi"
},
{
"code": "",
"text": "@Jim_Olivi Yes, it’s serialized.A write conflict occurs, and that write conflict will abort the transaction and retry. If it fails again you’ll get a writeConflict exception.https://docs.mongodb.com/manual/reference/transactions/#writeconflict-exception",
"username": "Brock"
}
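For the seat-booking case specifically, a common MongoDB pattern that sidesteps the read-modify-write race without an explicit transaction is a single atomic update whose filter re-checks availability. A minimal Node.js sketch, with the collection and field names assumed for illustration:

```javascript
// Inside an async function, with `db` being a connected Db instance.
// The filter only matches the flight if enough seats remain, and the update is
// applied atomically, so two concurrent purchases cannot both take the last seats.
const result = await db.collection('flights').updateOne(
  { _id: flightId, seatsAvailable: { $gte: seats } },
  { $inc: { seatsAvailable: -seats } }
);

if (result.modifiedCount === 0) {
  // Someone else bought the seats first: report "sold out" and refresh availability.
}
```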
] | Read update write integrity | 2023-04-08T22:20:48.783Z | Read update write integrity | 534 |
null | [
"queries"
] | [
{
"code": " {\n \"_id\": {\n \"$oid\": \"641841c3ae5e0d1f81c21df4\"\n },\n \"name\": \"backup-power\",\n \"slug\": \"backup-power\",\n \"label\": \"Back-up Power\",\n \"children\": [\n {\n \"name\": \"batteries\",\n \"slug\": \"backup-power/batteries\",\n \"label\": \"Batteries\",\n \"parent\": \"641841c3ae5e0d1f81c21df4\"\n },\n {\n \"name\": \"inverters\",\n \"slug\": \"backup-power/inverters\",\n \"label\": \"Inverters\",\n \"parent\": \"641841c3ae5e0d1f81c21df4\",\n \"children\": [\n {\n \"name\": \"hybrid-inverters\",\n \"slug\": \"backup-power/inverters/hybrid-inverters\",\n \"label\": \"Hybrid Inverters\",\n \"parent\": \"6418bf4cae5e0d1f81c21dfc\"\n }\n ]\n },\n {\n \"name\": \"portable-power\",\n \"slug\": \"backup-power/portable-power\",\n \"label\": \"Portable Power\",\n \"parent\": \"641841c3ae5e0d1f81c21df4\"\n }\n ]\n }\n",
"text": "Hey all,I have been searching the internet forever trying to find an answer and nothing yet.I want to return a document where some nested object field matches the search term. It would be a bonus if I could just return the nested object but the whole document is fine.I have the following document for example, note how there are nested children within children, i want to traverse these children to look if the ‘slug’ property matches. I dont know how deep the nesting will go.search term example: “backup-power/inverters/hybrid-inverters”Any advice here? Should I just query the top level document and then on the server traverse the object? Should i build up a query on the server based on the slug that will tell mongodb exactly where to look? Or is there some query that can do this automatically?",
"username": "Gregg_Ord-Hume"
},
{
"code": "queried_field_variable = \"slug\" \n\nfor each slash in search_term_variable\n{\n queried_field_variable = \"children.\" + queried_field_variable\n}\n\ndb.collection.find( { [queried_field_variable] : search_term } )\nquery = { \"name\" : \"backup-power\" }\nquery = { ...query , \"children.name\" : \"inverters\" }\nquery = { ...query , \"children.name.name\" : \"hybrid-inverters\" }\n",
"text": "First part of the problem finding the top level document.The only way I can find so far is to create a query that leverages the number of slashes in your search term because I don’t think you can write one generic query. Something with the logic:As a remark, you seem to have a lot of duplicated data in your slug field as it seems it is a concatenation of the ancestors name. Personally, I would forgo the slug field and use the same logic to build a query such as:you probably could leverage a much smaller index using name while leaving slug out. The fact that you may want to specify a search with slashes does not mean that your really have to have it in your data.Using the above logic, for backup-power/inverters/hybrid-inverters, the queried_field_variable would be equal to children.children.slug.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks, that was an idea I had.What your query will do is return the whole document anyways, so if I match on a child object somewhere inside the document I will still get the entire document back unless I do an aggregation.So what I decided to do was match on the top level document which was easy and then on my server I recursively loop through the children to find what I need - its not the most performant method but for now it will work.In the future I can look to create a view or aggregation that does what I need.I appreciate the response!",
"username": "Gregg_Ord-Hume"
}
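A rough sketch of that server-side traversal in plain JavaScript, using the document shape from the example above; the model name in the usage comment is an assumption:

```javascript
// Depth-first search through nested `children` arrays for a matching slug.
function findBySlug(node, slug) {
  if (node.slug === slug) return node;
  for (const child of node.children || []) {
    const match = findBySlug(child, slug);
    if (match) return match;
  }
  return null;
}

// Usage: fetch the top-level document by the first path segment, then traverse it.
// const doc = await Category.findOne({ slug: slugPath.split('/')[0] });
// const nested = findBySlug(doc, 'backup-power/inverters/hybrid-inverters');
```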
] | Recurse through arrays of objects based on the same key | 2023-04-06T13:31:00.970Z | Recurse through arrays of objects based on the same key | 530 |
null | [
"node-js",
"crud",
"compass"
] | [
{
"code": "function readCSVFile(file_path, delimiter = ',') {\n return new Promise((resolve, reject) => {\n fs.readFile(file_path, 'utf8', (err, data) => {\n if (err) {\n reject('Failed to read CSV file');\n } else {\n const lines = data.trim().split('\\n');\n const headers = lines[0].split(delimiter).map((header) => header.replace(/[\\r]+/g, ''));\n //console.log(headers)\n const result = [];\n for (let i = 1; i < lines.length; i++) {\n const obj = {};\n const currentLine = lines[i].split(delimiter);\n\n if (currentLine.length !== headers.length) {\n console.log(i)\n continue;\n //reject('Invalid CSV format');\n }\n else {\n for (let j = 0; j < headers.length; j++) {\n obj[headers[j]] = currentLine[j].trim()\n }\n }\n result.push(obj);\n }\n resolve(result);\n }\n });\n });\n}\nconst batchSize = 50000 // Set the batch size to 1000\n\nreadCSVFile('./datasets/2021-05.csv')\n .then(data => {\n const trips = data.map(row => new Trip(row))\n // Split the trips array into three parts\n const numTrips = trips.length\n const numBatches = Math.ceil(numTrips / batchSize)\n const batches = Array.from({ length: numBatches }, (_, i) =>\n trips.slice(i * batchSize, (i + 1) * batchSize)\n )\n\n console.log(batches.length)\n // Insert each batch separately using insertMany\n Promise.all(\n batches.map(batch => {\n \n Trip.insertMany(batch).then(results => {\n \n console.log(`Inserted ${results.length} trips to MongoDB`)\n })\n .catch(error => {\n console.error(error)\n })\n })\n )\n\n })\n .catch(error => {\n console.error(error)\n });\n err = new ServerSelectionError();\n \nMongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://www.mongodb.com/docs/atlas/security-whitelist/\n",
"text": "Hi everybody,\nCan anyone tell me how to upload a CSV file of about 1mil record to MongoDB? I’m working on it, but I recognized that the CSV file with about 200k records still uploads normally, but when I raise it to 300k, I get the error “Could not connect to any servers in your MongoDB Atlas cluster” right away, even though I set up the IP 0.0.0.0 and current IP.\nNote: The CSV file is still uploaded to MongoDB normally via MongoDB Compass.This is the CSV reader function, when i<= 200.000, it works well but i<= 300.000, the application crashesI have tried to split the file into smaller batches but it is not successfulThe error is :Thank you for helping",
"username": "Khoa_Dinh_Dang"
},
{
"code": " err = new ServerSelectionError();\n \nMongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. \nmongod",
"text": "Hi @Khoa_Dinh_Dang and welcome to MongoDB community forums!!Can anyone tell me how to upload a CSV file of about 1mil record to MongoDB?I tried to directly import the MongoDB data using the mongoimport MongoDB tools using the below command:mongoimport --uri=mongodb+srv://cluster0.sqm88.mongodb.net/test --username --password --db test --collection CSV --file test.csv --type csv --fields=It worked fine for me. Could you please try using the Database tools and confirm if you are facing a similar problem?If you wish to import through the application, could you please confirm whether you are using a VPN connection or proxy while connecting to the mongod client? If yes, please turn off the VPN and try connecting again, or try with a different internet connection.Note, for batch insert/upsert operation, mongoimport uses a maximum size of 100,000. Please refer to batches in mongoimport for further details.Regards\nAasawari",
"username": "Aasawari"
}
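If the import has to stay in the application rather than going through mongoimport, one adjustment worth trying (a sketch, not a confirmed fix) is awaiting each insertMany batch in turn instead of launching them all at once with Promise.all, so only one batch of documents is in flight at a time:

```javascript
// Inside an async function, after building `batches` as in the question:
for (const batch of batches) {
  // ordered: false lets valid documents in a batch insert even if some fail
  const results = await Trip.insertMany(batch, { ordered: false });
  console.log(`Inserted ${results.length} trips to MongoDB`);
}
```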
] | Error connection when uploading a large csv file with appox 1 million records | 2023-04-12T12:37:41.061Z | Error connection when uploading a large csv file with appox 1 million records | 761 |
null | [
"replication",
"compass",
"transactions",
"storage"
] | [
{
"code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: C:\\Program Files\\MongoDB\\Server\\6.0\\data\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: C:\\Program Files\\MongoDB\\Server\\6.0\\log\\mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\n\n# replication options\nreplication:\n replSetName: rs0\n oplogSizeMB: 100\n enableMajorityReadConcern: true\n\n\n\n#processManagement:\n\nsecurity:\n authorization: disabled \n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\nrs0\"C:\\Program Files\\MongoDB\\Server\\6.0\\bin\\mongod.exe\" --port 27017 --replSet rs0 --dbpath=\"C:\\Program Files\\MongoDB\\Server\\6.0\\data\"\nC:\\Windows\\System32>\"C:\\Program Files\\MongoDB\\Server\\6.0\\bin\\mongod.exe\" --port 27017 --replSet rs0 --dbpath=\"C:\\Program Files\\MongoDB\\Server\\6.0\\data\"\n{\"t\":{\"$date\":\"2023-04-14T02:09:01.610+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:01.615+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.136+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.138+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.138+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.138+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.138+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.140+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":4360,\"port\":27017,\"dbPath\":\"C:/Program Files/MongoDB/Server/6.0/data\",\"architecture\":\"64-bit\",\"host\":\"DESKTOP-J8NQ0TF\"}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.140+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.140+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, 
\"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.5\",\"gitVersion\":\"c9a99c120371d4d4c52cbb15dac34a36ce8d3b1d\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.140+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 22621)\"}}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.141+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"net\":{\"port\":27017},\"replication\":{\"replSet\":\"rs0\"},\"storage\":{\"dbPath\":\"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\6.0\\\\data\"}}}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.143+03:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7620M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.217+03:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":74}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.217+03:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.376+03:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.377+03:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. 
If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.377+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"unset\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.378+03:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.378+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.547+03:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"C:/Program Files/MongoDB/Server/6.0/data/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.550+03:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.startup_log\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"ea3d8409-9cec-48d6-98ff-a3631856ea64\"}},\"options\":{\"capped\":true,\"size\":10485760}}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.914+03:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"ea3d8409-9cec-48d6-98ff-a3631856ea64\"}},\"namespace\":\"local.startup_log\",\"index\":\"_id_\",\"ident\":\"index-1-8305521056144148365\",\"collectionIdent\":\"collection-0-8305521056144148365\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.915+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigStartingUp\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.915+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6005300, \"ctx\":\"initandlisten\",\"msg\":\"Starting up replica set aware services\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.916+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280500, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to create internal replication collections\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.916+03:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.replset.oplogTruncateAfterPoint\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"68a7307c-6ed3-44e7-9fa3-c995511837f4\"}},\"options\":{}}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:03.917+03:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":200}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.005+03:00\"},\"s\":\"W\", \"c\":\"REPL\", \"id\":21533, \"ctx\":\"ftdc\",\"msg\":\"Rollback ID is not initialized yet\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.026+03:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done 
building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"68a7307c-6ed3-44e7-9fa3-c995511837f4\"}},\"namespace\":\"local.replset.oplogTruncateAfterPoint\",\"index\":\"_id_\",\"ident\":\"index-3-8305521056144148365\",\"collectionIdent\":\"collection-2-8305521056144148365\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.026+03:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.replset.minvalid\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"aae4d3d7-ccb0-4b2f-b3ed-e28275cc8490\"}},\"options\":{}}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.119+03:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":400}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.119+03:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"aae4d3d7-ccb0-4b2f-b3ed-e28275cc8490\"}},\"namespace\":\"local.replset.minvalid\",\"index\":\"_id_\",\"ident\":\"index-5-8305521056144148365\",\"collectionIdent\":\"collection-4-8305521056144148365\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.120+03:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.replset.election\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"6d483ec5-6ba9-4e00-8632-88dbf0a46ca1\"}},\"options\":{}}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.191+03:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"6d483ec5-6ba9-4e00-8632-88dbf0a46ca1\"}},\"namespace\":\"local.replset.election\",\"index\":\"_id_\",\"ident\":\"index-7-8305521056144148365\",\"collectionIdent\":\"collection-6-8305521056144148365\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.192+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280501, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to load local voted for document\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.192+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21311, \"ctx\":\"initandlisten\",\"msg\":\"Did not find local initialized voted for document at startup\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.193+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280502, \"ctx\":\"initandlisten\",\"msg\":\"Searching for local Rollback ID document\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.193+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21312, \"ctx\":\"initandlisten\",\"msg\":\"Did not find local Rollback ID document at startup. 
Creating one\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.194+03:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.system.rollback.id\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"ce6758f0-715c-4c6f-b7d6-de32e54cc633\"}},\"options\":{}}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.217+03:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"ce6758f0-715c-4c6f-b7d6-de32e54cc633\"}},\"namespace\":\"local.system.rollback.id\",\"index\":\"_id_\",\"ident\":\"index-9-8305521056144148365\",\"collectionIdent\":\"collection-8-8305521056144148365\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.218+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21531, \"ctx\":\"initandlisten\",\"msg\":\"Initialized the rollback ID\",\"attr\":{\"rbid\":1}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.220+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21313, \"ctx\":\"initandlisten\",\"msg\":\"Did not find local replica set configuration document at startup\",\"attr\":{\"error\":{\"code\":47,\"codeName\":\"NoMatchingDocument\",\"errmsg\":\"Did not find replica set configuration document in local.system.replset\"}}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.225+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigUninitialized\",\"oldState\":\"ConfigStartingUp\"}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.227+03:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.system.views\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"48a10d2a-4188-4da4-bca7-0c4164b6fe49\"}},\"options\":{}}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.257+03:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"48a10d2a-4188-4da4-bca7-0c4164b6fe49\"}},\"namespace\":\"local.system.views\",\"index\":\"_id_\",\"ident\":\"index-11-8305521056144148365\",\"collectionIdent\":\"collection-10-8305521056144148365\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.258+03:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.263+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20714, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Failed to refresh session cache, will try again at the next refresh interval\",\"attr\":{\"error\":\"NotYetInitialized: Replication has not yet been configured\"}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.263+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":40440, \"ctx\":\"initandlisten\",\"msg\":\"Starting the TopologyVersionObserver\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.263+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20712, \"ctx\":\"LogicalSessionCacheReap\",\"msg\":\"Sessions collection is not set up; waiting until next sessions reap interval\",\"attr\":{\"error\":\"NamespaceNotFound: config.system.sessions does not exist\"}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.268+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":40445, \"ctx\":\"TopologyVersionObserver\",\"msg\":\"Started 
TopologyVersionObserver\"}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.269+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.269+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:04.519+03:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":600}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:05.120+03:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":800}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:05.920+03:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":1000}}\n{\"t\":{\"$date\":\"2023-04-14T02:09:06.921+03:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":1200}}\nnetstat -an | findstr 27017 TCP 127.0.0.1:27017 0.0.0.0:0 LISTENING\n TCP 127.0.0.1:54047 127.0.0.1:27017 TIME_WAIT\n TCP 127.0.0.1:54060 127.0.0.1:27017 TIME_WAIT\n",
"text": "I have this config file in my windows machine:I would like to enable replication, I have named my set rs0.When I try to start the instance using:I always get this:Now the problem is, even though the instance starts, I cannot connect to 27017 using compass or anything, I can’t create the replica set in any way!Also running netstat -an | findstr 27017 outputs:but I just can’t connect to the instance.",
"username": "Rawand_Ahmed_Shaswar"
},
{
"code": "",
"text": "You have shown your config file but what you started is from command line\nWhat error are you getting when you connect from shell mongo/mongosh?Did you disable your default mongod which runs as service on same port 27017?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I get “error connection refused” — and yes, the default instance is stopped",
"username": "Rawand_Ahmed_Shaswar"
},
{
"code": "",
"text": "You have to run rs.initiate() by opening another cmd prompt\nThe mongod you started from cmd line is running in foreground\nSo leave that session as it is and open another cmd session and run initiate\nAre you trying to connect from same server or remotely\nYou need to look at bindip param also if remote connection are involved\nWhat command you issued to connect?",
"username": "Ramachandra_Tummala"
}
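For reference, a minimal sketch of the initiation step described in the reply above, run with mongosh from a second cmd window while the foreground mongod keeps running in the first. The host and port are assumptions taken from the log lines earlier in this thread (127.0.0.1:27017), and the set name matches the rs0 mentioned in the question:

```javascript
// In a second cmd window: mongosh --port 27017

// Initiate a single-member replica set named rs0
rs.initiate({
  _id: "rs0",
  members: [{ _id: 0, host: "127.0.0.1:27017" }]
});

// Confirm the member reaches PRIMARY state
rs.status().members.map(m => ({ name: m.name, state: m.stateStr }));
```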
] | Creating replica sets in windows | 2023-04-13T23:10:02.140Z | Creating replica sets in windows | 1,033 |
null | [
"python",
"sharding"
] | [
{
"code": "for rec in aggregation_query:\n process(rec)\nCursorNotFound: Cursor not found (namespace: 'my_db.my_collection', id: 5454438319793971081)",
"text": "Hi,\nI’m running a sharded cluster and I recently upgraded mongo from 3.0 to 4.2. Some programs that run previously without errors now raise CursorNotFound error in loops likewhere process can be quite time consuming and aggregation_query can return > 10000 values…\nErrors come around 65mn from the beginning of the job (so much lower than the cursor timeout parameter, see below)I don’t use explicit sessionsDB parameters cursorTimeoutMillis and localLogicalSessionTimeoutMinutes have already be “ugraded” to the equivalent of 2 hours (7200000 and 120 respectively).How can I get more information (which systemLog component should I put to a debug level) ?\nAny idea of how to solve that ?Messages are like\nCursorNotFound: Cursor not found (namespace: 'my_db.my_collection', id: 5454438319793971081)\nIn which log (mongod or mongos) can I find this id ?Context is : mongoDB 4.2.3, pymongo 3.10.1",
"username": "RemiJ"
},
{
"code": "",
"text": "Hi @RemiJ, welcome!Errors come around 65mn from the beginning of the job (so much lower than the cursor timeout parameter, see below)That seems to be quite a long time to iterate, is it possible to refactor the application code to reduce the process iteration time ? Perhaps utilise $out to a temporary collection and spawn multiple processes.Cursor timeout is one of the possible reasons why the cursor could no longer be found. Could you ensure that the options were set correctly ? If an iteration of a cursor batch takes longer than the default cursor timeout of 10 minutes, the server deemed the cursor idle and will close it.Could you check on logs whether there’s anything happening on the shard (i.e. replica set or config servers election) around the 65 minutes ?Regards,\nWan.",
"username": "wan"
},
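A rough sketch of the two ideas in the reply above, written as mongosh-style JavaScript rather than the pymongo used in the question; the collection name and the pipeline contents are placeholders:

```javascript
// Idea 1: ask the server not to reap the cursor between slow batches
// (note this does not protect against session expiry, see the follow-ups below)
const cursor = db.my_collection.find({}).noCursorTimeout().batchSize(100);

// Idea 2: materialise the aggregation once with $out, then iterate the
// temporary collection with cheap, short-lived queries from several workers
db.my_collection.aggregate([
  { $match: { /* selection criteria */ } },
  { $out: "my_collection_tmp" }
]);
```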
{
"code": "",
"text": "Hi @wan,I had a look in the log. Actually, it comes from the sessions handler that kills the session after 30mn and not the 2 hours as specified by localLogicalSessionTimeoutMinutes=120. All the cursors attached to the session are deleted at same time.I updated my program to issue sessions refresh every 10mn so sessions don’t expire before the end of processing.But now, when cursor in the main “for” loop is a find and not an aggregate, I still get cursor timeout due to sharding… (cursor is active in (one) shard but expires in others)",
"username": "RemiJ"
},
{
"code": "",
"text": "@RemiJ how did you resolve this issue? I’m facing the same issue on a sharded cluster (version 4.0.28), java driver 4.7.0",
"username": "Nachiket_G_Kallapur"
},
{
"code": "",
"text": "Before starting my request, I start a session and a thread that wakeup periodically (10mn) to refresh this session.\nThen I use this session in all the long requests.\nWhen the requests are done, I end the session.\n(using mongodb 4.2)",
"username": "RemiJ"
}
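A minimal sketch of the periodic session refresh described in the last reply, written with the Node.js driver purely to illustrate the idea (the original program is pymongo). Here client is assumed to be a connected MongoClient and process is the application's own per-record handler; the 10-minute interval matches the description above:

```javascript
async function run(client) {
  const session = client.startSession();

  // Refresh the session every 10 minutes so the server does not expire it
  // (and with it every cursor attached to the session) during long processing.
  const keepAlive = setInterval(() => {
    client.db("admin")
      .command({ refreshSessions: [session.id] })
      .catch(console.error);
  }, 10 * 60 * 1000);

  try {
    const cursor = client.db("my_db").collection("my_collection").find({}, { session });
    for await (const rec of cursor) {
      await process(rec); // long-running work per document (application-defined)
    }
  } finally {
    clearInterval(keepAlive);
    await session.endSession();
  }
}
```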
] | CursorNotFound after some time running large process | 2020-03-10T12:52:20.634Z | CursorNotFound after some time running large process | 19,541 |
null | [
"replication"
] | [
{
"code": "db.adminCommand({\n \"setDefaultRWConcern\" : 1,\n \"defaultWriteConcern\" : {\n \"w\" : 1\n }\n })\ncfg = rs.config()\ncfg.members[1].priority = 0.5\nrs.reconfig(cfg)\nrs.stepDown()\n\n{\n \"set\" : \"reps0\",\n \"date\" : ISODate(\"2023-04-14T03:06:49.245Z\"),\n \"myState\" : 1,\n \"term\" : NumberLong(37),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"majorityVoteCount\" : 2,\n \"writeMajorityCount\" : 2,\n \"votingMembersCount\" : 3,\n \"writableVotingMembersCount\" : 2,\n \"optimes\" : {\n \"lastCommittedOpTime\" : {\n \"ts\" : Timestamp(1681441604, 1),\n \"t\" : NumberLong(37)\n },\n \"lastCommittedWallTime\" : ISODate(\"2023-04-14T03:06:44.455Z\"),\n \"readConcernMajorityOpTime\" : {\n \"ts\" : Timestamp(1681441604, 1),\n \"t\" : NumberLong(37)\n },\n \"appliedOpTime\" : {\n \"ts\" : Timestamp(1681441604, 1),\n \"t\" : NumberLong(37)\n },\n \"durableOpTime\" : {\n \"ts\" : Timestamp(1681441604, 1),\n \"t\" : NumberLong(37)\n },\n \"lastAppliedWallTime\" : ISODate(\"2023-04-14T03:06:44.455Z\"),\n \"lastDurableWallTime\" : ISODate(\"2023-04-14T03:06:44.455Z\")\n },\n \"lastStableRecoveryTimestamp\" : Timestamp(1681441554, 1),\n \"electionCandidateMetrics\" : {\n \"lastElectionReason\" : \"priorityTakeover\",\n \"lastElectionDate\" : ISODate(\"2023-04-14T01:07:43.324Z\"),\n \"electionTerm\" : NumberLong(37),\n \"lastCommittedOpTimeAtElection\" : {\n \"ts\" : Timestamp(1681434458, 1),\n \"t\" : NumberLong(36)\n },\n \"lastSeenOpTimeAtElection\" : {\n \"ts\" : Timestamp(1681434458, 1),\n \"t\" : NumberLong(36)\n },\n \"numVotesNeeded\" : 2,\n \"priorityAtElection\" : 1,\n \"electionTimeoutMillis\" : NumberLong(10000),\n \"priorPrimaryMemberId\" : 1,\n \"numCatchUpOps\" : NumberLong(0),\n \"newTermStartDate\" : ISODate(\"2023-04-14T01:07:43.589Z\"),\n \"wMajorityWriteAvailabilityDate\" : ISODate(\"2023-04-14T01:07:44.485Z\")\n },\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"MONGO-PC1:27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 7163,\n \"optime\" : {\n \"ts\" : Timestamp(1681441604, 1),\n \"t\" : NumberLong(37)\n },\n \"optimeDate\" : ISODate(\"2023-04-14T03:06:44Z\"),\n \"lastAppliedWallTime\" : ISODate(\"2023-04-14T03:06:44.455Z\"),\n \"lastDurableWallTime\" : ISODate(\"2023-04-14T03:06:44.455Z\"),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"electionTime\" : Timestamp(1681434463, 1),\n \"electionDate\" : ISODate(\"2023-04-14T01:07:43Z\"),\n \"configVersion\" : 8,\n \"configTerm\" : 37,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n },\n {\n \"_id\" : 1,\n \"name\" : \"MONGO-PC2:27017\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 7157,\n \"optime\" : {\n \"ts\" : Timestamp(1681441604, 1),\n \"t\" : NumberLong(37)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(1681441604, 1),\n \"t\" : NumberLong(37)\n },\n \"optimeDate\" : ISODate(\"2023-04-14T03:06:44Z\"),\n \"optimeDurableDate\" : ISODate(\"2023-04-14T03:06:44Z\"),\n \"lastAppliedWallTime\" : ISODate(\"2023-04-14T03:06:44.455Z\"),\n \"lastDurableWallTime\" : ISODate(\"2023-04-14T03:06:44.455Z\"),\n \"lastHeartbeat\" : ISODate(\"2023-04-14T03:06:48.888Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2023-04-14T03:06:48.160Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncSourceHost\" : \"MONGO-PC1:27017\",\n \"syncSourceId\" : 0,\n \"infoMessage\" : \"\",\n \"configVersion\" : 8,\n \"configTerm\" : 37\n 
},\n {\n \"_id\" : 2,\n \"name\" : \"MONGO-PC3:27017\",\n \"health\" : 1,\n \"state\" : 7,\n \"stateStr\" : \"ARBITER\",\n \"uptime\" : 7157,\n \"lastHeartbeat\" : ISODate(\"2023-04-14T03:06:48.863Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2023-04-14T03:06:48.711Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : 8,\n \"configTerm\" : 37\n }\n ],\n \"ok\" : 1,\n \"operationTime\" : Timestamp(1681441604, 1)\n}\n{\n \"_id\" : \"reps0\",\n \"version\" : 8,\n \"term\" : 37,\n \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"MONGO-PC1:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"secondaryDelaySecs\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 1,\n \"host\" : \"MONGO-PC2:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 0.5,\n \"tags\" : {\n\n },\n \"secondaryDelaySecs\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 2,\n \"host\" : \"MONGO-PC3:27017\",\n \"arbiterOnly\" : true,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 0,\n \"tags\" : {\n\n },\n \"secondaryDelaySecs\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n \"protocolVersion\" : NumberLong(1),\n \"writeConcernMajorityJournalDefault\" : true,\n \"settings\" : {\n \"chainingAllowed\" : true,\n \"heartbeatIntervalMillis\" : 2000,\n \"heartbeatTimeoutSecs\" : 10,\n \"electionTimeoutMillis\" : 10000,\n \"catchUpTimeoutMillis\" : -1,\n \"catchUpTakeoverDelayMillis\" : 30000,\n \"getLastErrorModes\" : {\n\n },\n \"getLastErrorDefaults\" : {\n \"w\" : 1,\n \"wtimeout\" : 0\n },\n \"replicaSetId\" : ObjectId(\"64559d997743r6d2t0a36aez\")\n }\n}\n",
"text": "Hello Everyone, At first, Thank you all for helping in this forum during the past few weeks,Second I have an issue regarding my replica set and I hope to understand and fix it.I have a 3 nodes replica set ( 3 Devices in the same network )1 Primary (Priority 1) MONGO-PC1:27017\n1 Secondary (Priority 0.5) MONGO-PC2:27017\n1 Arbiter (Priority 0) MONGO-PC3:27017Note: Before I adding the arbiter node I write the following command:And the following to adjust Priority of the secondary node:Now When The arbiter node and The Secondary node are goes down for maintenance, the Primary node changes to Secondary status and no way to go primary again without the Arbiter node & My question is: how to make the primary node to still primary even the arbiter & secondary goes down. I think it may be something related with WriteConcern or Priority or Votes as I did not get the concept of it yet.Rs.Status();Rs.conf():Thanks for helping",
"username": "Mina_Ezeet"
},
{
"code": "",
"text": "You should have majority nodes up i,e 2 in a 3 node replica\nYour primary will not remain primary if two nodes are down\nCheck this thread",
"username": "Ramachandra_Tummala"
}
] | ReplicaSet 3 Nodes Issue | 2023-04-14T03:13:05.418Z | ReplicaSet 3 Nodes Issue | 634 |
null | [
"connector-for-bi"
] | [
{
"code": "",
"text": "Hi,\nI am currently working on to integrate mongoDB with power BI. I have configured the BI connector and was able to connect. But after connecting few fields and tables are missing.\nPlease do help on this issue.",
"username": "Swapna_Sivalingam"
},
{
"code": "",
"text": "Hi @Swapna_Sivalingam welcome to the community!Have you followed the tutorial in https://www.mongodb.com/docs/bi-connector/current/local-quickstart/ to setup the connector and https://www.mongodb.com/docs/bi-connector/current/connect/powerbi/ to connect to PowerBI Desktop?But after connecting few fields and tables are missing.Can you elaborate on how this looks like?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi everyone,I also have this problem: Using MongoDB Atlas, when I add a new field in an existing collection, add information in this new field and connect with MongoDD PowerBI connector, this new field does not appear, or it appears sometimes and sometimes not.This issue is happening often since I use Mongo Atlas.Does anyone know why?Thanks in advance",
"username": "Mario_Gonzalez"
},
{
"code": "",
"text": "These are my old troubleshooting notes for Power BI to Atlas from mid 2022, I haven’t touched Power BI in a hot minute since I finished the support Ramping Plan for it.5 Things to look for connecting Power BI before opening HELP ticket.",
"username": "Brock"
}
] | MongoDB connector issue with power bi | 2022-10-28T14:19:31.957Z | MongoDB connector issue with power bi | 2,763 |
null | [
"queries"
] | [
{
"code": "",
"text": "So I was using this way to find my the documents and it was working perfectly fine:\nAs an example:\nconst sessions = await safeerSessionModel.find({\nsafeer: {\nid: safeer.id,\nname: safeer.name,\n},\n});\nAs of today it is not working, when I run the query it returns 0 documentsbut when I change it to following\nconst sessions = await safeerSessionModel.find({\n“safeer.id”: safeer.id,\n“safeer.name”: safeer.name,\n},\n});\nit starts working again and shows me the list of expected documents.\nI am unable to figure out why it this happening",
"username": "M_khaziq"
},
{
"code": "{ _id: 0, safeer: { id: 1, name: 2 }}\n{ _id: 3, safeer: { name: 2, id: 1 }}\n{ _id: 0, safeer: { id: 1, name: 2, x: 4 }}\n/* looking for object safeer equals to id:1,name:2 */\nc.find( { safeer : { id :1 , name : 2}})\n/* only object with same fields and same order is returned */\n{ _id: 0, safeer: { id: 1, name: 2 }}\nc.find( { \"safeer.id\" :1 , \"safeer.name\" : 2})\n/* all documents are returned because they all have safeer.id:1 and safeer.name:2*/\n{ _id: 0, safeer: { id: 1, name: 2 }}\n{ _id: 3, safeer: { name: 2, id: 1 }}\n{ _id: 0, safeer: { id: 1, name: 2, x: 4 }}\n",
"text": "I do not know if it is your case with mongoose, but 2 objects are equals if they have the same fields and values in the same order. And this has been like that since how long I remember. Object c:{a:1,b:2} is not equal to c:{b:2,a:1} because the fields b and a are not in the same order. Also, object c:{a:1,b:2} is not equal to c:{a:1,b:2,d:3} are not equals because of the extra field.Perhaps, your model has change and new fields are in safeer or the field order has been altered. Or perhaps some data has been inserted or updated without the model, with a different order or with extra fields. The second query works because the equality is field equality rather than object equality.Starting collectionUsing object equalityUsing field value equalitySo the difference in result you see has always been like that, at least in plain mongo. You notice the difference now because your model has changed or your data has changed. I wrote in plain mongo, because you should not rule out that may be mongoose introduced a breaking change where using object equality is mapped differently from mongoose to mongo. May be mongoose reorder the fields so that they are always consitent.",
"username": "steevej"
},
{
"code": "",
"text": "in my model the field safeer is of type object, I have not given fixed key value pairs to it,example: something = new mongoose.Schema({\nsafeer: Object,\n})but before this object used to store only 2 values, recently I added new value to it example phone no, only the new documents are having the safeer object with 3 key val pairs, old ones only have 2.\nCan this cause an issue ?",
"username": "M_khaziq"
},
{
"code": "",
"text": "I added new value to it example phone noCan this cause an issue ?Yes, as I already mentioned2 objects are equals if they have the same fields and values in the same orderbject c:{a:1,b:2} is not equal to c:{a:1,b:2,d:3} are not equals because of the extra field.perhaps some data has been inserted or updated … with extra fields",
"username": "steevej"
},
{
"code": "",
"text": "Thanks, I have updated it, works now.",
"username": "M_khaziq"
},
] | Embedded document query not working! | 2023-04-13T10:11:53.812Z | Embedded document query not working! | 970 |
null | [] | [
{
"code": "",
"text": "Morning,\nIf we want to rebuild our collection but already have an FTS index over the top of the collection, how to achieve this? Currently we drop the collection by database.DropCollection(“mycollection”) but found this also drops the FTS index?Thanks",
"username": "Joe_Palin"
},
{
"code": "database.DropCollection(\"mycollection\")mongodumpdatabase.DropCollection(\"mycollection\")database.CreateCollection(\"mycollection\")database.Collection(\"mycollection\").Indexes.CreateOneAsyncIndexKeys.Textmongorestore",
"text": "When you drop a collection in MongoDB using database.DropCollection(\"mycollection\"), all indexes associated with that collection are also deleted. Therefore, dropping the collection will also remove the FTS index.To rebuild the collection without losing the FTS index, you can follow these steps:Create a backup of the existing collection data and FTS index configuration using the mongodump command. This will create a BSON dump of your data in a specific directory.Drop the collection using database.DropCollection(\"mycollection\").Recreate the collection with the desired schema using the database.CreateCollection(\"mycollection\") method.Create the FTS index again using the database.Collection(\"mycollection\").Indexes.CreateOneAsync method. You can use the IndexKeys.Text method to create a text index on one or more fields.Restore the data from the backup using the mongorestore command. This will populate the newly created collection with the previous data.By following these steps, you can rebuild your collection without losing the FTS index. However, note that this process can be time-consuming and resource-intensive, depending on the size of your data. Therefore, it’s important to plan and execute the rebuild process carefully to avoid any potential data loss or downtime.You made me second guess myself in rechecking if I was correct, and yes I did end up going into a GitHub. io page. How did I get to them? I don’t even have a clue, but the info can be referenced in them but do let me know if you can open them because I’ve never gone to the GitHub version of the MDB website before tonight.I guess I can’t include the GitHub webpages due to number of link restrictions, but the GitHub website for the C# Driver matches what I’ve said.Sources:\nhttps://www.mongodb.com/article/mongodb-fts-index-how-to-rebuild-a-collection-without-losing-an-fts-index",
"username": "Brock"
}
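A rough mongosh equivalent of steps 2–4 above, shown only to illustrate the sequence (the question itself uses the C# driver, and the field names in the text index are placeholders):

```javascript
// 2. Drop the collection (this also removes its indexes)
db.mycollection.drop();

// 3. Recreate the collection
db.createCollection("mycollection");

// 4. Recreate the text index (field names are placeholders)
db.mycollection.createIndex({ title: "text", description: "text" });
```

The dump and restore around these steps (1 and 5) would still be done with the mongodump and mongorestore command-line tools as described above.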
] | FTS Index deleted when CollectionDropped | 2023-04-13T06:52:02.376Z | FTS Index deleted when CollectionDropped | 590 |
null | [] | [
{
"code": "",
"text": "Hi\nwe have a RS with a given number of nodes, and multiple production applications querying it through proper connection strings.\nLet’s say I want to physically change only a couple of them (HW upgrade).My idea was to add the nodes, change the URI strings to accomodate the target topology, and then remove the nodes that are not addressed by any URI anymore.I can add the new nodes, replicate config and initial set of data and then rs.reconfig() my RS. My RS will run on all nodes at this time. These nodes will already be reachable from the RS and the applications.But , at this specific point, which can last some time, the applications will now lack these new nodes in their URI. I can set the priority to 0 in order for the new nodes to never become primary, because I can think that the driver will crash if their current connection string has no primary.What is the real impact (as long as these new servers never get promoted to PRIMARY) ? Is there any possibility of a driver failing because the RS string doesn’t match all of the current RS members ?Would you handle that differently ?",
"username": "MBO"
},
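A minimal mongosh sketch of the sequence described in the question, run against the current primary; the host names are placeholders:

```javascript
// Add the new nodes so they can never be elected while the application URIs
// are still being updated
rs.add({ host: "new-node-1:27017", priority: 0, votes: 0 });
rs.add({ host: "new-node-2:27017", priority: 0, votes: 0 });

// Later, once initial sync is done and every application URI lists the new hosts,
// restore normal priority/votes and drop the members that are no longer referenced
let cfg = rs.conf();
// ...adjust cfg.members[i].priority and cfg.members[i].votes here...
rs.reconfig(cfg);
rs.remove("old-node-1:27017");
```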
{
"code": "rs.conf()PRIMARYSECONDARY",
"text": "Hi @MBO and welcome to the MongoDB Community forum!!As per my understanding from the scenario, I would recommend you looking at the documentation on Maintenance of Replica set to understand how the performance is maintained on the replica sets.However, in addition to above, could you help me with some more details on the replica set configuration like:Regards\nAasawari",
"username": "Aasawari"
}
] | RS partial node upgrade : URI impact | 2023-04-04T06:59:48.387Z | RS partial node upgrade : URI impact | 670 |
null | [
"queries",
"data-modeling",
"swift"
] | [
{
"code": "class Gear: Object {\n @Persisted var in_use = false\n}\n",
"text": "I am going to throw this out as a general question: is there a best practice to handle any kind of object (record) locking - optimistic, pessimistic or otherwise?Here’s the scenarioSuppose theres a multi user, multi location app for selling gears and pulleys. Users can enter orders for those items - the order can be abandoned (user clicks Cancel) or saved (user clicks OK)Imagine a UI where the user clicks ‘New Order’ and can then enter some line items on the order - those exist in memory only while the order is being created and then persisted when the order is saved.A customer comes in and orders a gear and a pulley. The user adds a gear and a pulley to the order.While they are entering the customers order, another user at another workstation decides the company no longer carry gears and deletes the gear object.But wait - that object is already on the order (not saved yet, just in memory)Ideally, what should happen is that the user whose attempting to delete the gear should be told “That gear is in use and cannot be deleted at this time”But what happens is; they delete it, which invalidates the object - the user’s app where the order is being entered likely crashes or perhaps the item just disappears off the screen (depending on how it’s coded)One solution is to keep a “in use” property flag on the object, and when it’s added to an order in process, set the flag to true. It’s a simple solution; before allowing an object to be deleted only of that property is set to false. If it’s “in use” stop the user from deleting it.However. What if the “in use” flag is set and the app crashes, or the power goes out, or the user force-quts the app or any number of things. They flag would remain set, even if the item was not in use.Is there a strategy here?How can objects be flagged as in use so they are not inadvertently deleted while they are being used?SQL offers record locking as a generally built in option. Other NoSQL databases offer user persistence so in the above example, an app is disconnected from the server, a server function can run - in this case upon user/app disconnect it could reset any in-use properties to false.Realm/Mongo doesn’t seem to offer anything like that so how are other handling it?",
"username": "Jay"
},
{
"code": "let realm = await Realm.open(config);\nlet orderId = ...; // ID of the order being edited\nlet order = realm.objectForPrimaryKey(\"Order\", orderId);\nlet items = order.items;\nlet modifiedItems = items.filtered(\"modificationDate > $0\", user.lastSyncDate);\nif (modifiedItems.length > 0) {\n // Notify the user that the data has been modified and they need to refresh\n return;\n}\nrealm.write(() => {\n // Update the order and its items\n order.customerName = ...;\n items[0].quantity = ...;\n \n // Set the modification date on the objects\n let now = new Date();\n order.modificationDate = now;\n items.forEach((item) => item.modificationDate = now);\n});\n// Get a reference to the Realm instance\nlet realm = try! Realm()\n\n// Get the object to edit\nlet gear = realm.object(ofType: Gear.self, forPrimaryKey: gearId)\n\n// Store the current transaction version\nlet version = realm.configuration.currentTransactionVersion\n\n// Begin a write transaction\nrealm.beginWrite()\n\n// Make changes to the object\ngear.name = \"New Gear Name\"\n\n// Check if the transaction version has changed\nif version != realm.configuration.currentTransactionVersion {\n // Transaction version has changed - handle conflict\n // Display a message to the user and prompt them to review changes\n // The user can then choose to merge or discard their changes\n} else {\n // Transaction version has not changed - commit changes\n try! realm.commitWrite()\n}\nclass ObjectLock: Object {\n @Persisted(primaryKey: true) var objectId: String\n @Persisted var lockedByUser: String\n @Persisted var lockStartTime: Date\n}\nObjectLockObjectLocklet realm = try! Realm()\nlet objectToModify = realm.object(ofType: MyObjectType.self, forPrimaryKey: objectId)\n\n// Perform the modification inside a write transaction\ntry! realm.write {\n objectToModify.propertyToModify = newValue\n}\nclass MyObjectType: Object {\n @Persisted(primaryKey: true) var id: String\n @Persisted var propertyToModify: String\n @Persisted var version: Int // optimistic locking version number\n}\n\nlet realm = try! Realm()\nlet objectToModify = realm.object(ofType: MyObjectType.self, forPrimaryKey: objectId)\n\n// Modify the object's properties\nobjectToModify.propertyToModify = newValue\n\n// Increment the object's version number\ntry! realm.write {\n objectToModify.version += 1\n}\n\n// Save the changes to the object, checking the version number in the process\ntry! realm.write {\n realm.add(objectToModify, update: .modified)\n}\nlet gear = realm.object(ofType: Gear.self, forPrimaryKey: gearId)\n\n// Add a transaction observer to the gear object\nlet observerToken = gear?.observe { change in\n switch change {\n case .change(let properties):\n // The gear object has changed, handle the change\n // This could involve updating the UI or taking some other action\n case .deleted:\n // The gear object has been deleted, handle the deletion\n // This could involve removing the object from the UI or taking some other action\n case .error(let error):\n // There was an error observing the transaction, handle the error\n // This could involve displaying an error message or taking some other action\n }\n}\n\n// Make changes to the gear object\ntry! realm.write {\n gear?.inUse = true\n}\n\n// Save the changes and release the observer\ntry! realm.commitWrite(withoutNotifying: [observerToken])\ncommitWrite(withoutNotifying:)withoutNotifying",
"text": "@Jay They do actually… I’ve written a lot about this stuff already in my book I’ve been writing, but here’s some free excerpts from it:One way to handle object locking in Realm is by using transactions. Transactions provide atomicity and isolation, which can help ensure data integrity in multi-user scenarios. Here’s an example of how you could implement optimistic locking in Realm:By setting the modification date on the objects, you can detect whether they have been modified by another user since the current user started editing them. If any objects have been modified, you can notify the user that they need to refresh the data before saving their changes.This is just one example of how you could handle object locking in Realm. The approach you choose will depend on the specific needs of your application and the types of modifications you need to support.Another approach to object locking in Realm is to use transaction versions. When a transaction is started, Realm increments the transaction version, and any changes made in that transaction are tagged with the current version. You can then use this version to track changes and prevent conflicts.For example, when a user begins editing an object, you could store the current transaction version. When the user saves their changes, you can check the transaction version to see if any changes have been made since the user began editing the object. If so, you can prompt the user to review the changes and decide whether to merge or discard their changes.Here’s an example of how you could implement this approach in code:This approach allows you to track changes to objects and prevent conflicts when multiple users are editing the same objects simultaneously. However, it does require additional code to handle conflicts and merge changes, so it may not be the best approach for all applications.Ultimately, the best approach for handling object locking in Realm will depend on the specific needs of your application and the types of modifications you need to support. It’s important to carefully consider the potential risks and benefits of each approach before choosing the one that’s right for your application.In addition to the strategies mentioned above, there are a few other things you can do to mitigate the risk of data conflicts and object deletion in Realm:Implement proper error handling: Whenever an object is deleted or modified, make sure to handle any errors that may occur. For example, if a user tries to delete an object that is currently in use by another user, make sure to catch the error and provide an appropriate message to the user.Use transactions: Transactions provide a way to group multiple write operations together into a single atomic unit. This can help to ensure that modifications are applied consistently and reliably, even in the face of conflicts or errors.Use versioning: By including a version number or timestamp with each object, you can ensure that conflicting modifications are detected and resolved appropriately. For example, if two users attempt to modify the same object simultaneously, you can compare the version numbers or timestamps to determine which modification should take precedence.Overall, the key to handling object locking in Realm is to design a system that is both robust and flexible. 
By carefully considering the specific needs of your application and the potential risks and benefits of different approaches, you can create a solution that meets the needs of your users while minimizing the risk of data loss or corruption.Here are a few more coding examples for handling object locking in Realm:One approach to handling object locking in Realm is to create a dedicated locking table that keeps track of which objects are currently being modified. This table could include information such as the object ID, the user ID of the user who is currently modifying the object, and the time when the modification started.Here’s an example of what the schema for such a table might look like:To lock an object, you would create a new ObjectLock object with the appropriate values and add it to the Realm. To check if an object is currently locked, you would query the ObjectLock table for any locks that are currently in place for that object.Another approach to handling object locking in Realm is to use transactions to ensure that modifications are made atomically. Transactions provide a way to group multiple modifications together into a single, atomic operation, which can help to ensure that no other users are modifying the same objects at the same time.Here’s an example of how you might use transactions to modify an object in Realm:By wrapping the modification inside a write transaction, you can ensure that no other users are modifying the same object at the same time.Finally, another approach to handling object locking in Realm is to use optimistic locking. With optimistic locking, you assume that no other users are modifying the same objects at the same time, but you include a version number or timestamp with each object. When a user saves changes to an object, you check the version number or timestamp to make sure that no other changes have been made in the meantime.Here’s an example of how you might use optimistic locking in Realm:By checking the object’s version number before saving changes, you can detect if any other changes have been made in the meantime and handle the conflict appropriately.And this is another example for transactional logic for this and some more explanation about it, and how to handle it.Another approach to handling object locking in Realm is to use transaction observers. Transaction observers are a powerful feature in Realm that allow you to listen for changes to specific objects or collections and take action in response to those changes.Here’s an example of how you could use transaction observers to handle object locking in Realm:In this example, we add a transaction observer to the Gear object and listen for changes to the object. When the observer is notified of a change, we can handle the change appropriately, such as updating the UI or taking some other action.Before making changes to the Gear object, we start a write transaction and obtain a reference to the transaction observer token. We then make our changes to the object and save the changes using the commitWrite(withoutNotifying:) method. By passing in the observer token to the withoutNotifying parameter, we ensure that the observer is not notified of the changes we just made.This approach ensures that we are notified of any changes made to the Gear object, regardless of whether they were made by the current user or another user. 
By handling these changes appropriately, we can ensure that the data remains consistent and that users are not able to inadvertently modify data that is already in use.- DevOps Databases and Mobile Apps - A guide for everyone. -The actual repo itself is mostly empty for public viewing, but there will be free chapters put into the repo for it later on, but this stuff above should answer your questions quite cohesively. Let me know if you have any questions.",
"username": "Brock"
},
{
"code": "",
"text": "@BrockThanks for the awesome reply! Super great information.I’ve read it over several times and I can’t see where it addresses the core issue though - perhaps my question was too vague. For example, suppose this happensand before the user saves the orderSay the user takes 5 minutes to edit the order and during that 5 minute time period, another user simply deletes the objects the editing user has read in? There’s nothing preventing those read in objects from being deleted while “in use”, hence the title of the post: Object Lock; prevent deleting while in useThat being said, this part is more toward the questionAnd is similar to the solution I mentioned in my question, but it has a flaw:However. What if the “in use” flag is set and the app crashes, or the power goes out, or the user force-quts the app or any number of things. They flag would remain set, even if the item was not in use.In SQL, records read in can be locked - that prevents them from being altered or deleted but they are still available for use as ‘read-only’ records. This is obviously a very clean and simple solution.NoSQL databases are a bit more challenging and generally don’t offer SQL record/document locking like SQL. However, many offer a presence system where the server “know” about client connection status and when that status changes, can perform an action.So if a client connects and an order is created and 5 OrderItems are added, the linked Gear objects could be set to “in-use” (via a locking table as mentioned in your post). If there’s a d/c, the server knows that and can simply clear that table, which resets those items.I hope that clarify’s the question a bit further.",
"username": "Jay"
},
{
"code": "try! realm.write {\n //take some time and do some stuff with the gear\n gear.name = \"top gear\"\n}\ntry! realm.write {\n realm.delete(gear)\n}\n// Make changes to the gear object\ntry! realm.write {\n gear?.inUse = true\n ---> write is committed here <---\n}\n\n// Save the changes and release the observer\ntry! realm.commitWrite(withoutNotifying: [observerToken])\ntry! realm.write(withoutNotifying: [token]) {\n // ... write to realm\n}\n",
"text": "@BrockI have some followup questions if you don’t mind.In your post, the first two presented options for ‘locking’ areOne way to handle object locking in Realm is by using transactionsandAnother approach to object locking in Realm is to use transaction versionsBoth of those options seem to lean toward just notifying a user the object they are editing has been changed - it doesn’t prevent a change (e.g. lock) - just lets them know the object has been changed.Am I understanding these are more of a reactive approach than proactively prevent deleting in the first place?Wouldn’t simply attaching an observer (as mentioned) to each object do the same thing? If another user modifies the object, that will fire an event and the code/user would know the object was changed?The other question is about the word ‘transaction’. Realm does not have read transactions at all and the only transactions are within a write, and write transactions are first come-first-served. So if I open a write transaction to modify an objectand before that transaction completes another user does this…And the gear object is now deleted; believe it or not, the first transaction will complete without an error but there will be no gear object. This is what I am trying to prevent.oh… on this. I am not sure commitWrite will work here as the first section of code will commit when the closure ends.I think you may want thisAny tips or thoughts would be appreciated as this is a big issue for our project and so far, we have been unable to find a complete solution to prevent an object from being deleted while in use.",
"username": "Jay"
},
{
"code": "commitWritewritewithoutNotifying",
"text": "Yes, you are correct that the first two options presented for locking in Realm are reactive and not proactive. They notify the user that the object has been changed, but they do not prevent other users from modifying the object while it is being edited by another user.Attaching an observer to each object can also achieve the same result of notifying the user when the object has been changed. However, it would require more code to implement compared to using transactions or transaction versions.Regarding transactions in Realm, you are correct that Realm does not have read transactions and that write transactions are first come, first served. In the scenario you described, where one user is modifying an object while another user deletes it, the first transaction will complete without an error, but the object will no longer exist in the Realm.To prevent this, you can implement a lock mechanism using transactions or transaction versions, where the object is locked when a user starts editing it and unlocked when the user finishes editing it. This will prevent other users from modifying or deleting the object while it is being edited.As for the commitWrite method, you are correct that it will commit the changes to the Realm when the closure ends. If you want to write to the Realm without notifying the observer, you can use the write method with the withoutNotifying parameter, as you suggested. This will write to the Realm without notifying the observer, which can be useful in scenarios where you want to prevent notifications from being sent while a user is editing an object.As far as Transactions go, you would construct the logic for the transactions with a blocking measure to prevent it occurring logically. The main issue with Realm or ANY NoSQL solution, is that none of them possess locking like you’re looking for organically to them. So you have to create the logic.Everything that is done from this point forward is largely going to have to be custom solutions, above are examples of solutions you can develop and execute.",
"username": "Brock"
},
{
"code": "",
"text": "Thanks @Brock super good info and really helpful.We’ve been doing some testing where we have a table of objects that are 'in use\" and before an object is deleted, we check for it in the table using some versioning per your posts above.However, we’re run into additional issues because while Realm Writes are Atomic, the Reads are not (is there such a thing?). In other words, the delays in reading and writing can get things out of sync quickly and if there are 50 users all working on data… it just becomes a mess.One thing I did come across is findAndModify which is atomic; a document is found and modified concurrently. This would be a help.However, it doesn’t appear (to my knowledge) that’s a Realm ‘thing’. It feels like an Upsert would have the same functionality but it’s not clear from the documentation if it’s a guaranteed atomic call.",
"username": "Jay"
},
{
"code": "findAndModifyupsertupsert",
"text": "You’re welcome! I’m glad the information was helpful.Regarding your issue with syncing reads and writes in Realm, you’re correct that Realm does not currently support atomic reads. However, there are some strategies you can use to mitigate this issue:Minimize the duration of reads: You can try to minimize the duration of reads by only fetching the data that you need and using indexes to speed up queries. Additionally, you can consider caching frequently accessed data in memory to avoid unnecessary reads.Use optimistic locking: Optimistic locking is a technique where you add a version number or timestamp to each document and use this to detect conflicts. When you write to a document, you include the version or timestamp, and if the document has been updated since you last read it, the write will fail. You can then retry the write with the updated data. This approach can help reduce conflicts and ensure that writes are consistent.Use transactions: Transactions allow you to perform a series of writes atomically, ensuring that all writes are either committed or rolled back. You can use transactions to update multiple documents at once, which can help ensure that reads and writes are consistent.Regarding the findAndModify function, it is not currently supported in Realm. However, as you mentioned, the upsert function can be used to achieve similar functionality. According to the documentation, upsert operations are implied atomic and will either update an existing document or insert a new document if it doesn’t already exist. If multiple threads are trying to update the same document, the last write will win. However, if you’re using optimistic locking, conflicts can be detected and resolved appropriately.However, this is assuming the way it’s written in Kotlin’s SDK documentation isn’t written in error as it implies to me that upsert is atomic.",
"username": "Brock"
}
] | Object Lock; prevent deleting while in use | 2023-04-10T22:04:20.639Z | Object Lock; prevent deleting while in use | 1,263 |
null | [] | [
{
"code": "",
"text": "Hello Team,We are looking at Mongo DB Atlas rest endpoints. Could you please confirm is Rest Endpoints supported in Mongo DB ATLAS is stable, good in performance ? and do you have any futiure plan to deprecate or remove.Thanks,\nShubhangi",
"username": "Shubhangi_Pawar"
},
{
"code": "",
"text": "Hello @Shubhangi_Pawar ,Welcome to The MongoDB Community Forums! I assume you’re referring to the MongoDB Atlas Data API. If so, I would refer to the When To Use the Data API documentation for more information to confirm if it suits your use case. In saying so, It provides a simple and easy-to-use way to interact with MongoDB Atlas programmatically and enabling developers to automate administrative tasks.As far as I know, MongoDB has no plans to deprecate or remove the Data API in the near future. In fact, MongoDB is continuing to invest in its APIs, including the Data API, to make them even more powerful and flexible.With regards to the Data API, you may find the following resources useful:Learn how to send your first REST-like API calls to a MongoDB Atlas database with the new Data API.MongoDB Atlas's fully managed cloud services built to help you run code, integrate apps, and connect to your data. Take advantage of the generous free tier today.This article introduces the Atlas Data API and describes how to enable it and then call it from cURL.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hi Tarun,Thanks for your reply.Yes we are referring MongoDB Atlas [Data API (https://www.mongodb.com/docs/atlas/api/data-api/#overview).1.We are looking for crud operation API through rest endpoints. Do we have open API documentation for crud rest API?Query:a.\tData API (beta) - seems very slow to access data - MongoDB Atlas App Services & Realm / Atlas Data API - MongoDB Developer Community Forums\nb.\tData API is still slow in v1 - MongoDB Atlas App Services & Realm / Atlas Data API - MongoDB Developer Community ForumsThanks,\nShubhangi Pawar",
"username": "Shubhangi_Pawar"
},
{
"code": "",
"text": "1.We are looking for crud operation API through rest endpoints. Do we have open API documentation for crud rest API?You can take a look at below link for crud operations via Data API, I have shared additional resources in my previous reply that can also help you with the setup and integration.I’ve replied to the post.The threads you shared are nearly a year old and as we are constantly improving the services, recently we have not seen any such API slowness issues.In case you face any challenges, while working on this you can reach out to us on this community forums or in case you need in-depth technical help/architectural suggestions, you can reach out to us at Contact Us | MongoDB .",
"username": "Tarun_Gaur"
},
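For reference, a minimal sketch of a Data API CRUD call from Node.js (18+, which has fetch built in; run it inside an async function). The App ID, API key, cluster name, and namespace below are placeholders to replace with your own values:

```javascript
const response = await fetch(
  "https://data.mongodb-api.com/app/<your-app-id>/endpoint/data/v1/action/findOne",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": "<your-data-api-key>"
    },
    body: JSON.stringify({
      dataSource: "<your-cluster-name>",
      database: "sample_db",
      collection: "sample_collection",
      filter: { status: "active" }
    })
  }
);

const result = await response.json();
console.log(result.document); // the matched document, or null
```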
] | Is Mongo DB ATLAS RestAPI is stable, good in performance ? and do you have any plan to deprecate or remove it in future | 2023-04-10T12:53:49.119Z | Is Mongo DB ATLAS RestAPI is stable, good in performance ? and do you have any plan to deprecate or remove it in future | 939 |
null | [
"node-js",
"sharding"
] | [
{
"code": "'use strict';\n\nvar http = require('http');\nvar express = require('express');\nvar bodyParser = require('body-parser');\nvar swaggerize = require('swaggerize-express');\nvar path = require('path');\nvar fs = require('fs.extra');\nvar cors = require('cors');\nvar request = require('request-json');\n\nvar i18n = require('i18n-future').middleware();\nvar locale = require(\"locale\");\nvar ejwt = require('express-jwt');\n\nvar gm = require(\"gm\").subClass({\n imageMagick: true\n});\n\nvar formidable = require('formidable');\nvar stream = require('readable-stream');\nvar uuid = require('node-uuid');\n\nvar UploadedFile = require(__dirname + '/models/uploadedfile.js');\nvar Error = require(__dirname + '/models/error.js');\n\nvar mongodb = require('mongodb'),\n MongoClient = mongodb.MongoClient;\n\nvar sharedData = module.exports.sharedData = {\n credentials: {}\n};\n\nif (fs.existsSync(__dirname + '/env.json')) {\n \n var env = JSON.parse(fs.readFileSync(__dirname + '/env.json'));\n\n for (var key in env) {\n if (!process.env[key]) {\n process.env[key] = env[key];\n }\n }\n}\n\nsharedData.credentials.sendgridOption = {\n auth: {\n // api_user: process.env.SENDGRID_USERNAME,\n api_key: process.env.SENDGRID_PASSWORD\n }\n};\n\nvar tools = sharedData.tools = require('./tools.js');\nvar emailer = sharedData.emailer = require(__dirname + '/emailer/index.js');\nvar statics = sharedData.statics = require(__dirname + '/statics/index.js');\nvar cronJobs = require(__dirname + '/cronJobs.js');\n\n// Start Express\nvar app = express();\nvar server = http.createServer(app);\n\n// ########################\n// # Defining middlewares #\n// ########################\n// CORS\napp.use(cors());\n// Provide swagger access on /api route\n\napp.use('/', function (req, res, next) {\n // Mobile redirection\n // var MobileDetect = require('mobile-detect');\n // if (req.headers && req.headers['user-agent'] && req.path === '/') {\n // var md = new MobileDetect(req.headers['user-agent']);\n\n // if (md.phone()) {\n // return res.redirect('/mobile');\n // }\n // }\n next();\n});\n\napp.use('/api', express.static(__dirname + '/public/swagger'));\n\napp.use('/api/files', function (req, res, next) {\n var path = req.path;\n if (req.path.substr(0, 1) !== '/') {\n path = '/' + path;\n }\n res.redirect(process.env.GDM_FILES_ENDPOINT + path);\n});\n\n\napp.use('/config/swagger.json', express.static(__dirname + '/config/swagger.json'));\n\n// JWT Token\napp.use(ejwt({\n secret: process.env.JWT_SECRET\n}).unless({\n path: [\n '/favicon.ico',\n '/api/sitemap.xml',\n '/api/ping',\n '/api/register/mobile',\n '/api/register/web',\n '/api/auth/mobile',\n '/api/auth/web',\n '/api/auth/regenerate',\n '/api/admin',\n '/api/public',\n '/api/devis',\n /^\\/api\\/mobile\\/.*/,\n /^\\/api\\/admin\\/.*/,\n /^\\/api\\/public\\/.*/,\n /^\\/api\\/devis\\/.*/\n ]\n}));\n// i18n\napp.use(i18n);\n// Middleware to get the local of users\napp.use(locale(['en', 'en_US', 'fr', 'fr_FR']));\n// Url encoded\napp.use(bodyParser.urlencoded({\n extended: true\n}));\n\n\n\n// JSON Body parser\napp.use(bodyParser.json());\n\napp.post('/api/file', function (req, res) {\n var form;\n form = new formidable.IncomingForm();\n\n form.onPart = function (part) {\n var outStream, path = '';\n\n if (part.filename == null) {\n return form.handlePart(part);\n }\n \n\n outStream = new stream.PassThrough();\n part.on('data', function (buffer) {\n form.pause();\n return outStream.write(buffer, function () {\n return form.resume();\n });\n });\n part.on('end', 
function () {\n form.pause();\n return outStream.end(function () {\n return form.resume();\n });\n });\n\n\n // Generate time-based filename;\n var uploadedFile = new UploadedFile({\n originalFileName: part.filename,\n filename: uuid.v1() + '.' + part.filename.split('.').pop()\n });\n\n if (req.get('fileName')) {\n uploadedFile.filename = req.get('fileName');\n }\n\n if (req.get('path')) {\n if (req.get('path').charAt(0) !== '/') {\n path += '/';\n }\n path += req.get('path');\n } else {\n path += '/';\n }\n\n path += uploadedFile.filename;\n console.log(\"received part: \" + part.filename + \", uploading to OVH at: \" + path);\n\n outStream.pipe(sharedData.tools.uploadFile(path, function (success, data) {\n console.log(success, data)\n\n if (success === false) {\n var error = new Error();\n error.code = 400;\n error.message = req.translate('error.unexpected');\n error.data = data;\n\n res.status(error.code).send(error);\n return;\n }\n\n return res.jsonp(uploadedFile);\n }));\n };\n form.on('error', function (err) {\n console.log(err);\n var error = new Error();\n error.code = 400;\n error.message = req.translate('error.unexpected');\n error.data = err;\n\n res.status(error.code).send(error);\n });\n form.on('end', function () {\n console.log('form end');\n });\n\n form.parse(req);\n});\n\n// Connect to MONGO DB\nMongoClient.connect(process.env.MONGODB_URL, {\n mongos: {\n ssl: true,\n sslValidate: false,\n poolSize: 2,\n \"socketOptions\": {\n \"keepAlive\": 120\n }\n },\n \"server\": {\n \"socketOptions\": {\n \"autoReconnect\": true,\n \"keepAlive\": 120\n }\n }\n },\n function (err, db) {\n if (err) {\n console.log(err);\n } else {\n sharedData.mongo = {\n db: db,\n oid: mongodb.ObjectID\n };\n\n // Create this object for compatibility with old api\n sharedData.manager = {\n app: app,\n server: server\n };\n\n require('./handlers/manager/server.js');\n // Initialize routes with swagger document\n app.use(swaggerize({\n api: path.resolve('./config/swagger.json'),\n handlers: path.resolve('./handlers')\n }));\n\n server.listen(process.env.PORT || 8022, function () {\n console.log((new Date()) + \" > Server ready to accept requests on port \" + (process.env.PORT || 8022));\n\n // start cronJobs\n cronJobs.init();\n });\n }\n }\n);\n",
"text": "Hello, today we upgraded from mongodb driver nodejs (2.1 to 3.7).Sadly we got this error MongoParseError: URI malformed, cannot be parsed.\nI tried many things.Could you help us given this piece of code please?and here is how is formatted my connection string in the env.json file :\n“MONGODB_URL”:“mongodb+srv://:@sl-eu-lon-2-portal.1.dblayer.com:10096/gdm?ssl=true&retryWrites=true&w=majority”",
"username": "Gu_L"
},
{
"code": "MongoParseError: URI malformed, cannot be parsedmongodb+srv://:@sl-eu-lon-2-portal.1.dblayer.com:10096/gdm?ssl=true&retryWrites=true&w=majoritymongodb+srvmongodb://mongodb+srv://:@mongodb+srv://<user>:<pass>@",
"text": "@Gu_L,Given the error being thrown is MongoParseError: URI malformed, cannot be parsed the issue appears to be with the connection string.The connection string you’ve shared is mongodb+srv://:@sl-eu-lon-2-portal.1.dblayer.com:10096/gdm?ssl=true&retryWrites=true&w=majority which has the following issues:Addressing the above should make the URI parseable by the Node.js driver.",
"username": "alexbevi"
},
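For reference, a rough sketch of a parseable 3.x-style connection. The credentials are placeholders, the mongodb+srv:// form must not carry a port (while the plain mongodb:// form may), and any special characters in the password must be URL-encoded or the new URL parser will reject the string:

```javascript
const { MongoClient } = require('mongodb');

// Plain (non-SRV) form: explicit port is allowed, credentials must be URL-encoded
const uri = 'mongodb://myUser:myUrlEncodedPass@sl-eu-lon-2-portal.1.dblayer.com:10096/gdm?ssl=true';

MongoClient.connect(uri, { useNewUrlParser: true, useUnifiedTopology: true }, (err, client) => {
  if (err) return console.error(err);
  // In the 3.x driver the callback receives a client, not a db handle
  const db = client.db('gdm');
  // ... use db here ...
});
```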
{
"code": "",
"text": "Hello and thanks for your time,\nI put empty user and pass as it is sensitive information, im 100% sure with my logins, and i tried with and without +srv in the connection string.\nWe just updated the mongodb driver thats it. (because of an error of deprecation we updated from 2.1.7 to 3.7). The string didnt change.\nThanks in advance.",
"username": "Gu_L"
}
] | MongoParseError: URI malformed, cannot be parsed (upgraded from 2.1 to 3.7 mongodb nodejs) | 2023-04-13T15:52:44.974Z | MongoParseError: URI malformed, cannot be parsed (upgraded from 2.1 to 3.7 mongodb nodejs) | 1,889 |
null | [] | [
{
"code": "",
"text": "root@greenserver:~# apt updateE: Conflicting values set for option Signed-By regarding source MongoDB Repositories jammy/mongodb-org/6.0: /etc/apt/keyrings/mongodb.gpg !=E: Conflicting values set for option Signed-By regarding source MongoDB Repositories jammy/mongodb-org/6.0: /etc/apt/keyrings/mongodb.gpg != /etc/apt/keyrings/mongodb-archive-keyring.gpgE: The list of sources could not be read.root@greenserver:~#root@greenserver:~#",
"username": "suman_bhandari"
},
{
"code": "",
"text": "Could be duplicate entries or some missing entry\nCheck this link",
"username": "Ramachandra_Tummala"
}
] | Error in intalling mongodb in ubuntu 22.04 | 2023-04-13T12:36:49.824Z | Error in intalling mongodb in ubuntu 22.04 | 1,403 |
[
"atlas-cluster",
"atlas",
"connector-for-bi"
] | [
{
"code": "",
"text": "Hello all,We have an M10 tier with the BI connector enabled. We have also set up a DSN to connect PowerBI via ODBC. This is working fine and we have been able to read our documents and build a small dashboard.The problem starts when we publish the dashboard to PowerBI Service (the online version). Here we can see the plots and figures but we cannot refresh the data. We are trying to configure the connection credentials but the sign-in screen never ends (maybe we just need to whitelist the PowerBI IP ranges, but they change over time, so I guess this can be also an issue):\nimage545×603 14.2 KB\nIn addition, we have found some documentation saying that MongoDB connections to PowerBI Service may need a gateway to work, but I found nothing related to MongoDB Atlas. So, how can we set up this connection in order to be able to schedule the data refresh in PowerBI Service?",
"username": "Salvador_Ollero"
},
{
"code": "",
"text": "Hi Salvador,\nyou need to install a On–premise gateway in a pc to connect your powerbi service to MongoAtlas.Once you do this, go to Manage connections a Gateway in Powerbi Service and select your Gateway previously created. Then when you refresh your data in PBI Service, this will use the gateway to connect to Mongo Atlas through your local Mongo ODBC, just like when you refresh with Powerbi desktop in your computer",
"username": "Mario_Gonzalez"
}
] | Refresh data in PowerBI Service (online) connected with MongoDB Atlas | 2023-01-19T13:31:31.305Z | Refresh data in PowerBI Service (online) connected with MongoDB Atlas | 1,715 |
|
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "",
"text": "Hello everyone. I’m currently learning Express and MongoDB using Mongoose in my apps.My question is:Thanks",
"username": "Oussama_Louelkadi"
},
{
"code": "const userSchema = new mongoose.Schema({\n name: String,\n email: String,\n password: String,\n // other user properties\n budget: { type: Number, default: 1000 } //For example, consider 1000\n});\nbudgetuserSchemaapp.post('/register', async (req, res) => {\n // create new user\n const newUser = new User({\n name: req.body.name,\n email: req.body.email,\n password: req.body.password\n ...\n });\n\n // save new user to database\n await newUser.save();\n\n // create default budget for new user\n const newBudget = new Budget({\n userId: newUser._id,\n amount: 1000 // set default budget amount here\n ...\n });\n\n // save new budget to database\n await newBudget.save();\n",
"text": "Hello @Oussama_Louelkadi,Welcome to the MongoDB Community forums is there a built-in function on Mongoose doing the job, or should I add it manually?There is one scenario where you can store the budget in the same collection as the user, and that is when you can use the Mongoose Defaults option. Your schemas can define default values for certain fields. If you create a new document without setting that path, the default value will be used.For example, if your schema looks like this:In this case, budget is one of the fields in userSchema, so you can use the default value.However, as you mentioned in your post:the default budget document should be inserted into the Budget collectionIf you want to insert a value in a different collection that will be used after the registration process, you can achieve this by writing your own code in your registration route. Unfortunately, there is no built-in function in Mongoose that specifically handles this case.Sharing the sample code snippet for your reference:Please note that this is the sample snippet and it is recommended that you thoroughly test the code in a testing environment to ensure it meets all of your use cases and requirements before implementing it in production.I hope it helps. Let us know if you have any further queries.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Whenever a user register to my app, create a default budget for him | 2023-04-07T13:43:07.859Z | Whenever a user register to my app, create a default budget for him | 1,224 |
[
"database-tools",
"backup"
] | [
{
"code": "",
"text": "HelloI have create user with backup privilege :db.createUser({user:“translatorBckp”,pwd:“mypassword”, roles:[{ role: “backup”, db:“admin”}]})When I execute this command I have an error :mongodump -u ‘translatorBckp’ -p ‘mypassword’ --out=‘C:\\Temp\\test_backup’Error :\nimage775×89 13.6 KB\nIf I change role for translatorBckp to root, the backup is executing correctly.However, in documentation, backup role provides minimal privileges needed for backing up data.Thanks",
"username": "Julien_MARTIN"
},
{
"code": "",
"text": "Did you try with authentication database parameter?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "mongodumpYes I try with–authenticationDatabase adminand it’s same error.In user roles, I have just backup. Is there something missing ?",
"username": "Julien_MARTIN"
},
{
"code": "",
"text": "May be some version related issue\nWhat is your mongodb and mongodump version?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Mongodump’s version is 100.6.0 and mongodb is 6.0.1",
"username": "Julien_MARTIN"
},
{
"code": "",
"text": "Just ran into this issue, too. After updating our dev replica set from v5 to v6 and with Feature Compatibility Version set to “5.0” the existing mongodump cron job ran fine. After switching to Feature Compatibility Version set to “6.0”, the cron job failed and reported this error. It was using an account that just had the backup default role.After creating a custom role with the privilege “{ resource: { db: “config”, collection: “system.preimages” }, actions: [ “find” ] }” and adding the new role to the account used for backups, the cron job ran successfully.",
"username": "Doug_83685"
},
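For reference, a minimal mongosh sketch of the custom role described in the previous post; the role name is hypothetical, and the backup user name is the one from the earlier posts in this thread:

```javascript
// Connect with a user that can create roles, then switch to the admin database.
const admin = db.getSiblingDB("admin");

// Hypothetical role name; it grants only the extra privilege mongodump appeared
// to need on 6.0 (FCV "6.0") for config.system.preimages.
admin.createRole({
  role: "findPreimages",
  privileges: [
    { resource: { db: "config", collection: "system.preimages" }, actions: ["find"] }
  ],
  roles: []
});

// Attach it to the existing backup user alongside the built-in backup role.
admin.grantRolesToUser("translatorBckp", [{ role: "findPreimages", db: "admin" }]);
```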
{
"code": "",
"text": "Hello i am using mongo atlas 6.0.1 and getting same error can you please help me i tried creating the role you mentioned but it didnt allow me to do so can you please help me",
"username": "Gaurav_Sharma"
},
{
"code": "",
"text": "I had same problém here. My solution is upgrade package mongodb-database-tools (mongodump) from version 100.5.2 to 100.7.0",
"username": "Jan_Cervenka"
}
] | Mongodump error Unauthorized | 2022-08-26T12:53:01.914Z | Mongodump error Unauthorized | 5,741 |
|
[
"connecting"
] | [
{
"code": "function connect() {\n try {\n if (!URI_DEV) throw new Error('URI is not added to your env variables');\n const uri = URI_DEV;\n const options = {\n useUnifiedTopology: true,\n useNewUrlParser: true\n };\n const client = new MongoClient(uri, options);\n return client.connect();\n } catch (err) {\n throw new Error(err.message);\n }\n}\n",
"text": "i’m using mongodb: ^4.13.0\nhere is the method i’m using to connect to mongodb when initializing the servernow when i’m checking the logs, they show me this error, and i can’t figure out why.\n\nimage899×472 75.1 KB\nnote that i’m using the same config and version of code and it works as expected.\ncan someone help me please to solve this problem. Thank you!",
"username": "gaming_state"
},
{
"code": "",
"text": "can somebody help me solving this issue please",
"username": "gaming_state"
},
{
"code": "",
"text": "Hi @gaming_state,Based off the hostnames I presume this is an Atlas cluster. The error itself seems to be a generic connection failure one. Regarding integration with Vercel, could you check the following Integrate with Vercel documentation and more specifically perhaps the IP Access Lists in Atlas and IP Allow Lists in Vercel section?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "thank you so much, it worked for me",
"username": "gaming_state"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Issue mongodb connection when deploying sveltekit application to vercel | 2023-04-07T20:37:54.329Z | Issue mongodb connection when deploying sveltekit application to vercel | 1,388 |
|
null | [
"node-js"
] | [
{
"code": "exports = async function(arg){\nconst { Client } = require('@elastic/elasticsearch');\n try {\n const apiKey = context.values.get(\"ES_API_KEY\");\n const cloudId = context.values.get(\"ES_CLOUD_ID\");\n const client = new Client({\n cloud: {\n id: cloudID\n },\n auth: {\n apiKey\n }\n})\n return client;\n \n } catch(err) {\n console.log(\"Error occurred while executing:\", err.message);\n return { error: err.message };\n }\nfailed to execute source for 'node_modules/@elastic/elasticsearch/index.js': FunctionError: failed to execute source for 'node_modules/@elastic/transport/index.js': FunctionError: failed to execute source for 'node_modules/@elastic/transport/lib/connection/index.js': FunctionError: failed to execute source for 'node_modules/@elastic/transport/lib/connection/UndiciConnection.js': FunctionError: failed to execute source for 'node_modules/undici/index.js': FunctionError: failed to execute source for 'node_modules/undici/lib/api/index.js': FunctionError: failed to execute source for 'node_modules/undici/lib/api/api-request.js': FunctionError: Cannot find module 'async_hooks'",
"text": "I am trying to integrate @elastic/elasticsearch npm library in mongodb realm function. To achieve that I addded @elastic/elasticsearch as dependency.When I did so I get below error.failed to execute source for 'node_modules/@elastic/elasticsearch/index.js': FunctionError: failed to execute source for 'node_modules/@elastic/transport/index.js': FunctionError: failed to execute source for 'node_modules/@elastic/transport/lib/connection/index.js': FunctionError: failed to execute source for 'node_modules/@elastic/transport/lib/connection/UndiciConnection.js': FunctionError: failed to execute source for 'node_modules/undici/index.js': FunctionError: failed to execute source for 'node_modules/undici/lib/api/index.js': FunctionError: failed to execute source for 'node_modules/undici/lib/api/api-request.js': FunctionError: Cannot find module 'async_hooks'what is causing this issue?",
"username": "Faisal_Ansari"
},
{
"code": "",
"text": "It seems at this point of time mongodb functions does not support latest @elastic/elasticsearch version 8.Downgrading version to 7.17.0 resolved this issue.",
"username": "Faisal_Ansari"
}
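For reference, if the dependency is managed locally (for example before uploading a node_modules archive, or to mirror the version entered in the Add Dependency UI), the working version mentioned above can be pinned with a command like:

```sh
npm install @elastic/[email protected]
```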
] | @elastic/elasticsearch npm library not working in mongodb realm function | 2023-04-13T05:35:59.380Z | @elastic/elasticsearch npm library not working in mongodb realm function | 914 |
[
"migration"
] | [
{
"code": "",
"text": "I have a M0 Sandbox shared cluster hosted with GCP in Mumbai region.\nI want to transfer it to us-west.When I tried to do it via the dashboard, I’m not even getting an option to review my changes\nimage909×863 53.1 KB\nAny help would be appreciated here! Thanks",
"username": "Balaji_Jayakumar"
},
{
"code": "M0M2M5M10+M2M5mongodumpM0M0mongodumpM0mongorestoreM0mongodumpM0mongorestoreM0",
"text": "Hey @Balaji_Jayakumar,Welcome to the MongoDB Community Forums! As per the Move a Cluster to a Different Region documentation, Atlas supports changing a cluster’s region and cloud provider:However, if you don’t want to upgrade to M2/M5 and don’t want to input new payment details, one workaround can be to mongodump your current M0, delete the M0 after the mongodump is complete (delete only once you have verified it has all the data), create a new M0, and then mongorestore to it.Another safer option can be to create a new project, create a new M0, mongodump from the original M0 project, and then mongorestore to your new M0 project.mongodump Documentation\nmongorestore DocumentationHope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
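A hedged sketch of the dump-and-restore workaround described above; the connection strings and output path are placeholders to replace with your own values:

```sh
# Dump everything from the existing M0 (Mumbai) cluster to a local directory.
mongodump --uri="mongodb+srv://<user>:<password>@old-cluster.xxxxx.mongodb.net" --out=./m0-dump

# After verifying the dump contains all data, restore it into the new M0 created in us-west.
mongorestore --uri="mongodb+srv://<user>:<password>@new-cluster.yyyyy.mongodb.net" ./m0-dump
```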
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Changing providers for the free cluster in my project | 2023-04-12T21:56:22.231Z | Changing providers for the free cluster in my project | 818 |
|
null | [] | [
{
"code": "persons: [ObjectId]persons: [{ref: ObjectId, name: <referenced person name>}]",
"text": "Hello, if i have collection Persons which have name, age, position fields\nI also have Houses collection, that have address, floors fields.But House also needs to have persons list with all persons that live in that particular house.\nI understand that i can save list of references persons: [ObjectId] and than populate persons.but i need to store person name in houses collection persons: [{ref: ObjectId, name: <referenced person name>}]. Data can be updated eventually (asynchrony) but there needs to be guarantees that they will. Or explicitly see which parts have not updated and what why.Maybe there is something ready.If not, my best idea currently is as fallows: use changeStreams with cursor (so no changes have been missed out on) + each change initiates agenda job if needed to replicate data (so there are explicit success/failure for each particular “replication”).Im not entirely happy with this solution as i think it could get slow with large amount of data. Maybe there are other caveats to such approach. Maybe if there isn’t some out of the box solution, maybe there are some other way how to approach this?Other way was to on save, trigger save for other places as well (as agenda), but ideally things would update correctly even if changed directly in db, or from other app. Change stream would handle those cases as i understand.",
"username": "RoG"
},
{
"code": "ObjectId{\n \"_id\": ObjectId(\"...\"),\n \"address\": \"123 Main St\",\n \"floors\": 2,\n \"persons\": [\n {\n \"ref\": ObjectId(\"...\"),\n \"name\": \"John\"\n },\n {\n \"ref\": ObjectId(\"...\"),\n \"name\": \"Jane\"\n }\n ]\n}\nObjectIddb.Houses.updateMany(\n { \"persons.ref\": ObjectId(\"...\") },\n { $set: { \"persons.$[].name\": \"New Name\" } }\n);\nObjectId",
"text": "One way to achieve what you want is to use MongoDB’s embedded documents instead of references. You can create a subdocument for each person in the Houses collection, which contains both the ObjectId reference and the person’s name.For example, your Houses collection might look like this:This way, you can store the person’s name along with their reference, and you won’t have to perform additional queries to retrieve the names.To update the person names, you can use a multi-update operation to update all occurrences of a person’s ObjectId in the Houses collection. For example:This will update all occurrences of the person with the given ObjectId in the Houses collection.To ensure that updates to the Persons collection are reflected in the Houses collection, you can use change streams as you suggested. When a person document is updated, you can use a change stream to find all Houses documents that reference that person, and update the person name in those documents.Using a cursor with change streams should not be slow, as long as you handle each change efficiently. You can use an asynchronous job queue like Agenda to handle updates, which will allow you to process changes in the background without affecting the performance of your application.",
"username": "Brock"
}
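A rough Node.js sketch of the change-stream approach mentioned above, assuming collections named Persons and Houses, a name field on Persons, and the embedded {ref, name} shape from the question; resume-token persistence and error handling are omitted:

```javascript
const { MongoClient } = require("mongodb");

async function watchPersonNames(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  const db = client.db("test"); // assumed database name

  // Watch only updates on Persons that touch the "name" field.
  const changeStream = db.collection("Persons").watch(
    [{ $match: { operationType: "update", "updateDescription.updatedFields.name": { $exists: true } } }],
    { fullDocument: "updateLookup" }
  );

  for await (const change of changeStream) {
    const person = change.fullDocument;
    // Propagate the new name into every House that embeds this person.
    await db.collection("Houses").updateMany(
      { "persons.ref": person._id },
      { $set: { "persons.$[p].name": person.name } },
      { arrayFilters: [{ "p.ref": person._id }] }
    );
  }
}
```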
] | Embed parts of referenced documents and keep them up to date | 2023-04-12T11:04:29.265Z | Embed parts of referenced documents and keep them up to date | 431 |
[
"database-tools"
] | [
{
"code": "mongoimport --config=config.yaml --db=tabledb --collection=ShopPack --type=csv --columnsHaveTypes --headerline --file=ServerShopPack.csv --ignoreBlanks\nindex.int32()\tUID.int32()\tItem.0.int32()\tItem.1.int32()\n1\t1000\t0\t1\n2\t1001\t0\t\n",
"text": "My commend here.My csv file dataWhen I Import ‘Item’ category, I wanted it to be an array type. But, Object Type.\nI Can’t find commend or header type.\nLet me know how to change array type.\n",
"username": "DEV_JUNGLE"
},
{
"code": "csv-parsercsvcsv-parsernpm install csv-parser\ncsv-parserconst fs = require('fs');\nconst csv = require('csv-parser');\n\nconst results = [];\nfs.createReadStream('path/to/csv/file.csv')\n .pipe(csv())\n .on('data', (data) => results.push(data))\n .on('end', () => {\n // Write the JSON output to a file\n fs.writeFileSync('path/to/json/file.json', JSON.stringify(results));\n });\nfs.createReadStream()csv()fs.writeFileSync()csvimport csv\nimport json\n\ncsv_file = 'path/to/csv/file.csv'\njson_file = 'path/to/json/file.json'\n\n# Read CSV file and convert to list of dictionaries\nwith open(csv_file, mode='r') as f:\n reader = csv.DictReader(f)\n data = [row for row in reader]\n\n# Write JSON output to file\nwith open(json_file, mode='w') as f:\n json.dump(data, f)\ncsv.DictReader()json.dump()index,UID,items\n1,1000,\"[1, 2]\"\n2,1001,\"[3, 4]\"\nmongoimport--jsonArraymongoimport --db=tabledb --collection=ShopPack --type=csv --columnsHaveTypes --headerline --file=ServerShopPack.csv --ignoreBlanks --jsonArray\n",
"text": "The easiest way is just convert the CSV to JSON Array.To convert a CSV file to JSON format, you can use a CSV parsing library such as csv-parser in Node.js or the csv library in Python. Here’s an example in Node.js:In this example, the CSV file is read using fs.createReadStream() and piped to the csv() function to parse it into an array of objects. The resulting array is then written to a JSON file using fs.writeFileSync().Similarly, in Python you can use the csv library to convert a CSV file to JSON format:In this example, the csv.DictReader() function is used to read the CSV file and convert it into a list of dictionaries. The resulting list is then written to a JSON file using json.dump().==To import a CSV file with an array field in MongoDB, you need to format the array elements in a specific way.==For example, to import an array field called “items” with two integer values (1 and 2) for each document, you need to format the CSV file like this:Note that the array elements are enclosed in square brackets and separated by a comma and a space. When you import the CSV file using the mongoimport command, you should specify the --jsonArray option to indicate that the field is an array type:This should import the “items” field as an array type in MongoDB.",
"username": "Brock"
}
] | When import array of csv file, changed object type | 2023-04-12T08:33:48.118Z | When import array of csv file, changed object type | 1,181 |
|
null | [
"connecting"
] | [
{
"code": "mongo.cfgstorage:\n dbPath: C:\\Program Files\\MongoDB\\Server\\6.0\\data\n journal:\n enabled: true\n\nsystemLog:\n destination: file\n logAppend: true\n path: C:\\Program Files\\MongoDB\\Server\\6.0\\log\\mongod.log\n\nnet:\n port: 27017\n bindIp: 0.0.0.0\n\nsecurity:\n authorization: enabled\nipconfigWindows IP Configuration\n\n\nUnknown adapter ProtonVPN TUN:\n\n Media State . . . . . . . . . . . : Media disconnected\n Connection-specific DNS Suffix . :\n\nEthernet adapter vEthernet (Ethernet):\n\n Connection-specific DNS Suffix . :\n Link-local IPv6 Address . . . . . : fe80::8eba:157b:5e9c:976%42\n IPv4 Address. . . . . . . . . . . : 172.18.0.1\n Subnet Mask . . . . . . . . . . . : 255.255.240.0\n Default Gateway . . . . . . . . . :\n\nEthernet adapter vEthernet (VirtualBox Host):\n\n Connection-specific DNS Suffix . :\n Link-local IPv6 Address . . . . . : fe80::6e20:d83d:b51e:fa3b%47\n IPv4 Address. . . . . . . . . . . : 172.21.64.1\n Subnet Mask . . . . . . . . . . . : 255.255.240.0\n Default Gateway . . . . . . . . . :\n\nEthernet adapter Ethernet:\n\n Connection-specific DNS Suffix . :\n IPv4 Address. . . . . . . . . . . : 192.168.1.222\n Subnet Mask . . . . . . . . . . . : 255.255.255.0\n Default Gateway . . . . . . . . . : 192.168.1.1\nipconfigWindows IP Configuration\n\nEthernet adapter vEthernet (Broadcom NetXtreme Gigabit Ethernet - Virtual Switch):\n\n Connection-specific DNS Suffix . :\n Link-local IPv6 Address. . . . : fe80::151f:aad0:9a3:2dd1%15\n IPv4 Address . . . . . . . : 192.168.1.163\n Subnet Mask. . . . . . . . : 255.255.255.0\n Default Gateway . . . . . . : 192.168.1.1\nmongodb://192.168.1.163:27017/?tls=true",
"text": "My client is a Windows 10 Education and my server is a Windows Server 2022. My client IP is 192.168.1.222 and my server IP is 192.168.1.163. I am using the default port 27017 for my connection, but I cannot connect to my MongoDB server. I have no idea what’s going on. I am following every guide out there and nothing has been able to fix my issue.Here is my mongo.cfg from the server:I run ipconfig on my client and I get this:I run ipconfig on my server and I get this:The firewall on my client has an outbound rule for port 27017 and the firewall on my server has an inbound rule for port 27017. The connection string that I use to connect is: mongodb://192.168.1.163:27017/?tls=true",
"username": "Alex_Micharski"
},
{
"code": "mongodb://192.168.1.163:27017/?tls=truemongod.conf",
"text": "Hi @Alex_Micharski and welcome to the MongoDB community forum!!The connection string that I use to connect is: mongodb://192.168.1.163:27017/?tls=trueCan you confirm if you have tried to connect using mongodb://localhost:27017 on the server machine to connect to the Mongo client and if the connection was successful?Also, would recommend you to follow the documentation on how to install MongoDB on Windows for further reference.If the problem persist, can you help me with a few details about the deployment:\\As as a recommendation, you can also consider moving to MongoDB Atlas which is fully managed MongoDB in the cloud and solves the complexity of IP networking and routing scenarios.Let us know if you have any further queries.Regards\nAasawari",
"username": "Aasawari"
}
] | Connect ETIMEOUT connecting to remote server windows | 2023-04-10T00:12:20.180Z | Connect ETIMEOUT connecting to remote server windows | 750 |
null | [
"queries"
] | [
{
"code": "",
"text": "hi i reading in the documentation about https://www.mongodb.com/docs/manual/tutorial/iterate-a-cursor/\nthis is talking about cursor but what i don’t know what is the meaning of cursor is exhausted please i need an explanation with example to understand because i am begginner",
"username": "mina_remon"
},
{
"code": "killSessions",
"text": "Hello @mina_remon,Welcome back to the MongoDB Community forums what is the meaning of cursor is exhaustedIn this context, the term “exhausted” refers to a cursor that has been fully traversed and has no more data to return.this is talking about the cursorA cursor is a pointer to the result set of a query in MongoDB. When a query is executed, MongoDB returns a cursor that can be used to iterate over the results. Cursors are often used for large result sets or when results need to be processed in batches.So, when a client has exhausted a cursor, it means that the client has retrieved all the documents that match the query conditions and there are no more documents left to fetch. At this point, the cursor can be closed and resources can be freed up.For example, imagine a collection of customer orders in a MongoDB database. If a query is executed to retrieve all orders for a particular customer, a cursor will be returned with all matching orders. If the cursor is iterated over and all orders are retrieved, the cursor will be exhausted and can be closed.Further, in the docs, it describes the cursor behavior:In MongoDB 5.0 and 4.4.8, cursors created within a client session will automatically close when the cursor is exhausted, and the corresponding server session ends with the killSessions command or the session times out. However, if it is created outside the session it will automatically close after 10 minutes of inactivity, or if the client has exhausted the cursor.I hope it answers your question. Let us know if you have any further questions.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "@Kushagra_Kesav Hi when I did a while loop or foreach loop sometimes I got a message that say cursor exhausted and I got to collection is that a different situation",
"username": "mina_remon"
},
{
"code": "foreachmongotest{\n \"_id\": {\n \"$oid\": \"642e640ebba9b652048e9be3\"\n },\n \"timestamp\": 1681110227585,\n \"measure\": \"12\"\n}\ntestmeasureforEachfindts> var cursor = db.test.find()\nts> cursor.forEach(function(ts) {print(ts.measure)})\n12\nts> cursor.forEach(function(ts) {print(ts.measure)})\nMongoCursorExhaustedError: Cursor is exhausted\nMongoCursorExhaustedError: Cursor is exhausted",
"text": "Hi @mina_remon,I did a while loop or foreach loop sometimes I got a message that say the cursor exhaustedHere, I tried to reproduce the error using the mongo shell.This is the sample collection named test which contains one document as follows:Here I’m iterating through the test collection and printing the measure field for each document using the forEach method on the cursor returned by the find:The first loop completes successfully and prints the measure for a document in the collection. However, when I tried to run the loop again, we get a “MongoCursorExhaustedError: Cursor is exhausted” error. This error occurs because the cursor has been exhausted and does not contain any more data to retrieve. I got to collection is that a different situationCan you please elaborate on what you mean by the above statement? If possible please provide any example to help us further understand the issue.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "MongoCursorExhaustedError: Cursor is exhausted",
"text": "no problem consider it a wrong statment that i wrote by mistack what i want to know is why i got the\n“MongoCursorExhaustedError: Cursor is exhausted ” and you gave me the answer",
"username": "mina_remon"
},
{
"code": "MongoCursorExhaustedError: Cursor is exhausted",
"text": "ok i tried this in one session it works for the first time and loops over the cursor and gave me the documents then on the same session i tried it again and gave me “MongoCursorExhaustedError: Cursor is exhausted ” error but when i opned a new session on the linux terminal i tried the same statement it works again and gave me the results\n1-first session terminal one window\n",
"username": "mina_remon"
},
{
"code": "",
"text": "2- another diffrent session terminal 2:\n",
"username": "mina_remon"
},
{
"code": "",
"text": "Hello @mina_remon,but when i opened a new session on the Linux terminal i tried the same statement it works again and gave me the resultsYes, it’s obvious that you obtained the result by repeating the same operation in the new session. Could you please help me understand what your question is?Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "There are no questions.\nYou answered me. Thank you very much",
"username": "mina_remon"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | What does it mean by exhausts the cursor? | 2023-04-10T22:06:31.586Z | What does it mean by exhausts the cursor? | 2,021 |
null | [] | [
{
"code": "",
"text": "Tried this. But still facing the issue. It’s frustrating as I’ve already lost a lot of time on this.",
"username": "GAYATRI_GUNTURU"
},
{
"code": "",
"text": "The solution here worked for me:",
"username": "Oluwatobi_Williams"
},
{
"code": "",
"text": "Hello @GAYATRI_GUNTURU,Welcome to the MongoDB Community forums As @Oluwatobi_Williams suggested changes to the project name, I hope they have worked for you and the issue has been resolved on your end.If not, please share the link to the lab and explain what specific problem you are encountering.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Lesson 3: Lab 1, Cannot submit | 2023-04-04T18:01:25.358Z | Lesson 3: Lab 1, Cannot submit | 808 |
null | [] | [
{
"code": "",
"text": "unable to check and complete following lesson practice\nLab: Managing Databases, Collections, and Documents in Atlas Data Explorer",
"username": "Eric_Wong1"
},
{
"code": "",
"text": "Are other labs working fine so far?It would be helpful if you can find some error messages to follow. Have you checked the browser developer tools console for possible ones?",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi I have the same issue, unfortunately the only error message is, “The users collection document count was incorrect. Please try again.”When i check console logs, there doesn’t seem to be any helpful info, only that some cookies are not allowed.How to proceed in the course other than skipping the lesson?",
"username": "Christian_Long"
},
{
"code": "",
"text": "I had the same issue. Found solution here: Lesson 3: Lab 1, cannot submit *SOLVED*Change project to MDB_EDU as the lab opened Project0. Verification will work if you complete in the MDB_EDU project",
"username": "Kevin_Silver"
},
{
"code": "MDB_EDUmdbuser_test_dbusers",
"text": "Hello @Christian_Long/@Eric_Wong1,Welcome to the MongoDB Community forums unable to check and complete following lesson practiceHi I have the same issue, unfortunately the only error message is, “The users collection document count was incorrect. Please try again.”As @Kevin_Silver mentioned please make sure that you are working on the correct project, i.e., MDB_EDU. Then, reload the sample dataset and generate the mdbuser_test_db database. Lastly, create the users collection and insert the first document.If the issue persists please share the link to the lab and explain what specific issue you are encountering.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to complete lab | 2023-03-30T07:29:49.908Z | Unable to complete lab | 1,593 |
[] | [
{
"code": "",
"text": "Hi, I have ran across a problem early into this coursework. Here are some screen shots of my errors.The guide mentions to insert this document in the JSON option field:{\n“name”: “Parker”,\n“age”: 28\n}I have provided the error screen shot as well as what my JSON looks like. (im a new user, i can only post one media file.)\nunnamed1250×817 37.5 KB\n",
"username": "Kenny_Nguyen"
},
{
"code": "",
"text": "Here is my error message",
"username": "Kenny_Nguyen"
},
{
"code": "PROJECT 0MDB_EDUmdbuser_test_dbusers",
"text": "Hi @Kenny_Nguyen,Welcome to the MongoDB Community forums Based on your shared screenshot, I can see you are in PROJECT 0. Make sure that you are working on the correct project, i.e., MDB_EDU. Then, reload the sample dataset and generate the mdbuser_test_db database. Lastly, create your users collection and insert your first document.If the issue persists please share the link to the lab and explain what specific issue you are encountering.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
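For reference, the same steps can also be done from mongosh once connected to the cluster in the MDB_EDU project; the document matches the one quoted from the lab instructions earlier in the thread:

```javascript
// Assumes an open mongosh connection to the cluster in the MDB_EDU project.
const labDb = db.getSiblingDB("mdbuser_test_db");

// Insert the document from the lab instructions into the users collection.
labDb.users.insertOne({ name: "Parker", age: 28 });

// The lab check then expects exactly one document in users.
labDb.users.countDocuments();
```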
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | The users collection document count was incorrect. Please try again | 2023-04-12T22:04:07.265Z | The users collection document count was incorrect. Please try again | 973 |
|
[] | [
{
"code": "",
"text": "There is a problem with the driver of mongodb. During the executionprocess, reading information fails and the connection times out.The screenshot is as follows, how to solve this problem, thank you for your answer.\n\n012525×1251 258 KB\n",
"username": "643194378"
},
{
"code": "",
"text": "Hello @643194378 ,Welcome to The MongoDB Community Forums! To understand your use-case better, can you please provide additional details such as:Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hi Tarun\nThis is the first error encountered,\nVersion 4.0.26, driver java 3.6.3,\nI’m not connected to MongoDB Atlas or a local server, but with the same configuration, Alibaba Cloud has no driver and connection timeout issues.\nCan connect to my MongoDB database from the shell\nHow to solve this problem, thank you, looking forward to your reply.",
"username": "643194378"
},
{
"code": "",
"text": "I’m not connected to MongoDB Atlas or a local server, but with the same configuration, Alibaba Cloud has no driver and connection timeout issues.Can you please clarify where the MongoDB server returning the error is located? I would like to confirm since you have stated Alibaba Cloud has no issues or timeout issues.\nWhat is the deployment topology? (Standalone, replica set or any other)Can connect to my MongoDB database from the shellAs you mentioned you are able to connect to this deployment via shell, can you please try the same read query from this shell and see if you are able to get the required results? This may assist with narrowing down the possible causes of the error.This is the first error encountered,If this is the first time you are seeing this error, has anything changed in your deployment due to which this query started giving read timeouts?",
"username": "Tarun_Gaur"
}
] | There is a problem with the driver of mongodb | 2023-04-05T12:25:23.807Z | There is a problem with the driver of mongodb | 600 |
|
null | [] | [
{
"code": "",
"text": "I see a difference in System Memory Memory Used and Max System memory Memory Used parameter value .For ex: System Memory Memory Used shows 400 mb but Max System Memory Memory Used shows 1.4 GB.",
"username": "satyendra_kumar1"
},
{
"code": "",
"text": "Hi @satyendra_kumar1 - Welcome to the community Based off the description for the Max System Memory Used metric:The maximum System Memory Used value over the time period specified by the metric granularity.For ex: System Memory Memory Used shows 400 mb but Max System Memory Memory Used shows 1.4 GB.Based off your example, my understanding is that the 400mb shown is the average memory use for the granularity specified. The 1.4GB for Max System Memory shown was the maximum value recorded for system memory used in that time period (granularity). I would confirm this with the Atlas in-app chat support team if you have any further doubts or queries regarding these metrics.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Difference between System Memory And Max System Memory Metric | 2023-04-04T06:16:08.979Z | Difference between System Memory And Max System Memory Metric | 799 |
null | [] | [
{
"code": "",
"text": "Hi Mongo dev community . For whatever reason my certification isn’t showing my name. It’s showing up as ‘Pajaro Loco’. How do I change this back to my actual name?The account is linked to my Google Account, I’ve updated my Google acct to reflect my real name. It doesn’t seem theres a way to change the name in the university account.Please advise",
"username": "David_Chappell"
},
{
"code": "",
"text": "Hey @David_Chappell,Welcome to the MongoDB Community Forums! Kindly email the Certification Team at [email protected] for this request. They will be able to help you out with the name change in your certificate.Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "Looks like this is the solution. Thanks Satyam!",
"username": "David_Chappell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Change MongoDB University 'learner' name | 2023-04-05T23:34:00.328Z | Change MongoDB University ‘learner’ name | 1,183 |
null | [
"queries",
"dot-net"
] | [
{
"code": "GridFSFileInfoIdAsBsonValuevar filter = Builders<GridFSFileInfo>.Filter.Eq(x => x.IdAsBsonValue, BsonValue.Create(ObjectId.Parse(id)));\n\nusing (var cursor = _bucket.Find(filter))\n {\n var fileInfo = cursor.ToList().FirstOrDefault();\n ...\n }\nIdAsBsonValueIdvar filter = Builders<GridFSFileInfo>.Filter.Eq(x => x.Id, ObjectId.Parse(id));\nGridFSFileInfoId",
"text": "I’m trying to get GridFSFileInfo from GridFS database and I have a problem that it is not working while finding by id.It is working while finding by IdAsBsonValue tho, here is some of my code:Code shown above is working fine but IdAsBsonValue is obsolete so I want to use just Id.When I change first line to:I get the following error when connecting to database: “MongoDB.Driver.Linq.ExpressionNotSupportedException: Expression not supported: x.Id.”There is property in GridFSFileInfo called Id and example shown before is working, so I have no clue what’s wrong there…Glad if anyone shares a solution Im using .NET 6.0",
"username": "Jakub_Sosnowski"
},
{
"code": "GridFSFileInfo_idIdAsBsonValueGridFSFileInfoSerializerGridFSFileInfo.IdIdAsBsonValueGridFSFileInfoGridFSFileInfoSerializerGridFSFileInfo<TFileId>GridFSFileInfoSerializer<TFileId>using System;\nusing MongoDB.Bson;\nusing MongoDB.Driver;\nusing MongoDB.Driver.GridFS;\n\nvar client = new MongoClient();\nvar db = client.GetDatabase(\"test\");\nvar coll = db.GetCollection<GridFSFileInfo<ObjectId>>(\"fs.files\");\n\nvar id = ObjectId.GenerateNewId().ToString();\nvar filter = Builders<GridFSFileInfo<ObjectId>>.Filter.Eq(x => x.Id, ObjectId.Parse(id));\nvar query = coll.Find(filter);\n\nConsole.WriteLine(query);\nfind({ \"_id\" : ObjectId(\"64371da5da7f980ab8c559d8\") })\n",
"text": "Hi, @Jakub_Sosnowski,Welcome to the MongoDB Community Forums. I understand that you are unable to query the GridFSFileInfo data from a GridFS-related collection. I was able to reproduce the issue. The root cause is that the _id property is mapped to IdAsBsonValue internally by the GridFSFileInfoSerializer and GridFSFileInfo.Id is simply a C# property that wraps around IdAsBsonValue.GridFSFileInfo and GridFSFileInfoSerializer are compatibility shims to our 1.x API. There are new versions GridFSFileInfo<TFileId> and GridFSFileInfoSerializer<TFileId>. Using these new versions, your filter definition produces the desired MQL and should work in your application.The output is:Sincerely,\nJames",
"username": "James_Kovacs"
}
] | ASP.NET GridFS get file info by Id | 2023-04-11T02:33:19.518Z | ASP.NET GridFS get file info by Id | 1,280 |
null | [
"swift",
"kotlin"
] | [
{
"code": "func doLogout(){\n repo.doLogout(){error in\n SingletonRepo.reset()\n UserDefaults.standard.set(false, forKey: \"isLogin\")\n isLoginShown = true\n }\n }\nclass SingletonRepo{\n\n static var shared = RealmRepo()\n \n private init() { }\n \n static func reset() {\n shared = RealmRepo()\n }\n",
"text": "I have a kmm app on swiftui and kotlin.My problem is that whenever I try to do a logout and then logging in back, the realm sync will not open or work. The realm will work only if I close the app and re-open the app.\nAlso, when starting the app on the login page, everything works fine, so the problem its when doing the logout.On the logout I do a reset of the repo, I re-initialize the object:the SingletonRepo with the reset method:The reset will re-initialize the shared Kotlin RealmRepo(). I do this for the next user to have a new Realm Repo instance.Is this the correct flow?\nShouldnt I reset the realm repo when doing the logout?\nShould I close it? Should I close the repo and then reset it?\nWhat is the correct flow?Thank you for your time!",
"username": "AfterFood_Contact"
},
{
"code": "",
"text": "Why are you using UserDefaults for that and what does the actual login and logout code look like? The code in the question is a big vague so it’s hard to get a feel for what you’re attempting to do.",
"username": "Jay"
},
{
"code": "",
"text": "The userdefaults are related to the iOS, it does not affect of the flow.\nI do a simple realm.login and realm.logout.",
"username": "Daniel_Gabor"
},
{
"code": "",
"text": "Understood. However, the code in the question doesn’t have a lot of meaning without more context.We don’t know what SingletonRepo or RealmRepo is or does or if it needs to be ‘reset’ - why do you want to reset it? What does ‘close the repo’ mean?Generally speaking when you log out of Realm, well, you’re logged out… and it’s ready for the same or different user to log in. There’s really no ‘resetting’ or ‘closing’ needed.You also don’t generally need to store anything in UserDefaults in regards to logging in our out - maybe there’s something for your use case though.",
"username": "Jay"
}
] | Realm will not work when logging in after a logout | 2023-04-12T13:21:49.985Z | Realm will not work when logging in after a logout | 970 |
null | [
"dot-net",
"flexible-sync"
] | [
{
"code": "",
"text": "There is a way to monitor when sync will result in storage space not enough ?Thanks",
"username": "Sergio_Carbonete"
},
{
"code": "realm-cli app storage --app-id <your app id>system.profile",
"text": "Hello @Sergio_Carbonete,Actually yes, there’s multiple ways to do this lol.Yes, MongoDB Realm provides several ways to monitor storage space usage and potential issues with storage space.Sure, here are six different ways you can monitor the storage space in MongoDB Realm:Use the Realm Admin UI to monitor storage usage on the Overview tab of your application. This will give you a high-level view of how much storage is being used, and how much is available.Use the Realm CLI to view the current storage usage by running the command realm-cli app storage --app-id <your app id>. This will give you a detailed breakdown of how much storage each collection is using.Use the MongoDB Atlas UI to view storage usage at the cluster level. Go to the “Metrics” tab for your cluster, and select “Storage” to view the total amount of storage used.Set up alerts in MongoDB Atlas to receive notifications when storage usage reaches a certain threshold. You can do this by going to the “Alerts” tab and creating a new alert based on the “Storage” metric.Monitor storage usage programmatically by querying the system.profile collection in your MongoDB database. This collection contains information about all database operations, including how much storage was used.Use a third-party monitoring tool like Datadog or New Relic to monitor storage usage in real-time and receive alerts when storage usage reaches a certain threshold.",
"username": "Brock"
},
{
"code": "",
"text": "Thanks @Brock , it’s help in Server side.\nSorry but i think i’m not clear.\nWhen i say need to monitor storage in sync is about client storage, to know if sync will be complete or fail.Thanks",
"username": "Sergio_Carbonete"
},
{
"code": "",
"text": "Hi @Sergio_Carbonete\nAre you looking for something like a sync progress cursor to indicate how much data has been synced to the server?",
"username": "Niharika_Pujar"
},
{
"code": "",
"text": "No i want to know if exist enough space to sync occurs ok before they start.\nOther doubt is if sync abort because not have space to consume, what exception is fired, and they commit some sync and abort pending sync or not commit all sync.Imagine sync to smartphone and no storage avaiable to made all sync.thanks",
"username": "Sergio_Carbonete"
},
{
"code": "",
"text": "If this is regarding storage concerns, we recommend having at least 50% storage available on the cluster before enabling sync, or you could choose to enable auto scaling on your cluster ( See docs) to take care of this for you.Let me know if that answered your question.Thanks.",
"username": "Niharika_Pujar"
},
{
"code": "",
"text": "No , you are focus in server, i’m talking about client, desktop smartphone for example.thks",
"username": "Sergio_Carbonete"
}
] | Storage monitor when sync | 2023-04-10T13:21:14.390Z | Storage monitor when sync | 969 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 5.0.16 is out and is ready for production deployment. This release contains only fixes since 5.0.15, and is a recommended upgrade for all 5.0 users.Fixed in this release:5.0 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Britt_Snyman"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 5.0.16 is released | 2023-04-12T19:22:33.085Z | MongoDB 5.0.16 is released | 1,038 |
null | [
"android"
] | [
{
"code": "",
"text": "Hi,We have a use case with complete dynamic number of “tables”/realms and each table have complete dynamic fields (other than id field). The tables and fields can be added or removed at any time. For example, today we have table A (id, firstName, lastName), table B (id, department, description), tomorrow we will have table A (id, firstName, lastName, employeeId), table B (id, department, description, contactId), table C (id, role, permission). The data needs to be synced with Altas.I learned that we may use DynamicRealm to support dynamic tables and schemas. but by reading this post\nlooks like every time we change the schema, we need to release a new Android/iOS App which is not possible in our use case. Is there a way to get it working without changing the code? For example, in Firebase or Couchbase, you don’t need to define a RealmObject in the code. The data query and sync are done by specifying the dynamic string table name/ realm name / collection name.",
"username": "Kan_Zhang"
},
{
"code": "",
"text": "Hi @Kan_Zhang,\nIf your schema changes contain net new fields, this would be an additive schema change and does not require terminating and re-enabling sync. You would only need to terminate and re-enable sync for breaking schema changes ( See breaking schema changes)Hope this helps.Best,\nNiharika",
"username": "Niharika_Pujar"
},
{
"code": "",
"text": "Do you mind elaborating on what exactly your ideal experience is here? I am not quite sure how you would write your application code if it does not know what fields exist in the app.One way around strict schemas is to use a RealmDictionary to store key/value pairs that the application logic can decode itself.Additionally, it might be relevant to note that adding additional fields or tables (additive only) to the schema in App Services does not enforce that you re-release your app with those new fields. A Realm can be defined with any subset of the fields / tables in your data.Thanks,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi Taylor,For example, in couchbase, if i want to Ad-Hoc sync with a table, I just need to pass in the table name string to the sync/replicator and the database will start sync the table. When I query the table, I can pass in a free formed SQL/SQL++ to the local database to query, without creating a RealmObject Java/Kotlin class. The SQL/SQL++ language support querying and sorting any columns in the table. Without pre-defining the columns in the Java/Kotlin class. When I add a new table or new field, I dont need to release any Java code. I just need to tell the mobile app to start sync with the new table (for example, store the list of table names that needs to be synced in a metadata database table, when adding a new table name in the metadata database table, the mobile app will start sync that new table). The SQL/SQL++ statements can be stored in database or even a text file, mobile app will download the text file to use the new SQL statements. No Java/Kotlin coding change needed, I can dynamically add/remove tables and fields.",
"username": "Kan_Zhang"
},
{
"code": "",
"text": "Got it, but then what is your app doing with this data? Just trying to figure out why you would want this behaviour since it seems like an anti-pattern to me.",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "The app does not do anything with the data. The app is just a platform for users to plugin the custom business logic and data. The custom business logic is data driven, not hard-coded in app logic.",
"username": "Kan_Zhang"
}
] | Dynamic Fields and Dynamic Schema without code change/releasing new version? | 2023-04-10T19:44:09.927Z | Dynamic Fields and Dynamic Schema without code change/releasing new version? | 1,052 |
null | [
"node-js",
"transactions"
] | [
{
"code": "",
"text": "I’m making a new project in Node.js where I want to update balance of an item through mongodb transaction where if the transaction fails the balance will also revert back to original value. When two concurrent request fetch the same record at the same time for update, the request which finishes first gets to update the record and the other request throws write conflict. Is there any way to handle this type of situation?",
"username": "ayush_srivastav"
},
{
"code": "",
"text": "Hello @ayush_srivastav ,Welcome to The MongoDB Community Forums! I notice that you haven’t had a response to this topic yet, were you able to find an explanation or solution?\nIf not, could you please share more details regarding your use-case, are you trying to do concurrent write operations to the database?The locks in MongoDB are acquired at the transaction level. However only write operations will lock the associated document. Read operations will not lock them. Hence the locks will be released only when the transaction is completed. Please refer Concurrency for more details.When two concurrent request fetch the same record at the same time for update, the request which finishes first gets to update the record and the other request throws write conflict.The first part is correct however, the second transaction waiting to update the record will get the access once the lock is released to update the same record.Can you share some sample documents and code snippet through which you are trying to achieve this? I can try reproducing the same at my end.Regards,\nTarun",
"username": "Tarun_Gaur"
},
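A minimal Node.js sketch of the behaviour described above; collection and field names are assumed from the question, not taken from the original project:

```javascript
const { MongoClient } = require("mongodb");

// Sketch: update an item's balance inside a transaction. withTransaction()
// automatically retries the callback when the server reports a transient
// error such as a WriteConflict between two concurrent transactions.
async function updateBalance(uri, itemId, delta) {
  const client = new MongoClient(uri);
  await client.connect();
  const items = client.db("test").collection("items"); // assumed names

  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      await items.updateOne(
        { _id: itemId },
        { $inc: { balance: delta } },
        { session } // every operation in the transaction must pass the session
      );
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}
```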
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Updating same record in mongodb transaction by two different request | 2023-03-24T05:14:18.147Z | Updating same record in mongodb transaction by two different request | 1,387 |
null | [
"python"
] | [
{
"code": "",
"text": "Hi all, strange…\nI have a MongoDB deployed on a EKS cluster in AWS, I can insert into it using a port forward (running code locally in test).\nWhen I change my URL to point to a MongoDB Atlas created database, it connects, but then fails on the inserT_one …\nI’m authenticating using username password, and for naughty set my network access to 0.0.0.0/0 at the moment.\ncopy/pasted the Atlas provided url.any ideas popping up?\nG",
"username": "georgelza"
},
{
"code": " uri = f'mongodb+srv://{MONGO_USERNAME}:{urllib.parse.quote_plus(MONGO_PASSWORD)}@{MONGO_URL}/?retryWrites=true&w=majority'\n\n myClient = pymongo.MongoClient(uri)\n myDB = myClient[\"trustreg]\n my_jwt_collection = myDB[\"jwt_keys]\n \n pub_payload = {\n \"originatingDate\": str(datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")),\n \"endToEndId\": endToEndId,\n }\n with myClient.start_session() as my_session:\n\n my_jwt_collection.update_one({\"id\": pub_payload[\"id\"]}, {\"$set\": pub_payload}, upsert=True, session=my_session)\n \n # end with\n\nERROR :2023-04-12 10:05:36.714821, Payload Save Failed!, endToEndId:4dc6c46fc2f8468 error:ac-ctpggnp-shard-00-01.gqtnee8.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129),ac-ctpggnp-shard-00-02.gqtnee9.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129),ac-ctpggnp-shard-00-00.gqtnee8.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129), Timeout: 30s, Topology Description: <TopologyDescription id: 643665d890e02ff9a91af488, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('ac-ctpggnp-shard-00-00.gqtnee9.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-ctpggnp-shard-00-00.gqtnee8.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')>, <ServerDescription ('ac-ctpggnp-shard-00-01.gqtnee8.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-ctpggnp-shard-00-01.gqtnee8.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')>, <ServerDescription ('ac-ctpggnp-shard-00-02.gqtnee8.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('ac-ctpggnp-shard-00-02.gqtnee8.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')>]>\n",
"text": "The update_one eventually fails with the below error…\nMONGO_URL = cluster0.gqtnee8.mongodb.net # modified here of course ",
"username": "georgelza"
},
{
"code": " myClient = pymongo.MongoClient(uri,\n serverSelectionTimeoutMS=5000,\n tlsCAFile=certifi.where())\n",
"text": "Resolved.import certifi",
"username": "georgelza"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo Insert failing when access MongoDB Atlas - PYTHON | 2023-04-12T06:00:47.464Z | Mongo Insert failing when access MongoDB Atlas - PYTHON | 503 |
null | [
"node-js",
"crud",
"mongoose-odm"
] | [
{
"code": "createinsertManycreate()node importDataToDb.js",
"text": "Hi, I’ve had a detailed explanation posted here: node.js - Mongoose .insertMany and create function not working - Stack OverflowIn short, the create or insertMany is not working correctly if I run the function in the route. The create() only works correctly if I run node importDataToDb.js in the command line.",
"username": "James_Z"
},
{
"code": "importDataToDb.jscreate()create()once()connectionconnectedcreate()once()create()const express = require('express');\nconst router = express.Router();\nconst mongoose = require('mongoose');\nconst Item = require('../models/Item');\n\nconst uri = process.env.MONGO_URI;\nmongoose.connect(uri, { useNewUrlParser: true, useUnifiedTopology: true });\n\nconst db = mongoose.connection;\ndb.once('connected', () => {\n console.log('Connected to database');\n createItems();\n});\n\nasync function createItems() {\n try {\n await Item.create([\n { name: 'Item 1', description: 'Description 1' },\n { name: 'Item 2', description: 'Description 2' },\n { name: 'Item 3', description: 'Description 3' }\n ]);\n console.log('Items created');\n } catch (error) {\n console.log(error);\n }\n}\n\nmodule.exports = router;\ncreateItems()connectedcreate()",
"text": "Based on the information provided, it seems that the issue is related to how the data is being imported to the database. Specifically, when running the importDataToDb.js script via the command line, the create() function works correctly, but when running it within the route, it does not.This could potentially be due to a few different reasons, such as:Timing issues: It’s possible that the create() function is being called before the database connection is fully established. To ensure that the connection is established before calling the function, you could try using the once() method of the Mongoose connection object, which waits for the connected event before executing the callback function.Asynchronous behavior: The create() function is asynchronous, so it’s important to ensure that it is properly awaited or that a callback function is used to handle the result.Environmental variables: If the database connection string is stored in an environmental variable, it’s possible that it is not being properly accessed when running the script via the route. To ensure that the variable is accessible, you could try exporting it in your terminal before starting the server.Here’s an example of how to use the once() method to ensure that the create() function is only called after the database connection is fully established:In this example, the createItems() function is only called after the connected event is emitted by the database connection. Additionally, the create() function is awaited to ensure that it completes before logging the success message.I hope this helps, let me know if there’s still problems.",
"username": "Brock"
},
{
"code": "create()module.exports.getDownloadFile = async (req, res, next) => {\n async function createItems() {\n try {\n const receivedData= await downloadFile( url );\n \n const docs = await SomeDataModel.create(receivedData);\n \n return docs;\n } catch (e) {\n console.log(\"error import data\", e);\n }\n }\n\n const docs = await createItems();\n res.status(200).json({\n status: \"success\",\n data: docs.length,\n });\n};\nconst mongoose = require(\"mongoose\");\nconst app = require(\"./app\");\nconst { PORT } = require(\"./config\");\nconst { SomeDataModel } = require(\"./models/someDataModel\");\nconst { downloadFile } = require(\"./utils/downloader.js\");\nrequire(\"dotenv\").config({ path: \"./config.env\" });\n\nasync function importToDB() {\n try {\n console.log(\"start importing...\");\n const receivedData = await downloadFile(url);\n console.log(receivedData .length);\n await SomeDataModel.create(receivedData );\n console.log(\"Created\");\n } catch (e) {\n console.log(\"error\", e);\n }\n}\n\nmongoose.connect(`${process.env.DATABASE}`).then(() => {\n console.log(\"DB connected\");\n importToDB().then(() => {\n console.log(\"finished importing...\");\n });\n});\n\napp.listen(PORT, () => {\n console.log(`Express starts on port ${PORT}`);\n",
"text": "Thank you!\nI made it work by wrapping the create() into an async function, something like this:This will also work if I put the data import logic into the server.jsThank you again.\nI have another question that might be off the topic, since the data has about 5k documents to be imported, this whole process might take a few seconds to finish, during this process the server is stalled, and I cannot perform any request until the data import is finished. What can I do to bypass this issue?",
"username": "James_Z"
}
] | insertMany() and create() not working | 2023-04-11T23:09:21.401Z | insertMany() and create() not working | 1,338 |
null | [
"atlas-cluster"
] | [
{
"code": "",
"text": "Hi, I am trying to connect to MongoDB Altas from Qlik Cloud MongoDB connector, it’s asking for a server(hostname or IP address), what I should put into the server box?\nMy MongoDB connection is “mongodb+srv://xxxx:[email protected]/?retryWrites=true&w=majority”thanks",
"username": "Peng_Guo"
},
{
"code": "",
"text": "Hi @Peng_Guo - Welcome to the community I’m not too familiar with Qlik Cloud but based off the following Qlik Create a MongoDB connection documentation, it seems it can accept a server and port and/or replica set details.Regarding hostnames, please see my comment on the following post on how to obtain the hostnames.Hope the above helps.Regards,\nJason",
"username": "Jason_Tran"
},
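If the connector only accepts plain hostnames, the individual cluster member hostnames behind an SRV connection string can also be looked up directly from a terminal; for example, using the cluster address from the question:

```sh
nslookup -type=SRV _mongodb._tcp.cluster0.ixjmt.mongodb.net
```

The SRV answer lists the member hostnames and the port (27017), which you can then try in the connector's server field (with TLS enabled).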
{
"code": "",
"text": "I\"d be keen to know what all you had to set once you got this working.Got it in our future also.G",
"username": "georgelza"
}
] | Connect to MongoDB Altas from Qlik Cloud | 2023-04-11T07:22:06.309Z | Connect to MongoDB Altas from Qlik Cloud | 1,071 |
null | [
"node-js",
"crud",
"atlas-cluster",
"transactions",
"mongoid-odm"
] | [
{
"code": "const MongoClient = require('mongodb').MongoClient;\n\nconst { ObjectId } = require('mongodb');\n\nconst uri = \"mongodb+srv://<username>:<password>@cluster.mongodb.net/test?retryWrites=true&w=majority\";\n\nconst client = new MongoClient(uri, { useNewUrlParser: true });\n\nasync function shareDocument(documentId) {\n\n try {\n\n await client.connect();\n\n const collection = client.db(\"test\").collection(\"documents\");\n\n // check if the \"transactionId\" parameter exists\n\n const document = await collection.findOne({ _id: ObjectId(documentId), transactionId: { $exists: false } });\n\n if (!document) {\n\n // if the \"transactionId\" parameter exists, update the \"isShared\" parameter only\n\n await collection.updateOne({ _id: ObjectId(documentId) }, { $set: { isShared: true } });\n\n } else {\n\n // if the \"transactionId\" parameter does not exist, create it and update the \"isShared\" parameter\n\n await collection.findOneAndUpdate({ _id: ObjectId(documentId) }, { $set: { isShared: true, transactionId: new ObjectId() } });\n\n }\n\n } catch (err) {\n\n console.log(err);\n\n } finally {\n\n await client.close();\n\n }\n\n}\n",
"text": "Hi,I’m looking for a way to perform an update that adds/modifies the parameter “isShared” to true and if the queried document has no trasactionId parameter it adds this parameter too (with a mongoId). But if it exists, does not add it nor update it.Chat gpt proposed something but it requires two trips to the DB, which is not optim, i think.In other words: How can I convert the following code into another that avoids a previous check on the DBThanks!",
"username": "kevin_Morte_i_Piferrer"
},
{
"code": "async function shareDocument(documentId) {\n try {\n const client = await MongoClient.connect(uri, { useNewUrlParser: true });\n const collection = client.db(\"test\").collection(\"sample\");\n const result = await collection.findOneAndUpdate(\n {\n _id: documentId,\n transactionId: { $exists: false }\n },\n {\n $set: { isShared: true, transactionId: new ObjectId() }\n }\n );\n if (!result.value) {\n await collection.updateOne(\n { _id: documentId },\n { $set: { isShared: true } }\n );\n }\n } catch (err) {\n console.log(err);\n } finally {\n await client.close();\n }\n}\n",
"text": "Hi @kevin_Morte_i_Piferrer,Welcome back to the MongoDB Community forums Apologies for the late response.How can I convert the following code into another that avoids a previous check on the DBTo perform the desired operation while skipping the db check, you can rewrite the query as follows:Here, I’ve used the findOneAndUpdate method with $exists + $set operator to add “transactionId” if it does not exist and update the “isShared” field. If findOneAndUpdate returns null, then I know that the document was not found and I can proceed with the updateOne method to set “isShared” to true.Please alter and test the code above accordingly against a test environment to ensure it meets all your use case/requirement(s) before implementing it in production.I hope it helps!Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Update document with conditions | 2023-01-22T20:11:42.104Z | Update document with conditions | 1,229 |
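For completeness, the extra read can also be avoided entirely on MongoDB 4.2 or newer by using an update with an aggregation pipeline, where $ifNull keeps an existing transactionId and only generates one when the field is missing. This is a sketch rather than something posted in the thread; it reuses the uri, MongoClient and ObjectId from the snippets above and the collection name from the question, and should be tested against your own data first:

async function shareDocumentOneTrip(documentId) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const collection = client.db("test").collection("documents");

    // Pipeline-style update: isShared is always set to true, while
    // transactionId keeps its current value if present and otherwise
    // receives a freshly generated ObjectId.
    await collection.updateOne({ _id: new ObjectId(documentId) }, [
      {
        $set: {
          isShared: true,
          transactionId: { $ifNull: ["$transactionId", new ObjectId()] },
        },
      },
    ]);
  } finally {
    await client.close();
  }
}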
[] | [
{
"code": "",
"text": "When I start MongoDB service it gives the above error message. Any clear help to from the community??",
"username": "Thiwanka_Gunasinghe"
},
{
"code": "",
"text": "Could be permissions issues\nCheck mongod.log if more details are shown\nIs default port 27017 being used by any other process?",
"username": "Ramachandra_Tummala"
}
] | Unable to start MongoDB in local device and when start using service it says as windows could not start mongodb service on local computer | error 48 | 2023-04-12T05:57:15.951Z | Unable to start MongoDB in local device and when start using service it says as windows could not start mongodb service on local computer | error 48 | 1,207 |
|
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "Hi,I am working on Kafka Mongo Source Connector and as per business requirement I need to poll for data in MongoDB for every 30 min interval, so for this I am using this property: “poll.await.time.ms”, but it is not working, when I insert/update data it immediately loads in Kafka topic, is there any way I could configure it to poll for specified time and then load data in Kafka topic.\nCan anyone please suggest me any solution.Thank you!",
"username": "sanket_kokne"
},
{
"code": "*/30 * * * * /path/to/kafka-mongo-source-connector.sh\n",
"text": "Hey @sanket_kokne,The “poll.await.time.ms” property in the Kafka Mongo Source Connector controls the amount of time that the connector waits before polling for new data from MongoDB. However, it does not guarantee that the connector will only load data into Kafka every 30 minutes.If you want to load data into Kafka every 30 minutes, you could use a scheduler to run the Kafka Mongo Source Connector every 30 minutes. One way to do this is to use a cron job. Here’s an example of how you could set up a cron job to run the Kafka Mongo Source Connector every 30 minutes:This cron job will run the “kafka-mongo-source-connector.sh” script every 30 minutes. You would need to replace “/path/to/kafka-mongo-source-connector.sh” with the path to the script that runs the Kafka Mongo Source Connector.Another option is to modify the Kafka Mongo Source Connector code to implement a custom polling interval. This would require more development work, but it would give you more control over the data loading process. You could modify the “poll” method in the connector code to wait for the specified interval before loading data into Kafka. However, I would recommend using a scheduler like cron instead, as it is a simpler and more reliable solution.",
"username": "Brock"
},
{
"code": "",
"text": "Hi @Brock ,Thank you so much for your help",
"username": "sanket_kokne"
}
] | Mongo Kafka Connector poll.await.time.ms Not Working | 2023-04-11T14:43:06.268Z | Mongo Kafka Connector poll.await.time.ms Not Working | 1,170 |
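For reference, poll.await.time.ms is set in the source connector configuration, for example in the JSON submitted to the Kafka Connect REST API. The sketch below uses placeholder connection details and hypothetical database/collection names; as described above, this property only controls how long the connector waits before polling the change stream again, so by itself it does not batch events into 30-minute windows:

{
  "name": "mongo-source",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb+srv://<user>:<password>@<cluster-host>/",
    "database": "mydb",
    "collection": "mycollection",
    "poll.await.time.ms": "1800000",
    "poll.max.batch.size": "1000"
  }
}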
null | [] | [
{
"code": "",
"text": "Hi, apologies if this has already been answered somewhere but searching for “memory usage” issues has only turned up a huge number of posts about setting the wiredTiger cache size. I thought that was our issue too until delving a little deeper…We’re running 4.2.18 on Kubernetes using a StatefulSet with 3 pods. The pod memory limit is set to 8GB and we can see this has been correctly detected in db.hostInfo() as system.memSizeMB is 8192. The wiredTiger cache has also as expected been set to 3.5GB automatically. We are however only using a small dataset so only a few hundred MB of this is being used. What we see instead is TCMalloc’s “Bytes in thread cache freelists” gradually increasing until the pod is inevitably OOM-killed. This seems strange as in all other reports of similar memory issues this value is in the order of MB whilst ours just keeps climbing into multiple GB.We had hoped to be able to limit the total memory used by MongoDB and set the Kubernetes limit a bit higher, to give some room for overhead, but this doesn’t appear to be possible. Best we’ve come up with so far is to try to tame TCMalloc by setting tcmallocReleaseRate. According to the Mongo docs a value of 10 is the top end of the reasonable range but setting it to this doesn’t seem to make any noticeable difference in our case.Anyone seen this behaviour before and/or have any pointers on what to try next?",
"username": "Steve_M"
},
{
"code": "",
"text": "Hi. Having similar issues ourselves. We found this article that might be useful to you.Infrastructure Background:\nReading time: 5 min read\n",
"username": "Alexandru_Pirvu"
}
] | High TCMalloc thread cache usage on Kubernetes | 2022-02-01T13:13:22.519Z | High TCMalloc thread cache usage on Kubernetes | 2,406 |
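For anyone investigating a similar pattern, the figures discussed above can be inspected from the shell, and both the TCMalloc release rate and the WiredTiger cache ceiling can be adjusted at runtime rather than relying on the auto-detected defaults. This is a sketch only; the values are illustrative, not a recommendation:

// serverStatus exposes the allocator and cache figures mentioned above.
const status = db.serverStatus();
printjson(status.tcmalloc.tcmalloc);
printjson(status.wiredTiger.cache);

// Ask TCMalloc to return freed memory to the OS more aggressively
// (the docs describe 10 as the top end of the reasonable range).
db.adminCommand({ setParameter: 1, tcmallocReleaseRate: 10 });

// Lower the WiredTiger cache ceiling at runtime if the working set is small,
// instead of the default of 50% of (detected RAM minus 1GB), the 3.5GB seen above.
db.adminCommand({ setParameter: 1, wiredTigerEngineRuntimeConfig: "cache_size=1G" });

Neither setting caps the total resident memory of the mongod process, so some headroom between the cache size and the Kubernetes limit is still needed.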