image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"node-js",
"indexes",
"serverless"
]
| [
{
"code": "",
"text": "Hello,I would like to know if there is a solution to replicate indexes from on cluster to another without erasing data ?I have 2 clusters: production and staging, in 2 different project.\nThe 2 db inside each cluster have the same schema.There is multiple indexes in the staging db that I would like to replicate into the production db without erasing any data in the collections (so no restore I think)Is it Possible ?\nCan it be automated ?We use MongoDB Atlas Serverless for the 2 clusters\nWe use nodejs 16 mongodb driver v4.12.1",
"username": "Guillaume_Pestre"
},
{
"code": "staging = MongoClient( ... )\nproduction = MongoClient( ... )\n\nfor db_to_replicate in all databases to replicate indexes\n{\n db = staging.db( db_to_replicate )\n for collection_to_replicate in all collections of db\n {\n collection = db.collection( collection_to_replicate )\n for index in collection.listIndexes()\n {\n production.db( db_to_replicate ).collection( collection_to_replicate ).createIndex( index )\n }\n }\n}\n",
"text": "While I do not have specific details to give, I am confident that you will find what you wish inAn alternative way will be with a simple JS script that looks like:",
"username": "steevej"
},
{
"code": "",
"text": "Thank you very much ! I will look into it",
"username": "Guillaume_Pestre"
}
]
| Replicate Indexes from one cluster to another in MongoDB Atlas | 2022-12-02T13:27:05.723Z | Replicate Indexes from one cluster to another in MongoDB Atlas | 1,698 |
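The reply above sketches the approach in pseudocode. A runnable Node.js version might look like the following — a sketch only: the connection strings and database name are placeholders, and index options beyond the key and name (unique, partial filters, collation, …) would still need to be carried over explicitly.

```js
// Sketch: copy index definitions from staging to production without touching documents.
// Connection strings and the database name are placeholders.
const { MongoClient } = require("mongodb");

async function copyIndexes(stagingUri, productionUri, dbName) {
  const staging = await MongoClient.connect(stagingUri);
  const production = await MongoClient.connect(productionUri);
  try {
    const collections = await staging.db(dbName).listCollections().toArray();
    for (const { name } of collections) {
      const indexes = await staging.db(dbName).collection(name).listIndexes().toArray();
      for (const index of indexes) {
        if (index.name === "_id_") continue; // the default _id index already exists everywhere
        // Note: only the key pattern and name are copied here; other index options
        // (unique, sparse, partialFilterExpression, ...) would need to be passed too.
        await production.db(dbName).collection(name).createIndex(index.key, { name: index.name });
      }
    }
  } finally {
    await staging.close();
    await production.close();
  }
}
```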
null | []
| [
{
"code": "",
"text": "Hello everyone. I am trying to upload my images to a 3rd storage provider. Could someone one tell me what is a best storage provider to use with MongoDB Realm. My images are about 100kb",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Hi @Ciprian_Gabor,Why not store your images in a cheap AWS S3 bucket and just store the link to it in MongoDB?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "Image(painter = rememberAsyncImagePainter(link))\n",
"text": "Hello!I am doing this already, storing the images inside S3 and the link to it inside MongoDB. I dont understand why it gets some time to load the images (like 1 second). In kotlin I am using like this:Is there a best way to load the images from URL?",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "No idea :/. That’s between S3 and Kotlin then now.\nWhich storage option did you choose in S3? It’s cold storage but there are options in S3 to lower the cost to the detriment of access time.Find detailed information on Free Tier, storage pricing, requests and data retrieval pricing, data transfer and transfer acceleration pricing, and data management features pricing options for all classes of S3 cloud storage.Are the images stored in the same AWS region close to where you are rendering the image? Maybe it’s just the latency + download time ?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Not sure about it. I created the S3 free storage bucket.\nYes, the region is close to where I am downloading images.",
"username": "Ciprian_Gabor"
}
]
| Best storage providers | 2022-11-30T12:49:28.252Z | Best storage providers | 1,404 |
null | []
| [
{
"code": "async function addDOC(){\n\n\tawait DOCDBCHECK.remove({ });\n\tconsole.log(\"Collection being updated!\");\n\tconsole.log(\"3\");\n\tconsole.log(\"2\");\n\tconsole.log(\"1\");\n\tawait DOCDBCHECK.insertOne(doc);\n\tDOCDBCHECKBUP.insertOne(doc);\n\tconsole.log(\"Collection successfully updated!\");\n\n}\n return callback(new error_1.MongoServerError(res.writeErrors[0]));\n ^\n\nMongoServerError: E11000 duplicate key error collection: data.TFTbcBUP index: _id_ dup key: { _id: ObjectId('637f9f14d839980dd5e4da91') }\n at C:\\Users\\FT\\Downloads\\FoobaseFinal\\node_modules\\mongodb\\lib\\operations\\insert.js:53:33\n at C:\\Users\\FT\\Downloads\\FoobaseFinal\\node_modules\\mongodb\\lib\\cmap\\connection_pool.js:292:25\n at C:\\Users\\FT\\Downloads\\FoobaseFinal\\node_modules\\mongodb\\lib\\sdam\\server.js:215:17\n at handleOperationResult (C:\\Users\\FT\\Downloads\\FoobaseFinal\\node_modules\\mongodb\\lib\\sdam\\server.js:287:20)\n at Connection.onMessage (C:\\Users\\FT\\Downloads\\FoobaseFinal\\node_modules\\mongodb\\lib\\cmap\\connection.js:219:9)\n at MessageStream.<anonymous> (C:\\Users\\FT\\Downloads\\FoobaseFinal\\node_modules\\mongodb\\lib\\cmap\\connection.js:60:60)\n at MessageStream.emit (node:events:513:28)\n at processIncomingData (C:\\Users\\FT\\Downloads\\FoobaseFinal\\node_modules\\mongodb\\lib\\cmap\\message_stream.js:132:20)\n at MessageStream._write (C:\\Users\\FT\\Downloads\\FoobaseFinal\\node_modules\\mongodb\\lib\\cmap\\message_stream.js:33:9)\n at writeOrBuffer (node:internal/streams/writable:391:12) {\n index: 0,\n code: 11000,\n keyPattern: { _id: 1 },\n keyValue: {\n _id: ObjectId {\n [Symbol(id)]: Buffer(12) [Uint8Array] [\n 99, 127, 159, 20, 216,\n 57, 152, 13, 213, 228,\n 218, 145\n ]\n }\n },\n [Symbol(errorLabels)]: Set(0) {}\n}\nsetInterval(addDOC, oneMinute*6);\n",
"text": "Hi there, I have a code that goes like this:And it constantly outputs error saying the ID is not unique? Strange that, functions are timed to wait for one another, so there shouldn’t be a problem… whats happening?Error outputted is:What am I doing wrong?\nI thought initially that it was the timing of the functions, given them async await properties and still this happening, can someone guide me through how to overcome this error?(do you require any other information to assist me on this? Just let me know…)EDIT:I then have this on a timer:",
"username": "Zoo_Zaa"
},
{
"code": "await DOCDBCHECK.insertOne(doc);DOCDBCHECKBUP.insertOne(doc);",
"text": "What am I doing wrong?You are inserting the same document twice.Once withawait DOCDBCHECK.insertOne(doc);and another time with line righ after:DOCDBCHECKBUP.insertOne(doc);",
"username": "steevej"
},
{
"code": "",
"text": "I am inserting it in different collections, what is wrong with that? One the document first gets removed from collection, then the updated one goes in, and the other goes into a back up collection with all the many times it restarts so a record is kept for safekeeping and development usage… I’m confusedWhat should I do?",
"username": "Zoo_Zaa"
},
{
"code": "data.TFTbcBUP",
"text": "I think whats happening is with the BUP function…\nHere, look…data.TFTbcBUPon second line of error… Its only complaining about the BUP one, and on the BUP one saying that there is a repeated _id…? which is automatically added by Mongo…?",
"username": "Zoo_Zaa"
},
{
"code": "",
"text": "Your choice of variable names fooled me. It has been a long since I did FORTRAN with all upper case variables.I see one of 2 things:",
"username": "steevej"
},
{
"code": "var TFTDBCHECK = client.db(\"data\").collection(\"TFTbc\");\nvar TFTDBCHECKBUP = client.db(\"data\").collection(\"TFTbcBUP\");\n\tawait TFTDBCHECK.insertOne(DOC);\n\tconsole.log(\"Collection successfully updated!\");\n\tasync function addBUP(){\n\t\tawait TFTDBCHECKBUP.insertOne(DOC);\n\t}setTimeout(addBUP, 5000);\n",
"text": "ahh, worry not it happens even to the best.The variables are pointing to different collections, check bellow:Heres what I did, and as far as I’ve went I’m having no errors…I turned the BUP one to a function, thus separating the logic processing of the insertion into two different processes, I think, at least in my head it goes like that, either that or it was a timing issue, as I see no problem in uploading the file multiple times in different collections, simply perhaps a _id issue was in the difference in milliseconds at which the insertions were being made via the Mongodb framework… whatcha recon?Heres the working code, fyi… =)Cheers",
"username": "Zoo_Zaa"
},
{
"code": "",
"text": "@steevej It occasionally still outputs error, I dont know how to proceed other than doing something I find inadmissible… removing the BUP insert to keep track of events…Any ideas?",
"username": "Zoo_Zaa"
},
{
"code": "",
"text": "If mongod complains that you have a duplicate _id then it is because your code insert it twice in the collection. There is no implicit insert done by any part of the system. If the error only occurs from time to time, then most likely the initial states of your collections is not what you think.But the real question why to you duplicate your document in 2 collections?Also, why in the original code, do you remove the document in only 1 collection but not in both?",
"username": "steevej"
},
{
"code": "",
"text": "One was the back up, and the remove was meant to completelly clean collection then readding same entry but further elaborated…I’ve moved on from that tho, changed the logic now I don’t need to remove and insert it reads if exists one before doing anything…Thank you for the help",
"username": "Zoo_Zaa"
}
]
| Remove/Insert is outputting constant error | 2022-11-24T16:57:29.125Z | Remove/Insert is outputting constant error | 2,143 |
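A plausible explanation for the intermittent E11000 above: the Node.js driver assigns an `_id` to the object passed to `insertOne()`, so if the same `doc` object is reused on a later timer tick, the backup collection — which is never cleared — is asked to insert an `_id` it already holds. A minimal sketch of one way around it, reusing the poster's collection handles (the copy step is the only addition):

```js
// Sketch: give the backup collection its own copy so the _id assigned by the
// first insertOne() is never re-sent on a later run of the timer.
async function addDoc(doc) {
  await DOCDBCHECK.deleteMany({});            // deleteMany() replaces the legacy remove()
  await DOCDBCHECK.insertOne(doc);            // the driver sets doc._id here
  const backupCopy = { ...doc };              // shallow copy of the document...
  delete backupCopy._id;                      // ...without the _id that was just assigned
  await DOCDBCHECKBUP.insertOne(backupCopy);  // backup gets a fresh _id of its own
  console.log("Collection successfully updated!");
}
```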
null | [
"dot-net",
"connecting",
"containers"
]
| [
{
"code": "fail: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware[1]\n An unhandled exception has occurred while executing the request.\n System.TimeoutException: A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : \"1\", Type : \"Unknown\", State : \"Disconnected\", Servers : [{ ServerId: \"{ ClusterId : 1, EndPoint : \"Unspecified/catalogcommentdb:27015\" }\", EndPoint: \"Unspecified/catalogcommentdb:27015\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n ---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException (111): Connection refused 172.24.0.4:27015\n at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)\n at System.Net.Sockets.Socket.Connect(EndPoint remoteEP)\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.Connect(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.Open(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnection(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.Heartbeat(CancellationToken cancellationToken)\", LastHeartbeatTimestamp: \"2022-08-08T12:28:13.2001516Z\", LastUpdateTimestamp: \"2022-08-08T12:28:13.2001518Z\" }] }.\n at MongoDB.Driver.Core.Clusters.Cluster.ThrowTimeoutException(IServerSelector selector, ClusterDescription description)\n at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedHelper.HandleCompletedTask(Task completedTask)\n at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChanged(IServerSelector selector, ClusterDescription description, Task descriptionChangedTask, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Clusters.Cluster.SelectServer(IServerSelector selector, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoClient.AreSessionsSupportedAfterServerSelection(CancellationToken cancellationToken)\n at MongoDB.Driver.MongoClient.AreSessionsSupported(CancellationToken cancellationToken)\n at MongoDB.Driver.MongoClient.StartImplicitSession(CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.StartImplicitSession(CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSession[TResult](Func`2 func, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.FindSync[TProjection](FilterDefinition`1 filter, FindOptions`2 options, CancellationToken cancellationToken)\n at MongoDB.Driver.FindFluent`2.ToCursor(CancellationToken cancellationToken)\n at MongoDB.Driver.IAsyncCursorSourceExtensions.Any[TDocument](IAsyncCursorSource`1 source, CancellationToken cancellationToken)\n 
at MongoDB.Driver.IFindFluentExtensions.Any[TDocument,TProjection](IFindFluent`2 find, CancellationToken cancellationToken)\n at CatalogComment.API.Data.CatalogCommentContextSeed.SeedData[T](IMongoCollection`1 productCollection, List`1 seedData) in /src/Services/CatalogComment/CatalogComment.API/Data/CatalogCommentContextSeed.cs:line 9\n at CatalogComment.API.Data.CatalogCommentContext..ctor(IConfiguration configuration) in /src/Services/CatalogComment/CatalogComment.API/Data/CatalogCommentContext.cs:line 16\n at System.RuntimeMethodHandle.InvokeMethod(Object target, Span`1& arguments, Signature sig, Boolean constructor, Boolean wrapExceptions)\n at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)\n at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitConstructor(ConstructorCallSite constructorCallSite, RuntimeResolverContext context)\n at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSiteMain(ServiceCallSite callSite, TArgument argument)\n at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitCache(ServiceCallSite callSite, RuntimeResolverContext context, ServiceProviderEngineScope serviceProviderEngine, RuntimeResolverLock lockType)\n at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitScopeCache(ServiceCallSite callSite, RuntimeResolverContext context)\n at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(ServiceCallSite callSite, TArgument argument)\n at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.Resolve(ServiceCallSite callSite, ServiceProviderEngineScope scope)\n at Microsoft.Extensions.DependencyInjection.ServiceLookup.DynamicServiceProviderEngine.<>c__DisplayClass2_0.<RealizeService>b__0(ServiceProviderEngineScope scope)\n at Microsoft.Extensions.DependencyInjection.ServiceProvider.GetService(Type serviceType, ServiceProviderEngineScope serviceProviderEngineScope)\n at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngineScope.GetService(Type serviceType)\n at Microsoft.Extensions.DependencyInjection.ActivatorUtilities.GetService(IServiceProvider sp, Type type, Type requiredBy, Boolean isDefaultParameterRequired)\n at lambda_method9(Closure , IServiceProvider , Object[] )\n at Microsoft.AspNetCore.Mvc.Controllers.ControllerActivatorProvider.<>c__DisplayClass7_0.<CreateActivator>b__0(ControllerContext controllerContext)\n at Microsoft.AspNetCore.Mvc.Controllers.ControllerFactoryProvider.<>c__DisplayClass6_0.<CreateControllerFactory>g__CreateController|0(ControllerContext controllerContext)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync()\n --- End of stack trace from previous location ---\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|20_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)\n at 
Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)\n at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)\n at Swashbuckle.AspNetCore.SwaggerUI.SwaggerUIMiddleware.Invoke(HttpContext httpContext)\n at Swashbuckle.AspNetCore.Swagger.SwaggerMiddleware.Invoke(HttpContext httpContext, ISwaggerProvider swaggerProvider)\n at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context)\n version: '3.4'\n\nservices:\n catalogdb:\n image: mongo\n\n catalogbranddb:\n image: mongo\n\n catalogcategorydb:\n image: mongo\n\n catalog.api:\n image: ${DOCKER_REGISTRY-}catalogapi\n build:\n context: .\n dockerfile: Services/Catalog/Catalog.API/Dockerfile\n\n catalogbrand.api:\n image: ${DOCKER_REGISTRY-}catalogbrandapi\n build:\n context: .\n dockerfile: Services/CatalogBrand/CatalogBrand.API/Dockerfile\n\n\n catalogcategory.api:\n image: ${DOCKER_REGISTRY-}catalogcategoryapi\n build:\n context: .\n dockerfile: Services/CatalogCategory/CatalogCategory.API/Dockerfile\nversion: '3.4'\n\nservices:\n\n catalogdb:\n container_name: catalogdb\n restart: always\n ports:\n - \"27017:27017\"\n volumes:\n - ./mongo_data_catalogdb:/data/catalogdb\n\n catalogcategorydb:\n container_name: catalogcategorydb\n restart: always\n ports:\n - \"27018:27017\"\n volumes:\n - ./mongo_data_catalogcategorydb:/data/catalogcategorydb\n\n catalogbranddb:\n container_name: catalogbranddb\n restart: always\n ports:\n - \"27019:27017\"\n volumes:\n - ./mongo_data_catalogbranddb:/data/catalogbranddb\n\n catalog.api:\n container_name: catalog.api\n environment:\n - ASPNETCORE_ENVIRONMENT=Development\n - \"DatabaseSettings:ConnectionString=mongodb://catalogdb:27017\"\n - \"ElasticConfiguration:Uri=http://elasticsearch:9200\"\n depends_on:\n - catalogdb\n ports:\n - \"8000:80\"\n\n catalogbrand.api:\n container_name: catalogbrand.api\n environment:\n - ASPNETCORE_ENVIRONMENT=Development\n - \"DatabaseSettings:ConnectionString=mongodb://catalogbranddb:27019\"\n - \"ElasticConfiguration:Uri=http://elasticsearch:9200\"\n depends_on:\n - catalogbranddb\n ports:\n - \"8001:80\"\n\n catalogcategory.api:\n container_name: catalogcategory.api\n environment:\n - ASPNETCORE_ENVIRONMENT=Development\n - \"DatabaseSettings:ConnectionString=mongodb://catalogcategorydb:27018\"\n - \"ElasticConfiguration:Uri=http://elasticsearch:9200\"\n depends_on:\n - catalogcategorydb\n ports:\n - \"8002:80\"\n",
"text": "'m getting this error in my asp.API docker-compose logsand this is my docker-compose.ymland this is my docker-compose-overider.ymlI’m getting this error in all of the APIs Except Catalog.API and only catalog.API worked perfectly and I tried to do everything that I did with Catalog.API but still show error for other services and these are my container",
"username": "RegesteaWest"
},
{
"code": "",
"text": "your are getting “Connection refused 172.24.0.4:27015”. if you look closely, you will see you are trying to connect port “27015” but your container has only “27017/27018/27019”.whenever you get a “timeout/refused”, always check the connection string and then the security IP access list. and if you get a “drop”, check credentials and user access list.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I found my answer here CLICK HERE",
"username": "RegesteaWest"
},
{
"code": "",
"text": "what you have added to your compose file is just telling to run mongodb on another port “of container”, but then your port mapping is still wrong because it is “outsideport:insideport” and the command changes “insideport” not “outsideport”You are making things a bit harder for future uses than you can imagine. What you think you have corrected is not valid “in terms of ports”. It is like pushing all the action buttons to get a combo move in that game, it may work but that does not mean you have solved the main issue.There are 2 reasons, that I missed one before:",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I’m face the similar kind of issue but while run docker compose the env variable are not aling to docker file and networks also fails question is I’m trying to run dokerfile with test which is built the file and run the test so here my connection failed due to some networks connectivity, how to manage this.",
"username": "Jeeva_023"
},
{
"code": "",
"text": "This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Asp.net mongo db 'System.TimeoutException: A timeout occurred after 30000ms' in docker compose | 2022-08-13T07:21:46.086Z | Asp.net mongo db ‘System.TimeoutException: A timeout occurred after 30000ms’ in docker compose | 7,210 |
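The port confusion called out in the replies boils down to one rule: inside the Compose network a service connects to another container by its service name and the port the container actually listens on (27017 here), while the 27018/27019 mappings only matter from the host. The application in the thread is C#, but the connection string works the same in any driver; a small Node.js illustration:

```js
// Illustration of the connection-string rule in Node.js (the thread's app is C#).
const { MongoClient } = require("mongodb");

// Container-to-container: service name + the port mongod listens on inside the container.
const insideCompose = new MongoClient("mongodb://catalogbranddb:27017");

// Host-to-container: localhost + the host-mapped port from docker-compose ("27019:27017").
const fromHost = new MongoClient("mongodb://localhost:27019");
```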
[
"compass"
]
| [
{
"code": "",
"text": "In the past if an Object called reviewInfo had a field status it would be shown when you start typing a filter, but now its showing only the main field and its harder to find if something is nestedIn this example it would suggest : “reviewInfo.status” but now it doesn’t in the newest update.\nimage (2)815×162 12.5 KB\nHow it is :\nMongo Version (Apple Silicion)\n",
"username": "Rron_94649"
},
{
"code": "",
"text": "Yes, this is a known issue that we’ll be looking into: https://jira.mongodb.org/browse/COMPASS-6335",
"username": "Massimiliano_Marcon"
}
]
| MongoDb Compass Auto Suggestions | 2022-12-02T16:23:16.716Z | MongoDb Compass Auto Suggestions | 1,208 |
null | [
"app-services-cli"
]
| [
{
"code": "[\n {\n \"route\": \"/testingonly5\",\n \"http_method\": \"GET\",\n \"function_name\": \"testingonly\",\n \"validation_method\": \"NO_VALIDATION\",\n \"respond_result\": true,\n \"fetch_custom_user_data\": false,\n \"create_user_on_auth\": false,\n \"disabled\": false,\n \"return_type\": \"JSON\"\n }\n]\n",
"text": "I’m using v. 2.6.2 of the mongodb-realm-cli library and trying to programmatically setup a basic HTTPS endpoint, following the documentation in https://www.mongodb.com/docs/atlas/app-services/data-api/custom-endpoints/.I’ve got a new function, which is successfully created when running the push command, but the https_endpoints->config.json configuration file has no effect to the infrastructure and no errors are reported. How can I create the https endpoint for the above function then? Setting it up manually via the UI does work, but nothing happens when trying the same via the realm-cli…For reference my config.json file is similar to the following:Any suggestions please?",
"username": "George_Ivanov"
},
{
"code": "testingonly? Please confirm the changes shown above Yes\nCreating draft\nPushing changes\nDeploying draft\nDeployment complete\nSuccessfully pushed app up: appname123-0-abcde\n",
"text": "Hi @George_Ivanov - Welcome to the community!I’ve got a new function, which is successfully created when running the push command, but the https_endpoints->config.json configuration file has no effect to the infrastructure and no errors are reported.Setting it up manually via the UI does work, but nothing happens when trying the same via the realm-cli…Just to clarify - It sounds like the push itself is successful but you are not seeing the changes to the HTTPS Endpoint configuration after the push. In addition to this, I presume the function itself testingonly is already created and you’re only trying to create the HTTPS endpoint to associate it with this function. Please correct me if I am incorrect in either of my assumptions.Regarding the push itself, are you seeing something similar to the below output?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "executing: npx realm-cli login --api-key ****** --private-api-key ****** -y\nSuccessfully logged in\nexecuting: npx realm-cli push --remote platform-dev-**** --local build/mongodb-realm-app -y \nDetermining changes\nCreating draft\nPushing changes\nDeploying draft\nDeployment complete\nSuccessfully pushed app up: platform-dev-***\ndeployed platform-dev-*******\n",
"text": "Hi @Jason_Tran,Yes, the output from the push is:The function is created:\n\nScreenshot 2022-11-18 at 10.20.542010×63 2.47 KB\n\nBut no https endpoint is added",
"username": "George_Ivanov"
},
{
"code": "-y-y--remote--localconfig.json[]../realm-cli push\nDetermining changes\nThe following reflects the proposed changes to your Realm app\n--- http_endpoints/config.json\n+++ http_endpoints/config.json\n@@ -1,2 +1,14 @@\n-[]\n+[\n+ {\n+ \"route\": \"/customroute\",\n+ \"http_method\": \"POST\",\n+ \"function_name\": \"helloworld\",\n+ \"validation_method\": \"NO_VALIDATION\",\n+ \"respond_result\": true,\n+ \"fetch_custom_user_data\": false,\n+ \"create_user_on_auth\": false,\n+ \"disabled\": false,\n+ \"return_type\": \"JSON\"\n+ }\n+]\n\n\n? Please confirm the changes shown above Yes\nCreating draft\nPushing changes\nDeploying draft\nDeployment complete\nSuccessfully pushed app up: application-1-redacted\n--local build/mongodb-realm-app\nls -lls -l https_endpointscat https_endpoints/config.json",
"text": "Thanks for providing the output George.But no https endpoint is addedDue to the -y option used, I believe the proposed changes won’t be logged to the output in the cli. Would you be able to try create another HTTPS_endpoint again and provide the output without the -y option? Additionally, could you try creating the HTTPS_endpoint directly from the application folder (or even a test application folder) without the use of --remote or --local for troubleshooting purposes? This will help determine what the issue may be when creating the custom endpoint.An example of this below from my test environment where a HTTPS_Endpoint is created (original config.json only containing []):Also, regarding the following:Can you provide the following output from the above directory?:Please redact any personal or sensitive information before posting here.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "georivan@pcpcpc mongodb-realm-app % realm-cli push --remote data-platform-dev-********\nDetermining changes\nThe following reflects the proposed changes to your Realm app\n--- functions/testingonly/config.json\n+++ functions/testingonly/config.json\n@@ -1 +1,6 @@\n+{\n+ \"can_evaluate\": {},\n+ \"name\": \"testingonly\",\n+ \"private\": false\n+}\n \n\n--- functions/testingonly/source.js\n+++ functions/testingonly/source.js\n@@ -1 +1,7 @@\n+exports = () => {\n+ const commitId = context.values.get(\"gitCommitId\");\n+ const commitMessage = context.values.get(\"gitCommitMessage\");\n+ const commitTime = context.values.get(\"gitCommitTime\");\n+ return { commitId, commitMessage, commitTime };\n+};\n \n\n? Please confirm the changes shown above (y/N) \n\ngeorivan@pcpcpc mongodb-realm-app % ls -l\ntotal 8\ndrwxr-xr-x 2 georivan staff 64 Nov 29 11:27 auth\ndrwxr-xr-x 3 georivan staff 96 Nov 29 11:30 auth_providers\n-rw-r--r-- 1 georivan staff 326 Nov 29 11:27 config.json\ndrwxr-xr-x 3 georivan staff 96 Nov 29 11:27 data_sources\ndrwxr-xr-x 7 georivan staff 224 Nov 29 11:27 environments\ndrwxr-xr-x 3 georivan staff 96 Nov 29 11:29 functions\ndrwxr-xr-x 3 georivan staff 96 Nov 29 11:29 graphql\ndrwxr-xr-x 3 georivan staff 96 Nov 29 11:51 https_endpoints\ndrwxr-xr-x 2 georivan staff 64 Nov 29 11:27 log_forwarders\ndrwxr-xr-x 3 georivan staff 96 Nov 30 17:38 services\ndrwxr-xr-x 2 georivan staff 64 Nov 29 11:27 sync\ndrwxr-xr-x 2 georivan staff 64 Nov 29 11:30 triggers\ndrwxr-xr-x 7 georivan staff 224 Nov 29 11:32 values\ngeorivan@pcpcpc mongodb-realm-app % ls -l https_endpoints\ntotal 8\n-rw-r--r-- 1 georivan staff 293 Nov 29 11:51 config.json\ngeorivan@pcpcpc https_endpoints % cat config.json\n[\n {\n \"route\": \"/testingonly5\",\n \"http_method\": \"GET\",\n \"function_name\": \"testingonly\",\n \"validation_method\": \"NO_VALIDATION\",\n \"respond_result\": true,\n \"fetch_custom_user_data\": false,\n \"create_user_on_auth\": false,\n \"disabled\": false,\n \"return_type\": \"JSON\"\n }\n]\n",
"text": "Still no luck, only the function is detected when applying, while the https_endpoints are completely ignored:The folder structure is:Contents of https_endpoints folder:Finally, the config file itself:",
"username": "George_Ivanov"
},
{
"code": "drwxr-xr-x 3 georivan staff 96 Nov 29 11:51 https_endpoints\"http_endpoints\"",
"text": "drwxr-xr-x 3 georivan staff 96 Nov 29 11:51 https_endpointsCan you try rename the directory to \"http_endpoints\" and try the push again?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I’ve tried both http_endpoints and https_endpoints as the directory name, but again makes no difference ",
"username": "George_Ivanov"
},
{
"code": "pull$ ls -la\ntotal 8\ndrwxr-xr-x 12 staff 384 1 Dec 20:14 .\ndrwxr-xr-x 21 staff 672 1 Dec 10:59 ..\ndrwxr-xr-x 4 staff 128 1 Dec 10:59 auth\ndrwxr-xr-x 3 staff 96 1 Dec 10:59 data_sources\ndrwxr-xr-x 7 staff 224 1 Dec 10:59 environments\ndrwxr-xr-x 4 staff 128 1 Dec 10:59 functions\ndrwxr-xr-x 4 staff 128 1 Dec 10:59 graphql\ndrwxr-xr-x 3 staff 96 1 Dec 10:59 https_endpoints /// <--- Changed directory to https_endpoints manually\ndrwxr-xr-x 2 staff 64 1 Dec 10:59 log_forwarders\n-rw-r--r-- 1 staff 201 1 Dec 10:59 realm_config.json\ndrwxr-xr-x 3 staff 96 1 Dec 10:59 sync\ndrwxr-xr-x 2 staff 64 1 Dec 10:59 values\n\"http_endpoints\"$ ../realm-cli push\nDetermining changes\nThe following reflects the proposed changes to your Realm app\n--- http_endpoints/config.json\n+++ http_endpoints/config.json\n@@ -1,14 +1,2 @@\n-[\n- {\n- \"route\": \"/testingonly\",\n- \"http_method\": \"GET\",\n- \"function_name\": \"helloworld\",\n- \"validation_method\": \"NO_VALIDATION\",\n- \"respond_result\": true,\n- \"fetch_custom_user_data\": false,\n- \"create_user_on_auth\": false,\n- \"disabled\": false,\n- \"return_type\": \"EJSON\"\n- }\n-]\n+[]\nrealm-cli pull\"http_endpoints\"$ ../realm-cli pull\n? Directory '/Users/<REDACTED>/mongodb-realm-cli/node_modules/mongodb-realm-cli/Application-1' already exists, do you still wish to proceed? Yes\nSaved app to disk\nSuccessfully pulled app down: .\n❯ ls\ntotal 8\ndrwxr-xr-x 4 staff 128 1 Dec 10:59 auth\ndrwxr-xr-x 3 staff 96 1 Dec 10:59 data_sources\ndrwxr-xr-x 7 staff 224 1 Dec 10:59 environments\ndrwxr-xr-x 4 staff 128 1 Dec 10:59 functions\ndrwxr-xr-x 4 staff 128 1 Dec 10:59 graphql\ndrwxr-xr-x 3 staff 96 1 Dec 20:20 http_endpoints /// <--- Created from the pull\ndrwxr-xr-x 3 staff 96 1 Dec 10:59 https_endpoints /// <--- Manually changed prior to pull\ndrwxr-xr-x 2 staff 64 1 Dec 10:59 log_forwarders\n-rw-r--r-- 1 staff 201 1 Dec 20:20 realm_config.json\ndrwxr-xr-x 3 staff 96 1 Dec 10:59 sync\ndrwxr-xr-x 2 staff 64 1 Dec 10:59 values\nrealm-cli pushpull\"http_endpoints\"pushpullpush",
"text": "If possible, would you be able to pull the Application down again? From my testing, the old directory / folder was somehow being tracked still:After running a push, the previous \"http_endpoints\" directory is still being tracked (will need to see why this is):What I tried next:It may be possibly easier to just remove the directory and do a fresh pull of the application, make the changes from the \"http_endpoints\" directory that is created, and perform a push again.Let me know if both a fresh pull / push doesn’t work either.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Downloading the structure using a pull and then adding the http_endpoints (https_endpoints is not tracked/does not work as you’ve suggested) has done the job!I’ve noticed that the project structure is a bit different now compared to what it was like with v.1 of the realm-cli. However it’s easy enough to move the necessary files around and update some of the config files to match the new style!Thanks!",
"username": "George_Ivanov"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Using realm-cli to setup an https endpoint | 2022-11-15T17:06:15.661Z | Using realm-cli to setup an https endpoint | 3,130 |
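Condensing the sequence that worked above (the app ID is a placeholder; note the directory the CLI tracks is `http_endpoints`, which a fresh pull creates for you):

```sh
# Condensed from the thread; <app-id> is a placeholder.
realm-cli pull --remote <app-id>    # refreshes the app folder, including http_endpoints/
# edit http_endpoints/config.json to add the endpoint definition
realm-cli push                      # review the proposed config.json diff, then confirm
```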
[
"compass"
]
| [
{
"code": "",
"text": "Hi MongoDB community, I am having a problem with replSetName in my cfg file. I am using mongoDb version mongodb-windows-x86_64-4.4.17-signedWhen i add replSetName to cfg the I am able to start the windows service but i am not able to connect via MonGoDB CompassTime Out in MonGoDB Compass?\nimage1307×655 131 KB\ni do not see any error messages in the Log files. Lines 42 and 43 show MonGoDB Compass trying to connect\nimage1920×761 277 KB\nAny ideas what i am doing incorrectly ?",
"username": "Alan_Tam"
},
{
"code": "",
"text": "The replicaSet has not been initiated.You can use the Direct Connection option to connect and then use the built-in mongosh to initiate.\nSee screenshots below.The process is described in the manual:\nimage822×863 62.8 KB\n\nimage1427×252 17.4 KB\n",
"username": "chris"
}
]
| Windows service mongod.cfg replSetName | 2022-12-02T11:33:16.993Z | Windows service mongod.cfg replSetName | 1,294 |
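The screenshots referenced above are not reproduced here, but the fix described is: connect to the node once using the Direct Connection option, then initiate the replica set from the built-in mongosh. A minimal sketch:

```js
// In mongosh, connected directly to the mongod started with replSetName configured:
rs.initiate()   // single-node replica set using this node's own host and port
rs.status()     // confirm the member reaches PRIMARY before reconnecting without directConnection
```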
null | []
| [
{
"code": "",
"text": "Happy to join new community !!",
"username": "Hiral_Makwana"
},
{
"code": "",
"text": "Hi @Hiral_Makwana,Welcome to the MongoDB Community forums We’re glad that you have joined MongoDB Community. Can you introduce yourself in your own words to the community?Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hello @Kushagra_Kesav\nI’m backend developer learning MongoDB.\nThanks !",
"username": "Hiral_Makwana"
},
{
"code": "",
"text": "Hi @Hiral_Makwana,Thanks for sharing about yourself, please utilize our free university courses to learn MongoDB: https://university.mongodb.com/. We suggest you get started with M001 - MongoDB BasicsHappy Learning Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "A post was split to a new topic: Hello, I’m Lahcene",
"username": "Kushagra_Kesav"
}
]
| Hello everyone! | 2022-07-16T05:43:50.050Z | Hello everyone! | 2,891 |
null | []
| [
{
"code": "",
"text": "Hi MongoDB CommunityI would like to implement PSAso far i have this … it would create P and Aconfig = { _id: “rs1”, members:[{_id: 0, host: “localhost:27017”},{ _id: 1, host : “localhost:37017”}]}how do i modify to add arbiter ?{_id: 3, host: \" ??? \"}Thanks",
"username": "Alan_Tam"
},
{
"code": "rs.initiate({\n _id: \"replicaTest\",\n members: [\n { _id: 0, host: \"127.0.0.1:27017\" },\n { _id: 1, host: \"127.0.0.1:27018\" },\n { _id: 2, host: \"127.0.0.1:27019\", arbiterOnly:true }]});\n",
"text": "Hi @Alan_Tam,Please don’t. Trust me. Do yourself a favor and don’t use an Arbiter. I already explained this at length (I just did it in a post like 3 minutes ago) in multiple posts in this forum. Arbiters are evil.Basically Replica Sets (RS) are designed to make your cluster Highly Available (HA). It’s their ONE reason to exist. Arbiters take a bite in HA already as one node in your cluster can vote but can’t acknowledge write operations meaning that you cannot write with w=majority without loosing HA entirely (because in a PSA losing the P or the S will prevent you from writing to 2 nodes at least). This wouldn’t happen in a PSS standard RS setup.But if you must, this is the syntax you are looking for:Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
]
| How do i add Arbiter for PSA set up | 2022-12-02T12:04:35.596Z | How do i add Arbiter for PSA set up | 1,052 |
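For completeness, the PSS layout the reply recommends instead differs from its arbiter example only in the third member (local ports are illustrative, as in the reply):

```js
// PSS: three data-bearing members, no arbiter (ports follow the reply's local example).
rs.initiate({
  _id: "replicaTest",
  members: [
    { _id: 0, host: "127.0.0.1:27017" },
    { _id: 1, host: "127.0.0.1:27018" },
    { _id: 2, host: "127.0.0.1:27019" }   // regular member instead of arbiterOnly: true
  ]
});
```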
[
"aggregation"
]
| [
{
"code": "$lookup{\n from: 'comments',\n localField: '_id',\n foreignField: 'actionId',\n as: 'comments',\n pipeline: [\n { $sort: { createdDate: 1 } }, \n ],\n}\n",
"text": "Hello,We were working with the aggregate pipeline ($lookup) and faced an issue (\"$lookup with ‘pipeline’ may not specify ‘localField’ or ‘foreignField’\") with the community version that does not happen in the Enterprise edition.Can you guys tell me if it’s expected behavior?Thanks !\nArtboard1920×1078 122 KB\n",
"username": "Julien_Sarazin"
},
{
"code": "",
"text": "FYI We upgraded to 5.0.14 community edition and had the same issue.",
"username": "Julien_Sarazin"
},
{
"code": "$lookupmongosh",
"text": "Hi @Julien_Sarazin - Welcome to the community.Thanks for providing those details regarding the error message.FYI We upgraded to 5.0.14 community edition and had the same issue.It’s quite interesting as I’ve tried to use the same $lookup details you’ve provided (although on different documents in my test environment) on a 5.0.14 community edition test server but could not replicate the error. Please see the details below:\nimage1962×1358 275 KB\nCan you confirm / provide the following information:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I found the same issue, was using 5.0.9 locally and was wondering why it behaved differently than my server which has 5.0.13, went ahead and upgraded to 5.0.14 locally but same problem on Windows 10\nimage1160×503 33.1 KB\nThe only difference is that the server version where it works is running Ubuntu 22",
"username": "Pedro_Verdugo"
},
{
"code": "{\n '$lookup': {\n from: 'comments',\n localField: '_id',\n foreignField: 'actionId',\n as: 'comments',\n pipeline: [ { '$sort': { createdDate: 1 } } ]\n }\n}\nCurrent Mongosh Log ID:\t12345b2085df6bb1e9bc4275\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.0\nUsing MongoDB:\t\t5.0.14\nUsing Mongosh:\t\t1.6.0\n\n/// pipeline run:\ntest> db.local.aggregate(\n{\n '$lookup': {\n from: 'comments',\n localField: '_id',\n foreignField: 'actionId',\n as: 'comments',\n pipeline: [ { '$sort': { createdDate: 1 } } ]\n }\n})\n[\n {\n _id: 1,\n comments: [\n {\n _id: ObjectId(\"63895a6c71c54b95e649921a\"),\n actionId: 1,\n createdDate: ISODate(\"2022-12-02T01:52:44.467Z\")\n }\n ]\n }\n]\n\n/// Set the feature compatibility version to \"4.4\"\ntest> db.adminCommand({setFeatureCompatibilityVersion:\"4.4\"})\n{\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1669946175, i: 1029 }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: Long(\"0\")\n }\n },\n operationTime: Timestamp({ t: 1669946175, i: 1029 })\n}\n\n/// same pipeline run again:\ntest> db.local.aggregate(\n{\n '$lookup': {\n from: 'comments',\n localField: '_id',\n foreignField: 'actionId',\n as: 'comments',\n pipeline: [ { '$sort': { createdDate: 1 } } ]\n }\n})\n\n/// same error you have received:\nMongoServerError: $lookup with 'pipeline' may not specify 'localField' or 'foreignField'\ntest>\nmongoddb.adminCommand( {\n getParameter: 1,\n featureCompatibilityVersion: 1\n }\n )\n",
"text": "Have you used setFeatureCompatibilityVersion to “4.4” for that server which is receiving the error?I have the pipeline below used in my tests:Version 5.0.14 server:You can check the FCV using the following on a mongod:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "$lookup",
"text": "As an additional note, I tested the $lookup via Compass and managed to get the same error you have provided when I set the FCV to “4.4”.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Ahh that was it, I wasn’t doing many aggregations on my local so totally forgot about that, thanks.",
"username": "Pedro_Verdugo"
},
{
"code": "",
"text": "Thank for updating the post Pedro. Glad to hear that was it.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hello @Jason_TranSorry for the late reply.\nScreenshot 2022-12-02 at 16.23.121560×842 134 KB\n",
"username": "Julien_Sarazin"
},
{
"code": "db.adminCommand( { setFeatureCompatibilityVersion: \"5.0\" } )",
"text": "It was indeed related to the FCV 4.4! Thanks a lot!\nSetting db.adminCommand( { setFeatureCompatibilityVersion: \"5.0\" } ) fixed the problem ",
"username": "Julien_Sarazin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| "$lookup with 'pipeline' may not specify 'localField' or 'foreignField'" | 2022-11-28T08:19:19.779Z | “$lookup with ‘pipeline’ may not specify ‘localField’ or ‘foreignField’” | 7,108 |
null | [
"node-js",
"data-modeling",
"field-encryption"
]
| [
{
"code": "schemaMapautoEncryption{\n 'database.customers': {\n bsonType: 'object',\n encryptMetadata: {\n keyId: [new Binary(Buffer.from(\"KEYID\", \"hex\"), 4)]\n },\n properties: {\n name: {\n bsonType: [ 'object', 'null' ],\n anyOf: [\n { bsonType: 'null' },\n {\n bsonType: 'object',\n properties: {\n value: {\n encrypt: {\n algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic',\n bsonType: 'string'\n }\n }\n }\n }\n ]\n }\n }\n }\n}\nInvalid schema containing the 'encrypt' keywordanyOf",
"text": "Hi,I am trying to use Client-Side-Field-Level-Encryption (CSFLE) ; So far I am providing the schemaMap in the autoEncryption property when I initialize my Mongo Client (which is the native NodeJs driver).However my schema has a spec which is I want to encrypt a single field from an object, but, the object itself may be null.My latest tries were as follow :Which ends up with Invalid schema containing the 'encrypt' keyword. The documentation of course does not cover such special case. It seems the keyword anyOf is supported for Mongo schemas and CSFLE does not say the support it’s dropped for encryption.If anyone may point out a lead to solve that, i would be thankful.Thank you,\nBest regards",
"username": "Adrien_Mille1"
},
{
"code": "",
"text": "Hello. Did you manage to fix it?",
"username": "Gustavo_Albuquerque"
}
]
| Schema for CSFLE with null values | 2022-02-23T08:30:15.190Z | Schema for CSFLE with null values | 3,208 |
null | [
"replication",
"java",
"spring-data-odm"
]
| [
{
"code": " 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4\n ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6\n 192.168.10.10 dbaMongoServer\n rs.initialize()\n rs.add(\"192.168.10.10:27017\").\n 2022-11-28 08:54:53.848 INFO 14720 --- [XNIO-1 task-2] org.mongodb.driver.connection Closed connection [connectionId{localValue:3695}] to dbaMongoServer.org:27017 because there was a socket exception raised by this connection.\n 2022-11-28 08:54:53.850 ERROR 14720 --- [XNIO-1 task-2] p.g.r.services.ValidateService org.springframework.dao.DataAccessResourceFailureException: dbaMongoServer.org; nested exception is com.mongodb.MongoSocketException: dbaMongoServer.org\n at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:90)\n at org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:2774)\n```\n```\nCaused by: com.mongodb.MongoSocketException: dbaMongoServer.org\n at com.mongodb.ServerAddress.getSocketAddress(ServerAddress.java:188)\n192.168.10.10 dbaMongoServer.org\n \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"dbaMongoServer.org:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n",
"text": "HiI set up Mongo 4.0.2 for linux server. The server hostname is dbaMongoServer.org./etc/hosts content:As you can see the hostname established in /etc/hosts file is not the same for server hostnameThen I changed the mongod.conf to establish it as replicaset and I executed the following commands in mongoshell:One test application that connects to MongoDB and it started to faill. This application connects by IP address, not by hostname database server. These are the following messages from application log:I fixed the /etc/hosts changing its third line as follows:The application keeps failling.After that, I checked the replicaset configuration by db.conf(), I realized the member host was changed for itself from IP address to hostname.The question is why the replica set member host changed for itself from server IP address to hostname?",
"username": "Carlos_Manuel_55780"
},
{
"code": "",
"text": "Did your rs.add succeed?\nOn which port your mongod was running?You are trying to add second node on 27017\nShow us the output of rs.status()\nThe output you have shown must be that of primary\nBy default rs.initiate takes hostname with empty config or if the members were not defined at initiate time",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "The rs.add succeded but not how I wanted. I executed rs.add(“192.168.10.10:27017”) after\nrs.initialize(). Even though, the host member was stored as ‘dbaMongoServer:27017’. It never happenned to me before.The port is 27017. It’s a replicaset for only one node (primary).The output you have shown must be that of primary",
"username": "Carlos_Manuel_55780"
},
{
"code": "rs.initiate()rs.add()root@mdb:/# hostname -I\n172.17.0.3 \nroot@mdb:/# mongo --quiet\n> rs.initiate()\n{\n\t\"info2\" : \"no configuration specified. Using a default configuration for the set\",\n\t\"me\" : \"mdb:27017\",\n\t\"ok\" : 1\n}\ns0:SECONDARY> \ns0:PRIMARY> rs.add('172.17.0.3')\n{\n\t\"operationTime\" : Timestamp(1669848239, 1),\n\t\"ok\" : 0,\n\t\"errmsg\" : \"The hosts mdb:27017 and 172.17.0.3:27017 all map to this node in new configuration version 2 for replica set s0\",\n\t\"code\" : 103,\n\t\"codeName\" : \"NewReplicaSetConfigurationIncompatible\",\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1669848239, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t}\n}\n\n//matching my example from above\ns0:PRIMARY> var c=rs.conf()\ns0:PRIMARY> c.members[0].host='172.17.0.3:27017'\n172.17.0.3:27017\ns0:PRIMARY> rs.reconfig(c)\n{\n\t\"ok\" : 1,\n\t\"operationTime\" : Timestamp(1669850026, 1),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1669850026, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t}\n}\n\n",
"text": "rs.initiate() will have set the host up using its own fqdn. So it would not have changed it would have been dbMongoServer.org from the initialization. The subsequent rs.add() would likely have failed.Using fqdn’s is the recommended way to reference replicaset members.Regardless, there are a few ways to change to change this to IP, I’ll add one here:",
"username": "chris"
},
{
"code": " 192.168.10.10 dbaMongoServer // which it's a incorrect hostname. The correct is dbaMongoServer.org\n",
"text": "I got a question.If the /etc/hosts file was no configured well. I mean, it was containing:After a period of time I corrected the hosts file. Would this have to do with a possible change of the replicaset’s member host to hostname?",
"username": "Carlos_Manuel_55780"
},
{
"code": "db.hello()directConnection=true",
"text": "On the client side understanding how the client connects to a replicaSet is important.The client driver treats the hosts in the connection string as seed hosts, once a connection is successfully made the topology of the replicaset is retrieved by the equivalent of db.hello(). This is why the client would have tried to connect to dbaMongoServer.org if you specified 192.168.10.10 in your connection string.Client: connect 192.168.10.10 → Server\nClient: hello() → Server\nClient ← Server: Replicaset looks like this, and primary is dbaMongoServer.org:27017\nClient: connect dbaMongoServer.org → Cannot get an address for dbaMongoServer.orgYou can specify directConnection=true to work around this behavior, but this is generally not want you want when connection to a replicaset as you want the application to connect to the next primary if the current one fails or is stepped down.",
"username": "chris"
},
{
"code": "mongo.conf# network interfaces\nnet:\n port: 27017\n bindIp: dbaMongoServer.org, 127.0.0.1\n",
"text": "Hi @Carlos_Manuel_55780I think this is related to your mongo.conf file. you possibly have set these following lines:doing so, your instances will listen to local connections and react to only requests made for “dbaMongoServer.org” (order matters). This also causes replica set use this name upon initialization. I am guessing you have set it there and forgot about it when you open this post.although members mostly use same port but different names, you may have same name different ports from different machines but that would be hard to maintain in hosts file.",
"username": "Yilmaz_Durmaz"
},
{
"code": "net:\n port: 27017\n bindIp: 0.0.0.0 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses ...",
"text": "This is the content of /etc/mongod.conf for net section",
"username": "Carlos_Manuel_55780"
},
{
"code": "ps aux | grep mongod",
"text": "Honestly, the only time I got a similar result to yours was with that addition in the config file. I may still be wrong, but your server might be using another config file having this shape, other than “/etc/mongod.conf”.\nCan you please check about this possibility? one way could be ps aux | grep mongod.",
"username": "Yilmaz_Durmaz"
},
{
"code": "rs.initiate()rs.add()class CmdReplSetInitiate : public ReplSetCommand \n \n seeds->push_back(m);\n }\n if (*comma == 0) {\n break;\n }\n p = comma + 1;\n }\n }\n } // namespace\n \n class CmdReplSetInitiate : public ReplSetCommand {\n public:\n CmdReplSetInitiate() : ReplSetCommand(\"replSetInitiate\") {}\n std::string help() const override {\n return \"Initiate/christen a replica set.\\n\"\n \"http://dochub.mongodb.org/core/replicasetcommands\";\n }\n virtual bool run(OperationContext* opCtx,\n const DatabaseName&,\n const BSONObj& cmdObj,\n BSONObjBuilder& result) {\n \n HostAndPort me = someHostAndPortForMe();\n \n StorageInterface::get(opCtx),\n ReplicationProcess::get(opCtx));\n std::string name;\n std::vector<HostAndPort> seeds;\n parseReplSetSeedList(&externalState, replSetString, &name, &seeds); // may throw...\n \n BSONObjBuilder b;\n b.append(\"_id\", name);\n b.append(\"version\", 1);\n BSONObjBuilder members;\n HostAndPort me = someHostAndPortForMe();\n \n auto appendMember =\n [&members, serial = DecimalCounter<uint32_t>()](const HostAndPort& host) mutable {\n members.append(\n StringData{serial},\n BSON(\"_id\" << static_cast<int>(serial) << \"host\" << host.toString()));\n ++serial;\n };\n appendMember(me);\n result.append(\"me\", me.toString());\n \n getHostName()\n \n if (localhost_only) {\n // We're only binding localhost-type interfaces.\n // Use one of those by name if available,\n // otherwise fall back on \"localhost\".\n return HostAndPort(addrs.size() ? addrs[0] : \"localhost\", bind_port);\n }\n \n // Based on the above logic, this is only reached for --bind_ip '0.0.0.0'.\n // We are listening externally, but we don't have a definite hostname.\n // Ask the OS.\n std::string h = getHostName();\n verify(!h.empty());\n verify(h != \"localhost\");\n return HostAndPort(h, serverGlobalParams.port);\n }\n \n void parseReplSetSeedList(ReplicationCoordinatorExternalState* externalState,\n const std::string& replSetString,\n std::string* setname,\n std::vector<HostAndPort>* seeds) {\n const char* p = replSetString.c_str();\n \n ",
"text": "rs.initiate() will have set the host up using its own fqdn. So it would not have changed it would have been dbMongoServer.org from the initialization. The subsequent rs.add() would likely have failed.The initiate is using the servers own hostname as returned by the OS.class CmdReplSetInitiate : public ReplSetCommand HostAndPort me = someHostAndPortForMe();Which eventually leads to the call of getHostName() in someHostAndPortForMe():",
"username": "chris"
},
{
"code": "",
"text": "There is only one config file for mongod. On the other side, I can connect to the databse by MongoCompass from my PC. The hostname and IP are not added from Windows hosts file.According to the previous messages. I keep in mind now that I have to check member host value for primary once added.Thank you very much",
"username": "Carlos_Manuel_55780"
}
]
| The replica set member host changed for itself from server IP address to hostname | 2022-11-29T14:38:35.910Z | The replica set member host changed for itself from server IP address to hostname | 3,708 |
[]
| [
{
"code": "",
"text": "exception in initAndListen: NonExistentPath: Data directory C:\\data\\db\\ not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the ‘storage.dbPath’ option in the configuration file., terminating\nimage1594×380 17.1 KB\nafter this it kicks me out of the mongod and I am able to write in the hyper terminal. So my server is not starting",
"username": "Aleksandar_Nedelkovski"
},
{
"code": "",
"text": "How did you start your mongod?\nIf you have issued just mongod it will start on default port 27017 and default dirpath c:/data/db\nSince dir is missing it is exiting\nYou need to create the directory\nSince you are on Windows if you installed mongod as service it uses proper mongod.conf",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "The message clearly indicate that theData directory C:\\data\\db\\ not foundThe simplest solution is to create the directory as hinted by the rest of the error messageCreate the missing directoryYou may create the missing directory using the command mkdir.",
"username": "steevej"
}
]
| Exception in initAndListen: NonExistentPath | 2022-12-02T10:44:45.108Z | Exception in initAndListen: NonExistentPath | 2,167 |
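Concretely, the two options named in the error message look like this on Windows (the first path is the default, the second a placeholder):

```
:: Windows Command Prompt — either create the default data directory...
mkdir C:\data\db
mongod

:: ...or point mongod at a directory that already exists
mongod --dbpath "C:\path\to\data"
```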
null | [
"java",
"swift",
"react-native",
"android"
]
| [
{
"code": "",
"text": "Hello,\nThe team that I work in is moving toward supporting react-native 0.70.0 in order to adopt the many benefits of the new architecture.Currently, our application has Realm integrated on both the native iOS and Android layer and React-Native layer at the same time (this is an anti-pattern, though something our current team needs to work with for now)In order for integration to work on both layers we have discovered that the latest version of Realm we can use is bottlenecked by the latest version of Realm-Core that RealmSwift, RealmJava and RealmJS all support.As of this post,\nThe latest versions supported are as follows:\nReact-native: 11.1.0 (12.11.0 core) - Nov 1st release\nSwift: 10.32.2 (12.11.0 core) - Nov 2nd release\nJava: 10.12.0 (12.6.0 core) - Sept 22 releaseSo in order to upgrade, from my understanding, we are bottlenecked by RealmJava which supports 12.6.0 core as its latest version.When looking at realmJS, the latest version that also supports RealmCore 12.6.0 does not support ReactNative version 0.70.0Which I believe is blocking our team from being able to upgrade.The earliest version of RealmCore that supports ReactNative 0.70.0 is RealmCore version 12.11.0, so in order for us to adopt ReactNative 0.70.0 RealmJava will also need to support RealmCore 12.11.0.This brings me to my question -\nAre there plans for RealmJava to support RealmCore 12.11.0 anytime in the near future?",
"username": "Joshua_Wheeler"
},
{
"code": "",
"text": "Hi, Yes, we are planing to bring Realm Java to the latest Core version within the next few weeks.",
"username": "ChristanMelchior"
},
{
"code": "",
"text": "test Core version within the next few weeGood morning.Could you identify the ‘latest Core version’ you are refering?Though the initial message mentions 12.11.0, i see that the version 13.1.0 is available from github.I looked at the projects that depend on realm-core and see that the javascript (React Native) does point to 13.1.0 while the others point to 12.11.0.Regards,\nDaniel",
"username": "Daniel_Labonte"
}
]
| Will Realm-Java Support Realm-Core 12.11.0 in the near future? | 2022-11-08T19:53:18.805Z | Will Realm-Java Support Realm-Core 12.11.0 in the near future? | 2,040 |
null | [
"java",
"production"
]
| [
{
"code": "",
"text": "The 4.8.0 MongoDB Java & JVM Drivers has been released.The documentation hub includes extensive documentation of the 4.8 driver.You can find a full list of bug fixes here .You can find a full list of improvements here .You can find a full list of new features here .",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Java Driver 4.8.0 released | 2022-12-02T13:05:19.756Z | MongoDB Java Driver 4.8.0 released | 2,765 |
null | [
"aggregation"
]
| [
{
"code": "{\n \"_id\": {\n \"site\": \"111\",\n \"path\": \"index.html\",\n \"macro\": \"macro1=value1¯o2=value2\",\n \"granularity\": \"DAILY\",\n \"period\": {\n \"$date\": \"2022-09-01T00:00:00.000Z\"\n }\n },\n \"createdAt\": {\n \"$date\": \"2022-09-22T12:52:38.307Z\"\n },\n \"metrics\": {\n \"metric1\": {\n \"value\": {\n \"$numberLong\": \"1\"\n }\n },\n \"metric2\": {\n \"value\": {\n \"$numberLong\": \"1\"\n }\n },\n\t\t...\n\t\t\"metricN\": {\n \"value\": {\n \"$numberLong\": \"1\"\n }\n }\n\t}\n}\n[{\n \"$match\": {\n \"_id.site\": \"111\",\n \"_id.granularity\": \"DAILY\"\n }\n }, {\n \"$group\": {\n \"_id\": {\n \"path\": \"$_id.path\",\n \"period\": \"$_id.period\"\n },\n \"metric1\": {\n \"$sum\": \"$metrics.metric1.value\"\n },\n \"metric2\": {\n \"$sum\": \"$metrics.metric2.value\"\n },\n\t\t\t.\n\t\t\t.\n\t\t\t.\n \"metricN\": {\n \"$sum\": \"$metrics.metricN.value\"\n }\n }\n }, {\n \"$project\": {\n \"fields\": \"$_id\",\n \"metrics\": {\n \"metric1.value\": \"$metric1\",\n \"metric2.value\": \"$metric2\",\n .\n\t\t\t\t.\n\t\t\t\t.\n \"metricN.value\": \"$metricN\"\n },\n \"metrics.calculatedField1.value\": {\n\t\t\t\t...\n },\n \"metrics.calculatedField2.value\": {\n\t\t\t\t...\n },\n \"metrics.calculatedFieldN.value\": {\n\t\t\t\t...\n }\n }\n }, {\n \"$sort\": {\n \"fields.period\": -1\n }\n }, {\n \"$facet\": {\n \"grandTotal\": [{\n \"$group\": {\n \"_id\": null,\n \"total\": {\n \"$sum\": 1\n },\n \"metric1\": {\n \"$sum\": \"$metrics.metric1.value\"\n },\n \"metric2\": {\n \"$sum\": \"$metrics.metric2.value\"\n },\n\t\t\t\t\t\t.\n\t\t\t\t\t\t.\n\t\t\t\t\t\t.\n \"metricN\": {\n \"$sum\": \"$metrics.metricN.value\"\n }\n }\n }, {\n \"$project\": {\n \"total\": 1,\n \"totalMetrics\": {\n \"metric1.value\": \"$metric1\",\n \"metric2.value\": \"$metric2\",\n\t\t\t\t\t\t\t.\n\t\t\t\t\t\t\t.\n\t\t\t\t\t\t\t.\n \"metricN.value\": \"$metricN\"\n },\n \"totalMetrics.calculatedField1.value\": {\n\t\t\t\t\t\t\t...\n },\n \"totalMetrics.calculatedField2.value\": {\n\t\t\t\t\t\t\t...\n },\n \"totalMetrics.calculatedFieldN.value\": {\n\t\t\t\t\t\t\t...\n }\n }\n }\n ],\n \"paginatedData\": [{\n \"$skip\": 0\n }, {\n \"$limit\": 40\n }\n ]\n }\n }\n]\n",
"text": "Hi, I’m trying to understand why my aggregations’ execution time increased from 1s to more than 30s after increasing the number of documents.\nBackground… I have only one collection in my db, which aggregates daily metrics.Schema:My aggregation:I’m running this aggregation in a M50 cluster tier and I have 30 milions documents in my collection.\nAlso, I have a compound index on all key attributes and avg document size is 1kb.\nMy aggregation should be very flexible. It should be possible to group by any key attribute and filter by any field.\nWhen I goup by PERIOD and PATH for a specific site, for example, according to explain, FETCH stage uses IXSCAN and returns 416151 documents, meaning it uses 416MB for this aggregation pipeline stage, which exceeds 100MB and uses disk (allowDiskUse is true). This aggregation takes 51s to execute, even paginated.Question: How to improve this aggregation performance to reach max 5 secs execution time? Should I increase aggregation pipeline stage memory? How? Is this schema good enough for this query?",
"username": "Bruno_Carreira"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Ideas to fit complex aggregation in acceptable runtime | 2022-12-02T11:59:44.828Z | Ideas to fit complex aggregation in acceptable runtime | 844 |
null | [
"aggregation",
"queries",
"node-js"
]
| [
{
"code": "interface Category {\n _id: string;\n name: string;\n parentId: string;\n}\ninterface Product {\n _id: string;\n name: string;\n categoryId: string;\n\n // ... other irrelevant fields ...\n}\ncategoryId_id$graphLookupdb.product_categories.aggregate([\n { $match: { _id: \"<parent-category-id-here>\" } },\n {\n $graphLookup: {\n as: \"children\",\n startWith: \"$_id\",\n connectFromField: \"_id\",\n connectToField: \"parentId\",\n from: \"product_categories\",\n },\n },\n { $unwind: \"$children\" },\n { $project: { _id: 1, childId: \"$children._id\" } },\n]);\n$matchdb.products.aggregate([\n {\n $match: {\n categoryId: {\n $in: [\n \"<parent-category-id>\",\n \"<sub-category-ids-from-the-$graphLookup-stage>\",\n ],\n },\n },\n },\n // ... other stages\n]);\n",
"text": "I have two collections with the following schema:This collection is hierarchical.I want to filter products by categoryId. For example: if I’m searching\nwith the _id of Components category then the query should also find\nproducts from sub-categories like “CPU”, “GPU” etc.I’ve managed to come up with a query to find sub-categories recursively with the\n$graphLookup stage.But, I don’t know how to use it in the $match stage of the following query.I know that, I can just fetch the subcategories before running second query but\nI’m trying to do it all in one go. Is it possible?Please give me some hint about how can I proceed further from this step. Is\nthere a better alternative solution to this problem?Thanks in advance .",
"username": "h-sifat"
},
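One way to combine the two queries from the question above into a single pipeline (a sketch only, not from the thread; it assumes the collection names and fields shown above): start from product_categories, collect the descendant ids with $graphLookup, then join the matching products.

```js
db.product_categories.aggregate([
  { $match: { _id: "<parent-category-id>" } },
  {
    $graphLookup: {
      from: "product_categories",
      startWith: "$_id",
      connectFromField: "_id",
      connectToField: "parentId",
      as: "children",
    },
  },
  // Collect the parent id plus every descendant id into one array
  { $project: { categoryIds: { $concatArrays: [["$_id"], "$children._id"] } } },
  // Join all products whose categoryId matches any id in that array
  {
    $lookup: {
      from: "products",
      localField: "categoryIds",
      foreignField: "categoryId",
      as: "products",
    },
  },
  { $unwind: "$products" },
  { $replaceRoot: { newRoot: "$products" } },
]);
```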
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to use the result of $graphLookup in $match stage | 2022-12-02T09:44:54.492Z | How to use the result of $graphLookup in $match stage | 1,195 |
null | [
"queries",
"crud",
"mongodb-shell"
]
| [
{
"code": "[\n {\n \"_id\": \"123\",\n \"Config\": \"Some text and then Old substring config for 123\"\n },\n {\n \"_id\": \"456\",\n \"Config\": \"Some more text for another config Old substring config for 456\"\n }\n]\n[\n {\n \"_id\": \"123\",\n \"Config\": \"Some text and then New substring config for 123\"\n },\n {\n \"_id\": \"456\",\n \"Config\": \"Some more text for another config New substring config for 456\"\n }\n]\nConfigdb.MyConfigs.find().forEach(function(doc){ \n var newConfig = doc.Config.replace('Old substring', 'New substring'); \n db.MyConfigs.updateOne(\n {\"_id\", doc._id},\n {$set: { \"Config\": newConfig } }\n ); \n});\nError: clone(t={}){const r=t.loc||{};return e({loc:new Position(\"line\"in r?r.line:this.loc.line,\"column\"in r?r.column:...<omitted>...)} could not be cloned.\n at Object.serialize (node:v8:332:7)\n at u (<Path>\\AppData\\Local\\MongoDBCompass\\app-1.34.1\\resources\\app.asar.unpacked\\node_modules\\@mongosh\\node-runtime-worker-thread\\dist\\worker-runtime.js:1917:583764)\n at postMessage (<Path>\\AppData\\Local\\MongoDBCompass\\app-1.34.1\\resources\\app.asar.unpacked\\node_modules\\@mongosh\\node-runtime-worker-thread\\dist\\worker-runtime.js:1917:584372)\n at i (<Path>\\AppData\\Local\\MongoDBCompass\\app-1.34.1\\resources\\app.asar.unpacked\\node_modules\\@mongosh\\node-runtime-worker-thread\\dist\\worker-runtime.js:1917:5\n3.6.8",
"text": "This is a sample of my document structure:Expected outcome after update:I’m trying to replace a substring in the Config property for the whole collection using below code.I’m getting below error when I execute above code in MONGOSH.MondoDB Version: 3.6.8Can anyone help with the issue here?Thanks.",
"username": "Stak"
},
{
"code": "",
"text": "Hi @Stak and welcome in the MongoDB Community !I’ll start with a version compatibility issue: https://www.mongodb.com/docs/mongodb-shell/connect/#supported-mongodb-versions.Looks like Mongosh is for MDB 4.0 and above. Can you try with the old “mongo shell” and see if you have the same problem?That being said, for this use case, I would use a bulk write operation to send the updateOne operations in a single bulk instead of firing them one by one. If you really have a lot, I would send them 1000 by 1000 for a first try.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
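A minimal sketch of the bulk approach suggested above, reusing the collection and replacement from the original script; the batch size of 1000 and the variable names are illustrative only.

```js
const BATCH_SIZE = 1000;
let ops = [];

db.MyConfigs.find().forEach(function (doc) {
  ops.push({
    updateOne: {
      filter: { _id: doc._id },
      update: { $set: { Config: doc.Config.replace('Old substring', 'New substring') } },
    },
  });
  // One round trip per batch instead of one updateOne per document
  if (ops.length === BATCH_SIZE) {
    db.MyConfigs.bulkWrite(ops);
    ops = [];
  }
});

// Flush the last partial batch
if (ops.length > 0) {
  db.MyConfigs.bulkWrite(ops);
}
```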
{
"code": "db.MyConfigs.find().forEach(function(doc){\n var newConfig = doc.Config.replace('Old substring', 'New substring'); \n db.MyConfigs.updateOne(\n {\"_id\": doc._id},\n {$set: { \"Config\": newConfig } }\n ); \n});\n",
"text": "I just tested your script and it’s working fine for me on MongoDB 6.0.3 and Mongosh 1.6.0 except that I just had to correct the typo in the filter (\":\" instead of “,”).",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks for the response. Regarding the typo, yes it’s just a typo in the question, query was executed with proper syntax.I guess it has to do with MongoDB version. Since I don’t have access to a later version, i’ll try this again in a later date.Once again, thanks for taking the time to answer and verify that it does work in later version.",
"username": "Stak"
}
]
| MONGOSH updateOne gives error "could not be cloned" | 2022-12-01T04:15:49.328Z | MONGOSH updateOne gives error “could not be cloned” | 15,441 |
null | [
"replication"
]
| [
{
"code": "",
"text": "Hello! i have 3 node mongodb replica set ( Master > Secondary > Secondary )and after i turn off 2 Secondaries ( systemctl stop mongod ) my Master after 10 seconds becomes a secondary! why this happend?",
"username": "tabi_jonsan"
},
{
"code": "",
"text": "if a single node survives a system failure, it will not allow writes anymore and become read-only.if there are 2 nodes surviving, they will not allow writes either because they fail to determine which would become a new primary. that is unless you gave them a priority.think of this situation: your 3 member set somehow loses connection to the primary. now you will have 2 chunks of servers; 1 is alone, and the other 2. both chunks would think others had just died but they are well alive. what would happen if both chunks continue serving as primaries?to protect chaos, this is the default behavior: they will serve read-only until other members come and voting for a primary completes.",
"username": "Yilmaz_Durmaz"
},
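For reference, the state described above can be checked from mongosh with the standard shell helpers (nothing here is specific to this particular deployment):

```js
// List each member and the state it reports (PRIMARY, SECONDARY, ...)
rs.status().members.forEach(m => print(m.name, m.stateStr, m.health));

// Ask the node you are connected to whether it currently accepts writes;
// this stays false while no primary can be elected
db.hello().isWritablePrimary;
```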
{
"code": "",
"text": "Tessekur ederim! but what can i do if i lost this 2 servers? how can i make my remaining server working? i think i should delete this 2 died members from Replica set, but my db is secondary and it is Read only ",
"username": "tabi_jonsan"
},
{
"code": "ps aux | grep mongod # gets its pid\nkill pid # there is possibly shorter ways, what what you know is faster :)\nuse local\ndb.dropDatabase()\nuse admin\ndb.shutdownServer()\n",
"text": "permanent or temporary? I will assume they lost to you forever else be careful.it is usually about maintenance steps or just removing members.\nRemove Members from Replica Set — MongoDB Manual\nPerform Maintenance on Replica Set Members — MongoDB ManualBut since you current instance work in secondary mode, you cannot “db.shutdown()” it, nor “rs.remove” works because not authrized. you have to kill this instance first.then as described in maintanence document, comment out or edit some lines so you can run it again without replica settings. (replica set name and port are most important).then connect on this new port, and remove “local” database (browse it if you want and see if there is another way). this is the fastest to remove from a replica set.now edit config again, change port and remove replica set related lines if you want, then just restart the server.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Cok sagol abi! it works!",
"username": "tabi_jonsan"
},
{
"code": "",
"text": "Afiyet olsun \nGood luck!",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Master becomes secondary after stop all replicas | 2022-11-29T10:47:51.272Z | Master becomes secondary after stop all replicas | 3,034 |
null | [
"mongodb-shell",
"server"
]
| [
{
"code": "Try re-running the command as root for richer errors.\nError: Failure while executing; `/bin/launchctl bootstrap gui/501/Users/schub/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist` exited with 5.\nCurrent Mongosh Log ID: 638619ca4e5f303a7c13367b\nConnecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.0\nMongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.536+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.539+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.540+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.544+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.544+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.544+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.544+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.544+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":30902,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"MacJS.local\"}}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.544+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23352, \"ctx\":\"initandlisten\",\"msg\":\"Unable to resolve sysctl {sysctlName} (number) \",\"attr\":{\"sysctlName\":\"hw.cpufrequency\"}}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.544+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23351, \"ctx\":\"initandlisten\",\"msg\":\"{sysctlName} unavailable\",\"attr\":{\"sysctlName\":\"machdep.cpu.features\"}}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.544+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"aarch64\",\"target_arch\":\"aarch64\"}}}}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.544+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS 
X\",\"version\":\"22.1.0\"}}}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.544+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.545+01:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Permission denied\"}}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.545+01:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":1120}}\n{\"t\":{\"$date\":\"2022-11-29T15:41:01.545+01:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\nName Status User File\nmongodb-community error 3584 schub ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\n",
"text": "Hi, I tried to install [email protected] via Brew, knowing that a previous version 5.0 was already installed. I followed the official procedure here https://www.mongodb.com/docs/v6.0/installation/.I get the following message when starting the service:Bootstrap failed: 5: Input/output errorI get this message when running mongoshAnd here are the errors raised by launching mongodAnd this after this commandebrew services restart mongodb/brew/mongodb-communityTried everything I found on Google, StackOverflow and here, tried uninstalling and reinstalling but still getting the same errors.thank you in advance for your help",
"username": "Jean-Sebastien_Lasne"
},
{
"code": "MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017\"Failed to unlink socket file\"mongod --version",
"text": "I am not a Mac user, but I can interpret few of those errors.MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017When running locally, this error comes up because there is no service listening on that port. This means mongod failed to start. you need to inspect the cause, fix it, and then restart mongod again.\"Failed to unlink socket file\" … “/tmp/mongodb-27017.sock” … “Permission denied”This error mostly arises because either there is an instance of server running on that port or an instance was not shutdown properly, hence the file is locked. The remaining part also tell that root access is needed.\nFirst, check if the old version is still there and running. then stop it.\nIf there is no running server, even trying to run the server as root will fail. because the existence of this file causes mongod to think a server is already running. removing this lock file mostly solves the issue.also, check if the new server is installed fine by this command: mongod --version.",
"username": "Yilmaz_Durmaz"
},
{
"code": "db version v6.0.1\nBuild Info: {\n \"version\": \"6.0.1\",\n \"gitVersion\": \"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\n \"modules\": [],\n \"allocator\": \"system\",\n \"environment\": {\n \"distarch\": \"aarch64\",\n \"target_arch\": \"aarch64\"\n }\n}\nCould not kill process on port 27017. No process running on port.",
"text": "Thank you for your reply.\nThe output of mongod --version is this:I used kill-port to kill the process on port 27017 and this is the result:Could not kill process on port 27017. No process running on port.And by the way I don’t know which file is locked and the file /tmp/mongodb-27017.sock does not exist on my mac.",
"username": "Jean-Sebastien_Lasne"
},
{
"code": "\"Unable to resolve sysctl {sysctlName} (number) \"\n\"{sysctlName} unavailable\"\n",
"text": "the file /tmp/mongodb-27017.sock does not exist on my macthis complicates things because that is in the last 3 lines of the logs. I don’t know how Mac’s root privileges work. did you try checking that file as root?Another one is a few lines above that, but I don’t know what that means in a Mac.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "thank you very much, but I’m new to macOS and I don’t know how to do this. I will find out how to do it.",
"username": "Jean-Sebastien_Lasne"
},
{
"code": "sudosudo rm /tmp/mongodb-27017.sock",
"text": "Enter administrator commands in Terminal on Mac - Apple Support (IE)\nsudo seems to do it.try sudo rm /tmp/mongodb-27017.sockalso check this one and see if it is useful . macos - Failed to unlink socket file /tmp/mongodb-27017.sock errno:13 Permission denied - Stack Overflow",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thanks for these informations. I tried this command but nothing happens. I also tried StackOverflow suggestions but that doesn’t work either. And the files that are mentioned in the logs are no longer available.",
"username": "Jean-Sebastien_Lasne"
},
{
"code": "",
"text": "I am sorry to hear that. I can interpret error messages as they are almost the same in all operating systems. I just can’t confirm what happens on macos.\nI have found the following topic with pretty much the same problem. the difference I can see is how he tries to start the service. It might be related so check it out if you haven’t\nHELP: Brew [email protected] error [MacOS] - Ops and Admin / Installation & Upgrades - MongoDB Developer Community Forumsit again has this socket file problem and solves it. check the commands they used.If you still fail to have a solution, you may tag “Stennie” from that post for a look at it here. to tag someone, start typing “@” symbol and the name. I hope he has an idea.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "The permission related issues occur if you had run mongod as root\nSo check permissions on dbpath & logpath folder\nls -lrt /tmp/mongod-27017.sock\nIf you have multiple versions you have to start brew service giving the version number but your command does not append version number\nAlso from logs i see it is using /data/db which is default dbpath location.On Macos access to root folders is removed.If this is causing issues you have to give different dirpath\nTill your default mongod on port 27017 issue is resolved you can always spinup your own mongod on a different port,dbpath,logpath by giving some port like 28000 and your home directory\nEx:\nMongod --port 28000 --dbpath homedir --logpath homedir/mongod.log --fork",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thank you for your answers but I still face the same errors. And Ramachandra_Tummala,@Stennie_X: do you have an idea to solve my problem?Thank you in advance for your assistance",
"username": "Jean-Sebastien_Lasne"
}
]
| Connect ECONNREFUSED after Mongod community 6.0 installation | 2022-11-29T14:46:55.283Z | Connect ECONNREFUSED after Mongod community 6.0 installation | 3,165 |
null | []
| [
{
"code": "",
"text": "Hi Lauren,\nI am creating a framework to test mongoDB aggregation data with nodeJS and jest, could you please recommend me maybe course or videos to create this framework.Thank you,\nNatalia Merkulova\nhttps://www.linkedin.com/in/natalia-merkulova/",
"username": "Natalia_Merkulova"
},
{
"code": "",
"text": "Hello @Natalia_Merkulova ,I am not clear with your requirement but I am sure you will find your search in below links,University coursesDiscover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.General Search in All Resources:Developer Articles & Topics:Code, content, tutorials, programs and community to enable developers of all skill levels on the MongoDB Data Platform. Join or follow us here to learn more!Blogs:\nhttps://www.mongodb.com/blog/search/nodejsCommunity Forum Topics:\nhttps://www.mongodb.com/community/forums/search?q=nodejs",
"username": "turivishal"
},
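For what it's worth, a minimal shape such a test could take with Jest and the MongoDB Node.js driver; the URI environment variable, database, collection, pipeline and expectation below are all placeholders to adapt, not a prescribed setup.

```js
const { MongoClient } = require("mongodb");

let client;

beforeAll(async () => {
  // Placeholder connection string; a local or in-memory server also works
  client = await MongoClient.connect(process.env.MONGODB_URI);
});

afterAll(async () => {
  await client.close();
});

test("aggregation returns totals per user", async () => {
  const coll = client.db("testdb").collection("orders"); // placeholder names

  const result = await coll
    .aggregate([
      { $group: { _id: "$userId", total: { $sum: "$amount" } } },
      { $sort: { total: -1 } },
    ])
    .toArray();

  // Replace with assertions against known seed data
  expect(result.length).toBeGreaterThan(0);
});
```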
{
"code": "",
"text": "@turivishal Thank you for the help!",
"username": "Natalia_Merkulova"
}
]
| Recommendations for framework to test mongoDB aggregation with nodeJS and jest | 2022-11-29T06:34:38.159Z | Recommendations for framework to test mongoDB aggregation with nodeJS and jest | 1,947 |
[]
| [
{
"code": "",
"text": "This is the screenshot\n\nimage1681×727 107 KB\n",
"username": "Tuan_Anh_Le"
},
{
"code": "",
"text": "Hi @Tuan_Anh_Le,Thank you for sharing the screenshot. We are working with the concerned team to address this. We will keep you updated.Feel free to reach out to us, if you have any questions, concerns, or feedback.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Incorrect Option for Associate Dev Java Practice Exam | 2022-11-30T09:00:19.883Z | Incorrect Option for Associate Dev Java Practice Exam | 1,888 |
|
null | []
| [
{
"code": "",
"text": "Hi, I was late to join Examily for certificate testing. Can you help me ?",
"username": "Tuan_Anh_Le"
},
{
"code": "",
"text": "Hi @Tuan_Anh_Le,I think our certification team has helped you with your query. If you face any further technical issues while appearing for the exam, feel free to reach out to [email protected],\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Unable to attempt the exam due to late arrival | 2022-12-01T14:33:11.506Z | Unable to attempt the exam due to late arrival | 1,551 |
[]
| [
{
"code": "",
"text": "\nScreenshot from 2022-11-28 19-00-071198×391 33.2 KB\n\n_id 1,2,3 have already existed, we can insert document with _id 4 and 5, why is it incorrect ?",
"username": "Tuan_Anh_Le"
},
{
"code": "",
"text": "deleteOne is the same … what wrong ?\n\ndeleteOne1198×510 68.1 KB\n",
"username": "Tuan_Anh_Le"
},
{
"code": "",
"text": "What are the correct answers for connection pooling advantages ?\n\nCP1198×278 20 KB\n",
"username": "Tuan_Anh_Le"
},
{
"code": "",
"text": "About deleteOne.Is that for mongosh, nodejs or python?In python, it is delete_one(), but mongosh and nodejs should be deleteOne. Is it possible that none of the answer is good. In mongosh db.scores.deleteOne() should work but in nodejs, the correct answer could be db.collection( “scores” ).deleteOne().In the question and the answer, is there a space between E. and M.? In the sample documents there is one but it is not clear for the question and answer. If there is none, the NO documents match the query.",
"username": "steevej"
},
{
"code": "",
"text": "About connection pooling.Your answer about deleteOne had a red square which I assume means a wrong answer. For connection pooling the squares around you answer is not red. So I assume your answer is good.",
"username": "steevej"
},
{
"code": "",
"text": "In java, it’s deleteOne, right ?",
"username": "Tuan_Anh_Le"
},
{
"code": "db.scores.deleteOne( new Document( \"name\" , \"E. M.\" ) ) ;\n",
"text": "The method is named deleteOne but the syntax would be wrong for JAVA. You would need at the minimum:It would be good for mongosh and maybe nodejs.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @Tuan_Anh_Le,Welcome to the MongoDB Community forums What are the correct answers for connection pooling advantages ?The above screenshot shows that your selected answer is marked with black, which indicates that one of the options selected is correct but another is incorrect.The problem statement, states that you have chosen two advantages, so I would encourage you to re-attempt the question with the correct options. To learn more about connection polling please refer to the documentation here.If you have any further questions or concerns, please don’t hesitate to contact us.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "Python language",
"text": "Hi @Tuan_Anh_Le,In my understanding, you are attempting the Python language practice question for the MongoDB DEV certification exam.deleteOne is the same … what wrong ?If yes then please refer to the official documentation of the python driver to learn more. I would encourage you to re-attempt the practice question using the correct option and share your feedback with us.If you have any further questions or concerns, please don’t hesitate to reach out to us.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "I agree with @Tuan_Anh_Le. He selected the right answer. Please review your practice exam.\nremoveOne is for bulk operations, delete_one and remove_one don’t even exist.",
"username": "Corentin_Rodrigo"
},
{
"code": "",
"text": "Hi @Corentin_Rodrigo,Welcome to the MongoDB Community forums Thanks for your feedback. Please check out Deleting Documents in Python Applications - MongoDB University and learn how you can delete a MongoDB document in a Python application.I’ll also encourage you to read the Python driver documentation to learn more about it.If you have any doubts, please feel free to reach out to us.Thanks,\nKushagra Kesav",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "You can see the image show Connection Pooling question, I’m learning Java Practice Question Course.",
"username": "Tuan_Anh_Le"
},
{
"code": "",
"text": "2 posts were split to a new topic: Incorrect Option for Associate Dev Java Practice Exam",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "2 posts were split to a new topic: Unable to attempt the exam due to late arrival",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Please check answers for practice exam | 2022-11-28T12:09:58.646Z | Please check answers for practice exam | 3,509 |
|
null | [
"queries",
"python"
]
| [
{
"code": "mycol.with_options(write_concern=WriteConcern(w = \"majority\")).insert_one({\n\t\t\t\t\"first_name\": user.first_name,\n\t\t\t\t\"last_name\": user.last_name,\n\t\t\t\t\"email\": user.email,\n\t\t\t\t\"hashed_password\": user.password,\n\t\t\t\t\"phone\": 123456789,\n\t\t\t\t\"preferences\": [],\n\t\t\t\t\"status\": 'private',\n\t\t\t\t\"uid\": \"None\",\n\t\t\t\t\"date_created\": datetime.datetime.now(),\n\t\t\t\t\"date_updated\": datetime.datetime.now(),\n\t\t\t\t\"is_admin\": False,\n\t\t\t\t\"user_created\": \" \",\n\t\t\t\t\"user_updated\": \" \",\n\t\t\t\t# \"address\": {\"physical\": {\"street\": address.street, \"street1\": address.street1, \"city\": address.city, \"state\": address.state, \"country\": address.country, \"postal_code\": address.postalCode}},\n\t\t\t\t\"to_delete\": False,\n\t\t\t\t# 'card':{},\n\t\t\t\t\"is_owner\": False\n\t\t\t})\n\t\t\treturn user\n\t\t\t# return {\"success\": True}\n\t\t \n\t\texcept Error :\n\t\t\traise Error(\"A user with the given email address already exists\")\ntry:\n\t\t\t#Updates the Physical address object in the user record\n\t\t\tupdatedUser = mycol.with_options(write_concern=WriteConcern(w=\"majority\")).update_one(\n\t\t\t\t{\"email\": email},\n\t\t\t\t{\"$set\": {\"address.physical.street\": address.street, \"address.physical.street1\": address.street1, \"address.physical.city\" : address.city, \"address.physical.state\": address.state, \"address.physical.country\": address.country, \"address.physical.postal_code\": address.postalCode}},\n\t\t\t)\n\n\t\t\t#returns user record\n\t\t\treturn updatedUser\n\n\t\t#if there isn't a user with the given email address, raises the belog flag\n\t\texcept Error:\n\t\t\traise Error(\"A user with the given email address already exists\")\nDB.updateAddress(user[\"email\"], address)\n\nupdatedUser = DB.getUser(user[\"email\"])\n\nprint(updatedUser)\n\nprint()\n\nprint(updatedUser[\"address\"][\"physical\"])\n{'_id': ObjectId('63835a2336377a211b729a4b'), 'first_name': 'David', 'last_name': 'Thomnpson', 'email': '[email protected]', 'hashed_password': 'RE$etme$200', 'phone': 123456789, 'preferences': [], 'status': 'private', 'uid': 'None', 'date_created': datetime.datetime(2022, 11, 27, 20, 37, 55, 605000), 'date_updated': datetime.datetime(2022, 11, 27, 20, 37, 55, 605000), 'is_admin': False, 'user_created': ' ', 'user_updated': ' ', 'to_delete': False, 'is_owner': False, 'address': {'physical': {'city': 'Singapoe', 'country': 'Singapoe', 'postal_code': 259811, 'state': 'Singapoe', 'street': 'Balmoral Rd2', 'street1': '15-01'}}}\n\n{'city': 'Singapoe', 'country': 'Singapoe', 'postal_code': 259811, 'state': 'Singapoe', 'street': 'Balmoral Rd2', 'street1': '15-01'}\n",
"text": "I am learning a ton when it comes to using MongoDB with Python, but I have one issue I need help on.I am new to Python and MongoDB and I am developing an e-commerce site for MSMEs to use. I can create the user record using the following query:I then update the record adding the address object as follows.Where I am struggling is the fact that in order for me to display the updated record with the address object in the record, I need to query the database again using the find_one query. This seems inefficient. I was trying to assign the result of the query to the variable updatedUser and return the updatedUser to the main program. When I do this I cannot access the address.physical field even though it is successfully updated.Originally I was just calling the query from main and trying to display it like follows.user = DB.updateAddress(user[“email”], address)This should return the updatedUser into the user variable. But when I try to print the user it doesn’t work. The only way I can access the fields is with the following code in Main.Running that code results in this:Is there a more efficient way to do this?",
"username": "David_Thompson"
},
{
"code": "find_one_and_update()return_document=ReturnDocument.AFTERfrom pymongo import ReturnDocument\ndb.example.find_one_and_update(\n {'_id': 'userid'},\n {'$inc': {'seq': 1}},\n return_document=ReturnDocument.AFTER)\n{'_id': 'userid', 'seq': 1}\n",
"text": "Hi @David_Thompson ,You logical thinking is correct the PyMongo driver has a method to perform both update and find and it is called find_one_and_update():https://pymongo.readthedocs.io/en/stable/api/pymongo/collection.html#pymongo.collection.Collection.find_one_and_updateAs you can see in the following example if we set the return_document=ReturnDocument.AFTER flag to “AFTER” we will get the document after the change:If you use a different driver look for the same method in the relevant driver docs.Let me know if that helps.Pavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_Duchovny ,Thanks!!! my question now is can I just find and update the create function? Basically take my createUser method and replace all the guts into the find_one_and_update? Can this query be used like an upsert?",
"username": "David_Thompson"
},
{
"code": "db.example.find_one_and_update(\n {'_id': 'userid'},\n {'$inc': {'seq': 1}},\n projection={'seq': True, '_id': False},\n upsert=True,\n return_document=ReturnDocument.AFTER)\n",
"text": "Hi @David_Thompson ,Yes it has an upsert flag :Ty\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_Duchovny ,\nI answered my own question after I sent the response. I read the link and decided to download the file as it has all the information that I need to start out with.Thanks for the link!!! I am finding out that my queries, while they work are not optimised for PyMongo… I need to look at your link and redo a lot of my queries to capitalise on the capabilities of this driver. Once again, thanks for the pointer.I looks like these queries resemble the aggregation pipeline…",
"username": "David_Thompson"
}
]
| Displaying updated record | 2022-11-28T03:36:47.095Z | Displaying updated record | 1,456 |
null | [
"aggregation",
"queries",
"mongodb-shell"
]
| [
{
"code": "Atlas atlas-8qnlsq-shard-0 [primary] customers> db.customers.find();\n[\n {\n _id: ObjectId(\"6387eed206cb28022d634cc4\"),\n userId: '500',\n userName: 'robert.mcnaught',\n firstName: 'Rab',\n lastName: 'McNaught',\n spend1: 10,\n spend2: 15\n },\n {\n _id: ObjectId(\"6387eef106cb28022d634cc5\"),\n userId: '501',\n userName: 'robert.mcnaught',\n firstName: 'Rab',\n lastName: 'McNaught',\n spend1: 10,\n spend2: 15\n }\n]\ndb.customers.find().limit(10).aggregate(\n {\n {$sort:{\"$project\":{\n \"spend1\":\"$spend1\",\n \"spend2\":\"$spend2\",\n “totalSum”:{“$add”:[“$spend1”,”$spend2”]},\n },-1});\n",
"text": "I am trying to write a single mongosh query on my customers database to:Here is a couple of the collections in the database:From googling and manuals, I have come up with the following query structure. I can’t find a similar example for a complex query like this online. I don’t fully understand the nesting of multiple operators. I am also not sure whether I need aggregate() and $sum together in the same query. My totalSum isn’t a field in my data, so I’m not sure if it is correct to use totalSum in this manner.General pointers on query structure would help me in figuring this out.",
"username": "Rab_McNaught"
},
{
"code": "",
"text": "If you havecustomerswith mongodb databases I strongly recommend you take some courses from https://university.mongodb.com/ otherwise you will struggle withgoogling and manualsYou seem to want the top 10 most spenders.Since you want by userNames, you will need a stage named $group.For totalSpends, you will need the $group accumulator named $sum.For descending order, you will need the stage $sort.For top 10, the stage is $limit.",
"username": "steevej"
}
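Putting those stages together, the pipeline could look something like this (a sketch based on the sample documents in the question, not a tested answer):

```js
db.customers.aggregate([
  {
    $group: {
      _id: "$userName",
      // Sum spend1 + spend2 across all documents for each userName
      totalSpend: { $sum: { $add: ["$spend1", "$spend2"] } },
    },
  },
  { $sort: { totalSpend: -1 } },
  { $limit: 10 },
]);
```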
]
| MongoDB - $sum, $limit and $sort in the same single query | 2022-12-01T18:25:38.871Z | MongoDB - $sum, $limit and $sort in the same single query | 1,284 |
null | [
"node-js"
]
| [
{
"code": "",
"text": "Hi folks, I am new with Mongo atlas I need help in connecting a Lambda function to mongo atlas without user credentials.",
"username": "Abass_S_Barry"
},
{
"code": "",
"text": "Hi @Abass_S_Barry - Welcome to the community.As per the Create a Database User for Your Cluster documentation:You must create a database user to access your cluster. For security purposes, Atlas requires clients to authenticate as MongoDB database users to access clusters.Regards,\nJason",
"username": "Jason_Tran"
}
]
| Connecting a lambda function to mongo atlas without user credentials | 2022-11-24T12:53:02.193Z | Connecting a lambda function to mongo atlas without user credentials | 1,086 |
null | [
"aggregation",
"queries",
"node-js",
"atlas-search"
]
| [
{
"code": "[\n {\n '$search': {\n 'index': 'company_index', \n 'compound': {\n 'should': [\n {\n 'autocomplete': {\n 'path': 'companyName', \n 'query': 'apple'\n }\n }, {\n 'embeddedDocument': {\n 'path': 'produces', \n 'operator': {\n 'compound': {\n 'must': [\n {\n 'autocomplete': {\n 'path': 'produces.name', \n 'query': 'apple', \n 'fuzzy': {\n 'maxEdits': 2, \n 'prefixLength': 3\n }\n }\n }\n ]\n }\n }\n }\n }\n ]\n }\n }\n }, {\n '$match': {\n 'status': 'VERIFIED'\n }\n }, {\n '$skip': 0\n }, {\n '$limit': 24\n }, {\n '$project': {\n 'companyName': 1, \n 'status': 1, \n 'score': {\n '$meta': 'searchScore'\n }\n }\n }\n]\n {\n 'equals': {\n 'path': 'produces.deleted', \n 'value': false, \n }\n",
"text": "Hey I have a problem with my aggregation search. following statement gives me the proper result. just additionally i need to specify not deleted elements. I need to specify that produces and companies are different collections.So when i add this statement into must it gives me no result, what am i doing wrong at this point ?",
"username": "Atakan_Yildirim"
},
{
"code": "\"produces.deleted\"falsemustembeddedDocuments{\n \"mappings\": {\n \"fields\": {\n \"companyName\": {\n \"type\": \"autocomplete\"\n },\n \"produces\": {\n \"dynamic\": false,\n \"fields\": {\n \"deleted\": {\n \"type\": \"boolean\"\n },\n \"name\": {\n \"type\": \"autocomplete\"\n }\n },\n \"type\": \"embeddedDocuments\"\n }\n }\n }\n}\napples> db.collection.find()\n[\n {\n _id: ObjectId(\"6387d384a44f9420fbc4e1e6\"),\n companyName: 'apple',\n produces: [ { name: 'apple', deleted: false } ],\n status: 'VERIFIED'\n },\n {\n _id: ObjectId(\"6387d384a44f9420fbc4e1e7\"),\n companyName: 'apple',\n produces: [ { name: 'delicious apples', deleted: true } ],\n status: 'VERIFIED'\n },\n {\n _id: ObjectId(\"6387d384a44f9420fbc4e1e8\"),\n companyName: 'apples are good',\n produces: [ { name: 'apple test', deleted: false } ],\n status: 'VERIFIED'\n }\n]\n\"produces.deleted\"falsemust$match$skip$limitvar pipeline = \n[\n {\n '$search': {\n index: 'company_index',\n compound: {\n should: [\n { autocomplete: { path: 'companyName', query: 'apple' } },\n {\n embeddedDocument: {\n path: 'produces',\n operator: {\n compound: {\n must: [\n {\n autocomplete: {\n path: 'produces.name',\n query: 'apple',\n fuzzy: { maxEdits: 2, prefixLength: 3 }\n }\n },\n {\n equals: { path: 'produces.deleted', value: false }\n }\n ]\n }\n }\n }\n }\n ]\n }\n }\n },\n {\n '$project': {\n companyName: 1,\n status: 1,\n 'produces.deleted': 1,\n 'produces.name': 1,\n score: { '$meta': 'searchScore' }\n }\n }\n]\napples> db.collection.aggregate(pipeline)\n[\n {\n _id: ObjectId(\"6387d384a44f9420fbc4e1e8\"),\n companyName: 'apples are good',\n produces: [ { name: 'apple test', deleted: false } ],\n status: 'VERIFIED',\n score: 2.1154866218566895\n },\n {\n _id: ObjectId(\"6387d384a44f9420fbc4e1e6\"),\n companyName: 'apple',\n produces: [ { name: 'apple', deleted: false } ],\n status: 'VERIFIED',\n score: 2.098456382751465\n },\n {\n _id: ObjectId(\"6387d384a44f9420fbc4e1e7\"),\n companyName: 'apple',\n produces: [ { name: 'delicious apples', deleted: true } ],\n status: 'VERIFIED',\n score: 0.09845632314682007\n }\n]\n",
"text": "Hi @Atakan_Yildirim,So when i add this statement into must it gives me no result, what am i doing wrong at this point ?Can you provide the following information:Please redact any personal or sensitive information before positing it here.In the meantime, I have tried to replicate a similar pipeline in my test environment for demonstration purposes using embeddedDocuments with the below searchindex definition:Sample documents:Slightly altered pipeline using \"produces.deleted\" search for false in the must clause (did not include the $match , $skip or $limit for brevity:Output from running above search pipeline:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hey Jason thank you for your brief example and information it helped a lot, now i will try your samples if they dont work i will send sample output for further support ",
"username": "Atakan_Yildirim"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"companyName\": [\n {\n \"analyzer\": \"lucene.standard\",\n \"type\": \"string\"\n },\n {\n \"foldDiacritics\": true,\n \"maxGrams\": 6,\n \"minGrams\": 2,\n \"tokenization\": \"edgeGram\",\n \"type\": \"autocomplete\"\n }\n ],\n \"produces\": {\n \"type\": \"embeddedDocuments\",\n \"fields\": {\n \"name\": [\n {\n \"analyzer\": \"lucene.standard\",\n \"type\": \"string\"\n },\n {\n \"foldDiacritics\": true,\n \"maxGrams\": 6,\n \"minGrams\": 2,\n \"tokenization\": \"edgeGram\",\n \"type\": \"autocomplete\"\n }\n ]\n }\n },\n \"status\": {\n \"type\": \"string\"\n }\n }\n }\n}\n",
"text": "Hey Jason you can see my mapping here i think its the issue thats why whenever i add produces.deleted to search query it returns nothing.So my following questiong how can i add produces.deleted to this mapping on mongodb ? Because they are not using this code so they must be handled mappings on mongodb side.",
"username": "Atakan_Yildirim"
},
{
"code": "",
"text": "Additionally can you tell me where can i update my index definition?",
"username": "Atakan_Yildirim"
},
{
"code": "\"deleted\"\"produces\"boolean\"deleted\"\"produces\"{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"companyName\": [\n {\n \"analyzer\": \"lucene.standard\",\n \"type\": \"string\"\n },\n {\n \"foldDiacritics\": true,\n \"maxGrams\": 6,\n \"minGrams\": 2,\n \"tokenization\": \"edgeGram\",\n \"type\": \"autocomplete\"\n }\n ],\n \"produces\": {\n \"fields\": {\n \"deleted\": {\n \"type\": \"boolean\"\n },\n \"name\": [\n {\n \"analyzer\": \"lucene.standard\",\n \"type\": \"string\"\n },\n {\n \"foldDiacritics\": true,\n \"maxGrams\": 6,\n \"minGrams\": 2,\n \"tokenization\": \"edgeGram\",\n \"type\": \"autocomplete\"\n }\n ]\n },\n \"type\": \"embeddedDocuments\"\n },\n \"status\": {\n \"type\": \"string\"\n }\n }\n }\n}\nembeddedDocumentsembedded",
"text": "I’m not entirely sure what your document structure is like but I will presume the \"deleted\" field exists within the objects inside of the \"produces\" array. Please correct me if I am wrong here.You can test the following index definition (This was based off the index definition you had posted here but with an added a boolean type index definition for the \"deleted\" field inside the \"produces\" array embedded documents):Additionally can you tell me where can i update my index definition?As of the time of this message you can’t use the Atlas UI Visual Index Builder to define fields of embeddedDocuments type so you will need to do so using the JSON editor in the Atlas UI.Also, as noted in the embeddedDocument documentation currently the Atlas Search embeddedDocuments index option, embeddedDocument operator, and embedded scoring option are in preview.Regards,\nJason",
"username": "Jason_Tran"
}
]
| Adding multiple clauses to EmbeddedDocuments | 2022-11-30T14:52:24.025Z | Adding multiple clauses to EmbeddedDocuments | 1,591 |
null | [
"replication",
"golang"
]
| [
{
"code": "cannot specify topology or server options with a deployment\n \n \tserverOpts = append(\n \t\tserverOpts,\n \t\ttopology.WithClock(func(*session.ClusterClock) *session.ClusterClock { return c.clock }),\n \t\ttopology.WithConnectionOptions(func(...topology.ConnectionOption) []topology.ConnectionOption { return connOpts }),\n \t)\n \tc.topologyOptions = append(topologyOpts, topology.WithServerOptions(\n \t\tfunc(...topology.ServerOption) []topology.ServerOption { return serverOpts },\n \t))\n \n \t// Deployment\n \tif opts.Deployment != nil {\n \t\t// topology options: WithSeedlist, WithURI, WithSRVServiceName and WithSRVMaxHosts\n \t\t// server options: WithClock and WithConnectionOptions\n \t\tif len(serverOpts) > 2 || len(topologyOpts) > 4 {\n \t\t\treturn errors.New(\"cannot specify topology or server options with a deployment\")\n \t\t}\n \t\tc.deployment = opts.Deployment\n \t}\n \n \treturn nil\n }\n \n configure",
"text": "When i convert-standalone-to-replica-set , use custom deployment in clientOptions when create mongo Client, I got such error like cannot specify topology or server options with a deployment , Below is the error link return in mongo-go-driver source code, the length topologyOpts slice is greater than four in my situation( opts.ReplicaSet is not nil) which result in the problemI have read the whole configure function in client.go source code, the minimun elements in topologyOpts is four, if clientOptions object have nonempty atrributes like Direct | ReplicaSet |ServerMonitor| ServerSelectionTimeout| LoadBalanced , topologyOpts elements will get increased 。 meanwhile, if you use custom deployment in clientOptions, you will encouter the error describe before, But if i comment the topologyOpts length compare in mongo-go-driver code, my program can still work。 I have searched the first commit of this line code in Change Client to depend on driver.Deployment · mongodb/mongo-go-driver@bb9530d · GitHub ,still could not get some helpful msgquestion:\nIs there any real use of the limitation of the topologyOpts length? i can not figure out,Or limitation of the topologyOpts length is not correct in the code?",
"username": "fleetingtimer_N_A"
},
{
"code": "DeploymentserverOptstopologyOptsDeploymentDeploymentDeployment// Deployment specifies a custom deployment to use for the new Client.\n//\n// Deprecated: This option is for internal use only and should not be set. It may be changed or removed in any\n// release.\nDeployment driver.Deployment\n",
"text": "Hey @fleetingtimer_N_A thanks for the question! What is your use case for defining a custom Deployment?The purpose of the serverOpts and topologyOpts length checks is to prevent specifying any non-default options, which won’t be honored if a custom Deployment is used. However, I strongly recommend against using a custom Deployment because you’re likely to encounter undefined behavior. The documentation of ClientOptions says that the Deployment configuration is not intended for use except for internal use cases (e.g. Go Driver tests):",
"username": "Matt_Dale"
},
{
"code": "",
"text": "Many thanks to you answer, we use custom deployment for support muliti tenant access mongodb sutiation。 I have already read the documentation of [ClientOptions] (options package - go.mongodb.org/mongo-driver/mongo/options - Go Packages) and knew that it is not recommend to use custom deployment,but in my sutiation,self custom deployment generated use the same method with inner deployement mongo-go-driver/client.go at v1.8.2 · mongodb/mongo-go-driver · GitHub, there should not cause any non-default options. when i use default mongo client generate method without using custom Deployment in mongodb replicaset, the topologyOpts length used for generate deployement is not limited ( may larger than 4 ) and it can work well",
"username": "fleetingtimer_N_A"
},
{
"code": "ClientOptionsClientOptions",
"text": "@fleetingtimer_N_A thanks for describing your use case more, it sounds like there is no way to accomplish what you’re trying to do without using a custom Deployment.If you’re not setting any server or topology options in your ClientOptions configuration, you shouldn’t be getting that error message. Can you share an example of your ClientOptions configuration so I can try to reproduce your issue?",
"username": "Matt_Dale"
},
{
"code": "",
"text": "Thanks , I use a test function as an example of my use case, you may get the test code at Go Playground - The Go Programming Language and put it at mongo-go-driver/client_test.go at v1.8.2 · mongodb/mongo-go-driver · GitHub file",
"username": "fleetingtimer_N_A"
},
{
"code": "",
"text": "@fleetingtimer_N_A we actually ended up removing the “no topology/server options with a custom deployment” check completely in Go Driver v1.11.0. Consider updating to that version of the Go Driver if you’re still encountering that error.",
"username": "Matt_Dale"
}
]
| Use custom deployment in ClientOptions return error | 2022-07-28T03:30:22.511Z | Use custom deployment in ClientOptions return error | 2,934 |
null | [
"queries",
"node-js",
"crud",
"next-js"
]
| [
{
"code": "try {\n // Inspiration for this version: https://www.mongodb.com/docs/drivers/node/v3.6/usage-examples/updateOne/\n const db = client.db(\"main\");\n // create a filter for a movie to update\n const filter = { _id: mongodb.ObjectID(updatedCompany.id) };\n\n // create a document that sets the plot of the movie\n const updateDoc = {\n $set: {\n name: updatedCompany.name,\n // slug: updatedCompany.slug,\n // size: updatedCompany.size,\n // bio: updatedCompany.bio,\n // location: updatedCompany.location,\n // image: updatedCompany.image,\n // website: updatedCompany.website,\n // industry: updatedCompany.industry,\n // userId: updatedCompany.userId,\n // email: updatedCompany.email,\n },\n };\n\n // this option instructs the method to create a document if no documents match the filter\n const options = { upsert: true };\n\n console.log(updatedCompany.id);\n\n const result = await db\n .collection(\"companies\")\n .updateOne(filter, updateDoc, options);\n\n console.log(result);\n\n // Not sure about that line:\n // newCompany.id = result.insertedId;\n } \nimport { MongoClient } from \"mongodb\";\n// import clientPromise from \"../../../lib/mongodb\";\n\n// ON GOING\n\nasync function handler(req, res) {\n if (req.method === \"PUT\") {\n const {\n id,\n name,\n bio,\n size,\n location,\n image,\n website,\n industry,\n userId,\n email,\n } = req.body;\n\n // | (bio.trim() === \"\")\n // BACKEND VALIDATION\n if (!name || name.trim() === \"\") {\n res.status(422).json({ message: \"Invalid input.\" });\n return;\n }\n\n function capitalize(word) {\n return word[0].toUpperCase() + word.slice(1).toLowerCase();\n }\n\n // Storing it in the database\n const updatedCompany = {\n id,\n name: capitalize(name),\n slug: name.toLowerCase().replace(/\\s+/g, \"\"),\n size,\n bio,\n location,\n image,\n website,\n industry,\n userId,\n email,\n };\n\n let client;\n\n console.log(updatedCompany);\n console.log(\"Test ligne 51\");\n console.log(updatedCompany.id);\n try {\n client = await MongoClient.connect(process.env.MONGODB_URI);\n } catch (error) {\n console.log(\"erreur 500 DB connection\");\n res.status(500).json({ message: \"Could not connect to database.\" });\n return;\n }\n\n const db = client.db(\"main\");\n\n try {\n // Inspiration for this version: https://www.mongodb.com/docs/drivers/node/v3.6/usage-examples/updateOne/\n // create a filter for a movie to update\n const filter = { _id: mongodb.ObjectID(updatedCompany.id) };\n\n // create a document that sets the plot of the movie\n const updateDoc = {\n $set: {\n name: updatedCompany.name,\n // slug: updatedCompany.slug,\n // size: updatedCompany.size,\n // bio: updatedCompany.bio,\n // location: updatedCompany.location,\n // image: updatedCompany.image,\n // website: updatedCompany.website,\n // industry: updatedCompany.industry,\n // userId: updatedCompany.userId,\n // email: updatedCompany.email,\n },\n };\n\n // this option instructs the method to create a document if no documents match the filter\n const options = { upsert: true };\n\n console.log(updatedCompany.id);\n\n const result = await db\n .collection(\"companies\")\n .updateOne(filter, updateDoc, options);\n\n console.log(result);\n\n // Not sure about that line:\n // newCompany.id = result.insertedId;\n } catch (error) {\n console.log(\"erreur 500 de storing\");\n client.close();\n res.status(500).json({ message: \"Storing message failed!\" });\n return;\n }\n\n client.close();\n\n res.status(201).json({ message: \"Sucessfuly stored company\" });\n }\n}\n\nexport default 
handler;\n\n",
"text": "Hello,Apologies if my bug is very entry-level, I have been a developer for less than 6 months!I am trying to perfom a PUT request through an API route on Next js (node js), however it keep failing and return a 500 error.I have tried to debug it while following MongoDb documentation about CRUD, but I cannot find the issue.It seems that the error comes from this part of the API route:Full api/companies/update root:Thanks in advance for your help!!",
"username": "Marving_Moreton"
},
{
"code": "const db = client.db(\"main\");",
"text": "it would be better helpfull if you also elevate errors’ messages and share them here.by the way, I am not sure if related, but you use this line twice, inside and outside try block: const db = client.db(\"main\");",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hello @Yilmaz_Durmaz, appreciate your quick response!There is the error message:\nScreenshot 2022-12-01 at 3.15.03 PM1906×121 29.2 KB\nYes I have tried to put the client.db inside the try to see if it changes something, it did not, I have tried both scenario in vain, I have removed the duplicate, thanks",
"username": "Marving_Moreton"
},
{
"code": " } catch (error) {\n console.log(\"internal error\");\n console.dir(error); // print on server console\n res.status(500).json({ message: \"Internal error\", error: error }); // not needed, only if you need in client side\n return;\n }\n",
"text": "you misunderstood me. show whole error message, preferred on the server side since you don’t send a useful full error message back to browser. edit like this and copy error from server console:PS: you have 2 locations sendin 500 error. adapt this to both.",
"username": "Yilmaz_Durmaz"
},
{
"code": "res.status(500).json({ message: \"Internal error\", error: error }); // not needed, only if you need in client sidconst filter = { _id: ObjectId(updatedCompany.id) };\n",
"text": "res.status(500).json({ message: \"Internal error\", error: error }); // not needed, only if you need in client sidGotcha, thanks, I have made the edit, which allowed me to unbug my code!\nIt confirmed what I have found in the meantime, the issue comes from wrongly importing ObjectId that blocked me from executing:Now it is good.\nThanks man ",
"username": "Marving_Moreton"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongodb Next Js - CRUD API Routes - Put request failing 500 error | 2022-12-01T18:33:08.342Z | Mongodb Next Js - CRUD API Routes - Put request failing 500 error | 3,454 |
null | [
"swift"
]
| [
{
"code": "",
"text": "I am currently using realm to develop a job board application. I am working on posting a new job, but I m having trouble updating my existing list using realm. If anyone could, I would really appreciate if you could review my code and help me out! https://github.com/Brandondhollins/Brandon-Hollins-Individualist/tree/PostJobRealm",
"username": "Brandon_Hollins"
},
{
"code": "",
"text": "Hi @Brandon_HollinsSounds like an interesting project but reviewing code of an entire project goes beyond the scope of what we can do here in the forums.I would suggest isolating the troublesome code down into an small example and posting here so we can take a look. Usually less then 30 lines is ideal and generally enough for us to understand the use case.",
"username": "Jay"
}
]
| I'm creating a job board using realm but I am having trouble adding a new job to my existing list | 2022-12-01T18:51:18.069Z | I’m creating a job board using realm but I am having trouble adding a new job to my existing list | 1,318 |
null | [
"cxx"
]
| [
{
"code": "01:50:38 -- Auto-configuring bsoncxx to use C++17 std library polyfills since C++17 is active and user didn't specify otherwise\n\n01:50:38 bsoncxx version: 3.7.0\n82>C:\\conan\\.conan\\data\\mongo-cxx-driver\\3.7.0\\nemtech\\stable\\package\\ba80356c2e9b1444ad652db7a781f16d357b0c49\\include\\bsoncxx\\v_noabi\\bsoncxx/stdx/make_unique.hpp(66,1): fatal error C1189: #error: \"Cannot find a valid polyfill for make_unique\" [C:\\tmp\\_build\\extensions\\mongo\\tests\\test\\tests.catapult.test.mongo.vcxproj]cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_INSTALL_PREFIX=C:\\Users\\wayon\\code\\symbol\\symbol\\client\\catapult\\_deps\\mongodb -DBOOST_ROOT=C:\\Users\\wayon\\code\\symbol\\symbol\\client\\catapult\\_deps\\boost -DCMAKE_CXX_STANDARD=17 -DBSONCXX_POLY_USE_BOOST=1 ..\n Creating library C:/Users/wayon/code/symbol/symbol/client/catapult/_deps/source/mongo-cxx-driver/_build/src/mongocxx/Debug/mongocxx-mocked.lib and object C:/Users/wayon/code/symbol/symbol/client/catapult/_deps/source/mongo-cxx-driver/_build/src/mongocxx/Debug/mong\n ocxx-mocked.exp\ncollection.obj : error LNK2019: unresolved external symbol \"void __cdecl boost::throw_exception(class std::exception const &)\" (?throw_exception@boost@@YAXAEBVexception@std@@@Z) referenced in function \"public: class mongocxx::v_noabi::result::bulk_write & __cdecl boost\n::optional<class mongocxx::v_noabi::result::bulk_write>::value(void)& \" (?value@?$optional@Vbulk_write@result@v_noabi@mongocxx@@@boost@@QEGAAAEAVbulk_write@result@v_noabi@mongocxx@@XZ) [C:\\Users\\wayon\\code\\symbol\\symbol\\client\\catapult\\_deps\\source\\mongo-cxx-driver\\_bu\nild\\src\\mongocxx\\mongocxx_mocked.vcxproj]\nchange_stream.cpp.obj : error LNK2001: unresolved external symbol \"void __cdecl boost::throw_exception(class std::exception const &)\" (?throw_exception@boost@@YAXAEBVexception@std@@@Z) [C:\\Users\\wayon\\code\\symbol\\symbol\\client\\catapult\\_deps\\source\\mongo-cxx-driver\\_bu\nild\\src\\mongocxx\\mongocxx_mocked.vcxproj]\nC:\\Users\\wayon\\code\\symbol\\symbol\\client\\catapult\\_deps\\source\\mongo-cxx-driver\\_build\\src\\mongocxx\\Debug\\mongocxx-mocked.dll : fatal error LNK1120: 1 unresolved externals [C:\\Users\\wayon\\code\\symbol\\symbol\\client\\catapult\\_deps\\source\\mongo-cxx-driver\\_build\\src\\mongo\ncxx\\mongocxx_mocked.vcxproj]\n",
"text": "Hey,\nI am trying to build mongo-cxx-driver on windows and hitting some issues. Has anyone been able to build the latest release(r3.7.0)? I didn’t have an issue with r3.6.7.The issue seems to be related with pollyfill but not sure what changed as yet.Usually build with polyfill disabled but it seems in that case bsoncxx automatically enables polyfill.cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_INSTALL_PREFIX=C:_deps\\mongodb -DCMAKE_CXX_FLAGS=’/Zc:__cplusplus’ -DCMAKE_CXX_STANDARD=17 …Which leads to this error82>C:\\conan\\.conan\\data\\mongo-cxx-driver\\3.7.0\\nemtech\\stable\\package\\ba80356c2e9b1444ad652db7a781f16d357b0c49\\include\\bsoncxx\\v_noabi\\bsoncxx/stdx/make_unique.hpp(66,1): fatal error C1189: #error: \"Cannot find a valid polyfill for make_unique\" [C:\\tmp\\_build\\extensions\\mongo\\tests\\test\\tests.catapult.test.mongo.vcxproj]Now I am trying to enable polyfill using boostThis leads to unresolve symbol for boost. ",
"username": "Wayon_Blair"
},
{
"code": "",
"text": "This is with Visual Studio 2019 and Visual Studio 2022",
"username": "Wayon_Blair"
},
{
"code": "cmake -G \"Visual Studio 17 2022\" -A x64 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS=\"/Zc:__cplusplus\" -DCMAKE_PREFIX_PATH=C:\\mongo-c-driver -DCMAKE_INSTALL_PREFIX=C:\\mongo-cxx-drivercmake --build . --target install",
"text": "I could successfully built mongo-cxx-driver 3.7.0 on Windows 11 with Visual Studio 2022.1. Setup Build\ncmake -G \"Visual Studio 17 2022\" -A x64 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS=\"/Zc:__cplusplus\" -DCMAKE_PREFIX_PATH=C:\\mongo-c-driver -DCMAKE_INSTALL_PREFIX=C:\\mongo-cxx-driver\n\nScreenshot 2022-11-26 at 1.49.11 PM2976×418 121 KB\n2. Execute Build\ncmake --build . --target installPlease make sure the /Zc:__cplusplus flag is correctly set.",
"username": "Rishabh_Bisht"
},
{
"code": "cmake -G \"Visual Studio 17 2022\" -A x64 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS=\"/Zc:__cplusplus\" -DCMAKE_PREFIX_PATH=C:\\mongo-c-driver -DCMAKE_INSTALL_PREFIX=C:\\mongo-cxx-driver ..\n-- Selecting Windows SDK version 10.0.22000.0 to target Windows 10.0.22621.\n-- The CXX compiler identification is MSVC 19.34.31933.0\n-- Detecting CXX compiler ABI info\n-- Detecting CXX compiler ABI info - done\n-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/bin/Hostx64/x64/cl.exe - skipped\n-- Detecting CXX compile features\n-- Detecting CXX compile features - done\n-- Found PythonInterp: C:/Python310/python.exe (found version \"3.10.3\")\nC:\\Users\\wayon\\code\\wayonb\\monorepo\\client\\catapult\\_deps\\source\\mongo-cxx-driver\\etc\\calc_release_version.py:29: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives\n from distutils.version import LooseVersion\n-- No build type selected, default is Release\n-- The C compiler identification is MSVC 19.34.31933.0\n-- Detecting C compiler ABI info\n-- Detecting C compiler ABI info - done\n-- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/bin/Hostx64/x64/cl.exe - skipped\n-- Detecting C compile features\n-- Detecting C compile features - done\n-- Auto-configuring bsoncxx to use C++17 std library polyfills since C++17 is active and user didn't specify otherwise\nbsoncxx version: 3.7.0\nfound libbson version 1.23.1\n-- Performing Test COMPILER_HAS_DEPRECATED_ATTR\n-- Performing Test COMPILER_HAS_DEPRECATED_ATTR - Failed\nmongocxx version: 3.7.0\nfound libmongoc version 1.23.1\n-- Performing Test CMAKE_HAVE_LIBC_PTHREAD\n-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed\n-- Looking for pthread_create in pthreads\n-- Looking for pthread_create in pthreads - not found\n-- Looking for pthread_create in pthread\n-- Looking for pthread_create in pthread - not found\n-- Found Threads: TRUE\n-- Build files generated for:\n-- build system: Visual Studio 17 2022\n-- instance: C:/Program Files/Microsoft Visual Studio/2022/Community\n-- instance: x64\n-- Configuring done\n-- Generating done\n-- Build files have been written to: C:/Users/wayon/code/wayonb/monorepo/client/catapult/_deps/source/mongo-cxx-driver/_build\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(305,1): warning C4530: C++ exception handler used, but unwind semantics are not enabled. 
Specify /EHsc [C:\\Users\\wayon\\code\\wayonb\\monorepo\\client\\catapu\nlt\\_deps\\source\\mongo-cxx-driver\\_build\\src\\bsoncxx\\test\\test_bson.vcxproj]\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(298,1): message : while compiling class template member function 'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_trai\nts<char>>::operator <<(unsigned int)' [C:\\Users\\wayon\\code\\wayonb\\monorepo\\client\\catapult\\_deps\\source\\mongo-cxx-driver\\_build\\src\\bsoncxx\\test\\test_bson.vcxproj]\nC:\\Users\\wayon\\code\\wayonb\\monorepo\\client\\catapult\\_deps\\source\\mongo-cxx-driver\\src\\bsoncxx/test_util/to_string.hh(53,57): message : see reference to function template instantiation 'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostrea\nm<char,std::char_traits<char>>::operator <<(unsigned int)' being compiled [C:\\Users\\wayon\\code\\wayonb\\monorepo\\client\\catapult\\_deps\\source\\mongo-cxx-driver\\_build\\src\\bsoncxx\\test\\test_bson.vcxproj]\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(671,75): message : see reference to class template instantiation 'std::basic_ostream<char,std::char_traits<char>>' being compiled [C:\\Users\\wayon\\code\\wa\nyonb\\monorepo\\client\\catapult\\_deps\\source\\mongo-cxx-driver\\_build\\src\\bsoncxx\\test\\test_bson.vcxproj]\n Generating Code...\nC:\\Users\\wayon\\code\\wayonb\\monorepo\\client\\catapult\\_deps\\source\\mongo-cxx-driver\\src\\bsoncxx\\test\\bson_builder.cpp(1705): fatal error C1001: Internal compiler error. [C:\\Users\\wayon\\code\\wayonb\\monorepo\\client\\catapult\\_deps\\source\\mongo-cxx-driver\\\n_build\\src\\bsoncxx\\test\\test_bson.vcxproj]\n (compiler file 'D:\\a\\_work\\1\\s\\src\\vctools\\Compiler\\Utc\\src\\p2\\main.c', line 224)\n To work around this problem, try simplifying or changing the program near the locations listed above.\n If possible please provide a repro here: https://developercommunity.visualstudio.com\n Please choose the Technical Support command on the Visual C++\n Help menu, or open the Technical Support help file for more information\n Building Custom Rule C:/Users/wayon/code/wayonb/monorepo/client/catapult/_deps/source/mongo-cxx-driver/src/mongocxx/test/CMakeLists.txt\n client_helpers.cpp\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(482,1): warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc [C:\\Users\\wayon\\code\\wayonb\\monorepo\\client\\catapu\nlt\\_deps\\source\\mongo-cxx-driver\\_build\\src\\mongocxx\\test\\test_client_side_encryption_specs.vcxproj]\n",
"text": "Thanks @Rishabh_Bisht for the quick reply.\nWhen I build with using your cmake command, tests are failing to build with the error below. Can you tell me which VS development env are you using?",
"username": "Wayon_Blair"
},
{
"code": "",
"text": "Looking at the last logs it would seem like the build succeed. The only reason checked for an error cause cmake exited with error code of 1 and my script will stop on error.",
"username": "Wayon_Blair"
},
{
"code": "",
"text": "An internal compiler error is an issue with the compiler, not sure what we can do here. But I’d be happy to raise a ticket with our engineering team to get this investigated more thoroughly.\nIn the meantime, try commenting the test that’s causing this issue, unless you need the tests as well.I used VS 2022 Community as well.",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "Thanks @Rishabh_Bisht for raising a ticket.I am more curious of why you are not seeing this error. I have built with Windows 11 and Windows Server 2022 with both VS 2019 and VS 2022 and always get this error. Thinking I am missing something in my environment but not sure what.",
"username": "Wayon_Blair"
},
{
"code": "-- Auto-configuring bsoncxx to use C++17 std library polyfills since C++17 is active and user didn't specify otherwise\n -DCMAKE_CXX_FLAGS=\"/Zc:__cplusplus\"",
"text": "Also, do you know why if I set the -DCMAKE_CXX_FLAGS=\"/Zc:__cplusplus\" flag correctly, why is ployfill getting configured for bsoncxx? Is it required?",
"username": "Wayon_Blair"
},
{
"code": "Auto-configuring bsoncxx to use C++17 std library polyfills since C++17 is active and user didn't specify otherwise-DCMAKE_CXX_STANDARD=17\n \n MATH(EXPR BSONCXX_POLY_OPTIONS_SET \"${BSONCXX_POLY_OPTIONS_SET}+1\")\n endif()\n endforeach()\n \n if(BSONCXX_POLY_OPTIONS_SET GREATER 1)\n # You can't ask for more than one polyfill\n message(FATAL_ERROR \"Cannnot specify more than one bsoncxx polyfill choice\")\n elseif(BSONCXX_POLY_OPTIONS_SET EQUAL 0)\n # You can just not say, in which case we endeavor to pick a sane default:\n \n if(NOT CMAKE_CXX_STANDARD LESS 17)\n # If we are in C++17 mode, use the C++17 versions\n set(BSONCXX_POLY_USE_STD ON)\n message(STATUS \"Auto-configuring bsoncxx to use C++17 std library polyfills since C++17 is active and user didn't specify otherwise\")\n elseif(CMAKE_CXX_COMPILER_ID STREQUAL \"MSVC\")\n # Otherwise, since MSVC can't handle MNMLSTC, default to boost\n set(BSONCXX_POLY_USE_BOOST ON)\n message(STATUS \"Auto-configuring bsoncxx to use boost std library polyfills since C++17 is inactive and compiler is MSVC\")\n else()\n # Otherwise, we are on a platform that can handle MNMLSTC\n set(BSONCXX_POLY_USE_MNMLSTC ON)\n \n ",
"text": "Upon a deeper look, I found out I wasn’t building tests. I come across similar error as yours when building tests. This needs investigation.For polyfill question\nAuto-configuring bsoncxx to use C++17 std library polyfills since C++17 is active and user didn't specify otherwise means the C++17 standard library features are used to satisfy the C++17 polyfill. This is expected. It is enabled when -DCMAKE_CXX_STANDARD=17 flag is set.",
"username": "Rishabh_Bisht"
},
{
"code": "BSONCXX_POLY_USE_STDmake_unique// Copyright 2014 MongoDB Inc.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n// http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n#pragma once\n\n#include <bsoncxx/config/prelude.hpp>\n\n#if defined(BSONCXX_POLY_USE_MNMLSTC)\n\nbsoncxx/stdx/make_unique.hpp(66,1): fatal error C1189: #error: \"Cannot find a valid polyfill for make_unique\"\n \n BSONCXX_INLINE_NAMESPACE_BEGIN\n namespace stdx {\n \n using ::std::make_unique;\n \n } // namespace stdx\n BSONCXX_INLINE_NAMESPACE_END\n } // namespace bsoncxx\n \n #else\n #error \"Cannot find a valid polyfill for make_unique\"\n #endif\n \n #include <bsoncxx/config/postlude.hpp>\n \n ",
"text": "Thanks @Rishabh_Bisht, glad you were able to repo the issue when build tests. Pollyfill question\nYes, I saw the cmake file and BSONCXX_POLY_USE_STD is defined in this case.\nThe issue seems to be that for make_unique, this define is not used to include the STD header. maybe something is missing?When trying to build I am getting the error below\nbsoncxx/stdx/make_unique.hpp(66,1): fatal error C1189: #error: \"Cannot find a valid polyfill for make_unique\"",
"username": "Wayon_Blair"
},
{
"code": "",
"text": "Hey @Rishabh_Bisht, Do you have any information on the ployfill error?",
"username": "Wayon_Blair"
},
{
"code": "make_unique<memory>_cplusplus\n \n namespace stdx {\n \n using ::boost::make_unique;\n \n } // namespace stdx\n BSONCXX_INLINE_NAMESPACE_END\n } // namespace bsoncxx\n \n #elif __cplusplus >= 201402L || (defined(_MSVC_LANG) && _MSVC_LANG >= 201402L)\n \n #include <memory>\n \n namespace bsoncxx {\n BSONCXX_INLINE_NAMESPACE_BEGIN\n namespace stdx {\n \n using ::std::make_unique;\n \n } // namespace stdx\n BSONCXX_INLINE_NAMESPACE_END\n } // namespace bsoncxx\n \n ",
"text": "The make_unique should come as part of <memory> header once the _cplusplus flag is set correctly. I am not sure what’s going wrong here.Can you try to dump and check __cplusplus macro - if it’s set correctly or not?",
"username": "Rishabh_Bisht"
},
{
"code": "__cplusplus",
"text": "Thanks @Rishabh_Bisht. I missed that. For some reason, we have gotten the our code to build without setting the __cplusplus macro.",
"username": "Wayon_Blair"
},
{
"code": "__cplusplus",
"text": "@Rishabh_Bisht setting the __cplusplus macro worked with building our code.\nThanks for the help ",
"username": "Wayon_Blair"
},
{
"code": "",
"text": "Glad to hear that, @Wayon_Blair !",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Building mongo-cxx-driver 3.7.0 on Windows with MSVC | 2022-11-25T18:57:48.095Z | Building mongo-cxx-driver 3.7.0 on Windows with MSVC | 4,624 |
null | [
"performance"
]
| [
{
"code": "",
"text": "I have been trying to benchmark a query ( which is a single field-single value query ) on a collection that is 1.1 billion documents large, but is a single shard setup. My query is expected to return 2 million documents from the list of 1.1 billion in that collection. Further, the index is setup properly for this query. The cluster is a 3 node cluster behind the scenes.Upon doing an explain of the query, the following are the time spent in each of the stagesRun 1 ) IXSCAN : 2 seconds, FETCH 44 seconds, SINGLE_SHARD 15 seconds.\nRun 2) IXSCAN : 1.5 seconds, FETCH 13 seconds, SINGLE_SHARD 46 seconds.QuestionsI can understand why the SHARD_MERGE stage is needed in a multisharded collection and that SINGLE_SHARD is it’s equivalent for similar tasks - but why is that stage needed when there is only a single shard ? What really happens in the “SINGLE_SHARD” stage?Why does “SINGLE_SHARD” stage take so much time given FETCH has already done the job of retrieving physically the documents from the disk ?While I understand why FETCH takes a lot less time in the second run ( cached documents / loaded in Memory ), why does SINGLE_SHARD stage shoot up wrt the time it takes ?Is it possible to avoid SINGLE_SHARD stage on one shard setups / or atleast make them more performant ?",
"username": "kembhootha_k"
},
{
"code": "",
"text": "Bumping up the thread to elicit a reply.",
"username": "kembhootha_k"
},
{
"code": "",
"text": "Hi @kembhootha_k,I opened a ticket to understand why the stages in the explain output aren’t fully documented. It already frustrated me a few times as well.My best guess is that during this stage the data is transferred from the shard to the mongos and because this is probably a few hundred megabytes at this point (2 million docs), maybe you are saturating the network or the mongos itself?Some questions I have for you though:Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thank you for your time. Answers as belowYou are right. That many documents will not be sent to the end user. It would be used more for batch jobs or feed analytics type of workloads. Having said that, my questions / observations stem from a performance suite that is run to test the limits of my setup.Precisely the reason. I’m trying to test out the single shard setup against various scenarios to better formulate my sharding strategy for the future using data created as part of the suite.The 2 million dataset fits within less than 40% of the RAM on the node serving the data.Even if we momentarily keep aside what SINGLE_SHARD does, it’s weirder that it should take more time the second time around.",
"username": "kembhootha_k"
},
{
"code": "SINGLE_SHARDexplain460.02343,500explainSINGLE_SHARDFETCHSINGLE_SHARDexplainSINGLE_SHARDexplainPRIMARYexplain",
"text": "Hi @kembhootha_k - my name is Chris and I am one of Maxime’s coworkers here at MongoDB. Thanks again for your question.Broadly speaking, the takeaway from this comment is going to be that:The SINGLE_SHARD stage is unlikely to be meaningfully contributing to the duration noted in the explain output.Individual query latency is necessarily going to be higher on a single shard sharded cluster compared to a replica set by itself.I would also be curious about what your specific goals are. At a worst case (presumably cold cache) total time of 46 seconds, this implies that each document is being processed in 0.023 milliseconds or a processing rate of nearly 43,500 documents per second. Is there a more defined target that you are trying to hit, or are you just exploring what is possible with the current configuration?Would you be able to provide the full explain outputs for us to examine? It is difficult to provide specific answers or guidance about what may be happening in your environment given only a few duration metrics. When examined as a whole, explain output really helps tell a story (or acts as a map) about what is going on. Without the complete picture we may be missing important pieces.Even in the absence of the full output, we can still say a few things that are probably useful. I would expect the execution time reported by subsequent stages in the explain output to be cumulative and inclusive of their children stages. This implies a few interesting items:There may be a typo or mixup in the numbers mentioned in the original post. I don’t think it should be possible for the parent SINGLE_SHARD stage to report a smaller duration (15 seconds) than its child FETCH stage (44 seconds). Is it possible the times for the SINGLE_SHARD stage were transposed between the two runs, as the 46 seconds and 13 seconds from the opposite lines seem to match pretty closely?The total time for the explain operation should basically be the largest number (e.g. 46 seconds) as opposed to the sum of each duration reported (e.g. 60.5 = 1.5 + 13 + 46).The SINGLE_SHARD stage should not be responsible for doing much work. Given the assumptions above are correct (including the final number being swapped), the maximum time that could be attributable to this stage would be 3 seconds. Even that number could be inflated for other reasons. There is probably not much (or any) optimization which could really be done here.As a point of comparison, what is the total duration for the same explain operation when executed directly against the PRIMARY member of the underlying replica set for the shard? I would expect that the majority of the time (when using explain) will be dominated by the work being performed by the underlying shard, so the numbers will likely be similar.",
"username": "Christopher_Harris"
}
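A hedged sketch of the comparison suggested above (explain through mongos versus directly against the shard's PRIMARY). The namespace, filter, and the direct host:port are placeholders, not details from the original post:

```javascript
// 1) Through mongos: the reported time includes the mongos/SINGLE_SHARD layer.
const viaMongos = db.getSiblingDB("mydb").events
  .find({ deviceId: "abc" })          // placeholder filter
  .explain("executionStats");
print("via mongos:", viaMongos.executionStats.executionTimeMillis, "ms");

// 2) Directly against the shard's PRIMARY (hypothetical host:port),
//    to see how much of the total time is spent inside the shard itself.
const shardConn = new Mongo("shard0-primary.example.net:27018");
const direct = shardConn.getDB("mydb").events
  .find({ deviceId: "abc" })
  .explain("executionStats");
print("direct on shard:", direct.executionStats.executionTimeMillis, "ms");
```

If the two numbers are close, the SINGLE_SHARD/mongos layer is not where the time is going.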
]
| What happens really in the SINGLE_SHARD stage? | 2022-11-28T07:27:38.366Z | What happens really in the SINGLE_SHARD stage? | 2,387 |
null | [
"next-js"
]
| [
{
"code": "realm-webrealm",
"text": "Hi, I’m currently building a desktop application using electron and nextjs which I would to enable realm sync with it.\nWanted to know which package should I use between realm-web and just realm",
"username": "Tony_Ngomana"
},
{
"code": "",
"text": "Hi @Tony_Ngomana, hopefully this post will help get you started Realm Not Working with Packaged Electron App",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Hello, I’m having the same problem. I would like to know if you solved this dilema, and if so, how?",
"username": "Mariano_Moran"
}
]
| Electron and MongoDB Realm | 2021-06-03T23:05:25.233Z | Electron and MongoDB Realm | 3,853 |
null | [
"aggregation",
"queries",
"node-js"
]
| [
{
"code": " var student = express.Router();\n app.use('/student', student);\n student.get('/get/:sr_no/:title/:topic_title', function (req, res) \n {\n var v1,v2,v3;\n v1=req.params['sr_no']; \n v2=req.params['title']; \n v3=req.params['topic_title']; \n var cursor = dbo.collection(\"chepters\").find({\"sr_no\":v1}).toArray(function(err, result) \n {\n \n console.log(result);\n res.send(result);\n });\n\n\n })\n console.log(result);\n res.send(result);\n });\n",
"text": "We are trying to find the collection in mongodb using parameter like followingvar student = express.Router();\napp.use(’/student’, student);\nstudent.get(’/get/:sr_no/:title/:topic_title’, function (req, res)\n{\nvar v1,v2,v3;\nv1=req.params[‘sr_no’];\nv2=req.params[‘title’];\nv3=req.params[‘topic_title’];\nvar cursor = dbo.collection(“chepters”).find({“sr_no”:v1}).toArray(function(err, result)\n{})but the variable v1 is not wokring instead if we put static value 1 in place of v1 the query is working. How to pass value as variable.We want solution we are beginners in mongodb and node.jsHelp us to find the solution.",
"username": "Papu_Chopda"
},
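One common cause of "a literal 1 works but the route parameter doesn't" is that Express route parameters are strings while the stored field is a number. The sketch below is a possible fix under that assumption; the collection and field names come from the post, but the type of sr_no in the database is an assumption:

```javascript
// Hypothetical fix sketch: convert the string route parameter to a number
// before querying (assumes "sr_no" is stored as a number in MongoDB).
student.get('/get/:sr_no/:title/:topic_title', function (req, res) {
  var srNo = Number(req.params.sr_no);      // "1" -> 1
  dbo.collection("chepters")
    .find({ sr_no: srNo })                  // or { sr_no: req.params.sr_no } if stored as a string
    .toArray(function (err, result) {
      if (err) return res.status(500).send(err.message);
      res.send(result);
    });
});
```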
{
"code": "",
"text": "Hi @Papu_Chopda and welcome in the MongoDB Community !Check out my old repo on Github here.A few things changed I guess since 2016 but I had parameters so it’s probably about the same syntax.If you are starting with MongoDB and Node.js, I would recommend you checkout our tutorials on the DevHub or the M220JS course on MongoDB University. Everything is free.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
]
| We are trying to find the collection in mongodb using parameter like following | 2022-12-01T09:07:32.048Z | We are trying to find the collection in mongodb using parameter like following | 1,439 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "",
"text": "Hi Team,I am having a collection and a query is executing as follows and based on the query i had created an index and the the query is taking the correct index but from application it is much slow to display the results, Please help to create a perfect index to display results faster, Kindly help me in this matter.[{\"$match\":{\"$or\":[{“sId”:“D”,“dId”:“869247047809394”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“869247047809394”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“B”,“dId”:“356849088459441”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“356849088459441”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“D”,“dId”:“862818045326057”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“862818045326057”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“B”,“dId”:“862818045336478”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“862818045336478”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“D”,“dId”:“862818045447093”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“862818045447093”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“B”,“dId”:“865006041831992”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“865006041831992”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“D”,“dId”:“869247047838740”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“869247047838740”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“B”,“dId”:“862818041586670”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“862818041586670”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“D”,“dId”:“869247047837429”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“869247047837429”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“B”,“dId”:“862818045408509”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“862818045408509”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{
“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“869247047734188”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“869247047734188”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“862818045348358”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“862818045348358”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“869247047720997”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“869247047720997”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“869247047756165”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“869247047756165”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“869247047832438”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“869247047832438”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“D”,“dId”:“869247048595570”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“869247048595570”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“862818045336312”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“862818045336312”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“B”,“dId”:“869247048697434”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“869247048697434”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“D”,“dId”:“869247048554528”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“869247048554528”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“862818045409598”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“862818045409598”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“B”,“dId”:“356849088479381”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“356849088479381”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“t
mp”:{\"$lt\":50.0}}]},{“sId”:“D”,“dId”:“865006041911687”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“865006041911687”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“862818045441880”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“862818045441880”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“B”,“dId”:“356158069620207”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“356158069620207”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“865006041907974”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“865006041907974”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“35615869590361”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“35615869590361”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“865006041779514”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“865006041779514”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“D”,“dId”:“865006041855165”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“865006041855165”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“B”,“dId”:“869247047800187”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“869247047800187”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“862818045421122”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“862818045421122”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“869738067222435”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“869738067222435”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“869738067380522”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“869738067380522”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:
{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“355026070124025”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”,“dId”:“355026070124025”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":-30.0}},{“tmp”:{\"$lt\":50.0}}]},{“sId”:“B”,“dId”:“869247047788119”,“temperatureLogTime”:{\"$gt\":{\"$date\":“2022-12-01T08:46:59.874Z”}},\"$or\":[{“tmp”:{\"$gt\":2.0}},{“tmp”:{\"$lt\":8.0}}]},{“sId”:“D”}]}}]},“planSummary”:\"IXSCAN { sId: -1, dId: -1, temperatureLogTime: -1, tmp: -1 }, IXSCAN { sId: -1, dId: -1, temperatureLogTime: -1, tmp: -1 }, IXSCAN { sId: -1, dId: -1, temperatureLogTime: -1, tmp: -1 }, IXSCAN { sId: -1, dId: -1, temperatureLogTime: -1, tmp: -1 }, IXSCAN { sId: -1, dId: -1, temperatureLogTime: -1, tmp: -1 }, IXSCAN { sId: -1, dId: -1, temperatureLogTime: -1, tmp: -1 }, IXSCAN { sId: -1, dId: -1, temperatureLogTime: -1, tmp: -1 }, IXSCAN { sId: -1, dId: -1, temperatureLogTime: -1, tmp: -1 }, IXSCAN { sId: -1, dId: -1, temperatureLogTime: -1, tmp: -1 }, IXSCAN { sId: -1, dId: -1, temperatureLogTime: -1, tmp: -1 }, IXSCAN { sId: -1, dId: -1, temperatureLogTime: -1, tmp: -1 }, IXSCAN { sId: -1, dId: -1, temperatureLogTime: -1, tmp: -1 }Thanks&Regards,\nM. Ramesh.",
"username": "MERUGUPALA_RAMES"
},
{
"code": "[{\n \"$match\": {\n \"$or\": [{\n \"sId\": \"D\",\n \"dId\": \"869247047809394\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"869247047809394\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"356849088459441\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"356849088459441\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"862818045326057\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"862818045326057\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"862818045336478\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"862818045336478\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"862818045447093\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"862818045447093\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"865006041831992\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"865006041831992\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"869247047838740\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"869247047838740\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"862818041586670\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"862818041586670\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"869247047837429\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": 
\"869247047837429\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"862818045408509\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"862818045408509\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"869247047734188\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"869247047734188\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"862818045348358\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"862818045348358\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"869247047720997\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"869247047720997\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"869247047756165\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"869247047756165\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"869247047832438\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"869247047832438\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"869247048595570\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"869247048595570\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"862818045336312\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"862818045336312\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"869247048697434\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": 
\"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"869247048697434\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"869247048554528\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"869247048554528\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"862818045409598\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"862818045409598\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"356849088479381\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"356849088479381\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"865006041911687\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"865006041911687\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"862818045441880\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"862818045441880\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"356158069620207\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"356158069620207\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"865006041907974\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"865006041907974\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"35615869590361\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"35615869590361\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, 
{\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"865006041779514\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"865006041779514\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"865006041855165\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"865006041855165\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"869247047800187\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"869247047800187\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"862818045421122\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"862818045421122\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"869738067222435\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"869738067222435\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"869738067380522\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"869738067380522\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"355026070124025\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\n \"sId\": \"D\",\n \"dId\": \"355026070124025\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": -30.0}}, {\"tmp\": {\"$lt\": 50.0}}]\n }, {\n \"sId\": \"B\",\n \"dId\": \"869247047788119\",\n \"temperatureLogTime\": {\"$gt\": {\"$date\": \"2022-12-01T08:46:59.874Z\"}},\n \"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]\n }, {\"sId\": \"D\"}]\n }\n}]\n\"$or\": [{\"tmp\": {\"$gt\": 2.0}}, {\"tmp\": {\"$lt\": 8.0}}]{tmp: {\"$gt\": 2.0, \"$lt\": 8.0}}$or{\"sId\": \"D\"}{\"sId\": \"D\"} { sId: 1, dId: 1, temperatureLogTime: 1, tmp: 1 }sIddIddId",
"text": "Hi @MERUGUPALA_RAMES,I cleaned up your query so we could read it… Next time it would be nice to do it yourself before you post it.Before we talk about indexes, I see several problems with this query.If you explain what you are trying to do with this query and provide a few document samples and the expected output, maybe I could come up with a better way to achieve this query.Indexes: { sId: 1, dId: 1, temperatureLogTime: 1, tmp: 1 } isn’t bad but it might not be the best depending on the cardinality and data distribution of temperatureLogTime and tmp. We could also invert these 2 to gain some performances.\nSame for sId and dId. I would put the one with the greater cardinality first to eliminate docs faster and reduce the number of index entries that need to be accessed. From what I’m seeing maybe dId has a better cardinality.I hope this helps a bit.\nCheers,\nMaxime.",
"username": "MaBeuLux88"
}
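To make the reply above concrete, here is a hedged sketch of one clause of a rewritten query plus the suggested index; the collection name is a placeholder, the temperature bounds come from the original query, and combining each pair of $gt/$lt into a single range (and putting dId first in the index) are interpretations rather than confirmed requirements:

```javascript
// One clause per (sId, dId) pair, with both temperature bounds in ONE predicate
// (the original "$or": [{tmp: {$gt: ...}}, {tmp: {$lt: ...}}] matches every document).
db.temperatureLogs.find({
  $or: [
    {
      sId: "B",
      dId: "869247047809394",
      temperatureLogTime: { $gt: ISODate("2022-12-01T08:46:59.874Z") },
      tmp: { $gt: 2.0, $lt: 8.0 }
    },
    {
      sId: "D",
      dId: "869247047809394",
      temperatureLogTime: { $gt: ISODate("2022-12-01T08:46:59.874Z") },
      tmp: { $gt: -30.0, $lt: 50.0 }
    }
    // ...repeat for the other (sId, dId) pairs
  ]
});

// Supporting index, with the higher-cardinality equality field first
// (dId before sId here is an assumption based on the sample values).
db.temperatureLogs.createIndex({ dId: 1, sId: 1, temperatureLogTime: 1, tmp: 1 });
```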
]
| How to create a perferct index based on a query | 2022-12-01T12:26:08.263Z | How to create a perferct index based on a query | 948 |
null | [
"sharding"
]
| [
{
"code": "",
"text": "Hello team,we have a 3 node sharded cluster and we are trying to delete the unused data from the collections, but after the delete the disk space was not released back to the OS.can anyone provide me the details on running the compact command from a mongod instance? looks like running compact from mongos is not supported in 3.0.4",
"username": "personal_java"
},
{
"code": "",
"text": "Its going to block the database.\nYou need to run it on each member.The collection will reuse the space so if you’re adding data to it still, you may just want to leave it.And obigatory: 3.0.4 is old, really old.",
"username": "chris"
},
{
"code": "",
"text": "Hi @personal_java and welcome in the MongoDB Community !MongoDB is currently in version 6.0.3.V3.0.4 was born in March 2015 and reached end of life support in Feb 2018 according to MongoDB Legacy Support Policy.So my first suggestion would be to update this cluster to 6.0.3 ASAP and there are good chances that your problem could just go away with the upgrade.Are you still running on MMAPv1 or are you using WiredTiger?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "reg 3.0.4 yea, we just have this one cluster and others are migrated to Atlas. we are in process of moving this one too… soon!Do you have any specific commands that I need toexecute the compact command from mongod?Thanks",
"username": "personal_java"
},
{
"code": "",
"text": "Thank you so much for the reply\nwe are on wiredtiger",
"username": "personal_java"
},
{
"code": "db.runCommand ( { compact: 'mydb.mycollection', force: true } )",
"text": "I would go with db.runCommand ( { compact: 'mydb.mycollection', force: true } ). But yeah if you can move to Atlas, just forget about it and migrate. Problem solved.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "The command is as @MaBeuLux88 replied.But per the link in my previous reply MMAPv1 or wiredTiger will tield different results.MMAPv1 wont’t release storage back. You’ll need to perform an initial sync in that case.",
"username": "chris"
},
{
"code": "",
"text": "I tried running that from the mongos instance. but looks like it is not supported. How can we run it from a mongod instance ?",
"username": "personal_java"
},
{
"code": "",
"text": "Direct connect on each mongod, and each secondary too, its not replicated.Its on the page too: compact — MongoDB Manual",
"username": "chris"
},
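A hedged sketch of what "run it on each member" could look like from the shell; the host names are placeholders, and note that compact takes just the collection name, run against the database that owns it:

```javascript
// Repeat for every data-bearing member, one at a time
// (host names below are placeholders for your mongod instances).
var members = ["shard-host-1:27018", "shard-host-2:27018", "shard-host-3:27018"];

for (var i = 0; i < members.length; i++) {
  var conn = new Mongo(members[i]);
  // compact takes the collection name and is issued against the database that owns it
  var res = conn.getDB("mydb").runCommand({ compact: "mycollection", force: true });
  printjson({ host: members[i], result: res });
}
```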
{
"code": "MongoDB shell version: 3.0.4\nconnecting to: test\n2022-12-01T15:55:08.719+0000 W NETWORK Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused\n2022-12-01T15:55:08.720+0000 E QUERY Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed\n at connect (src/mongo/shell/mongo.js:179:14)\n at (connect):1:6 at src/mongo/shell/mongo.js:179\nexception: connect failed.\n\nHere is our set uo : \n1. mongos server\n /opt/mongo/bin/mongos --configdb m1.qa..com:27019,m2.qa..com:27019,m3.qa..com:27019\n\n2. 3 mongod servers \n /opt/mongo/bin/mongod --dbpath /mongodb/configdb --port 27019 --storageEngine wiredTiger\n /opt/mongo/bin/mongod --dbpath /mongodb/sharddb --port 27018 --storageEngine wiredTiger\n",
"text": "Thank you for the link. this is what I get when I try to connect to mongo shell from the instance where mongod is running.",
"username": "personal_java"
},
{
"code": "",
"text": "As per your commands(and standard for shard replset members) the port is 27018",
"username": "chris"
},
{
"code": "",
"text": "that is it. i’m able to connect. Thank you so much for your help.",
"username": "personal_java"
}
]
| Mongodb compact | 2022-11-30T20:09:16.327Z | Mongodb compact | 2,798 |
null | [
"aggregation",
"replication",
"time-series"
]
| [
{
"code": "",
"text": "Hi,I have a large replica set, and in one of my collections, I have documents similar to the following:{\ndevice: integer,\ndate: string,\ntime: string,\nvoltage: double,\namperage: double\n}Data is inserted as time series data, and a separate process aggregates and averages results so that this collection has a single document per device every 5 minutes. ie. time is 00:05:00, 00:10:00, etc.Here is what I’m trying to figure out. I have a subset of devices that I need to query for at once, and on the same date. Usually I’ll be searching for 5-10 devices, on the same date, and I need to find the time when all 5-10 devices are >= 27, and the summation of the amperages at that time is the lowest. The end result that I’m looking for is the time that this occurred.I had been going down the path of searching for these devices with $in, and specifying the voltage, which works fine, but it didn’t guarantee that all 5 devices met that requirement.Any suggestions on how to accomplish something like this?Thanks",
"username": "Mark_Windrim"
},
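One reading of the requirement is: for a given date and device list, keep only the 5-minute slots in which every listed device reports voltage >= 27, then return the slot with the smallest summed amperage. The aggregation below is an untested sketch of that interpretation; the collection name, date value, and device ids are placeholders:

```javascript
// Hedged sketch - assumes one document per device per 5-minute slot, as described above.
const devices = [101, 102, 103, 104, 105];   // the 5-10 devices of interest

db.readings.aggregate([
  // keep only readings for the chosen date/devices that meet the voltage requirement
  { $match: { date: "2022-11-30", device: { $in: devices }, voltage: { $gte: 27 } } },

  // group the surviving readings per time slot
  { $group: {
      _id: "$time",
      devicesAbove: { $sum: 1 },
      totalAmperage: { $sum: "$amperage" }
  } },

  // keep only slots where EVERY requested device met the requirement
  { $match: { devicesAbove: devices.length } },

  // lowest summed amperage first, return that single time slot
  { $sort: { totalAmperage: 1 } },
  { $limit: 1 }
]);
```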
{
"code": "",
"text": "Hi @Mark_Windrim,Would it be possible for you to provide a small data set that illustrates what you are looking for and the expected output given that data set?When you say:Usually I’ll be searching for 5-10 devices, on the same date, and I need to find the time when all 5-10 devices are >= 27I’m struggling to understand the actual query in English already so I’m not in a position to translate it into MQL just yet =).Oh and are you using the “new” timeseries collection or is it a regular collection?Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
]
| Aggregation with sorting, summations and matches | 2022-11-30T19:57:45.387Z | Aggregation with sorting, summations and matches | 1,390 |
null | []
| [
{
"code": "",
"text": "Hi Team,When I tried to save data using reactiveMongoRepository.save() . It’s not working.(I am not getting any errors and data is not available in the collection) but mongoRepository.save() is working fine without any issues. I could see data in collection.Thanks",
"username": "prabhu_padala"
},
{
"code": "",
"text": "Hi @prabhu_padala and welcome in the MongoDB Community !There isn’t much to work with here to help you. Can you please provide a short piece of code (as short as possible) that reproduces the problem without anything else around?You are in Java right?Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
]
| ReactiveMongoRepository save Issue | 2022-11-30T18:00:53.036Z | ReactiveMongoRepository save Issue | 998 |
null | [
"replication"
]
| [
{
"code": "",
"text": "I created 2 MongoDB windows services on same server different ports as described in this websiteAnswer a question I have installed MongoDB and its set up as windows service. When I try to set up replicaSet I am getting error \"Only one usage of each socket address (protocol/network address/port) 芒果数据 DevPress官方社区The services will restart automatically…My question is will i need to Configure replica set from MongoDB shell after server restart again ?",
"username": "Alan_Tam"
},
{
"code": "",
"text": "No the replicaSet configuration is statefull.A replica set needs 3 memebers to be viable. The best is 3 data bearing replicas but a arbiter can be used(but is not advised) to create a PSA replica set.As the primary reason for a replica set is redundancy and high availability there is very little value in placing the members on the same host.If there are features that you want to access that rely on a replica set then a single node replica set can be configured.",
"username": "chris"
},
{
"code": "",
"text": "Yes you have to do it manually from shell\nThe document you shared clearly shows the steps\nConnect to your primary port run rs.initiate() and add other node\nor\nWhile initiating itself define your nodes of your replicaset",
"username": "Ramachandra_Tummala"
},
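A hedged sketch of the two options above; the replica set name and the localhost ports follow the single-host, three-services layout discussed in this thread, but treat them as placeholders for your own setup:

```javascript
// Run once, connected to the member that should become PRIMARY.
rs.initiate({
  _id: "rs1",
  members: [
    { _id: 0, host: "localhost:27017" },
    { _id: 1, host: "localhost:37017" },
    { _id: 2, host: "localhost:47017" }
  ]
});

// ...or initiate with defaults and add the other members afterwards:
// rs.initiate();
// rs.add("localhost:37017");
// rs.add("localhost:47017");
```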
{
"code": "",
"text": "Thanks Chris, stateful configuration makes sense… But I just want to clarify, is there anything I need to do to kick off the replication again after server reboots? for example rs.initiate again ?",
"username": "Alan_Tam"
},
{
"code": "rs.initiate",
"text": "is there anything I need to do to kick off the replication again after server reboots? for example rs.initiate again ?unless you forgot to add them all to the service start-up list, then no.rs.initiate writes to a configuration collection on each of those server instances, and they will know exactly what to do when they come back online. So if you don’t make dramatic changes (ports for example) to those instances, they will automatically reconnect to the set if any of them restarts for any reason.in fact, the hard part is removing them from the set (temporary/permanent) and/or trying to make run them back as single. I exaggerated the “hard” as it may sound. so do not worry about that ",
"username": "Yilmaz_Durmaz"
},
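For illustration, the persisted configuration can be checked from the shell after a restart (standard shell helpers, shown here as a sketch):

```javascript
rs.status()   // current state of every member after the restart
rs.conf()     // the replica set configuration that was persisted
db.getSiblingDB("local").system.replset.findOne()   // the collection where that configuration is stored
```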
{
"code": "",
"text": "replicaSet configurationThanks How, How do you remove entire replicaSet configuration ? From MongoDB Shell ?Also I can Verify the status of all three Windows services using the sc command with query option… But i cannot connect to any of the MongoDB’s using MongoDB Compass. Any ideas ?",
"username": "Alan_Tam"
},
{
"code": "",
"text": "I figure out why i was not able to connect with MongoDB Compass… after you add “replSetName” to the cfg file you will NOT be able to connect w/ MongoDB Compass until after you configure replica set ( ie MongoDB Shell stuff )Still wondering How do you remove entire replicaSet configuration using MongoDB Shell ?Thanks",
"username": "Alan_Tam"
},
{
"code": "c:\\datadbdb1db2C:\\mongodb-win32-xxxloglog1log2localhostmongodb://localhost:27017,localhost:27017,localhost:27017/?replicaSet=rs1",
"text": "check the task manager if you have 3 “mongod” process. they might have stopped for you might forgot the unseen step of creating data and log folders.f you have followed that page and did not divert from the instruction it provides, you should be able to connect them all as localhost at ports “27017”, “37017” and “47017” individually.the primary node of your set can change at any time you restart the system, so use this to connect as a whole (connect to primary if not set otherwise):\nmongodb://localhost:27017,localhost:27017,localhost:27017/?replicaSet=rs1",
"username": "Yilmaz_Durmaz"
},
{
"code": "local",
"text": "How do you remove entire replicaSet configurationfor this my friend, you need a separate search. these may help",
"username": "Yilmaz_Durmaz"
},
{
"code": "A replica set needs 3 memebers to be viable.",
"text": "A replica set needs 3 memebers to be viable.if I only have 2 members … Primary and Secondary this will not work ?",
"username": "Alan_Tam"
},
{
"code": "",
"text": "if I only have 2 members … Primary and Secondary this will not work ?unless you set their priority levels, they will fail to decide who will be the primary. that is why 3 is the best number to break the tie. setting one of them as arbiter or hidden is also applicable. check this page:",
"username": "Yilmaz_Durmaz"
},
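As an illustrative sketch (reusing the replica set name and the three local ports mentioned elsewhere in this thread), a three-member set can be initiated in one call from the shell:

```javascript
rs.initiate({
  _id: "rs1",
  members: [
    { _id: 0, host: "localhost:27017" },
    { _id: 1, host: "localhost:37017" },
    { _id: 2, host: "localhost:47017" }
    // for a PSA layout, the last member could instead be
    // { _id: 2, host: "localhost:47017", arbiterOnly: true }
  ]
})
```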
{
"code": "",
"text": "Is there any disadvantage to using Arbiter ?",
"username": "Alan_Tam"
},
{
"code": "",
"text": "Arbiters do not hold data so cuts the storage requirements. other than having an instance of a server somewhere, no downside. just do not forget they are there to finalize voting (plus a few other functioning you don’t need for now).",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "No Automatic Failover when using Primary with a Secondary and an Arbiter ?",
"username": "Alan_Tam"
},
{
"code": "",
"text": "check this for more info about Replica Sets and how they behave when they lose contact with other (about failover and more)\nReplication — MongoDB Manualif primary is left alone, it will not declare itself as primary anymore. if secondary and arbiter are alive, arbiter votes for the other. more than 3 requires a bit of delicacy to get balanced decisions among alive members.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "have you ever taken a course at MongoDB University? All courses are free and with proof of completion*.I recommend you take M103 course.\nM103: Basic Cluster Administration | MongoDB UniversityEdit: previously, I used “certificated” instead of “with proof of completion”. sorry for that.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Primary with a Secondary and an Arbiterok, i am just trying to figure out why someone would pick Primary with a Secondary and an Arbiter over Primary with Two Secondary Members besides requiring fewer resources ?",
"username": "Alan_Tam"
},
{
"code": "",
"text": "ok, i am just trying to figure out why someone would pick Primary with a Secondary and an Arbiter over Primary with Two Secondary Members besides requiring fewer resources ?Unless you set priorities, they will fail to assign a primary and your database will serve only as read-only. so the final decision is on who administer them and who pays for resources.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @Alan_Tam !Per Replica set with 3 DB Nodes and 1 Arbiter - #8 by Stennie, there are several disadvantages to using an arbiter with modern versions of MongoDB. I recommend NOT using an arbiter where possible, especially for a production deployment.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "@Stennie_X just to be clear… if you had the choice betweenChoice would be 1 ?\ncan you give some disavantages w/ choice 2 ?Thanks",
"username": "Alan_Tam"
}
]
| Do I need to Configure replica set from MongoDB shell after server restart? | 2022-11-26T12:33:26.030Z | Do I need to Configure replica set from MongoDB shell after server restart? | 4,258 |
null | []
| [
{
"code": "",
"text": "How to add pipelibe to structurd straming.pls provide the syntax…i need to capture the changes on collection, adding new document and for deleting",
"username": "Krishnamoorthy_Kalidoss"
},
{
"code": "",
"text": "please let me know the syntax using sparksession(instead of sparkconf)",
"username": "Krishnamoorthy_Kalidoss"
},
{
"code": "",
"text": "‘’’\nSparkSession spark = SparkSession.builder()\n.appName(“testprogram”)\n.config(“spark.jars.package”, “org.mongodbspark.mongo-spark-connector10.0.5”)\n.getOrCreate();\npipeline = [{‘match’: {‘status’: ‘A’}}]dataStreamDf = spark.readStream()\n.format(“mongodb”)\n.option(“spark.mongodb.connection.uri”, “”)\n.option(“spark.mongodb.database”, “”)\n.option(“spark.mongodb.change.publish.full.document.only”,“true”)\n.option(“spark.ongodb.read.aggreation.pipeline”,pipeline\")\n.option(“spark.mongodb.collection”, “”)\n.schema(“inferschema”,“true”)\n.load().writeStream()\n.format(“mongodb”)\n.option(“checkpointLocation”, “/tmp/”)\n.option(“forceDeleteTempCheckpointLocation”, “true”)\n.option(“spark.mongodb.connection.uri”, “”)\n.option(“spark.mongodb.database”, “”)\n.option(“spark.mongodb.collection”, “”)\n.trigger(continuous=“1 second”)\n.outputMode(“append”);\n.start() ‘’’With this i can capture when i do update and insert to read collection. (without pipeline)…here i want to acheive the following with structred streaming with mongo-spark-connector10.0.5”:1). i have set the pipeline in the above code - pipeline is not working with the above code…\n2) how to capture operation type (insert, update, delete)\n3) how to caputre only the changes during updatePlease help me as soon as",
"username": "Krishnamoorthy_Kalidoss"
},
{
"code": "",
"text": "can anyone please help on this…it is quite urgent for me…",
"username": "Krishnamoorthy_Kalidoss"
},
{
"code": "",
"text": "Any update here… Do we have a way to use pipeline ?",
"username": "Krishnamoorthy_Kalidoss"
},
{
"code": "",
"text": "Hello MongoDb Community,I believe we can not trigger pipeline in structured streaming using\n‘’’“spark.mongodb.read.aggregation.pipeline”\n‘’’’Please confirm…(Streaming from mongo to mongo using continuous)",
"username": "Krishnamoorthy_Kalidoss"
}
]
| How to add pipelibe in structured streaming.. pls. Provide the syntax..i want to capture only the changss done on collection | 2022-11-25T12:33:40.062Z | How to add pipelibe in structured streaming.. pls. Provide the syntax..i want to capture only the changss done on collection | 1,393 |
null | []
| [
{
"code": "use sample_training\nfor (var i = 0; i < collections.length; i++) {\n print(collections[i], db.getCollection(collections[i]).countDocuments())\n}\n",
"text": "Hi, I’m tryng to list the count of documents for all colletions but my code doesn’t work:I get this error:ReferenceError: collections is not definedWhat am I doing wrong?",
"username": "50de2ab8098abaaa71308c68efda337"
},
{
"code": "for (var i = 0; i < db.getCollectionInfos().length; i++) {\n print(\n db.getCollectionInfos()[i]['name'],\n db.getCollection(db.getCollectionInfos()[i]['name']).countDocuments())\n}\n",
"text": "this seems to work:",
"username": "50de2ab8098abaaa71308c68efda337"
},
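A slightly tidier variant of the same idea, shown as a mongosh sketch that fetches the collection list only once instead of on every iteration:

```javascript
db.getCollectionInfos().forEach(function (info) {
  print(info.name, db.getCollection(info.name).countDocuments());
});
```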
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| List number of documents in each collection | 2022-12-01T09:15:29.080Z | List number of documents in each collection | 697 |
null | [
"connector-for-bi"
]
| [
{
"code": "",
"text": "I tired to launch mongosqld locally with the following command:mongosqld --mongo-uri “mongodb://mongo-dev-shard-00-00.xxxxx.gcp.mongodb.net:27017,mongo-dev-shard-00-01.xxxxx.gcp.mongodb.net:27017,mongo-dev-shard-00-02.xxxxx.gcp.mongodb.net:27017/?ssl=true&replicaSet=mongo-dev-shard-0&retryWrites=true&w=majority” --auth -u username -p password --mongo-versionCompatibility 4.4.6but ends up with an error:unable to load MongoDB information: failed to create admin session for loading server cluster information: unable to execute command: server selection error: context deadline exceeded, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongo-dev-shard-00-00.xxxxx.gcp.mongodb.net:27017, Type: Unknown, Average RTT: 0 }, { Addr: mongo-dev-shard-00-01.xxxxx.gcp.mongodb.net:27017, Type: Unknown, Average RTT: 0 }, { Addr: mongo-dev-shard-00-02.xxxxx.gcp.mongodb.net:27017, Type: Unknown, Average RTT: 0 }, ] }However, I can successfully connect db with mongodb compass client,\nby using uri:mongodb://username:[email protected]:27017,mongo-dev-shard-00-01.xxxxx.gcp.mongodb.net:27017,mongo-dev-shard-00-02.xxxxx.gcp.mongodb.net:27017/?ssl=true&replicaSet=mongo-dev-shard-0&retryWrites=true&w=majorityI can’t figure out what I am doing wrong. Anyone can assist me? Thanks in advance.hosted mongodb version: 4.4.6\nmongosqld version: 2.14.3",
"username": "Hong_Jian_Lim"
},
{
"code": "",
"text": "I am facing the same issue. @Hong_Jian_Lim : Did you find a solution to that?",
"username": "Ahmed_Nounou"
},
{
"code": "",
"text": "I am also facing the exact same issue. Are there any news so far?",
"username": "Simon_Bieri"
},
{
"code": "",
"text": "@Ahmed_Nounou @Simon_Bieri I’m also stucked with this problem, did anyone come up with a solution?Thanks for your time",
"username": "Antoni_Heinrichs"
}
]
| Hosted Database and On Premises BI Connector, failed to launch mongosqld | 2021-05-27T12:47:39.465Z | Hosted Database and On Premises BI Connector, failed to launch mongosqld | 6,673 |
[]
| [
{
"code": "",
"text": "Hi Guys,I am facing today timed-out 30000ms error today on my production attached some screenshots there kindly give a solution.Thank You\n\n01-digitaloceanscreenshot1436×777 24.7 KB\n",
"username": "NIKHIL_27"
},
{
"code": "",
"text": "Hi @NIKHIL_27, welcome to the community. \nCan you help me by providing the following details:If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Hi @SourabhBagrecha1.No it’s not 1st time I am running this application in the environment of this server I created it a year ago and it was running well I am facing these issues since yesterday I haven’t make any config changes yesterday\n2. Previously my IP address was 127.0. 0.1",
"username": "NIKHIL_27"
},
{
"code": "",
"text": "Hi @NIKHIL_27, thanks for reverting quickly.Previously my IP address was 127.0. 0.1This is your localhost’s(your own machine’s) IP address. You need to add your server’s IP address where you are deploying your application.If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Database Error :MongooseServerSelectionError: Server selection timed out after 30000 ms | 2022-12-01T06:58:44.229Z | Database Error :MongooseServerSelectionError: Server selection timed out after 30000 ms | 1,421 |
|
[
"queries",
"replication",
"python",
"spark-connector",
"scala"
]
| [
{
"code": "22/10/28 08:42:13 INFO connection: Opened connection [connectionId{localValue:1, serverValue:67507}] to 10.00.000.000:19902\n22/10/28 08:42:13 INFO connection: Opened connection [connectionId{localValue:2, serverValue:1484}] to 10.00.000.000:20902\n22/10/28 08:42:13 INFO cluster: Monitor thread successfully connected to server with description ServerDescription{address=10.00.000.000:19902, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 4, 8]}, minWireVersion=0, maxWireVersion=9, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=63396153, setName='ABCD', canonicalAddress=10.00.000.000:19902, hosts=[10.00.000.000:18902, 10.00.000.000:19902], passives=[], arbiters=[10.00.000.000:20902], primary='10.00.000.000:18902', tagSet=TagSet{[]}, electionId=null, setVersion=10, lastWriteDate=Fri Oct 28 08:42:12 UTC 2022, lastUpdateTimeNanos=9353874207439702}\n22/10/28 08:42:15 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool\n22/10/28 08:42:15 INFO DAGScheduler: ResultStage 1 (showString at NativeMethodAccessorImpl.java:0) finished in 0.032 s\n22/10/28 08:42:15 INFO DAGScheduler: Job 1 finished: showString at NativeMethodAccessorImpl.java:0, took 0.038818 s\n++\n||\n++\n++\n\n22/10/28 08:42:15 INFO SparkContext: Invoking stop() from shutdown hook\n22/10/28 08:42:15 INFO MongoClientCache: Closing MongoClient: [10.00.000.000:19902,10.00.000.000:20902,10.00.000.000:18902]\nfrom pyspark import SparkConf, SparkContext\nimport sys\nimport json\n\nsc = SparkContext()\nspark = SparkSession(sc).builder.appName(\"MongoDbToS3\").config(\"spark.mongodb.input.uri\", \"mongodb://username:password@host1,host2,host3/db.table/?replicaSet=ABCD&authSource=admin\").getOrCreate()\ndata = spark.read.format(\"com.mongodb.spark.sql.DefaultSource\").load()\ndata.show()\nfrom datetime import datetime\nimport json\n#import boto3\nfrom bson import json_util\nimport pymongo\n\n\nclient = pymongo.MongoClient(\"mongodb://username@host:port/?authSource=admin&socketTimeoutMS=3600000&maxIdleTimeMS=3600000\")\n\n# Database Name\ndb = client[\"database_name\"]\n\n# Collection Name\nquoteinfo__collection= db[\"collection_name\"]\n\nresults = quoteinfo__collection.find({}).batch_size(1000)\ndoc_count = quoteinfo__collection.count_documents({})\n\nprint(\"documents count from collection: \",doc_count)\nprint(results)\nrecord_increment_no = 1\n\nfor record in results:\n print(record)\n print(record_increment_no)\n record_increment_no = record_increment_no + 1\nresults.close()\ndocuments count from collection: 32493\n<pymongo.cursor.Cursor object at 0x7fe75e9a6650>\n python3 mongofiltercount.py\ndocuments count from collection: 32492\n<pymongo.cursor.Cursor object at 0x7f2595328690>\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n",
"text": "I am trying to read data from 3 node MongoDB cluster(replica set) using PySpark and native python in AWS EMR. I am facing issues while executing the codes with in AWS EMR cluster as explained below but the same codes are working fine in my local windows machine.Through Pyspark - (issue - pyspark is giving empty dataframe)Below are the commands while running pyspark job in local and cluster mode.local mode : spark-submit --master local[*] --packages org.mongodb.spark:mongo-spark-connector_2.11:2.4.4 test.pycluster mode :\nspark-submit --master yarn --deploy-mode cluster --packages org.mongodb.spark:mongo-spark-connector_2.11:2.4.4 test.pywith both the modes I am not able to read data from mongoDB(empty dataframe) even though telnet is working across all nodes from spark cluster(from all nodes) . From the logs, I can confirm that spark is able to communicate with mongoDB and my pyspark job is giving empty dataframe. Please find below screenshots for same!Below is the code snippet for same:please let me know anything I am doing wrong or missing in pyspark code?Through native python code - (issue - code is getting stuck if batch_size >1 and if batch_size =1 it will print first 24 mongo documents and then cursor hangs)I am using pymongo driver to connect to mongoDB through native python code. The issue is when I try to fetch/print mongoDB documents with batch_size of 1000 the code hangs forever and then it gives network time out error. But if I make batch_size =1 then cursor is able to fetch first 24 documents after that again cursor hangs. we observed that 25th document is very big(around 4kb) compared to first 24 documents and then we tried skipping 25th document, then cursor started fetching next documents but again it was getting stuck at some other position, so we observed whenever the document size is large the cursor is getting stuck.can you guys please help me in understanding the issue?is there anything blocking from networking side or mongoDB side?below is code snippet :below is output screenshot for samefor batch_size = 1000 (code hangs and gives network timeout error)\nnetwork-timeout-error1252×673 84 KB\nbatch_size = 1 (prints documents only till 24th and then cursor hangs)",
"username": "Ranjit_J_N"
},
{
"code": "",
"text": "Hi All,\nThere were some issues with AWS account peering between our dev and MongoDB hosted AWS account as explained belowAfter adding transit gateway for MongoDb IP1 and MongoDB IP2,we are able to read data properly with any batch size for any collection.",
"username": "Ranjit_J_N"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Unable to read data from mongoDB using Pyspark or Python | 2022-10-28T08:45:32.298Z | Unable to read data from mongoDB using Pyspark or Python | 3,735 |
|
null | [
"queries",
"crud",
"scala"
]
| [
{
"code": "db.Cart.find(\n { \"items\": { $elemMatch: { productId: \"123\", productType: \"TSHIRT\" } } } \n)\ndb.Cart.updateMany( { }, { $pull: { \"items\": { $elemMatch: { \"productId\": \"123\", \"productType\": \"TSHIRT\" } } } }\n)\ndb.Cart.updateMany( { }, { $pull: { \"items\": { \"productType\": \"TSHIRT\", \"productId\": \"123\" } } })\n",
"text": "Hi,This query works and returns an element.But when I want to remove this item from the items array with the following statement:This one doesn not work:This one works:Why is this?And I am using the JVM (scala driver) - So I need to translate it into Scala code. Still haven’t found a way to do that with the one statement that works.",
"username": "Kristoffer_Almas"
},
{
"code": "db.Cart.updateMany( { }, { $pull: { \"items\": { \"productType\": \"TSHIRT\", \"productId\": \"123\" } } })\ndb.Cart.find(\n { \"items\": { $elemMatch: { productId: \"123\", productType: \"TSHIRT\" } } } \n)\n",
"text": "This query works and returns an element.Please share the element that is returned.This one works:Please share the UpdateResult.Then run again the first queryto see what was changed.One of the issue might be that for two objects to be equals the fields must have the same values and be in the same order. The object {a:1,b:2} is not equal to the object {b:2,a:1} because the fields are not in the same order.Your first query has the order productId and productType. The $pull that does not work has the same order which is surprising because the one that does work has the productType and productId order. I would have guess that the 2nd one does not work but that the first one does.",
"username": "steevej"
},
{
"code": "",
"text": "Ok it’s described in the documentation https://www.mongodb.com/docs/manual/reference/operator/update/pull/#remove-items-from-an-array-of-documents that it won’t work. So no need to pursue this one ",
"username": "Kristoffer_Almas"
}
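For the Scala translation asked about above, an untested sketch using the official Scala driver; the client, database, and collection names are illustrative:

```scala
import org.mongodb.scala.{Document, MongoClient}
import org.mongodb.scala.model.Updates
import scala.concurrent.Await
import scala.concurrent.duration._

val client = MongoClient("mongodb://localhost:27017")
val cart   = client.getDatabase("test").getCollection("Cart")

// Same shape as the working mongosh statement: pull array elements whose
// fields match the given condition document.
val result = Await.result(
  cart.updateMany(
    Document(),  // empty filter: consider every document
    Updates.pull("items", Document("productType" -> "TSHIRT", "productId" -> "123"))
  ).toFuture(),
  10.seconds
)
println(result.getModifiedCount)
```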
]
| $pull with $elemMatch not working | 2022-11-30T15:50:22.044Z | $pull with $elemMatch not working | 3,051 |
null | [
"node-js"
]
| [
{
"code": "",
"text": "Hi there,I attended the MongoDB Associate Developer Node certification exam on December 1st, 2022 in the Examity portal and unfortunately could not pass the exam. I have got the result breakdown percentage for each section but there is no information about what are questions I made wrong and what are correct. I am not sure where I kind find it.knowing these details will help me improve my preparation to retake the exam. Also, I would like to know the pass percentage for the Associate developer exam.Any clarifications would be appreciated…Thanks",
"username": "Pushpathumba_Saravanan"
},
{
"code": "",
"text": "Hi @Pushpathumba_Saravanan,When you complete your exam you will receive a pass or fail result along with a report of how well you did in each of the exam topic areas (represented in percentages). In order to keep our exam content secure, we do not provide a breakdown at the question level. To see the exam topics and objectives and to learn how to prepare please reference our exam guides:If you have any doubts, please feel free to reach out to us.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thank you @Kushagra_Kesav for clarifying me. Could be please let me know what is the minimum percentage required for passing this exam?",
"username": "Pushpathumba_Saravanan"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Associate Developer Exam Node.js | 2022-12-01T02:21:33.633Z | MongoDB Associate Developer Exam Node.js | 2,030 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "{\n name:\"abcd\",\n price:\"21\",\n text:\"aaa\",\n comapny:{\n name:\"bbbb\"\n }\n }\n`\n",
"text": "how to use atlas search to search a keyword present in my ‘company’ key which is an object",
"username": "Adhil_E"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"company.name\": {\n \"type\": \"string\"\n }\n }\n }\n}\ndb.newAtlasS.aggregate(db.newAtlasS.aggregate([\n {\n $search: {\n index: 'default',\n text: {\n query: 'bbbb',\n path: 'company.name'\n }\n }\n }\n]\n)\n[\n {\n _id: ObjectId(\"63864d60959925ab51e0cf96\"),\n name: 'abcd',\n price: '21',\n test: 'aaa',\n company: { name: 'bbbb' }\n }\n]\n",
"text": "Hi @Adhil_E and welcome to MongoDB the community forum!!Based on the sample data provided above, I created the index in the following way:and the following search query returned me the above documents:Output for which is:For more detailed information, please refer to the documentation on nested field search.Let us know if you have any further queries.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "This topic was automatically closed after 180 days. New replies are no longer allowed.",
"username": "system"
}
]
| How to use aggregation in MongoDB Atlas to search keyword/string present in object inside another object | 2022-11-28T04:55:28.920Z | How to use aggregation in MongoDB Atlas to search keyword/string present in object inside another object | 1,163 |
[
"python"
]
| [
{
"code": "",
"text": "Hi, I just tried the Practice Exam for MongoDB Associate Developer Exams (Python). As a new developer, I found it infuriating to learn that I did terrible at the practice exam AND I can’t get the correct answer for some questions AND some questions don’t have a ‘correct’ answer.Is it just me or is it for everyone else?\n\nMongoDB Practice Exam Python - 11709×950 93.1 KB\nThis is just one of the many questions that doesn’t show me the correct answer and possibly doesn’t have a correct answer.If it is the case for everyone else, can I request for the MongoDB team to re-review the practice exam AND the actual certification exam? Cause I certainly don’t want to find the same bugs with the actual exam.",
"username": "Tobias_Aditya"
},
{
"code": "",
"text": "Hi @Tobias_Aditya,Welcome to the MongoDB Community forums As a new developer, I found it infuriating to learn that I did terrible at the practice exam AND I can’t get the correct answer for some questions AND some questions don’t have a ‘correct’ answer.At the moment, we do not display the correct answer to a practice question that you have answered incorrectly. However, we appreciate your feedback and will work with the concerned team to address it.This is just one of the many questions that doesn’t show me the correct answer and possibly doesn’t have a correct answer.MongoDB PracAs you can see in the screenshot above, you have chosen all the options, so it reports the option as incorrect. As per the problem statement, there is only one correct option.If it is the case for everyone else, can I request for the MongoDB team to re-review the practice exam AND the actual certification exam ? Cause I certainly don’t want to find the same bugs with the actual exam.Our team has reviewed the practice exam questions and fixed some of the misconfigured ones. I would encourage you to retry the practice question and provide feedback again.If you have any further questions or concerns, please don’t hesitate to contact us.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "c.find()\n{ _id: 1, a: 1, b: 1 }\n{ _id: 2, a: 2 }\nc.updateMany( { a:1 } , { $set : { a :3 }})\n{ acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0 }\nc.find()\n{ _id: 1, a: 3, b: 1 }\n{ _id: 2, a: 2 }\n",
"text": "I hope this question is one that was corrected because none of the answers suggested is correct.The document _id:1 matches the query a:1 but none of the answer shows the document _id:1 with a:3.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @steevej,Yes, we are working with the concerned team to fix the options. We will keep you updated on this.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Practice Exam for MongoDB Associate Developer Exams (Python) | 2022-11-21T09:03:52.744Z | Practice Exam for MongoDB Associate Developer Exams (Python) | 2,412 |
|
null | [
"python",
"sharding"
]
| [
{
"code": "",
"text": "Hi Team,I’m new to mongodb and we are planning to use sharding feature in our company, so it became a priority for me to monitor it for shardDistribution, so i used pymongo example: db.collection_name.getShardDistribution() but it is not working, and I’m not sure what I’m missing\ncan someone help me with this?Thanks\nVaseem",
"username": "Vaseem_Akram_mohammad"
},
{
"code": "db.collection.getShardDistributionmongosh",
"text": "Hi @Vaseem_Akram_mohammad and welcome to the MongoDB community forum!!The db.collection.getShardDistribution is a helper function which works only in the mongosh as of today.\nThe driver version of the command in pymongo are still not present in the latest release.While there is no direct replacement of the command in Python, you may be able to get some part of the information by switching to the config database and perform some queries on the collections inside it. See Collections to Support Sharded Cluster Operations for more details on this.However, please note that the contents of the config database are internal to MongoDB and may be subjected to change in future depending on the product requirement.Let us know if you have any further queries.Best Regards\nAasawari",
"username": "Aasawari"
},
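As a hedged illustration of that alternative, part of the shard distribution can be approximated from Python by reading the config database; these are internal collections, so the exact fields can vary between server versions and should be verified before relying on them:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://mongos-host:27017")  # illustrative URI; connect through mongos
config = client["config"]
ns = "mydb.mycoll"  # illustrative namespace

# Older versions key config.chunks by "ns"; newer versions key it by the
# collection UUID found in config.collections.
meta = config["collections"].find_one({"_id": ns})
chunk_filter = {"uuid": meta["uuid"]} if meta and "uuid" in meta else {"ns": ns}

pipeline = [
    {"$match": chunk_filter},
    {"$group": {"_id": "$shard", "nChunks": {"$sum": 1}}},
]
for doc in config["chunks"].aggregate(pipeline):
    print(doc["_id"], doc["nChunks"])
```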
{
"code": "",
"text": "Helo @Aasawari ,Thank you",
"username": "Vaseem_Akram_mohammad"
}
]
| How can I run getShardDistribution() using pymongo | 2022-11-03T18:13:48.655Z | How can I run getShardDistribution() using pymongo | 2,025 |
null | []
| [
{
"code": "&<",
"text": "Hi folks, I have imported a large amount of data, and unfortunately there’s a ton of escape chars such as & and < the original dump did this and I have not checked before importing.Is there any function that can be used to process all documents in the collection and unescape those back to regular characters?Thanks",
"username": "Vinicius_Carvalho"
},
{
"code": "&<DB>db.collection.find({},{_id:0,text:1})\n[\n { text: '&this is some& text<' },\n { text: 'text<123' },\n { text: 'this &text< 123' }\n]\nDB>db.collection.updateMany( {},\n[\n {\n '$set': {\n text: {\n '$replaceAll': { input: '$text', find: '&', replacement: '' }\n }\n }\n },\n {\n '$set': {\n text: {\n '$replaceAll': { input: '$text', find: '<', replacement: '' }\n }\n }\n }\n])\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 3,\n modifiedCount: 3,\n upsertedCount: 0\n}\nDB> db.collection.find({},{_id:0,text:1})\n[\n { text: 'this is some text' },\n { text: 'text123' },\n { text: 'this text 123' }\n]\n",
"text": "Hi @Vinicius_Carvalho,I have only briefly tested this on a small collection containing 3 sample documents so if you believe it may work for your use case / environment then I would recommend testing thoroughly on a test / duplicated environment of what you have currently imported to verify it is correct.As an example, I have the following documents containing & and <:Running the below update against this collection:Documents in the same collection after the above update:If you find that this does not suit your use case, please provide the following:Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| Escaping html chars in collection | 2022-11-26T22:08:03.967Z | Escaping html chars in collection | 930 |
null | []
| [
{
"code": "systemLog:\n destination: file\n path: /var/log/mongodb/mongod.log\n logAppend: true\n/var/log/mongodb/mongod.logmongodb:mongodb/var/log/mongodb/mongod.log",
"text": "For [email protected] community edition, I don’t see any logs being written.But the file size of /var/log/mongodb/mongod.log is 0",
"username": "Pra_Deep"
},
{
"code": "ss -tlnp\nps -aef | grep [m]ongo\n",
"text": "Most likely the mongod instance is not using the configuration file you shared.How did you started mongod?How do you connect to mongod?Share the output of the following commands:",
"username": "steevej"
},
{
"code": "/etc/[email protected]$ ps -aef | grep [m]ongo\nmongodb 14400 1 99 Jun29 ? 295-20:17:30 /usr/bin/mongod --config /etc/mongod.conf\n\n$ ss -tlnp\nState Recv-Q Send-Q Local Address:Port Peer Address:Port \nLISTEN 0 128 127.0.0.1:6379 0.0.0.0:* \nLISTEN 0 128 0.0.0.0:80 0.0.0.0:* \nLISTEN 0 128 127.0.0.53%lo:53 0.0.0.0:* \nLISTEN 0 128 0.0.0.0:22 0.0.0.0:* \nLISTEN 0 128 0.0.0.0:443 0.0.0.0:* \nLISTEN 0 128 127.0.0.1:27017 0.0.0.0:* \nLISTEN 0 128 *:8080 *:* users:((\"node\",pid=11366,fd=10)) \nLISTEN 0 128 [::]:22 [::]:* \nLISTEN 0 128 *:3000 *:* users:((\"node /home/prad\",pid=17598,fd=19))\n/etc/mongod.confstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\nsecurity:\n authorization: \"enabled\"\n",
"text": "Running mongo server using systemctl on ubuntu@18.\nCan confirm that it is using the right config file /etc/mongod.conf\nI connect to mongo using mongodb nodejs package [email protected] of /etc/mongod.conf",
"username": "Pra_Deep"
},
{
"code": "ls -al /var/log/mongodb/mongod.log\n",
"text": "Output of the command:",
"username": "steevej"
},
{
"code": "-rw------- 1 mongodb mongodb 0 Jul 10 06:25 /var/log/mongodb/mongod.log\n",
"text": "",
"username": "Pra_Deep"
},
{
"code": "",
"text": "B-(The above is the face of a perplex person.I will come back later, hopefully with B-)",
"username": "steevej"
},
{
"code": "df -v /var/log/mongodb/\n",
"text": "Output of the command",
"username": "steevej"
},
{
"code": "",
"text": "The machine has a lot of space as well as memory available",
"username": "Pra_Deep"
},
{
"code": "db.adminCommand( { getLog: \"global\" } )\n/bin/rm /var/log/mongodb/mongod.log\n/bin/cp /dev/null /var/log/mongodb/mongod.log\n",
"text": "I see a few possibilities.The configuration file has changed since mongod was started and the running instance is in fact writing into a different file. In mongosh you might tryto see if you can find log entries for initandlisten.Commands such aswere issued replacing the file that mongod is writing into. But mongod is still writing in the original file handle of the file. Some people do that when they do not run logRotate and the file becomes too big with the false hope that the original file is truncated.",
"username": "steevej"
},
{
"code": "/var/log/mongodb/mongod*.log {\n weekly\n dateext\n missingok\n rotate 12\n compress\n notifempty\n}\n",
"text": "thank you. you are right. With the first command, I checked the logs were coming in fact.\nRestarting the mongodb server fixed the issue. now I can see the logs in mongod.log.And probably logrotate might be the culprit here. Is there anything wrong with my logrotate config that could have caused this issue?",
"username": "Pra_Deep"
},
{
"code": "reopenreopen",
"text": "I do not know enough about logrotate utility but you could look at\nand\nandOf particular attention is the sentence:reopen closes and reopens the log file following the typical Linux/Unix log rotate behavior. Use reopen when using the Linux/Unix logrotate utility to avoid log loss.The default value is rename which seems to be incompatible with logrotate utility.",
"username": "steevej"
},
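An untested sketch of how the two pieces are usually combined: keep the logrotate rule shown earlier, set systemLog.logRotate: reopen in mongod.conf, and add a postrotate step so mongod reopens its file after logrotate has moved it. The signal/path details below are assumptions to verify against the MongoDB log rotation documentation:

```
/var/log/mongodb/mongod*.log {
    weekly
    dateext
    missingok
    rotate 12
    compress
    notifempty
    postrotate
        /bin/kill -SIGUSR1 $(pidof mongod) 2>/dev/null || true
    endscript
}
```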
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| No logs in mongo v4.4 | 2022-11-30T12:11:15.395Z | No logs in mongo v4.4 | 2,084 |
null | [
"queries",
"crud",
"golang"
]
| [
{
"code": "client.Database().Collection().InsertMany()db.collection.find().pretty()",
"text": "Hey everyone! I am new to MongoDB.\nI read the MongoDB Golang driver document, I found that we can use client.Database().Collection().InsertMany() to insert some data. But if the query sends by frontend, how can I execute the plain query likes db.collection.find().pretty()?\nThanks a lot.",
"username": "oysterdays"
},
{
"code": "findFindtype\"Oolong\"bson.D{}",
"text": "@oysterdays if you’re asking about how to run a find operation with the Go driver, check out the Find example from the Go Driver docs. That example uses a filter to only match documents where type is \"Oolong\", but you can find all documents by replacing that filter with an empty filter (e.g. bson.D{}).",
"username": "Matt_Dale"
}
]
| Is there any way to execute plain text query in Go? | 2022-11-30T06:54:11.950Z | Is there any way to execute plain text query in Go? | 1,154 |
null | []
| [
{
"code": "{\n_id: \"1\"\ntransitions: [\n {\n \"_id\" : \"11\"\n \"name\" : \"Tr1\"\n \"checkLists\" : [\n { _id: \"111\", name: \"N1\"},\n { _id: \"112\", name: \"N2\"}\n ]\n } \n ]\n}\ndb.collection.findOne({ 'transitions.checkLists._id: new ObjectId(\"112\") } }}, { 'transitions.checkLists.$': 1 })\n\n{ _id: ObjectId(\"1\"),\n transitions: \n [ { checkLists: \n [ { name: 'N1', _id: ObjectId(\"111\") },\n { name: 'N2', _id: ObjectId(\"112\") } ] } ] }\n{ _id: ObjectId(\"1\"),\n transitions: \n [ { checkLists: \n [ { name: 'N2', _id: ObjectId(\"112\") } ] } ] }\n",
"text": "Hi,\nI have the following data:I used the following code to get the name N2 by query of _id:112but the result returns back both of them:I would like to find and get only the name N2 by query of _id:112\nExpected Result:",
"username": "Mehran_Ishanian1"
},
{
"code": "db.collection.findOne({ 'transitions.checkLists._id: new ObjectId(\"112\") } }}, { 'transitions.checkLists.db.collection.aggregate(\n [\n {\n $match: {\n \"transitions.checkLists._id\": \"112\",\n },\n },\n {\n $unwind: \"$transitions\",\n },\n {\n $unwind: \"$transitions.checkLists\",\n },\n {\n $match: {\n \"transitions.checkLists._id\": \"112\",\n },\n },\n ]\n)\n",
"text": "db.collection.findOne({ 'transitions.checkLists._id: new ObjectId(\"112\") } }}, { 'transitions.checkLists.: 1 })`Hi Mehran, I’m not sure if it’s an “elegant” solution or not very scalable, but I got to this query",
"username": "Leandro_Domingues"
},
{
"code": "db.collection.aggregate(\n [\n {\n $match: {\n \"transitions.checkLists._id\": \"112\",\n },\n },\n {\n $unwind: \"$transitions\",\n },\n {\n $project: {\n transitions: {\n $filter: {\n input: \"$transitions.checkLists\",\n as: \"transition\",\n cond: {\n $eq: [\"$$transition._id\", \"112\"],\n },\n },\n },\n },\n },\n ]\n)\n{\n_id: \"1\"\ntransitions: [\n {\n \"_id\" : \"11\"\n \"name\" : \"Tr1\"\n \"checkLists\" : [\n { _id: \"111\", name: \"N1\"},\n { _id: \"112\", name: \"N2\"}\n ]\n } \n ]\n}\ndb.collection.findOne({ 'transitions.checkLists._id: new ObjectId(\"112\") } }}, { 'transitions.checkLists.$': 1 })\n\n{ _id: ObjectId(\"1\"),\n transitions: \n [ { checkLists: \n [ { name: 'N1', _id: ObjectId(\"111\") },\n { name: 'N2', _id: ObjectId(\"112\") } ] } ] }\n{ _id: ObjectId(\"1\"),\n transitions: \n [ { checkLists: \n [ { name: 'N2', _id: ObjectId(\"112\") } ] } ] }\n",
"text": "Or maybe this[quote=“Mehran_Ishanian1, post:1, topic:201482, full:true”]\nHi,\nI have the following data:I used the following code to get the name N2 by query of _id:112but the result returns back both of them:I would like to find and get only the name N2 by query of _id:112\nExpected Result:@Paulo_Cesar_Benjamin_Junior1",
"username": "Leandro_Domingues"
},
{
"code": "$map$filter$maptransitions$filtercheckLists_id$mergeObjectstransitionscheckListsdb.collection.findOne(\n { \"transitions.checkLists._id\": new ObjectId(\"112\") },\n {\n \"transitions\": {\n \"$map\": {\n \"input\": \"$transitions\",\n \"in\": {\n \"$mergeObjects\": [\n \"$$this\",\n {\n \"checkLists\": {\n \"$filter\": {\n \"input\": \"$$this.checkLists\",\n \"cond\": { \"$eq\": [\"$$this._id\", new ObjectId(\"112\")] }\n }\n }\n }\n ]\n }\n }\n }\n }\n)\n",
"text": "Hello @Mehran_Ishanian1,I used the following code to get the name N2 by query of _id:112The projection positional $, the condition will work only in the first level of the array,You can use aggregation operators $map and $filter operators to filter the nested array,Playground",
"username": "turivishal"
},
{
"code": "",
"text": "Thanks for your solution",
"username": "Mehran_Ishanian1"
},
{
"code": "",
"text": "Thank you for the solution",
"username": "Mehran_Ishanian1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| $ (dollar sign) project query with arrays | 2022-11-25T20:22:56.542Z | $ (dollar sign) project query with arrays | 1,861 |
null | []
| [
{
"code": "{\"t\":{\"$date\":\"2022-11-30T18:20:21.470+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn5\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"mydb.sessionactivity\",\"command\":{\"find\":\"sessionactivity\",\"filter\":{\"$and\":[{\"deviceSessionId\":\"62d1\"},{\"deleted\":{\"$ne\":true}}]},\"sort\":{\"startTime\":1,\"endTime\":-1},\"limit\":9007199254740991.0,\"returnKey\":false,\"showRecordId\":false,\"lsid\":{\"id\":{\"$uuid\":\"9c3\"}},\"$db\":\"mydb\"},\"planSummary\":\"COLLSCAN\",\"keysExamined\":0,\"docsExamined\":2109349,\"hasSortStage\":true,\"cursorExhausted\":true,\"numYields\":2164,\"nreturned\":3,\"queryHash\":\"1AA21857\",\"planCacheKey\":\"D35640A3\",\"reslen\":1587,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":2165}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":2165}},\"Global\":{\"acquireCount\":{\"r\":2165}},\"Database\":{\"acquireCount\":{\"r\":2165}},\"Collection\":{\"acquireCount\":{\"r\":2165}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"protocol\":\"op_msg\",\"durationMillis\":4331}}\n{\"t\":{\"$date\":\"2022-11-30T18:20:29.046+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":48742, \"ctx\":\"conn13\",\"msg\":\"Profiler settings changed\",\"attr\":{\"from\":{\"level\":0,\"slowms\":4000,\"sampleRate\":0.42},\"to\":{\"level\":0,\"slowms\":5000,\"sampleRate\":0.42}}}\n",
"text": "I have two setups of mongodb server (developer env, production env.)\nWhile dev env shows no slow query even below 100ms. The production env shows too many slow queries (4000ms-7000ms).\nBoth have [email protected], same systems (ubuntu@18), prodution has more resources available than dev (memory, storage, etc.)This is the slow query log in production env",
"username": "Pra_Deep"
},
{
"code": "\"planSummary\":\"COLLSCAN\"{\"deviceSessionID\":1, \"deleted\":1,\"startTime\":1,\"endTime\":-1}{\"deviceSessionID\":1, \"deleted\":1}{\"deviceSessionID\":1}",
"text": "\"planSummary\":\"COLLSCAN\"Your collection has no supporting Index for this query. The whole collection (2.1M documents) is being scanned, reading from disk where necessary(slow). Based only on this query an optimal index might look like:{\"deviceSessionID\":1, \"deleted\":1,\"startTime\":1,\"endTime\":-1}An index like {\"deviceSessionID\":1, \"deleted\":1} or even {\"deviceSessionID\":1} might be less optimal but could support more use cases.",
"username": "chris"
}
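For illustration, the suggested index could be created like this (the collection and field names are taken from the slow-query log above):

```javascript
db.sessionactivity.createIndex(
  { deviceSessionId: 1, deleted: 1, startTime: 1, endTime: -1 }
)
```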
]
| Two similar mongodb servers, one has slow queries, other does not | 2022-11-30T18:50:19.616Z | Two similar mongodb servers, one has slow queries, other does not | 745 |
null | [
"node-js",
"replication"
]
| [
{
"code": "",
"text": "Hello all,I’d like to initiate a work process with replica set;all of our servers had already set up with such.thing is, locally, what’s the best practice (or convinience), should I actually set up rs (using run-rs for example) or should I convert a standalone mongodb to a replica set?help would be much appreciated!cheers",
"username": "DaveJames"
},
{
"code": "alias mdb='docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:6.0.3 --replSet=test && sleep 4 && docker exec mongo mongosh --quiet --eval \"rs.initiate();\"'\nalias m='docker exec -it mongo mongosh --quiet'\n",
"text": "Hi @DaveJames,When you are working locally, you just need a single node Replica Set (RS). There is no need to deploy a full 3 nodes RS on the same host. It will just consume more ressources and generate useless IOPS.Usually when I work locally, I just use Docker and deploy a temporary node but you can also persist the data in a volume if you want to reuse it next time.Single node RS also support Change Streams and Transactions just like a “normal” production ready 3 nodes RS.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi @MaBeuLux88,What should I do incase I run a local mongodb and do not use docker? is there an easy way of turning the local instance to rs?cheers",
"username": "DaveJames"
},
{
"code": "mongod --config rs_single.configmkdir ~/rs-single\nmongod --dbpath ~/rs-single --fork --logpath ~/rs-single.log --replSet rssingle --bind_ip localhost --port 27077\nmongo --port 27077\n",
"text": "What should I do incase I run a local mongodb and do not use docker?make a copy of config file (usually /etc/mongo.conf) and edit its specific parts to suit your needs.alternatively you can fit all these options into a single command (provided folders exist)use this same method (edit as needed) anytime you want to test things freely.",
"username": "Yilmaz_Durmaz"
},
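One step worth spelling out for the single-node variant above: the very first time the instance is started with --replSet, it still has to be initiated once from the shell; after that the configuration persists across restarts.

```javascript
// connected to the instance started above (e.g. mongo --port 27077)
rs.initiate()
```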
{
"code": "",
"text": "thanks @Yilmaz_Durmaz I’ll try that!",
"username": "DaveJames"
},
{
"code": "",
"text": "I would keep the default 27017 port for simplicity but yes that’s the equivalent without Docker. I use my Docker command line almost daily, especially when I’m working on the forum and I need to try something quickly and then trash the entire cluster.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
]
| Mongodb replica set, on localhost | 2022-11-30T15:59:49.527Z | Mongodb replica set, on localhost | 4,293 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "{\n _id: ObjectId(\"12321323\"),\n name: \"ageRank\",\n filters: [\"<25\", \"25-29\"]\n},\n{\n _id: ObjectId(\"12321323\"),\n name: \"age\",\n filters: [15, 18, 25]\n},\n{\n ageRank: [{filter: \"<25\"}, {filter: \"25-29\"} ],\n age:[{filter:\"15\"}, {filter: \"18\"},{filter: \"25\"} ]\n}\n",
"text": "Hello! I need Help with some retrieve dataI have a collection like this:And I want you to give them to me like this:If anyone can help me I would really appreciate it, thank you very much! ",
"username": "Sergi_Ramos_Aguilo"
},
{
"code": "test [direct: primary] test> db.coll.insertMany([{name: \"ageRank\", filters: [\"<25\", \"25-29\"] }, {name: \"age\", filters: [15, 18, 25] }])\n{\n acknowledged: true,\n insertedIds: {\n '0': ObjectId(\"63863a1a0721bcd1c687cd29\"),\n '1': ObjectId(\"63863a1a0721bcd1c687cd2a\")\n }\n}\ntest [direct: primary] test> db.coll.find()\n[\n {\n _id: ObjectId(\"63863a1a0721bcd1c687cd29\"),\n name: 'ageRank',\n filters: [ '<25', '25-29' ]\n },\n {\n _id: ObjectId(\"63863a1a0721bcd1c687cd2a\"),\n name: 'age',\n filters: [ 15, 18, 25 ]\n }\n]\ntest [direct: primary] test> db.coll.aggregate([\n... {\n... '$project': {\n... '_id': 0, \n... 'k': '$name', \n... 'v': {\n... '$map': {\n... 'input': '$filters', \n... 'as': 'i', \n... 'in': {\n... 'filter': '$$i'\n... }\n... }\n... }\n... }\n... }, {\n... '$group': {\n... '_id': null, \n... 'x': {\n... '$push': '$$ROOT'\n... }\n... }\n... }, {\n... '$project': {\n... 'result': {\n... '$arrayToObject': '$x'\n... }\n... }\n... }, {\n... '$replaceRoot': {\n... 'newRoot': '$result'\n... }\n... }\n... ])\n[\n {\n ageRank: [ { filter: '<25' }, { filter: '25-29' } ],\n age: [ { filter: 15 }, { filter: 18 }, { filter: 25 } ]\n }\n]\n[\n {\n '$project': {\n '_id': 0, \n 'k': '$name', \n 'v': {\n '$map': {\n 'input': '$filters', \n 'as': 'i', \n 'in': {\n 'filter': '$$i'\n }\n }\n }\n }\n }, {\n '$group': {\n '_id': null, \n 'x': {\n '$push': '$$ROOT'\n }\n }\n }, {\n '$project': {\n 'result': {\n '$arrayToObject': '$x'\n }\n }\n }, {\n '$replaceRoot': {\n 'newRoot': '$result'\n }\n }\n]\n",
"text": "Hi @Sergi_Ramos_Aguilo and welcome back! I think I got it. At least this is one way to do it.\nMaybe not the best but it works I guess.Here is my pipeline for an easy copy & paste:Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Works!\nThank you very much! I love this community\nCheers",
"username": "Sergi_Ramos_Aguilo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How I get documents with dynamic property | 2022-11-28T19:08:24.432Z | How I get documents with dynamic property | 972 |
null | [
"queries"
]
| [
{
"code": "{ a: 1, b: 1, name: 1 }{ a: true, b: 'value', name: /jon/i, sort: { name: 1 }}{ a: 1, b: 1, name: 1 },{ a: true, b: 'value' }",
"text": "I have name index which I use for sorting and I have to apply regex search on the same, below is the index:-\n{ a: 1, b: 1, name: 1 }Query:-\n{ a: true, b: 'value', name: /jon/i, sort: { name: 1 }}This Query is using the above index but { a: 1, b: 1, name: 1 }, but the query is taking more than 6s and the no.of records are 85K for { a: true, b: 'value' } query and post name filtering 26 records are returned.Which index should be created to support this regex query?\nOR\nHow can I perform a regex search quicker?\nThanks for the help in advance!",
"username": "Viraj_Chheda"
},
{
"code": "executionStats",
"text": "Hello @Viraj_Chheda ,I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, could you please provide below details?Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "{\n \"_id\": {\n \"$oid\": \"61d6d6a5568e7e17161g3565\"\n },\n \"user_details\": {\n \"id\": 54,\n \"name\": \"A SHIVAKUMAR\",\n \"name_in_lowercase\": \"a shivakumar\",\n }\n \"is_disable\": false\n \"source_with_assessment_id\": \"course_lesson_95891\",\n .\n .\n .\n #other fields\n}\n# basically I want to get results as we do with 'like' query\n# so that all the names which has 'shiv' in them apperares in the results eg: shivabc, abcshivdef, abcshiv\n\nFilter - { is_disable: false, source_with_assessment_id: 'course_lesson_95891', \"user_details.name_in_lowercase\": /shiv/i }\n\nSort - { \"user_details.name_in_lowercase\": 1 }\n\nexecution stats{\n\"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 26,\n \"executionTimeMillis\": 6621,\n \"totalKeysExamined\": 91735,\n \"totalDocsExamined\": 26,\n \"executionStages\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 26,\n \"executionTimeMillisEstimate\": 27,\n \"works\": 91736,\n \"advanced\": 26,\n \"needTime\": 91708,\n \"needYield\": 0,\n \"saveState\": 2477,\n \"restoreState\": 2477,\n \"isEOF\": 1,\n \"docsExamined\": 26,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"filter\": {\n \"user_details.name_in_lowercase\": {\n \"$regex\": \"shiv\",\n \"$options\": \"i\"\n }\n },\n \"nReturned\": 26,\n \"executionTimeMillisEstimate\": 27,\n \"works\": 91735,\n \"advanced\": 26,\n \"needTime\": 91708,\n \"needYield\": 0,\n \"saveState\": 2477,\n \"restoreState\": 2477,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"is_disable\": 1,\n \"source_with_assessment_id\": 1,\n \"user_details.name_in_lowercase\": 1\n },\n \"indexName\": \"is_disable_1_source_with_assessment_id_1_user_details.name_in_lowercase_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"is_disable\": [],\n \"source_with_assessment_id\": [],\n \"user_details.name_in_lowercase\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"is_disable\": [\n \"[false, false]\"\n ],\n \"source_with_assessment_id\": [\n \"[\\\"course_lesson_95891\\\", \\\"course_lesson_95891\\\"]\"\n ],\n \"user_details.name_in_lowercase\": [\n \"[\\\"\\\", {})\",\n \"[/shiv/i, /shiv/i]\"\n ]\n },\n \"keysExamined\": 91735,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n },\n \"allPlansExecution\":[] # 27 Plans were there\n}\n",
"text": "@Tarun_Gaur I have still not found solution to this, below are the details:-",
"username": "Viraj_Chheda"
},
{
"code": "# so that all the names which has 'shiv' in them apperares in the results eg: shivabc, abcshivdef, abcshivshivshiv",
"text": "From the explain output you postedThe server examines a lot of index keys to return just 26 documents, so the query does not effectively use the index. Ideally, nReturned, totalKeysExamined, and totalDocsExamined should be the same number (all 26 in this example), or reasonably close to one another, which means that the server does little to no extra work to return the results. In this particular example, the server needed to examine more than 3500 index keys per one returned document.# so that all the names which has 'shiv' in them apperares in the results eg: shivabc, abcshivdef, abcshivInitially, I was going to recommend you to use anchored regex and collation but as you mentioned above that you need all the results where shiv is present irrespective of the location so this won’t be helpful for your use case. However, if you can find a pattern in your queries that can help narrow down the index keys needed to be examined, that will most certainly be helpful. For example, search for shiv anywhere in the name, but maybe you can be reasonably sure that the person’s user id is between 10 and 1000. This additional information would be very useful for performance gain.If you’re using Atlas, perhaps you can check out Atlas Search which is an embedded full-text search in MongoDB Atlas that gives you a seamless, scalable experience.",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Thanks, @Tarun_Gaur for the inputs. We will check if we should avail atlas search for this.",
"username": "Viraj_Chheda"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Regex query taking time - is there any way to bring the query execution time low? | 2022-11-10T10:03:24.525Z | Regex query taking time - is there any way to bring the query execution time low? | 2,238 |
null | [
"aggregation"
]
| [
{
"code": "{\n \"_id\": \"6369b55019ddb435c190065a\",\n \"name\": \"Tips\",\n \"counter\": 10,\n \"last_updated\": {\n \"$date\": {\n \"$numberLong\": \"1667917950452\"\n }\n },\n \"type\": \"generic\"\n}\n{\n \"_id\": \"2f980777-3e45-4710-9f9f-231c6b79275e\",\n \"type\": \"ads\",\n \"default\": \"ads\",\n\n \"index\": 0,\n \"last_updated\": {\n \"$date\": {\n \"$numberLong\": \"1669497124258\"\n }\n },\n \"magazine_id\": \"63487e6a-52c7-4215-a6fe-50082a7f830b\",\n \"merge_group\": false,\n \"number\": 50,\n \"page_group\": [\n 46,\n 47,\n 48,\n 49,\n 50\n ],\n \"publication_id\": \"360cea8c-5837-45cf-908c-1276b1abd9ab\",\n \"reviewed\": true,\n \n \"tags\": [\n {\n \"_id\": \"61c89821-be46-47c5-8e32-3f71c05c863b\",\n \"counter\": 0,\n \"last_updated\": {\n \"$date\": {\n \"$numberLong\": \"1669402611000\"\n }\n },\n \"name\": \"Poll\",\n \"type\": \"generic\"\n }\n ],\n \"issue_date\": {\n \"$date\": {\n \"$numberLong\": \"620611200000\"\n }\n }\n}\n",
"text": "Hi folks I have two collections:Tags and Pages.A tag document is defined as:And a Page is defined as:So counting the top tags has not been a problem as I just do the aggregation on the collection. However how do I find also tags that exist in Tags but not in Pages. Ideally I would like to get a result with the missing Tags counted as 0.Thank you",
"username": "Vinicius_Carvalho"
},
{
"code": "lookup = { \"$lookup\" : {\n \"from\" : \"Pages\" ,\n \"as\" : \"_result\" ,\n \"localField\" : \"name\" ,\n \"foreignField\" : \"tags.name\"\n \"pipeline\" : [ { \"$limit\" : 1 } ]\n} }\n\nmatch = { \"$match\" : {\n \"_result\" : { \"$size\" : 0 }\n} }\n\ndb.Tags.aggregate( [ lookup , match ] )\n",
"text": "There must be something I do not understand because it sounds like a very simple pipeline with a lookup and a match should provide you with what you are looking for. Something like the untested:",
"username": "steevej"
},
{
"code": "Tag.name. | count\nTips | 187\nPoll | 23\nReviews | 0\nProfiles. |. 0\n",
"text": "Hi Steve, I’m sorry if I was not clear. I want something like an left join in SQL would do:There are elements in Tags not present in Pages, and I need to count both presence and absence. When I ran the aggregation only on Pages, I get only the present tags not the missing ones, does it make sense?Thanks",
"username": "Vinicius_Carvalho"
},
{
"code": "[{\n $match: {\n type: 'generic'\n }\n}, {\n $lookup: {\n from: 'Pages',\n localField: '_id',\n foreignField: 'tags._id',\n as: '_result'\n }\n}, {\n $addFields: {\n tagsCount: {\n $size: '$_result'\n }\n }\n}, {\n $sort: {\n tagsCount: 1\n }\n}]\n",
"text": "I managed to get this to work:",
"username": "Vinicius_Carvalho"
},
{
"code": "",
"text": "Your pipeline is essentially the same as mine with the following differences.An extra $match at the beginning for type:generic which was not mentioned in your original post.Field names in localField and foreignField that seems wrong compared to your sample documents. The localField:_id seems to be an ObjectId while foreignField:tags._id seems to be UUID. They could not match. I used localField:name and foreignField:tags.name because they were the only that could be matched.My $limit:1 and final $match wrongly assumed that you only wanted the unused tags.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for your input, helped a lot, I just documented the result in case someone else finds this.Cheers",
"username": "Vinicius_Carvalho"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Lookup collection and aggregation | 2022-11-29T22:16:46.588Z | Lookup collection and aggregation | 1,413 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "{\n \"_id\" : ObjectId(\"6091541aa4eea86edbc76dd0\"),\n \"type\" : \"login\",\n \"user\" : ObjectId(\"6qa9d80cd24c110524c9dbee\"),\n \"ip\" : \"127.0.0.1\",\n \"createdAt\" : ISODate(\"2021-05-04T14:03:06.670+0000\"),\n \"updatedAt\" : ISODate(\"2021-05-04T14:03:06.670+0000\"),\n}\n",
"text": "Hello! Would it be possible to create a query that goes through all access logs for all users and return an object with arrays with ips that has been used on multiple users?Example of the document for a loginIf it could return an object/print somehow which users has had the same ip and group them together? We have some users creating a lot of accounts (which is not allowed) but does’t always change his ip.",
"username": "amazing"
},
{
"code": "db.foo.insertMany([\n { \"ip\" : \"127.0.0.1\", \"user\": 1 },\n { \"ip\" : \"127.1.1.1\", \"user\": 2 },\n { \"ip\" : \"127.2.1.1\", \"user\": 3 },\n { \"ip\" : \"127.0.0.1\", \"user\": 4 },\n { \"ip\" : \"127.1.1.1\", \"user\": 5 },\n]);\ndb.foo.createIndex({ ip: 1, user: 1 });\ndb.foo.aggregate([\n { $sort: { ip: 1 } },\n { $group: {\n _id: \"$ip\",\n used: { $sum: 1 },\n users: { $push: { user: \"$user\" } }\n }},\n { $match: { used: { $gt: 1 } } }, \n])\nipuser",
"text": "Hi @amazing,Something like the following might be suitable for your needs:Note that if you’re only using the ip and user fields, having an index on these (as shown above) should greatly improve the performance of this operation.I’ve written up a longer form of this response at Efficiently Identifying Duplicates using MongoDB | ALEX BEVILACQUA as this question comes up pretty frequently ",
"username": "alexbevi"
},
{
"code": "",
"text": "Hi @alexbevi the results are exactly what I was looking for. Thank you so much!However, with our database it doesn’t really work, Maybe i wasn’t clear enough before but the collection has all logins so with your query it will just add the same user multiple times on all ips if the user have logged in more than once? (which most users have.So it should just add the user if it’s not a duplicate of itself ",
"username": "amazing"
},
{
"code": "db.foo.drop();\ndb.foo.insertMany([\n { \"ip\" : \"127.0.0.1\", \"user\": 1 },\n { \"ip\" : \"127.1.1.1\", \"user\": 1 },\n { \"ip\" : \"127.2.1.1\", \"user\": 2 },\n { \"ip\" : \"127.0.0.1\", \"user\": 1 },\n { \"ip\" : \"127.1.1.1\", \"user\": 2 },\n]);\ndb.foo.createIndex({ ip: 1, user: 1 });\ndb.foo.aggregate([\n { $sort: { ip: 1 } },\n { $group: {\n _id: { ip: \"$ip\", user: \"$user\" },\n used: { $sum: 1 }\n }},\n { $match: { used: { $gt: 1 } } }, \n])\nuserip",
"text": "@amazing in this case it sounds like you want to group by unique pairs of values (ex: ip/user):This would just count any time a user and ip pair appear more than once. Is this more along the lines of what you were looking for?If not it might help if you could provide a couple more sample documents that demonstrate what a “duplicate” looks like given your data.",
"username": "alexbevi"
},
{
"code": "db.foo.insertMany([\n { \"ip\" : \"123\", \"user\": 1 },\n { \"ip\" : \"123\", \"user\": 1 },\n { \"ip\" : \"123\", \"user\": 1 },\n { \"ip\" : \"456\", \"user\": 2 },\n { \"ip\" : \"456\", \"user\": 3 },\n { \"ip\" : \"789\", \"user\": 4 },\n]);\ndb.foo.createIndex({ ip: 1, user: 1 });\ndb.foo.aggregate([\n { $sort: { ip: 1 } },\n { $group: {\n _id: \"$ip\",\n used: { $sum: 1 },\n users: { $push: { user: \"$user\" } }\n }},\n { $match: { used: { $gt: 1 } } }, \n])\n{\n \"_id\": \"456\",\n \"used\": 2,\n \"users\": [\n {\n \"user\": 2\n },\n {\n \"user\": 3\n }\n ]\n }\n",
"text": "Hey @alexbevi no your first reply was closer! Let me give a very clear example So here the only thing that should return isThe one with 123 that also gets returned is from the same user logging in 3 times so that’s nothing wrong while user 2 & 3 both login on on the ip “456” which should be returned as it does now ",
"username": "amazing"
},
{
"code": "db.foo.aggregate([\n { $sort: { ip: 1 } },\n { $group: {\n _id: \"$ip\", \n users: { $addToSet: \"$user\" }\n }},\n { $match: { \n $expr: { $gt: [ { $size: \"$users\" }, 1] }\n }}\n])\n",
"text": "Thanks for clarifying. What you’re looking to do can be best accomplished by adding the user IDs to a set, then filtering the results by sets with more than 1 entry:",
"username": "alexbevi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Query to find duplicate users (ip) | 2022-11-29T14:10:02.235Z | Query to find duplicate users (ip) | 1,867 |
null | [
"replication",
"sharding"
]
| [
{
"code": "Oct 18 04:01:10.526700 //Replica.xecutor--1666065669.gz\nOct 18 04:01:32.726745 //ShardRegistry--1666065691.gz\nOct 18 04:50:28.010586 //ShardRe.Updater--1666068627.gz\nOct 18 04:50:39.290355 //ReplCoord-16--1666068638.gz\nOct 18 04:52:04.376591 //Logical.cheReap--1666068723.gz\nOct 18 04:52:10.809420 //Logical.cheReap--1666068723.gz\nOct 18 04:52:26.634675 //ReplCoord-0--1666068720.gz\n",
"text": "Hi, been working with mongo 6.0 for a few weeks now.\nUsing a sharded cluster configuration.\nObserved a bunch of core dumps happening, but was not able to inspect them with mongod on the VM that it happened in (core size was limited, so its reading garbage).\nCore dump names are truncated, but I believe you can identify.They happened all at once more or less:Thanks!",
"username": "Oded_Raiches"
},
{
"code": "mongodmongod",
"text": "Hi @Oded_RaichesCould you provide more background details:And also about the event itself:This seems like a lot of questions. but the goal is to know enough of the situation you’re facing, your hardware, and the events timeline, so people can imitate your situation and reproduce the issue reliably.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "mongod{\"t\":{\"$date\":\"2022-10-19T03:40:19.693+00:00\"},\"s\":\"I\", \"c\":\"SH_REFR\", \"id\":4619901, \"ctx\":\"CatalogCache-558\",\"msg\":\"Refreshed cached collection\",\"attr\":{\"namespace\":\"config.system.sessions\",\"lookupSinceVersion\":\"1|1024||633ffefb43612c48dd1e4cdd||Timestamp(1665138427, 6)\",\"newVersion\":\"{ chunkVersion: { t: Timestamp(1665138427, 6), e: ObjectId('633ffefb43612c48dd1e4cdd'), v: Timestamp(1, 1024) }, forcedRefreshSequenceNum: 2199, epochDisambiguatingSequenceNum: 2181 }\",\"timeInStore\":\"{ chunkVersion: \\\"None\\\", forcedRefreshSequenceNum: 2198, epochDisambiguatingSequenceNum: 2180 }\",\"durationMillis\":6}}\n{\"t\":{\"$date\":\"2022-10-19T03:40:19.713+00:00\"},\"s\":\"I\", \"c\":\"SH_REFR\", \"id\":4619901, \"ctx\":\"CatalogCache-558\",\"msg\":\"Refreshed cached collection\",\"attr\":{\"namespace\":\"config.system.sessions\",\"lookupSinceVersion\":\"1|1024||633ffefb43612c48dd1e4cdd||Timestamp(1665138427, 6)\",\"newVersion\":\"{ chunkVersion: { t: Timestamp(1665138427, 6), e: ObjectId('633ffefb43612c48dd1e4cdd'), v: Timestamp(1, 1024) }, forcedRefreshSequenceNum: 2201, epochDisambiguatingSequenceNum: 2183 }\",\"timeInStore\":\"{ chunkVersion: \\\"None\\\", forcedRefreshSequenceNum: 2200, epochDisambiguatingSequenceNum: 2182 }\",\"durationMillis\":4}}\n{\"t\":{\"$date\":\"2022-10-19T03:40:36.660+00:00\"},\"s\":\"W\", \"c\":\"NETWORK\", \"id\":4615610, \"ctx\":\"MirrorMaestro-8\",\"msg\":\"Failed to check socket connectivity\",\"attr\":{\"error\":{\"code\":6,\"codeName\":\"HostUnreachable\",\"errmsg\":\"Connection closed by peer\"}}}\n{\"t\":{\"$date\":\"2022-10-19T03:40:36.660+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22561, \"ctx\":\"MirrorMaestro-8\",\"msg\":\"Dropping unhealthy pooled connection\",\"attr\":{\"hostAndPort\":\"10.3.14.67:7501\"}}\n{\"t\":{\"$date\":\"2022-10-19T03:40:36.660+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"MirrorMaestro\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"10.3.14.67:7501\"}}\n{\"t\":{\"$date\":\"2022-10-19T03:41:54.107+00:00\"},\"s\":\"I\", \"c\":\"SH_REFR\", \"id\":4619901, \"ctx\":\"CatalogCache-559\",\"msg\":\"Refreshed cached collection\",\"attr\":{\"namespace\":\"config.system.sessions\",\"lookupSinceVersion\":\"1|1024||633ffefb43612c48dd1e4cdd||Timestamp(1665138427, 6)\",\"newVersion\":\"{ chunkVersion: { t: Timestamp(1665138427, 6), e: ObjectId('633ffefb43612c48dd1e4cdd'), v: Timestamp(1, 1024) }, forcedRefreshSequenceNum: 2203, epochDisambiguatingSequenceNum: 2185 }\",\"timeInStore\":\"{ chunkVersion: \\\"None\\\", forcedRefreshSequenceNum: 2202, epochDisambiguatingSequenceNum: 2184 }\",\"durationMillis\":5}}\n{\"t\":{\"$date\":\"2022-10-19T03:41:54.116+00:00\"},\"s\":\"I\", \"c\":\"SH_REFR\", \"id\":4619901, \"ctx\":\"CatalogCache-559\",\"msg\":\"Refreshed cached collection\",\"attr\":{\"namespace\":\"config.system.sessions\",\"lookupSinceVersion\":\"1|1024||633ffefb43612c48dd1e4cdd||Timestamp(1665138427, 6)\",\"newVersion\":\"{ chunkVersion: { t: Timestamp(1665138427, 6), e: ObjectId('633ffefb43612c48dd1e4cdd'), v: Timestamp(1, 1024) }, forcedRefreshSequenceNum: 2205, epochDisambiguatingSequenceNum: 2187 }\",\"timeInStore\":\"{ chunkVersion: \\\"None\\\", forcedRefreshSequenceNum: 2204, epochDisambiguatingSequenceNum: 2186 }\",\"durationMillis\":4}}\n",
"text": "Hi @kevinadi , thanks for the fast reply!Logs around this time:\nmongos logs:Unfortunately, I don’t have data about what occurred there at this time (logs got rolled) but I got a hint that the storage holding the database might have disconnected for some time.",
"username": "Oded_Raiches"
},
{
"code": "mongodmongosmongos",
"text": "Hi @Oded_RaichesUnfortunately we need the crashed mongod logs instead of the mongos logs, since the mongos process have no idea what’s going on in the shards themselves.You also mentioned:but I got a hint that the storage holding the database might have disconnected for some time.If I understand correctly, your setup have storage defined external to the nodes, and the VM lost connection to the storage node. Is this accurate? If yes, then this is like pulling the hard drive out when your PC is running. It’s best to double check that no data is lost or corrupt.I would verify that your deployment follows the settings put forth in the production notes to ensure best performance and reliability.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi @kevinadi, I didn’t notice any corruption in data, but when the storage is disconnected we get these cores.\nIs there a way I can share you the logs and cores? I’m not really able to read the cores, at first I thought they were corrupt but I’m probably not using the right binaries",
"username": "Oded_Raiches"
},
{
"code": "OplogApplier#0 0x00007f722da79817 in raise () from /lib/x86_64-linux-gnu/libpthread.so.0\n#1 0x000055a8bc928968 in mongo::(anonymous namespace)::endProcessWithSignal(int) ()\n#2 0x000055a8bc929c26 in mongo::(anonymous namespace)::myTerminate() ()\n#3 0x000055a8bcab46a6 in __cxxabiv1::__terminate(void (*)()) ()\n#4 0x000055a8bcab46e1 in std::terminate() ()\n#5 0x000055a8b9a11508 in mongo::ThreadPool::Impl::_startWorkerThread_inlock() [clone .cold.629] ()\n#6 0x000055a8bc716f48 in mongo::ThreadPool::Impl::schedule(mongo::unique_function<void (mongo::Status)>) ()\n#7 0x000055a8bc717083 in mongo::ThreadPool::schedule(mongo::unique_function<void (mongo::Status)>) ()\n#8 0x000055a8ba726f20 in mongo::repl::OplogApplierImpl::_applyOplogBatch(mongo::OperationContext*, std::vector<mongo::repl::OplogEntry, std::allocator<mongo::repl::OplogEntry> >) ()\n#9 0x000055a8ba725597 in mongo::repl::OplogApplierImpl::_run(mongo::repl::OplogBuffer*) ()\n#10 0x000055a8ba7ca02b in auto mongo::unique_function<void (mongo::executor::TaskExecutor::CallbackArgs const&)>::makeImpl<mongo::repl::OplogApplier::startup()::{lambda(mongo::executor::TaskExecutor::CallbackArgs const&)#1}>(mongo::repl::OplogApplier::startup()::{lambda(mongo::executor::TaskExecutor::CallbackArgs const&)#1}&&)::SpecificImpl::call(mongo::executor::TaskExecutor::CallbackArgs const&) ()\n#11 0x000055a8bc0f9ee0 in mongo::executor::ThreadPoolTaskExecutor::runCallback(std::shared_ptr<mongo::executor::ThreadPoolTaskExecutor::CallbackState>) ()\n#12 0x000055a8bc0fa2e0 in auto mongo::unique_function<void (mongo::Status)>::makeImpl<mongo::executor::ThreadPoolTaskExecutor::scheduleIntoPool_inlock(std::__cxx11::list<std::shared_ptr<mongo::executor::ThreadPoolTaskExecutor::CallbackState>, std::allocator<std::shared_ptr<mongo::executor::ThreadPoolTaskExecutor::CallbackState> > >*, std::_List_iterator<std::shared_ptr<mongo::executor::ThreadPoolTaskExecutor::CallbackState> > const&, std::_List_iterator<std::shared_ptr<mongo::executor::ThreadPoolTaskExecutor::CallbackState> > const&, std::unique_lock<mongo::latch_detail::Latch>)::{lambda(auto:1)#3}>(mongo::executor::ThreadPoolTaskExecutor::scheduleIntoPool_inlock(std::__cxx11::list<std::shared_ptr<mongo::executor::ThreadPoolTaskExecutor::CallbackState>, std::allocator<std::shared_ptr<mongo::executor::ThreadPoolTaskExecutor::CallbackState> > >*, std::_List_iterator<std::shared_ptr<mongo::executor::ThreadPoolTaskExecutor::CallbackState> > const&, std::_List_iterator<std::shared_ptr<mongo::executor::ThreadPoolTaskExecutor::CallbackState> > const&, std::unique_lock<mongo::latch_detail::Latch>)::{lambda(auto:1)#3}&&)::SpecificImpl::call(mongo::Status&&) ()\n#13 0x000055a8bc715a25 in mongo::ThreadPool::Impl::_doOneTask(std::unique_lock<mongo::latch_detail::Latch>*) ()\n#14 0x000055a8bc71719b in mongo::ThreadPool::Impl::_consumeTasks() ()\n#15 0x000055a8bc71865c in mongo::ThreadPool::Impl::_workerThreadBody(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()\n#16 0x000055a8bc718bd0 in std::thread::_State_impl<std::thread::_Invoker<std::tuple<mongo::stdx::thread::thread<mongo::ThreadPool::Impl::_startWorkerThread_inlock()::{lambda()#4}, , 0>(mongo::ThreadPool::Impl::_startWorkerThread_inlock()::{lambda()#4})::{lambda()#1}> > >::_M_run() ()\n#17 0x000055a8bcad093f in execute_native_thread_routine ()\n#18 0x00007f722da6e6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0\n#19 0x00007f722d79761f in clone () from 
/lib/x86_64-linux-gnu/libc.so.6\nSignalHandlerProgram terminated with signal SIGABRT, Aborted.\n#0 0x00007fb7875b6817 in raise () from /lib/x86_64-linux-gnu/libpthread.so.0\n[Current thread is 1 (Thread 0x7fb7843b2700 (LWP 5666))]\n(gdb) bt\n#0 0x00007fb7875b6817 in raise () from /lib/x86_64-linux-gnu/libpthread.so.0\n#1 0x00005558680be968 in mongo::(anonymous namespace)::endProcessWithSignal(int) ()\n#2 0x00005558680bfc26 in mongo::(anonymous namespace)::myTerminate() ()\n#3 0x000055586824a6a6 in __cxxabiv1::__terminate(void (*)()) ()\n#4 0x000055586824a6e1 in std::terminate() ()\n#5 0x0000555864c5c494 in mongo::repl::ReplicationCoordinatorImpl::AutoGetRstlForStepUpStepDown::_startKillOpThread() [clone .cold.3979] ()\n#6 0x00005558655bbf28 in mongo::repl::ReplicationCoordinatorImpl::AutoGetRstlForStepUpStepDown::AutoGetRstlForStepUpStepDown(mongo::repl::ReplicationCoordinatorImpl*, mongo::OperationContext*, mongo::repl::ReplicationCoordinator::OpsKillingStateTransitionEnum, mongo::Date_t) ()\n#7 0x00005558655c8e10 in mongo::repl::ReplicationCoordinatorImpl::stepDown(mongo::OperationContext*, bool, mongo::Duration<std::ratio<1l, 1000l> > const&, mongo::Duration<std::ratio<1l, 1000l> > const&) ()\n#8 0x000055586571c460 in mongo::stepDownForShutdown(mongo::OperationContext*, mongo::Duration<std::ratio<1l, 1000l> > const&, bool) ()\n#9 0x0000555865413029 in mongo::(anonymous namespace)::shutdownTask(mongo::ShutdownTaskArgs const&) ()\n#10 0x00005558680bb4a5 in mongo::(anonymous namespace)::runTasks(std::stack<mongo::unique_function<void (mongo::ShutdownTaskArgs const&)>, std::deque<mongo::unique_function<void (mongo::ShutdownTaskArgs const&)>, std::allocator<mongo::unique_function<void (mongo::ShutdownTaskArgs const&)> > > >, mongo::ShutdownTaskArgs const&) ()\n#11 0x00005558651f2ad2 in mongo::shutdown(mongo::ExitCode, mongo::ShutdownTaskArgs const&) ()\n#12 0x0000555866199b80 in mongo::(anonymous namespace)::signalProcessingThread(mongo::LogFileStatus) ()\n#13 0x0000555866199d25 in std::thread::_State_impl<std::thread::_Invoker<std::tuple<mongo::stdx::thread::thread<void (*)(mongo::LogFileStatus), mongo::LogFileStatus&, 0>(void (*)(mongo::LogFileStatus), mongo::LogFileStatus&)::{lambda()#1}> > >::_M_run() ()\n#14 0x000055586826693f in execute_native_thread_routine ()\n#15 0x00007fb7875ab6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0\n#16 0x00007fb7872d461f in clone () from /lib/x86_64-linux-gnu/libc.so.6\nReplica.xecutor",
"text": "Hi @kevinadi , I observed some crashes again.\nWhat happened here is that a storage drive was replaced.Do you know what it means and how we can prevent it?",
"username": "Oded_Raiches"
},
{
"code": "mongodmongod",
"text": "Hi @Oded_RaichesI have to reiterate that MongoDB was not designed to have storage pulled out under it while it’s running.If the mongod process cannot access the disk because it was suddenly not available, either deliberately or accidentally, it is not surprising that it crashes.how we can prevent it?Please don’t disconnect the disk while the mongod process is running.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "ftdc[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\nCore was generated by `/usr/bin/mongod --quiet --config /etc/mongod.conf'.\nProgram terminated with signal SIGABRT, Aborted.\n#0 raise (sig=<optimized out>) at ../sysdeps/unix/sysv/linux/raise.c:51\n51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n[Current thread is 1 (Thread 0x7fcb4e290700 (LWP 2679))]\n(gdb) bt\n#0 raise (sig=<optimized out>) at ../sysdeps/unix/sysv/linux/raise.c:51\n#1 0x0000562520b5b968 in mongo::(anonymous namespace)::endProcessWithSignal(int) ()\n#2 0x0000562520b5cc26 in mongo::(anonymous namespace)::myTerminate() ()\n#3 0x0000562520ce76a6 in __cxxabiv1::__terminate(void (*)()) ()\n#4 0x0000562520d7c029 in __cxa_call_terminate ()\n#5 0x0000562520ce7095 in __gxx_personality_v0 ()\n#6 0x00007fcb59ec2573 in ?? () from /lib/x86_64-linux-gnu/libgcc_s.so.1\n#7 0x00007fcb59ec2ad1 in _Unwind_RaiseException () from /lib/x86_64-linux-gnu/libgcc_s.so.1\n#8 0x0000562520ce7807 in __cxa_throw ()\n#9 0x000056251dc78971 in mongo::error_details::throwExceptionForStatus(mongo::Status const&) ()\n#10 0x000056251dc8d8c2 in mongo::uassertedWithLocation(mongo::Status const&, char const*, unsigned int) ()\n#11 0x000056251d72b02e in mongo::FTDCController::doLoop() [clone .cold.495] ()\n#12 0x000056251e170dec in std::thread::_State_impl<std::thread::_Invoker<std::tuple<mongo::stdx::thread::thread<mongo::FTDCController::start()::{lambda()#2}, , 0>(mongo::FTDCController::start()::{lambda()#2})::{lambda()#1}> > >::_M_run() ()\n#13 0x0000562520d0393f in execute_native_thread_routine ()\n#14 0x00007fcb59c9a6db in start_thread (arg=0x7fcb4e290700) at pthread_create.c:463\n#15 0x00007fcb599c361f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95\n(gdb)\n",
"text": "Thanks for the reply @kevinadi .\nSo essentially your saying is that mongo can crash anywhere when it is having storage issues?\nFor storage systems this is pretty common for a disk to have issues and be needing replace.One more example I got recently, is ftdc crash because of this.\nHere is the stacktrace:",
"username": "Oded_Raiches"
},
{
"code": "",
"text": "mongo can crash anywhere when it is having storage issuesYes. Like all software that needs disks for permanent storage. Like a car that crashes when it looses a wheel.The solution is RAID at the storage level and replication at the system architecture level.",
"username": "steevej"
}
]
| Multiple core dumps | 2022-10-24T10:52:32.376Z | Multiple core dumps | 3,303 |
null | [
"serverless"
]
| [
{
"code": "currentOpadmin()AdminDatabaseThe mongodb.admin() method returns an AdminDatabase object. The object contains helper methods that wrap a subset of MongoDB database commands. See admin.getDBNames()context.service.get('Cluster0').admin().command({currentOp: 1})",
"text": "I’m trying to execute admin commands from Atlas Serverless Functions, I’m unable to figure out how to execute admin commands such as currentOpThe docs explain the use of admin() which returns an instance of AdminDatabase but it’s unclear how to execute commands through this object.It explains The mongodb.admin() method returns an AdminDatabase object. The object contains helper methods that wrap a subset of MongoDB database commands. See admin.getDBNames()But doesn’t expand on any additional functions/commands.I would expect that I’d be able to execute commands like context.service.get('Cluster0').admin().command({currentOp: 1})Any additional guidance would be much appreciated.",
"username": "certifiedfoodclassic"
},
{
"code": "admin.getDBNames()",
"text": "Hi Matt,Admin commands in functions are generally not supported.The document you’ve referenced is specifically for the use of admin.getDBNames() which is the exception. Please see article below detailing which commands are supported:Regards\nManny",
"username": "Mansoor_Omar"
},
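For completeness, a minimal sketch of the one supported helper mentioned above (the linked data source name "mongodb-atlas" is an assumption; use whatever your service is actually called):

```javascript
exports = async function () {
  // admin().getDBNames() is the documented exception; other admin commands are not available here
  const dbNames = await context.services.get("mongodb-atlas").admin().getDBNames();
  return dbNames;
};
```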
{
"code": "admin().getDBNames()",
"text": "So is admin().getDBNames() the only admin command supported?Is there a specific reason this?",
"username": "certifiedfoodclassic"
}
]
| Executing admin commands from Atlas Serverless Function | 2022-11-29T16:29:07.613Z | Executing admin commands from Atlas Serverless Function | 1,937 |
[
"node-js",
"mongoose-odm"
]
| [
{
"code": "",
"text": "Hi,\nI am facing a very strange issue, MongoDB with help of mongoose is not connecting with the atlas cluster database, while I am using my wifi - Dlink, on the other hand, when I am using a mobile hotspot, it is working fine.screenshots of not conn have been uploaded. kindly help me. please reply asap.Please let me know how to resolve this issue, (I can not use a\n\nfirst1280×367 61 KB\n\nmobile hotspot)",
"username": "Govind_Bisen"
},
{
"code": "",
"text": "Search this forum for ETIMEOUT. It is a recurring issue.",
"username": "steevej"
}
]
| Mongodb Not connecting due to internent? | 2022-11-30T04:17:40.042Z | Mongodb Not connecting due to internent? | 863 |
|
null | [
"queries",
"crud",
"compass"
]
| [
{
"code": "let cache: {\n [key: string]: any;\n} = null;\nConfigModel.find({}).then((config) => {\n cache = Object.fromEntries(config.map((c) => [c.name, c.value]));\n});\nexport async function populate() {\n const config = await ConfigModel.find({});\n cache = Object.fromEntries(config.map((c) => [c.name, c.value]));\n}\nsetInterval(() => {\n ConfigModel.find({}).then((config) => {\n cache = Object.fromEntries(config.map((c) => [c.name, c.value]));\n });\n}, 60000);\n\nexport function getConfig(value: string) {\n return cache[value];\n}\n",
"text": "Hi, i’m using this caching system for a config collection I have on mongoDB, but I’d like to use a post action with the schema, to live edit the cache when editing a property using Compass, currently, I have to wait 1min for the changes to be taken in count, I tried to use .post(‘updateOne’), but I got nothing.",
"username": "Simon_N_A"
},
{
"code": "",
"text": "It looks like you wantMongoDB triggers, change streams, database triggers, real time",
"username": "steevej"
},
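A minimal sketch of that idea with a change stream (it assumes the ConfigModel, cache and populate() from the snippet above, and a replica-set/Atlas deployment, which change streams require):

```javascript
const changeStream = ConfigModel.watch([], { fullDocument: 'updateLookup' });

changeStream.on('change', (change) => {
  if (change.fullDocument) {
    // insert/update/replace: refresh only the edited entry
    cache[change.fullDocument.name] = change.fullDocument.value;
  } else if (change.operationType === 'delete') {
    // deletes carry no fullDocument, so fall back to a full reload
    populate();
  }
});
```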
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Keep track of Compass changes | 2022-11-30T12:37:20.805Z | Keep track of Compass changes | 1,058 |
null | [
"atlas-cluster",
"database-tools"
]
| [
{
"code": "",
"text": "Dear Community,\ndid anyway faced such a response. I am trying to think where could my problem be.\nI tried to import to my cluster with the command line tools. I tried mongoimport --uri mongodb+srv://hermann17:@movies-couch-api.fyn8ikd.mongodb.net/ --collection --type --file .\nThe response was : Failed: open movies.json: The system cannot find the file specified.\n2022-11-30T14:34:51.954+0100 0 document(s) imported successfully. 0 document(s) failed to import.\nAny thoughts or suggestions?",
"username": "Hermann_Rasch"
},
{
"code": "",
"text": "Try putting the connection string within single quotes.",
"username": "steevej"
}
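For illustration only (every name below is a placeholder, not the poster's real values): quoting the URI, as suggested above, and running the command from the directory that actually contains movies.json looks like this:

```sh
cd /path/to/data
mongoimport --uri 'mongodb+srv://user:<password>@cluster0.example.mongodb.net/sample_db' \
  --collection movies --type json --file movies.json
```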
]
| I am not sure why I am not being able to import to my Cluster | 2022-11-30T13:35:29.314Z | I am not sure why I am not being able to import to my Cluster | 1,096 |
null | [
"node-js"
]
| [
{
"code": "",
"text": "I am unable to resolve this error. Need guidance to resolve this error.ReferenceError: XPathExpression is not defined\nat Object. (E:\\Databases\\FruitsProject\\app.js:2:16)\nat Module._compile (node:internal/modules/cjs/loader:1159:14)\nat Module._extensions…js (node:internal/modules/cjs/loader:1213:10)\nat Module.load (node:internal/modules/cjs/loader:1037:32)\nat Module._load (node:internal/modules/cjs/loader:878:12)\nat Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)\nat node:internal/main/run_main_module:23:47",
"username": "Hardik_Rathod"
},
{
"code": "XPathExpression",
"text": "Hello @Hardik_Rathod , Welcome to the Developer Community Forum,at Object. (E:\\Databases\\FruitsProject\\app.js:2:16)It looks like you have used XPathExpression property but it is not defined, can you please show more details like code in the app.js file?",
"username": "turivishal"
}
]
| ReferenceError: XPathExpression is not defined | 2022-11-30T10:47:55.858Z | ReferenceError: XPathExpression is not defined | 1,143 |
[
"security"
]
| [
{
"code": "",
"text": "Hi Everyone,I have 2 api’s using the same method to connect to the same mongo db (same URI), the one is working while the other throws an error : MongoParseError: URI malformed, cannot be parsed.Anyone for helping me on this ?Plz, check attached screenshot.\napi-node-express-mongo1803×941 221 KB\nThank you.",
"username": "ryzeforce"
},
{
"code": "",
"text": "Hi there, back with a solution in case it could help some of you.\nActually, it has to do with the node version I use for my app. I had to downgrade it from node 19.1.0 to node 18.10.0 and it did the trick.",
"username": "ryzeforce"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoParseError: URI malformed, cannot be parsed - 2 api's, one works, other one not | 2022-11-30T08:35:18.560Z | MongoParseError: URI malformed, cannot be parsed - 2 api’s, one works, other one not | 3,281 |
|
null | [
"replication",
"security"
]
| [
{
"code": "",
"text": "Hello everyone,I am facing an issue where a secondary member of a 5-node replica-set always elects primary as a sync-source, having an specialized internal setup, this is not fit for our environment, and I was wondering if I can manaully setup a mechanism that will force sync from a non-primary member (the member ofcourse if eligible to be a sync source). We are using 4.2 version of MongoDBThanks everyone!",
"username": "Tin_Cvitkovic"
},
{
"code": "initialSyncSourceReadPreference“Starting in MongoDB 4.4, you can specify the preferred initial sync source using the initialSyncSourceReadPreference parameter. This parameter can only be specified when starting the mongod”",
"text": "I’ve found out the solution to my problem would be initialSyncSourceReadPreference .\n“Starting in MongoDB 4.4, you can specify the preferred initial sync source using the initialSyncSourceReadPreference parameter. This parameter can only be specified when starting the mongod”",
"username": "Tin_Cvitkovic"
},
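For anyone on 4.4 or newer, the parameter is startup-only and is passed like this (the read-preference value shown is just one of the allowed modes):

```sh
mongod --config /etc/mongod.conf \
  --setParameter initialSyncSourceReadPreference=secondaryPreferred
```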
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Sync from non-primary | 2022-11-24T08:02:37.654Z | Sync from non-primary | 1,824 |
null | []
| [
{
"code": "",
"text": "Hi,I am searching how to upgrade from v3.2 to a stable version(v4.4 or v5.0 in shortlist)。 I found mongodb v5.0 has some more extra cpu instructions requirements…(MongoDB 5.0 CPU Intel G4650 compatibility)Is there a list for v5.0 cpu instructions requirements?Thanks a lot!",
"username": "Xu_Han"
},
{
"code": "",
"text": "我的赛扬n5105也不能启动。系统是debian11,mongo版本是5.0。报错非法指令,就是不支持指令集了。",
"username": "cao_baocheng"
}
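The usual culprit for this illegal-instruction error is a CPU without AVX, which the prebuilt MongoDB 5.0+ x86_64 binaries expect; a quick way to check on Linux (a sketch, not an official diagnostic) is:

```sh
grep -q avx /proc/cpuinfo && echo "AVX present" || echo "AVX missing: 5.0+ prebuilt binaries will not start"
```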
]
| Mongodb v5.0 cpu instructions requirements | 2022-03-24T06:36:16.324Z | Mongodb v5.0 cpu instructions requirements | 2,427 |
[
"aggregation",
"node-js",
"atlas-search"
]
| [
{
"code": "Given the following sample documents:\n\n{_id:1, name: \"Quesedillas Inc.\", active: true },\n{_id:2, name: \"Pasta Inc.\", active: true },\n{_id:3, name: \"Tacos Inc.\", active: false },\n{_id:4, name: \"Cubanos Inc.\", active: false },\n{_id:5, name: \"Chicken Parm Inc.\", active: false },\n\n\nA company wants to create a mobile app for users to find restaurants by name. The developer wants to show the user restaurants that match their search. An Atlas Search index has already been created to support this query.\n\nWhat query satisfies these requirements?\n\nA. db.restaurants.aggregate([{ \"$search\": { \"text\": { \"path\": \"name\", \"synonym\": \"cuban\"} } }])Your Answer\nB. db.restaurants.aggregate([{ \"$search\": { \"text\": { \"path\": \"name\", \"query\": \"cuban\"} } }])\nC. db.restaurants.aggregate([{ \"$search\": { \"text\": { \"field\": \"name\", \"query\": \"cuban\"} } }])\nD. db.restaurants.aggregate([{ \"$search\": { \"text\": { \"field\": \"name\", \"synonym\": \"cuban\"} } }])\n",
"text": "HiI’ve been answering the practice questions for Associate Developer Node.js from https://learn.mongodb.com/learn/course/associate-developer-node-practice-questions/prep-questions/practice-questionsThere is a question:So I’ve tried all 4 and have been told all 4 are wrong… see screen shots (attached)\n\nwrongA1508×132 38.9 KB\n\n\nwrongB1564×144 38.7 KB\n\n\nwrongC1524×140 40.3 KB\n\n\nwrongD1560×162 42.5 KB\nannoying - means you can’t get 100% on the practice",
"username": "Ryan_Moore"
},
{
"code": "",
"text": "Hi @Ryan_Moore,Thanks for flagging it! We have forwarded this to the concerned team. We will keep you updated.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "There are so many such questions which are not clear. There is no way knowing the correct answer and supporting reasoning.",
"username": "neeraj"
},
{
"code": "",
"text": "Hi @neeraj,Thanks for your post. And we fixed the question earlier today and I’ll encourage you to re-attempt the practice exam and let us know your feedback!Let me know if you face any further problems!Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "thanks @Kushagra_Kesav for your response. Will try.",
"username": "neeraj"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Practice questions no valid answer | 2022-11-17T17:16:32.131Z | Practice questions no valid answer | 2,874 |
|
null | [
"aggregation"
]
| [
{
"code": "",
"text": "I need to update a number that is in an object nested on an array of the document collection, I don’t want to use the aggregation pipeline since I cannot use arrayFilters with it and don’t want to update all the objects of the array cause is not needed, the problem is with the language that there is some inaccuracies when working with decimals and my app is a financial one that needs more accurate results in the order of 2 decimal places, I see some people that say to solve this client side, but several people can update the same document so can be error prone, how can I round to 2 decimal places without aggregation or use array filters with aggregation to update just the object that is needed to update?",
"username": "Santiago_Gonzalez"
},
{
"code": "",
"text": "Could you work with integers instead of reals: in cents instead of euros (of dollars)",
"username": "Peter_Kaagman"
},
{
"code": "",
"text": "thanks for your reply, sorry but I don’t understand",
"username": "Santiago_Gonzalez"
},
{
"code": "",
"text": "With $map in an aggregation pipeline you do not update all the objects of an array since with $cond you simply return $this for elements that do not need to be updated.To help you further we will need samples documents and expected results. What ever you tried together with how it fails is helpful for us to know.And finally, $convert would be the way to convert your amounts.",
"username": "steevej"
}
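To make the $map / $cond idea concrete, here is an untested sketch of a pipeline update that rounds only the element being edited; the field names (transactions, txnId, amount) and the two variables (documentId, targetTxnId) are invented for the example:

```javascript
db.accounts.updateOne(
  { _id: documentId },
  [ { $set: {
      transactions: {
        $map: {
          input: "$transactions",
          as: "t",
          in: {
            $cond: [
              { $eq: [ "$$t.txnId", targetTxnId ] },  // only the element being edited
              { $mergeObjects: [ "$$t", { amount: { $round: [ "$$t.amount", 2 ] } } ] },
              "$$t"                                   // every other element is returned unchanged
            ]
          }
        }
      }
  } } ]
)
```

For exact money arithmetic, the integer-cents suggestion above (or Decimal128) avoids the binary floating-point inaccuracy altogether.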
]
| How to round to 2 decimal places without aggregation or use array filters with aggregation | 2022-11-28T00:17:20.709Z | How to round to 2 decimal places without aggregation or use array filters with aggregation | 1,185 |
null | []
| [
{
"code": "",
"text": "Requirement is that an application instance deployed across multiple AWS regions needs connectivity/access to the same DB. Does MongoDB Atlas support multiple Private endpoints to VPCs in different AWS regions where the application is deployed?",
"username": "Sudhir_Harikant"
},
{
"code": "ABC",
"text": "Hi @Sudhir_Harikant,Requirement is that an application instance deployed across multiple AWS regions needs connectivity/access to the same DB.Could you provide following details on your Atlas environment regarding this question:Additionally, could you briefly describe the AWS application infrastructure? As an example:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "ABC",
"text": "Hi @Jason_Tran ,We are still trying to evaluate the best fit for this requirement. Can we sharded cluster deployed in a multi-region AWS environment and have private link connections to application instance running in each region? Will this provide HA in case one region is unavailable?This is right, the instances deployed across multiple regions belong to same application and hence need communication to a single DB",
"username": "Sudhir_Harikant"
},
{
"code": "",
"text": "hi @Jason_Tran,Could you guide me on the suitable deployment type?Regards,\nSudhir",
"username": "Sudhir_Harikant"
},
{
"code": "ABC",
"text": "Hi Sudhir,I wouldn’t be able to advise a “best fit” for your organization regarding the private endndpoint connectivity as there are many factors that would affect this. In saying so, If it’s a replica set multi-region cluster, you’ll need to ensure AWS PrivateLink must be active in all regions into which you deploy a multi-region cluster.This is right, the instances deployed across multiple regions belong to same application and hence need communication to a single DBFor replica sets, based off a quick glance at the application instances and assuming you have a single cluster (single region cluster), you would need set up Same Region private endpoint <—> Same Region application instance private link and have your other region applications have peering to the “Same Region” application VPC to use the VPC endpoint which connects to the Atlas cluster:To connect to Atlas database deployments using AWS PrivateLink from regions in which you haven’t deployed a private endpoint connection, you must peer VPCs in those regions to VPCs in a region in which you have deployed a private endpoint connection.For a sharded cluster I would go over the (Optional) Regionalized Private Endpoints for Multi-Region Sharded Clusters documentation.However, you may wish to confirm with the Atlas support team via the in-app chat if you have any further queries regarding private endpoint connectivity.Lastly, the following documentation may be of use to you to go over: Private Endpoints - LimitationsRegards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| Connecting to a Atlas DB from different AWS regions | 2022-11-08T02:33:50.108Z | Connecting to a Atlas DB from different AWS regions | 3,163 |
null | [
"queries",
"transactions"
]
| [
{
"code": "",
"text": "Hi,\nis there a way to set queries timeout with command within single transaction?\nIn other words, i’m looking for equivalent command toSET LOCAL statement_timeout TO 200;Thanks for a help in advance!",
"username": "Mateusz_Glowinski"
},
{
"code": "wtimeoutwriteconcernsession.startTransaction",
"text": "Hi @Mateusz_Glowinski and welcome to the MongoDB Community !I think you are looking for the wtimeout that you can set in the writeconcern parameter of the session.startTransaction function.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thank you @MaBeuLux88 for a really nice hint, but this will not solve my problem.What I’m actually trying to do, is to use djongo package to communicate with MongoDB. I want to get an database error, once a query takes too much time.\nSomething similar to this.",
"username": "Mateusz_Glowinski"
},
{
"code": "max_time_ms",
"text": "In Python (and equivalent in the other drivers of course) you have max_time_ms on cursors: cursor – Tools for iterating over MongoDB query results — PyMongo 4.3.3 documentationThere are also a bunch of timeouts that you can set in the connection string: https://www.mongodb.com/docs/manual/reference/connection-string/ (CTRL + F for “ms” to find them all).I hope this helps,\nMaxime.",
"username": "MaBeuLux88"
},
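A minimal PyMongo-level sketch of max_time_ms (collection and filter are invented; djongo would need to expose this on its underlying cursor):

```python
from pymongo import MongoClient, errors

db = MongoClient("mongodb://localhost:27017")["mydb"]  # connection details assumed

try:
    # the server aborts the query once it has run for roughly 200 ms
    docs = list(db.orders.find({"status": "open"}).max_time_ms(200))
except errors.ExecutionTimeout:
    print("query exceeded the 200 ms budget")
```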
{
"code": "",
"text": "Thank you, that is exactly what i need!",
"username": "Mateusz_Glowinski"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Is there an equivalent command to SQL "SET LOCAL statement_timeout"? | 2022-11-29T08:46:43.281Z | Is there an equivalent command to SQL “SET LOCAL statement_timeout”? | 1,695 |
null | [
"field-encryption"
]
| [
{
"code": "",
"text": "Hi Everyone!\nAfter seeing the Field level encryption on a session of Mongo DB live we decided to try implementing it.\nI successfully generated the key vault collection using a AWS KMS key. Next I followed the docs and added a JSON schema to the collection which also worked and I can see the schema in compass. Now what I cant seem to get working is the actual insert. Everything I have tried gives the following error on insert\n\"MongoError: BSON field ‘insert.jsonSchema’ is an unknown field. \"\nWhat exactly does this error mean with FLE?",
"username": "kyle_mcarthur"
},
{
"code": "",
"text": "I am actually working with dev support now, and they are looking into it.",
"username": "kyle_mcarthur"
},
{
"code": "",
"text": "Hi @kyle_mcarthur, and welcome to the forum!Everything I have tried gives the following error on insert\n\"MongoError: BSON field ‘insert.jsonSchema’ is an unknown field. \"The error message sounds related to a misconfiguration on the application side.\nYou may find the following resources useful:I am actually working with dev support now, and they are looking into it.Feel free to post back here if you have a solution. It may beneficial for other users who may encounter similar configuration issue in the future.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Did you find a solution? I am having the same problem.",
"username": "Emilio_Lopez"
},
{
"code": "",
"text": "Hi Emilio,\nYes, my problem was that the mongocryptd process was not running. You will have to download the enterprise edition of mongo and start the process.",
"username": "kyle_mcarthur"
},
{
"code": "const extraOptions = {\n mongocryptdURI: connectionString,\n mongocryptdBypassSpawn: true,\n };\n",
"text": "Thanks Kyle.I am actually using a 4.2 MongoDB Atlas Cluster. However I’m still having that error message.What do you have on your autoEncryption.extraOptions object?I currently have:",
"username": "Emilio_Lopez"
},
{
"code": "",
"text": "So that is the part that is confusing, you CANT use your atlas cluster as the encryption process. You have to start the process yourself and use it on the same server your backend is running or expose the process yourself on a separate server and connect with the extra options but you can’t use your atlas instance. I chose the first option and I let the mongo node driver start the process on my server (the driver will try and start it if it’s in your PATH if you remove the bypassSpawn option you have set)",
"username": "kyle_mcarthur"
},
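A minimal sketch of that first option with the Node driver (it assumes the mongocryptd binary is on the PATH and that keyVaultNamespace, kmsProviders and schemaMap are configured as earlier in the thread; the mongodb-client-encryption package must be installed):

```javascript
async function connectWithAutoEncryption(uri, keyVaultNamespace, kmsProviders, schemaMap) {
  const { MongoClient } = require('mongodb');
  const client = new MongoClient(uri, {
    autoEncryption: {
      keyVaultNamespace,
      kmsProviders,
      schemaMap,
      // no extraOptions: with the mongocryptd binary on the PATH the driver
      // spawns a local mongocryptd (default port 27020) by itself
    },
  });
  await client.connect();
  return client;
}
```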
{
"code": "",
"text": "Oh ok I understand.I might end up using cloud run base on your solution. Would it be possible for you to share the code regarding including the encryption binary in the docker file? And also the exponential backoff loop you wrote?It would really help me a lot, but if you can’t, that’s fine.I really appreciate all your help man.",
"username": "Emilio_Lopez"
},
{
"code": "",
"text": "So when I talked to dev support about that (i was trying to use my atlas connection string as well at first) they said its because its “client” encryption but they would take my feedback to the team about using an atlas string. I too am using a serverless architecture but I am using Cloud Run and since that is a docker container I am able to include the encryption binary in the docker container and the node driver starts it automatically.The biggest issue I had on cloud run was the driver code attempts to wait for the process to be started and that didnt always work on cloud run. I basically wrote an exponential backoff loop with a try catch trying to connect to the mongo cluster and now it works flawlessly as it eliminates the race condition with the node driver and the encryption process starting.I use AWS lambda on a different project and with that you would have to do what you said and start the process on a lightsail or ec2 instance. If you havent looked at cloud run I would suggest taking a look as it works great for something like this. If you dont want to use google cloud, aws has fargate that should work similar but its a bit more involved in the setup than cloud run.",
"username": "kyle_mcarthur"
},
{
"code": "mongocryptdDockerfilemongocryptd",
"text": "Hi @Emilio_Lopez and @kyle_mcarthur,So that is the part that is confusing, you CANT use your atlas cluster as the encryption processThe general notion of MongoDB Client-Side Field Level Encryption is that the server never sees the unencrypted values. In this case the driver would automatically encrypt the values via a client-side process (i.e. mongocryptd) before sending data to Atlas.Please note that MongoDB also has Encryption At Rest capability. Atlas encrypts all cluster storage and snapshot volumes, ensuring the security of all cluster data at rest.Would it be possible for you to share the code regarding including the encryption binary in the docker file?If you’re looking for an example Dockerfile that includes just mongocryptd please see\ngithub.com/sindbach/field-level-encryption-docker .I’d also recommend to review FLE: Encryption Components for more information around Client-Side Field Level Encryption.Regards,\nWan.",
"username": "wan"
},
{
"code": "let pollForMongo = true;\nlet maxPolls = 0;\nlet backoff = 0;\nwhile (pollForMongo && maxPolls < 10) {\n console.log('polling.....', maxPolls);\n try {\n // eslint-disable-next-line no-await-in-loop\n await mongoose.connect(uri, {\n useUnifiedTopology: true,\n useNewUrlParser: true,\n useCreateIndex: true,\n autoEncryption: {\n keyVaultNamespace,\n kmsProviders,\n schemaMap,\n },\n });\n pollForMongo = false;\n } catch (e) {\n console.log('time until next poll...', 1000 + backoff);\n // eslint-disable-next-line no-await-in-loop\n await sleep(1000 + backoff);\n maxPolls += 1;\n backoff += 1000;\n }\n}\n#Use the official lightweight Node.js 13 image.\n#https://hub.docker.com/_/node\nFROM node:13.5-slim\n\n#Create and change to the app directory.\nWORKDIR ./\n\n#Copy application dependency manifests to the container image.\n#A wildcard is used to ensure both package.json AND package-lock.json are copied.\n#Copying this separately prevents re-running npm install on every code change.\nCOPY package*.json ./\n\n#Install production dependencies.\nRUN npm install --only=production\n\n#Copy local code to the container image.\nCOPY . ./\nRUN apt-get update && apt-get install -y --no-install-recommends apt-utils\nRUN apt install /encryption/mongocryptd.deb\n\n#Run the web service on container startup.\nCMD [ \"npm\", \"start\" ]\n",
"text": "A backoff loop will look something like this:An example docker file is something like:And then in my repo I placed mongocryptd.deb in a folder called encryption.",
"username": "kyle_mcarthur"
},
{
"code": "",
"text": "Thanks man!By the way, are you having any performance issues with mongocryptd? My writes are taking twice the amount of time even if the document I am saving doesn’t have any encrypted fields. Also when I make 20+ writes concurrently it just stops working.",
"username": "Emilio_Lopez"
},
{
"code": "",
"text": "I have fortunately not had any performance problems with the encryption process. Let me know if you find out the cause.",
"username": "kyle_mcarthur"
},
{
"code": "",
"text": "Nevermind I figured it out",
"username": "Emilio_Lopez"
},
{
"code": "",
"text": "I will.Did you have this error when running it on docker?Error: /app/node_modules/mongodb-client-encryption/build/Release/mongocrypt.node: invalid ELF header.",
"username": "Emilio_Lopez"
},
{
"code": "",
"text": "hi are you downloaded ‘mongocryptd.deb’ with these config? Screenshot 2020-09-13 at 12.39.49 PM626×616 35.6 KBI am not able to install mongocryptd.deb due to dependences.\nE: Release ‘mongocryptd.deb’ for ‘python-pymongo’ was not found\nE: Release ‘mongocryptd.deb’ for ‘libmongodb-java’ was not found\nE: Release ‘mongocryptd.deb’ for ‘jmeter-mongodb’ was not found\nE: Release ‘mongocryptd.deb’ for ‘python3-pymongo’ was not found\nE: Release ‘mongocryptd.deb’ for ‘libcatmandu-store-mongodb-perl’ was not found\nE: Release ‘mongocryptd.deb’ for ‘libmongodb-perl’ was not found\nE: Release ‘mongocryptd.deb’ for ‘libmongo-client-dev’ was not found\nE: Release ‘mongocryptd.deb’ for ‘libmongo-client0’ was not found\nE: Release ‘mongocryptd.deb’ for ‘libmongo-client-doc’ was not found\nE: Release ‘mongocryptd.deb’ for ‘libmongo-client0-dbg’ was not found\nE: Release ‘mongocryptd.deb’ for ‘libmongoc-1.0-0’ was not found\nE: Release ‘mongocryptd.deb’ for ‘libmongoc-dev’ was not found\nE: Release ‘mongocryptd.deb’ for ‘libmongoc-doc’ was not found\nE: Release ‘mongocryptd.deb’ for ‘libmongoclient-dev’ was not found\nE: Release ‘mongocryptd.deb’ for ‘libmongoclient0’ was not found\nE: Release ‘mongocryptd.deb’ for ‘php-mongodb’ was not found\nE: Release ‘mongocryptd.deb’ for ‘php-doctrine-mongodb-odm’ was not found\nE: Release ‘mongocryptd.deb’ for ‘php-horde-mongo’ was not found\nE: Release ‘mongocryptd.deb’ for ‘php7.0-mongodb’ was not found\nE: Release ‘mongocryptd.deb’ for ‘php-mongo’ was not found\nE: Release ‘mongocryptd.deb’ for ‘libpocomongodb46’ was not found\nE: Release ‘mongocryptd.deb’ for ‘prometheus-mongodb-exporter’ was not found\nE: Release ‘mongocryptd.deb’ for ‘puppet-module-puppetlabs-mongodb’ was not found\nE: Release ‘mongocryptd.deb’ for ‘python-pymongo-ext’ was not found\nE: Release ‘mongocryptd.deb’ for ‘python-pymongo-doc’ was not found\nE: Release ‘mongocryptd.deb’ for ‘python2.7-pymongo’ was not found\nE: Release ‘mongocryptd.deb’ for ‘python2.7-pymongo-ext’ was not found\nE: Release ‘mongocryptd.deb’ for ‘python3-pymongo-ext’ was not found\nE: Release ‘mongocryptd.deb’ for ‘python-mongoengine’ was not found\nE: Release ‘mongocryptd.deb’ for ‘python2.7-mongoengine’ was not found\nE: Release ‘mongocryptd.deb’ for ‘python-mongoengine-doc’ was not found\nE: Release ‘mongocryptd.deb’ for ‘python3-mongoengine’ was not found\nE: Release ‘mongocryptd.deb’ for ‘rsyslog-mongodb’ was not found\nE: Release ‘mongocryptd.deb’ for ‘ruby-em-mongo’ was not found\nE: Release ‘mongocryptd.deb’ for ‘ruby-mongo’ was not found\nE: Release ‘mongocryptd.deb’ for ‘syslog-ng-mod-mongodb’ was not found\nE: Release ‘mongocryptd.deb’ for ‘uwsgi-mongodb-plugins’ was not found\nE: Release ‘mongocryptd.deb’ for ‘w1retap-mongo’ was not found",
"username": "Ben_Luk"
},
{
"code": "mongocryptdapt-getDockerfileRUN wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | apt-key add -\nRUN echo \"deb [ arch=amd64,arm64,s390x ] http://repo.mongodb.com/apt/ubuntu focal/mongodb-enterprise/4.4 multiverse\" | tee /etc/apt/sources.list.d/mongodb-enterprise.list\nRUN apt-get update && apt-get install -y mongodb-enterprise-cryptd=4.4.1\n",
"text": "Hi @Ben_Luk, and welcome to the forum!You can install mongocryptd from apt-get on Ubuntu. For example in a Dockerfile:See also github.com/sindbach/field-level-encryption-docker/node for a working example.Regards,\nWan",
"username": "wan"
},
{
"code": "",
"text": "Hello Team,I am facing the same issue, can some one have solution for this?",
"username": "khasim_ali1"
},
{
"code": "",
"text": "@wan whats the point of having “mongocryptdURI” extra option then?",
"username": "Obaid_Maroof"
},
{
"code": "mongocryptdURI\"mongodb://localhost:27020\"",
"text": "Hi @Obaid_Maroof , and welcome to the forums,whats the point of having “mongocryptdURI” extra option then?The mongocryptd configuration parameters are used to specify values that differ from the defaults. In this case, the default for mongocryptdURI is \"mongodb://localhost:27020\". If you would like to specify a different URI i.e. port number then you could utilise the parameter to do so.If you have further questions, please feel free to open a new topic.Regards,\nWan.",
"username": "wan"
}
]
| FLE MongoError: BSON field 'insert.jsonSchema' is an unknown field | 2020-06-15T20:37:02.691Z | FLE MongoError: BSON field ‘insert.jsonSchema’ is an unknown field | 12,640 |
null | [
"aggregation",
"dot-net"
]
| [
{
"code": " var dataFacet = AggregateFacet.Create(\"data\",\n PipelineDefinition<BankTransaction, BankTransactionDto>.Create(new IPipelineStageDefinition[]\n {\n PipelineStageDefinitionBuilder.Skip<BankTransaction>((pageNumber - 1) * pageSize),\n PipelineStageDefinitionBuilder.Limit<BankTransaction>(pageSize),\n PipelineStageDefinitionBuilder.Project<BankTransaction, BankTransactionDto>(x => new BankTransactionDto\n {\n Id = x.Id,\n TransactionDate = x.TransactionDate,\n Customer = x.Customer,\n Description = x.Description,\n Value = x.Value,\n CategoryId = x.CategoryId,\n CategoryName = x.Category.Name\n }),\n }));\nSystem.FormatException: Element 'Id' does not match any field or property of class SaveMeter.Modules.Transactions.Core.DTO.BankTransactionDto.\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeClass(BsonDeserializationContext context)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Deserialize[TValue](IBsonSerializer`1 serializer, BsonDeserializationContext context)\n at MongoDB.Bson.Serialization.Serializers.EnumerableSerializerBase`2.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.Serializers.SerializerBase`1.MongoDB.Bson.Serialization.IBsonSerializer.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Deserialize(IBsonSerializer serializer, BsonDeserializationContext context)\n at MongoDB.Driver.AggregateFacetResultsSerializer.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Deserialize[TValue](IBsonSerializer`1 serializer, BsonDeserializationContext context)\n at MongoDB.Bson.Serialization.Serializers.EnumerableSerializerBase`2.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Deserialize[TValue](IBsonSerializer`1 serializer, BsonDeserializationContext context)\n at MongoDB.Driver.Core.Operations.AggregateOperation`1.CursorDeserializer.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Deserialize[TValue](IBsonSerializer`1 serializer, BsonDeserializationContext context)\n at MongoDB.Driver.Core.Operations.AggregateOperation`1.AggregateResultDeserializer.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Deserialize[TValue](IBsonSerializer`1 serializer, BsonDeserializationContext context)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.ProcessResponse(ConnectionId connectionId, CommandMessage responseMessage)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.ExecuteAsync(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocolAsync[TResult](IWireProtocol`1 protocol, ICoreSession session, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableReadOperationExecutor.ExecuteAsync[TResult](IRetryableReadOperation`1 operation, RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.ReadCommandOperation`1.ExecuteAsync(RetryableReadContext 
context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.AggregateOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.AggregateOperation`1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult](IReadBinding binding, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteReadOperationAsync[TResult](IClientSessionHandle session, IReadOperation`1 operation, ReadPreference readPreference, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.AggregateAsync[TResult](IClientSessionHandle session, PipelineDefinition`2 pipeline, AggregateOptions options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\n at MongoDB.Driver.IAsyncCursorSourceExtensions.FirstAsync[TDocument](IAsyncCursorSource`1 source, CancellationToken cancellationToken)\n at SaveMeter.Modules.Transactions.Core.Queries.Handlers.GetBankTransactionsByFilterHandler.HandleAsync(GetBankTransactionsByFilter query, CancellationToken cancellationToken) in /Users/mawb/learning/repos/savemeter-api-monolith/src/Modules/Transactions/SaveMeter.Modules.Transactions.Core/Queries/Handlers/GetBankTransactionsByFilterQueryHandler.cs:line 70\n{\n _id: UUID(\"8edbc21e5a214672b4f04a91bac5e0c9\"),\n _t: [\n 'Entity',\n 'BankTransaction'\n ],\n CreatedAt: ISODate('2022-11-01T07:55:21.840Z'),\n UpdatedAt: ISODate('2022-11-01T07:55:21.840Z'),\n TransactionDate: ISODate('2022-03-14T00:00:00.000Z'),\n Customer: 'Example customer',\n Description: 'Example description.',\n Value: NumberDecimal('-72.99'),\n CategoryId: UUID(\"792badf0b21642f1a312c105d1b87051\"),\n BankName: 'null',\n UserId: UUID(\"...\")\n}\n BsonClassMap.RegisterClassMap<Entity>(map =>\n {\n map.SetIsRootClass(true);\n map.MapIdMember(x => x.Id);\n map.MapMember(x => x.CreatedAt);\n map.MapMember(x => x.UpdatedAt);\n });\n BsonClassMap.RegisterClassMap<BankTransaction>(map =>\n {\n map.AutoMap();\n map.SetIgnoreExtraElements(true);\n map.MapMember(x => x.Value).SetSerializer(new DecimalSerializer(BsonType.Decimal128));\n map.GetMemberMap(x => x.Category).SetShouldSerializeMethod(_ => false);\n });\ninternal class BankTransaction : Entity\n{\n public DateTime TransactionDate { get; set; }\n public string Customer { get; set; }\n public string Description { get; set; }\n public decimal Value { get; set; }\n public Guid? CategoryId { get; set; }\n public Category Category { get; set; }\n public string BankName { get; set; }\n public Guid UserId { get; set; }\n}\n\npublic record BankTransactionDto\n{\n public Guid Id { get; init; }\n public DateTime TransactionDate { get; init; }\n public string Customer { get; init; }\n public string Description { get; init; }\n public decimal Value { get; init; }\n public Guid? CategoryId { get; init; }\n public string CategoryName { get; init; }\n}\n",
"text": "Hello,I have a problem with the Project step in my Facet stage which maps Entity object into DTO:When I run it, it throws an error “Element ‘Id’ does not match any field or property of class”. Here entire stack:The interesting fact, when I rename “Id” property into “TransactionId” in DTO object, everything works fine. So I assume, that something is wrong with the property name “Id”. Also, there is no error when I remove Project step.I’m using MongoDB.Driver 2.18.0\nExample document:This document is mapped with BsonClassMap:And classes used in example:I forgot to mention that I’m using lookup operation to join Category, but I tested it without lookup and had the same error.",
"username": "Mateusz_Wroblewski"
},
{
"code": "",
"text": "Hey @Mateusz_Wroblewski , what driver version do you use? Also, please provide an example of document you use",
"username": "Dmitry_Lukyanov"
},
{
"code": "_idIdx[\"_id\"] class Model{\n [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public string? Id { get; set; } // this maps to `_id` in the document\n [BsonElement(\"id\")]\n public int SId { get; private set; } // had to give up naming this as \"Id\"\n [BsonElement(\"class\")]\n public string Class { get; private set; } = null!;\n}\n",
"text": "I had a similar issue and a similar solution. I think the driver tries to map the _id field of the actual document on the database to the Id property of the class. Even with annotation to link it to a different field in the document, this happens.Try using it as x[\"_id\"] and see if it helps.My class has this basic structure:",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi @Dmitry_Lukyanov, I have updated the post.",
"username": "Mateusz_Wroblewski"
},
{
"code": "",
"text": "Thanks @Yilmaz_Durmaz, but I think it’s not the same case as you mentioned. In my example, BankTransactionDto does not have any MongoDB annotations or BsonClassMap configuration.",
"username": "Mateusz_Wroblewski"
},
{
"code": "Id = x.IdBankTransactionId[BsonId][BsonRepresentation(BsonType.ObjectId)]IdId_id",
"text": "Id = x.Id\n…\nmap.MapIdMember(x => x.Id);this seems to be the problem. your BankTransaction class does not have an Id property, thus this assignment and/or mapping fails.I must stress that “_id” field in stored documents is mapped to “Id” of classes. this mapping operation will just take it and assign to another name. you can also do this with annotations [BsonId][BsonRepresentation(BsonType.ObjectId)] on a property. so the name does not have to be Id, but you need to use whatever name you choose.You don’t have to have Id property (or a property mapped to _id) at all when all you do is read data and this field is not relevant to you. but once you try to use it as the above two lines does, it becomes apparent that you need it defined.",
"username": "Yilmaz_Durmaz"
},
{
"code": " public abstract class Entity\n {\n public Guid Id { get; set; }\n public DateTime CreatedAt { get; set; }\n public DateTime UpdatedAt { get; set; }\n }\nPipelineStageDefinitionBuilder.Project<BankTransaction, BankTransactionDto>(x => new BankTransactionDto\n {\n TransactionId = x.Id,\n TransactionDate = x.TransactionDate,\n Customer = x.Customer,\n Description = x.Description,\n Value = x.Value,\n CategoryId = x.CategoryId,\n CategoryName = x.Category.Name\n }),\n",
"text": "It is not obvious, but BankTransaction class does have Id property and inherits it from Entity classFor me it’s not a problem with BankTransaction class, but with BankTransactionDto because when I change the Project operation to this, everything works.So it’s strange because when I have “Id” property in BankTransactionDto, projection fails, but when I have “TransactionId” in BankTransactionDto there is no error.",
"username": "Mateusz_Wroblewski"
},
{
"code": "PipelineStageDefinitionBuilder.ProjectIdFacetxIdId// DTO\n{\n MongoClient dbClient = new MongoClient(URI);\n\n var database = dbClient.GetDatabase(\"testme\");\n var collection = database.GetCollection<BankTransaction>(\"banktransaction\");\n\n // create a few transactions\n // var transaction1 = new BankTransaction()\n // {\n // Id = 2,\n // CreatedAt = \"2022-11-01T07:55:21.840Z\",\n // UpdatedAt = \"2022-11-01T07:55:21.840Z\",\n // Customer = \"Example customer\",\n // BankName = \"helloBank\",\n // UserId = 12345\n // };\n // collection.InsertOne(transaction1);\n // var transaction2 = new BankTransaction()\n // {\n // Id = 3,\n // CreatedAt = \"2022-11-01T07:55:21.840Z\",\n // UpdatedAt = \"2022-11-01T07:55:21.840Z\",\n // Customer = \"elder customer\",\n // BankName = \"Bankhello\",\n // UserId = 54321\n // };\n // collection.InsertOne(transaction2);\n // read transactions\n Console.WriteLine(collection.Find<BankTransaction>(_ => true).ToList().ToJson());\n\n // set class mapping\n // BsonClassMap.RegisterClassMap<Entity>(map =>\n // {\n // map.SetIsRootClass(true);\n // map.MapIdMember(x => x.Id);\n // map.MapMember(x => x.CreatedAt);\n // map.MapMember(x => x.UpdatedAt);\n // });\n Console.WriteLine(\"\\nmap?\\n\");\n // BsonClassMap.RegisterClassMap<BankTransaction>(map =>\n // {\n // map.AutoMap();\n // map.SetIgnoreExtraElements(true);\n // });\n\n var dataFacet = AggregateFacet.Create(\"bankdata\",\n PipelineDefinition<BankTransaction, BankTransactionDto>.Create(new IPipelineStageDefinition[]\n {\n PipelineStageDefinitionBuilder.Skip<BankTransaction>(1),\n PipelineStageDefinitionBuilder.Limit<BankTransaction>(10),\n PipelineStageDefinitionBuilder.Project<BankTransaction, BankTransactionDto>(x => new BankTransactionDto\n {\n xId = x.Id,\n Customer = x.Customer,\n CustomerId=x.UserId\n }),\n }));\n\n var aggregation = collection.Aggregate().Match(_ => true).Facet(dataFacet);\n var result = aggregation.Single().Facets.ToJson();\n Console.WriteLine(result);\n\n}\n\ninternal class BankTransaction : Entity\n{\n public string Customer { get; set; }\n public string BankName { get; set; }\n public int UserId { get; set; }\n}\n\npublic record BankTransactionDto\n{\n public int Id { get; init; }\n public int xId { get; init; }\n public string Customer { get; init; }\n public int CustomerId { get; init; }\n}\n\npublic abstract class Entity\n{\n public int Id { get; set; }\n public string CreatedAt { get; set; }\n public string UpdatedAt { get; set; }\n}\n[{ \"_t\" : \"AggregateFacetResult`1\", \"Name\" : \"bankdata\", \n \"Output\" : [\n { \"_id\" : 0, \"xId\" : 2, \"Customer\" : \"Example customer\", \"CustomerId\" : 12345 },\n { \"_id\" : 0, \"xId\" : 3, \"Customer\" : \"elder customer\", \"CustomerId\" : 54321 }\n ]\n}]\nId_id",
"text": "I could finally have a working minimal code to get the issue and the fix you mention.I have also found a StackOverflow post about the issue dating back 2 years: c# - Mongo .Net Driver PipelineStageDefinitionBuilder.Project automatically ignores all Id values with a facet - Stack OverflowPipelineStageDefinitionBuilder.Project fails to map Id when used inside Facet. I have checked this forum and the Jira pages of the driver. it seems no one has reported this issue yet. (Note I used the keyword “issue”)I will add my minimal working code here to work with it later. It has lots of commented-out code but two parts are important: adding a few test data and renaming xId to Id to get the error.Here is the output for this program after the facet stage:I think we should open a new post something like “PipelineStageDefinitionBuilder.Project fails to map Id in Facet” to get attention and/or open an issue on Jira.PS: I hope I am not exagerating this PS again: did you notice Id is mapped to _id!? but it is basically empty!",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thanks @Yilmaz_Durmaz for preparing minimal code. I made some tests on my code and I reproduced situation where Id with Guid type was empty (all zeros). It would be nice to get more attention to this issue .",
"username": "Mateusz_Wroblewski"
},
{
"code": "[{\n $facet: {\n data: [{\n $project: {\n _id: 1,\n Customer: 1,\n Description: 1\n }\n }]\n }\n}]\n",
"text": "Although we have a working buggy code, it is possible we are trying to use the Facet stage wrong. It is easy when working with Javascript and JSON/BSON format, but making classes and moving data around in classes is not.Let’s forget about class DTO for a while and try going on with easy way. How exactly would you want the result to look like? There is a nice feature in Atlas web interface (I guess also in Compass, haven’t tried yet) that lets you build an aggregation pipeline with json format, and export to C# syntax with BSON methods if you want. Have you ever tried that before?Connect to Atlas and open your deployment page. in the menu to the left, select Deployment->Database. Open “Browse Collection”, select the collection you work on, and “Aggregation” in the middle panel. Add stages to the pipeline, watch the result and check the data. does the result you get conforms to what you want in your program?This one is an approximation to your query:",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "@Yilmaz_Durmaz Yes I tried to create a facet in MongoDB Compass and it worked.",
"username": "Mateusz_Wroblewski"
},
{
"code": "",
"text": "I had some more trial/error work on this. no solution other than using another name instead of “Id” I have opened a new post with a bit more description of the problem and a simpler/generic code sample here: \"Facet\" fails to map \"_id\" field of a projection to \"Id\" of class (BUG?). You may set a “watching” or “tracking” notification level on it. I have also a link here not to lose the track of it ",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thanks @Yilmaz_Durmaz! I hope that someone will look at it .",
"username": "Mateusz_Wroblewski"
}
]
| "Element 'Id' does not match any field or property of class" when using Project in Facet stage | 2022-11-16T17:47:57.363Z | “Element ‘Id’ does not match any field or property of class” when using Project in Facet stage | 8,697 |
null | [
"replication",
"atlas",
"ops-manager"
]
| [
{
"code": "",
"text": "Questions for the day :Can we filter few statements ( like delete/ drop ) at MongoDB replication ( i.e., Primary – to – One Secondary ) Possible ? Not delayed secondaries - we want to complete filter them so that they don’t replicate. That’s Client requirement ! Can we add our OnPrem servers to Cloud/Atlas Manager ( Not OpsManager ) without migrating to it ? Possible ? Even this is another Client requirement ",
"username": "Srinivas_Mutyala"
},
{
"code": "",
"text": "Hi @Srinivas_Mutyala,As of today, this isn’t possible because your secondary can become Primary if your Primary goes down and this would “revive” deleted data or dropped collections. Also if you set your read preference to anything but “Primary” you would have inconsistant reads as you would read from the Primary or any of the Secondaries. This just wouldn’t work.Hard no. Atlas is a fully managed service. The machines provisioned are completely under control so MongoDB can perform security updates, OS upgrades and MongoDB tasks (update MongoDB, perform a backup), or even perform more “dramatic” tasks like remove a broken node from the config, order and setup a new machine to take over and replace the broken node. All this automation is only possible within a controlled environment (ie the 3 cloud providers we offer today). These machines are also out of reach for the client so they can’t interfere with the automation or perform destructive admin commands.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
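To illustrate the read-preference point in the reply above, here is a minimal Node.js sketch; the host names, database, and collection names are hypothetical. It shows why every replica-set member must carry the same data: with a non-primary read preference, the driver is free to route the same query to any member.

```javascript
// Minimal sketch (hypothetical hosts): with readPreference=secondaryPreferred,
// the driver may serve this query from ANY secondary, so filtering deletes/drops
// on one member would make results depend on which node answered.
const { MongoClient } = require("mongodb");

const uri =
  "mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0&readPreference=secondaryPreferred";

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  // The same find() can be answered by the primary or either secondary.
  const docs = await client.db("app").collection("orders").find({}).toArray();
  console.log(docs.length);
  await client.close();
}

main().catch(console.error);
```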
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Replications with filtering few statements like DELETE/DROP | 2022-11-28T04:56:00.622Z | MongoDB Replications with filtering few statements like DELETE/DROP | 1,781 |
null | [
"atlas-functions",
"atlas-triggers"
]
| [
{
"code": "",
"text": "Is it possible to pass a variable from a trigger to a function? I have a use case where I would like to set up the same scheduled trigger on a number of databases, all using exactly the same function (except the database name in the function that the function will connect to). I don’t want to use a Value with a list of databases to iterate over, as the triggers will be for databases in different timezones etc.So can the function get any calling information from a trigger, or can they only retrieve values using context.values.get(“valueName”)? Is the name of the trigger in a header anywhere in context.request?",
"username": "Ben_Giddins"
},
{
"code": "context.request",
"text": "Have a look at this “context” page: Context — Atlas App Services (mongodb.com)I haven’t tried it myself but context.request might be useful. or you may embed the details in the payload.",
"username": "Yilmaz_Durmaz"
},
{
"code": " // Get the application name from the context\n // e.g. dev2\n const appName = context.app.name;\n\n // Connect to the database with the same name as the application\n // e.g mongodb-atlas/dev2\n const db = context.services\n .get(\"mongodb-atlas\")\n .db(appName);\n",
"text": "My current workaround is to have one application per database, and the application name is the same as the database name. So when a scheduled trigger needs to know which database to connect to, I can use a snippet like:When the trigger runs on a schedule, the app name is available, and I use that as the database name. Just means adhering to a naming standard.Pretty sure the context.request is empty for a scheduled trigger.",
"username": "Ben_Giddins"
},
{
"code": "exports = function (changeEvent) {",
"text": "what about “change events”? it seems it carries the name of database and collection that was changed.your function would have this shape: exports = function (changeEvent) {",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "changeEvent is only available in a database trigger, not a scheduled trigger.",
"username": "Ben_Giddins"
},
{
"code": "function(...args){JSON.stringify(args)[{}]",
"text": "Forgive my naive approaches. I missed “scheduled” part. Can you please edit the title to have “Scheduled Trigger”. It would give a better concept to the next person to join.As for my understanding, though, they seem to work just to run a function at specified times and delegates the work of selecting database/collection to the function, calling it with no arguments. I scheduled a function and used function(...args){ and logged args with JSON.stringify(args), the result is [{}] suggesting truth to my assumption. Seems you have to implement your own logic to differentiate databases with the “Date.now()”",
"username": "Yilmaz_Durmaz"
}
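Building on the workaround described in this thread, here is a hedged sketch of what such a scheduled-trigger function could look like; the collection name and retention period are invented for illustration, and the app-name-equals-database-name convention is assumed.

```javascript
// Minimal sketch of a scheduled-trigger function: the target database is derived
// from the App Services application name, since the scheduled trigger itself
// passes no arguments. Collection name and cutoff are hypothetical.
exports = async function () {
  const dbName = context.app.name;                              // e.g. "dev2"
  const db = context.services.get("mongodb-atlas").db(dbName);

  // Example maintenance task: delete documents older than 30 days.
  const cutoff = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000);
  const result = await db.collection("events").deleteMany({ createdAt: { $lt: cutoff } });
  console.log(`${dbName}: removed ${result.deletedCount} old documents`);
};
```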
]
| Is there any way to pass a value from a Trigger to a Function? | 2022-11-17T18:23:08.879Z | Is there any way to pass a value from a Trigger to a Function? | 2,514 |
null | [
"stitch"
]
| [
{
"code": "",
"text": "Hello, I’m trying to fetch data from mongo db to google sheets using mongo stitch, but I cannot find this feature on my mongo db atlas, all the tutorials which I followed seemed to be outdated. I need to mention that I’m using a non-paid service from mongo DB. Can I get some help finding mongo stitch on my mongo atlas? Thank you!",
"username": "Dan_Muntean"
},
{
"code": "",
"text": "Hi @Dan_Muntean ,The product originally known as MongoDB Stitch has evolved into Atlas App Services and the Realm Web SDK.There have been quite a few improvements since Stitch, so I recommend starting with the current documentation and tutorials.There’s also some brief info inRegards,\nStennie",
"username": "Stennie_X"
}
]
| Cannot find mongoDB stitch | 2022-11-29T14:12:11.801Z | Cannot find mongoDB stitch | 1,979 |
null | [
"aggregation",
"node-js"
]
| [
{
"code": "",
"text": "Hi,\nI’d like to flush all data in Materialized view every-time when I update.\nShould I just delete the collection and use merge to create it again??",
"username": "Peter_li1"
},
{
"code": "$merge$out$merge",
"text": "Hi @Peter_li1,It depends. You can leverage the different options of the $merge stage likeSee: https://www.mongodb.com/docs/manual/reference/operator/aggregation/merge/#syntaxIf you have an exact match on a field each time then you could just use “replace” and if you have a 1 to 1 match for every single document, the entire collection would be replaced in the end.Or you could use $out instead of $merge which would replace the target the collection.I think this last option is probably the best because it will preserve collection metadata (like indexes).Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
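As a concrete illustration of the $out option mentioned above, here is a minimal mongosh sketch; the source collection, fields, and target view name are hypothetical.

```javascript
// Rebuilding a "materialized view" collection from scratch with $out.
// $out atomically replaces dailyTotals with the new results when the
// aggregation finishes, so readers never see a half-built view, and the
// target collection's indexes are preserved.
db.orders.aggregate([
  { $group: { _id: "$productId", total: { $sum: "$amount" } } },
  { $out: "dailyTotals" }
]);
```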
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Materialized View delete | 2022-11-28T01:52:20.147Z | Materialized View delete | 1,029 |
null | [
"replication",
"compass"
]
| [
{
"code": "",
"text": "I configured replica set in such a way that the the primary and secondary are two different VMs so that i could able to get My API’s hit to the secondary to get the results form it so that it won’t affect the primary and the data which i have is always upto date.Now the issue is the secondary replica set connection URL, I could able to connect the same credentials form tools such and robo 3t but when it comes to connection string placement it is showing the following error “getaddrinfo ENOTFOUND mongo.explorer.replica.#####.com”. and couldn’t able to access the same URL or the credentials from Mongo compass.It would be very helpful if i get a solution to solve this issue.and may i know the reason why the mongo secondary replica i could able to connect through robo 3t and not via mongo compass.",
"username": "Ch_Sadvik"
},
{
"code": "",
"text": "I configured replica set in such a way that the the primary and secondary are two different VMsYou should have 3 nodes in your replica set. See https://www.mongodb.com/docs/manual/replication/.get My API’s hit to the secondary to get the results form it so that it won’t affect the primary and the data which i have is always upto dateWhat you are doing is based on the wrong assumption that a secondary has less work to do than the primary. To have up to date data on secondaries, the secondaries do exactly the same writes as the primary. If you keep them busy with read work load because your primary is not powerful enough to do both, your secondaries probably cannot do both, so they will eventually be behind the primary oplog and the data on your secondaries will not be up to date.As for ENOTFOUND, it is not mongodb specific, it is a DNS setup that is not correct. You have some DNS information that is not propagated to all resolvers. Try to connect with mongodb only once you are able to ping the same address.",
"username": "steevej"
}
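For reference, here is a hedged sketch of the kind of connection string this usually comes down to; the host names and replica-set name are hypothetical. The point is to hand the driver the full replica-set topology and a read preference, rather than hard-coding a single secondary's address.

```javascript
// Minimal sketch (hypothetical hosts): connect with the full replica-set URI and let
// the driver route reads, instead of pointing a client at one secondary's address.
const uri =
  "mongodb://mongo1.example.com:27017,mongo2.example.com:27017,mongo3.example.com:27017" +
  "/?replicaSet=rs0&readPreference=secondaryPreferred";

// The same URI works in Compass, mongosh, or any driver. If it fails with ENOTFOUND,
// first verify that each host name resolves, e.g. with `ping mongo2.example.com`.
```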
]
| Regarding the secondary replica connection string | 2022-11-29T07:13:56.119Z | Regarding the secondary replica connection string | 1,449 |
null | [
"queries"
]
| [
{
"code": "[email protected]ġ[email protected]",
"text": "I want to make my Email field in my collection a case-insensitive index. The official docs say to use Collation for this, with a strength value of 1 or 2.On the docs, it says that level 1 performs comparsion for base characters only, while level 2 also include diacritics.I’d like to ask the following:",
"username": "Gil_M"
},
{
"code": "aAàAaA",
"text": "Hi @Gil_M and welcome in the MongoDB Community !From my understanding, level 3 is case sensitive. So it will detect the difference between a and A.Performance with or without collation should be within statistical error margins.I hope this helps.\nCheers,\nMaxime.",
"username": "MaBeuLux88"
},
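A small mongosh sketch of the strength-2 setup discussed above; the collection and field names are hypothetical. Note that the query has to use the same collation as the index for the case-insensitive comparison (and the index) to apply.

```javascript
// Case-insensitive index using collation strength 2 (ignores case, respects diacritics).
db.users.createIndex(
  { email: 1 },
  { collation: { locale: "en", strength: 2 } }
);

// The query must specify the same collation so the comparison (and the index) apply.
db.users.find({ email: "[email protected]" })
        .collation({ locale: "en", strength: 2 });
```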
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| What is Tertiary level of comparison? | 2022-11-27T16:15:51.995Z | What is Tertiary level of comparison? | 941 |
null | [
"aggregation"
]
| [
{
"code": "[\n {\n \"data\": {\n \"name\": \"Base\"\n },\n \"meta\": {\n \"id\": \"bb4a4578-7947-41ec-bdc9-d8be16ae6efa\",\n \"type\": \"EiffelActivityTriggeredEvent\"\n }\n },\n {\n \"data\": {\n \"name\": \"1\"\n },\n \"links\": [\n {\n \"target\": \"bb4a4578-7947-41ec-bdc9-d8be16ae6efa\",\n \"type\": \"CAUSE\"\n }\n ],\n \"meta\": {\n \"id\": \"dc4ff5a0-9a09-4931-92f3-e62127b967f8\",\n \"type\": \"EiffelActivityTriggeredEvent\"\n }\n },\n {\n \"data\": {\n \"name\": \"2\"\n },\n \"links\": [\n {\n \"target\": \"dc4ff5a0-9a09-4931-92f3-e62127b967f8\",\n \"type\": \"CAUSE\"\n }\n ],\n \"meta\": {\n \"id\": \"2a00b3f5-143c-4dde-bc77-4127a61d0410\",\n \"type\": \"EiffelActivityTriggeredEvent\"\n }\n },\n {\n \"data\": {\n \"name\": \"3\"\n },\n \"links\": [\n {\n \"target\": \"2a00b3f5-143c-4dde-bc77-4127a61d0410\",\n \"type\": \"CAUSE\"\n }\n ],\n \"meta\": {\n \"id\": \"93b97903-32da-4de4-87bd-ddc633c7c741\",\n \"type\": \"EiffelActivityTriggeredEvent\"\n }\n },\n {\n \"data\": {\n \"name\": \"4\"\n },\n \"links\": [\n {\n \"target\": \"93b97903-32da-4de4-87bd-ddc633c7c741\",\n \"type\": \"CAUSE\"\n }\n ],\n \"meta\": {\n \"id\": \"986de1a4-b376-481a-893d-a735ae0eaf87\",\n \"type\": \"EiffelActivityTriggeredEvent\"\n }\n },\n {\n \"data\": {\n \"name\": \"5\"\n },\n \"links\": [\n {\n \"target\": \"986de1a4-b376-481a-893d-a735ae0eaf87\",\n \"type\": \"CONTEXT\"\n },\n {\n \"target\": \"d73497e7-a1a1-496c-9f48-cceb5d5db210\",\n \"type\": \"CAUSE\"\n }\n ],\n \"meta\": {\n \"id\": \"aa2e4a69-04d4-42a4-b034-938f71f77ed9\",\n \"type\": \"EiffelActivityTriggeredEvent\"\n }\n }\n]\ndb.collection.aggregate({\n \"$match\": {\n \"meta.id\": \"bb4a4578-7947-41ec-bdc9-d8be16ae6efa\"\n }\n},\n{\n \"$graphLookup\": {\n \"from\": \"collection\",\n \"startWith\": \"$meta.id\",\n \"connectFromField\": \"meta.id\",\n \"connectToField\": \"links.target\",\n \"as\": \"ActT\",\n \"maxDepth\": 4\n \"restrictSearchWithMatch\": {\n \"links.type\": \"CAUSE\"\n }\n }\n},\n{\n \"$unwind\": {\n \"path\": \"$ActT\"\n }\n},\n{\n \"$replaceRoot\": {\n \"newRoot\": \"$ActT\"\n }\n},\n{\n \"$sort\": {\n \"data.name\": 1\n }\n})\nCAUSEmeta.id",
"text": "Hello,I am having trouble getting an aggregation to work as I want it due to me having a more complex data type that I want to match.My data consists of several “events” that are being sent within a CI/CD system. These events can link to each other using link types and the ID of the event that they link to.\nExample: {“type”: “CAUSE”, “target”: MyID} - My event is sent because of “MyID”.This data can be considered a graph and, as such, using GraphLookup seems correct. My problem is that there can be multiple links with different link types to different events and I want to restrict my search to only see a certain link type.\nFor instance, I only want to do a graph lookup of events that have a Target to my ID AND that link is a CAUSE link.Sample data. Event with data.name: “5” has a link to “4”, but it is a CONTEXT link, not CAUSE. But since there is a CAUSE link (to an event that’s not in the graph) I get it in my aggregation.Sample query:Link to mongoplayground: Mongo playgroundIs there a way to do an aggregation query where it will only get events that have a CAUSE link to the meta.id of the parent, and nothing else.Thank you!",
"username": "Tobias_Persson"
},
{
"code": "\"meta.id\": \"bb4a4578-7947-41ec-bdc9-d8be16ae6efa\"\n\"links.type\": \"CAUSE\"",
"text": "I am not sure I understand so the following might be stupid.Can’t you simply addto\"links.type\": \"CAUSE\"inside the restrictSearchWithMatch.",
"username": "steevej"
},
{
"code": "startWithmeta.idrestrictSearchWithMatchrestrictSearchWithMatch",
"text": "That would, sadly, only help on the first depth where the startWith is used. Since each event on every depth has a new meta.id value that would need to be used in restrictSearchWithMatch and this value is, as far as I know, not available in restrictSearchWithMatch.",
"username": "Tobias_Persson"
},
{
"code": "mongosh > c.find()\n{ _id: ObjectId(\"638609e736e25a0dff0d1097\"),\n data: { name: 10 },\n meta: { id: 1, cause: 2 } }\n{ _id: ObjectId(\"63860a4936e25a0dff0d1098\"),\n data: { name: 12 },\n meta: { id: 1, cause: 2 } }\n\nmatch = { \"$match\" : { \"data.name\" : 10 } }\n/* output suppressed */\n\nmongosh > graphLookup = { \"$graphLookup\" : {\n \"from\" : \"Tobias_Persson\" ,\n \"startWith\" : \"$meta\" ,\n \"connectFromField\" : \"meta\" ,\n \"connectToField\" : \"meta\" ,\n \"as\" : \"_result\" } }\n/* output suppressed */\n\nmongosh > db.Tobias_Persson.aggregate( [ match , graphLookup ] )\n{ _id: ObjectId(\"638609e736e25a0dff0d1097\"),\n data: { name: 10 },\n meta: { id: 1, cause: 2 },\n _result: \n [ { _id: ObjectId(\"63860a4936e25a0dff0d1098\"),\n data: { name: 12 },\n meta: { id: 1, cause: 2 } },\n { _id: ObjectId(\"638609e736e25a0dff0d1097\"),\n data: { name: 10 },\n meta: { id: 1, cause: 2 } } ] }\n",
"text": "I knew I did not understood something correctly.The only think I can think at this point is that connectFromField and connectToField might be objects rather than simple values. I would not know how you could leverage this fact but I have a small example.I do not know if you could modify your schema to use objects in connectFrom/ToField but it is a starting that might gives you some ideas.",
"username": "steevej"
}
]
| GraphLookup matching multiple fields | 2022-11-25T14:14:14.140Z | GraphLookup matching multiple fields | 1,760 |
null | [
"replication",
"storage"
]
| [
{
"code": "systemctl start mongod\nJob for mongod.service failed because a timeout was exceeded. See \"systemctl status mongod.service\" and \"journalctl -xe\" for details.\n\nand here is mongod logs\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.241+06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"main\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.245+06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify –sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.271+06:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.271+06:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.271+06:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.271+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":7151,\"port\":27017,\"dbPath\":\"/var/lib/mongo\",\"architecture\":\"64-bit\",\"host\":\"facetech-prod-mongo01-uv03.fortebank.com\"}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.271+06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.3\",\"gitVersion\":\"913d6b62acfbb344dde1b116f4161360acd8fd13\",\"openSSLVersion\":\"OpenSSL 1.0.1e-fips 11 Feb 2013\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"rhel70\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.272+06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"CentOS Linux release 7.9.2009 (Core)\",\"version\":\"Kernel 3.10.0-1160.42.2.el7.x86_64\"}}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.272+06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"10.0.225.235\",\"port\":27017},\"processManagement\":{\"fork\":true,\"pidFilePath\":\"/var/run/mongodb/mongod.pid\",\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"replication\":{\"replSetName\":\"prod-facetech\"},\"storage\":{\"dbPath\":\"/var/lib/mongo\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.273+06:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22271, \"ctx\":\"initandlisten\",\"msg\":\"Detected unclean shutdown - Lock file is not empty\",\"attr\":{\"lockFile\":\"/var/lib/mongo/mongod.lock\"}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.273+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/var/lib/mongo\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.273+06:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22302, \"ctx\":\"initandlisten\",\"msg\":\"Recovering data from the 
last clean checkpoint.\"}\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.273+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=3398M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.879+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1662460110:879327][7151:0x7f63aed90bc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 37908 through 37909\"}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:30.975+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1662460110:975619][7151:0x7f63aed90bc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 37909 through 37909\"}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:31.108+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1662460111:108841][7151:0x7f63aed90bc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 37908/256 to 37909/256\"}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:31.241+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1662460111:241647][7151:0x7f63aed90bc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 37908 through 37909\"}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:31.331+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1662460111:331583][7151:0x7f63aed90bc0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 37909 through 37909\"}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:31.395+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1662460111:395447][7151:0x7f63aed90bc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (1659204188, 1)\"}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:31.395+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1662460111:395512][7151:0x7f63aed90bc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (1659204183, 1)\"}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:32.098+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":1825}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:32.098+06:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":1659204188,\"i\":1}}}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:32.099+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4366408, \"ctx\":\"initandlisten\",\"msg\":\"No table logging settings modifications are required for existing WiredTiger tables\",\"attr\":{\"loggingEnabled\":false}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:32.103+06:00\"},\"s\":\"I\", 
\"c\":\"STORAGE\", \"id\":22383, \"ctx\":\"initandlisten\",\"msg\":\"The size storer reports that the oplog contains\",\"attr\":{\"numRecords\":205531,\"dataSize\":53787115711}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:32.103+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22386, \"ctx\":\"initandlisten\",\"msg\":\"Sampling the oplog to determine where to place markers for truncation\"}\n{\"t\":{\"$date\":\"2022-09-06T16:28:32.107+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22389, \"ctx\":\"initandlisten\",\"msg\":\"Sampling from the oplog to determine where to place markers for truncation\",\"attr\":{\"from\":{\"$timestamp\":{\"t\":1658914933,\"i\":1}},\"to\":{\"$timestamp\":{\"t\":1659204198,\"i\":1}}}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:32.107+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22390, \"ctx\":\"initandlisten\",\"msg\":\"Taking samples and assuming each oplog section contains\",\"attr\":{\"numSamples\":1001,\"containsNumRecords\":2052,\"containsNumBytes\":537004935}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:42.159+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22392, \"ctx\":\"initandlisten\",\"msg\":\"Oplog sampling progress\",\"attr\":{\"completed\":53,\"total\":1001}}\n{\"t\":{\"$date\":\"2022-09-06T16:28:52.198+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22392, \"ctx\":\"initandlisten\",\"msg\":\"Oplog sampling progress\",\"attr\":{\"completed\":103,\"total\":1001}}\n{\"t\":{\"$date\":\"2022-09-06T16:29:02.199+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22392, \"ctx\":\"initandlisten\",\"msg\":\"Oplog sampling progress\",\"attr\":{\"completed\":157,\"total\":1001}}\n{\"t\":{\"$date\":\"2022-09-06T16:29:12.200+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22392, \"ctx\":\"initandlisten\",\"msg\":\"Oplog sampling progress\",\"attr\":{\"completed\":219,\"total\":1001}}\n{\"t\":{\"$date\":\"2022-09-06T16:29:22.450+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22392, \"ctx\":\"initandlisten\",\"msg\":\"Oplog sampling progress\",\"attr\":{\"completed\":277,\"total\":1001}}\n{\"t\":{\"$date\":\"2022-09-06T16:29:32.521+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22392, \"ctx\":\"initandlisten\",\"msg\":\"Oplog sampling progress\",\"attr\":{\"completed\":352,\"total\":1001}}\n{\"t\":{\"$date\":\"2022-09-06T16:29:42.660+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22392, \"ctx\":\"initandlisten\",\"msg\":\"Oplog sampling progress\",\"attr\":{\"completed\":422,\"total\":1001}}\n{\"t\":{\"$date\":\"2022-09-06T16:29:52.741+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22392, \"ctx\":\"initandlisten\",\"msg\":\"Oplog sampling progress\",\"attr\":{\"completed\":500,\"total\":1001}}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.241+06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23377, \"ctx\":\"SignalHandler\",\"msg\":\"Received signal\",\"attr\":{\"signal\":15,\"error\":\"Terminated\"}}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.241+06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23378, \"ctx\":\"SignalHandler\",\"msg\":\"Signal was sent by kill(2)\",\"attr\":{\"pid\":1,\"uid\":0}}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.245+06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23381, \"ctx\":\"SignalHandler\",\"msg\":\"will terminate after current cmd ends\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.261+06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"SignalHandler\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":10000}}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"COMMAND\", 
\"id\":4784901, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"SignalHandler\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784906, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784907, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the replica set node executor\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784927, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784929, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"SignalHandler\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time data capture\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"SignalHandler\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-09-06T16:30:00.305+06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":0}}\n",
"text": "Hello! i have a Replica set which contains of 3 nodesbut my 3rd replica accidentally stop working and if i type systemctl start mongod it says that",
"username": "tabi_jonsan"
},
{
"code": "",
"text": "This does not look like a crash.It looks like a normal terminal due to mongod receiving SIGTERM, either manually from shell using kill or automatically from some unknown process.",
"username": "steevej"
},
{
"code": "",
"text": "it’s strange bcoz no one kill it",
"username": "tabi_jonsan"
},
{
"code": "See \"systemctl status mongod.service\" and \"journalctl -xe\" for details.",
"text": "Your first line indicateSee \"systemctl status mongod.service\" and \"journalctl -xe\" for details.Can you share what you have find when following what was advised?",
"username": "steevej"
},
{
"code": "",
"text": "solved, it kills by systemd because of long mongodb startup Increase startup timeout solve the issue",
"username": "tabi_jonsan"
},
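For anyone hitting the same symptom, a hedged sketch of the fix described above, assuming a stock systemd-managed mongod unit; the timeout value is only an example and should be sized to the observed startup (oplog sampling) time.

```sh
# Sketch: raise the systemd start timeout for mongod via a unit override, then restart.
sudo systemctl edit mongod.service
#   In the override file that opens, add:
#   [Service]
#   TimeoutStartSec=600
sudo systemctl daemon-reload
sudo systemctl start mongod
```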
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB replica crashed | 2022-09-06T10:58:26.152Z | MongoDB replica crashed | 4,669 |
null | [
"graphql"
]
| [
{
"code": "",
"text": "Hello,How to call GraphQL API through MongoDB realm framework??If Apollo Client framework is the solution then How can we manage pagination(limit and offset).Thanks in advance.",
"username": "Vishal_Deshai"
},
{
"code": "query {\n movies(\n query: { year: 2000 }\n sortBy: RUNTIME_DESC\n limit: 10\n ) {\n _id\n title\n year\n runtime\n }\n}",
"text": "@Vishal_Deshai Yes I would use the Apollo client to connect to the GraphQL API. You can send the limit operator in a GraphQL query just as you normally would - Introduction to Apollo iOS - Apollo GraphQL Docs",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hello,Thanks for the answer.first 10 records are fetch then next time we need 11 to 20 records(Next 10).Thanks",
"username": "Vishal_Deshai"
},
{
"code": "",
"text": "HiWaiting for your positive feedback.Let me repeat my query. How can we get next 10 records(i.e. 11 to 20) if page size is 10.Thanks.",
"username": "Vishal_Deshai"
},
{
"code": "",
"text": "Hi Vishal,You will have to use find() and limit() to implement this with custom resolvers.",
"username": "Sumedha_Mehta1"
},
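A hedged sketch of what such a custom-resolver function could look like; the database, collection, and argument names are assumptions, not part of the thread. The resolver receives its GraphQL input as the first argument and returns one page of results.

```javascript
// Offset pagination inside an Atlas Function used as a GraphQL custom resolver.
// For large offsets, a range query on a sorted field ("keyset" pagination) scales better.
exports = async function ({ limit = 10, offset = 0 } = {}) {
  const movies = context.services
    .get("mongodb-atlas")
    .db("sample_mflix")        // hypothetical database
    .collection("movies");     // hypothetical collection

  return movies.aggregate([
    { $sort: { _id: 1 } },
    { $skip: offset },
    { $limit: limit }
  ]).toArray();
};
```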
{
"code": "",
"text": "Any chance this could be a feature? Doing a custom resolver doesn’t make the use as dynamic and you need keep track of more and more functions.updateMany etc are features for example. Count, Skip and Next are features at AWS and Dgraph.",
"username": "Martin_Ericson"
},
{
"code": "",
"text": "I completely agree. I would expect proper pagination handling to be built into any GraphQL API. It’s hard to believe this still hasn’t been implemented on MongoDB when it’s fairly common in other GraphQL APIs.There is a feature request here that’s worth voting on.",
"username": "Ian"
},
{
"code": "",
"text": "+1\nIt’s so sad that there is zero progress on this. I guess, AWS AppSync or using DGraph are our only options.",
"username": "Tim46"
}
]
| How to call GraphQL API through Realm framework? | 2020-06-17T16:57:50.036Z | How to call GraphQL API through Realm framework? | 8,260 |
null | [
"compass",
"atlas-search"
]
| [
{
"code": "{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"LOKATION\": {\n \"type\": \"geo\"\n } } } }\n{\n \"_id\": {\n \"$oid\": \"...\"\n },\n \"ID\": 1,\n \"COLL\": 0,\n \"GROUP\": 0,\n \"INDUSTRY\": \"ABB\",\n <... other fields ...>,\n \"LOKATION\": {\n \"type\": \"Point\",\n \"coordinates\": [{\n \"$numberDecimal\": \"12.071365000000000\"\n }, {\n \"$numberDecimal\": \"45.781243000000000\"\n }]\n },\n <... other fields ...>\n}\n[\n {\n '$search': {\n 'index': 'geo4', \n 'geoWithin': {\n 'box': {\n 'bottomLeft': {\n 'type': 'Point', \n 'coordinates': [\n 0, 0\n ]\n }, \n 'topRight': {\n 'type': 'Point', \n 'coordinates': [\n 90, 90\n ]\n }\n }, \n 'path': 'LOKATION'\n }\n }\n }\n]\n",
"text": "Hello,\nI tried to create a first Atlas Search index with a “geo” type of field according to this syntax:The index, incidentally, is named “geo4”.\nThe data in the underlying database contains about 1000 documents, each of which has the following structure (most fields omitted – content mainly cut+pasted from Compass JSON view of a document:I’ve tried many different queries (esp with “circle”), all returning 0 – the following is the last one, tried both from Compass and from the mongodb shell – the parameters may seem quite extreme, but I was trying to make sure I’d hit at least one document, as the one partially reported above:Note that from Atlas, the search index is shown to have 100% of documents indexed.\nNote also that I have previously created a 2dsphere index on a previous version of the LOKATION field where the field was an Array of lon, lat values, as required – it worked just fine on the same data.Anyone has a hint of what’s going wrong?Thanks\nstefano-",
"username": "Stefano_Odoardi"
},
{
"code": "{ \"$numberDecimal\": \"<...some value...>\" }",
"text": "Well, for anyone who might stumble on this in the future:I’ve fixed the issue, and it had nothing to do with the index setup or the query syntax and parameters.\nThe problem was with the type the lon and lat values in the LOKATION.coordinates field were assigned upon creation. The data in fact comes from a mongoimport operation, and the lon/lat values were specified there to be of type decimal().\nIt turns out that this way, as shown above, the resulting lon and lat values are represented as:\n{ \"$numberDecimal\": \"<...some value...>\" }\ninstead of just decimal numbers.\nWhile this is apparently not an issue with 2dsphere indexes (which I had set up and successfully used on the same data, imported as decimal() as well), it doesn’t work with “geo” index fields in Atlas Search indexes.\nFor me, the solution was to mongoimport that data with a type of “auto()”.",
"username": "Stefano_Odoardi"
},
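If re-importing is not an option, one possible alternative (an untested sketch; the collection name is hypothetical and it assumes MongoDB 4.2+ for pipeline-style updates) is to convert the already-imported Decimal128 coordinates to plain doubles in place.

```javascript
// Convert Decimal128 longitude/latitude values to doubles so the Atlas Search
// "geo" field can index them.
db.places.updateMany(
  { "LOKATION.coordinates.0": { $type: "decimal" } },
  [
    {
      $set: {
        "LOKATION.coordinates": {
          $map: { input: "$LOKATION.coordinates", as: "c", in: { $toDouble: "$$c" } }
        }
      }
    }
  ]
);
```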
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| geoWithin $search on Atlas "geo" indexed field returning nothing | 2022-11-25T23:23:29.124Z | geoWithin $search on Atlas “geo” indexed field returning nothing | 1,432 |
null | []
| [
{
"code": "",
"text": "Hi,MongoDB version 4.4.14\nI’ve been trying to find all current active users.\nI located the effectiveUsers when running db.currentOp() but when I run db.currentOp().effectiveUsers I get nothing. What am I doing wrong here?\nThanks!",
"username": "Shalom_Sagges"
},
{
"code": "effectiveUsersdb.currentOp().inprog.forEach((op) => {if(op.effectiveUsers) {op.effectiveUsers.forEach(eUser => print(eUser.user))}} );truecurrentOp",
"text": "Hello @Shalom_Sagges,effectiveUsers is a nested field, so try this -db.currentOp().inprog.forEach((op) => {if(op.effectiveUsers) {op.effectiveUsers.forEach(eUser => print(eUser.user))}} );Additionally, if you like to see idle connections and system operations, pass in argument true to currentOpHope this helps.Thanks,\nMahi",
"username": "Mahi_Satyanarayana"
},
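A related hedged sketch: the same information can also be pulled through the $currentOp aggregation stage, which allows server-side filtering instead of post-processing the db.currentOp() output (this assumes the shell user has the required privileges on the admin database).

```javascript
// List the effective users of current operations via the $currentOp stage.
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleConnections: true } },
  { $match: { effectiveUsers: { $exists: true } } },
  { $project: { "effectiveUsers.user": 1, op: 1, ns: 1 } }
]);
```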
{
"code": "",
"text": "Amazing!\nThis most definitely helps.\nThanks a lot @Mahi_Satyanarayana ",
"username": "Shalom_Sagges"
}
]
| User Connections | 2022-11-28T16:25:46.275Z | User Connections | 953 |
null | [
"aggregation"
]
| [
{
"code": "result = db.sensordatas.aggregate(\n {\n $match: {sensorType: \"temp\"}\n },\n {\n $sort: {timeStamp: 1}\n },\n { \"$project\": {\n \"y\":{\"$year\":\"$timestamp\"},\n \"m\":{\"$month\":\"$timestamp\"},\n \"d\":{\"$dayOfMonth\":\"$timestamp\"},\n \"h\":{\"$hour\":\"$timestamp\"},\n \"sensorValue\":1,\n \"timestamp\":1,\n \"sensorType\": 1 }\n },\n { \"$group\":{\n \"_id\": { \"type\": \"$sensorType\",\"year\":\"$y\",\"month\":\"$m\",\"day\":\"$d\",\"hour\":\"$h\"},\n \"gemiddeld\":{ \"$avg\": \"$sensorValue\"}\n }\n }\n)\n {\n _id: { type: 'temp', year: 2022, month: 11, day: 25, hour: 13 },\n gemiddeld: Decimal128(\"19.6745\")\n },\n {\n _id: { type: 'temp', year: 2022, month: 11, day: 26, hour: 4 },\n gemiddeld: Decimal128(\"17.774\")\n },\n {\n _id: { type: 'temp', year: 2022, month: 11, day: 25, hour: 5 },\n gemiddeld: Decimal128(\"19.15593220338983050847457627118644\")\n },\n {\n _id: { type: 'temp', year: 2022, month: 11, day: 28, hour: 9 },\n gemiddeld: Decimal128(\"15.35220338983050847457627118644068\")\n },\n {\n _id: { type: 'temp', year: 2022, month: 11, day: 28, hour: 14 },\n gemiddeld: Decimal128(\"16.77283333333333333333333333333333\")\n },\n {\n _id: { type: 'temp', year: 2022, month: 11, day: 27, hour: 8 },\n gemiddeld: Decimal128(\"15.89933333333333333333333333333333\")\n }\n",
"text": "Absolute beginners by Bowie comes to mind…Started myself a project to learn mongoDB: Got an arduino uploading temperature, pressure and humidity every minute. Could have done that to a SQL database… but want to get aquanted with mongoDB… so the data lives on a mongoDB server.Plain old data dumps, filtered on sensor type, go quite wel. But a measurement every minute accumulates a lot of datapoints real soon. So I wanted to do some aggregation. Average per hour to start with.\nCreated this piece of code which I load into mongsh:which yields the following result:I was pretty pleased with myself getting this far. But kinda anoyed when I noticed the result come in random order. Allready tried to do a sort on it… at the begining and at the end of the chain… but to no result (I could observe)What am I doing wrong?Peter",
"username": "Peter_Kaagman"
},
{
"code": "db.testcoll.aggregate(\n{\n '$sort': { '_id.year': -1, '_id.month': -1, '_id.day': -1, '_id.hour': -1 }\n})\n[\n {\n _id: { type: 'temp', year: 2022, month: 11, day: 28, hour: 14 },\n gemiddeld: Decimal128(\"16.77283333333333333333333333333333\")\n },\n {\n _id: { type: 'temp', year: 2022, month: 11, day: 28, hour: 9 },\n gemiddeld: Decimal128(\"15.35220338983050847457627118644068\")\n },\n {\n _id: { type: 'temp', year: 2022, month: 11, day: 27, hour: 8 },\n gemiddeld: Decimal128(\"15.89933333333333333333333333333333\")\n },\n {\n _id: { type: 'temp', year: 2022, month: 11, day: 26, hour: 4 },\n gemiddeld: Decimal128(\"17.774\")\n },\n {\n _id: { type: 'temp', year: 2022, month: 11, day: 25, hour: 13 },\n gemiddeld: Decimal128(\"19.6745\")\n },\n {\n _id: { type: 'temp', year: 2022, month: 11, day: 25, hour: 5 },\n gemiddeld: Decimal128(\"19.15593220338983050847457627118644\")\n }\n]\n$addFields",
"text": "Hi Peter - Welcome to the community!Would the following work for you? I only tested it on your current output as I am not sure what the data would look like at each stage of the pipeline.Although i’m not sure what the original data looks like, perhaps another idea would be to use $addFields and have a Date value field and sort on that. Perhaps other community members have other suggestions as well Regards,\nJason",
"username": "Jason_Tran"
},
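Along the same lines, here is a hedged sketch (MongoDB 5.0+, field names assumed from the thread) that keeps the group key as a real Date by truncating the timestamp to the hour, so a plain sort on _id returns the buckets in chronological order.

```javascript
// Group per hour on a truncated Date instead of separate year/month/day/hour parts.
db.sensordatas.aggregate([
  { $match: { sensorType: "temp" } },
  {
    $group: {
      _id: { $dateTrunc: { date: "$timestamp", unit: "hour" } },
      gemiddeld: { $avg: "$sensorValue" }
    }
  },
  { $sort: { _id: 1 } }   // chronological, since _id is a Date
]);
```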
{
"code": "",
"text": "Thanks Jason… I’ll give it a try Tried it… did the trick… thank you for learning me something.",
"username": "Peter_Kaagman"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Got aggregation per hour working, but now to sort it | 2022-11-28T19:24:50.700Z | Got aggregation per hour working, but now to sort it | 1,030 |
null | [
"queries",
"indexes"
]
| [
{
"code": "",
"text": "I have a MongoDB collection with over 2.9M records and i want to remove the old data till a specific date range. & there’s only one index (i.e. _id UNIQUE) created over this collectionUpon running the following command in a shell, it is taking too much time:\n\"db.collection.remove({_id: {$in: db.collection.find({}, {_id : 1}).limit(100).sort({id:1}).toArray().map(function(doc) { return doc._id; }) }})\n\"\nSo which index is more suitable for a function “deleteMany or remove” to remove data from a column using a wildcard?Secondly, i am creating a new index right now , on the same column in ASC order with option “Create index in the backgound”. How much time the index creation will take ?",
"username": "Ahmed_Hussain"
},
{
"code": "db.collection.deleteMany({createdAt : {$gte: <START_TIMESTAMP> , $lte : <END_TIMESTAMP>}})\n",
"text": "Hi @Ahmed_Hussain ,It sounds like you should have a field in each document stating the date of the document? Is that correct?If so then you can potentially use a TTL (Time To Live) index to maintain the window of documents alive:\nHowever, in case you want to delete documents based on a timestamp field (eg. createdAt ), you can index that field and delete chunks of data :Its better to split the deletes into smaller batches to not overwhelm the database at once. Consider clearing week by week or month by month etc…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
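A minimal sketch of the TTL idea mentioned above; the field name and the 90-day retention are only examples, and it assumes documents carry a Date-typed field.

```javascript
// Documents whose createdAt is older than 90 days are removed automatically by the
// TTL monitor, so no manual bulk deletes are needed once the window is in place.
db.collection.createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 60 * 60 * 24 * 90 }
);
```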
{
"code": "",
"text": "Hi @Pavel_Duchovny ,Unfortunately we don’t have a date column in the collection but we do have a column that holds such information in this format (“1021-30-12-2021”) . The right side of this string is the date (i.e. 30-12-2021).I tried using this query ( db.collection.deleteMany({routeKey: /12-01-2022/}) ) but the query is taking too much time.Can you please suggest under this situation?",
"username": "Ahmed_Hussain"
},
{
"code": "",
"text": "the query is taking too much timeBecause you are using a regular expression that is not anchored at the beginning. This means that even if you have an index, all documents needs to be scanned to produce a result.The first error is that you do not keep your dates as date but as string. The second error is that you used the worst string representation of a date by using day-month-year. This means you cannot even use relational operators, you cannot sort your data on date.If querying with date is important you must have a date field store as a date data type. It takes less space than string, it is faster to compare than string and there is a rich library of date manipulation function.Using the right model, using the right data type and have an index are the first steps for performance.",
"username": "steevej"
},
{
"code": "",
"text": "@steevej I strongly agree with your perspective about the structure but what to do now?Once the data has been deleted, i will surely work on it.",
"username": "Ahmed_Hussain"
},
{
"code": "{ routeKey : { $gte : \"0000-12-01-2022\" , $lte : \"2400-12-01-2022\" } }\n",
"text": "What is the meaning of the first 4 digits of routeKey?If it is HHMM like hour-hour-minute-minute, then you might be able to delete one day at a time withyou would still be scanning all documents (no choice since you only have an index on _id) but no regular expression.You could also leverage the fact that the first 4 bytes of an object ID is a timestamp. See https://www.mongodb.com/docs/manual/reference/method/ObjectId/.The following is certainly time consuming and having an ascending index on a field for which you already have a descending index is probably completely useless.Do not create a new index just for your delete. It will probably take more time to 1) create the index 2) delete the documents and 3) update the index for all deleted document than just doing the delete.",
"username": "steevej"
},
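A hedged mongosh sketch of the ObjectId idea above; the cutoff date is an example, and it only applies if the _id values are default ObjectIds generated at insert time.

```javascript
// The first 4 bytes of an ObjectId encode the creation time in seconds, so documents
// created before a cutoff can be matched with an _id range and the default _id index.
const cutoff = new Date("2022-01-12T00:00:00Z");
const cutoffId = ObjectId(
  Math.floor(cutoff.getTime() / 1000).toString(16) + "0000000000000000"
);
db.collection.deleteMany({ _id: { $lt: cutoffId } });
```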
{
"code": "{ routeKey : 1}",
"text": "Hi @Ahmed_Hussain ,Why not create an index on { routeKey : 1} and perform a single day deletion as @steevej suggested and see how it performs?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Data Retrieval and then deletion | 2022-11-28T07:50:51.292Z | Data Retrieval and then deletion | 2,316 |
null | []
| [
{
"code": "",
"text": "Hey, friends!I’m Lauren Schaefer, a developer advocate at MongoDB. I’ve been with MongoDB for a little over a year, and I love helping users get started with the technology. I’m on the Content team, so I spend most of my team creating blogs and videos.I work out of my home office in Pennsylvania, which is on the east coast of the United States. I love traveling and speaking at conferences. Maybe we’ll bump into each other at a developer conference in the future. I’m a big advocate of remote work. It has enabled me to create a career I thoroughly enjoy despite living in the rural locations where my husband is employed. I gave a talk entitled Does Remote Work Really Work? at Pycon last year. Check it out.In my free time, I enjoy hanging out with my daughter and husband, watching comedies (I’m currently enjoying season 3 of The Marvelous Mrs. Maisel), and empowering and encouraging other women.You can find me on Twitter and LinkedIn. See you around the community!",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "Hi I have a few questions for a school project the first question is.\nWhat made you get into programming ?\n@Lauren_Schaefer",
"username": "Tyree_mccloud"
},
{
"code": "",
"text": "Welcome to the community! I’m glad you made your way here from TikTok! So what made me get into programming?My family got the internet when I was in middle school, and I was fascinated. I remember creating a personal webpage using a basic GUI tool and thinking how fun it was. At some point, I got interested in the code behind the webpages. I went to a bookstore (yes, a real bookstore!) and picked up HTML for Dummies. I read the whole thing and had fun building out my own personal website.When I was trying to figure out what service project to do for my Girl Scout Gold Award (equivalent to an Eagle Scout), I knew I wanted to do something with the web. My troop leader said she knew a local nonprofit that needed a website. I decided to teach myself to use Dreamweaver and Flash (Flash was super hot tech back then) to build the website for them. I had a lot of fun creating the site and then training a staff member how to update the site, so it could continue on after I left for college.When I needed to pick a college major, I really didn’t know what to pick. Math had always been my favorite subject in school, but I didn’t know what a professional mathematician really did. I had always liked computers, so I decided to roll with that. Computer Science would be my major.No one told me that Computer Science was “for guys.” I truly had no idea. My parents had no idea.I also had no idea that many of my classmates would already have programming experience. They had taken programming courses in high school (my high school didn’t offer any), or they had taught themselves to program. I remember in my first programming course the instructor asked us how many of us had programming experience. I raised my hand. A classmate asked me what I had programmed. I told him I had built websites using HTML. He informed that wasn’t real programming.When I took my second-level programming course, I discovered that programming was a male-dominated field. I was the only woman in my class. And I was shocked. Why had no one told me?I knew so little in comparison to many of my classmates, and I was constantly aware of the fact that I was the only female student.I considered switching majors many times. But I honestly couldn’t think of another major I would enjoy doing. Programming was a struggle for me initially (I remember being very confused at variable assignment – why were we saying two things were equal when they clearly were not?). I spent a LOT of time in office hours and a LOT of time with my classmates who helped me understand the material.I remember when the instructor in my second-level programming course passed out our graded exams, my classmates were shocked that I got an A. I had to work so hard in those programming courses as I felt like I was catching up with my classmates.Anyway, I never did find a major that sounded more interesting than Computer Science, so I graduated with a BS in Computer Science. I decided to stick around and get a MS in Computer Science. And then I was hired as a software engineer.Well, that was probably more info than you were looking for, but there it is. Let me know if you have any other questions. ",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "Wow, I felt a similar way when deciding what I would major in because computer science is the only thing I’m interested in because of my curiosity of how things work. You actually answered a lot of my questions that already had so thank you ! I only have two more …As you most likely know homelessness is a huge issue. If I wanted to develop an app where people can individually donate to homeless people how would developers go about starting a project like this?Lastly, why do you think programming has one of the best job outlooks in the future ?\n@Lauren_Schaefer",
"username": "Tyree_mccloud"
},
{
"code": "",
"text": "Also what programming language do you mostly use now ?\nAnd what language is best to learn when looking for a job ?\n@Lauren_Schaefer",
"username": "Tyree_mccloud"
},
{
"code": "",
"text": "Regarding the app - It’s unclear to me what stage of app development you’re in.If you’re at the very beginning, you probably need to start with some requirements gathering. What will actually help? What will be effective? Will donations be tax deductible? Why would people donate through your app rather than existing channels? Can you partner with existing organizations? etcThinking lean, you may want to start by validating your idea while building as little as possible. Can you accept donations through an existing system (mail, Paypal, gofundme, etc) to validate people are willing to give and that you have a way to effectively distribute donations.Once you’ve validated your idea, you’re ready to build. This is the fun part. If you’re looking for a platform to build your app, I recommend MongoDB Realm. It has a lot of features that make building web and mobile apps easier. Things like serverless functions, GraphQL, and a mobile database.Why do I think programming has one of the best job outlooks in the future?Every company is becoming a software company. Nearly every business needs software in order to stay competitive or move ahead. Programmers make this happen.Also what programming language do you mostly use now ?JavaScriptAnd what language is best to learn when looking for a job ?Some employers look for experience in specific programming languages while others acknowledge that, if you know one programming language, you can probably pick up another language fairly easily. I learned Java in school, but I haven’t touched it in years. Throughout my career, I’ve picked up JavaScript, Python, and PHP.I recommend checking out the Stack Overflow Developer Survey for more information on programming language trends.",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "This really helped a lot thank you and thank you for your time ! ",
"username": "Tyree_mccloud"
},
{
"code": "",
"text": "You’re welcome, @Tyree_mccloud!",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "2 posts were split to a new topic: Adjusting results of $lookup aggregation from an Array",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Lauren_Schaefer ! I am really enjoying your teaching style so I wanted to introduce myself and say thanks. The course on Schema Anti-patterns I followed with you was very well taught…and super funny. My favorite part is “Andy has acquired 32,563 lions and a country can have many policies, Finland has decide to use Lions in lue of military for it’s national defense policies” . I did learn so much as well. From why an array is NOT a good option for storing huge unbounded lists to reasons why you should drop a collection(the collection size is mostly indexes or if it’s just empty). I learned methods of restructuring data to fit use cases or the option to move to a larger cluster if necessary and the two collation strengths that provide case-insensitivity(1 and 2 obvi ). You taught us that the rule of thumb when modeling our data should be, Data that is accessed together should be stored together and that the best practices suggest considering your particular use case when determining how to model our data. SO, in conclusion, I would have to say you deserve to own Leslie Nope’s quote “I’m big enough to admit, that I often inspire myself!” That’s classic humor sandwich teaching, which I appreciate! ",
"username": "Jason_Nutt"
},
{
"code": "",
"text": "@Jason_Nutt Thank you sooooo much! Finding that other people not only enjoy your content but have retained the main points is the highest compliment. You have just made my day. \n",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "I just maxed out my hearts on that reply lol. So you made mine too! I’m going to try and make a D&D world database based on the ideas I’m getting from the Pawnee model Untied Nations presentation. Watching it again on a different platform this morning and writing all my ideas down on stickys. I love your teaching style. Any suggestions or direction for growth are welcome. You have inspired and encouraged and most importantly …we laughed lol. “ok so I din’t do model United Nations in High school…OH WAIT, WE TOTALLY DID!” ~Best quote of the lesson?",
"username": "Jason_Nutt"
},
{
"code": "",
"text": "In addition to Parks and Rec, I’m also a huge fan of Community. They did two D&D episode (I assume we’re talking about Dungeons and Dragons?)The MongoDB University Courses are awesome. I definitely recommend those.I also include a list of data modeling resources at the bottom of this blog post: A Summary of Schema Design Anti-Patterns and How to Spot Them | MongoDBIf you’re interested in Node, I have a Node Quick Start video and blog series: Connect to a MongoDB Database Using Node.js | MongoDB Blog",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "They did two D&D episode (I assume we’re talking about Dungeons and Dragons?)That is the only D & D I know of! Yay Community! My wife and I love that one too. Thanks for the direction. @Lauren_Schaefer . I’m well into the path into MongoDB University also ( I think I have done something like 7 course and data modeling & anti-pattern schema design is my favorite thing so far ), that’s what has led me to you and Anti-patterns. Super excited about the growth I am seeing with y’all!",
"username": "Jason_Nutt"
},
{
"code": "",
"text": "4 posts were split to a new topic: Haven’t received my Anti-Patterns badge yet",
"username": "Stennie_X"
},
{
"code": "",
"text": "A post was split to a new topic: Images aren’t displaying on the Champions page",
"username": "Stennie_X"
},
{
"code": "",
"text": "I need some help from you let me know when you available to I can post my question",
"username": "Sheroz_Khan"
},
{
"code": "",
"text": "Hello @Sheroz_Khan\nwelcome to the community \nWe are here to get your MongoDB questions answered. You, and the community, will most benefit from your question and the answer when you phrase your question and post it in a fitting category. I highly recommend to read the Getting started section. You will find there a lot of valuable information. Also the chance to get a fast response is much higher when you ask the community, individual persons might be busy.\nRegards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "A post was split to a new topic: Can I use MongoDB as my primary db for text search engine, that contain approx billion records",
"username": "Stennie_X"
},
{
"code": "",
"text": "A post was split to a new topic: Recommendations regarding a playground for prototyping Mongoose queries in VS code?",
"username": "Stennie_X"
}
]
| 🌱 Hey, friends! I'm Lauren | 2020-01-24T14:43:03.892Z | :seedling: Hey, friends! I’m Lauren | 24,346 |
null | [
"dot-net"
]
| [
{
"code": "",
"text": "I am using MongoDB C# driver version 2.17.0 in a prod service and found that the service faces a lot of MongoDB.Driver.MongoWaitQueueFullException every day. Initially I thought that the service was getting a lot of requests and it was hitting the wait queue limits. So, I started tracking connection count and wait queue size per service instance with events.Then I found that the service was well below the default active connection limit of 100 and the wait queue was empty most of the time too. But the exception was happening whenever there was a sharp rise in requests to open new connections to MongoDB. For example, 30 requests to open new connections in 10 seconds. I was able to re-produce this locally with k6 load testing tool. I made 100 requests gradually in 10 seconds and a small percentage (approx. 3% to 7%) of requests failed due to MongoWaitQueueFullException on multiple runs.Is it possible to avoid this error since the service is not actually hitting the wait queue limits? One potential solution that comes to my mind is to open a minimum number of connections on startup so that my service always has some available connections to spare and can deal with sharp increase in requests more gracefully. Is there any other potential solution?",
"username": "Asiful_Nobel"
},
{
"code": "MongoWaitQueueFullExceptionsWaitQueueMulitplierMaxPoolSizeMongoWaitQueueFullExceptionsmongosmaxPoolSizemaxConnectingmaxConnectingmaxPoolSizeMongoWaitQueueFullExceptionsmaxConnectingMongoClientSettings.MaxConnectingmaxConnectingMongoWaitQueueFullExceptions",
"text": "Hi, @Asiful_Nobel,Welcome to the MongoDB Community Forums. I understand that you are experiencing MongoWaitQueueFullExceptions sporadically in your production application.The default wait queue size is the WaitQueueMulitplier (default 5) times the MaxPoolSize (default 100). But what is the wait queue and why do you receive MongoWaitQueueFullExceptions? To understand this, let’s talk about server selection and connection pools…When you execute an operation on your MongoDB cluster, the first step is server selection. If the operation is a write, that write must be executed against the primary (or a mongos in a sharded cluster which will then route it to the correct primary). If it is a read, the driver will evaluate the requested read preference against the cached cluster topology to look for a suitable node. This includes the node’s state (e.g. primary, secondary, etc.), latency, max staleness, and other factors. See Server Selection for a detailed explanation.Once a server has been selected, the driver will attempt to check a connection out of the connection pool for that node. First it enters the wait queue, which specifies how many threads can block waiting for a connection. (As mentioned above the default number is 500.) If a connection is available, it will be checked out and the wait queue exited. If one is not available but the pool is not at maxPoolSize, a new connection will be established and then the wait queue exited.To help prevent connection storms, MongoDB .NET/C# Driver 2.13.0 introduced maxConnecting, which limits the number of connection establishment requests to a cluster node. maxConnecting was made configurable in 2.14.0. The default value is 2.If you are not at maxPoolSize but are seeing MongoWaitQueueFullExceptions, you may be slow to establish new connections (2 concurrent to a single cluster node) causing a lot of threads to block on connection establishment. You can try increasing maxConnecting either via the connection string or via MongoClientSettings.MaxConnecting. I would suggest trying 4 (e.g. double the default value) and gradually increasing from there to see if it resolves the issue.Hopefully tuning maxConnecting alleviates the MongoWaitQueueFullExceptions in your deployment. Please let us know the results of this tuning or if you have additional questions.Sincerely,\nJames",
"username": "James_Kovacs"
},
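To make the tuning described in the reply above concrete, here is a minimal sketch of a connection string with the pool options it mentions. The host, credentials and numbers are placeholders, and the exact option name and casing should be confirmed against the documentation for the driver version in use:

```
mongodb+srv://user:pass@cluster0.example.net/mydb?maxPoolSize=100&maxConnecting=4
```

The same value can also be set programmatically through MongoClientSettings.MaxConnecting, as noted above; a reasonable approach is to double the default (2 to 4) and keep increasing gradually while checking whether the rate of MongoWaitQueueFullException drops.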
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoWaitQueueFullException when wait queue is still empty | 2022-11-27T15:10:19.331Z | MongoWaitQueueFullException when wait queue is still empty | 4,622 |
null | [
"php"
]
| [
{
"code": " \"$rank must be specified with a top level sortBy expression with exactly one element\" \nuser_score.aggregate([\n{\"$setWindowFields\":{\"sortBy\":{\"score\":-1, \"attempts\":-1},\n\"output\":{\"scoreRank\":{\"$rank\":{}}}}}])\n",
"text": "I am getting an issue when trying to rank documents based on score and attempts:The query isIs it not possible to rank docs based on multiple fields?",
"username": "Nimisha"
},
{
"code": "$rank$addFieldsDB>db.user_score.find()\n[\n { _id: ObjectId(\"638433ca31f9fbfd0869f475\"), score: 1, attempts: 1 },\n { _id: ObjectId(\"638433ce31f9fbfd0869f476\"), score: 1, attempts: 2 },\n { _id: ObjectId(\"638433d031f9fbfd0869f477\"), score: 1, attempts: 3 },\n { _id: ObjectId(\"638433d731f9fbfd0869f478\"), score: 2, attempts: 1 },\n { _id: ObjectId(\"638433d931f9fbfd0869f479\"), score: 2, attempts: 2 },\n { _id: ObjectId(\"638433da31f9fbfd0869f47a\"), score: 2, attempts: 3 },\n { _id: ObjectId(\"638433de31f9fbfd0869f47b\"), score: 3, attempts: 3 },\n { _id: ObjectId(\"638433df31f9fbfd0869f47c\"), score: 3, attempts: 1 },\n { _id: ObjectId(\"638433e131f9fbfd0869f47d\"), score: 3, attempts: 2 },\n { _id: ObjectId(\"638433e231f9fbfd0869f47e\"), score: 3, attempts: 3 }\n]\n\"score\"\"attempts\"$setWindowFields$rankdb.user_score.aggregate([\n {\n '$addFields': { testField: { score: '$score', attempts: '$attempts' } }\n },\n {\n '$setWindowFields': {\n sortBy: { testField: -1 },\n output: { scoreRank: { '$rank': {} } }\n }\n }\n])\n[\n {\n _id: ObjectId(\"638433de31f9fbfd0869f47b\"),\n score: 3,\n attempts: 3,\n testField: { score: 3, attempts: 3 },\n scoreRank: 1\n },\n {\n _id: ObjectId(\"638433e231f9fbfd0869f47e\"),\n score: 3,\n attempts: 3,\n testField: { score: 3, attempts: 3 },\n scoreRank: 1\n },\n {\n _id: ObjectId(\"638433e131f9fbfd0869f47d\"),\n score: 3,\n attempts: 2,\n testField: { score: 3, attempts: 2 },\n scoreRank: 3\n },\n {\n _id: ObjectId(\"638433df31f9fbfd0869f47c\"),\n score: 3,\n attempts: 1,\n testField: { score: 3, attempts: 1 },\n scoreRank: 4\n },\n {\n _id: ObjectId(\"638433da31f9fbfd0869f47a\"),\n score: 2,\n attempts: 3,\n testField: { score: 2, attempts: 3 },\n scoreRank: 5\n },\n {\n _id: ObjectId(\"638433d931f9fbfd0869f479\"),\n score: 2,\n attempts: 2,\n testField: { score: 2, attempts: 2 },\n scoreRank: 6\n },\n {\n _id: ObjectId(\"638433d731f9fbfd0869f478\"),\n score: 2,\n attempts: 1,\n testField: { score: 2, attempts: 1 },\n scoreRank: 7\n },\n {\n _id: ObjectId(\"638433d031f9fbfd0869f477\"),\n score: 1,\n attempts: 3,\n testField: { score: 1, attempts: 3 },\n scoreRank: 8\n },\n {\n _id: ObjectId(\"638433ce31f9fbfd0869f476\"),\n score: 1,\n attempts: 2,\n testField: { score: 1, attempts: 2 },\n scoreRank: 9\n },\n {\n _id: ObjectId(\"638433ca31f9fbfd0869f475\"),\n score: 1,\n attempts: 1,\n testField: { score: 1, attempts: 1 },\n scoreRank: 10\n }\n]\nsortBy$rank",
"text": "Hi @Nimisha - Welcome to the community.Is it not possible to rank docs based on multiple fields?Yes, currently using $rank with multiple fields is not possible. There is currently a SERVER ticket raised for this behaviour - SERVER-56572. As noted on the ticket, a potential workaround is:to prepend an $addFields stage that places the compound set of fields into their own sub-object, then applying the window function rank on that new object.I am not sure if this would work for your use case but I had created the following documents in my test environment:I then created added an extra field with the \"score\" and \"attempts\" values within it as an object before running the $setWindowFields with $rank:In the meantime, I have also created a request to have the documentation updated to perhaps note that only a single element in the sortBy when using $rank.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| $rank must be specified with a top level sortBy expression with exactly one element | 2022-11-15T13:12:06.888Z | $rank must be specified with a top level sortBy expression with exactly one element | 2,227 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "result = list(\n collection.find({field_name: {'$regex': phrase, '$options': 'i'}}, {'_id': 0, field_name: 1}).skip(\n offset).limit(limit).distinct(field_name))\n",
"text": "field_name = taste\nphrase = pep\nexample:\ntaste:[‘pear’, ‘pepper’, ‘toffi’]my code:I would like it to go through collection and return all tastes that start with pep so expected result would be: [‘pepper’]\nbut instead it returns me the whole list [‘pear’, ‘pepper’, ‘toffi’]",
"username": "Thyme1"
},
{
"code": " result = list(collection.aggregate([\n {'$match': {field_name: {'$regex': phrase, '$options': 'i'}}},\n {\n\n '$project':\n {\n '_id': 0,\n field_name: {\n '$filter': {\n 'input': f'${field_name}',\n 'cond': {\n \"$regexMatch\": {\n 'input': \"$this\",\n 'regex': phrase,\n 'options': 'i'\n }\n }\n }\n }\n }\n },\n {'$skip': offset},\n {'$limit': limit}\n ]))\n",
"text": "Second approach with aggregation:result is [{‘taste’:[‘pepper’]},{‘taste’:[‘pepperoni’, ‘pepper’]}]How to make it a list of distinct values?",
"username": "Thyme1"
},
{
"code": "$filter$regexMatch{\n '$addFields': {\n filteredArray: {\n '$filter': {\n input: '$taste',\n cond: {\n '$regexMatch': { input: '$$taste', regex: /pep/, options: 'i' }\n },\n as: 'taste'\n }\n }\n }\n}\nDB>db.coll.find()\n[\n {\n _id: ObjectId(\"6383fb1931f9fbfd0869f470\"),\n taste: [ 'pear', 'pepper', 'toffi' ]\n },\n {\n _id: ObjectId(\"6383fd8f31f9fbfd0869f471\"),\n taste: [ 'test', 'per', 'toffi' ]\n },\n {\n _id: ObjectId(\"6383fd9f31f9fbfd0869f472\"),\n taste: [ 'testpepper', 'toffi' ]\n },\n {\n _id: ObjectId(\"6383fda631f9fbfd0869f473\"),\n taste: [ 'testpepper', 'toffi', 'pepper' ]\n }\n]\n[\n {\n _id: ObjectId(\"6383fb1931f9fbfd0869f470\"),\n taste: [ 'pear', 'pepper', 'toffi' ],\n filteredArray: [ 'pepper' ]\n },\n {\n _id: ObjectId(\"6383fd8f31f9fbfd0869f471\"),\n taste: [ 'test', 'per', 'toffi' ],\n filteredArray: []\n },\n {\n _id: ObjectId(\"6383fd9f31f9fbfd0869f472\"),\n taste: [ 'testpepper', 'toffi' ],\n filteredArray: [ 'testpepper' ]\n },\n {\n _id: ObjectId(\"6383fda631f9fbfd0869f473\"),\n taste: [ 'testpepper', 'toffi', 'pepper' ],\n filteredArray: [ 'testpepper', 'pepper' ]\n }\n]\n",
"text": "Hi @Thyme1 - Welcome to the community.I had a similar approach with the following pipeline using $filter and $regexMatch:In my test environment, I had the following documents:The output using the pipeline above:result is [{‘taste’:[‘pepper’]},{‘taste’:[‘pepperoni’, ‘pepper’]}]\nHow to make it a list of distinct values?Based off the above 4 output documents, could you describe your desired output in regards distinct values?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "$unwind$group",
"text": "In the meantime, you might possibly be able to achieve what you are after regarding the distinct values using $unwind and $group.",
"username": "Jason_Tran"
},
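A minimal mongosh sketch of the $unwind/$group approach suggested above; the collection and field names mirror the earlier examples, and the PyMongo pipeline the asker posts further down in the thread has essentially the same shape:

```javascript
// Keep only documents whose "taste" array contains at least one match,
// de-normalize the array so each element becomes its own document,
// re-apply the regex to keep only matching elements,
// then group so each distinct matching value appears once.
db.coll.aggregate([
  { $match: { taste: { $regex: "pep", $options: "i" } } },
  { $unwind: "$taste" },
  { $match: { taste: { $regex: "pep", $options: "i" } } },
  { $group: { _id: "$taste" } }
])
// e.g. returns documents such as { _id: "pepper" }, { _id: "testpepper" }
```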
{
"code": "",
"text": "I would like it to return just a list of distinct tastes matching regex so: [‘pepper’].",
"username": "Thyme1"
},
{
"code": "",
"text": "I think you are right, I will try to do it with $unwind and $group.",
"username": "Thyme1"
},
{
"code": "",
"text": "Oh, I found information that the .aggregate() method always returns Objects no matter what you do and that cannot change.My goal is to create function that will hint values that are already in database to someone typing in form.",
"username": "Thyme1"
},
{
"code": " result = list(collection.aggregate([\n # Match the possible documents. Always the best approach\n {'$match': {field_name: {'$regex': phrase, '$options': 'i'}}},\n # De-normalize the array content to separate documents\n {'$unwind': f'${field_name}'},\n # Now \"filter\" the content to actual matches\n {'$match': {field_name: {'$regex': phrase, '$options': 'i'}}},\n # Group the \"like\" terms as the \"key\"\n {\n '$group': {\n '_id': f'${field_name}'\n }},\n {'$skip': offset},\n {'$limit': limit}\n ]))\n",
"text": "I think I found satisfying result based on Mongodb distinct on a array field with regex query? - Stack Overflow :So lets say i have two documents containing taste field\ntaste: [‘pepper’, ‘pepperoni’, ‘tofffi’]\ntaste: [‘pepper’, ‘pepsomething’, ‘salt’]\nIt will give me [{‘_id’: ‘pepper’}, {‘_id’: ‘pepperoni’}, {‘_id’: ‘pepsomething’}] which is fine, because I will just extract the values",
"username": "Thyme1"
},
{
"code": "",
"text": "Sounds like you’ve gotten something that works for you. If so, please feel free to mark your comment as the solution Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Return list of values from mongo array that satisfy regex | 2022-11-27T20:26:27.051Z | Return list of values from mongo array that satisfy regex | 2,528 |
null | [
"queries"
]
| [
{
"code": " {\n _id: ObjectId(\"\"),\n id: 999,\n name: \"prod1\",\n // all other product fields\n sellers:[\n \t\t{\n \t\t\t\"_id\" : ObjectId(\"\"),\n \t\t\t\"seller_id\" : 99,\n \t\t\t\"name\" : \"Business 1\",\n // all other fields\n \"providers\":[\n {\n seller_id:99,\n provider_id:1,\n provider_name:\"prov 1\"\n quantity:50,\n order_allowed:40,\n notification_on_stock:true \n },\n {\n seller_id:99,\n provider_id:2,\n provider_name:\"prov 2\"\n quantity:20,\n order_allowed:20,\n notification_on_stock:true \n }\n ],\n // next fields\n \t\t},{\n \"_id\" : ObjectId(\"\"),\n \t\t\t\"seller_id\" : 9,\n \t\t\t\"name\" : \"Business 2\",\n // all other fields\n \"providers\":[\n {\n seller_id:9,\n provider_id:3,\n provider_name:\"prov 3\"\n quantity:50,\n order_allowed:40,\n notification_on_stock:true \n },\n {\n seller_id:9,\n provider_id:4,\n provider_name:\"prov 4\"\n quantity:20,\n order_allowed:20,\n notification_on_stock:true \n }\n ],\n // next fields\n }\n \t],\n },\n {\n _id: ObjectId(\"\"),\n id: 1000,\n name: \"prod 2\",\n // all other product fields\n sellers:[\n \t\t{\n \t\t\t\"_id\" : ObjectId(\"\"),\n \t\t\t\"seller_id\" : 44,\n \"product_id\" : 2,\n \t\t\t\"name\" : \"Business 22\",\n // all other fields\n \"providers\":[\n {\n seller_id:44,\n provider_id:3,\n provider_name:\"prov 3\"\n quantity:50,\n order_allowed:40,\n notification_on_stock:true \n },\n {\n seller_id:44,\n provider_id:4,\n provider_name:\"prov 4\"\n quantity:20,\n order_allowed:20,\n notification_on_stock:true \n }\n ],\n // next fields\n \t\t},{\n \"_id\" : ObjectId(\"\"),\n \t\t\t\"seller_id\" : 91,\n \t\t\t\"name\" : \"Business 21\",\n // all other fields\n \"providers\":[\n {\n seller_id:91,\n provider_id:1,\n provider_name:\"prov 1\"\n quantity:50,\n order_allowed:40,\n notification_on_stock:true \n },\n {\n seller_id:91,\n provider_id:2,\n provider_name:\"prov 2\"\n quantity:20,\n order_allowed:20,\n notification_on_stock:true \n }\n ],\n // next fields\n }\n \t],\n },\n {\n _id: ObjectId(\"\"),\n id: 1001,\n name: \"prod 3\",\n // all other product fields\n sellers:[\n \t\t{\n \t\t\t\"_id\" : ObjectId(\"\"),\n \t\t\t\"seller_id\" : 33,\n \t\t\t\"name\" : \"Business 112\",\n // all other fields\n \"providers\":[\n {\n seller_id:33,\n provider_id:1,\n provider_name:\"prov 1\"\n quantity:50,\n order_allowed:40,\n notification_on_stock:true \n },\n {\n seller_id:33,\n provider_id:2,\n provider_name:\"prov 2\"\n quantity:20,\n order_allowed:20,\n notification_on_stock:true \n }\n ],\n // next fields\n \t\t},{\n \"_id\" : ObjectId(\"\"),\n \t\t\t\"seller_id\" : 32,\n \t\t\t\"name\" : \"Business 2\",\n // all other fields\n \"providers\":[\n {\n seller_id:32,\n provider_id:1,\n provider_name:\"prov 1\"\n quantity:50,\n order_allowed:40,\n notification_on_stock:true \n },\n {\n seller_id:32,\n provider_id:2,\n provider_name:\"prov 2\"\n quantity:20,\n order_allowed:20,\n notification_on_stock:true \n }\n ],\n // next fields\n }\n \t],\n },\n",
"text": "Hi All,I need help regarding query optimisation for nested array of objects.We have collection with large number of documents and every single document contains nested array of objects up to 3rd level as below :Collection name : productsTotal documents in collection : 20 millionSize of each document : >= 500 kbI have added below indexes for my products collection as below,My Query :db.products.find({\n“id”: 999,\n“sellers”: { “$elemMatch”: { “providers”: { “$elemMatch”: { “seller_id”: 30098, “provider_id”: 517 } } } }\n});My issue is query always picks up the first index on field id and query took time around 800ms which I need to optimise.",
"username": "Pradip_Chavda"
},
{
"code": "db.foo.insertMany([{\n \"_id\":ObjectId(\"603ce892addf7e2a40bb301b\"), \n \"id\":999,\n \"name\":\"prod1\",\n \"sellers\":\n [{\n \"_id\": ObjectId(\"603ce892addf7e2a40bb301c\"),\n \"seller_id\": 99,\n \"name\": \"Business 1\",\n \"providers\": [{\n \"seller_id\": 99,\n \"provider_id\": 1,\n \"provider_name\": \"prov 1\",\n \"quantity\": 50,\n \"order_allowed\": 40,\n \"notification_on_stock\": true\n }, {\n \"seller_id\": 99,\n \"provider_id\": 2,\n \"provider_name\": \"prov 2\",\n \"quantity\": 20,\n \"order_allowed\": 20,\n \"notification_on_stock\": true\n }]\n }, {\n \"_id\": ObjectId(\"603ce892addf7e2a40bb301d\"),\n \"seller_id\": 9,\n \"name\": \"Business 2\",\n \"providers\": [{\n \"seller_id\": 9,\n \"provider_id\": 3,\n \"provider_name\": \"prov 3\",\n \"quantity\": 50,\n \"order_allowed\": 40,\n \"notification_on_stock\": true\n }, {\n \"seller_id\": 9,\n \"provider_id\": 4,\n \"provider_name\": \"prov 4\",\n \"quantity\": 20,\n \"order_allowed\": 20,\n \"notification_on_stock\": true\n }]\n }]\n},\n{\n \"_id\":ObjectId(\"603ce892addf7e2a40bb301e\"),\n \"id\":1000,\n \"name\":\"prod 2\",\n \"sellers\":\n [{\n \"_id\": ObjectId(\"603ce892addf7e2a40bb301f\"),\n \"seller_id\": 44,\n \"product_id\": 2,\n \"name\": \"Business 22\",\n \"providers\": [{\n \"seller_id\": 44,\n \"provider_id\": 3,\n \"provider_name\": \"prov 3\",\n \"quantity\": 50,\n \"order_allowed\": 40,\n \"notification_on_stock\": true\n }, {\n \"seller_id\": 44,\n \"provider_id\": 4,\n \"provider_name\": \"prov 4\",\n \"quantity\": 20,\n \"order_allowed\": 20,\n \"notification_on_stock\": true\n }]\n }, {\n \"_id\": ObjectId(\"603ce892addf7e2a40bb3020\"),\n \"seller_id\": 91,\n \"name\": \"Business 21\",\n \"providers\": [{\n \"seller_id\": 91,\n \"provider_id\": 1,\n \"provider_name\": \"prov 1\",\n \"quantity\": 50,\n \"order_allowed\": 40,\n \"notification_on_stock\": true\n }, {\n \"seller_id\": 91,\n \"provider_id\": 2,\n \"provider_name\": \"prov 2\",\n \"quantity\": 20,\n \"order_allowed\": 20,\n \"notification_on_stock\": true\n }]\n }]\n},\n{\n \"_id\": ObjectId(\"603ce892addf7e2a40bb3021\"),\n \"id\": 1001,\n \"name\": \"prod 3\",\n \"sellers\":\n [{\n \"_id\": ObjectId(\"603ce892addf7e2a40bb3022\"),\n \"seller_id\": 33,\n \"name\": \"Business 112\",\n \"providers\": [{\n \"seller_id\": 33,\n \"provider_id\": 1,\n \"provider_name\": \"prov 1\",\n \"quantity\": 50,\n \"order_allowed\": 40,\n \"notification_on_stock\": true\n }, {\n \"seller_id\": 33,\n \"provider_id\": 2,\n \"provider_name\": \"prov 2\",\n \"quantity\": 20,\n \"order_allowed\": 20,\n \"notification_on_stock\": true\n }]\n }, {\n \"_id\": ObjectId(\"603ce892addf7e2a40bb3023\"),\n \"seller_id\": 32,\n \"name\": \"Business 2\",\n \"providers\": [{\n \"seller_id\": 32,\n \"provider_id\": 1,\n \"provider_name\": \"prov 1\",\n \"quantity\": 50,\n \"order_allowed\": 40,\n \"notification_on_stock\": true\n }, {\n \"seller_id\": 32,\n \"provider_id\": 2,\n \"provider_name\": \"prov 2\",\n \"quantity\": 20,\n \"order_allowed\": 20,\n \"notification_on_stock\": true\n }]\n }]\n}])\ndb.foo.createIndex({id:1,\"sellers.providers.seller_id\":1, \"sellers.providers.provider_id\":1 })\nsellers.seller_idsellers.providers.seller_iddb.foo.find({id:999, \"sellers.providers\": {\"$elemMatch\": {\"seller_id\": 99, \"provider_id\": 1}}}).explain()\n{\n\t\"queryPlanner\" : {\n\t\t\"plannerVersion\" : 1,\n\t\t\"namespace\" : \"test.foo\",\n\t\t\"indexFilterSet\" : false,\n\t\t\"parsedQuery\" : {\n\t\t\t\"$and\" : 
[\n\t\t\t\t{\n\t\t\t\t\t\"sellers.providers\" : {\n\t\t\t\t\t\t\"$elemMatch\" : {\n\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"provider_id\" : {\n\t\t\t\t\t\t\t\t\t\t\"$eq\" : 1\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"seller_id\" : {\n\t\t\t\t\t\t\t\t\t\t\"$eq\" : 99\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"id\" : {\n\t\t\t\t\t\t\"$eq\" : 999\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"queryHash\" : \"E1D4971C\",\n\t\t\"planCacheKey\" : \"9B12C768\",\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"sellers.providers\" : {\n\t\t\t\t\t\"$elemMatch\" : {\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"seller_id\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 99\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"provider_id\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 1\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"id\" : 1,\n\t\t\t\t\t\"sellers.providers.seller_id\" : 1,\n\t\t\t\t\t\"sellers.providers.provider_id\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"id_1_sellers.providers.seller_id_1_sellers.providers.provider_id_1\",\n\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"id\" : [ ],\n\t\t\t\t\t\"sellers.providers.seller_id\" : [\n\t\t\t\t\t\t\"sellers\",\n\t\t\t\t\t\t\"sellers.providers\"\n\t\t\t\t\t],\n\t\t\t\t\t\"sellers.providers.provider_id\" : [\n\t\t\t\t\t\t\"sellers\",\n\t\t\t\t\t\t\"sellers.providers\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"id\" : [\n\t\t\t\t\t\t\"[999.0, 999.0]\"\n\t\t\t\t\t],\n\t\t\t\t\t\"sellers.providers.seller_id\" : [\n\t\t\t\t\t\t\"[99.0, 99.0]\"\n\t\t\t\t\t],\n\t\t\t\t\t\"sellers.providers.provider_id\" : [\n\t\t\t\t\t\t\"[1.0, 1.0]\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"rejectedPlans\" : [ ]\n\t},\n\t\"serverInfo\" : {\n\t\t\"host\" : \"hafx\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"4.4.4\",\n\t\t\"gitVersion\" : \"8db30a63db1a9d84bdcad0c83369623f708e0397\"\n\t},\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1614606674, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1614606674, 1)\n}\n{ _id:1 }\n{ _id:1, something: 1 }\n",
"text": "Hi @Pradip_Chavda and welcome in the MongoDB Community !First, I have done some cleaning on your 3 sample documents so they are easier to insert in a test collection:Then I created this index:Note that this index is different than yours. You are indexing sellers.seller_id which is one level above. In your query, you are looking for sellers.providers.seller_id which is one level deeper.I removed the first elemMatch in your query because I think it’s useless. It doesn’t change the result though, both return the same document & use the index so it’s up to you really.As you can see, it’s using the index as expected.Also, please note that having 2 indexes like the 2 below is a waste of ressources because the second one also contains the first one. So you can remove the first one & save some RAM. Any query that was using the first one can also use the second one instead.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "If doing $and,$or queries, do I need to index each of the props separately? I think I read that somewhere…\nthx ",
"username": "Eyal_Barta"
},
{
"code": "",
"text": "I’ll go with “it depends”. Please open a new topic for your question and provide context, sample documents and your queries so we can check & help. Bonus points if we can copy & paste the sample docs easily into the DB. (see the insertMany above in this thread).",
"username": "MaBeuLux88"
}
]
| MongoDB query optimisation for nested array of objects | 2021-03-01T10:25:36.571Z | MongoDB query optimisation for nested array of objects | 8,398 |
null | [
"aggregation",
"text-search"
]
| [
{
"code": "",
"text": "Hello,I use text search in the aggregation pipeline, and I face a problem. It uses too much RAM (8GB of RAM is not enough for a single aggregation call). Can I make any optimizations to reduce the memory usage?Collection metrics:Database: MongoDB 4.4",
"username": "Roman_Right"
},
{
"code": "",
"text": "Hi @Roman_Right,Probably. But I don’t see how we could potentially help you without some sample documents, index definitions and the pipeline you mentioned.Also, you are using Atlas Search here, correct?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "[\n {\n \"$match\": {\n \"$expr\": {\n \"$eq\": [\"$tag\", \"8\"]\n },\n \"$text\": {\"$search\": \"good\"}\n }\n },\n {\n \"$limit\": 10\n }\n]\n",
"text": "Hi @MaBeuLux88 ,Thank you for your reply.No, I use stand-alone MongoDB 4.4.The aggregation query is next:The document schema is {“text”: string, “tag”: string}.The “text” field is ~ 20000 symbols in length. It can be any text, I think. For the synthetic tests, I used parts of a book “20000 leagues under the sea” and it had the same results.The tag field is small (<10 symbols).The text index is set the next way: db[“my-collection”].createIndex({“text”: “text”});I created a repo with scripts, that can reproduce my problem: GitHub - roman-right/text_index_memory_usage",
"username": "Roman_Right"
},
{
"code": "",
"text": "I face this problem only in MongoDB 4.4.MongoDB 5.0 works well, Atlas (with 5.0 on board) works well too.Mb there are specific tweaks for 4.4, that I should use?",
"username": "Roman_Right"
},
{
"code": "",
"text": "Hi @Roman_Right,Sorry for the break, I had a baby !Are you familiar with the allowDiskUse option?If your cluster has 8 GB or RAM, most of it is already use by the OS, the working set, the indexes and the other queries. Your aggregation can only use whatever RAM is left. Is your cluster already maxed at 8GB of RAM constantly or there is some room left for queries and for your cluster to be healthy?I’m not sure why there is a difference between 4.4 and 5.0. It could be that your 5.0 isn’t as loaded as the 4.4 one which is in prod I’m guessing and therefore has more RAM & ressources available.Also 5.0 is, of course, an improved version since 4.4 so maybe some features are improving the performances. Maybe it’s time to plan an upgrade and say goodbye to 2020.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
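A minimal sketch of the allowDiskUse option mentioned above, applied to the pipeline shared earlier in this thread (the collection name is a placeholder). It lets memory-hungry blocking stages spill to temporary files instead of failing, though it does not by itself shrink the RAM the working set needs:

```javascript
db.myCollection.aggregate(
  [
    { $match: { $expr: { $eq: ["$tag", "8"] }, $text: { $search: "good" } } },
    { $limit: 10 }
  ],
  { allowDiskUse: true } // allow stages to write temporary data to disk
)
```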
{
"code": "guide_progresses[\n\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"_id\" : 1\n\t\t},\n\t\t\"name\" : \"_id_\"\n\t},\n\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"guide.slug\" : 1,\n\t\t\t\"last_assignment.exercise.eid\" : 1\n\t\t},\n\t\t\"name\" : \"ExBibIdIndex\"\n\t},\n\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"organization\" : 1,\n\t\t\t\"course\" : 1,\n\t\t\t\"student.uid\" : 1\n\t\t},\n\t\t\"name\" : \"organization_1_course_1_student.uid_1\"\n\t},\n\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"_fts\" : \"text\",\n\t\t\t\"_ftsx\" : 1\n\t\t},\n\t\t\"name\" : \"student.first_name_text_student.last_name_text_student.email_text\",\n\t\t\"default_language\" : \"english\",\n\t\t\"language_override\" : \"language\",\n\t\t\"weights\" : {\n\t\t\t\"student.email\" : 1,\n\t\t\t\"student.first_name\" : 1,\n\t\t\t\"student.last_name\" : 1\n\t\t},\n\t\t\"textIndexVersion\" : 3\n\t},\n\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"organization\" : 1,\n\t\t\t\"course\" : 1,\n\t\t\t\"guide.slug\" : 1,\n\t\t\t\"student.uid\" : 1\n\t\t},\n\t\t\"name\" : \"organization_1_course_1_guide.slug_1_student.uid_1\"\n\t}\n]\ndb.guide_progresses.aggregate([\n {\n \"$match\":\n {\n \"organization\": \"wwwwwwwwwwwwww\",\n \"course\": \"xxxxxxxxxxxxxx\",\n \"guide.slug\": \"yyyyyyyyyyyyyy\",\n \"detached\":\n {\n \"$exists\": false\n },\n \"$text\":\n {\n \"$search\": \"\\\"zzzzzzzzzzzzzz\\\"\"\n }\n }\n },\n {\n \"$sort\":\n {\n \"stats.passed\": 1,\n \"stats.passed_with_warnings\": 1,\n \"stats.failed\": 1,\n \"student.last_name\": 1,\n \"student.first_name\": 1\n }\n },\n {\n \"$project\":\n {\n \"_id\": 0,\n \"assignments\": 0,\n \"notifications\": 0,\n \"guide._id\": 0,\n \"student._id\": 0,\n \"last_assignment._id\": 0,\n \"last_assignment.guide._id\": 0,\n \"last_assignment.exercise._id\": 0,\n \"last_assignment.submission._id\": 0\n }\n },\n {\n \"$facet\":\n {\n \"results\":\n [\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 30\n }\n ],\n \"total\":\n [\n {\n \"$count\": \"count\"\n }\n ]\n }\n }\n],\n{\n \"allowDiskUse\": true\n})\n{\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"query\" : {\n\t\t\t\t\t\"organization\" : \"wwwwwwwwwwwwww\",\n\t\t\t\t\t\"course\" : \"xxxxxxxxxxxxxx\",\n\t\t\t\t\t\"guide.slug\" : \"yyyyyyyyyyyyyy\",\n\t\t\t\t\t\"detached\" : {\n\t\t\t\t\t\t\"$exists\" : false\n\t\t\t\t\t},\n\t\t\t\t\t\"$text\" : {\n\t\t\t\t\t\t\"$search\" : \"\\\"zzzzzzzzzzzzzz\\\"\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"fields\" : {\n\t\t\t\t\t\"$textScore\" : {\n\t\t\t\t\t\t\"$meta\" : \"textScore\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"classroom.guide_progresses\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"course\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : \"xxxxxxxxxxxxxx\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"guide.slug\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : \"yyyyyyyyyyyyyy\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"organization\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : \"wwwwwwwwwwwwww\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"$nor\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"detached\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$exists\" : true\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"$text\" : {\n\t\t\t\t\t\t\t\t\t\"$search\" : 
\"\\\"zzzzzzzzzzzzzz\\\"\",\n\t\t\t\t\t\t\t\t\t\"$language\" : \"english\",\n\t\t\t\t\t\t\t\t\t\"$caseSensitive\" : false,\n\t\t\t\t\t\t\t\t\t\"$diacriticSensitive\" : false\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"PROJECTION\",\n\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\"$textScore\" : {\n\t\t\t\t\t\t\t\t\"$meta\" : \"textScore\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"course\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"xxxxxxxxxxxxxx\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"guide.slug\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"yyyyyyyyyyyyyy\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"organization\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"wwwwwwwwwwwwww\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"$nor\" : [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"detached\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$exists\" : true\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"TEXT\",\n\t\t\t\t\t\t\t\t\"indexPrefix\" : {\n\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"student.first_name_text_student.last_name_text_student.email_text\",\n\t\t\t\t\t\t\t\t\"parsedTextQuery\" : {\n\t\t\t\t\t\t\t\t\t\"terms\" : [\n\t\t\t\t\t\t\t\t\t\t\"zzzzzzzzzzzzzz\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"negatedTerms\" : [ ],\n\t\t\t\t\t\t\t\t\t\"phrases\" : [\n\t\t\t\t\t\t\t\t\t\t\"zzzzzzzzzzzzzz\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"negatedPhrases\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"textIndexVersion\" : 3,\n\t\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\t\"stage\" : \"TEXT_MATCH\",\n\t\t\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\t\t\"stage\" : \"TEXT_OR\",\n\t\t\t\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"_fts\" : \"text\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"_ftsx\" : 1\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t\"indexName\" : \"student.first_name_text_student.last_name_text_student.email_text\",\n\t\t\t\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\t\t\t\"indexVersion\" : 1,\n\t\t\t\t\t\t\t\t\t\t\t\"direction\" : \"backward\",\n\t\t\t\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [ ]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$sort\" : {\n\t\t\t\t\"sortKey\" : {\n\t\t\t\t\t\"stats.passed\" : 1,\n\t\t\t\t\t\"stats.passed_with_warnings\" : 1,\n\t\t\t\t\t\"stats.failed\" : 1,\n\t\t\t\t\t\"student.last_name\" : 1,\n\t\t\t\t\t\"student.first_name\" : 1\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$project\" : {\n\t\t\t\t\"notifications\" : false,\n\t\t\t\t\"assignments\" : false,\n\t\t\t\t\"_id\" : false,\n\t\t\t\t\"student\" : {\n\t\t\t\t\t\"_id\" : false\n\t\t\t\t},\n\t\t\t\t\"last_assignment\" : 
{\n\t\t\t\t\t\"_id\" : false,\n\t\t\t\t\t\"exercise\" : {\n\t\t\t\t\t\t\"_id\" : false\n\t\t\t\t\t},\n\t\t\t\t\t\"submission\" : {\n\t\t\t\t\t\t\"_id\" : false\n\t\t\t\t\t},\n\t\t\t\t\t\"guide\" : {\n\t\t\t\t\t\t\"_id\" : false\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"guide\" : {\n\t\t\t\t\t\"_id\" : false\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$facet\" : {\n\t\t\t\t\"results\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$limit\" : NumberLong(30)\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t\"total\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$group\" : {\n\t\t\t\t\t\t\t\"_id\" : {\n\t\t\t\t\t\t\t\t\"$const\" : null\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"count\" : {\n\t\t\t\t\t\t\t\t\"$sum\" : {\n\t\t\t\t\t\t\t\t\t\"$const\" : 1\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$project\" : {\n\t\t\t\t\t\t\t\"_id\" : false,\n\t\t\t\t\t\t\t\"count\" : true\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t],\n\t\"ok\" : 1\n}\n{\n \"$match\":\n {\n \"organization\": \"wwwwwwwwwwwwww\",\n \"course\": \"xxxxxxxxxxxxxx\",\n \"guide.slug\": \"yyyyyyyyyyyyyy\",\n \"detached\":\n {\n \"$exists\": false\n },\n \"$or\": \n [\n { \n \"first_name\": /zzzzzzzzzzzzzz/\n },\n { \n \"last_name\": /zzzzzzzzzzzzzz/\n },\n { \n \"email\": /zzzzzzzzzzzzzz/\n },\n ]\n }\n }\n{\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"query\" : {\n\t\t\t\t\t\"organization\" : \"wwwwwwwwwwwwww\",\n\t\t\t\t\t\"course\" : \"xxxxxxxxxxxxxx\",\n\t\t\t\t\t\"guide.slug\" : \"yyyyyyyyyyyyyy\",\n\t\t\t\t\t\"detached\" : {\n\t\t\t\t\t\t\"$exists\" : false\n\t\t\t\t\t},\n\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"first_name\" : /zzzzzzzzzzzzzz/\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"last_name\" : /zzzzzzzzzzzzzz/\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"email\" : /zzzzzzzzzzzzzz/\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"classroom.guide_progresses\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"email\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$regex\" : \"zzzzzzzzzzzzzz\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"first_name\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$regex\" : \"zzzzzzzzzzzzzz\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"last_name\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$regex\" : \"zzzzzzzzzzzzzz\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"course\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : \"xxxxxxxxxxxxxx\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"guide.slug\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : \"yyyyyyyyyyyyyy\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"organization\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : \"wwwwwwwwwwwwww\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"$nor\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"detached\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$exists\" : true\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"CACHED_PLAN\",\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"$and\" : 
[\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"email\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$regex\" : \"zzzzzzzzzzzzzz\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"first_name\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$regex\" : \"zzzzzzzzzzzzzz\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"last_name\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$regex\" : \"zzzzzzzzzzzzzz\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"$nor\" : [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"detached\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$exists\" : true\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"organization\" : 1,\n\t\t\t\t\t\t\t\t\t\"course\" : 1,\n\t\t\t\t\t\t\t\t\t\"guide.slug\" : 1,\n\t\t\t\t\t\t\t\t\t\"student.uid\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"organization_1_course_1_guide.slug_1_student.uid_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"organization\" : [ ],\n\t\t\t\t\t\t\t\t\t\"course\" : [ ],\n\t\t\t\t\t\t\t\t\t\"guide.slug\" : [ ],\n\t\t\t\t\t\t\t\t\t\"student.uid\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 1,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"organization\" : [\n\t\t\t\t\t\t\t\t\t\t\"[\\\"wwwwwwwwwwwwww\\\", \\\"wwwwwwwwwwwwww\\\"]\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"course\" : [\n\t\t\t\t\t\t\t\t\t\t\"[\\\"xxxxxxxxxxxxxx\\\", \\\"xxxxxxxxxxxxxx\\\"]\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"guide.slug\" : [\n\t\t\t\t\t\t\t\t\t\t\"[\\\"yyyyyyyyyyyyyy\\\", \\\"yyyyyyyyyyyyyy\\\"]\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"student.uid\" : [\n\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [ ]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$sort\" : {\n\t\t\t\t\"sortKey\" : {\n\t\t\t\t\t\"stats.passed\" : 1,\n\t\t\t\t\t\"stats.passed_with_warnings\" : 1,\n\t\t\t\t\t\"stats.failed\" : 1,\n\t\t\t\t\t\"student.last_name\" : 1,\n\t\t\t\t\t\"student.first_name\" : 1\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$project\" : {\n\t\t\t\t\"notifications\" : false,\n\t\t\t\t\"assignments\" : false,\n\t\t\t\t\"_id\" : false,\n\t\t\t\t\"student\" : {\n\t\t\t\t\t\"_id\" : false\n\t\t\t\t},\n\t\t\t\t\"last_assignment\" : {\n\t\t\t\t\t\"_id\" : false,\n\t\t\t\t\t\"exercise\" : {\n\t\t\t\t\t\t\"_id\" : false\n\t\t\t\t\t},\n\t\t\t\t\t\"submission\" : {\n\t\t\t\t\t\t\"_id\" : false\n\t\t\t\t\t},\n\t\t\t\t\t\"guide\" : {\n\t\t\t\t\t\t\"_id\" : false\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"guide\" : {\n\t\t\t\t\t\"_id\" : false\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$facet\" : {\n\t\t\t\t\"results\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$limit\" : NumberLong(30)\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t\"total\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$group\" : {\n\t\t\t\t\t\t\t\"_id\" : {\n\t\t\t\t\t\t\t\t\"$const\" : 
null\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"count\" : {\n\t\t\t\t\t\t\t\t\"$sum\" : {\n\t\t\t\t\t\t\t\t\t\"$const\" : 1\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$project\" : {\n\t\t\t\t\t\t\t\"_id\" : false,\n\t\t\t\t\t\t\t\"count\" : true\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t],\n\t\"ok\" : 1\n}\norganization_1_course_1_guide.slug_1_student.uid_1",
"text": "Sorry for reviving an old thread, but we just ran into this problem on a MongoDB 4.4 replica set and after running some tests we came to a solution that seems somewhat silly? I was hoping to get some insight if possible. Let me know if you’d rather I opened a new thread altogether!First some context:\nWe have a collection guide_progresses with around ~2M documents, and the following indices:Our application, as part of its regular work flow, runs the following query:Now, here comes the part I could use some insight with; upon asking for the query plan for the previous query, it returns the following:Meaning, if I’m understanding that correctly, that Mongo seems to be ignoring all other possible index scans, and using only the text search index.\nThis also means that a full text search is done over our whole 2M document collection, which seems to shoot RAM usage up extremely fast, which in our case ended up causing thrasing, slow performance across all other queries too, and after a little while, a server restart.As a temporary solution to this we’re found that changing the aforementioned query’s match stage to the following:Comes up with a much more desirable query plan:I realise this is not a perfect replacement of the text search feature, although depending on the use case and tuning the regexp similar results (if not exactly the same) can be obtained. This way however, instead of running the full text search over the whole 2M document collection, it is first filtered via our organization_1_course_1_guide.slug_1_student.uid_1 index, resulting in the “text search” to be executed (in our use case) only over a couple thousand documents, and negligible RAM usage and response times.So I was left wondering, is this the expected behavior? Does the text search feature just not play well with other indices? Or maybe we don’t have our text index properly setup?Any help on this would be greatly appreciated!",
"username": "Julian_Berbel_Alt"
},
{
"code": "",
"text": "Sorry for the long reply.Congratulations! Yes, finally we decided to move to 5 as in addition to this there are many other critical updates there. Thank you for your help! ",
"username": "Roman_Right"
},
{
"code": "",
"text": "Hi @Julian_Berbel_Alt,If this is still a problem for you, I would create a new topic indeed as I think there is an entire conversation to have here with someone that knows FTS in and out. Maybe @Karen_Huaulme or @Erik_Hatcher.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
]
| Text search uses too much RAM | 2022-04-28T17:45:54.341Z | Text search uses too much RAM | 4,685 |
null | [
"containers",
"php"
]
| [
{
"code": "",
"text": "Experiencing Error ‘MongoDB\\Driver\\Manager’ not found when migrating from PHP 7.4 to 8.1 using docker on Debian 11 in Azure, Kindly assist with resolution or tips to resolve this please.I perform the below with no resolution:Make sure you has installed php mongodb\nsudo apt install php8.1-mongodbTo do this in Ubuntu, you’ll need the php-pear plugin and php-devsudo apt install php-dev php-pearthen you can run:\nsudo pecl install mongodbthen add “extension=mongodb.so” to php.ini\nthen restart your php serviceservice php8.1-fpm restartOrservice php-fpm restarthttps://www.php.net/manual/en/mongodb.installation.pecl.php#125027Below was the outcome:Reading package lists… Done\nBuilding dependency tree… Done\nReading state information… Done\nPackage php-pear is not available, but is referred to by another package.\nThis may mean that the package is missing, has been obsoleted, or\nis only available from another sourcePackage php-dev is not available, but is referred to by another package.\nThis may mean that the package is missing, has been obsoleted, or\nis only available from another sourceE: Package ‘php-dev’ has no installation candidate\nE: Package ‘php-pear’ has no installation candidate",
"username": "Daniel_Njoku"
},
{
"code": "peclpecl install mongodb",
"text": "Hello @Daniel_Njoku and welcome to the MongoDB developer community!Each version of PHP require a different .so PHP extension file. If you have the pecl command you can install the extension by running pecl install mongodbIf your webserver is up and running, I suggest using phpinfo() to checkWe have a great startup article for referenceGetting Started with MongoDB and PHP - Part 1 - Setupphpinfo() reference\nhttps://www.php.net/manual/en/function.phpinfo.php",
"username": "Hubert_Nguyen1"
},
{
"code": "",
"text": "This topic was automatically closed after 180 days. New replies are no longer allowed.",
"username": "system"
}
]
| \Error 'MongoDB\\Driver\\Manager' not found when migrating from PHP 7.4 to 8.1 | 2022-11-28T14:26:20.443Z | \Error ‘MongoDB\\Driver\\Manager’ not found when migrating from PHP 7.4 to 8.1 | 3,642 |
null | [
"python"
]
| [
{
"code": "",
"text": "It seems that every script I run that uses a MongoClient with auto_encryption_opts succeeds but ends with an exception when mongocrypt tries to close.The exception is:\nException ignored in: <function MongoCrypt._ del _ at 0x00000136DAFAAB00>\nTraceback (most recent call last):\nFile “C:\\git\\insights\\venv\\lib\\site-packages\\pymongocrypt\\mongocrypt.py”, line 296, in _ del _\nFile “C:\\git\\insights\\venv\\lib\\site-packages\\pymongocrypt\\mongocrypt.py”, line 292, in close\nAttributeError: ‘NoneType’ object has no attribute ‘mongocrypt_destroy’",
"username": "Harel_Danieli"
},
{
"code": "",
"text": "Thanks for reporting this issue. We’re tracking this issue in https://jira.mongodb.org/browse/PYTHON-3530. Please follow the jira ticket for updates.",
"username": "Shane"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Exception appears after running any script that connects to MongoDB | 2022-11-21T16:30:50.950Z | Exception appears after running any script that connects to MongoDB | 1,253 |
[
"monitoring"
]
| [
{
"code": "",
"text": "I can see the number of active connections are only 2 which include the current terminal. with just 2 active connections the system consuming lesser CPU and higher memory(all available) ! Is this a expected behavior. What is the need of consuming all memory with minimal operation?As you mentioned the crud operation happening continuously can you please let know from which IP the traffic is more , so that I can check locally. Is there a way we can check ? where it shows the active traffic from client?.Also this behavior is consistent . The metrics shared is just for 2 day but mongodb in this lab never releases memory (always 98%). Would like to understand more.PRIMARY> db.serverStatus().connections{ “current” : 151, “available” : 838709, “totalCreated” : 98058, “active” : 2 }Below image in my local set up clearly shows the virtual memory increased from 6GB to 8GB during insert operation adn after that remained high at 8GB despite no active connections or any curd operation.This memory came down when a restart of mongo triggered. Why the behavior is like this ?is this not expected that mongo should release the memory if there are no operations ?image-2021-05-21-15-19-28-1061862×789 75 KB",
"username": "S_P"
},
{
"code": "",
"text": "Hi @S_P and welcome in the MongoDB Community !MongoDB needs memory for several things:Which is, of course, on top of what your OS is consuming. So for example, when you run a query, it will first create a connection which will consume RAM and then release it (if you close it…). But it will also consume RAM to run the query and retrieve documents from disk which will then stay in the working set (most frequently used documents), until they are replaced at some point by more recently needed documents.Indexes can also get smaller or bigger of course, but they need to fit in RAM to ensure good performances.Usually, in most use cases, about 10-20% of RAM compared to the cluster size is about right. So 10 to 20GB or RAM for 100GB of data is about right.With only 8 GB of RAM and a bunch allocated for the OS, I guess you shouldn’t have more than 60GB or so of data without too many indexes and large in-memory sorts and aggregations. You would require more to support theses correctly.MongoDB tends to use all the RAM available to keep documents in RAM & avoid disk accesses. Too many IOPS can potentially be solved by adding more RAM as less docs would be evicted from the RAM too early and would need to be fetched from disk.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
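To make the point above concrete, here is a small, hedged PyMongo sketch (the connection string is a placeholder) for checking how full the WiredTiger cache actually is and how many connections are open, which is useful for telling "RAM held by the cache on purpose" apart from a real problem.

```python
# Inspect serverStatus: connection counts and WiredTiger cache usage vs. its maximum.
from pymongo import MongoClient

with MongoClient("mongodb://localhost:27017") as client:     # placeholder URI
    status = client.admin.command("serverStatus")
    cache = status["wiredTiger"]["cache"]
    print("connections:", status["connections"])
    print("cache in use (GB): %.2f" % (cache["bytes currently in the cache"] / 1024**3))
    print("cache maximum (GB): %.2f" % (cache["maximum bytes configured"] / 1024**3))
```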
{
"code": "",
"text": "ch will consume RAM and then release it (if you close it…). But it will also consumeHi, I am facing a similar issue, can you offer some help ?\nI am using pymongo to operate a mongo database, but after i quit the python program, the mongo server still take up the memory? how can i release this ? I can only restart the server now… In contrast, when using the mongo compass to operate data, the memory released just as i close the compass.",
"username": "Kyrie_Yan"
},
{
"code": "",
"text": "Hi @Kyrie_Yan,Sorry for the delay, I was on extended paternity leave.When you close MongoDB Compass, the only thing that would affect the RAM in the MongoDB cluster is the RAM that was used by the TCP connections opened by the program. MongoDB Compass probably close the connection properly when the program ends. Are you closing the connections properly when your Python script ends?The rest of the RAM wouldn’t be impacted.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
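In PyMongo specifically, the usual way to make sure those TCP connections are released when the script ends is to close the client explicitly, or use it as a context manager: a minimal sketch with a placeholder URI follows.

```python
from pymongo import MongoClient

# The context manager closes the connection pool on exit, much like Compass
# closing its connections when the application quits.
with MongoClient("mongodb://localhost:27017") as client:     # placeholder URI
    client.mydb.mycoll.insert_one({"ping": 1})
# client.close() is called automatically here
```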
]
| Mongo memory not released post usage as per observation | 2021-05-31T07:55:00.649Z | Mongo memory not released post usage as per observation | 6,436 |
|
null | []
| [
{
"code": "",
"text": "Hi,\nWe are planning to implement MONGO DB. Approx. 100 writes per month.How can we estimate RAM and DB Size initially. Please provide me with some docs as to what\nare the parameters that’s needs to be considered for planning this.",
"username": "Ramya_Navaneeth"
},
{
"code": "100000000/(60×60×24×31) = 37.34",
"text": "Hi @Ramya_Navaneeth,100 millions writes per month isn’t a lot if the workload is evenly distributed across the months.100000000/(60×60×24×31) = 37.34 writes/sec on average.The first question are:MongoDB Atlas Tiers are usually a good place to start. It gives you a good idea of the ideal ratio of CPU/Disk/RAM that you need.\nimage876×1404 154 KB\nUsually 15 to 20% of your data storage as RAM is a good starting point for a healthy cluster.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
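As a quick illustration of the arithmetic above (the data size used below is an assumption for the example, not a recommendation):

```python
# Back-of-the-envelope sizing, assuming writes are spread evenly over a 31-day month.
writes_per_month = 100_000_000
writes_per_sec = writes_per_month / (60 * 60 * 24 * 31)
print(f"{writes_per_sec:.2f} writes/sec on average")          # ~37.34

data_size_gb = 100                                             # hypothetical data size
ram_low, ram_high = 0.15 * data_size_gb, 0.20 * data_size_gb
print(f"RAM rule of thumb: {ram_low:.0f}-{ram_high:.0f} GB for {data_size_gb} GB of data")
```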
{
"code": "",
"text": "Thanks Maxime. Here the data size is approx 80G, then the estimate RAM 30G, 8 cores, 2000 iops is sufficient?",
"username": "Ramya_Navaneeth"
},
{
"code": "",
"text": "Hi Maxime,Estimate RAM, vCPUs, Storage, IOPS and how many nodes we can configure initially?",
"username": "Ramya_Navaneeth"
},
{
"code": "",
"text": "A 3 nodes Replica Set is usually enough from 0 to 2 TB of data. Unless their is another constraint (data sovereignty, low latency targets, …), usually you only need to switch to a Sharded Cluster when you reach 2 TB.If you have about 100 GB of data today, then the equivalent of an M50 should do the job just fine. I’m more worried about the growth rate per month of 20 to 30%. Your hardware will have to follow this growth as you go so your cluster isn’t missing RAM in 2 months; unless you implement an archiving policy that retires data to a cheaper storage solution (S3) as fast as you are adding some more in the cluster.That’s what Atlas Online Archive automates, so you don’t have to upgrade to a higher tier every month or so. With this solution, you only keep the hot data in the cluster.Depending on your constraints and High Availability, you could also choose to run 5 nodes instead of 3. But the minimum is 3. 5 gives you more HA during maintenance operations for instance. It also depends how you implement the backups, etc.Atlas automates and solves all these problems for you. I can’t recommend it enough .Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi Maxime,\nThanks for the detailed information. Here 20 to 30% is year growth.",
"username": "Ramya_Navaneeth"
},
{
"code": "",
"text": "Ha that’s more manageable !\nBut still, if you keep all the data in the same cluster, every couple of years you should consider an upgrade so the performances don’t deteriorate over time as the data grows.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks a lot Maxime for the detailed response.",
"username": "Ramya_Navaneeth"
},
{
"code": "",
"text": "Hi Maxime,May I know is it possible to store 60TB data in a single Replicate set? If yes, how about the performance issue? And how about the OS requirement?Yours,\nManhing",
"username": "Man_Hing_Chan"
},
{
"code": "",
"text": "Hi @Man_Hing_Chan,Sorry for the very long delay for my answer, I was on extended paternity leave.It would be nice to have this discussion in a dedicated thread (feel free to tag me @MaBeuLux88 in there).But my short answer is absolutely not. The cost of this Replica Set (RS) would be astronomic compared to a Sharded Cluster with “standard” & less pricey hardware.To manage 60TB in a RS, you would need at leastThe cost of these machines (if it’s even possible to build them) would be a lot more expensive that a sharded cluster with 4TB SSDs and 256GB RAM each and they would be full of scalability issues.Divide & conquer. There is no other way to sustain that much data without sharding.Also, let’s say one day you have a big crash and you have to restore from a backup. How long is this going to take to copy 60TB 3 times and restart mongod? See RPO & RTO.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
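To see why RPO/RTO matters at this scale, a tiny back-of-the-envelope calculation (the transfer rate below is purely an assumption, not a measurement):

```python
# Illustrative only: restore time for 60 TB at an assumed sustained transfer rate.
data_tb = 60
rate_gb_per_sec = 1.0                                  # assumed throughput, not measured
hours_per_copy = data_tb * 1024 / rate_gb_per_sec / 3600
print(f"~{hours_per_copy:.1f} hours per copy; a 3-node replica set needs 3 copies")
```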
]
| Estimate RAM, DB Size(Approx. 100 million writes per month) | 2022-04-26T12:51:23.850Z | Estimate RAM, DB Size(Approx. 100 million writes per month) | 5,279 |
null | [
"aggregation",
"realm-web"
]
| [
{
"code": "",
"text": "I am using email/password as my authentication provider for my realm web project., and my current user is null until my user is confirmed by logging in. It is at that point that my custom data object is setup. The issue is that now, my custom data object is set up each time my user logs in. How can I, for example, setup custom data on sign up, or ensure that it is setup only once? I am considering on signup, because that is certainly only going to happen once. Is there some aggregation I might use, any help would be appreciated",
"username": "Alfred_Lotsu"
},
{
"code": "",
"text": "Upon log in, read the custom user datraif it doesn’t exist, set it up.If it does exist, leave it alone?That can be done by calling the function to refresh the user data (In Swift: user.refreshCustomData) or just attempting to read that data, and if it fails, that data doesn’t exist.",
"username": "Jay"
}
]
| Set up custom data only once | 2022-11-27T20:21:16.754Z | Set up custom data only once | 1,757 |
null | [
"dot-net",
"replication",
"atlas-cluster"
]
| [
{
"code": "mongodb+srv://<user>:<pass>@cluster0.<cluster>.mongodb.netSystem.TimeoutException: A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : \"1\", ConnectionMode : \"ReplicaSet\", Type : \"ReplicaSet\", State : \"Disconnected\", Servers : [] }.\nmongodb://",
"text": "I have an Atlas cluster and I am connecting from my dotnet application using MongoDB C# Driver 2.11.2\nThe connection string is default mongodb+srv://<user>:<pass>@cluster0.<cluster>.mongodb.netWhen launching the app locally everything works fine.When I deploy it to AKS in 90% cases it failes to connect with a time-out error:If I restart a Pod multiple times it will work eventually. So it looks like some transient error.When I switch to mongodb:// scheme everything works fine in AKS. What is the possible reason of such behaviour in the cloud?",
"username": "Anton_Petrov"
},
{
"code": "",
"text": "We have been facing the same issue for the past 2 weeks. Thanks to your comments, we were able to resolve the issue by replacing the SRV scheme to mongodb:\\The issue is resolved after that. Following are the specs:\n.Net5 APIs using .Net Mongodb driver V2.14.1.Mongdb Atlas M10.We are not facing this issue in our QA or UAT environments. The only difference is that on these environments we are using free and M2 tier of Atlas.Still need to know the cause of this recurring issue by the Mongodb Atlas team.",
"username": "Asad_Khan"
},
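For readers landing here, the workaround is driver-agnostic: the legacy seed-list form skips the SRV/TXT DNS lookups entirely. The thread uses the C# driver, but the contrast is easiest to show in a short, hedged PyMongo sketch (all hostnames, credentials, and the replica-set name below are placeholders).

```python
from pymongo import MongoClient

# SRV form: requires DNS SRV/TXT lookups to discover the cluster members.
srv_uri = "mongodb+srv://user:pass@cluster0.example.mongodb.net/?retryWrites=true"

# Legacy seed-list form: members listed explicitly, no SRV resolution involved.
seed_list_uri = (
    "mongodb://user:pass@cluster0-shard-00-00.example.mongodb.net:27017,"
    "cluster0-shard-00-01.example.mongodb.net:27017,"
    "cluster0-shard-00-02.example.mongodb.net:27017/"
    "?ssl=true&replicaSet=atlas-example-shard-0&authSource=admin"
)

client = MongoClient(seed_list_uri)
print(client.admin.command("ping"))                    # sanity check the connection
```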
{
"code": "",
"text": "We have seen the same thing recently. For us it looks like a combination of the c# drivers, SRV scheme and AKS. Thanks for the tip about changing the connection strings to use the mongodb:// format, this gives us a workaround although clearly it is less flexible should we decide to add or remove members.The really annoying this is that it is intermittent so sometimes it looks like all is ok, then it breaks again.It feels like a DNS issue in the AKS stack.",
"username": "Simon_Piercy"
},
{
"code": "",
"text": "Folks can you share what version of AKS you’re experiencing this issue on? I’d like to make sure we share this back with the Microsoft / AKS teamThanks\n-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "AKS Kubernetes version: 1.21.7\nMongoDB C# Driver: 2.11.2\nDotnet SDK: 5.0\nMongo Atlas: M10",
"username": "Anton_Petrov"
},
{
"code": "",
"text": "There’s a chance the underlying issue is https://jira.mongodb.org/browse/CSHARP-4001",
"username": "Simon_Piercy"
},
{
"code": "",
"text": "Good catch, Simon: for any readers, in the interim you can always use the non-SRV connection string as a temporary workaround assuming this is indeed the issue",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Resolved in c# driver 2.15.0",
"username": "Simon_Piercy"
},
{
"code": "",
"text": "We are still experiencing this issue on one of our deployments despite upgrading to 2.15.0 (C#). We are using the SRV connection string and are running on AKS (1.22.4).A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : “1”, ConnectionMode : “ReplicaSet”, Type : “ReplicaSet”, State : “Disconnected”, Servers : [] }.We have some identical deployments in other regions that are running just fine.Any idea what the problem might be?",
"username": "Stephen_Hall"
},
{
"code": "",
"text": "Interesting, does using the legacy style non-SRV connection string (shows under older drivers in the Atlas cluster connect UI) potetntially solve the issue? if yes there has been a long-running Azure DNS resolver issue for SRV addresses that are long: I’d love it if you’d escalate this with your Azure point of contact",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Yes, using the legacy style non-SRV connection string solved the problem.",
"username": "Stephen_Hall"
},
{
"code": "",
"text": "Thank you for confirming: if you could let the Azure team know that this SRV limitation got in your way it would help us get them to prioritize this",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Is it possible for you to point to a any github issues created for “Azure DNS resolver issue for SRV addresses”. We spent a good chunk of time starting this year to figure the connectivity issues.",
"username": "Neelabh_Kher"
}
]
| Mongodv+srv connection time-out when connecting from AKS (Azure Kubernetes Service) | 2022-02-10T06:34:16.823Z | Mongodv+srv connection time-out when connecting from AKS (Azure Kubernetes Service) | 7,310 |