image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"aggregation",
"python",
"compass",
"atlas-search"
] | [
{
"code": "{'Sentence': \"Cameron Baker\", 'Embedding': [0.55, 0.89, 0.44]}import os\nimport sys\nimport pymongo\nfrom random import randint\nfrom dotenv import load_dotenv\nload_dotenv()\n\nCONNECTION_STRING_ = os.environ.get(\"CONNECTION_STRING\")\n# Connection String format: ?tls=true&authMechanism=SCRAM-SHA-256&retrywrites=true&maxIdleTimeMS=120000\"\n\nDB_NAME = \"documentsearch\"\nCOLLECTION_NAME = \"Test\"\nclient = pymongo.MongoClient(CONNECTION_STRING_)\ndb = client[DB_NAME]\ncollection = db[COLLECTION_NAME]\nquery_vector = [0.52, 0.28, 0.12] \n \npipeline = [ \n { \n \"$search\": { \n \"cosmosSearch\": { \n \"vector\": query_vector, \n \"path\": \"Embedding\", # vectorContent => Embedding\n \"k\": 2 \n }, \n \"returnStoredSource\": True \n } \n } \n] \n \nresults = collection.aggregate(pipeline) \n \nfor result in results: \n print(result) \n",
"text": "Hello, everyone. I just started learning about MongoDB.\nI have a question about accessing the data in MongoDB Compass via Python.\nI have studies the document: Quickstart: Azure Cosmos DB for MongoDB for Python with MongoDB driver.\nI stored the data in MongoDB Compass.\nThis data consist of{'Sentence': \"Cameron Baker\", 'Embedding': [0.55, 0.89, 0.44]}After that, I want to accessing the data in MongoDB Compass via Python.\nI have studies the document: Using vector search on embeddings in Azure Cosmos DB for MongoDB vCore\nI write the code for query vector as follow:When I run this code. I got the ServerSelectionTimeoutError.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 6481ed3e2a6be5698ffbd666, topology_type: Unknown, servers: [<ServerDescription (‘localhost’, 27017) server_type: Unknown, rtt: None, error=AutoReconnect(‘localhost:27017: [Errno 111] Connection refused’)>]>I will solve the problem about ServerSelectionTimeoutError?\nNote: I’ve tried changing the timeout but still can’t solve the problem.",
"username": "Jaturong_Jaitrong"
},
{
"code": "",
"text": "Hey @Jaturong_Jaitrong,Quickstart: Azure Cosmos DB for MongoDB for Python with MongoDB driver.The CosmosDB is a Microsoft product and is semi-compatible with a genuine MongoDB server. Hence, I cannot comment on how it works, or even know why it’s not behaving like a genuine MongoDB server.At the moment of this writing, CosmosDB currently passes only 33.51% of MongoDB server tests , so I would encourage you to engage CosmosDB support regarding this issue.To learn MongoDB, please refer to the MongoDB Univerisity, MongoDB DevCenter, and MongoDB Documentation.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused, Timeout: 30s | 2023-06-08T15:28:46.140Z | ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused, Timeout: 30s | 1,204 |
null | [
"replication",
"database-tools",
"backup"
] | [
{
"code": "$ /usr/bin/mongodump --username=<masteruser> --config=\"/home/<mongouser>/.MdbConf\" --authenticationDatabase=admin --out=/mnt/MongoBak/MongoBakUps/FullDump-$(date +%F.%H%M%S) --readPreference='{mode: \"secondary\", tagSets: [ { \"NickName\": \"read\" } ], maxStalenessSeconds: 120}'\n$ cat ~/.MdbConf\npassword: <masterpasswd>\nuri: mongodb://<masteruser>@mdb00.<domain>:27017,mdb01.<domain>:27017,mdb02.<domain>:27017/?authSource=admin&tls=false&replicaSet=rs0\n",
"text": "We have just converted our enterprise servers to community (v6.0.6). All is running well. What is puzzling is: when I perform a backup using mongodump, from a member of the cluster (replica set), the config db is included in the dump. However, if I perform the same backup, using the same config file, from a remote server that is not part of the replica set, the config db is not part of the dump.\nCan someone explain why?\nI suspect this was also occurring on the enterprise binaries, but I am not certain.This is the dump statement (sanitized) I am using (the user executing the command is a member of the mongod and mongodb groups in both cases):The contents of the .MdbConf file contains only the password and uri:Thanks in advance for your input",
"username": "Darrell_Cormier"
},
{
"code": "--db=<database>, -d=<database>mongodump",
"text": "from a remote server that is not part of the replica set, the config db is not part of the dump--db=<database>, -d=<database> Specifies a database to backup. If you do not specify a database, mongodump copies all databases in this instance into the dump files.Maybe that’s because your remote server doesn’t have a config database?",
"username": "Kobe_W"
}
] | Why does config db not backup from remote server | 2023-06-12T13:36:16.917Z | Why does config db not backup from remote server | 569 |
null | [
"python"
] | [
{
"code": "import requests\nimport json\n\n# Substitua pelos valores reais\nPUBLIC_KEY = 'xxxxxxxxx'\nPRIVATE_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'\nGROUP_ID = 'xxxxxxxxxxxxxxxxxxxxxxxx'\nCLUSTER_NAME = 'cloud-test'\n\n# URL base da API\nBASE_URL = 'https://cloud.mongodb.com/api/atlas/v1.0'\n\n# Autenticação\nauth = requests.auth.HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY)\n\n# Atualização das configurações do cluster\ncluster_url = f'{BASE_URL}/groups/{GROUP_ID}/clusters/{CLUSTER_NAME}'\ndata = {\n \"providerSettings\": {\n \"providerName\": \"AWS\", # Substitua pelo seu provedor (AWS, GCP, AZURE)\n \"regionName\": \"US_EAST_1\", # Substitua pela sua região\n \"instanceSizeName\": \"M20\"\n },\n \"autoScaling\": {\n \"diskGBEnabled\": True,\n \"compute\": {\n \"enabled\": True,\n \"scaleDownEnabled\": True\n }\n }\n}\n\nresponse = requests.patch(cluster_url, auth=auth, json=data)\n\n# Tratar a resposta\nif response.status_code == 200:\n print('Cluster updated successfully.')\nelse:\n print(f'Error updating cluster: {response.content}')\n\npython scale_cluster.py\nError updating cluster: b'{\"detail\":\"Compute auto-scaling min instance size required.\",\"error\":400,\"errorCode\":\"COMPUTE_AUTO_SCALING_MIN_INSTANCE_SIZE_REQUIRED\",\"parameters\":[],\"reason\":\"Bad Request\"}'\n# Atualização das configurações do cluster\ncluster_url = f'{BASE_URL}/groups/{GROUP_ID}/clusters/{CLUSTER_NAME}'\ndata = {\n \"providerSettings\": {\n \"providerName\": \"AWS\", # Substitua pelo seu provedor (AWS, GCP, AZURE)\n \"regionName\": \"US_EAST_1\", # Substitua pela sua região\n \"instanceSizeName\": \"M20\"\n },\n \"autoScaling\": {\n \"diskGBEnabled\": True,\n \"compute\": {\n \"enabled\": True,\n \"scaleDownEnabled\": True,\n \"minInstanceSize\": \"M10\",\n \"maxInstanceSize\": \"M20\"\n }\n }\n}\n\npython scale_cluster.py\nError updating cluster: b'{\"detail\":\"Invalid attribute minInstanceSize specified.\",\"error\":400,\"errorCode\":\"INVALID_ATTRIBUTE\",\"parameters\":[\"minInstanceSize\"],\"reason\":\"Bad Request\"}'\n",
"text": "Hello,\nI’m trying to develop a python script to schedule a scale up of a mongodb atlas instance, but I’m having a lot of difficulty with some steps.\nThis is my script:But I keep getting the following error:if I change to answer the message forDoes anyone in the group have any idea of this problem.",
"username": "Edson_Fernandes_Cunha"
},
{
"code": "\"autoScaling\"\"providerSettings\"data",
"text": "Hi @Edson_Fernandes_Cunha,I’m not entirely sure this is the cause of the error but can you try putting the \"autoScaling\" object inside of the \"providerSettings\" object as to match the Update Configuration of One Cluster documentation (specifically the body request)?If you’ve tried that and it doesn’t work, resend the data portion of your request after the changes and any new error messages.Regards,\nJason",
"username": "Jason_Tran"
}
] | Auto Scale APi Python eoor | 2023-06-12T14:49:52.618Z | Auto Scale APi Python eoor | 590 |
null | [] | [
{
"code": "",
"text": "Hi! In the last days I can’t connect to my free cluster, I put the my IP address in the Network IP list but always receive connection to IP ADDRESS closed.I use a VPN and never get this error.Someone can help me?",
"username": "Pedro_12290"
},
{
"code": "mongosh",
"text": "In the last days I can’t connect to my free cluster, I put the my IP address in the Network IP list but always receive connection to IP ADDRESS closed.I use a VPN and never get this error.We’ll need more information to try and help. Provide the following details:Regards,\nJason",
"username": "Jason_Tran"
}
] | Connection always closed to connect with my free cluster | 2023-06-12T20:46:35.721Z | Connection always closed to connect with my free cluster | 316 |
null | [
"node-js"
] | [
{
"code": " return process.dlopen(module, path.toNamespacedPath(filename));\n ^\n\nError: The specified module could not be found.\n.......\\node_modules\\realm\\build\\Release\\realm.node\nimport { createRequire } from \"module\";\nconst require = createRequire(import.meta.url);\nconst Realm = require('realm');\nimport * as Realm from 'realm';\nconst realm = require('realm');\n",
"text": "I cannot obtain a reference to the realm module from nodejs when either a ES6 or CJS module type.repro:ErrorFails:Also fails:Also fails:node version is:\nv16.13.1\nrealm: 10.21.1",
"username": "Joseph_Bittman"
},
{
"code": "// Prevent React Native packager from seeing modules required with this\nconst nodeRequire = require;\n\nfunction getRealmConstructor(environment) {\n switch (environment) {\n case \"node.js\":\n case \"electron\":\n ----> return nodeRequire(\"bindings\")(\"realm.node\").Realm;\n",
"text": "The realm code that is failing is lib/index.js",
"username": "Joseph_Bittman"
},
{
"code": "",
"text": "Turns out that if I downgrade realm to 10.20.0, then I can get a reference to realm. So something broke starting in 10.21.0. @Ian_WardThis is on windows 10, v16.13.1.",
"username": "Joseph_Bittman"
},
{
"code": "import Realm from \"realm\";\nconsole.log(Realm)\nError: The specified module could not be found.\n\\\\?\\C:\\Users\\C\\AppData\\Roaming\\npm\\node_modules\\realm\\build\\Release\\realm.node\n at Object.Module._extensions..node (node:internal/modules/cjs/loader:1203:18)\n at Module.load (node:internal/modules/cjs/loader:997:32)\n at Function.Module._load (node:internal/modules/cjs/loader:838:12)\n at Module.require (node:internal/modules/cjs/loader:1021:19)\n at require (node:internal/modules/cjs/helpers:103:18)\n at bindings (C:\\Users\\C\\AppData\\Roaming\\npm\\node_modules\\realm\\node_modules\\bindings\\bindings.js:112:48)\n at getRealmConstructor (C:\\Users\\C\\AppData\\Roaming\\npm\\node_modules\\realm\\lib\\index.js:28:37)\n at Object.<anonymous> (C:\\Users\\C\\AppData\\Roaming\\npm\\node_modules\\realm\\lib\\index.js:53:26)\n at Module._compile (node:internal/modules/cjs/loader:1119:14)\n at Module._extensions..js (node:internal/modules/cjs/loader:1173:10) {\n code: 'ERR_DLOPEN_FAILED'\n}\n",
"text": "The same happens on my Windows 11",
"username": "ccc"
},
{
"code": "",
"text": "Did anyone find a solution for this?I’m building an electron and Web app using Realm. In my electron app (which apparently wants me to use require - an electron issue - and not use import / esm), I can’t seem to use Realm = require(‘realm’)…Any ideas?",
"username": "d33p"
},
{
"code": "",
"text": "I had a mix of import / require in my project. When I removed all the imports and used the following for Realm, it works in Node / Electron:const Realm = require(‘realm’);",
"username": "d33p"
}
] | Cannot find module realm.node when from ES6 or CJS module | 2022-09-22T02:25:25.182Z | Cannot find module realm.node when from ES6 or CJS module | 3,027 |
[
"aggregation",
"queries",
"atlas-functions",
"atlas-search"
] | [
{
"code": " if(filter.query.category)\n {\n strCategory = filter.query.category;\n strMustData[\"must\"].push({\"text\": {\"path\": \"category\",\"query\": strCategory}});\n }\n \n const filteredItems = await collection.aggregate([\n {\n $search: {\n index: \"productIndexes\",\n \"compound\": {\n \"must\": \n strMustData.must\n }\n }\n },\n \n {\n $project: {\n \"_id\": 10,\n \"title\": 1,\n \"object\": 2,\n \"price\":3,\n \"imgArray\":6,\n \"category\": 7,\n score: { $meta: \"searchScore\" }\n }\n }\n ]).toArray();\n return filteredItems;\n",
"text": "Here is my functionI pass a category ID and get the data and it returns the data but its only returns 50000 itemsthere is more than this in the data but i am only returning the 50000 is there a way to remove this limit",
"username": "Aneurin_Jones"
},
{
"code": "",
"text": "Hi @Aneurin_Jones and welcome to MongoDB community forums!!there is more than this in the data but i am only returning the 50000 is there a way to remove this limitAs per my understanding, there is no limit on the results obtained by the query.\nHowever, to understand further could you help me with some information which would help me reproduce the issue.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "GraphQLHttpRequest mqRequest = new()\n {\n Query = @\"\n query {\n myTable (limit: \" + int.MaxValue.ToString() + @\")\n {\n _id,\n partition\n }\n }\n \"\n };\n",
"text": "Hi @Aasawari,I’m having the same issue. I’m doing the below with the C# graphql client.I have around 66k documents. It’s returning only 50k.Thanks,Tam",
"username": "Tam_Nguyen1"
},
{
"code": "",
"text": "Oh I see there’s a service limitation of 50k documents returned for standard clusters.",
"username": "Tam_Nguyen1"
}
] | Hitting a limit of 50000 items | 2023-03-24T15:11:58.002Z | Hitting a limit of 50000 items | 1,150 |
null | [
"cluster-to-cluster-sync"
] | [
{
"code": "mongosync_reserved_for_inernal_use",
"text": "We have finished a migration and successfully “COMMITED”. Is it now safe to delete the table that was created by mongosync mongosync_reserved_for_inernal_use ?",
"username": "Kay_Khan"
},
{
"code": "",
"text": "Hi Kay! It is safe to delete it now",
"username": "Alexander_Komyagin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Safe to delete temporary collection after COMMITED state | 2023-05-25T09:20:39.200Z | Safe to delete temporary collection after COMMITED state | 774 |
null | [] | [
{
"code": "",
"text": "Hi,I would like to know if I can “restart” the a sync session after user custom data has been updated.Currently I’m getting a compensating write error because changes from user custom data wasn’t reflected on the sync session.I have an example of my use case in the mongodb device sync permissions guide, which can be found in this link.Considering the example above, a user custom data has the field “subscribedTo” which is an array of ids. This field is used in a permission role, it checks if the document field is inside the “subscribedTo” array.Everything works as the guide shows, the problem appear when “subscribedTo” field is updated. At that point, I’m calling user.refreshCustomData() to get the newest addition, but even after that, if I try to insert a document, I get this compensating write error “Reporting compensating write for client version 3312 in server version 3635: Client attempted a write that is outside of permissions or query filters; it has been reverted”.The guide says that when the “subscribedTo” field is updated, the changes don’t take affect until current session is closed and new session is started.I’m wondering if I can restart the current session without closing my realm connection, since I would like to keep my sync subscriptions intact.Thank you!",
"username": "Rossicler_Junior"
},
{
"code": "",
"text": "I found a “workaround” for the issue. I’m getting the current sync session and calling “pause()” and “resume()” one right after the other, to force the sync session to stop and start again. Is this the best practice for this? Or is there a better solution for this?",
"username": "Rossicler_Junior"
},
{
"code": "",
"text": "Hello!\nCurrently, the sync server caches custom user data for the duration of the session. So, as you’ve found, pausing/resuming is a good way to restart the session, resulting in updated permissions. This is a good solution for this problem",
"username": "Sudarshan_Muralidhar"
},
{
"code": "",
"text": "Sounds good, thanks for the reply. I would still suggest to provide and example of how to handle that in the mentioned guide, or some other part of the documentation. Thanks again.",
"username": "Rossicler_Junior"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Reset sync session after user custom data update | 2023-06-07T22:01:43.586Z | Reset sync session after user custom data update | 898 |
null | [
"replication",
"python"
] | [
{
"code": "pymongo.errors.ServerSelectionTimeoutError: No replica set members match selector \"Primary()\", Timeout: 5.0s\n",
"text": "I have a M20-M30 cluster, and it scales up and down often. This time while scaling there was about 30-60min downtime.Our customers had issues with accessing our service because of this.Error example",
"username": "U_U"
},
{
"code": "",
"text": "This is definitely unexpected, and likely suggests a buggy driver version or connection misconfiguration. Have you opened a support case?",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "(for context, vertical scaling events require a rolling replacement which does require replica set level elections but your driver should seamlessly handle these elections–and retryable writes ease this burden on the write path)",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "I can’t seem to be able to open a support ticket, it says support not enabled",
"username": "U_U"
},
{
"code": "",
"text": "Weird, can you see he chat icon in the lower right? that can also be a place to ask for help",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Hello,\nDid you manage to run autoscale via python, I’m trying it here and I’m having some difficulties, would you mind sharing how you did it.",
"username": "Edson_Fernandes_Cunha"
}
] | Atlas Down during autoscale | 2022-07-22T08:57:01.091Z | Atlas Down during autoscale | 2,654 |
null | [
"ops-manager",
"kubernetes-operator"
] | [
{
"code": "",
"text": "I deployed ops manager in local mode and mongodb enterprise kubernetes operator.\nI followed the documentation and I didn’t understood the part of putting mongodb binaries in the ops manager pvc. If I have image of Mongodb which the operator know to use when deploying custom reasurce of Mongo why do I need the binaries?. I tried to put them in the pvc but I am still getting error when creating Mongodb cr. 401 unauthorized.",
"username": "ori.simhovich"
},
{
"code": "",
"text": "Hi @ori.simhovichI can’t specifically advise on the error - we’d need more information and I’d suggest opening a support case so you get quick responses to help you work this out. MongoDB Support PortalBut I can explain why you need the binaries. In the default mode the agent pulls the binary for mongod from the internet. In local mode you’re telling it to get it from Ops Manager. As a result there’s no internet connection needed, instead you need to have downloaded the binary and loaded it into the mounted volume.In Kubernetes, while we support both Local and Remote mode, it’s sometimes easier to use Remote mode, where instead of pulling the binary from the internet (default mode) or from Ops Manager (local mode), you use an HTTP(S) server. This can be easier as its often easier to load the binaries onto an HTTP(S) server than having to load it into the volume attached to Ops Manager.",
"username": "Dan_Mckean"
},
{
"code": "",
"text": "Hi @Dan_Mckean Thanks for responding!\nIs there any way to check if the ops manager in local mode detect the binaries?\nIf I enter the pod terminal I can see the fils in the mongodb-releases director. y but how can I know that the ops manager can use them?",
"username": "ori.simhovich"
},
{
"code": "",
"text": "Hi, I’m not sure there is a way to manually check that. If they’re in the directly that should work.I suggest opening a support case to help progress this ",
"username": "Dan_Mckean"
}
] | Mongodb enterprise kubernetes operator failed with ops manger | 2023-06-11T15:10:52.893Z | Mongodb enterprise kubernetes operator failed with ops manger | 640 |
[
"compass"
] | [
{
"code": "",
"text": "I am accessing my self hosted MongoDB 6 via Compass as Admin.The Performance tab does not populate.\nimage2866×1502 237 KB\nWhat privileges/roles could I be lacking ?Thanks",
"username": "Robert_Alexander"
},
{
"code": "",
"text": "Hi @Robert_Alexander,\nSince it is your own server, I would say give administrator privileges on each database, so:BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "I think I tried doing what you kindly suggested but failing In mongosh I did:use admin\ndb.grantRolesToUser(“admin”,“dbAdminAnyDatabase”)\nimage2854×1434 247 KB\n",
"username": "Robert_Alexander"
},
{
"code": "",
"text": "Hi @Robert_Alexander ,\nIn which database you’ve created the user?\nIn your connection string is set the correct db, for a correct authentication?BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "admin> show users\n[\n {\n _id: 'admin.admin',\n userId: new UUID(\"4bfc7477-b0a5-4108-bcc5-c3dfae401d0b\"),\n user: 'admin',\n db: 'admin',\n roles: [\n { role: 'readWriteAnyDatabase', db: 'admin' },\n { role: 'userAdminAnyDatabase', db: 'admin' },\n { role: 'dbAdminAnyDatabase', db: 'admin' }\n ],\n mechanisms: [ 'SCRAM-SHA-1', 'SCRAM-SHA-256' ]\n }\n",
"text": "",
"username": "Robert_Alexander"
}
] | Not able to get performance data as Admin in the performance tab of Compass | 2023-06-10T17:29:27.402Z | Not able to get performance data as Admin in the performance tab of Compass | 449 |
|
[
"node-js",
"mongoose-odm"
] | [
{
"code": "const mongoose = require('mongoose');\n\nconst connectDatabase = async () => {\n try {\n await mongoose.connect(process.env.DB_LOCAL_URI, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n });\n console.log(`MongoDB Database connected with HOST: ${mongoose.connection.host}`);\n } catch (error) {\n console.error('MongoDB connection error:', error);\n }\n};\n\nmodule.exports = connectDatabase;\nconst app = require('./app')\nconst connectDatabase = require('./config/database')\n\n\nconst dotenv = require('dotenv');\n\n// Setting up config file\ndotenv.config({ path: 'backend/config/config.env' })\n\n// Connecting to database\nconnectDatabase();\n\napp.listen(process.env.PORT, () => {\n console.log(`Server started on PORT: ${process.env.PORT} in ${process.env.NODE_ENV} mode.`)\n})\nconst mongoose = require('mongoose')\n\nconst productSchema = new mongoose.Schema({\n name: {\n type: String,\n required: [true, 'Please enter product name'],\n trim: true,\n maxLength: [100, 'Product name cannot exceed 100 characters']\n },\n price: {\n type: Number,\n required: [true, 'Please enter product price'],\n maxLength: [5, 'Product name cannot exceed 5 characters'],\n default: 0.0\n },\n description: {\n type: String,\n required: [true, 'Please enter product description'],\n },\n ratings: {\n type: Number,\n default: 0\n },\n images: [\n {\n public_id: {\n type: String,\n required: true\n },\n url: {\n type: String,\n required: true\n },\n }\n ],\n category: {\n type: String,\n required: [true, 'Please select category for this product'],\n enum: {\n values: [\n 'Electronics',\n 'Cameras',\n 'Laptops',\n 'Accessories',\n 'Headphones',\n 'Food',\n \"Books\",\n 'Clothes/Shoes',\n 'Beauty/Health',\n 'Sports',\n 'Outdoor',\n 'Home'\n ],\n message: 'Please select correct category for product'\n }\n },\n seller: {\n type: String,\n required: [true, 'Please enter product seller']\n },\n stock: {\n type: Number,\n required: [true, 'Please enter product stock'],\n maxLength: [5, 'Product name cannot exceed 5 characters'],\n default: 0\n },\n numOfReviews: {\n type: Number,\n default: 0\n },\n reviews: [\n {\n user: {\n type: mongoose.Schema.ObjectId,\n ref: 'User'\n },\n name: {\n type: String,\n required: true\n },\n rating: {\n type: Number,\n required: true\n },\n comment: {\n type: String,\n required: true\n }\n }\n ],\n user: {\n type: mongoose.Schema.ObjectId,\n ref: 'User'\n },\n createdAt: {\n type: Date,\n default: Date.now\n }\n})\n\nmodule.exports = mongoose.model('Product', productSchema);\nconst Product = require('../models/product')\n\n// Create new product => /api/v1/product/new\nexports.newProduct = async(req, res, next) => {\n\n const product = await Product.create(req.body);\n\n res.status(201).json({\n success: true,\n product\n })\n}\nexports.getProducts = (req, res, next) => {\n res.status(200).json({\n success: true,\n message: 'This route will show all products in database.'\n })\n}\n",
"text": "Hi all, I’m new to learn MERN stack and node js. However, I have been struggling the mongoose error for a long while. Hopefully some can help me out! MongoDB is connected.\nScreenshot 2023-06-07 1535201777×366 96.6 KBHere is the database.js .Here’s the server.js:This is the model, prduct.js:Here’s the controller.",
"username": "Phyllis"
},
{
"code": "",
"text": "Hey @Phyllis,Welcome to the MongoDB Community forums.Thanks for sharing the code snippets. Could you please share the error message/log you are seeing?Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | MongooseError: Operation `products.insertOne()` buffering timed out after 10000ms | 2023-06-07T19:39:59.991Z | MongooseError: Operation `products.insertOne()` buffering timed out after 10000ms | 717 |
null | [
"brisbane-mug"
] | [
{
"code": "",
"text": "Hi everyone,I’m Thiago Bernardes , and, like many of you, I am passionate about MongoDB.At the moment, I’m working full-time on some revenue strategies from a product that I launched a few months ago, and it’s getting tracking. The foundation of this product is running in MongoDB, which has been the game changer in pivoting and changing the product strategy quickly.I have been living in Brisbane for the last 16 years, and I’m originally from Brazil (let’s say I’m a Braussie). I have always been advocating for MongoDB in every company I go to, as coming from 15 years of experience with SQL Server, I can see the benefits of pivoting to MongoDB, so it’s always been a nice journey to introduce this technology into new projects.If you know people around Brisbane who use MongoDB, feel free to connect them with me I hope I can help to grow this community around SEQ,",
"username": "Thiago_Bernardes1"
},
{
"code": "",
"text": "Hey @Thiago_Bernardes1,\nWelcome to the MongoDB Community!Congrats on the successful launch of your product. I’m sure you’ll find this community to be a valuable resource for all things MongoDB.Thank you for taking the initiative to bring the Brisbane community together. Excited to see the community grow and foster knowledge sharing. ",
"username": "Harshit"
}
] | Hey all, I'm Thiago, the new MUG Leader in Brisbane, Austrália | 2023-06-12T07:18:06.986Z | Hey all, I’m Thiago, the new MUG Leader in Brisbane, Austrália | 722 |
null | [
"python",
"atlas-triggers"
] | [
{
"code": "",
"text": "Hello,I would like to know, If I can a schedule a trigger or database trigger\nif code is written in python in the external editor.For example: I have written a code using python in vs code by making use of pymongo, that function does some job and now I would like to schedule it to run the program either by timely basis or if there is any change in database.\n(there is a option in Atlas for trigger the function which supports node js, similarly would like to know\nif can do the same for the function written in python).\nAny help would be helpful.\nThank you",
"username": "SHANMUKHA_K_H"
},
{
"code": "",
"text": "Hi @SHANMUKHA_K_H and welcome to the MongoDB community forum!!If I understand your question correctly, you are looking to set up the Triggers using Python for your application.\nWe have an Admin API that allows you to construct triggers, thus combining it with a typical Python HTTP client could be useful here.https://www.mongodb.com/docs/atlas/app-services/admin/api/v3/#tag/triggers/operation/adminCreateTriggerPlease let me know if my understanding is wrong here.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Can we write the trigger function in python?\n\nimage1228×664 79.8 KB\n",
"username": "Rayudu_Dola"
}
] | Trigger a function written in python | 2022-11-23T06:42:36.861Z | Trigger a function written in python | 2,501 |
null | [
"time-series"
] | [
{
"code": "2020-03-02T01:11:18.965Z",
"text": "So to put it simply, we have a huge database of entries and the time associated with each entry is being saved in UTC in the standard format/syntax 2020-03-02T01:11:18.965ZNow there are multiple users using the application that uses VueJS as the front-end and each user has the ability to set his own time zone.Let’s say the user wants to calculate stats based on these entries for durations like Today, Yesterday, This Week, This Month.Now for the Today stat, we would need to calculate data from entries that are between 00:00 and 23:59 in this user’s set timezone but the problem is that the entries are saved in the system in the UTC timezone for all users.My question is - what is the best and most efficient (performance wise) way to carry out such calculations? The idea is to continue to keep all entries in UTC for everyone but changing timezones means that some entries might go in the previous day and some might go in the next day based on the time zone set which would affect the calculation of the stats completely.",
"username": "Dev_User"
},
{
"code": "",
"text": "Hi @Dev_User ,Since the indexed values are in UTC , before you provide the predicate to the query/aggregation you need to move personal timezone of the predicate to UTC. Since it is at most 2 dates (upper and lower bound) its a fairly light operation.Once the matching of the documents complete you can decide to move the dates into user timezone using aggregation operators like $dateToString in $project or doing it on front end side when forming vue to convert any UTC to relevant timezones…Thanks\nPavel",
"username": "Pavel_Duchovny"
}
] | Querying data according to time zone set by user when all entries are in UTC | 2023-06-10T12:11:49.023Z | Querying data according to time zone set by user when all entries are in UTC | 859 |
null | [
"queries",
"transactions"
] | [
{
"code": "await session.withTransaction(async () => {\n const coll1 = client.db('mydb1').collection('foo');\n\n await coll1.find( { qty: { $gt: 4 } }, { session } );\n \n // some code before update\n // if the retrieved data from the above, changed due to a separate write(separate session/transaction) will this transaction abort?\n\n await coll1.update( \n { id: 3 },\n { $set: { qty: 2 } },\n { session }\n )\n\n}, transactionOptions);\n",
"text": "In the sample code below, what will happen if retrieved data in the transaction changes caused by a outside/separated write operation before the transaction end or committed? does transaction automatically aborted?I read the documentation about transaction, to my understanding it mostly offer data atomicity and durability, but I can’t find any what will happen to a transaction if any read data changed caused by outside/separate write operation",
"username": "Paulo_Lucero"
},
{
"code": "",
"text": "If a transaction is in progress and a write outside the transaction modifies a document that an operation in the transaction later tries to modify, the transaction aborts because of a write conflict.If a transaction is in progress and has taken a lock to modify a document, when a write outside the transaction tries to modify the same document, the write waits until the transaction ends.from doc.",
"username": "Kobe_W"
}
] | Question on transaction isolation | 2023-06-11T09:51:05.604Z | Question on transaction isolation | 566 |
null | [
"aggregation",
"queries",
"node-js"
] | [
{
"code": "[\n {\n \"_id\": ObjectId(\"648031bd784fbf6081de41cf\"),\n \"orgId\": 1,\n \"applications\": {\n \"_id\": ObjectId(\"6479ddda073ced427d04e9dd\"),\n \"orgId\": 1,\n \"firstTimeInstalled\": [\n {\n \"refId\": ObjectId(\"648031bd784fbf6081de41cf\"),\n \"installDate\": \"2023-06-08T09:18:49.233+00:00\"\n },\n {\n \"refId\": ObjectId(\"6479ddda073ced427d04e9dd\"),\n \"installDate\": \"2023-06-08T09:18:49.233+00:00\"\n }\n ]\n }\n },\n {\n \"_id\": ObjectId(\"648031bd784fbf6081de41cd\"),\n \"orgId\": 1,\n \"applications\": {\n \"_id\": ObjectId(\"6479ddda073ced427d04e9dd\"),\n \"orgId\": 1,\n \"firstTimeInstalled\": [\n {\n \"refId\": ObjectId(\"648031bd784fbf6081de41cf\"),\n \"installDate\": \"2023-06-08T09:18:49.233+00:00\"\n },\n {\n \"refId\": ObjectId(\"6479ddda073ced427d04e9dd\"),\n \"installDate\": \"2023-06-08T09:18:49.233+00:00\"\n }\n ]\n }\n }\n]\napplications.firstTimeInstalled.refId_idrefIdfirstTimeInstalleddb.collection.aggregate([\n {\n $match: {\n \"applications.firstTimeInstalled.refId\": {\n $ne: \"$_id\"\n }\n }\n }\n])\n",
"text": "i have a dataset like this. I want to filter out the docs where any of the applications.firstTimeInstalled.refId is not equal to the _id. So in this I should get back the second doc only because in the first document, the _id is same as the first refId in firstTimeInstalledI triedbut still it is giving back both the docs.here is a demo playground",
"username": "schach_schach"
},
{
"code": "$match$not$indb.collection.aggregate([\n {\n $match: {\n $expr: {\n $not: {\n $in: [\"$_id\", \"$applications.firstTimeInstalled.refId\"]\n }\n }\n }\n }\n])\n",
"text": "Hello @schach_schach,The $match can’t allow checking the internal fields condition directly, you need to use $expr operator, $not and $in operator to match your condition,Playground",
"username": "turivishal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | I want to $match where a field of an array of objects is not equal to the _id | 2023-06-11T16:34:06.111Z | I want to $match where a field of an array of objects is not equal to the _id | 488 |
[] | [
{
"code": "",
"text": "Hi Steeve, How I can achieve it using Atlas search, need to get matching of “dodge ram” 1st\n\nsearch_query965×757 35.4 KB\n",
"username": "Shopi_Ads"
},
{
"code": "phrase",
"text": "Hi @Shopi_Ads,I’ve moved this to a new post as it is a different topic to the original post in which you replied to.In saying so, does using phrase work for you?If you require further assistance, please provide the following details:Please also provide these in text format so we can copy and paste to test in our test environment if required as opposed to screenshots.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "[\n {\n $search: {\n index: \"SearchProduct\",\n text: {\n query: \"dodge ram floor mats\",\n path: \"name\",\n fuzzy: {}\n }\n }\n }\n]\n",
"text": "Hi @Jason_Tran ,Phrase does not work as it does not operate under OR condition. We want to match if any of the word is matched in queryLet me know if you need more info\nThanks",
"username": "Shopi_Ads"
},
{
"code": "dodge> db.collection.find({},{_id:0})\n[\n { name: 'dodge' },\n { name: 'dodge ram' },\n { name: 'ram dodge 2016' },\n { name: 'dodge ram 2500' },\n { name: 'floor mats for dodge ram' }\n]\ncompoundshouldnameindexdb.collection.aggregate([\n{\n $search: {\n index: 'nameindex',\n compound: {\n should: [{\n text: {\n query : 'dodge ram floor mats',\n path: 'name'\n }\n }]\n }\n }\n},\n{\n $project: {\n _id: 0,\n 'name': 1,\n 'score': {$meta: 'searchScore'}\n }\n}\n])\nscore[\n { name: 'floor mats for dodge ram', score: 1.082603096961975 },\n { name: 'dodge ram', score: 0.19285692274570465 },\n { name: 'ram dodge 2016', score: 0.16547974944114685 },\n { name: 'dodge ram 2500', score: 0.16547974944114685 },\n { name: 'dodge', score: 0.05366339907050133 }\n]\nqueryString",
"text": "Atlas search index definition\nIndex created on NameI’ve provided further details below which may help but if not and in future, can you provide the JSON format of the index definition? The behaviour may differ if I use the default index definition in my own environment compared to your environment so testing would not be as effective.Sample document(s)\nDocuments with name : dodge, dodge ram, ram dodge 2016, dodge ram 2500, floor mats for dodge ramIn future could, you provide documents in JSON format so we can easily copy and paste them into our test environment if we need to test? This makes it easier for the people assisting so that it would be easier for them to help you.In saying so, i’ve used the default index definition for my test environment. Please see sample documents below:I then utilised the compound operator with the should clause (my index is called nameindex for this test):I’ve projected the score for your reference as well. The output is below:You might also be able to use queryString to achieve what you’re after.With any of these examples, please test thoroughly in a test environment to ensure it suits all your use case(s) and requirement(s).Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "{\n \"analyzer\": \"lucene.english\",\n \"searchAnalyzer\": \"lucene.english\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"string\"\n }\n }\n },\n \"storedSource\": {\n \"include\": [\n \"name\"\n ]\n }\n}\n",
"text": "Index definitionResults:\nI am getting right output for “dodge ram floor mats” but not for “dodge ram” and “dodge ram 2500”.\nWe are looking to get the high score for maximum number of matching words first.In “dodge ram 2500” the output of “dodge” should be last as there are other data with more then 1 matches\n\nimage755×776 41.5 KB\n",
"username": "Shopi_Ads"
},
{
"code": "",
"text": "It works as expected when we have records only which belongs to “dodge…” which are 5 records. But its not working on full set of data which contain non-dodge data as well\nPlease find the full collection attached\nsearch_prod.json (1.4 MB)",
"username": "Shopi_Ads"
},
{
"code": "dldldodge{\n \"analyzer\": \"lucene.english\",\n \"searchAnalyzer\": \"lucene.english\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"indexOptions\": \"docs\",\n \"norms\": \"omit\",\n \"type\": \"string\"\n }\n }\n },\n \"storedSource\": {\n \"include\": [\n \"name\"\n ]\n }\n}\nnorms\"omit\"indexOptions\"docs\"\"dodge\"dodge>db.collection.aggregate([\n {\n \t$search: {\n \t\tindex: 'nameindex',\n \t\ttext: {\n \t\t\tquery: 'dodge ram 2500',\n \t\t\tpath: 'name'\n \t\t}\n \t}\n },\n {$project:{_id:0,name:1,score:{$meta:'searchScore'}}}\n ])\n[\n { name: 'dodge ram 2500', score: 3.8390729427337646 },\n {\n name: 'Floor Mats for Dodge Ram 2019 2020 1500 All new Crew Cab (not Classic) weather guard Front & Rear Row TPE Slush Liner Mats',\n score: 1.9325730800628662\n },\n { name: 'dodge ram', score: 1.9325730800628662 },\n { name: 'ram dodge 2016', score: 1.9325730800628662 },\n { name: 'floor mats for dodge ram', score: 1.9325730800628662 },\n { name: 'Floor Mats for Dodge Ram 2019', score: 1.9325730800628662 },\n {\n name: 'Floor Mats for Dodge Ram 2019 2020 1500 All new Crew Cab (not classic)...',\n score: 1.9325730800628662\n },\n {\n name: 'Floor Mats for Dodge Ram 2019 2020 1500 All new Crew Cab (not classic) random text random stuff randomly random testing test',\n score: 1.9325730800628662\n },\n {\n name: 'Floor Mats for Dodge Ram 2019 2020 1500 All new Crew Cab (not classic)...',\n score: 1.9325730800628662\n },\n {\n name: 'Floor Mats for Dodge Ram 2019 2020 1500 All new Crew Cab (not classic)...',\n score: 1.9325730800628662\n },\n {\n name: 'Floor Mats for Dodge Ram 2019 2020 1500 All new Crew Cab (not classic)...',\n score: 1.9325730800628662\n },\n {\n name: 'Floor Mats for Dodge Ram 2019 2020 1500 All new Crew Cab (not classic)...',\n score: 1.9325730800628662\n },\n {\n name: 'Floor Mats for 2014-2018 Chevrolet Silverado/GMC Sierra 1500 Crew Cab, 2015-2019 Silverado/Sierra 2500/3500 HD Crew Cab All Weather Guard 1st and 2nd Row Mat TPE Slush Liners',\n score: 1.906499981880188\n },\n { name: '2500', score: 1.906499981880188 },\n { name: 'dodge', score: 0.9386987090110779 }\n]\n",
"text": "In “dodge ram 2500” the output of “dodge” should be last as there are other data with more then 1 matchesYou can try using search score details to explain the results you’re seeing but I believe one factor that may be dl (length of the field in the document) - I did some brief testing off a smaller data set from the JSON file you provided which appeared to also show that dl was a factor that caused the scoring to be how it was amongst the last two documents shown in your screenshot. In short, it does appear like the behaviour you’ve mentioned is expected based off your index definition, search query and test documents.As per the Score documentation:Every document returned by an Atlas Search query is assigned a score based on relevance, and the documents included in a result set are returned in order from highest score to lowest.Many factors can influence a document’s score, including:In this particular case here,the document only containing dodge was scored higher due to the significantly longer length of the other document(s) containing more terms. Please also consider that those documents contained more words that did not match the search criteria.You can try again with the following index definition:The main changes performed above were setting:The following is the output I get (you can see the document only containing \"dodge\" is at the end):Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Check the string type properties documentation for more information regarding the above index definition example I gave. As always, it’s recommended to alter your index / search accordingly and then test thoroughly to ensure it suits all your use case(s) and requirement(s).",
"username": "Jason_Tran"
},
{
"code": "",
"text": "A post was split to a new topic: Atlas search - autocomplete scoring",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Atlas Search - scoring | 2023-06-07T00:44:31.063Z | Atlas Search - scoring | 1,187 |
null | [
"java",
"spring-data-odm"
] | [
{
"code": "",
"text": "Hi,Hi, I’m trying connect to the mongodb atlas on my spring boot application 2.7 but I’m getting this error.Not working even on localhost with google dns resolver.Caused by: com.mongodb.spi.dns.DnsException: DNS errorDoes anyone know how to fix it?Thanks",
"username": "pipemais_tech"
},
{
"code": "",
"text": "Its your DNS server causing the error. Change DNS to google . If you are on a linux or macsudo nano /etc/resolv.confchange name server to 8.8.8.8\nThis will work instantly. Also in your MongoDB clusters allow connection from anywhere. Once in production you can allow from your production APIs.I had the same problem chatGTP gave me this solution. It worked for me.",
"username": "H_C_R_N_N_N_A"
}
] | Connection to database DNS Error | 2023-05-31T21:50:28.670Z | Connection to database DNS Error | 1,080 |
null | [
"app-services-cli",
"api"
] | [
{
"code": "curl --request GET --header \"Authrorization: Bearer <TOKEN>\" [<URL>](https://realm.mongodb.com/api/admin/v3.0/groups/<project_id>/apps)\ncurl --request POST \\\n --header 'Content-Type: application/json' \\\n --header 'Accept: application/json' \\\n --data '{\"username\": \" <Public API Key>\", \"apiKey\": \"<Private API Key>\"}' \\\n https://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/login\n",
"text": "Hi,I’m trying to get app_id in order to deploy trigger function in terraform (mongodbatlas_event_trigger). Getting app_id in terraform isn’t available right now.\nI’m using curl command to check the API calls:as advice here: Used Token was provided using the following API calls:This URL, however, does not return any value - the response is an empty array.This is not access issue because I get information with realm cli, using the same private key (realm-cli apps list).Please help.",
"username": "alexandre_bergere"
},
{
"code": "productcurl --request GET --header \"Authrorization: Bearer <TOKEN>\" [<URL>](https://realm.mongodb.com/api/admin/v3.0/groups/<projectId>/apps?product=atlas)\n",
"text": "I was able to get the right response (thanks to Atlas support).In order to get Atlas Triggers there is an optional product query parameter we have to pass to the adminListApplications endpoint.The correct API calls is:",
"username": "alexandre_bergere"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb API for getting application IDs does not return any value | 2023-06-09T14:20:18.639Z | Mongodb API for getting application IDs does not return any value | 804 |
null | [
"atlas-cluster"
] | [
{
"code": "organic \tSCRAM \t\natlasAdmin@admin\ndbAdmin@*\nAll Resources\nshow dbs\nappsmith 4.00 KiB\nadmin 280.00 KiB\nlocal 1.94 GiB\n\nAtlas atlas-rpzlrl-shard-0 [primary] organic> use appsmith\nswitched to db appsmith\nAtlas atlas-rpzlrl-shard-0 [primary] organic> rs.initiate()\nMongoServerError: (Unauthorized) not authorized on admin to execute command { replSetInitiate: { }, apiVersion: \"1\", lsid: { id: {4 [186 179 92 153 213 51 75 239 132 55 145 35 115 149 12 163]} }, $clusterTime: { clusterTime: {1686379830 1}, signature: { hash: {0 [21 155 210 75 223 46 113 58 228 58 71 104 20 204 211 110 125 65 36 124]}, keyId: 7200415790166704128.000000 } }, $db: \"admin\" }\n\nAtlas atlas-rpzlrl-shard-0 [primary] organic> show users\nMongoServerError: (Unauthorized) not authorized on admin to execute command { usersInfo: 1, apiVersion: \"1\", lsid: { id: {4 [186 179 92 153 213 51 75 239 132 55 145 35 115 149 12 163]} }, $clusterTime: { clusterTime: {1686379830 1}, signature: { hash: {0 [21 155 210 75 223 46 113 58 228 58 71 104 20 204 211 110 125 65 36 124]}, keyId: 7200415790166704128.000000 } }, $db: \"organic\" }\n",
"text": "Hey folks,Hope everyone is doing well Could you pls help? I’m not sure how to solve this issue. I’ve already checked the roles and they look fine Thank you CLI logs:",
"username": "Tarlan_Isaev"
},
{
"code": "",
"text": "As which user you logged\nDid you login by authenticating against admin db?\nDbAdmin can do administrative tasks but cannot query db\nI think you need read privs on admin db",
"username": "Ramachandra_Tummala"
},
{
"code": "mongosh \"mongodb+srv://cluster0.tkwb4.mongodb.net/appsmith\" --apiVersion 1 --username organic\n\nEnter password: ***************\nCurrent Mongosh Log ID:\t64844fb4e5ab65e6b408a2e4\nConnecting to:\t\tmongodb+srv://<credentials>@cluster0.tkwb4.mongodb.net/appsmith?appName=mongosh+1.9.1\nUsing MongoDB:\t\t6.0.6 (API Version 1)\nUsing Mongosh:\t\t1.9.1\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\nAtlas atlas-rpzlrl-shard-0 [primary] appsmith> show users\nMongoServerError: (Unauthorized) not authorized on admin to execute command { usersInfo: 1, apiVersion: \"1\", lsid: { id: {4 [253 203 135 27 140 176 74 183 144 213 91 135 241 226 63 42]} }, $clusterTime: { clusterTime: {1686392773 3}, signature: { hash: {0 [95 129 127 104 101 25 113 177 86 68 70 78 82 114 251 149 186 154 159 35]}, keyId: 7200415790166704128.000000 } }, $db: \"appsmith\" }\n\nAtlas atlas-rpzlrl-shard-0 [primary] appsmith> db.getUser(\"organic\")\n{\n _id: 'admin.organic',\n user: 'organic',\n db: 'admin',\n roles: [ { role: 'atlasAdmin', db: 'admin' }, { role: 'dbAdmin', db: '*' } ]\n}\n\nAtlas atlas-rpzlrl-shard-0 [primary] appsmith> db.getUser(\"admin\")\nMongoServerError: (Unauthorized) not authorized on admin to execute command { usersInfo: { user: \"admin\", db: \"appsmith\" }, apiVersion: \"1\", lsid: { id: {4 [253 203 135 27 140 176 74 183 144 213 91 135 241 226 63 42]} }, $clusterTime: { clusterTime: {1686392773 3}, signature: { hash: {0 [95 129 127 104 101 25 113 177 86 68 70 78 82 114 251 149 186 154 159 35]}, keyId: 7200415790166704128.000000 } }, $db: \"appsmith\" }\nAtlas atlas-rpzlrl-shard-0 [primary] appsmith> show users\nMongoServerError: (Unauthorized) not authorized on admin to execute command { usersInfo: 1, apiVersion: \"1\", lsid: { id: {4 [253 203 135 27 140 176 74 183 144 213 91 135 241 226 63 42]} }, $clusterTime: { clusterTime: {1686392773 3}, signature: { hash: {0 [95 129 127 104 101 25 113 177 86 68 70 78 82 114 251 149 186 154 159 35]}, keyId: 7200415790166704128.000000 } }, $db: \"appsmith\" }\nAtlas atlas-rpzlrl-shard-0 [primary] appsmith> use admin\nswitched to db admin\nAtlas atlas-rpzlrl-shard-0 [primary] admin> db.getUser(\"admin\")\nMongoServerError: (Unauthorized) not authorized on admin to execute command { usersInfo: { user: \"admin\", db: \"admin\" }, apiVersion: \"1\", lsid: { id: {4 [14 77 209 98 125 34 75 140 146 152 131 103 17 117 8 171]} }, $clusterTime: { clusterTime: {1686393103 5}, signature: { hash: {0 [185 49 198 18 71 120 232 137 174 111 13 147 161 63 212 7 72 103 183 162]}, keyId: 7200415790166704128.000000 } }, $db: \"admin\" }\nAtlas atlas-rpzlrl-shard-0 [primary] admin>\n",
"text": "Thanks so much for your reply, mate I’m logged as organic into Appsmith db:\nimage1920×1053 58 KB\n",
"username": "Tarlan_Isaev"
},
{
"code": "",
"text": "Hi @Tarlan_Isaev ,\nYou need to authenticate on admin database and not on appsmith db, so you need to correct your connection string!BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "mongosh \"mongodb+srv://cluster0.tkwb4.mongodb.net/admin\" --apiVersion 1 --username organic\nEnter password: ***************\nCurrent Mongosh Log ID:\t6484a34a70ba66ec31b6045c\nConnecting to:\t\tmongodb+srv://<credentials>@cluster0.tkwb4.mongodb.net/admin?appName=mongosh+1.9.1\nUsing MongoDB:\t\t6.0.6 (API Version 1)\nUsing Mongosh:\t\t1.9.1\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\nAtlas atlas-rpzlrl-shard-0 [primary] admin> use admin\nalready on db admin\n\nAtlas atlas-rpzlrl-shard-0 [primary] admin> rs.initiate()\nMongoServerError: (Unauthorized) not authorized on admin to execute command { replSetInitiate: { }, apiVersion: \"1\", lsid: { id: {4 [31 28 154 162 113 189 69 25 151 245 10 63 140 9 112 161]} }, $clusterTime: { clusterTime: {1686414185 3}, signature: { hash: {0 [117 147 23 148 52 226 223 49 99 69 212 50 169 110 168 73 195 67 245 170]}, keyId: 7200415790166704128.000000 } }, $db: \"admin\" }\n",
"text": "Hi Fabio,Thanks so much, mate \nStill the same ",
"username": "Tarlan_Isaev"
},
{
"code": "",
"text": "Hi @Tarlan_Isaev,\nAre you able now to show DBS?\nI don’ t understand why you’re trying to iniziate a replica set in MongoDB Atlas.\nYou’re using a free tier on atlas (M0)?BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "Atlas atlas-rpzlrl-shard-0 [primary] admin> show dbs;\nappsmith 4.00 KiB\nadmin 280.00 KiB\nlocal 1.94 GiB\n",
"text": "Hey @Fabio_Ramohitaj,Yeah, there’s no issue with that Ohh apologies, I didn’t realise before that the replica set requires a pay tier. Have to turn it on for Appsmith to use it. Yeah, currently on the free tier Thank you ",
"username": "Tarlan_Isaev"
},
{
"code": "",
"text": "Hi @Tarlan_Isaev,\nSo i think you can flag the solution BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Hey @Fabio_Ramohitaj,Yeah. Thanks for all your help and assistance guys Kind regards.",
"username": "Tarlan_Isaev"
}
] | MongoServerError: (Unauthorized) | 2023-06-10T07:10:24.236Z | MongoServerError: (Unauthorized) | 874 |
null | [
"react-native"
] | [
{
"code": "",
"text": "Hello,\nI am having challenges migrating my online app (node backend) to be offline first.\nI have successfully connected my app to the app services and enable device sync. In the users section under app services, i have registered users and am able to log them in in my app.\nHowever here is where am stuck, I want to link the user to their data which they had while the app was online.\nWhat do I mean?\nWhile the app was online they would login through a rest api endpoint, right ? and as per the schema each user would their data.\nIn mongo realm I did not find a way to login the user with the data that is already stored in mongo db. The only way was to create users under the user section in app services.(please correct me if am wrong here)\nThis works and i can log them in, but now i want to access the data in mongo db so i can display that to the users.\nHow can achieve this?\nplease any help will be appreciated, Thanks",
"username": "louis_Muriuki"
},
{
"code": "",
"text": "Cross post over to Move online first react native app to offline first with mongodb realm - Stack OverflowJust in case more data or answers comes to light.",
"username": "Jay"
}
] | Move online first react native app to offline first with mongodb realm | 2023-06-10T11:53:13.770Z | Move online first react native app to offline first with mongodb realm | 713 |
null | [
"aggregation"
] | [
{
"code": "Moddulle.aggregate([\n {\n $unwind: \"$list_epreuves\"\n },\n {\n $unwind: \"$list_epreuves.resultat\"\n },\n {\n $group: {\n _id: {\n nom: \"$list_epreuves.resultat.nom\",\n prenom: \"$list_epreuves.resultat.prenom\"\n },\n cursus: {\n $push: {\n designation_moddulle: \"$designation_moddulle\",\n pv_modulaire: {\n $push:{\n Code_epreuve: \"$list_epreuves.Code_epreuve\",\n valeur_note: \"$list_epreuves.resultat.valeur_note\"\n \n }}\n }\n }\n }\n },\n {\n $project: {\n _id: 0,\n nom: \"$_id.nom\",\n prenom: \"$_id.prenom\",\n cursus: 1\n }\n },\n \n ])\n",
"text": "hello,I don’t understand why it doesn’t work??:it send me : MongoServerError: Unrecognized expression ‘$push’",
"username": "Amina_Mesbah"
},
{
"code": "$group$pushdesignation_moddullepv_modulairecursus {\n $group: {\n _id: {\n nom: \"$list_epreuves.resultat.nom\",\n prenom: \"$list_epreuves.resultat.prenom\",\n designation_moddulle: \"$designation_moddulle\"\n },\n pv_modulaire: {\n $push:{\n Code_epreuve: \"$list_epreuves.Code_epreuve\",\n valeur_note: \"$list_epreuves.resultat.valeur_note\"\n }\n }\n }\n },\n {\n $group: {\n _id: {\n nom: \"$_id.nom\",\n prenom: \"$_id.prenom\"\n },\n cursus: {\n $push: {\n designation_moddulle: \"$_id.designation_moddulle\",\n pv_modulaire: \"$pv_modulaire\"\n }\n }\n }\n },\n",
"text": "Hello @Amina_Mesbah,The $group won’t allow a nested $push operator, you need two groups, the first group by your designation_moddulle and get an array of pv_modulaire and the second group is to prepare cursus array,Haven’t tested the query but you can do something like this.",
"username": "turivishal"
},
{
"code": "[\n {\n cursus: [ [Object], [Object], [Object], [Object] ],\n nom: 'smith',\n prenom: 'jack'\n },\n {\n cursus: [ [Object], [Object], [Object], [Object] ],\n nom: 'doe',\n prenom: 'john'\n }\n]\n [{designation_moddulle:physique,\n pv_modulaire:\n {\n code_epreuve: physique_emd1,\n valeur_note: 10},\n {code_epreuve: physics_emd2,\n valuer_note: 14}\n }]\n",
"text": "thank you,here is the result of the query:in the cursus field I would like to have the epreuve’s result of each module",
"username": "Amina_Mesbah"
},
{
"code": "",
"text": "Hi @Amina_Mesbah,Can you please provide the example existing documents and the expected result from that documents?",
"username": "turivishal"
},
{
"code": "[\n {nom: 'smith',\n prenom: 'jack',\n cursus: [ {designation_moddulle:physique\n pv_modulaire:[{code_epreuve:physique_emd1,\n valeur_note:15,50},\n {code_epreuve:physique_emd2,\n valeur_note:11,00}]},\n {designation_moddulle:biologie\n pv_modulaire:[{code_epreuve:biologie_emd1,\n valeur_note:17,50},\n {code_epreuve:biologie_emd2,\n valeur_note:10,50}]} ]\n \n },\n {nom: 'doe',\n prenom: 'john',\n cursus: [ {designation_moddulle:physique\n pv_modulaire:[{code_epreuve:physique_emd1,\n valeur_note:08,50},\n {code_epreuve:physique_emd2,\n valeur_note:18,00}]},\n {designation_moddulle:biologie\n pv_modulaire:[{code_epreuve:biologie_emd1,\n valeur_note:13,50},\n {code_epreuve:biologie_emd2,\n valeur_note:19,00}]} ],\n \n }\n]\n\n\n\n[\n {\n cursus: [ [Object], [Object], [Object], [Object] ],\n nom: 'smith',\n prenom: 'jack'\n },\n {\n cursus: [ [Object], [Object], [Object], [Object] ],\n nom: 'doe',\n prenom: 'john'\n }\n]\n",
"text": "Hi @turivishal ,i wanna have for each student and for each module all the marks he had at each epreuve of the module to finally be able to calculate the average of each module then the overall averagewith your solution I have this result;thank’s",
"username": "Amina_Mesbah"
},
{
"code": "",
"text": "Hi @Amina_Mesbah,Where is the actual document that exists in your collection? i can’t predict on the base of the expected result.",
"username": "turivishal"
},
{
"code": "{\"_id\":{\"$oid\":\"647a5f3d5f4e7d2f9b781c62\"},\n\"code_moddulle\":\"97\",\n\"designation_moddulle\":\"physique\",\n\"coefficient\":{\"$numberInt\":\"2\"},\n\"nombre_epreuves\":{\"$numberInt\":\"2\"},\n\"année\":{\"$numberInt\":\"1\"},\n\"listEpreuves\":\n\t[{\"code_epreuve\":\"physique_emd1\",\n\t\"date_epreuve\":{\"$date\":{\"$numberLong\":\"1686441600000\"}},\n\t\"année_epreuve\":{\"$numberInt\":\"1\"},\n\t\"nature_epreuve\":\"EMD\",\n\t\"resultat\":[{\n\t\t\"nom_etudiant\":\"laouar\",\n\t\t\"prenom_etudiant\":\"hocine\",\n\t\t\"valeur_note\":{\"$numberDouble\":\"13.283\"},\n\t\t\"_id\":{\"$oid\":\"647b21c9c868ea40c8942eec\"}},\n\t\t{\"nom_etudiant\":\"bouzenda\",\n\t\t\"prenom_etudiant\":\"khaled\",\n\t\t\"valeur_note\":{\"$numberDouble\":\"12.783\"},\n\t\t\"_id\":{\"$oid\":\"647b21c9c868ea40c8942eed\"}}],\n\t\"_id\":{\"$oid\":\"647abd98789a4a1e3ae735ba\"}}],\n\t\"__v\":{\"$numberInt\":\"0\"}}\n",
"text": "hi @turivishal ,here is the document:",
"username": "Amina_Mesbah"
},
{
"code": "\"listEpreuves\": $unwind: \"$list_epreuves\"nomprenomnom: \"$list_epreuves.resultat.nom\",\n prenom: \"$list_epreuves.resultat.prenom\"\n",
"text": "Hi @Amina_Mesbah ,The properties are totally different than you used in your query,\"listEpreuves\": $unwind: \"$list_epreuves\"The nom and prenom are not exists in your document.I can’t help you with this incomplete information.",
"username": "turivishal"
},
{
"code": "",
"text": "no it is not the problem because i arranged them in my code. In any case, i have the values they just aren’t being printed .\nthank you",
"username": "Amina_Mesbah"
},
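For reference, here is a consolidated sketch (untested) that combines the $unwind stages with the two $group stages, using the field names from the document posted above (listEpreuves, resultat, nom_etudiant, prenom_etudiant, valeur_note). It also adds $avg accumulators for the per-module and overall averages mentioned earlier; the overall average here simply averages the module averages and ignores the coefficient weighting, and the Moddulle model name is taken from the original snippet:

```javascript
Moddulle.aggregate([
  { $unwind: "$listEpreuves" },
  { $unwind: "$listEpreuves.resultat" },
  {
    // First group: one entry per (student, module) with all epreuve marks
    $group: {
      _id: {
        nom: "$listEpreuves.resultat.nom_etudiant",
        prenom: "$listEpreuves.resultat.prenom_etudiant",
        designation_moddulle: "$designation_moddulle"
      },
      pv_modulaire: {
        $push: {
          code_epreuve: "$listEpreuves.code_epreuve",
          valeur_note: "$listEpreuves.resultat.valeur_note"
        }
      },
      moyenne_module: { $avg: "$listEpreuves.resultat.valeur_note" }
    }
  },
  {
    // Second group: one entry per student, collecting the modules into cursus
    $group: {
      _id: { nom: "$_id.nom", prenom: "$_id.prenom" },
      cursus: {
        $push: {
          designation_moddulle: "$_id.designation_moddulle",
          pv_modulaire: "$pv_modulaire",
          moyenne_module: "$moyenne_module"
        }
      },
      moyenne_generale: { $avg: "$moyenne_module" }
    }
  },
  {
    $project: {
      _id: 0,
      nom: "$_id.nom",
      prenom: "$_id.prenom",
      cursus: 1,
      moyenne_generale: 1
    }
  }
])
```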
{
"code": "",
"text": "that’s what i had to do:\nconsole.log(JSON.stringify(result))\nthank’s @turivishal",
"username": "Amina_Mesbah"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to make 2 consecutives $ group | 2023-06-08T16:56:52.727Z | How to make 2 consecutives $ group | 663 |
null | [
"swift"
] | [
{
"code": "ResultsListlet realm = try! Realm()\nlet dogResults = realm.objects(Dog.self)\nlet dogList = List(dogResults)\nListListlet dogList = RealmSwift.List(dogResults)\nlet dogList = RealmSwift.List(dogResults)RealmSwift.List(collection: dogResults)Results",
"text": "Back in the day, we could cast Results to a List like thisThen, SwiftUI brought a naming collision with SwiftUI’s List so we needed to define which List it wasBut, somewhere along the way, that stopped working as well.let dogList = RealmSwift.List(dogResults)err: Generic parameter ‘Element’ could not be inferred`Without iterating, is there another castable/init type solution?JayOh, and this version throws an error as wellRealmSwift.List(collection: dogResults)err: Argument type ‘Results’ does not conform to expected type ‘RLMCollection’Hmmm. Results is not an RLMCollection?",
"username": "Jay"
},
{
"code": "",
"text": "It’s a bit hard to believe that the functionality to cast Results to a List has been removed.",
"username": "Jay"
}
] | Results To List | 2023-06-01T17:50:43.867Z | Results To List | 716 |
null | [
"cxx",
"c-driver"
] | [
{
"code": "",
"text": "Suppose if the data in db as shown below:\n{“title”:“The Adventures of Tom Thumb & Thumbelina”,“director”:“Niccolo Bris”,“actors”:[“Lonni Fulger”,“Benedetto Jeandeau”,“Sylvester Argyle”],“release_year”:{“$date”:“1999-09-09T11:50:02.000Z”},“genres”:[“Musical”],“gross”:1015338,“runtime_min”:60,“ratings”:{“soft_avocados”:12,“mndb”:7,“votes”:2014}}Where can I find the example code snippets about different bson types.Will be really helpful if someone comes up with a solution.",
"username": "Raja_S1"
},
{
"code": "\n \n make_document(kvp(\"messagelist\",\n make_array(new_message(413098706, 3, \"Lorem ipsum...\"),\n new_message(413098707, 2, \"Lorem ipsum...\"),\n new_message(413098708, 1, \"Lorem ipsum...\"))));\n \n \n // Normally, one should check the return value for success.\n coll.insert_one(std::move(doc));\n }\n \n \n// Iterate over contents of messagelist.\n void iterate_messagelist(const bsoncxx::document::element& ele) {\n // Check validity and type before trying to iterate.\n if (ele.type() == type::k_array) {\n bsoncxx::array::view subarray{ele.get_array().value};\n for (const bsoncxx::array::element& msg : subarray) {\n // Check correct type before trying to access elements.\n // Only print out fields if they exist; don't report missing fields.\n if (msg.type() == type::k_document) {\n bsoncxx::document::view subdoc = msg.get_document().value;\n bsoncxx::document::element uid = subdoc[\"uid\"];\n bsoncxx::document::element status = subdoc[\"status\"];\n \n ",
"text": "There are examples & tutorials available here:For your question specifically, please take a look at this:",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to read a json array and nested json elements using Mongocxx driver | 2023-06-09T15:20:19.931Z | How to read a json array and nested json elements using Mongocxx driver | 807 |
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "Hi guys,we started to use atlas mongodb in our project with combination of kafka connect + debezium source connector.We use private endpoints for services and debezium connection string also uses private endpoints.\nThe way that debezium works is that it reads config database’s shards collection\nBut entries in shards collection contain adresses that are only available using public endpoints which are blocked in our setup.Basically I am able to connect to shard directly using connection string in shards collection where it is allowed to access using public links. But obviously debezium source can’t connect to shard because we only allow private links for our services (debezium included).Is it possible for Atlas mongo to have private endpoint accessible uris in shards collection in config database?Thanks",
"username": "Yehor_Serheiev"
},
{
"code": "",
"text": "Hi Yehor,If the connector you’re connecting with needs direct shard (mongod) level access instead of working through the mongos (sharded cluster router) tier, you will need to connect via public IP or network peering. Note that you can connect via private endpoints and also one or both of those other options for different parts of your application. Note that I am not familiar with Debezium but as a general rule if the connection isn’t going through a mongos there is risk of accessing orphan data which could lead to data quality issues.An alternative strategy could be to explore using MongoDB’s Kafka Connector which can connect via private endpoints through the mongos directly.Cheers\n-Andrew",
"username": "Andrew_Davidson"
}
] | Atlas private endpoint and debezium connector for kafka connect | 2023-06-01T07:04:08.429Z | Atlas private endpoint and debezium connector for kafka connect | 857 |
null | [
"flutter"
] | [
{
"code": "flutter pub run realm generatepub finished with exit code 255code 255flutter pub run realm generatepub finished with exit code 255import 'package:realm/realm.dart';\n\npart 'RealmModels.g.dart';\n\n@RealmModel()\nclass _Chatroom {\n @PrimaryKey()\n late final String id;\n\n late List<String> users;\n}\npart 'RealmModels.g.dart';",
"text": "Hi, I’m trying to generate a RealmModel by following this guide.\nI ran flutter pub run realm generate, but it fails with an error pub finished with exit code 255.\nI do not see any more logs nor stacktrace. Do you know what code 255 stands for and how i can fix it?Ran on: M1 Macbook | Flutter 3.3.8 | Dart 2.18.4 | realm ^0.7.0+rc\nWhat I’ve done:Repro StepsCode Snippet\n/lib/Functions/Realm/RealmModels.dartThere’s a red line under part 'RealmModels.g.dart';, which says “Target of URI hasn’t been generated: ‘‘RealmModels.g.dart’’. Try running the generator that will generate the file referenced by the URI.”Relevant Log Output\n% flutter pub run realm generate\npub finished with exit code 255",
"username": "Shawn_L1"
},
{
"code": "RealmModels.g.dartflutter pub run realm generate --cleanflutter pub run realm generate",
"text": "Hi @Shawn_L1 ,\nThank you for interest to realm package!\nTo solve the issue, please make sure you have deleted manually your generated file RealmModels.g.dart.\nThen run flutter pub run realm generate --clean.\nNow, try again flutter pub run realm generate.\nPlease, let me know if this help!",
"username": "Desislava_St_Stefanova"
},
{
"code": "",
"text": "Hi @Desislava_St_Stefanova , thank you for your interest in my issue!\nI’m facing a new issue ([Bug]: Unable to install on iOS · Issue #1023 · realm/realm-dart · GitHub) which makes me unable to install realm on iOS. I think this issue would be solved if I solve that one.\nI’ll come back if I still face the issue after solving that one.\nThank you",
"username": "Shawn_L1"
},
{
"code": "",
"text": "I managed to reproduce this issue when I was using XCode and the external terminal. Maybe the generated file is somehow locked when it is used by different processes. That’s why you have to remove it manually.",
"username": "Desislava_St_Stefanova"
},
{
"code": "flutter pub run realm generateUnhandled exception:\nProcessException: No such file or directory\n Command: dart run build_runner build --delete-conflicting-outputs\n#0 _ProcessImpl._start (dart:io-patch/process_patch.dart:401:33)\n#1 Process.start (dart:io-patch/process_patch.dart:38:20)\n#2 GenerateCommand.run (package:realm/src/cli/generate/generate_command.dart:41:35)\n#3 CommandRunner.runCommand (package:args/command_runner.dart:209:27)\n#4 CommandRunner.run.<anonymous closure> (package:args/command_runner.dart:119:25)\n#5 new Future.sync (dart:async/future.dart:302:31)\n#6 CommandRunner.run (package:args/command_runner.dart:119:14)\n#7 main (package:realm/src/cli/main.dart:40:7)\n#8 main (file:///Users/***/flutter/.pub-cache/hosted/pub.dartlang.org/realm-0.8.0+rc/bin/realm.dart:20:40)\n#9 _delayEntrypointInvocation.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:295:32)\n#10 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:192:12)\npub finished with exit code 255\nimport 'package:realm/realm.dart';\n\npart 'schemas.g.dart';\n\n@RealmModel()\nclass _Item {\n @MapTo('_id')\n @PrimaryKey()\n late ObjectId id;\n bool isComplete = false;\n late String summary;\n @MapTo('owner_id')\n late String ownerId;\n}\nrealm: ^0.8.0+rc\nFlutter version 3.3.8\nDart version 2.18.4\nDevTools version 2.15.0\n",
"text": "Hi @Desislava_St_Stefanova, I have the same problem:when I try to generate the RealmObject Class with the command flutter pub run realm generate I received this error:This is the class that I want to generate the RealmObjectClass:My configuration is:I tried both with the test app that provides Realm and on a new project.\nThank for your help!",
"username": "Omar_Quattrin"
},
{
"code": "pod install",
"text": "Hi @Omar_Quattrin ,\nwelcome to MongoDB forum!\nThis error happened on my environment only when I had already a generated file and I tried to change the model and to regenerate. But after I deleted the old file I was able to generate the new model.\nBut it seems that you received that error even in a new project on your first generate.\nDid you have the same issue like @Shawn_L1 running pod install?\nCould you share some more details about your environment and the commands that you run?\nThank you!",
"username": "Desislava_St_Stefanova"
},
{
"code": "[✓] Flutter (Channel stable, 3.3.8, on macOS 12.6 21G115 darwin-x64, locale it)\n • Flutter version 3.3.8 on channel stable at /Users/***/flutter\n • Upstream repository https://github.com/flutter/flutter.git\n • Framework revision 52b3dc25f6 (3 weeks ago), 2022-11-09 12:09:26 +0800\n • Engine revision 857bd6b74c\n • Dart version 2.18.4\n • DevTools version 2.15.0\n\n[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.0)\n • Android SDK at /Users/***/Library/Android/sdk\n • Platform android-33, build-tools 33.0.0\n • Java binary at: /Applications/Android Studio Chipmunk.app/Contents/jre/Contents/Home/bin/java\n • Java version OpenJDK Runtime Environment (build 11.0.13+0-b1751.21-8125866)\n • All Android licenses accepted.\n\n[✓] Xcode - develop for iOS and macOS (Xcode 14.0.1)\n • Xcode at /Applications/Xcode.app/Contents/Developer\n • Build 14A400\n • CocoaPods version 1.11.3\n\n[✓] Chrome - develop for the web\n • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome\n\n[✓] Android Studio (version 2021.3)\n • Android Studio at /Applications/Android Studio Chipmunk.app/Contents\n • Flutter plugin can be installed from:\n 🔨 https://plugins.jetbrains.com/plugin/9212-flutter\n • Dart plugin can be installed from:\n 🔨 https://plugins.jetbrains.com/plugin/6351-dart\n • Java version OpenJDK Runtime Environment (build 11.0.13+0-b1751.21-8125866)\nflutter pub run realm generate\nUnhandled exception:\nProcessException: No such file or directory\n Command: dart run build_runner build --delete-conflicting-outputs\n#0 _ProcessImpl._start (dart:io-patch/process_patch.dart:401:33)\n#1 Process.start (dart:io-patch/process_patch.dart:38:20)\n#2 GenerateCommand.run (package:realm/src/cli/generate/generate_command.dart:41:35)\n#3 CommandRunner.runCommand (package:args/command_runner.dart:209:27)\n#4 CommandRunner.run.<anonymous closure> (package:args/command_runner.dart:119:25)\n#5 new Future.sync (dart:async/future.dart:302:31)\n#6 CommandRunner.run (package:args/command_runner.dart:119:14)\n#7 main (package:realm/src/cli/main.dart:40:7)\n#8 main (file:///Users/***/flutter/.pub-cache/hosted/pub.dartlang.org/realm-0.8.0+rc/bin/realm.dart:20:40)\n#9 _delayEntrypointInvocation.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:295:32)\n#10 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:192:12)\npub finished with exit code 255\n",
"text": "Hi @Desislava_St_Stefanova thank you for the answer.\nYes, I’m receiving the error even in a new project on my first generate.\nI tried to running pod install (I hadn’t opened the iOS project yet) but if I try to rerun the command I received the same error.I’m using Android Studio (Dolphin | 2021.3.1 Patch 1) on Mac with macOS Monterey (version 12.6) and the following configuration of flutter:The command that 'm unsing for generate the RealmObject is:and the error I’m getting is:Thanks for your help!",
"username": "Omar_Quattrin"
},
{
"code": "pod install",
"text": "Did pod install command completed successfully on your environment?\nDid you follow this thread Unable to install on iOS? It is a different issue, but they might be related.",
"username": "Desislava_St_Stefanova"
},
{
"code": "pod installUnable to install vendored xcframework `realm_dart` for Pod `realm`, because it contains both static and dynamic frameworks.\n",
"text": "Could you share some more details about your environment and the commands that you run?Yes, I had the problem described in this thread when I used the command pod installbut I resolved this, instead my problem remained. I had also try to use Visual Studio Code instead Android Studio but no difference.",
"username": "Omar_Quattrin"
},
{
"code": "flutter cleanflutter pub get",
"text": "Did you try:\nflutter clean\nflutter pub get\nand then to generate?",
"username": "Desislava_St_Stefanova"
},
{
"code": "flutter pub run realm generateflutter cleanflutter pub get.g.dart",
"text": "Hi, same for me too.\nI thought I solved it by successfully installing the package, but I still couldn’t successfully run flutter pub run realm generate.For me, I tried running flutter clean and flutter pub get multiple times, but no luck.\n+Fyi, there was no .g.dart file in the expected location, so that wouldn’t have caused the issue.",
"username": "Shawn_L1"
},
{
"code": "flutter pub add build_runnerflutter pub run build_runner build --delete-conflicting-outputs",
"text": "Unfortunately, I can’t reproduce this problem.\nWe will continue the investigation.\nMeanwhile could you please try:\nflutter pub add build_runner\nflutter pub run build_runner build --delete-conflicting-outputs",
"username": "Desislava_St_Stefanova"
},
{
"code": "",
"text": "Solution for this is to remove generate: true in flutter section in pub spec.yaml it will work @Shawn_L1 @Omar_Quattrin",
"username": "Oleksii_Moisieienko"
},
{
"code": "flutter pub add build_runnerflutter pub run build_runner build --delete-conflicting-outputs% **flutter pub add build_runner**\nResolving dependencies...\n _fe_analyzer_shared 47.0.0 (50.0.0 available)\n analyzer 4.7.0 (5.2.0 available)\n async 2.9.0 (2.10.0 available)\n boolean_selector 2.1.0 (2.1.1 available)\n build_resolvers 2.0.10 (2.1.0 available)\n collection 1.16.0 (1.17.0 available)\n matcher 0.12.12 (0.12.13 available)\n material_color_utilities 0.1.5 (0.2.0 available)\n source_span 1.9.0 (1.9.1 available)\n stack_trace 1.10.0 (1.11.0 available)\n stream_channel 2.1.0 (2.1.1 available)\n string_scanner 1.1.1 (1.2.0 available)\n test_api 0.4.12 (0.4.16 available)\n vector_math 2.1.2 (2.1.4 available)\nGot dependencies!\n\n% **flutter pub run build_runner build --delete-conflicting-outputs**\n[INFO] Generating build script...\n[INFO] Generating build script completed, took 480ms\n\n[INFO] Precompiling build script......\n[INFO] Precompiling build script... completed, took 7.7s\n\n[INFO] Initializing inputs\n[INFO] Building new asset graph...\n[INFO] Building new asset graph completed, took 808ms\n\n[INFO] Checking for unexpected pre-existing outputs....\n[INFO] Checking for unexpected pre-existing outputs. completed, took 1ms\n\n[INFO] Running build...\n[INFO] Generating SDK summary...\n[INFO] 2.4s elapsed, 0/3 actions completed.\n[INFO] 4.3s elapsed, 0/3 actions completed.\n[INFO] Generating SDK summary completed, took 5.1s\n\n[INFO] 6.2s elapsed, 0/3 actions completed.\n[INFO] 7.3s elapsed, 0/3 actions completed.\n[INFO] 8.3s elapsed, 0/3 actions completed.\n[INFO] 15.2s elapsed, 0/3 actions completed.\n[WARNING] No actions completed for 15.2s, waiting on:\n - realm:realm_generator on lib/schemas.dart\n - realm:realm_generator on lib/main.dart\n - realm:realm_generator on test/widget_test.dart\n\n[INFO] 16.2s elapsed, 0/3 actions completed.\n[INFO] realm:realm_generator on lib/main.dart:[generate (0)] completed, took 1.6s\n[INFO] realm:realm_generator on test/widget_test.dart:[generate (0)] completed, took 67ms\n[INFO] realm:realm_generator on lib/schemas.dart:[generate (0)] completed, took 87ms\n[INFO] Running build completed, took 17.2s\n\n[INFO] Caching finalized dependency graph...\n[INFO] Caching finalized dependency graph completed, took 36ms\n\n[INFO] Succeeded after 17.3s with 2 outputs (7 actions)\nflutter pub run realm generate",
"text": "I tried to makeflutter pub add build_runner\nflutter pub run build_runner build --delete-conflicting-outputsand this is the result:After that I tried to running flutter pub run realm generate and I got the same error.",
"username": "Omar_Quattrin"
},
{
"code": "generate:truename: new_app\ndescription: A new Flutter project.\n\n# The following line prevents the package from being accidentally published to\n# pub.dev using `flutter pub publish`. This is preferred for private packages.\npublish_to: 'none' # Remove this line if you wish to publish to pub.dev\n\nversion: 1.0.0+1\n\nenvironment:\n sdk: '>=2.18.5 <3.0.0'\n\ndependencies:\n flutter:\n sdk: flutter\n\n\n # The following adds the Cupertino Icons font to your application.\n # Use with the CupertinoIcons class for iOS style icons.\n cupertino_icons: ^1.0.2\n realm: ^0.8.0+rc\n build_runner: ^2.3.2\n\ndev_dependencies:\n flutter_test:\n sdk: flutter\n\n # The \"flutter_lints\" package below contains a set of recommended lints to\n # encourage good coding practices. The lint set provided by the package is\n # activated in the `analysis_options.yaml` file located at the root of your\n # package. See that file for information about deactivating specific lint\n # rules and activating additional ones.\n flutter_lints: ^2.0.0\n\n# For information on the generic Dart part of this file, see the\n# following page: https://dart.dev/tools/pub/pubspec\n\n# The following section is specific to Flutter packages.\nflutter:\n\n # The following line ensures that the Material Icons font is\n # included with your application, so that you can use the icons in\n # the material Icons class.\n uses-material-design: true\n",
"text": "Hi @Oleksii_Moisieienko in my pubspec.yaml I don’t find generate:true in the flutter section",
"username": "Omar_Quattrin"
},
{
"code": ".g.dartflutter pub run build_runner buildflutter pub run realm generateflutter pub run build_runner build",
"text": "Great!\nSo @Omar_Quattrin you have already the models generated, right? As I see the output “Succeeded after 17.3s with 2 outputs” - your .g.dart files must be there.\nYou can use flutter pub run build_runner build as a workaround for now.\nWe will try to figure out the issue with flutter pub run realm generate for some of the next releases.\n@Shawn_L1 does this work for you flutter pub run build_runner build?",
"username": "Desislava_St_Stefanova"
},
{
"code": "flutter pub run build_runner build",
"text": "YES!! The file .g.dart was generated with flutter pub run build_runner build !Thank you",
"username": "Omar_Quattrin"
},
{
"code": "",
"text": "It worked for me too! Thanks",
"username": "Shawn_L1"
},
{
"code": "flutter pub rundart run realm_dart generatedart run realm generate",
"text": "Hi again after long time.\nSince flutter pub run command has been deprecated, for further generating realm models you can use the following command:",
"username": "Desislava_St_Stefanova"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Keep failing to generate RealmModel | 2022-11-10T23:11:31.788Z | Keep failing to generate RealmModel | 6,435 |
null | [
"dot-net",
"cxx"
] | [
{
"code": "",
"text": "I have an application which access MongoDB from the frontend using C# and from the backend using C++.\nWhile running, a couple of TCP ports are established with MongoDB.\nThe backend also communicates with other devices using TCP.\nThe issue is when the device closes the port (server side), all MongoDB ports close, loosing the DB connectivity. After this I need to restart the application.\nWhen the device TCP port is normally closed by the client (backend), everything works fine.Any help with this issue will be much appreciated.",
"username": "Jose_Figueiredo"
},
{
"code": "",
"text": "Hi @Jose_Figueiredo welcome to the community!I’m not sure I understand the architecture yet.I have an application which access MongoDB from the frontend using C# and from the backend using C++.What do you mean exactly by “frontend” and “backend” in this case? Is this a web application, or something else?when the device closes the port (server side)What is a device, and what is server side in this context?When the device TCP port is normally closed by the client (backend), everything works fine.In this context, what does client refer to, and why do you call it the backend?I’m hoping you could provide an example workflow for me to understand the terminology you’re using, and what they are referring to.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kevin,This is a windows application where the frontend is the GUI, which is implemented using C#, and the backend are C++ DLLs.\nAs I mentioned both access MongoDB, for different purposes, and from what I see, at a certain point there are 3 TCP connections with MongoDB.\nWhen I mention client/server, this refers to TCP. The application communicates with an external device using TCP, and typically the application (TCP client) connects to the device (TCP server) and later disconnects. This is the normal process, and everything works fine.\nThe issue is when the device (TCP server) disconnects on its own (network issues, device reset, etc). When this happens we verify that this connection is not available anymore and it is reported, but the problem is that all the connections with MongoDB are closed too (by what we see in the MongoDB log), without any apparent reason. To reconnect to the MongoDB we need to restart the application.Hope this was clearer.\nBest regards,\nJosé",
"username": "Jose_Figueiredo"
}
] | Unrelated TCP closure automatically closes mongoDB's TCP port | 2023-06-05T16:00:16.714Z | Unrelated TCP closure automatically closes mongoDB’s TCP port | 705 |
null | [
"replication"
] | [
{
"code": "",
"text": "My company uses MongoDB as the primary database for our production environment, currently running version 4.0. We are planning to upgrade to the latest version, 6.0. Additionally, our existing replica set doesn’t have authentication enabled, and I would like to add SCRAM authentication during the upgrade process. However, I’ve found that it’s not possible to add authentication without stopping the database. Is there a solution that can meet my requirements?",
"username": "991glasses_N_A"
},
{
"code": "",
"text": "Who says there has to be downtime?But i suggest that you finish the upgrade first and then enable auth. Do one thing at a time and do it well.",
"username": "Kobe_W"
},
{
"code": "",
"text": "I apologize for any inaccuracies in my previous description. To clarify, as far as I know, if a MongoDB replica set is not initially configured with authentication, enabling authentication later on will result in all existing connections being terminated. This means that while MongoDB itself will not be stopped, any application services relying on it will be disrupted. I’m looking for a way to avoid this situation.",
"username": "991glasses_N_A"
},
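For anyone landing here later: the documented no-downtime path is to create the needed users first (while authentication is still disabled), then do a rolling restart of the members with a shared keyfile using the transitionToAuth option, so members temporarily accept both authenticated and unauthenticated connections, and finally remove transitionToAuth in a second rolling restart. A minimal sketch of the user-creation step in mongosh (the user name and roles here are placeholders, not a recommendation):

```javascript
// Run against the primary while auth is still off, before the rolling restarts
const admin = db.getSiblingDB("admin");
admin.createUser({
  user: "clusterAdmin",            // placeholder name
  pwd: passwordPrompt(),           // prompts instead of putting the password in shell history
  roles: [ { role: "root", db: "admin" } ]
});
```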
{
"code": "",
"text": "I have found the tutorial for the solution, thanks a lot",
"username": "991glasses_N_A"
}
] | How can MongoDB enable authentication without downtime? | 2023-06-09T03:00:04.317Z | How can MongoDB enable authentication without downtime? | 477 |
null | [
"c-driver"
] | [
{
"code": "",
"text": "I am facing two issues:Any help on these issues please.",
"username": "Raja_S1"
},
{
"code": "",
"text": "Hi @Raja_S1The driver must be built with zlib and/or snappy and/or zstd support to enable compression support.\nYou may also look at this article for installing C driver on Windows using Visual Studio - Getting Started with MongoDB and C++ | MongoDB",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "",
"username": "Raja_S1"
},
{
"code": "",
"text": "As of now I am moving forward with approach 2 as above i.e., 32-bit binaries.\nI have linked the binaries and include file in my C++ project. It is throwing the following errorError\tC2371\t‘ssize_t’: redefinition; different basic types\tPipeImage\tC:\\Program Files (x86)\\mongo-c-driver\\include\\libbson-1.0\\bson\\bson-compat.h\t109Please help me how to resolve this issue.\nBy the way I have tried including sys/types.h and couldn’t resolve it.",
"username": "Raja_S1"
},
{
"code": "",
"text": "Can you share the code/screenshot where this error is occurring? From the error message it seems like something is being called before it is defined/declared.",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "\nimage1397×95 3.19 KB\n\n\nHere is the code snippet, I just trying to init the Mongo instance.Binaries location\nProject settings in VS2019\n\nimage946×443 7.44 KB\n\nInclude directories\n\nimage944×76 1.25 KB\nBy the way I have tried C:\\Program Files(x86)\\mongo-c-driver as well.",
"username": "Raja_S1"
},
{
"code": "#include <mongoc/mongoc.h>\n\nint main (int argc, char **argv)\n{\n\tmongoc_client_t *client = NULL;\n\tbson_error_t error = {0};\n\tmongoc_database_t *database = NULL;\n\tbson_t *command = NULL, reply;\n\n\n\t// Initialize the MongoDB C Driver.\n\tmongoc_init ();\n\n\t// Replace the <connection string> with your MongoDB deployment's connection string.\n\tclient = mongoc_client_new(\"<connection string>\");\n\n\t// Get a handle on the \"admin\" database.\n\tdatabase = mongoc_client_get_database (client, \"admin\");\n \n\t// Ping the database.\n\tcommand = BCON_NEW(\"ping\", BCON_INT32(1));\n\tif (mongoc_database_command_simple(database, command, NULL, &reply, &error))\n\t{\n\t\tprintf(\"Pinged your deployment. You successfully connected to MongoDB!\\n\");\n\t}\n\telse\n\t{\n\t\t// Error condition.\n\t\tprintf(\"Error: %s\\n\", error.message);\n\t\treturn 0;\n\t}\n \n\n\t// Perform Cleanup.\n\tbson_destroy (&reply);\n\tbson_destroy (command);\n\tmongoc_database_destroy (database);\n\tmongoc_client_destroy (client);\n\tmongoc_cleanup ();\n\n\treturn 0;\n}\n\n",
"text": "For reference, It would be helpful if you could post screenshot of the entire Visual Studio window (with code and error).Can you do a quick check for me - try to compile and run this c program (replace your code with this):It is taken from the docs page - https://www.mongodb.com/docs/drivers/c/",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "Thanks, this example is working fine.\nMy project is referencing many third party libraries and never had this problem before. I’m seeing this error only after adding/referencing libbson or libmongoc.\nThird party libraries = boost 1.83, python, hd5, zlib etc.",
"username": "Raja_S1"
},
{
"code": "",
"text": "Can you please share the version of Visual Studio and compiler that you are using?",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "Its VS2019 and Win32 config.\n\nimage776×512 17.5 KB\n",
"username": "Raja_S1"
},
{
"code": "ssize_tssize_t",
"text": "I suspect there maybe a conflict with the ssize_t definition by any of the other third parties/libraries that you are using. Could you cross check for ssize_t definition in the libraries that you are using?",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "If I include mongo headers in another source cpp file then the issue has been resolved. Very strange!\nAtleast it solved my problem. Thank you for all the help.",
"username": "Raja_S1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Libmongoc and libbson build issues | 2023-05-31T10:39:32.897Z | Libmongoc and libbson build issues | 1,199 |
null | [] | [
{
"code": "",
"text": "atlas clusters search indexes create --clusterName myAtlasClusterEDU -f /app/search_index.jsonafter several attempts,\ni am getting following error : Error: json: cannot unmarshal array into Go struct field FTSMappings.mappings.fields of type map[string]interface {}Lab: Group Search Results by Using Facets",
"username": "Aakash_Mahawar"
},
{
"code": "",
"text": "same issue with that mongodb univeristy lab. Hope they solve it soon",
"username": "Giorgio_Laura"
},
{
"code": "atlas clusters search indexes list --clusterName myAtlasClusterEDU --db sample_supplies --collection sales\nError: json: cannot unmarshal array into Go struct field FTSMappings.mappings.fields of type map[string]interface {}\n{\n \"analyzer\": \"lucene.standard\",\n \"searchAnalyzer\": \"lucene.standard\",\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"purchaseMethod\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"type\": \"string\"\n }\n ],\n \"storeLocation\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"type\": \"stringFacet\"\n }\n ]\n }\n }\n}\n",
"text": "Ok, so this is really annoying. I can create the index using the Atlas GUI. The index even becomes active, therefore it seems that there are no issues with the syntax or the config. However, within the terminal, when I try to run this command in the CLI/terminal after creating the index in the GUII get the sameTherefore, since the command above is used to view the indexes and even that does not run, it seems that the issue is with the settings in the terminal. It appears that the program is struggling with the parsing the arrays in “purchaseMethod” and “storeLocation”.To create the index using the Atlas GUI, I used the following settings, along with selecting the ‘sample_supplies’ database and the ‘sales’ collection, along with using the same index name",
"username": "Areeb_A"
},
{
"code": "",
"text": "Same issue here… it’s still failing at this point… ",
"username": "BhEaN"
},
{
"code": "",
"text": "Hey, i am also having same issue, any fix?",
"username": "Mohan_Selvam"
},
{
"code": "",
"text": "Hey everyone,Thanks for reaching out to the MongoDB Community forums Please allow me some time to get it checked and will keep you all updated.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hey @Mohan_Selvam/@BhEaN/@Areeb_A/@Giorgio_Laura/@Aakash_Mahawar,Thanks for highlighting it. There has been an identified issue with the Atlas CLI regarding JSON parsing, and that has been successfully resolved. You should now be able to complete your labs without any issues.If you encounter any problems or have any questions, please feel free to reach out to us.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "thanks @Kushagra_Kesav",
"username": "Aakash_Mahawar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Lab: Group Search Results by Using Facets | 2023-06-02T12:57:18.775Z | Lab: Group Search Results by Using Facets | 941 |
[
"mongodb-shell"
] | [
{
"code": "554.83MBdb.stats()mongosh\nAtlas atlas-fmbop9-shard-0 [primary] news_db> db.stats()\n{\n db: 'news_db',\n collections: 4,\n views: 0,\n objects: 1040164,\n avgObjSize: 758.4941961075369,\n dataSize: 788958357,\n storageSize: 581779456,\n indexes: 4,\n indexSize: 18321408,\n totalSize: 600100864,\n scaleFactor: 1,\n fsUsedSize: 7438405632,\n fsTotalSize: 10726932480,\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1686131293, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"448344519386fd7a20bfe7c11263d4977bbd771b\", \"hex\"), 0),\n keyId: Long(\"7241613558851567622\")\n }\n },\n operationTime: Timestamp({ t: 1686131293, i: 1 })\n}\n\n\nfsUsedSizenews_collectionAtlas atlas-fmbop9-shard-0 [primary] news_db> db.news_collection.findOne()\n{\n _id: 'https://www.npr.org/2023/05/16/1176508568/biden-meets-with-congressional-leaders-as-debt-limit-deadline-looms',\n fingerprints: [\n 374865, 241001, 48745, 259731, 70017, 156617, 673765, 331956,\n 270995, 130199, 45513, 457174, 113615, 603545, 116103, 163152,\n 408810, 18232, 522353, 541818, 182149, 84889, 438504, 257512,\n 256602, 460686, 287592, 59625, 261816, 189635, 346076, 291558,\n 14098, 164881, 39145, 189446, 291303, 35600, 298657, 83163,\n 313783, 278993, 182793, 157549, 201840, 527436, 21982, 380645,\n 56697, 48363, 67714, 113824, 94203, 144955, 37354, 199260,\n 249434, 268125, 366513, 196130, 181955, 11618, 337985, 553825,\n 224547, 398779, 323990, 299317, 41589, 234623, 167918, 155028,\n 214631, 397630, 395398, 62818, 310355, 10561, 359734, 99014,\n 641902, 155804, 224220, 91110, 109134, 162947, 21310, 322004,\n 413283, 75042, 346425, 420857, 222899, 58817, 55335, 159492,\n 189573, 267054, 140012, 180607,\n ... 976 more items\n ]\n}\n\n\nhashes_collectionAtlas atlas-fmbop9-shard-0 [primary] news_db> db.hashes_collection.findOne()\n{\n _id: 126992,\n urls: [\n 'https://www.npr.org/2023/05/16/1176508568/biden-meets-with-congressional-leaders-as-debt-limit-deadline-looms',\n 'https://100percentfedup.com/classless-chicago-mayor-told-trump-f-u-now-wants-to-fire-cop-who-flipped-off-protesters/',\n 'https://5pillarsuk.com/2020/06/21/deadly-reading-stabbings-declared-a-terrorist-incident/',\n 'https://5pillarsuk.com/2020/10/30/jumuah-khutbah-islam-is-not-in-crisis-french-secularism-is/',\n 'https://5pillarsuk.com/2021/03/19/tanzania-has-a-new-female-muslim-leader/',\n 'https://5pillarsuk.com/2021/07/26/tunisia-in-crisis-as-president-is-accused-of-launching-coup/',\n 'https://92newshd.tv/about/at-least-40-killed-one-still-missing-in-turkey-mine-blast',\n 'https://academicfreedom.org/letter-to-concordia-wisconsin-on-gregory-schulz/',\n 'https://z3news.com/w/antichristbeast-systemglobal-governance-unveiled/',\n\nnews_documentshashes_collection",
"text": "Hi community!\nI am currently working on a Software Project, where I’ve set up an M10 Dedicated cluster on MongoDB Atlas. The statistics of my database look like this:\nnews_db_state3130×554 172 KB\nAs it can be seen in the picture above, my storage size is 554.83MB. If I run db.stats() in mongosh, I obtain the following statistics:As it can be seen in the code block above, the fsUsedSize is 7438405632 Bytes ~= 6.92GB.I have recently received an e-mail from MongoDB Atlas that states that I have almost reached the limit of the 10GB size of my cluster.\nMy question is: Is that “Storage” of 10GB offered by my plan refering to the “Disk space used”, and if so: how can I get some more insight about what composes this huge disk space usage? My gut feeling is that this large discrepancy between “storageSize” and “Disk space used” may indicate some problems in configuring the database, but I do not how to debug this issue properly.news_collection document example:hashes_collection document example:To provide more insights about the structure of my documents, I will list below one document from news_documents collection, and another one from hashes_collection:I am also open to (and would kindly appreciate) hearing your opinion on whether a NoSQL database storage system may not be the most viable option in my case, and whether it would be better to migrate to a relational database instead.Thank you in advance for your time, feedback and support!-Vlad",
"username": "Vlad-Petru_Nitu"
},
{
"code": "db.collection.stats()storageSizefreeStorageSize",
"text": "Hi @Vlad-Petru_Nitu welcome to the community!This can happen if you deleted a lot of documents in those collections. WiredTiger don’t usually give space back to the OS, since it assumes that you’ll have more data in the future. Giving space after deletes then reclaiming it back later would be a wasted effort in this case. Rather, WiredTiger will keep those space available for reuse.You might want to check the output of db.collection.stats() for all the collections in your database. Especially the numbers for storageSize and freeStorageSize. This will give you an idea of how much space available to be reused.Alternatively, you might be able to run compact command to reclaim space. This needs to be done with care, and I encourage you to follow the procedure outlined in that page before performing this operation.I am also open to (and would kindly appreciate) hearing your opinion on whether a NoSQL database storage system may not be the most viable option in my case, and whether it would be better to migrate to a relational database instead.This depends on your use case. There are use cases that’s easier for a NoSQL solution, and the other way around. I don’t think there’s a single correct answer for every use case for this question, even for similar-looking ones. Are you seeing some performance issues, usability issues, or anything else with MongoDB currently?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "storageSizefreeStorageSizefreeStorageSize",
"text": "Hi community!\nI am currently working on a Software Project, where I’ve set up an M10 Dedicated cluster on MongoDB Atlas. The statistics of my database look like this:Dear Kevin,Thank you for your quick reply! In terms of performance, all the queries I had to perform worked as expected. My only concern is that the disk space usage is too large (compared to the Storage Size that I am currently using), and I am going to exceed the constraints of the plan I have paid for. I assume that this disk space usage comes from the “oplog.rs” collection of “local” database. Is there any way to mitigate this high usage level?I have to note that I did not perform any deletion operations throughout the existence of my collection, but I have added 10k documents in “news_collection” (and their associated 100k hashes in “hashes_collection”) in short period of time (~ 6 hours)\nThanks in advance.Moreover, the storageSize command works, but the freeStorageSize does not return anything (I wasn’t able to find freeStorageSize in the documentation either).Best,\n–Vlad",
"username": "Vlad-Petru_Nitu"
},
{
"code": "createCollectiondb.createCollection('zlib', {storageEngine: {wiredTiger: {configString: 'block_compressor=zlib'}}})storageSizefreeStorageSizefreeStorageSizefreeStorageSizedb.collection.stats()test> db.test.stats()\n{\n ns: 'test.test',\n size: 66,\n count: 2,\n avgObjSize: 33,\n numOrphanDocs: 0,\n storageSize: 36864,\n freeStorageSize: 16384,\n capped: false,\n ....\n",
"text": "Hi @Vlad-Petru_NituI have to note that I did not perform any deletion operations throughout the existence of my collection, but I have added 10k documents in “news_collection” (and their associated 100k hashes in “hashes_collection”) in short period of time (~ 6 hours)By default, Atlas tries to keep at least 24 hours of oplog window. So if you just added a lot of documents within the last 24 hours, you may see an elevated oplog size in the short term. This should go down to a more workload-related size once the deployment goes into a steady state. Do you mind checking after a day or two if space is still an issue?Alternatively, by default WiredTiger compresses data using Snappy, which provides good compression performance vs. speed. You can change this using the server-wide block compression setting to use e.g. zlib, whose tradeoff is more compression but less performance.You can also set this compressor setting per-collection, see my response on StackOverflow on how to do this. Basically you execute the createCollection command:db.createCollection('zlib', {storageEngine: {wiredTiger: {configString: 'block_compressor=zlib'}}})Note that using compression on the collections doesn’t change the oplog retention of 24 hours, so you’ll still see this after a large import.Moreover, the storageSize command works, but the freeStorageSize does not return anything (I wasn’t able to find freeStorageSize in the documentation either).Sorry I should’ve provide more details. freeStorageSize is not a command, but a field in the output of the db.collection.stats(). For example if I have a collection named test:It shows the storage size in bytes, and freeStorageSize which is space available for reuse. See https://www.mongodb.com/docs/manual/reference/command/dbStats/#mongodb-data-dbStats.freeStorageSize for more details.Best regards\nKevin",
"username": "kevinadi"
},
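To see where the space is going before deciding on compact, a quick mongosh loop over the collections in the database shows how much of each collection's file is reusable free space (a sketch; on Atlas, compact also requires appropriate privileges and should be run per the documentation, secondaries first):

```javascript
// Compare allocated storage vs. space available for reuse, per collection
db.getCollectionNames().forEach(function (name) {
  const s = db.getCollection(name).stats();
  print(name,
        "storageSize:", s.storageSize,
        "freeStorageSize:", s.freeStorageSize ?? "n/a");
});

// If a collection shows a lot of reusable space, compact can release it back to the OS
db.runCommand({ compact: "news_collection" });
```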
{
"code": "stats().stats() sharded: false,\n size: 1035091724,\n count: 499626,\n numOrphanDocs: 0,\n storageSize: 5177339904,\n totalIndexSize: 0,\n totalSize: 5177339904,\n indexSizes: {},\n avgObjSize: 2071,\n maxSize: 1038090240,\n ns: 'local.oplog.rs',\n nindexes: 0,\n scaleFactor: 1\n",
"text": "Dear @kevinadi ,Thank you for your comprehensive answer. The “Disk storage used” has not yet decreased since I last added the bulk of documents to my database (~30 hours ago), but I am willing to wait more in order to observe whether the space is still an issue or not.When it comes to running the stats() command from an arbitrary collection, the returning values I get are different from those that you provided in the code above. Here is a snippet of what is outputted when I do .stats():Thank you for your time!Best,\n–Vlad",
"username": "Vlad-Petru_Nitu"
}
] | Disk space usage larger than expected (compared to Storage size) | 2023-06-07T10:13:39.626Z | Disk space usage larger than expected (compared to Storage size) | 1,325 |
|
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "",
"text": "I have a requirement where user can multi select search fields from application and fire the search.\nSo lets take 2 fields for the discussion where I need to apply full text search on both fields. I have created 2 separate text index on both columns. How can I make use of both indexes using compound operator? is there a way?Note: there can be up to 5 search fields user can select and in different order.db.sampleCollection.aggregate([\n{$search: {index: “index1” , “compound”: {“should”: [ { “regex”: {“path”: “name.fullName”,“query”: “(.)bin(.)”}}]}}},\n])db.sampleCollection.aggregate([\n{$search: {index: “index2” , “compound”: {“should”: [ { “regex”: {“path”: “references.name”,“query”: “(.)intel(.)”}}]}}},\n])The queries above are working, however I want to merge these into one using compound operator and mention both indexes.",
"username": "Prabhu_Pasupathiraj"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"name\": [\n {\n \"type\": \"document\",\n \"dynamic\": true\n }\n ],\n \"references\": {\n \"type\": \"document\",\n \"dynamic\": true\n }\n }\n }\n}\ndb.sampleCollection.aggregate([\n{$search: \n {\n index: “default” , \n “compound”: {\n “should”: [ \n {\n “regex”: {\n “path”: “references.name”,\n “query”: “(.)intel(.)”\n },\n },\n {\n “regex”: {\n “path”: “name.fullName”,\n “query”: “(.)bin(.)”\n },\n },\n ]\n }\n }\n}\n])\n",
"text": "Hey @Prabhu_Pasupathiraj , welcome to the community!You can just use one search index that indexes both the name.fullName field and the references.name field.For the index definition:Query:let me know if this helps!",
"username": "amyjian"
},
{
"code": "",
"text": "Thanks Amy.\nAs mentioned earlier, I have up to 5 fields that can be selected by the user in any order and all 5 are searchable fields. In this case, If I add all those 5 fields in one index - will it work even I change the order of fields or limit the search with 3 fields?",
"username": "Prabhu_Pasupathiraj"
},
{
"code": "",
"text": "In this case, If I add all those 5 fields in one index - will it work even I change the order of fields or limit the search with 3 fields?This should work unless i’m interpreting the question wrong. You can test it out and advise the results.For reference, I have one Atlas search index which covers 5 fields in my test environment and i’m able to search on all of those fields.",
"username": "Jason_Tran"
},
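Since the set of searchable fields is chosen by the user at runtime, one way to handle the "any combination, any order" requirement is to keep the single index covering all five fields and build the compound.should clauses dynamically from whatever was selected. A rough sketch (untested; the selections object and the use of minimumShouldMatch are illustrative):

```javascript
// Hypothetical user input: field path -> search term
const selections = {
  "name.fullName": "bin",
  "references.name": "intel"
};

// Build one regex clause per selected field (terms should be regex-escaped in real code)
const should = Object.entries(selections).map(([path, term]) => ({
  regex: { path: path, query: `(.*)${term}(.*)` }
}));

db.sampleCollection.aggregate([
  {
    $search: {
      index: "default",
      compound: { should: should, minimumShouldMatch: 1 }
    }
  }
])
```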
{
"code": "",
"text": "Thanks Jason. I will try and get back.",
"username": "Prabhu_Pasupathiraj"
}
] | Atlas search on more than 1 field within same collection | 2023-06-08T06:49:24.819Z | Atlas search on more than 1 field within same collection | 669 |
null | [
"aggregation"
] | [
{
"code": "\"planSummary\": \"COLLSCAN\",\n\t\t\"numYields\": 225,\n\t\t\"queryHash\": \"FB805F4B\",\n\t\t\"queryFramework\": \"classic\",\n\t\t\"ok\": 0,\n\t\t\"errMsg\": \"PlanExecutor error during aggregation :: caused by :: Out of memory\",\n\t\t\"errName\": \"JSInterpreterFailure\",\n\t\t\"errCode\": 139,\n\t\t\"reslen\": 163,\ndb.user_103_tmp.aggregate([\n {\n $project: {\n emits: {\n k: \"$groupid\",\n v: {\n ...\n }\n }\n }\n },\n {\n $unwind: \"$emits\"\n },\n {\n $group: {\n _id: \"$emits.k\",\n value: {\n $accumulator: {\n init: function() {\n return {\n d: []\n };\n },\n initArgs: [],\n accumulate: function(state, values) {\n state.d.push(JSON.stringify(values));\n return state;\n },\n accumulateArgs: [\"$emits.v\"],\n merge: function(state1, state2) {\n return {\n d: state1.d.concat(state2.d)\n };\n },\n finalize: function(state) {\n return state.d\n },\n lang: \"js\"\n }\n }\n }\n },\n {\n $out: \"user_103_group\"\n }\n], {\n allowDiskUse: true\n})\n\n",
"text": "It runs successfully in mongodb version 5.0+, but fails in version 6.0, why???version:6.0.6mongo.logmy code",
"username": "lijinhua6324"
},
{
"code": "user_103_tmpuser_103_tmpuser_103_tmpmongodmongod",
"text": "Hi @lijinhua6324,It runs successfully in mongodb version 5.0+, but fails in version 6.0, why???To help us troubleshoot it would help to understand a few things about the environments you’re working in.The goal here is to reproduce the behavior you’ve described so we can determine the source of the issue.",
"username": "alexbevi"
},
{
"code": "db.user_103_tmp.aggregate([\n {\n $group: {\n _id: \"$userid\",\n v: { $push: \"$$ROOT\" }\n }\n }\n {\n $out: \"user_103_group\"\n }\n])\n\n",
"text": "I solved it using the following",
"username": "lijinhua6324"
}
] | MongoServerError: PlanExecutor error during aggregation :: caused by :: Out of memory | 2023-06-08T10:06:25.115Z | MongoServerError: PlanExecutor error during aggregation :: caused by :: Out of memory | 1,198 |
null | [
"sharding",
"time-series"
] | [
{
"code": " {\n _id: ObjectId(\"62a0384c5eefb9223069dd01\"),\n control: {\n version: 2,\n min: {\n _id: ObjectId(\"62a03868cb0ba41ed02aeba3\"),\n param: 1,\n timeStamp: ISODate(\"2022-06-08T05:49:00.000Z\"),\n isOk: false,\n _class: 'test.class',\n hs: 'test.host1'\n },\n max: {\n _id: ObjectId(\"62a03aa2cc14743b3525d572\"),\n param: 89,\n timeStamp: ISODate(\"2022-06-08T05:58:54.000Z\"),\n isOk: true,\n _class: 'test.class',\n hs: 'test.host2'\n },\n count: 577\n },\n meta: { zone: 1, internalID: 1810 },\n data: {\n timeStamp: Binary(Buffer.from(\"0900482bdc4181010000870a7d0000000000001f0000000000000004000000000000000a00000000d0f77c01000000000000001f0000000000000006000000000000000e0000000000000000\", \"hex\"), 7),\n isOk: Binary(Buffer.from(\"0800008f02000000000000209240200180040020121200000000000002000000008001000200000000000000622009098404000002986160000000000200001860801900010000000000000082810400004800000260000000060000020000000000000002000000062609094208210118000600026060809909006102000000000000008282040000000600801200000060928101090000000000000000\", \"hex\"), 7),\n param: Binary(Buffer.from(\"01000000000000002e409f758440e131e39aec95a42c098807098a45dcc32e0d9854e8752507e999618252c68481b469c9120c861ed3a04757840c9648d32036889145851b43c180a8923235e650095f30c240062802f50c4ea3b0b66ddcc13c1f408835a3116aa4e2147d950d44abc92493532644d4835583e02e35753995f357b68106d58ba2404364559f36204301788bf51c0676116245c9611c46868a0344c40055651348e2d9334831071c4425c1a958c726150138415d20241502e0cd00b50a5b655b8866c054ec5c6610e8b61009c20cc52f8a453052222666389d030d91350e358c8071681e4fd0055ed92c88f0782a869043d2211180253605881040eaf620762c62900805b2109fc66d0982084c880c262e0fb00c482160954121431d2648314506be0fb0c7581966448640044c300c7698d00818daa300070f8ce633810140083c810512031227f6d84426940a806ed65097c1a0675310263040102008931ae678951109ea8835c64c1cc1152123e4c670d764144601281681c05140cb003c458e5bd01416952996a5054e2da34a0e14060482a344091a11561485c08c856326751641d108d908190618863549c5a705760458ab2ccca220fc00000c0000000000\", \"hex\"), 7),\n _class: Binary(Buffer.from(\"02002b000000636f6d2e616b616d61692e70726f6d6f6e2e6473732e646f63756d656e742e52656c6561726e4461746100833f0000000000000001000000000000000200000000000000090000000000000000\", \"hex\"), 7),\n hs: Binary(Buffer.from(\"02002c00000070726f642d6a69726164632d61707030312d7531362e64667730322e636f72702e616b616d61692e636f6d00833f0000000000000001000000000000000200000000000000090000000000000000\", \"hex\"), 7),\n _id: 
Binary(Buffer.from(\"070062a03868cb0ba41ed02aeba38107044000000000044e00000000000000070062a03871cc14743b35259e3e8149608002040080001c00000200100000070062a0387bcb0ba41ed02aeca28129000000000000813c1f000f00000000070062a03885cc14743b3525a13e806c00000000300000070062a03886cb0ba41ed02aef20802a00000000000000070062a03890cc14743b3525a1ba81070200000000300e0e00000000000000070062a0389acb0ba41ed02af0138207020000000008002b00c0a7ec5300002b00280020000600070062a038a4cc14743b3525a2b4802c00000200100000070062a038aecb0ba41ed02af117800702000000000000070062a038b8cc14743b3525a5a8804614460000801008070062a038c2cb0ba41ed02af3fb822b000000000000000ce01ffffd0100000d00000050540000070062a038cccc14743b3525a69c800702000000000000070062a038d6cb0ba41ed02af6f281070280a0600000047d00000020f80700070062a038e0cc14743b3525a99a80070241c110000000070062a038eacb0ba41ed02af7f48107020000200800002d00000000f80700070062a038f4cc14743b3525aa9b8207020000000008020c0000d810720d210c00000200500000070062a038fdcb0ba41ed02af8de832c00000000800000cd0220004025f8073d980020080000006e02000000000000070062a03907cc14743b3525ad7d802a000000000ed000070062a03911cb0ba41ed02afbc981070200401080b5001e00000000000000070062a0391bcc14743b3525ae788107840040105024000e00000000000000070062a03925cb0ba41ed02afec88107020000000008001e00000000000000070062a0392fcc14743b3525b16e8107028160000010060e00000000000000070062a03939cb0ba41ed02affd98107020000202818020e00000000000000070062a03944cc14743b3525b279804a00000000010000070062a03944cb0ba41ed02b0052804c2000ff01100000070062a0394ecc14743b3525b3d082298000040c1080112cfc1f05000200004e2d210000000000070062a03958cb0ba41ed02b03358149400000107080014d20000014080000070062a03962cc14743b3525b5dc81070440c1100000000e00000000000000070062a0396ccb0ba41ed02b04338007028120a1180802070062a03976cc14743b3525b8cb8107020040503004008e20000000000000070062a03980cb0ba41ed02b0736810802000200050601ad20000024080000070062a0398acc14743b3525b9e281290000000000e07f0c00000200100000070062a03994cb0ba41ed02b08278169c0000008008083dc20000200300000070062a0399ecc14743b3525bc29802980000100008001070062a0399ecb0ba41ed02b09fb822c000000006025415c254100000000000a0000000000a020070062a039b2cc14743b3525bdd7804e00000000000000070062a039b2cb0ba41ed02b0b9b8329000000006060838c00000600c00f211c102102005000008c00000700600000070062a039bdcc14743b3525be44812c00000400002f413c2f410000200000070062a039c6cb0ba41ed02b0e89802a00000000044021070062a039d1cc14743b3525c12e8107044040000004000c00000015b04f01070062a039dbcb0ba41ed02b0f83800804010000040300070062a039e5cc14743b3525c2328107044040100000002e00000000000000070062a039efcb0ba41ed02b12898107860161000000000e00000000000000070062a039f8cc14743b3525c5288308020000020001000d210000340800004d0ce11f0c43f8071a00000000043000070062a03a01cb0ba41ed02b136b808d20000074070000070062a03a04cc14743b3525c5c8822c00009c0eb2e9200900000e342040016e4e410000000000070062a03a16cb0ba41ed02b1659800702000000200c00070062a03a20cc14743b3525c9098107020120001000085e00000000000000070062a03a2acb0ba41ed02b1777802900000000000000070062a03a2acc14743b3525c994822c00000000003021fc2f2100000000000a000eb000010000070062a03a34cb0ba41ed02b17e981ac2e61e512060000aa000b00000a9000070062a03a3ecc14743b3525cc82814c00000413523021078202e2c0581800070062a03a52cb0ba41ed02b1b69802e00000000000000070062a03a52cc14743b3525cd7b070062a03a52cb0ba41ed02b1b6c070062a03a52cc14743b3525cd7c824a000140000300006c342143130200001e00000000000000070062a03a5ccb0ba41ed02b1bf2804a00000000089000070062a03a66cc14743b3525d06a81080200001a1900000d200000fc070000070062a03a70cb0ba41ed02b1ed28108020000020104036d21000054080000070062a03a7acc14743b3525d183802a00000000
000000070062a03a7acb0ba41ed02b1f55822b000010fc0708263b300100080000005c00000600100000070062a03a85cc14743b3525d1f8826c0000cc10f60c610980000110300088fd21000000000000070062a03a98cb0ba41ed02b22c6804a00012000027000070062a03a99cc14743b3525d4e78267120004200160843c102100000000000c0000000000000000\", \"hex\"), 7)\n```",
"text": "Hi all,\nWe are migrating the data from regular collections to timeseries collections in mongodb.\nwe checked how data looks like in system.buckets.<collection_name>You can see that 577 documents are stored in one bucket but why data is not readable here ( Binary(Buffer.from)?If we read the data from time series collection directly, which is basically view, it shows correct data.Is it that, mongodb stores compressed/hashed data in system.buckets.<collection_name> ?Thanks in advance.",
"username": "Yogesh_Sonawane1"
},
{
"code": "Binary(Buffer.from)Binary(Buffer.from)db.system.buckets.<collection_name>timestampmetafieldsystem.bucketsystem.bucket",
"text": "Hey @Yogesh_Sonawane1,Thank you for reaching out to the MongoDB Community forums.In MongoDB, the time-series collection follows a bucketing pattern to store the data in an optimized format. So, when you create a time-series collection, it creates 3 collections within the same database out of which 2 are internal collections:For example, in your case, you have created a time-series collection. The collection acts as a view that allows you to interact with all the documents and perform operations.You can see that 577 documents are stored in one bucket but why data is not readable here ( Binary(Buffer.from)?Regarding your question about the readability of the data using Binary(Buffer.from) , the actual data is stored in a bucketing pattern within the db.system.buckets.<collection_name> collection. This means that the individual documents are merged together into a single document known as a “bucket” based on timestamp and metafield.It’s worth noting that the time series collection was primarily designed to capture a stream of data and aggregate those data, where the aggregation result is typically the end result of the workflow, instead of looking at individual data points. Hence, the current implementation of system.bucket is one way to achieve this goal, and we should expect that the underlying storage pattern will change for the better over time. Therefore, I don’t think you can depend on the current pattern being what it is today. In the future, the system.bucket collection may not exist or may not resemble what it is today. To ensure that your workflow still works with future time series implementations, I encourage you not to depend on its current structure.Hope it answers your questions, in case you have any further questions or concerns feel free to reach out.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
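For readers following along, here is a minimal mongosh sketch of the bucketing behaviour described above; the collection and field names are hypothetical, and the internal bucket layout may differ between server versions:

// create a time series collection; a system.buckets.weather collection is created internally
db.createCollection("weather", { timeseries: { timeField: "ts", metaField: "sensorId" } })
db.weather.insertOne({ ts: new Date(), sensorId: 1, temp: 21.5 })
// the view returns readable documents ...
db.weather.findOne()
// ... while the internal collection holds the packed bucket documents
db.getCollection("system.buckets.weather").findOne()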
{
"code": "system.bucket",
"text": "Thank you @Kushagra_Kesav for detailed explanation. I have got answer to my question.I don’t think you can depend on the current pattern being what it is today. In the future, the system.bucket collection may not exist or may not resemble what it is today.This statement confused me, because currently, shard zone key range can be added to timeseries collections only using system.buckets.<collection_name> collection. Otherwise zone key range is not working if added to time series collection directly.\nIf system.buckets.<collection_names> does not exist in future, how shard zone key range will work for time series collection.\nwe are using Mongodb 6.06Thanks,\nYogesh S",
"username": "Yogesh_Sonawane1"
}
] | Time series collections - data storage in buckets | 2023-06-07T13:25:03.857Z | Time series collections - data storage in buckets | 1,076 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi All!Can you please help to configure encryption on existing replica set (using local key). I tried enabling the required parameter but facing errors.Configuration Parameters:security:\nenableEncryption: true\nencryptionKeyFile: /mongo/encryption/mykeyError:\n“{“t”:{”$date\":“2023-06-08T12:04:29.895+05:00”},“s”:“E”, “c”:“STORAGE”, “id”:24248, “ctx”:“initandlisten”,“msg”:“Unable to retrieve key”,“attr”:{“keyId”:“.system”,“error”:{“code”:2,“codeName”:“BadValue”,“errmsg”:“There are existing data files, but no valid keystore could be located.”}}}\"Regards.\nNAQ",
"username": "Noman_Ahmed_Qazi"
},
{
"code": "",
"text": "Follow the tutorial below. You will have to enable encryption on each member one by one in a rolling fashion and perform initial syncs. After that all member will have encrypted at rest data.",
"username": "chris"
},
{
"code": "",
"text": "Many thanks, the issue has been resolved and like you’ve mentioned all steps are taken care of. In fact, there was a permission issue on my local key file. Documents say to keep the permission 600 in root ownership, but it did not work out. Keeping permission to 400 under ownership of mongod, helped in the end.\nThanks for your valuable input.\nRegards.",
"username": "Noman_Ahmed_Qazi"
}
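For reference, a minimal shell sketch of preparing a local key file for encryption at rest so that the mongod service user can read it; the path is taken from the post above, and the mongod user/group name is an assumption that depends on your packaging:

openssl rand -base64 32 > /mongo/encryption/mykey
chown mongod:mongod /mongo/encryption/mykey
chmod 400 /mongo/encryption/mykey   # 600 also works, as long as the file is owned by the user running mongod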
] | Enable Encryption on existing Replica set | 2023-06-08T07:12:38.098Z | Enable Encryption on existing Replica set | 659 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi,\nnot able to reload the mongod service after modifying the .conf file , i just enabled the security and replicaset on mongod.conf file. i have tried. both 4.4 and 6.0 version",
"username": "Mohamed_Ismail"
},
{
"code": "",
"text": "mongodb service go to active state while disable the security or replicaset in conf file.",
"username": "Mohamed_Ismail"
},
{
"code": "",
"text": "what is your config like?any error messages in log file?",
"username": "Kobe_W"
},
{
"code": "",
"text": "@Kobe_W , here i included log and configurationmongod.log{“t”:{“$date”:“2023-06-08T14:05:24.149+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:23377, “ctx”:“SignalHandler”,“msg”:“Received signal”,“attr”:{“signal”:15,“error”:“Terminated”}}\n{“t”:{“$date”:“2023-06-08T14:05:24.149+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:23378, “ctx”:“SignalHandler”,“msg”:“Signal was sent by kill(2)”,“attr”:{“pid”:1,“uid”:0}}\n{“t”:{“$date”:“2023-06-08T14:05:24.149+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:23381, “ctx”:“SignalHandler”,“msg”:“will terminate after current cmd ends”}\n{“t”:{“$date”:“2023-06-08T14:05:24.149+05:30”},“s”:“I”, “c”:“REPL”, “id”:4784900, “ctx”:“SignalHandler”,“msg”:“Stepping down the ReplicationCoordinator for shutdown”,“attr”:{“waitTimeMillis”:10000}}\n{“t”:{“$date”:“2023-06-08T14:05:24.149+05:30”},“s”:“I”, “c”:“COMMAND”, “id”:4784901, “ctx”:“SignalHandler”,“msg”:“Shutting down the MirrorMaestro”}\n{“t”:{“$date”:“2023-06-08T14:05:24.149+05:30”},“s”:“I”, “c”:“REPL”, “id”:40441, “ctx”:“SignalHandler”,“msg”:“Stopping TopologyVersionObserver”}\n{“t”:{“$date”:“2023-06-08T14:05:24.149+05:30”},“s”:“I”, “c”:“REPL”, “id”:40447, “ctx”:“TopologyVersionObserver”,“msg”:“Stopped TopologyVersionObserver”}\n{“t”:{“$date”:“2023-06-08T14:05:24.150+05:30”},“s”:“I”, “c”:“SHARDING”, “id”:4784902, “ctx”:“SignalHandler”,“msg”:“Shutting down the WaitForMajorityService”}\n{“t”:{“$date”:“2023-06-08T14:05:24.150+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:4784903, “ctx”:“SignalHandler”,“msg”:“Shutting down the LogicalSessionCache”}\n{“t”:{“$date”:“2023-06-08T14:05:24.151+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:20562, “ctx”:“SignalHandler”,“msg”:“Shutdown: going to close listening sockets”}\n{“t”:{“$date”:“2023-06-08T14:05:24.151+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:23017, “ctx”:“listener”,“msg”:“removing socket file”,“attr”:{“path”:“/tmp/mongodb-27017.sock”}}\n{“t”:{“$date”:“2023-06-08T14:05:24.152+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4784905, “ctx”:“SignalHandler”,“msg”:“Shutting down the global connection pool”}\n{“t”:{“$date”:“2023-06-08T14:05:24.152+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:4784906, “ctx”:“SignalHandler”,“msg”:“Shutting down the FlowControlTicketholder”}\n{“t”:{“$date”:“2023-06-08T14:05:24.152+05:30”},“s”:“I”, “c”:“-”, “id”:20520, “ctx”:“SignalHandler”,“msg”:“Stopping further Flow Control ticket acquisitions.”}\n{“t”:{“$date”:“2023-06-08T14:05:24.152+05:30”},“s”:“I”, “c”:“REPL”, “id”:4784907, “ctx”:“SignalHandler”,“msg”:“Shutting down the replica set node executor”}\n{“t”:{“$date”:“2023-06-08T14:05:24.153+05:30”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“ReplNodeDbWorkerNetwork”,“msg”:“Killing all outstanding egress activity.”}\n{“t”:{“$date”:“2023-06-08T14:05:24.153+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:4784908, “ctx”:“SignalHandler”,“msg”:“Shutting down the PeriodicThreadToAbortExpiredTransactions”}\n{“t”:{“$date”:“2023-06-08T14:05:24.153+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:4784934, “ctx”:“SignalHandler”,“msg”:“Shutting down the PeriodicThreadToDecreaseSnapshotHistoryCachePressure”}\n{“t”:{“$date”:“2023-06-08T14:05:24.153+05:30”},“s”:“I”, “c”:“REPL”, “id”:4784909, “ctx”:“SignalHandler”,“msg”:“Shutting down the ReplicationCoordinator”}\n{“t”:{“$date”:“2023-06-08T14:05:24.153+05:30”},“s”:“I”, “c”:“REPL”, “id”:21328, “ctx”:“SignalHandler”,“msg”:“Shutting down replication subsystems”}\n{“t”:{“$date”:“2023-06-08T14:05:24.153+05:30”},“s”:“I”, “c”:“REPL”, “id”:21302, “ctx”:“SignalHandler”,“msg”:“Stopping replication reporter thread”}\n{“t”:{“$date”:“2023-06-08T14:05:24.153+05:30”},“s”:“I”, “c”:“REPL”, “id”:21303, 
“ctx”:“SignalHandler”,“msg”:“Stopping replication fetcher thread”}\n{“t”:{“$date”:“2023-06-08T14:05:24.153+05:30”},“s”:“I”, “c”:“REPL”, “id”:21304, “ctx”:“SignalHandler”,“msg”:“Stopping replication applier thread”}\n{“t”:{“$date”:“2023-06-08T14:05:24.573+05:30”},“s”:“I”, “c”:“-”, “id”:4333222, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM received failed isMaster”,“attr”:{“host”:“tp-testreplica2:27017”,“error”:“HostUnreachable: Error connecting to tp-testreplica2:27017 (192.168.1.191:27017) :: caused by :: Connection refused”,“replicaSet”:“tp1”,“isMasterReply”:“{}”}}\n{“t”:{“$date”:“2023-06-08T14:05:24.573+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica set”,“attr”:{“replicaSet”:“tp1”,“host”:“tp-testreplica2:27017”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to tp-testreplica2:27017 (192.168.1.191:27017) :: caused by :: Connection refused”},“action”:{“dropConnections”:true,“requestImmediateCheck”:false,“outcome”:{“host”:“tp-testreplica2:27017”,“success”:false,“errorMessage”:“HostUnreachable: Error connecting to tp-testreplica2:27017 (192.168.1.191:27017) :: caused by :: Connection refused”}}}}\n{“t”:{“$date”:“2023-06-08T14:05:24.621+05:30”},“s”:“I”, “c”:“REPL_HB”, “id”:23974, “ctx”:“ReplCoord-0”,“msg”:“Heartbeat failed after max retries”,“attr”:{“target”:“tp-testreplica2:27017”,“maxHeartbeatRetries”:2,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to tp-testreplica2:27017 (192.168.1.191:27017) :: caused by :: Connection refused”}}}\n{“t”:{“$date”:“2023-06-08T14:05:25.073+05:30”},“s”:“I”, “c”:“CONNPOOL”, “id”:22576, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Connecting”,“attr”:{“hostAndPort”:“tp-testreplica2:27017”}}\n{“t”:{“$date”:“2023-06-08T14:05:25.073+05:30”},“s”:“I”, “c”:“-”, “id”:4333222, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM received failed isMaster”,“attr”:{“host”:“tp-testreplica2:27017”,“error”:“HostUnreachable: Error connecting to tp-testreplica2:27017 (192.168.1.191:27017) :: caused by :: Connection refused”,“replicaSet”:“tp1”,“isMasterReply”:“{}”}}\n{“t”:{“$date”:“2023-06-08T14:05:25.073+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica set”,“attr”:{“replicaSet”:“tp1”,“host”:“tp-testreplica2:27017”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to tp-testreplica2:27017 (192.168.1.191:27017) :: caused by :: Connection refused”},“action”:{“dropConnections”:true,“requestImmediateCheck”:true}}}\n{“t”:{“$date”:“2023-06-08T14:05:25.075+05:30”},“s”:“I”, “c”:“REPL”, “id”:21225, “ctx”:“OplogApplier-0”,“msg”:“Finished oplog application”}\n{“t”:{“$date”:“2023-06-08T14:05:25.078+05:30”},“s”:“I”, “c”:“REPL”, “id”:21107, “ctx”:“BackgroundSync”,“msg”:“Stopping replication producer”}\n{“t”:{“$date”:“2023-06-08T14:05:25.078+05:30”},“s”:“I”, “c”:“REPL”, “id”:21307, “ctx”:“SignalHandler”,“msg”:“Stopping replication storage threads”}\n{“t”:{“$date”:“2023-06-08T14:05:25.078+05:30”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“OplogApplierNetwork”,“msg”:“Killing all outstanding egress activity.”}\n{“t”:{“$date”:“2023-06-08T14:05:25.078+05:30”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“ReplCoordExternNetwork”,“msg”:“Killing all outstanding egress activity.”}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“ReplNetwork”,“msg”:“Killing all outstanding egress activity.”}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, 
“c”:“SHARDING”, “id”:4784910, “ctx”:“SignalHandler”,“msg”:“Shutting down the ShardingInitializationMongoD”}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, “c”:“REPL”, “id”:4784911, “ctx”:“SignalHandler”,“msg”:“Enqueuing the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, “c”:“-”, “id”:4784912, “ctx”:“SignalHandler”,“msg”:“Killing all operations for shutdown”}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, “c”:“-”, “id”:4695300, “ctx”:“SignalHandler”,“msg”:“Interrupted all currently running operations”,“attr”:{“opsKilled”:6}}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, “c”:“COMMAND”, “id”:4784913, “ctx”:“SignalHandler”,“msg”:“Shutting down all open transactions”}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, “c”:“REPL”, “id”:4784914, “ctx”:“SignalHandler”,“msg”:“Acquiring the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, “c”:“INDEX”, “id”:4784915, “ctx”:“SignalHandler”,“msg”:“Shutting down the IndexBuildsCoordinator”}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, “c”:“REPL”, “id”:4784916, “ctx”:“SignalHandler”,“msg”:“Reacquiring the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, “c”:“REPL”, “id”:4784917, “ctx”:“SignalHandler”,“msg”:“Attempting to mark clean shutdown”}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4784918, “ctx”:“SignalHandler”,“msg”:“Shutting down the ReplicaSetMonitor”}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4333209, “ctx”:“SignalHandler”,“msg”:“Closing Replica Set Monitor”,“attr”:{“replicaSet”:“tp1”}}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4333210, “ctx”:“SignalHandler”,“msg”:“Done closing Replica Set Monitor”,“attr”:{“replicaSet”:“tp1”}}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Killing all outstanding egress activity.”}\n{“t”:{“$date”:“2023-06-08T14:05:25.079+05:30”},“s”:“I”, “c”:“CONNPOOL”, “id”:22572, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Dropping all pooled connections”,“attr”:{“hostAndPort”:“tp-testreplica1:27017”,“error”:“ShutdownInProgress: Shutting down the connection pool”}}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:22944, “ctx”:“conn2”,“msg”:“Connection ended”,“attr”:{“remote”:“192.168.1.190:52606”,“connectionId”:2,“connectionCount”:1}}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“REPL”, “id”:4784920, “ctx”:“SignalHandler”,“msg”:“Shutting down the LogicalTimeValidator”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:22944, “ctx”:“conn3”,“msg”:“Connection ended”,“attr”:{“remote”:“192.168.1.190:52612”,“connectionId”:3,“connectionCount”:0}}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“SHARDING”, “id”:4784921, “ctx”:“SignalHandler”,“msg”:“Shutting down the MigrationUtilExecutor”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:4784925, “ctx”:“SignalHandler”,“msg”:“Shutting down free monitoring”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:20609, “ctx”:“SignalHandler”,“msg”:“Shutting down free monitoring”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:4784927, “ctx”:“SignalHandler”,“msg”:“Shutting down the 
HealthLog”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:4784929, “ctx”:“SignalHandler”,“msg”:“Acquiring the global lock for shutdown”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:4784930, “ctx”:“SignalHandler”,“msg”:“Shutting down the storage engine”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:22320, “ctx”:“SignalHandler”,“msg”:“Shutting down journal flusher thread”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:22321, “ctx”:“SignalHandler”,“msg”:“Finished shutting down journal flusher thread”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:20282, “ctx”:“SignalHandler”,“msg”:“Deregistering all the collections”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:22372, “ctx”:“OplogVisibilityThread”,“msg”:“Oplog visibility thread shutting down.”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:22261, “ctx”:“SignalHandler”,“msg”:“Timestamp monitor shutting down”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:22317, “ctx”:“SignalHandler”,“msg”:“WiredTigerKVEngine shutting down”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:22318, “ctx”:“SignalHandler”,“msg”:“Shutting down session sweeper thread”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:22319, “ctx”:“SignalHandler”,“msg”:“Finished shutting down session sweeper thread”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:22322, “ctx”:“SignalHandler”,“msg”:“Shutting down checkpoint thread”}\n{“t”:{“$date”:“2023-06-08T14:05:25.080+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:22323, “ctx”:“SignalHandler”,“msg”:“Finished shutting down checkpoint thread”}\n{“t”:{“$date”:“2023-06-08T14:05:25.081+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:4795902, “ctx”:“SignalHandler”,“msg”:“Closing WiredTiger”,“attr”:{“closeConfig”:“leak_memory=true,”}}\n{“t”:{“$date”:“2023-06-08T14:05:25.082+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“SignalHandler”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1686213325:82578][98841:0x7faf3b10d700], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 10, snapshot max: 10 snapshot count: 0, oldest timestamp: (1686080677, 1) , meta checkpoint timestamp: (1686080677, 1) base write gen: 2908”}}\n{“t”:{“$date”:“2023-06-08T14:05:25.089+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:4795901, “ctx”:“SignalHandler”,“msg”:“WiredTiger closed”,“attr”:{“durationMillis”:8}}\n{“t”:{“$date”:“2023-06-08T14:05:25.089+05:30”},“s”:“I”, “c”:“STORAGE”, “id”:22279, “ctx”:“SignalHandler”,“msg”:“shutdown: removing fs lock…”}\n{“t”:{“$date”:“2023-06-08T14:05:25.089+05:30”},“s”:“I”, “c”:“-”, “id”:4784931, “ctx”:“SignalHandler”,“msg”:“Dropping the scope cache for shutdown”}\n{“t”:{“$date”:“2023-06-08T14:05:25.089+05:30”},“s”:“I”, “c”:“FTDC”, “id”:4784926, “ctx”:“SignalHandler”,“msg”:“Shutting down full-time data capture”}\n{“t”:{“$date”:“2023-06-08T14:05:25.089+05:30”},“s”:“I”, “c”:“FTDC”, “id”:20626, “ctx”:“SignalHandler”,“msg”:“Shutting down full-time diagnostic data capture”}\n{“t”:{“$date”:“2023-06-08T14:05:25.092+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:20565, “ctx”:“SignalHandler”,“msg”:“Now exiting”}\n{“t”:{“$date”:“2023-06-08T14:05:25.092+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:23138, “ctx”:“SignalHandler”,“msg”:“Shutting down”,“attr”:{“exitCode”:0}}",
"username": "Mohamed_Ismail"
},
{
"code": "",
"text": "#mongod.confsystemLog:\ndestination: file\nlogAppend: true\npath: /var/log/mongodb/mongod.logstorage:\ndbPath: /var/lib/mongo\njournal:\nenabled: trueprocessManagement:\ntimeZoneInfo: /usr/share/zoneinfonet:\nport: 27017\nbindIp: 0.0.0.0 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.security:\nauthorization: enabled\nkeyfile: /root/keyfile/keyfile_mongod\n#transitionToAuth: true#operationProfiling:replication:\nreplSetName: “tp1”#sharding:#auditLog:#snmp:",
"username": "Mohamed_Ismail"
},
{
"code": "",
"text": "[Unit]\nDescription=MongoDB Database Server\nDocumentation=https://docs.mongodb.org/manual\nAfter=network-online.target\nWants=network-online.target[Service]\nUser=mongod\nGroup=mongod\nEnvironment=“OPTIONS=-f /etc/mongod.conf”\nEnvironment=“MONGODB_CONFIG_OVERRIDE_NOFORK=1”\nEnvironmentFile=-/etc/sysconfig/mongod\nExecStart=/usr/bin/mongod $OPTIONSLimitFSIZE=infinityLimitCPU=infinityLimitAS=infinityLimitNOFILE=64000LimitNPROC=64000LimitMEMLOCK=infinityTasksMax=infinity\nTasksAccounting=false[Install]\nWantedBy=multi-user.target",
"username": "Mohamed_Ismail"
},
{
"code": "brew",
"text": "Hi @Mohamed_Ismailnot able to reload the mongod service after modifying the .conf file , i just enabled the security and replicaset on mongod.conf file.I’m assuming you installed MongoDB using some service e.g. brew or some package management? Note that in most cases, MongoDB installed by those management systems are meant to be used as a development platform, and thus very lax in security, and many of them deploy as a standalone node.If this is for development, please try to enable replication first and not auth, and see if it works. This is so you don’t end up trying to solve two things at once.See Deploy a Replica SetOnce that works, then enable auth for the replica set.See Update Replica Set to Keyfile AuthenticationBest regards\nKevin",
"username": "kevinadi"
}
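For reference, a minimal sketch of the relevant mongod.conf sections when enabling keyfile authentication on a replica set. Note that the option is spelled keyFile (camelCase), not keyfile, and the file must live somewhere the mongod service user can read (a file under /root usually cannot be read by mongod); the paths and names below are placeholders:

security:
  authorization: enabled
  keyFile: /etc/mongod-keyfile
replication:
  replSetName: tp1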
] | Can't start the mongod service after modify the mongo.conf file | 2023-06-07T16:28:47.054Z | Can’t start the mongod service after modify the mongo.conf file | 790 |
null | [
"queries",
"crud",
"golang"
] | [
{
"code": "client, _ := mongo.Client(....uri....)\n\ntb := client.Database(\"database_name\").Collection(\"table_name\")\n\ncursor := tb.Find({}).Cursor()\n\nvar maxBulkSize = 15000\n\nvar model DataModelStruct{\n ID primitive.ObjectID `bson:\"_id\"`\n UpdateAt uint64 `bson:\"update_at\"`\n}\n\nbulks := make([]DataModelStruct, 0, maxBulkSize)\n\nfor cursor.Next(&model) {\n bulks = append(bulks, model)\n if len(bulks) < maxBulkSize {\n continue\n }\n bm := make([]UpdateOneModel, 0, maxBulkSize)\n for _, m := range bulks {\n uom := NewUpdateOneModel().\n SetFilter(bson.M{\"_id\":m.ID}).\n SetUpdate(bson.M{\n \"$set\": bson.M{\n \"update_at\": time.Now().UnixNano() // this will be changed everytime to ensure this value not equal the value already in doc\n }\n })\n bm = append(bm, uom)\n }\n res,err := tb.BulkWrite(ctx, bm, ordered=false)\n // no err is here\n\n // reset bm、bulks to zero slice\n bm = make([]UpdateOneModel, 0, maxBulkSize)\n bulks = make([]DataModelStruct, 0, maxBulkSize)\n \n // but got res.MatchedCount not equal res.ModifiedCount there\n \n}\n\n2023-06-08 16:13:52: start migrate basebill, concurrency:20000, online:false\n2023-06-08 16:13:59: loop update count: 15000, Matched:14999 Modified:10732\n2023-06-08 16:14:07: loop update count: 30000, Matched:14999 Modified:10758\n2023-06-08 16:14:13: loop update count: 45000, Matched:14999 Modified:11163\n2023-06-08 16:14:14: loop update count: 60000, Matched:14999 Modified:9661\n2023-06-08 16:14:19: loop update count: 75000, Matched:15000 Modified:11181\n2023-06-08 16:14:22: loop update count: 90000, Matched:15000 Modified:13024\n2023-06-08 16:14:24: loop update count: 105000, Matched:15000 Modified:7384\n2023-06-08 16:14:28: loop update count: 120000, Matched:15000 Modified:11119\n2023-06-08 16:14:31: loop update count: 135000, Matched:14999 Modified:12710\n2023-06-08 16:14:35: loop update count: 150000, Matched:14999 Modified:12216\n2023-06-08 16:14:36: loop update count: 165000, Matched:15000 Modified:10312\n2023-06-08 16:14:40: loop update count: 180000, Matched:15000 Modified:11271\n2023-06-08 16:14:44: loop update count: 195000, Matched:15000 Modified:12886\n2023-06-08 16:14:47: loop update count: 210000, Matched:15000 Modified:11166\n2023-06-08 16:14:50: loop update count: 225000, Matched:15000 Modified:12736\n2023-06-08 16:14:51: loop update count: 240000, Matched:15000 Modified:8337\n2023-06-08 16:14:54: loop update count: 255000, Matched:15000 Modified:10093\n2023-06-08 16:14:59: loop update count: 270000, Matched:15000 Modified:12313\n2023-06-08 16:15:02: last update count: 277492, Matched:7492 Modified:6813\n2023-06-08 16:15:02: end migrate basebill, online:false, total:277492\n",
"text": "I have a collection contains about 277492 records, use BulkWrite API to update all records.codes like this(golang):",
"username": "javasgl_N_A"
},
{
"code": "",
"text": "it’s my fualt. cursor return the same doc more then once",
"username": "javasgl_N_A"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | BulkWrite ModifyCount not equal MatchedCount even if filter is _id Filter | 2023-06-08T09:55:24.192Z | BulkWrite ModifyCount not equal MatchedCount even if filter is _id Filter | 658 |
null | [] | [
{
"code": "",
"text": "Anyone can help please. When I try to change a user’s password in Mongoshell I got this problem:\nMongoServerError: not authorized on sample_airbnb to execute command { updateUser: “cnxtestd”, pwd: “xxx”, apiVersion: “1”, lsid: { id: UUID(“fb2a5dae-9403-4d51-a55f-88b8f00862f4”) }, $clusterTime: { clusterTime: Timestamp(1682349221, 1), signature: { hash: BinData(0, 6847C01D19DF0639E909898431595E96BB0A9966), keyId: 7223441771265523717 } }, $db: “sample_airbnb” }\nSame happened with the command:\ndb.changeUserPassword(“cnxtestd”, “secretpassword”)",
"username": "Sergio_Anibal_Agudelo_Correa"
},
{
"code": "updateUserM0M2M5updateUser",
"text": "Hello @Sergio_Anibal_Agudelo_Correa ,Welcome to The MongoDB Community Forums! MongoServerError: not authorized on sample_airbnb to execute command { updateUser: “cnxtestd”The error message you are seeing indicates that the user you are logged in as does not have sufficient privileges to execute the updateUser command or the changeUserPassword method. Please confirm the role assigned to the user attempting to perform the updateUser command.Note: You can check the roles assigned to your user by running db.getRoles() command.Additionally, could you confirm whether or not this is an Atlas deployment and if so, what is the cluster tier? For reference, the M0 free clusters and M2/M5 shared clusters don’t support the updateUser command as per the Unsupported Commands documentation.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hi Tarun, thanks for your response…the MongoDB Atlas cluster tier is M10 dedicated. However, I think there are limitations to execute these commands in mongo shell as per the following URL: https://www.mongodb.com/docs/atlas/security-add-mongodb-users/\nI will take a look at API to try to automate the creation of database users and password change. Please let me know if I’m incorrect on this preliminary analysis.",
"username": "Sergio_Anibal_Agudelo_Correa"
},
{
"code": "updateUser",
"text": "@Sergio_Anibal_Agudelo_Correa you are right. As per this documentation on unsupported commands in m10 clusters, updateUser is not supported to ensure cluster stability and performance.",
"username": "Tarun_Gaur"
},
{
"code": "Preformatted textPreformatted textPreformatted textPreformatted text",
"text": "Hi Tarun, thanks for your reply!. Nowadays, I am trying to change a mongodb atlas database user password using the Update One Database User in One Project administration API with no success so far. I’m using curl command and when I run it, I got the following error message:\nPreformatted text{\n“error” : 405,\n“reason” : “Method Not Allowed”\n}Preformatted text\nThe curl command I’m running in Windows cmd window is the following:\nPreformatted textG:\\curl-8.0.1_9-win64-mingw\\curl-8.0.1_9-win64-mingw\\bin>curl --user “…:…” --digest “https://cloud.mongodb.com/api/atlas/v1.0/groups/..../databaseUsers/sample_airbnb/…” --json @\"G:\\curl-8.0.1_9-win64-mingw\\curl-8.0.1_9-win64-mingw\\bin\\updateUser.json\"Preformatted textAny ideas on how to solve this issue would be highly appreciate it!.",
"username": "Sergio_Anibal_Agudelo_Correa"
},
{
"code": "curl --location --request PATCH 'https://cloud.mongodb.com/api/atlas/v1.0/groups/<group-id>/databaseUsers/admin/<user-name>?envelope=false&pretty=false' \\\n--header 'Content-Type: application/json' \\\n--header 'Accept: application/json' \\\n--data '{\n \"databaseName\": \"admin\",\n \"groupId\": \"<group-id>\",\n \"username\": \"<user-name>\",\n \"password\": \"<new-password>\",\n\"roles\": [\n {\n \"databaseName\": \"<database-name>\",\n \"roleName\": \"<role>\"\n },\n {\n \"databaseName\": \"<database-name>\",\n \"roleName\": \"<role>\"\n }\n ],\n \"scopes\": [\n {\n \"name\": \"<Cluster-name>\",\n \"type\": \"CLUSTER\"\n }\n ]\n}'\n",
"text": "Hi @Sergio_Anibal_Agudelo_Correa ,I tried the MongoDB Atlas Administration API - Update One Database User in One Project and it is working as expected, below is an example of cURL command I used (sensitive information is redacted).Can you please share your update.json content (redact any sensitive content) and how are you running this?Tarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hi Tarun, how is it going?..hope everything is okay. I want to thank you for your quick reply on this case. I’m trying to change the database user’s password with Atlas Admin API. I adjusted the curl command to be used in the project I’m working on. Although the curl command example is good for Linux environment, It did help me a lot as reference to be used and to be executed in Windows to test the Atlas cluster user’s password through Windows CMD. I did use backslash for --data-raw in the curl command to get correct syntax and worked it as expected for the SCRAM user I’m working on!. Again, thanks a lot for your support. Best regards!",
"username": "Sergio_Anibal_Agudelo_Correa"
},
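For readers adapting the PATCH example above on Windows, a sketch that sidesteps most cmd quoting issues by keeping the JSON payload in a file; the API key pair, group ID, user name and file name are placeholders, and note the explicit --request PATCH:

curl --user "publicKey:privateKey" --digest --request PATCH --header "Content-Type: application/json" --data "@updateUser.json" "https://cloud.mongodb.com/api/atlas/v1.0/groups/<group-id>/databaseUsers/admin/<user-name>"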
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error changing user password on Mongo shell | 2023-04-24T15:37:14.755Z | Error changing user password on Mongo shell | 1,041 |
null | [] | [
{
"code": "",
"text": "I have spent 3 hours trying unsuccessfully to install MongoDB 5 on my Ubuntu 20.04 box (Virtual Box 7), regardless of what I try it fails.I have uninstalled, reinstalled multiple times, followed this: Setting up MongoDB v5.0 on Ubuntu 20: \"core-dump: STATUS 4/ILL\" - #4 by Stennie_X , installed Mongo v 4 and it ended up showing as Active, however Graylog failed to work properly as it says it need Mongo version 5 installed:ERROR [ServerBootstrap] Preflight check failed with error: You’re running MongoDB 4.4.22 but Graylog requires at least MongoDB 5.0.0. Please upgrade.I have tried installing Mongo 6, finish, same error as below:When I try: sudo systemctl status mongod, I get:● mongod.service - MongoDB Database Server\nLoaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\nActive: failed (Result: core-dump) since Fri 2023-06-02 03:49:41 UTC; 6s ago\nDocs: https://docs.mongodb.org/manual\nProcess: 4153 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=dumped, signal=ILL)\nMain PID: 4153 (code=dumped, signal=ILL)Jun 02 03:49:41 cislogs systemd[1]: Started MongoDB Database Server.\nJun 02 03:49:41 cislogs systemd[1]: mongod.service: Main process exited, code=dumped, status=4/ILL\nJun 02 03:49:41 cislogs systemd[1]: mongod.service: Failed with result ‘core-dump’.\n~My cpu info is as followscat /proc/cpuinfo\nprocessor\t: 0\nvendor_id\t: GenuineIntel\ncpu family\t: 6\nmodel\t\t: 140\nmodel name\t: 11th Gen Intel(R) Core™ i7-1165G7 @ 2.80GHz\nstepping\t: 1\nmicrocode\t: 0xffffffff\ncpu MHz\t\t: 2803.200\ncache size\t: 12288 KB\nphysical id\t: 0\nsiblings\t: 3\ncore id\t\t: 0\ncpu cores\t: 3\napicid\t\t: 0\ninitial apicid\t: 0\nfpu\t\t: yes\nfpu_exception\t: yes\ncpuid level\t: 22\nwp\t\t: yes\nflags\t\t: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 movbe popcnt aes rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ibrs_enhanced fsgsbase bmi1 bmi2 invpcid rdseed clflushopt md_clear flush_l1d arch_capabilities\nbugs\t\t: spectre_v1 spectre_v2 spec_store_bypass swapgs retbleed eibrs_pbrsb\nbogomips\t: 5606.40\nclflush size\t: 64\ncache_alignment\t: 64\naddress sizes\t: 39 bits physical, 48 bits virtual\npower management:processor\t: 1\nvendor_id\t: GenuineIntel\ncpu family\t: 6\nmodel\t\t: 140\nmodel name\t: 11th Gen Intel(R) Core™ i7-1165G7 @ 2.80GHz\nstepping\t: 1\nmicrocode\t: 0xffffffff\ncpu MHz\t\t: 2803.200\ncache size\t: 12288 KB\nphysical id\t: 0\nsiblings\t: 3\ncore id\t\t: 1\ncpu cores\t: 3\napicid\t\t: 1\ninitial apicid\t: 1\nfpu\t\t: yes\nfpu_exception\t: yes\ncpuid level\t: 22\nwp\t\t: yes\nflags\t\t: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 movbe popcnt aes rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ibrs_enhanced fsgsbase bmi1 bmi2 invpcid rdseed clflushopt md_clear flush_l1d arch_capabilities\nbugs\t\t: spectre_v1 spectre_v2 spec_store_bypass swapgs retbleed eibrs_pbrsb\nbogomips\t: 5606.40\nclflush size\t: 64\ncache_alignment\t: 64\naddress sizes\t: 39 bits physical, 48 bits virtual\npower management:processor\t: 2\nvendor_id\t: GenuineIntel\ncpu family\t: 6\nmodel\t\t: 140\nmodel name\t: 11th Gen Intel(R) Core™ i7-1165G7 @ 
2.80GHz\nstepping\t: 1\nmicrocode\t: 0xffffffff\ncpu MHz\t\t: 2803.200\ncache size\t: 12288 KB\nphysical id\t: 0\nsiblings\t: 3\ncore id\t\t: 2\ncpu cores\t: 3\napicid\t\t: 2\ninitial apicid\t: 2\nfpu\t\t: yes\nfpu_exception\t: yes\ncpuid level\t: 22\nwp\t\t: yes\nflags\t\t: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 movbe popcnt aes rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ibrs_enhanced fsgsbase bmi1 bmi2 invpcid rdseed clflushopt md_clear flush_l1d arch_capabilities\nbugs\t\t: spectre_v1 spectre_v2 spec_store_bypass swapgs retbleed eibrs_pbrsb\nbogomips\t: 5606.40\nclflush size\t: 64\ncache_alignment\t: 64\naddress sizes\t: 39 bits physical, 48 bits virtual\npower management:Any advice or help regarding this would be greatly appreciated. Thank you.",
"username": "Zevl_Renain"
},
{
"code": "",
"text": "Boosting this, is anyone available to help me troubleshoot?",
"username": "Zevl_Renain"
},
{
"code": "",
"text": "Boosting again , please help can someone look at my processor log and tell me if my cpu is capable of running this software or not?",
"username": "Zevl_Renain"
},
{
"code": "",
"text": "Another user got the same error with Graylog configuration\nDoes your cpu support AVX2 instruction set?\nCheck compatibility matrix",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "My cpu is 11th gen intel core i7 1165G7 @ 2.80GHz , so Tiger Lake. I assumed this was compatible based on the hardware requirements. I’m not sure how to check AVX2 instruction set ill look into it further though…I just tried installing Mongodb 4 and it worked. Mongo 5 and 6 do not work on my Ubuntu 18 / 20 distros and give me the same core dump error when i restart the service and check status via systemctl. However I’m trying to setup a graylog server and its newer versions require mongo5 so im in a bit of a bind…",
"username": "Zevl_Renain"
},
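A quick way to check, from inside the VM, whether the virtual CPU actually exposes the AVX instructions that MongoDB 5.0+ relies on (a host CPU that supports AVX does not guarantee the hypervisor passes it through to the guest):

grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u   # prints nothing if the guest CPU does not expose AVX
lscpu | grep -i -o 'avx[^ ]*' | sort -u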
{
"code": "",
"text": "Here is my processer:\n\nimage930×32 1.81 KB\nAnd here is the cpus which cover this from the article:*** [Tiger Lake] (Core, Pentium and Celeron branded[[11]] processors, Q3 2020**Based on my research , my core intel cpu would be tiger lake.\nI don’t see why mine shouldn’t work in this case. Maybe it’s an issue with the VirtualBox CPU settings?",
"username": "Zevl_Renain"
}
] | Trying to configure Graylog , unable to finish as unable to install MongoDB 5 on (Oracle Virtual Box 7) Ubuntu 20.04 LTS Focal "Main process exited, code=dumped, status=4/ILL" | 2023-06-02T04:35:12.979Z | Trying to configure Graylog , unable to finish as unable to install MongoDB 5 on (Oracle Virtual Box 7) Ubuntu 20.04 LTS Focal “Main process exited, code=dumped, status=4/ILL” | 1,356 |
null | [
"storage"
] | [
{
"code": "storage:\n dbPath: \"/var/lib/mongo/corp_group1\"\n directoryPerDB: true\n journal:\n enabled: true\n engine: \"wiredTiger\"\n wiredTiger:\n engineConfig:\n directoryForIndexes: false\n collectionConfig:\n blockCompressor: \"snappy\"\nsystemLog:\n quiet: true\n destination: file\n path: \"/var/log/mongo/corp_group1/mongod.log\"\n logAppend: true\n timeStampFormat: \"iso8601-utc\"\nprocessManagement:\n fork: true\nsetParameter:\n cursorTimeoutMillis: 600000\n honorSystemUmask: true\nnet:\n bindIp: 0.0.0.0\n port: 27018\n unixDomainSocket:\n pathPrefix: \"/var/lib/mongo\"\nsharding:\n clusterRole: \"shardsvr\"\nreplication:\n replSetName: \"corp_group1\"\n enableMajorityReadConcern: false\n",
"text": "Hello:I am using mongodb 4.4.x in several replicasets, & ! ran into frequent problem with growing file size for WiredTigerHS.wt. The file grows non-stop, eventually eating up all disk space on the server, killing the mongod process.I want to see if anyone has any idea what might be gone wrong. Below is the mongod config we use.Thanks in advance.\nEric",
"username": "Eric_Wong"
},
{
"code": "WiredTigerHS.wt",
"text": "Hi @Eric_WongThe file WiredTigerHS.wt contains WiredTiger’s history store. From the relevant page in WiredTiger documentation:The history store in WiredTiger tracks historical versions of records required to service older readers. By having these records in storage separate from the current version, they can be used to service long running transactions and be evicted as necessary, without interfering with activity that uses the most recent committed versions.Basically it’s used to keep historical state of your data, due to some process that’s requiring MongoDB to hold onto those old data versions. Typically this was caused by the application opening a long running transaction that never commit/rollback.I am using mongodb 4.4.xIf you’re not using 4.4.17, I would suggest you to upgrade to it since it’s the latest version in the 4.4 series, just be sure that you’re not hitting an old issue that was resolved. Also note that versions 4.4.2-4.4.8 are not recommended to be used anymore due to some serious issues. Please see the 4.4 Release Notes for more details.If upgrading doesn’t solve this, then you may need to take a look in your application code for any workload that opens a transaction, a no-timeout cursor, or anything similar.Best regards\nKevin",
"username": "kevinadi"
},
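If you want to check for the kind of long-running operations or transactions mentioned above, a small mongosh sketch (the 300-second threshold is an arbitrary example):

// operations that have been running for more than 5 minutes
db.currentOp({ secs_running: { $gt: 300 } })
// transaction counters; a large number of open/inactive transactions can hint at sessions left open
db.serverStatus().transactions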
{
"code": "",
"text": "but why it grows in standalone mongodb without replSet?\nstat WiredTigerHS.wt\nFile: WiredTigerHS.wt\nSize: 362814038016\tBlocks: 708621176 IO Block: 4096 regular fileand how to disable history and compact this file?mongdb 6.0.5\noverall collections size ~14Tb / 800M docs",
"username": "Tema_Gordiyenko"
},
{
"code": "",
"text": "but why it grows in standalone mongodb without replSet?you can still have transaction on it. So doesn’t matter.",
"username": "Kobe_W"
},
{
"code": "db.adminCommand( { getCmdLineOpts: 1 } )rs.status()",
"text": "Hi @Tema_Gordiyenko welcome to the community!but why it grows in standalone mongodb without replSet?Could you post the ouput of db.adminCommand( { getCmdLineOpts: 1 } ), and also the output of rs.status()?and how to disable history and compact this file?You cannot disable the history store as far as I know.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "**MongoServerError**: not running with --replSetdb.adminCommand( { getCmdLineOpts: 1 } )\n{\n argv: [ '/usr/bin/mongod', '--quiet', '--config', '/etc/mongod.conf' ],\n parsed: {\n config: '/etc/mongod.conf',\n net: { bindIp: '::1,127.0.0.1', ipv6: true, port: 27017 },\n operationProfiling: { slowOpSampleRate: 1, slowOpThresholdMs: 10000 },\n processManagement: {\n fork: false,\n pidFilePath: '/var/run/mongodb/mongod.pid',\n timeZoneInfo: '/usr/share/zoneinfo'\n },\n setParameter: {\n ShardingTaskExecutorPoolMinSize: '150',\n allowDiskUseByDefault: '1',\n connPoolMaxConnsPerHost: '1000',\n connPoolMaxInUseConnsPerHost: '300',\n globalConnPoolIdleTimeoutMinutes: '300'\n },\n storage: {\n dbPath: '/db/mongo/d',\n directoryPerDB: true,\n engine: 'wiredTiger',\n journal: { commitIntervalMs: 300, enabled: true },\n wiredTiger: {\n collectionConfig: { blockCompressor: 'zstd' },\n engineConfig: {\n cacheSizeGB: 128,\n configString: 'eviction=(threads_min=20,threads_max=20),checkpoint=(wait=60),eviction_dirty_trigger=15,eviction_dirty_target=3,eviction_trigger=95,eviction_target=90',\n directoryForIndexes: true,\n journalCompressor: 'none',\n zstdCompressionLevel: 10\n }\n }\n },\n systemLog: {\n destination: 'file',\n logAppend: true,\n path: '/db/mongo/log/mongod.log',\n quiet: true\n }\n },\n ok: 1\n}\n\n",
"text": "**MongoServerError**: not running with --replSet\ni can’t set neither read preference nor write concern - it is “you can’t - not replica set”I do not want replSet but any transactional stuff is documented only for replicas.",
"username": "Tema_Gordiyenko"
},
{
"code": "",
"text": "413290467328 May 29 12:23 WiredTigerHS.wt\nhow to stop it from growing ???",
"username": "Tema_Gordiyenko"
},
{
"code": "",
"text": "ok. it can. but app not use this functionality.\nso why?",
"username": "Tema_Gordiyenko"
},
{
"code": "WiredTigerHS.wtWiredTigerHS.wtmongodmongod --repair",
"text": "Hi @Tema_GordiyenkoSorry you’re seeing this. This is pretty much unexpected, as typically the WiredTigerHS.wt file growth is usually related to replication. This is quite strange as you don’t use replication.At the moment, there are some options that may be possible to do:Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "it is growing not monotonically. Somthing causing it to grow - maybe TTL drops or gridfs?418498617344 May 30 05:55 WiredTigerHS.wt",
"username": "Tema_Gordiyenko"
},
{
"code": "422 291 902 464 May 31 02:42 WiredTigerHS.wt\n427 954 937 856 May 31 06:01 WiredTigerHS.wt\n428 556 591 104 May 31 07:00 WiredTigerHS.wt\n",
"text": "",
"username": "Tema_Gordiyenko"
},
{
"code": "",
"text": "it was very stupid solution run --repair on 16Tb data\nmongo use 1 thread to read and verify collections and indexes it is processing about 2Tb per 24h\non 32CPU/256GB/io2 64kiops …\nterminated and restored from snapshot.\nalso created clone and restored from backup\nmongo 6.0.62658304 Jun 4 14:48 WiredTigerHS.wt\n10194944 Jun 4 14:51 WiredTigerHS.wt\n7569408 Jun 5 07:13 WiredTigerHS.wt\n39284736 Jun 5 14:39 WiredTigerHS.wt\n8404992 Jun 8 15:00 WiredTigerHS.wtincreasing/decreasing …",
"username": "Tema_Gordiyenko"
}
] | Growing WiredTigerHS.wt | 2022-11-02T08:03:05.959Z | Growing WiredTigerHS.wt | 2,645 |
null | [
"aggregation",
"change-streams",
"monitoring",
"free-monitoring"
] | [
{
"code": "4.2.22ii mongodb-org-server 4.2.22 amd64 MongoDB database serverlsb_release -a\nNo LSB modules are available.\nDistributor ID:\tUbuntu\nDescription:\tUbuntu 20.04.5 LTS\nRelease:\t20.04\nCodename:\tfocal\ndb.getSiblingDB(\"admin\").createUser(\n {\n user: \"metricUser\",\n pwd: \"Password123@\",\n roles: [\"clusterMonitor\"],\n })\nJan 19 15:41:50 mongo-server-2.private.xxxx.com otelopscol[543784]: 2023-01-19T15:41:50.280Z debug scraperhelper/scrapercontroller.go:197 Error scraping metrics {\"error\": \"failed to fetch index stats metrics: (Unauthorized) not authorized on local to execute command { aggregate: \\\"system.replset\\\", pipeline: [ { $indexStats: {} } ], cursor: {}, lsid: { id: UUID(\\\"df5c11f9-e865-45d7-8c11-aeb6e1cdeddd\\\") }, $clusterTime: { clusterTime: Timestamp(1674142905, 1), signature: { hash: BinData(0, 9161B7FBCD2C952834EA0DAB5B87B6153B3BE5CA), keyId: 7138546094778089478 } }, $db: \\\"local\\\", $readPreference: { mode: \\\"primary\\\" } }\", \"scraper\": \"mongodb\"}\nJan 19 15:41:50 mongo-server-2.private..xxxx.com otelopscol[543784]: go.opentelemetry.io/collector/receiver/scraperhelper.(*controller).scrapeMetricsAndReport\nJan 19 15:41:50 mongo-server-2.private.xxxx.com otelopscol[543784]: /root/go/pkg/mod/go.opentelemetry.io/[email protected]/receiver/scraperhelper/scrapercontroller.go:197\nJan 19 15:41:50 mongo-server-2.private.xxxx.com otelopscol[543784]: go.opentelemetry.io/collector/receiver/scraperhelper.(*controller).startScraping.func1\nJan 19 15:41:50 mongo-server-2.private.xxxx.com otelopscol[543784]: /root/go/pkg/mod/go.opentelemetry.io/[email protected]/receiver/scraperhelper/scrapercontroller.go:172\n2023-01-20T17:02:36.891+0000 I ACCESS [conn67] Unauthorized: not authorized on local to execute command { aggregate: \"system.replset\", pipeline: [ { $indexStats: {} } ], cursor: {}, lsid: { id: UUID(\"ec721915-45ce-4ac5-a8be-99c676afba63\") }, $clusterTime: { clusterTime: Timestamp(1674234148, 1), signature: { hash: BinData(0, F171EA196DF7052781BD2A4B04117F0108CFC181), keyId: 7138546094778089478 } }, $db: \"local\", $readPreference: { mode: \"primary\" } }\n2023-01-20T17:03:06.892+0000 I ACCESS [conn67] Unauthorized: not authorized on local to execute command { aggregate: \"system.replset\", pipeline: [ { $indexStats: {} } ], cursor: {}, lsid: { id: UUID(\"ec721915-45ce-4ac5-a8be-99c676afba63\") }, $clusterTime: { clusterTime: Timestamp(1674234178, 1), signature: { hash: BinData(0, F8BF81DE3805E58A97310628BAB162337F481413), keyId: 7138546094778089478 } }, $db: \"local\", $readPreference: { mode: \"primary\" } }\n2023-01-20T17:03:36.892+0000 I ACCESS [conn67] Unauthorized: not authorized on local to execute command { aggregate: \"system.replset\", pipeline: [ { $indexStats: {} } ], cursor: {}, lsid: { id: UUID(\"ec721915-45ce-4ac5-a8be-99c676afba63\") }, $clusterTime: { clusterTime: Timestamp(1674234208, 1), signature: { hash: BinData(0, F02AE9EC78F26A6525DDE4363D2ABDD0A602501E), keyId: 7138546094778089478 } }, $db: \"local\", $readPreference: { mode: \"primary\" } }\nUnauthorizeddb.getRole(\"clusterMonitor\",{showPrivileges:true})\n{\n\t\"role\" : \"clusterMonitor\",\n\t\"db\" : \"admin\",\n\t\"isBuiltin\" : true,\n\t\"roles\" : [ ],\n\t\"inheritedRoles\" : [ ],\n\t\"privileges\" : [\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"cluster\" : true\n\t\t\t},\n\t\t\t\"actions\" : 
[\n\t\t\t\t\"checkFreeMonitoringStatus\",\n\t\t\t\t\"connPoolStats\",\n\t\t\t\t\"getCmdLineOpts\",\n\t\t\t\t\"getLog\",\n\t\t\t\t\"getParameter\",\n\t\t\t\t\"getShardMap\",\n\t\t\t\t\"hostInfo\",\n\t\t\t\t\"inprog\",\n\t\t\t\t\"listDatabases\",\n\t\t\t\t\"listSessions\",\n\t\t\t\t\"listShards\",\n\t\t\t\t\"netstat\",\n\t\t\t\t\"replSetGetConfig\",\n\t\t\t\t\"replSetGetStatus\",\n\t\t\t\t\"serverStatus\",\n\t\t\t\t\"shardingState\",\n\t\t\t\t\"top\",\n\t\t\t\t\"useUUID\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"\",\n\t\t\t\t\"collection\" : \"\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"collStats\",\n\t\t\t\t\"dbStats\",\n\t\t\t\t\"getDatabaseVersion\",\n\t\t\t\t\"getShardVersion\",\n\t\t\t\t\"indexStats\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"config\",\n\t\t\t\t\"collection\" : \"\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"changeStream\",\n\t\t\t\t\"collStats\",\n\t\t\t\t\"dbHash\",\n\t\t\t\t\"dbStats\",\n\t\t\t\t\"find\",\n\t\t\t\t\"getDatabaseVersion\",\n\t\t\t\t\"getShardVersion\",\n\t\t\t\t\"indexStats\",\n\t\t\t\t\"killCursors\",\n\t\t\t\t\"listCollections\",\n\t\t\t\t\"listIndexes\",\n\t\t\t\t\"planCacheRead\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"local\",\n\t\t\t\t\"collection\" : \"\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"changeStream\",\n\t\t\t\t\"collStats\",\n\t\t\t\t\"dbHash\",\n\t\t\t\t\"dbStats\",\n\t\t\t\t\"find\",\n\t\t\t\t\"getDatabaseVersion\",\n\t\t\t\t\"getShardVersion\",\n\t\t\t\t\"indexStats\",\n\t\t\t\t\"killCursors\",\n\t\t\t\t\"listCollections\",\n\t\t\t\t\"listIndexes\",\n\t\t\t\t\"planCacheRead\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"local\",\n\t\t\t\t\"collection\" : \"system.js\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"changeStream\",\n\t\t\t\t\"collStats\",\n\t\t\t\t\"dbHash\",\n\t\t\t\t\"dbStats\",\n\t\t\t\t\"find\",\n\t\t\t\t\"killCursors\",\n\t\t\t\t\"listCollections\",\n\t\t\t\t\"listIndexes\",\n\t\t\t\t\"planCacheRead\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"config\",\n\t\t\t\t\"collection\" : \"system.js\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"changeStream\",\n\t\t\t\t\"collStats\",\n\t\t\t\t\"dbHash\",\n\t\t\t\t\"dbStats\",\n\t\t\t\t\"find\",\n\t\t\t\t\"killCursors\",\n\t\t\t\t\"listCollections\",\n\t\t\t\t\"listIndexes\",\n\t\t\t\t\"planCacheRead\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"local\",\n\t\t\t\t\"collection\" : \"system.replset\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"find\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"\",\n\t\t\t\t\"collection\" : \"system.profile\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"find\"\n\t\t\t]\n\t\t}\n\t],\n\t\"inheritedPrivileges\" : [\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"cluster\" : true\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"checkFreeMonitoringStatus\",\n\t\t\t\t\"connPoolStats\",\n\t\t\t\t\"getCmdLineOpts\",\n\t\t\t\t\"getLog\",\n\t\t\t\t\"getParameter\",\n\t\t\t\t\"getShardMap\",\n\t\t\t\t\"hostInfo\",\n\t\t\t\t\"inprog\",\n\t\t\t\t\"listDatabases\",\n\t\t\t\t\"listSessions\",\n\t\t\t\t\"listShards\",\n\t\t\t\t\"netstat\",\n\t\t\t\t\"replSetGetConfig\",\n\t\t\t\t\"replSetGetStatus\",\n\t\t\t\t\"serverStatus\",\n\t\t\t\t\"shardingState\",\n\t\t\t\t\"top\",\n\t\t\t\t\"useUUID\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"\",\n\t\t\t\t\"collection\" : \"\"\n\t\t\t},\n\t\t\t\"actions\" : 
[\n\t\t\t\t\"collStats\",\n\t\t\t\t\"dbStats\",\n\t\t\t\t\"getDatabaseVersion\",\n\t\t\t\t\"getShardVersion\",\n\t\t\t\t\"indexStats\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"config\",\n\t\t\t\t\"collection\" : \"\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"changeStream\",\n\t\t\t\t\"collStats\",\n\t\t\t\t\"dbHash\",\n\t\t\t\t\"dbStats\",\n\t\t\t\t\"find\",\n\t\t\t\t\"getDatabaseVersion\",\n\t\t\t\t\"getShardVersion\",\n\t\t\t\t\"indexStats\",\n\t\t\t\t\"killCursors\",\n\t\t\t\t\"listCollections\",\n\t\t\t\t\"listIndexes\",\n\t\t\t\t\"planCacheRead\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"local\",\n\t\t\t\t\"collection\" : \"\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"changeStream\",\n\t\t\t\t\"collStats\",\n\t\t\t\t\"dbHash\",\n\t\t\t\t\"dbStats\",\n\t\t\t\t\"find\",\n\t\t\t\t\"getDatabaseVersion\",\n\t\t\t\t\"getShardVersion\",\n\t\t\t\t\"indexStats\",\n\t\t\t\t\"killCursors\",\n\t\t\t\t\"listCollections\",\n\t\t\t\t\"listIndexes\",\n\t\t\t\t\"planCacheRead\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"local\",\n\t\t\t\t\"collection\" : \"system.js\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"changeStream\",\n\t\t\t\t\"collStats\",\n\t\t\t\t\"dbHash\",\n\t\t\t\t\"dbStats\",\n\t\t\t\t\"find\",\n\t\t\t\t\"killCursors\",\n\t\t\t\t\"listCollections\",\n\t\t\t\t\"listIndexes\",\n\t\t\t\t\"planCacheRead\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"config\",\n\t\t\t\t\"collection\" : \"system.js\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"changeStream\",\n\t\t\t\t\"collStats\",\n\t\t\t\t\"dbHash\",\n\t\t\t\t\"dbStats\",\n\t\t\t\t\"find\",\n\t\t\t\t\"killCursors\",\n\t\t\t\t\"listCollections\",\n\t\t\t\t\"listIndexes\",\n\t\t\t\t\"planCacheRead\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"local\",\n\t\t\t\t\"collection\" : \"system.replset\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"find\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"\",\n\t\t\t\t\"collection\" : \"system.profile\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"find\"\n\t\t\t]\n\t\t}\n\t]\n}\n",
"text": "using MongoDB server version: 4.2.22ii mongodb-org-server 4.2.22 amd64 MongoDB database serverI m trying to setup a monitoring on GCP - > MongoDB | Cloud Monitoring | Google CloudI can get the metric by\nadding a new user for the monitoring :metrics works !but from the metric scraping process (google-cloud-ops-agent-opentelemetry-collector.service),\nI get the following errors every 30seconds :from mongo logs every 30seconds:I added also this role: readAnyDatabase to metricUser user but still getting the noisy Unauthorized errors log.",
"username": "Rambo"
},
{
"code": "",
"text": "Please check mongo documentation buit-in-roles\nClustermonitor has only find action on system.replset\nYou have to give clustermanager or clusteradmin role\nReadwriteAnyDatabase does not give access to config & local DBs",
"username": "Ramachandra_Tummala"
},
{
"code": "clusterManagermetricUser2023-01-25T10:40:14.320+0000 I ACCESS [conn129] Unauthorized: not authorized on local to execute command { aggregate: \"system.replset\", pipeline: [ { $indexStats: {} } ], cursor: {}, lsid: { id: UUID(\"1ff3c60b-297f-4950-abea-ca7eb8279bcb\") }, $clusterTime: { clusterTime: Timestamp(1674643214, 1), signature: { hash: BinData(0, A5FB6886D52D59CD67B9C752120EA6EB8DE3BD08), keyId: 7138546094778089478 } }, $db: \"local\", $readPreference: { mode: \"primary\" } }\n2023-01-25T10:40:44.319+0000 I ACCESS [conn129] Unauthorized: not authorized on local to execute command { aggregate: \"system.replset\", pipeline: [ { $indexStats: {} } ], cursor: {}, lsid: { id: UUID(\"1ff3c60b-297f-4950-abea-ca7eb8279bcb\") }, $clusterTime: { clusterTime: Timestamp(1674643244, 1), signature: { hash: BinData(0, 8F1E4EA53825279E8D13EF8FF3233F5E5846EF76), keyId: 7138546094778089478 } }, $db: \"local\", $readPreference: { mode: \"primary\" } }\n2023-01-25T10:41:14.329+0000 I ACCESS [conn129] Unauthorized: not authorized on local to execute command { aggregate: \"system.replset\", pipeline: [ { $indexStats: {} } ], cursor: {}, lsid: { id: UUID(\"1ff3c60b-297f-4950-abea-ca7eb8279bcb\") }, $clusterTime: { clusterTime: Timestamp(1674643274, 1), signature: { hash: BinData(0, A054F0CE3E53C5AB189A9CFCA61EC5D65E250F0B), keyId: 7138546094778089478 } }, $db: \"local\", $readPreference: { mode: \"primary\" } }\nclusterAdmin{\n\t\"_id\" : \"admin.metricUser\",\n\t\"userId\" : UUID(\"260cf642-1c10-49a5-bb4a-25875f2087ea\"),\n\t\"user\" : \"metricUser\",\n\t\"db\" : \"admin\",\n\t\"roles\" : [\n\t\t{\n\t\t\t\"role\" : \"clusterAdmin\",\n\t\t\t\"db\" : \"admin\"\n\t\t}\n\t],\n\t\"customData\" : {\n\n\t},\n\t\"mechanisms\" : [\n\t\t\"SCRAM-SHA-1\",\n\t\t\"SCRAM-SHA-256\"\n\t]\n}\ngoogle-cloud-ops-agent-opentelemetry-collector.service",
"text": "@Ramachandra_Tummala I added clusterManager role to the metricUser user but I m still getting the same error logs.no difference if I just only grant clusterAdmin roleDo you think google-cloud-ops-agent-opentelemetry-collector.service is trying to scrape some disallowed/non-existing metrics?",
"username": "Rambo"
},
{
"code": "",
"text": "I resolved this by creating the below custom role and assigning it to userNote that the operation and db may change for each caseuse admin\ndb.createRole(\n{\nrole: “customRoleConfig”,\nprivileges: [\n{\nactions: [ “collStats”, “indexStats” ],\nresource: { db: “config”, collection: “system.indexBuilds” }\n},\n{\nactions: [ “collStats”, “indexStats” ],\nresource: { db: “local”, collection: “replset.election” }\n},\n{\nactions: [ “collStats”, “indexStats” ],\nresource: { db: “local”, collection: “replset.initialSyncId” }\n},\n{\nactions: [ “collStats”, “indexStats” ],\nresource: { db: “local”, collection: “replset.minvalid” }\n},\n{\nactions: [ “collStats”, “indexStats” ],\nresource: { db: “local”, collection: “replset.oplogTruncateAfterPoint” }\n}],\nroles: })db.grantRolesToUser( “mongodb_exporter”, [ {role: “customRoleConfig”, db: “admin” }])",
"username": "Salim_Ali"
}
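To verify that the custom role above actually reached the monitoring user, a quick mongosh check (user and role names taken from the post above):

use admin
db.getUser("mongodb_exporter", { showPrivileges: true })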
] | Using clusterMonitor role Unauthorized: not authorized on local to execute command { aggregate: "system.replset", pipeline: [ { $indexStats: ....... $db: "local", $readPreference: { mode: "primary" } } | 2023-01-20T17:44:41.582Z | Using clusterMonitor role Unauthorized: not authorized on local to execute command { aggregate: “system.replset”, pipeline: [ { $indexStats: ……. $db: “local”, $readPreference: { mode: “primary” } } | 2,062 |
null | [
"node-js",
"flutter"
] | [
{
"code": "",
"text": "Hi, I’m a beginner at mobile app development and am having trouble with understanding where to integrate Realm. I’m building my mobile frontend on Realm and backend on NodeJS, and there are Realm SDKs for both.Do I need to connect both my frontend and backend to the Realm SDKs? Or only the backend? If I only connect the backend, how does the local sync work (since requests from my Flutter app to NodeJS presumably would not go through)?",
"username": "thinkpurple"
},
{
"code": "",
"text": "Did you find https://www.mongodb.com/docs/realm/introduction/?Realm is an embedded database that runs on your phone/tablet/desktop. It has the ability to synchronise with a MongoDB running on Atlas using Atlas Device Sync (ADS for short).There is no direct peer-to-peer sync between devices running Realm - all sync happens through Atlas in a star topology.You do not (typically) run Realm in server-side processes, but it is obviously possible since we support both Linux and Windows for many of our Realm SDKs.",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "Yes, I’ve read the docs. Where I’m confused is on the backend NodeJS part. I think I should be accessing my data through a REST API or GraphQL, but the Realm NodeJS SDK exists. Is there one I should prefer?In addition, I would. like to kick off business logic on my backend based on writes to the database, which I could do with subscriptions in the NodeJS SDK, but if I use another way to connect, I would need to use something like Triggers. Is that understanding correct?",
"username": "thinkpurple"
},
{
"code": "",
"text": "Hi, @Kasper_Nielsen1 would you be able to help?",
"username": "thinkpurple"
},
{
"code": "",
"text": "I haven’t understood uour question properly but if you are asking there is a node js sdk then yes there is one",
"username": "33_ANSHDEEP_Singh"
},
{
"code": "",
"text": "G’day @thinkpurple,Welcome to MongoDB Community Realm has 7 SDKs that you can select from to create your mobile application. If you create your backend with MongoDB Atlas, the same SDK can be used to connect your mobile application to the backend.Yes, I’ve read the docs. Where I’m confused is on the backend NodeJS part. I think I should be accessing my data through a REST API or GraphQL, but the Realm NodeJS SDK exists. Is there one I should prefer?In addition, I would. like to kick off business logic on my backend based on writes to the database, which I could do with subscriptions in the NodeJS SDK, but if I use another way to connect, I would need to use something like Triggers. Is that understanding correct?These are different options that you can use as per your use case. GraphQL and Data API provide other ways to access the data without the need for a backend. You can create HTTP endpoints that you can use within the client. If you are using node js to create your backend, then you can use MongoDB Driver to connect your application to MongoDB. If you want to use Realm SDKs, you can use MongoDB Atlas to create your backend and use Atlas Device Sync to sync your data between the mobile client and Atlas on the cloud.You can read mobile bytes or articles on our developer Hub to get started with Atlas and Realm.I hope the provided information helps.Cheers, \nhenna",
"username": "henna.s"
}
] | Mobile app architecture with NodeJS and Flutter | 2023-05-11T23:34:53.211Z | Mobile app architecture with NodeJS and Flutter | 1,812 |
null | [
"aggregation"
] | [
{
"code": "\"pipeline\": [\n {\n \"$match\": {\n \"$expr\": {\n \"$in\": [\"$_id\", \"$states\"]\n }\n }\n },\n {\n \"$addFields\": {\n \"__order\": {\n \"$indexOfArray\": [\"$states\", \"$_id\"]\n }\n }\n },\n {\n \"$sort\": {\n \"__order\": 1\n }\n },\n {\n \"$project\": {\n \"__order\": 0\n }\n }\n ],\n \"as\": \"state\"\n$_id$match\"$_id\" \"$in\" \"$states\"\"$_id\"\"$addFields\"",
"text": "This is the first time I notice a $lookup to an array return the matched documents in different order, like unsorted. And I have two questions…Firstly, why’s that?And secondly, I ended up with this solution. This is a pipeline inside a $lookup stage, where the ‘states’ variable is an array of ObjectId.I know that the $_id inside the $match stage is referring to the ‘foreign’ collection, so \"$_id\" \"$in\" \"$states\". Which is what I want.But now. What does the second \"$_id\" mean? The one inside the \"$addFields\" stage.Thank you for reading.",
"username": "ayrton_co_belogit"
},
{
"code": "",
"text": "The second $_id is the same as in the $match.It is not clear what $states is. With a single $ sign it is not a variable it is the value of the states field within the looked up collection, just like _id. If it is a variable defined with let: in the part of the code you did not share, then you have to use 2 $ signs, $$states rather than $states.The only way to have a guarantied order is too $sort.We can only help you further if you share the whole pipeline.",
"username": "steevej"
}
] | Lookup returns array unsorted, not respecting the original array's index order | 2023-06-08T02:44:25.805Z | Lookup returns array unsorted, not respecting the original array’s index order | 343 |
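A minimal end-to-end sketch of the ordered $lookup discussed in the thread above, written with pymongo; the database name, the 'orders'/'states' collection names, and the localhost connection string are illustrative assumptions rather than details from the original post (note the double $$ when referencing the let variable):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection
db = client["testdb"]                               # illustrative database name

# 'orders' holds a 'states' array of ObjectIds; 'states' is the foreign collection.
pipeline = [
    {
        "$lookup": {
            "from": "states",
            "let": {"states": "$states"},  # pass the local array in as a variable
            "pipeline": [
                {"$match": {"$expr": {"$in": ["$_id", "$$states"]}}},
                # remember each matched _id's position in the original array
                {"$addFields": {"__order": {"$indexOfArray": ["$$states", "$_id"]}}},
                {"$sort": {"__order": 1}},   # restore the original array order
                {"$project": {"__order": 0}},
            ],
            "as": "state",
        }
    }
]

for doc in db.orders.aggregate(pipeline):
    print(doc["state"])

Without the explicit $sort inside the sub-pipeline, the order of the looked-up documents is not guaranteed, which is the behaviour the original poster observed.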
null | [
"database-tools"
] | [
{
"code": "",
"text": "I want to import a MongoDB database in my account.\nHow should I proceed?\nthank you",
"username": "Raul_Gimeno"
},
{
"code": "",
"text": "Hey @Raul_Gimeno,Thank you for reaching out to the MongoDB Community forums I want to import a MongoDB database in my account.\nHow should I proceed?Feel free to provide any further details related to your deployment so that we can assist you more effectively. Meanwhile, you can refer to and use the available MongoDB tools, such as mongodump and mongoimport, to migrate the data.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | How to import a MongoDB from one user to another user? | 2023-06-08T10:09:19.991Z | How to import a MongoDB from one user to another user? | 667 |
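A minimal command-line sketch of the dump-and-restore approach suggested above; note that mongodump pairs with mongorestore (mongoimport/mongoexport operate on JSON/CSV exports instead). All URIs, database names, and paths below are placeholders, not details from the original post:

# dump the source database from the first account's cluster
mongodump --uri="mongodb+srv://user:password@source-cluster.example.mongodb.net/mydb" --out=./dump

# restore it into the cluster belonging to the other account
mongorestore --uri="mongodb+srv://user:password@target-cluster.example.mongodb.net" --nsInclude="mydb.*" ./dump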
null | [
"queries",
"server"
] | [
{
"code": "3605802bsonsizebsonsize(db.col.findOne({\"_id\": ObjectId(\"645ce70b91f8446dbcbb74e4\")}))3605802",
"text": "Hello MongoDB Community,I’m facing a perplexing issue with one of my collections and I’m seeking your expertise to shed some light on the problem. Recently, I encountered a big document in my MongoDB collection that seems to be un-deleteable and un-updateable. Whenever I attempt to delete or update this document, it takes an extraordinarily long time with no discernible result, and to make matters worse, my server starts freezing during the process.Here are some additional details:Has anyone encountered a similar problem with un-deleteable and un-updateable large documents causing collection lag and server freezing, especially those exceeding 3605802 in size? I would greatly appreciate any insights, suggestions, or guidance you can provide to help resolve this issue.Additionally, if there are any specific troubleshooting steps or best practices to address such situations, I would be grateful to know about them.Thank you in advance for your assistance!Best regards,\nMohammed Alhanafi",
"username": "Mohammed_Alhanafi"
},
{
"code": "",
"text": "Mogodb supports 16MB document, so 3.x M is really not that big (i suppose).When your delete operation hangs, what does your server say in the log file?",
"username": "Kobe_W"
},
{
"code": "mongo --eval 'db.col.deleteOne({\"_id\": ObjectId(\"645ce70b91f8446dbcbb74e4\")});' myDocs",
"text": "Thank you for replying, when I do deleting, it didn’t delete or finish the operation at all, so after more than 10 hours and still server is performing this query with no results and server is lagging until restarted.For you to know, the command is executed from same database server by background process to take all the time needed, but after all the time given nothing still happens.mongo --eval 'db.col.deleteOne({\"_id\": ObjectId(\"645ce70b91f8446dbcbb74e4\")});' myDocsThanks in advanced.",
"username": "Mohammed_Alhanafi"
}
] | Need Help: Un-deleteable and Un-updateable Big Document Causing Collection Lag and Server Freezing | 2023-06-07T21:36:31.843Z | Need Help: Un-deleteable and Un-updateable Big Document Causing Collection Lag and Server Freezing | 693 |
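For readers hitting a similar hang, a small diagnostic sketch in Python (pymongo). It reuses the 'myDocs' database, 'col' collection, and ObjectId quoted in the thread; the connection string is a placeholder, and the approach (checking the BSON size, then inspecting or killing the running operation from a second session) is a general suggestion, not a confirmed fix for this specific case:

from bson import encode
from bson.objectid import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection
col = client["myDocs"]["col"]

# 1. Confirm the BSON size of the problem document (in bytes)
doc = col.find_one({"_id": ObjectId("645ce70b91f8446dbcbb74e4")})
print(len(encode(doc)), "bytes")

# 2. While the delete hangs, inspect it from another session
for op in client.admin.aggregate([{"$currentOp": {}}, {"$match": {"op": "remove"}}]):
    print(op.get("opid"), op.get("secs_running"), op.get("command"))
    # a stuck operation can be terminated with:
    # client.admin.command("killOp", op=op["opid"])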
[
"queries",
"react-native",
"android"
] | [
{
"code": "",
"text": "Hi,I have been trying to build my Android app using React-NAtive and failing since an error follows,\nWhatsApp Image 2023-05-07 at 12.23.37 AM540×1170 45.6 KB\nMy IOS build is successful, but android shows this,\nI have installed pods and checked for them.Please help",
"username": "Abhishek_Mittal"
},
{
"code": "",
"text": "Hi I have the same problem. Are you using Expo app or have you tried with react native app too? Do you use typescript or only javascript? I dont know if it can be a factor, just want to see which environment the problem appears in .",
"username": "Ben_Zo"
},
{
"code": "",
"text": "You can try to disable RN New Architecture as I assume you are not running v12.0.0-alpha.2.",
"username": "Kenneth_Geisshirt"
},
{
"code": "",
"text": "Thank you! Sorry about the late reaction. I am back again to realm . So in my case it seems the case was simple. I tried to use realm in developement and with expo go on real device. Howver I read its only working in production and after the build of the app as Expo Go is not compatible with third party libraries and so with Realm. Now after the build my application is started smoothly in expo and on its android emulator.So just install realm and try npm start android in your app folder.",
"username": "Ben_Zo"
}
] | React-native debug build for android throwing "pod install" error | 2023-05-07T23:58:12.839Z | React-native debug build for android throwing “pod install” error | 1,096 |
|
[] | [
{
"code": "",
"text": "MongoDB doesn’t start and I’m getting a “core dump” error. I want to download Graylog using CentOS with MongoDB 5 and 6, but I’m facing the same issue\nimage934×182 4.43 KB\n",
"username": "malcolm_courtois"
},
{
"code": "",
"text": "ILL means illegal instruction\nMay be your CPU not supporting the version your are trying to install\nFor 5.0 requires AVX set instructions\nCheck this link",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I don’t know if proxmox (it s a center of virtualization server) and I have no idea how I can change this but I test on oracle virtual box with cent os and I started normally",
"username": "malcolm_courtois"
}
] | I have a problem with mongodb | 2023-06-07T11:57:46.160Z | I have a problem with mongodb | 504 |
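A quick way to confirm the AVX explanation given above is to check which CPU flags the VM actually exposes (the command below is a generic Linux check, not something taken from the poster's setup); on hypervisors such as Proxmox the usual remedy is reportedly to set the VM's CPU type to a host/passthrough model, or to fall back to a MongoDB series such as 4.4 that does not require AVX:

# prints which line applies on the machine/VM where mongod is failing
grep -q avx /proc/cpuinfo && echo "AVX available" || echo "AVX not exposed to this machine/VM"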
|
null | [
"indexes"
] | [
{
"code": "",
"text": "I read this text “Delete operations do not drop indexes, even if deleting all documents from a collection.” in the manual. ( Delete Documents — MongoDB Manual )Does this mean that the index size does not change if some documents are removed by TTL or delete operations ?Does this behavior also apply to TTL of the time-series collection ?",
"username": "bgkim"
},
{
"code": "",
"text": "Hi @bgkim welcome to the community!Does this mean that the index size does not change if some documents are removed by TTL or delete operations ?The sentence in question is saying that index will not be dropped even if you delete all documents in the collection. However the index will be dropped if the collection is dropped. This is important because you might want to clean up your collection, but you want to keep your index definition intact.To answer your question, an index entry is a pointer toward a document’s physical location. Thus if the document is removed, the index entry that pointed toward that document is also removed (via TTL or otherwise). If you don’t, you’ll have a disconnect between the content of the index and the collection, which is not a good situation to be in.Does this behavior also apply to TTL of the time-series collection ?Yes, although the actual details are different. Time series is a special kind of collection, and it handles index differently compared to a regular collection. The same idea applies here as well though: if you delete a document, the corresponding index entry that points to that document will also be deleted.Best regards\nKevin",
"username": "kevinadi"
}
] | Does indexes are updated when documents are deleted from the collection? | 2023-05-26T10:15:53.503Z | Does indexes are updated when documents are deleted from the collection? | 1,147 |
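A small pymongo sketch illustrating the answer above: the index definition survives deletes (including TTL deletions), while its entries are removed together with the documents. The connection string, database, and collection names are illustrative assumptions:

from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection
col = client["testdb"]["events"]                    # illustrative collection

# TTL index: documents expire roughly one hour after 'createdAt'
col.create_index("createdAt", expireAfterSeconds=3600)

col.insert_one({"createdAt": datetime.now(timezone.utc), "payload": "x"})
col.delete_many({})  # remove every document

# The index definition is still there; only its entries are gone
print(col.index_information())  # still lists createdAt_1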
null | [
"replication",
"python",
"connecting",
"spark-connector"
] | [
{
"code": "",
"text": "Hi, I have setup a proxies to connect Mongodb cluster endpoints. The mapping is something like thislocalhost:12345 —> mongodb-cluster-endpoint-1:27017\nlocalhost:12346 —> mongodb-cluster-endpoint-2:27017I am using PySpark and the URI ismongodb://user:password@localhost:32456,localhost:12345,localhost:12346/db.cl?replicaSet=morep&authSource=admin&directConnection=trueMongodb driver resolves the canonical addresses and instead of using localhost; it start using mongodb-cluster-endpoint-1:27017 as endpoint which is not reachable and thus connection fails.Is there any way where I can make mongodb stop to discover canonical address or restrict it removing localhost from client view of cluster?",
"username": "Muhammad_Imran_Tariq"
},
{
"code": "localhostmongodb-cluster-endpoint/etc/hostsmongodb-cluster-endpointlocalhost",
"text": "Hi @Muhammad_Imran_Tariq welcome to the community!Even though you’re using PySpark, this address resolution is uniform across all MongoDB drivers, so you should see a similar issue when connecting with any driver.The main issue is when you’re connecting to a replica set, when a driver connects to one of the nodes, it will grab the content of rs.conf() and try to connect to all the nodes in the replica set using the addresses defined in the config.Thus, using localhost won’t work in this case since the driver will auto-discover all the nodes addresses from the config instead of using what you have given it.The drivers connects in this manner to provide high availability. That is, if the primary is down, the driver can monitor the set’s election and connect to the new primary quickly. However this process does require that the driver can connect to the canonical names as defined in the replica set config.The recommended solution is to ensure that mongodb-cluster-endpoint is solvable from your client side using DNS. However if this is not practical or possible, perhaps you can modify your /etc/hosts to map mongodb-cluster-endpoint addresses to localhost?Best regards\nKevin",
"username": "kevinadi"
}
] | Stop mongodb driver to resolve canonical address in case of replica set | 2023-06-05T10:38:17.186Z | Stop mongodb driver to resolve canonical address in case of replica set | 941 |
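For completeness, a pymongo sketch of the directConnection idea hinted at by the original URI; it connects to exactly one proxied member and skips replica-set discovery, so the canonical hostnames are never contacted. The credentials and port are placeholders based on the post, and this illustrates the plain Python driver rather than the Spark connector:

from pymongo import MongoClient

client = MongoClient(
    "mongodb://user:password@localhost:12345/?authSource=admin",
    directConnection=True,                 # talk only to this host, no topology discovery
    readPreference="secondaryPreferred",   # allow reads if this member is a secondary
)
print(client.admin.command("ping"))

Note that directConnection accepts only a single host and removes automatic failover, so the /etc/hosts mapping suggested above is generally the safer option.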
null | [
"sharding",
"server"
] | [
{
"code": "",
"text": "Hello MongoDB community,I have a question regarding sharding servers in MongoDB. I currently have multiple sharding servers in my cluster, but I’m wondering if it’s possible to manually merge them into a single server.Is there a way to consolidate the data and configuration from multiple sharding servers into a single server? Are there any recommended best practices or tools available for this process?I understand that sharding is designed to distribute data across multiple servers for scalability and performance reasons. However, due to certain requirements in my environment, I need to explore the possibility of merging these servers into a single unit.Any insights, suggestions, or guidance on this topic would be greatly appreciated. Thank you in advance for your help!Best regards,\nMohammed Alhanafi",
"username": "Mohammed_Alhanafi"
},
{
"code": "",
"text": "Remove shards until you have one left.",
"username": "Kobe_W"
}
] | Merging Sharding Servers: Can I manually consolidate multiple servers into a single server? | 2023-06-07T21:32:45.516Z | Merging Sharding Servers: Can I manually consolidate multiple servers into a single server? | 589 |
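If the goal is to shrink the cluster rather than literally merge data files, the supported route is the one suggested above: drain and remove shards one at a time until a single shard remains. A hedged pymongo sketch (the shard and database names are illustrative, and the commands must be run against a mongos):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder mongos address

# Start draining one shard; re-run the same command to poll progress until
# the returned document reports state: 'completed'.
print(client.admin.command("removeShard", "shard0001"))

# Databases whose primary shard is the one being removed must be moved first, e.g.:
# client.admin.command("movePrimary", "mydb", to="shard0000")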
null | [
"replication"
] | [
{
"code": "",
"text": "Hello. I am trying to upgrade my Ubuntu OS from 18.04 to 20.04. In this case the mongodb packages will also be updated to its latest version which is 6.0.4Here is my setup10.1.0.11:27017 - Primary\n10.1.0.12:27017 - Secondary\n10.1.0.12:27018 - ArbiterI first upgrade my Primary VM and the replica set is running successfully. However, when I try to proceed for the Secondary VM (which involves shutting down mongodb and server itself) which also contains the arbiter.Is there something wrong on my steps? Should I do the Secondary first? Or in my setup there should at least be 2 members running for this to work? This is dev environment. In our production, the arbiter is in another VM outside Primary and Secondary VM.",
"username": "sg_irz"
},
{
"code": "",
"text": "If you read carefully about replication you will find the answer toOr in my setup there should at least be 2 members running for this to work?And the answer is yes. If you shut down any instance in you replica set, there will be an election. For an election to select a PRIMARY, you need a majority of voting nodes. WithSecondary VM (which involves shutting down mongodb and server itself) which also contains the arbiteryou end up with only 1 node, hence no majority, hence no PRIMARY.As a quick fix, you may set up an arbiter on the primary VM, remove the arbiter of the 2nd VM from the replica set, update the secondary, then move back the arbitern from the 1st VM to the 2nd VM.In our production, the arbiterArbiters are not really recommended for production.",
"username": "steevej"
},
{
"code": "",
"text": "Arbiters are not really recommended for production.Instead of having an arbiter to achieve a 3 node setup, what do you recommend though? Having 1 Primary then 2 Secondary Nodes?",
"username": "sg_irz"
},
{
"code": "",
"text": "Yes P-S-S is a valid setup and all 3 nodes should be in different data centers",
"username": "Ramachandra_Tummala"
}
] | Upgrading 6.0.1 to 6.0.6 problem. No primary detected | 2023-06-07T11:38:01.766Z | Upgrading 6.0.1 to 6.0.6 problem. No primary detected | 376 |
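A small pymongo sketch for checking member state before (and during) the rolling OS upgrade discussed above, so you can see whether a voting majority would remain available; the host list comes from the post, while the replica set name rs0, credentials, and other options are assumptions:

from pymongo import MongoClient

client = MongoClient("mongodb://10.1.0.11:27017,10.1.0.12:27017/?replicaSet=rs0")

status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    print(member["name"], member["stateStr"], member.get("health"))

# Before shutting a node down for maintenance, confirm the members that will
# remain up still form a majority of voting members; otherwise no primary
# can be elected, which is exactly the situation described in this thread.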
[
"storage"
] | [
{
"code": "MongoDB shell version v3.6.8connecting to: mongodb://127.0.0.1:270172023-06-03T23:20:30.600+0200 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused2023-06-03T23:20:30.602+0200 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :connect@src/mongo/shell/mongo.js:257:13@(connect):1:6 \n● mongodb.service - An object/document-oriented database\n Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Tue 2023-06-06 18:43:29 CEST; 26min ago\n Docs: man:mongod(1)\n Process: 702081 ExecStart=/usr/bin/mongod --unixSocketPrefix=${SOCKETPATH} --config ${CONF} $DAEMON_OPTS (code=exit>\n Main PID: 702081 (code=exited, status=100)\n\nיונ 06 18:43:28 vmi693782.contaboserver.net systemd[1]: Started An object/document-oriented database.\nיונ 06 18:43:29 vmi693782.contaboserver.net systemd[1]: mongodb.service: Main process exited, code=exited, status=100/n>\nיונ 06 18:43:29 vmi693782.contaboserver.net systemd[1]: mongodb.service: Failed with result 'exit-code'.\n/var/lib/mongodbroot@vmi693782:~# mongo\nMongoDB shell version v3.6.8\nconnecting to: mongodb://127.0.0.1:27017\n2023-06-06T19:14:37.152+0200 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused\n2023-06-06T19:14:37.155+0200 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :\nconnect@src/mongo/shell/mongo.js:257:13\n@(connect):1:6\nexception: connect failed\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n engine: mmapv1\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\nsecurity:\n authorization: enabled\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\nroot@vmi693782:~# mongod\n2023-06-06T19:18:21.801+0200 I CONTROL [initandlisten] MongoDB starting : pid=713911 port=27017 dbpath=/data/db 64-bit host=vmi693782.contaboserver.net\n2023-06-06T19:18:21.801+0200 I CONTROL [initandlisten] db version v3.6.8\n2023-06-06T19:18:21.801+0200 I CONTROL [initandlisten] git version: 8e540c0b6db93ce994cc548f000900bdc740f80a\n2023-06-06T19:18:21.801+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020\n2023-06-06T19:18:21.801+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2023-06-06T19:18:21.801+0200 I CONTROL [initandlisten] modules: none\n2023-06-06T19:18:21.801+0200 I CONTROL [initandlisten] build environment:\n2023-06-06T19:18:21.801+0200 I CONTROL [initandlisten] distarch: x86_64\n2023-06-06T19:18:21.801+0200 I CONTROL [initandlisten] target_arch: x86_64\n2023-06-06T19:18:21.801+0200 I CONTROL [initandlisten] options: {}\n2023-06-06T19:18:21.837+0200 I - [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.\n2023-06-06T19:18:21.837+0200 I STORAGE [initandlisten]\n2023-06-06T19:18:21.837+0200 I STORAGE [initandlisten] ** WARNING: Using the XFS 
filesystem is strongly recommended with the WiredTiger storage engine\n2023-06-06T19:18:21.837+0200 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem\n2023-06-06T19:18:21.837+0200 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3466M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),cache_cursors=false,compatibility=(release=\"3.0\",require_max=\"3.0\"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),\n2023-06-06T19:18:22.701+0200 I STORAGE [initandlisten] WiredTiger message [1686071902:701276][713911:0x7f8c930d9ac0], txn-recover: Main recovery loop: starting at 10/5376\n2023-06-06T19:18:22.770+0200 I STORAGE [initandlisten] WiredTiger message [1686071902:770670][713911:0x7f8c930d9ac0], txn-recover: Recovering log 10 through 11\n2023-06-06T19:18:22.817+0200 I STORAGE [initandlisten] WiredTiger message [1686071902:817148][713911:0x7f8c930d9ac0], txn-recover: Recovering log 11 through 11\n2023-06-06T19:18:22.847+0200 I STORAGE [initandlisten] WiredTiger message [1686071902:847397][713911:0x7f8c930d9ac0], txn-recover: Set global recovery timestamp: 0\n2023-06-06T19:18:22.922+0200 I CONTROL [initandlisten]\n2023-06-06T19:18:22.922+0200 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2023-06-06T19:18:22.922+0200 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2023-06-06T19:18:22.922+0200 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.\n2023-06-06T19:18:22.922+0200 I CONTROL [initandlisten]\n2023-06-06T19:18:22.922+0200 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.\n2023-06-06T19:18:22.922+0200 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.\n2023-06-06T19:18:22.922+0200 I CONTROL [initandlisten] ** Start the server with --bind_ip <address> to specify which IP\n2023-06-06T19:18:22.922+0200 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to\n2023-06-06T19:18:22.922+0200 I CONTROL [initandlisten] ** bind to all interfaces. 
If this behavior is desired, start the\n2023-06-06T19:18:22.922+0200 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.\n2023-06-06T19:18:22.922+0200 I CONTROL [initandlisten]\n2023-06-06T19:18:23.046+0200 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'\n2023-06-06T19:18:23.047+0200 I NETWORK [initandlisten] waiting for connections on port 27017\nroot@vmi693782:~# tail -n 100 /var/log/mongodb/mongodb.log\n2023-06-06T18:35:47.666+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020\n2023-06-06T18:35:47.666+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2023-06-06T18:35:47.666+0200 I CONTROL [initandlisten] modules: none\n2023-06-06T18:35:47.666+0200 I CONTROL [initandlisten] build environment:\n2023-06-06T18:35:47.666+0200 I CONTROL [initandlisten] distarch: x86_64\n2023-06-06T18:35:47.666+0200 I CONTROL [initandlisten] target_arch: x86_64\n2023-06-06T18:35:47.666+0200 I CONTROL [initandlisten] options: { config: \"/etc/mongodb.conf\", net: { bindIp: \"127.0.0.1\", unixDomainSocket: { pathPrefix: \"/run/mongodb\" } }, storage: { dbPath: \"/var/lib/mongodb\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongodb.log\" } }\n2023-06-06T18:35:47.667+0200 I - [initandlisten] Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.\n2023-06-06T18:35:47.667+0200 I STORAGE [initandlisten]\n2023-06-06T18:35:47.667+0200 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine\n2023-06-06T18:35:47.667+0200 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem\n2023-06-06T18:35:47.667+0200 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3466M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),cache_cursors=false,compatibility=(release=\"3.0\",require_max=\"3.0\"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),\n2023-06-06T18:35:48.434+0200 E STORAGE [initandlisten] WiredTiger error (-31802) [1686069348:434250][699248:0x7f0dc841aac0], file:WiredTiger.wt, connection: unable to read root page from file:WiredTiger.wt: WT_ERROR: non-specific WiredTiger error\n2023-06-06T18:35:48.434+0200 E STORAGE [initandlisten] WiredTiger error (0) [1686069348:434313][699248:0x7f0dc841aac0], file:WiredTiger.wt, connection: WiredTiger has failed to open its metadata\n2023-06-06T18:35:48.434+0200 E STORAGE [initandlisten] WiredTiger error (0) [1686069348:434320][699248:0x7f0dc841aac0], file:WiredTiger.wt, connection: This may be due to the database files being encrypted, being from an older version or due to corruption on disk\n2023-06-06T18:35:48.434+0200 E STORAGE [initandlisten] WiredTiger error (0) [1686069348:434326][699248:0x7f0dc841aac0], file:WiredTiger.wt, connection: You should confirm that you have opened the database with the correct options including all encryption and compression options\n2023-06-06T18:35:48.437+0200 E - [initandlisten] Assertion: 28595:-31802: WT_ERROR: non-specific WiredTiger error src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 421\n2023-06-06T18:35:48.437+0200 I STORAGE [initandlisten] exception in initAndListen: Location28595: -31802: WT_ERROR: non-specific 
WiredTiger error, terminating\n2023-06-06T18:35:48.437+0200 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2023-06-06T18:35:48.437+0200 I NETWORK [initandlisten] removing socket file: /run/mongodb/mongodb-27017.sock\n2023-06-06T18:35:48.437+0200 I CONTROL [initandlisten] now exiting\n2023-06-06T18:35:48.437+0200 I CONTROL [initandlisten] shutting down with code:100\n2023-06-06T18:35:51.852+0200 I CONTROL [main] ***** SERVER RESTARTED *****\n2023-06-06T18:35:51.856+0200 I CONTROL [initandlisten] MongoDB starting : pid=699277 port=27017 dbpath=/var/lib/mongodb 64-bit host=vmi693782.contaboserver.net\n2023-06-06T18:35:51.856+0200 I CONTROL [initandlisten] db version v3.6.8\n2023-06-06T18:35:51.856+0200 I CONTROL [initandlisten] git version: 8e540c0b6db93ce994cc548f000900bdc740f80a\n2023-06-06T18:35:51.856+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020\n2023-06-06T18:35:51.856+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2023-06-06T18:35:51.856+0200 I CONTROL [initandlisten] modules: none\n2023-06-06T18:35:51.856+0200 I CONTROL [initandlisten] build environment:\n2023-06-06T18:35:51.856+0200 I CONTROL [initandlisten] distarch: x86_64\n2023-06-06T18:35:51.856+0200 I CONTROL [initandlisten] target_arch: x86_64\n2023-06-06T18:35:51.856+0200 I CONTROL [initandlisten] options: { config: \"/etc/mongodb.conf\", net: { bindIp: \"127.0.0.1\", unixDomainSocket: { pathPrefix: \"/run/mongodb\" } }, storage: { dbPath: \"/var/lib/mongodb\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongodb.log\" } }\n2023-06-06T18:35:51.856+0200 I - [initandlisten] Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.\n2023-06-06T18:35:51.856+0200 I STORAGE [initandlisten]\n2023-06-06T18:35:51.856+0200 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine\n2023-06-06T18:35:51.856+0200 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem\n2023-06-06T18:35:51.856+0200 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3466M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),cache_cursors=false,compatibility=(release=\"3.0\",require_max=\"3.0\"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),\n2023-06-06T18:35:52.521+0200 E STORAGE [initandlisten] WiredTiger error (-31802) [1686069352:521386][699277:0x7ff357be0ac0], file:WiredTiger.wt, connection: unable to read root page from file:WiredTiger.wt: WT_ERROR: non-specific WiredTiger error\n2023-06-06T18:35:52.521+0200 E STORAGE [initandlisten] WiredTiger error (0) [1686069352:521440][699277:0x7ff357be0ac0], file:WiredTiger.wt, connection: WiredTiger has failed to open its metadata\n2023-06-06T18:35:52.521+0200 E STORAGE [initandlisten] WiredTiger error (0) [1686069352:521446][699277:0x7ff357be0ac0], file:WiredTiger.wt, connection: This may be due to the database files being encrypted, being from an older version or due to corruption on disk\n2023-06-06T18:35:52.521+0200 E STORAGE [initandlisten] WiredTiger error (0) [1686069352:521452][699277:0x7ff357be0ac0], file:WiredTiger.wt, connection: You should confirm that you have opened the database with the correct options including all encryption and compression 
options\n2023-06-06T18:35:52.523+0200 E - [initandlisten] Assertion: 28595:-31802: WT_ERROR: non-specific WiredTiger error src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 421\n2023-06-06T18:35:52.523+0200 I STORAGE [initandlisten] exception in initAndListen: Location28595: -31802: WT_ERROR: non-specific WiredTiger error, terminating\n2023-06-06T18:35:52.523+0200 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2023-06-06T18:35:52.523+0200 I NETWORK [initandlisten] removing socket file: /run/mongodb/mongodb-27017.sock\n2023-06-06T18:35:52.523+0200 I CONTROL [initandlisten] now exiting\n2023-06-06T18:35:52.523+0200 I CONTROL [initandlisten] shutting down with code:100\n2023-06-06T18:43:23.075+0200 I CONTROL [main] ***** SERVER RESTARTED *****\n2023-06-06T18:43:23.081+0200 I CONTROL [initandlisten] MongoDB starting : pid=702037 port=27017 dbpath=/var/lib/mongodb 64-bit host=vmi693782.contaboserver.net\n2023-06-06T18:43:23.081+0200 I CONTROL [initandlisten] db version v3.6.8\n2023-06-06T18:43:23.081+0200 I CONTROL [initandlisten] git version: 8e540c0b6db93ce994cc548f000900bdc740f80a\n2023-06-06T18:43:23.081+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020\n2023-06-06T18:43:23.081+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2023-06-06T18:43:23.081+0200 I CONTROL [initandlisten] modules: none\n2023-06-06T18:43:23.081+0200 I CONTROL [initandlisten] build environment:\n2023-06-06T18:43:23.081+0200 I CONTROL [initandlisten] distarch: x86_64\n2023-06-06T18:43:23.081+0200 I CONTROL [initandlisten] target_arch: x86_64\n2023-06-06T18:43:23.081+0200 I CONTROL [initandlisten] options: { config: \"/etc/mongodb.conf\", net: { bindIp: \"127.0.0.1\", unixDomainSocket: { pathPrefix: \"/run/mongodb\" } }, storage: { dbPath: \"/var/lib/mongodb\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongodb.log\" } }\n2023-06-06T18:43:23.081+0200 I - [initandlisten] Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.\n2023-06-06T18:43:23.082+0200 I STORAGE [initandlisten]\n2023-06-06T18:43:23.082+0200 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine\n2023-06-06T18:43:23.082+0200 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem\n2023-06-06T18:43:23.082+0200 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3466M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),cache_cursors=false,compatibility=(release=\"3.0\",require_max=\"3.0\"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),\n2023-06-06T18:43:23.826+0200 E STORAGE [initandlisten] WiredTiger error (-31802) [1686069803:826652][702037:0x7f09748f8ac0], file:WiredTiger.wt, connection: unable to read root page from file:WiredTiger.wt: WT_ERROR: non-specific WiredTiger error\n2023-06-06T18:43:23.826+0200 E STORAGE [initandlisten] WiredTiger error (0) [1686069803:826709][702037:0x7f09748f8ac0], file:WiredTiger.wt, connection: WiredTiger has failed to open its metadata\n2023-06-06T18:43:23.826+0200 E STORAGE [initandlisten] WiredTiger error (0) [1686069803:826716][702037:0x7f09748f8ac0], file:WiredTiger.wt, connection: This may be due to the database files being encrypted, being from an older 
version or due to corruption on disk\n2023-06-06T18:43:23.826+0200 E STORAGE [initandlisten] WiredTiger error (0) [1686069803:826721][702037:0x7f09748f8ac0], file:WiredTiger.wt, connection: You should confirm that you have opened the database with the correct options including all encryption and compression options\n2023-06-06T18:43:23.828+0200 E - [initandlisten] Assertion: 28595:-31802: WT_ERROR: non-specific WiredTiger error src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 421\n2023-06-06T18:43:23.828+0200 I STORAGE [initandlisten] exception in initAndListen: Location28595: -31802: WT_ERROR: non-specific WiredTiger error, terminating\n2023-06-06T18:43:23.828+0200 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2023-06-06T18:43:23.828+0200 I NETWORK [initandlisten] removing socket file: /run/mongodb/mongodb-27017.sock\n2023-06-06T18:43:23.828+0200 I CONTROL [initandlisten] now exiting\n2023-06-06T18:43:23.828+0200 I CONTROL [initandlisten] shutting down with code:100\n2023-06-06T18:43:28.448+0200 I CONTROL [main] ***** SERVER RESTARTED *****\n2023-06-06T18:43:28.453+0200 I CONTROL [initandlisten] MongoDB starting : pid=702081 port=27017 dbpath=/var/lib/mongodb 64-bit host=vmi693782.contaboserver.net\n2023-06-06T18:43:28.453+0200 I CONTROL [initandlisten] db version v3.6.8\n2023-06-06T18:43:28.453+0200 I CONTROL [initandlisten] git version: 8e540c0b6db93ce994cc548f000900bdc740f80a\n2023-06-06T18:43:28.453+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020\n2023-06-06T18:43:28.453+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2023-06-06T18:43:28.453+0200 I CONTROL [initandlisten] modules: none\n2023-06-06T18:43:28.453+0200 I CONTROL [initandlisten] build environment:\n2023-06-06T18:43:28.453+0200 I CONTROL [initandlisten] distarch: x86_64\n2023-06-06T18:43:28.453+0200 I CONTROL [initandlisten] target_arch: x86_64\n2023-06-06T18:43:28.453+0200 I CONTROL [initandlisten] options: { config: \"/etc/mongodb.conf\", net: { bindIp: \"127.0.0.1\", unixDomainSocket: { pathPrefix: \"/run/mongodb\" } }, storage: { dbPath: \"/var/lib/mongodb\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongodb.log\" } }\n2023-06-06T18:43:28.454+0200 I - [initandlisten] Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.\n2023-06-06T18:43:28.454+0200 I STORAGE [initandlisten]\n2023-06-06T18:43:28.454+0200 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine\n2023-06-06T18:43:28.454+0200 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem\n2023-06-06T18:43:28.454+0200 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3466M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),cache_cursors=false,compatibility=(release=\"3.0\",require_max=\"3.0\"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),\n2023-06-06T18:43:29.196+0200 E STORAGE [initandlisten] WiredTiger error (-31802) [1686069809:196286][702081:0x7fcb204a6ac0], file:WiredTiger.wt, connection: unable to read root page from file:WiredTiger.wt: WT_ERROR: non-specific WiredTiger error\n2023-06-06T18:43:29.196+0200 E STORAGE [initandlisten] WiredTiger error (0) 
[1686069809:196409][702081:0x7fcb204a6ac0], file:WiredTiger.wt, connection: WiredTiger has failed to open its metadata\n2023-06-06T18:43:29.196+0200 E STORAGE [initandlisten] WiredTiger error (0) [1686069809:196426][702081:0x7fcb204a6ac0], file:WiredTiger.wt, connection: This may be due to the database files being encrypted, being from an older version or due to corruption on disk\n2023-06-06T18:43:29.196+0200 E STORAGE [initandlisten] WiredTiger error (0) [1686069809:196434][702081:0x7fcb204a6ac0], file:WiredTiger.wt, connection: You should confirm that you have opened the database with the correct options including all encryption and compression options\n2023-06-06T18:43:29.198+0200 E - [initandlisten] Assertion: 28595:-31802: WT_ERROR: non-specific WiredTiger error src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 421\n2023-06-06T18:43:29.199+0200 I STORAGE [initandlisten] exception in initAndListen: Location28595: -31802: WT_ERROR: non-specific WiredTiger error, terminating\n2023-06-06T18:43:29.199+0200 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2023-06-06T18:43:29.199+0200 I NETWORK [initandlisten] removing socket file: /run/mongodb/mongodb-27017.sock\n2023-06-06T18:43:29.199+0200 I CONTROL [initandlisten] now exiting\n2023-06-06T18:43:29.199+0200 I CONTROL [initandlisten] shutting down with code:100\n",
"text": "I have a nodebb platform on the server, which of course ran on mongodb.\nIt all started after I decided to install another platform on mysql, I ran it, and it worked fine, I connected it to nginx, and everything is fixed, both sites work! Both the Mongo and the Mysql…\nToday I opened the nodebb site and I was very surprised to see that it is not connected (502 Unable to access this site)…\nAs a first step, I accessed the IP address of the server, and there a surprise awaited me… an Apache screen! Apparently while installing or something, I inadvertently installed Apache. Now everything was understandable, I entered ssh, turned off Apache, turned on nginx. And whoop, refresh.\nBrother, trouble comes in bunches! The 502 Bad Gateway screen usually indicates an error in the database, according to my experience, I didn’t wait long, I returned to ssh and ran ./nodebb setup knowing that it would soon connect to the database, and it would start running…\nbut what? Suddenly an error appeared, that it cannot connect to a database!\nI ran mongo and instead of inserting into the shell, this is the response:I thought it was related to a mysql conflict with mongodb, so I turned it off, and it still didn’t work…\nI restarted the server… and it didn’t help!\nWhen I run sudo service mongodb status\nThis is the output:I don’t know if the material has been deleted, or it’s simply impossible to access it, but in the /var/lib/mongodb folder I have the following material:\n\nimage842×764 22.9 KB\n\nApparently, from what I understood, these are database materials, I thought that if I deleted mongodb and reinstalled, and then put in the materials that were then it would be backed up automatically.\nNeed and would appreciate help! Thanks in advance!\nOf course, I’m attaching some files here to diagnose the problem:I changed some details in conf according to chatgpt because I thought it would help me, but nothing helps at allAll permissions on all mongodb folders are set to the mongodb userOn the last line he stays stuck like that the whole time, until I get out of himAnd of course, mongod.log\n(I exceeded the character limit, I will post the file in the next post…)\nAnd if anyone wants mongodb.log too:I would really appreciate the help. I’ve already tried to ask in several forums, no one has a clue. Or no one has the power to help.\nI have very important materials in it.",
"username": "levi.chviv770"
},
{
"code": "root@vmi693782:~# tail -n 100 /var/log/mongodb/mongod.log\n2023-06-06T19:20:10.505+0200 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2023-06-06T19:20:10.505+0200 I NETWORK [initandlisten] removing socket file: /tmp/mongodb-27017.sock\n2023-06-06T19:20:10.505+0200 I CONTROL [initandlisten] now exiting\n2023-06-06T19:20:10.505+0200 I CONTROL [initandlisten] shutting down with code:100\n2023-06-06T19:20:15.743+0200 I CONTROL [main] ***** SERVER RESTARTED *****\n2023-06-06T19:20:15.753+0200 I CONTROL [initandlisten] MongoDB starting : pid=714589 port=27017 dbpath=/var/lib/mongodb 64-bit host=vmi693782.contaboserver.net\n2023-06-06T19:20:15.753+0200 I CONTROL [initandlisten] db version v3.6.8\n2023-06-06T19:20:15.753+0200 I CONTROL [initandlisten] git version: 8e540c0b6db93ce994cc548f000900bdc740f80a\n2023-06-06T19:20:15.753+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020\n2023-06-06T19:20:15.753+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2023-06-06T19:20:15.753+0200 I CONTROL [initandlisten] modules: none\n2023-06-06T19:20:15.753+0200 I CONTROL [initandlisten] build environment:\n2023-06-06T19:20:15.753+0200 I CONTROL [initandlisten] distarch: x86_64\n2023-06-06T19:20:15.753+0200 I CONTROL [initandlisten] target_arch: x86_64\n2023-06-06T19:20:15.753+0200 I CONTROL [initandlisten] options: { config: \"/etc/mongod.conf\", net: { bindIp: \"127.0.0.1\", port: 27017 }, processManagement: { timeZoneInfo: \"/usr/share/zoneinfo\" }, security: { authorization: \"enabled\" }, storage: { dbPath: \"/var/lib/mongodb\", engine: \"mmapv1\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongod.log\" } }\n2023-06-06T19:20:15.754+0200 I STORAGE [initandlisten] exception in initAndListen: Location28662: Cannot start server. 
Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, but the specified storage engine was 'mmapv1'., terminating\n2023-06-06T19:20:15.754+0200 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2023-06-06T19:20:15.754+0200 I NETWORK [initandlisten] removing socket file: /tmp/mongodb-27017.sock\n2023-06-06T19:20:15.754+0200 I CONTROL [initandlisten] now exiting\n2023-06-06T19:20:15.754+0200 I CONTROL [initandlisten] shutting down with code:100\n2023-06-06T19:20:20.991+0200 I CONTROL [main] ***** SERVER RESTARTED *****\n2023-06-06T19:20:21.004+0200 I CONTROL [initandlisten] MongoDB starting : pid=714621 port=27017 dbpath=/var/lib/mongodb 64-bit host=vmi693782.contaboserver.net\n2023-06-06T19:20:21.004+0200 I CONTROL [initandlisten] db version v3.6.8\n2023-06-06T19:20:21.004+0200 I CONTROL [initandlisten] git version: 8e540c0b6db93ce994cc548f000900bdc740f80a\n2023-06-06T19:20:21.004+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020\n2023-06-06T19:20:21.004+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2023-06-06T19:20:21.004+0200 I CONTROL [initandlisten] modules: none\n2023-06-06T19:20:21.004+0200 I CONTROL [initandlisten] build environment:\n2023-06-06T19:20:21.004+0200 I CONTROL [initandlisten] distarch: x86_64\n2023-06-06T19:20:21.004+0200 I CONTROL [initandlisten] target_arch: x86_64\n2023-06-06T19:20:21.004+0200 I CONTROL [initandlisten] options: { config: \"/etc/mongod.conf\", net: { bindIp: \"127.0.0.1\", port: 27017 }, processManagement: { timeZoneInfo: \"/usr/share/zoneinfo\" }, security: { authorization: \"enabled\" }, storage: { dbPath: \"/var/lib/mongodb\", engine: \"mmapv1\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongod.log\" } }\n2023-06-06T19:20:21.005+0200 I STORAGE [initandlisten] exception in initAndListen: Location28662: Cannot start server. 
Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, but the specified storage engine was 'mmapv1'., terminating\n2023-06-06T19:20:21.005+0200 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2023-06-06T19:20:21.005+0200 I NETWORK [initandlisten] removing socket file: /tmp/mongodb-27017.sock\n2023-06-06T19:20:21.005+0200 I CONTROL [initandlisten] now exiting\n2023-06-06T19:20:21.005+0200 I CONTROL [initandlisten] shutting down with code:100\n2023-06-06T19:20:26.248+0200 I CONTROL [main] ***** SERVER RESTARTED *****\n2023-06-06T19:20:26.257+0200 I CONTROL [initandlisten] MongoDB starting : pid=714649 port=27017 dbpath=/var/lib/mongodb 64-bit host=vmi693782.contaboserver.net\n2023-06-06T19:20:26.257+0200 I CONTROL [initandlisten] db version v3.6.8\n2023-06-06T19:20:26.257+0200 I CONTROL [initandlisten] git version: 8e540c0b6db93ce994cc548f000900bdc740f80a\n2023-06-06T19:20:26.257+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020\n2023-06-06T19:20:26.257+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2023-06-06T19:20:26.257+0200 I CONTROL [initandlisten] modules: none\n2023-06-06T19:20:26.257+0200 I CONTROL [initandlisten] build environment:\n2023-06-06T19:20:26.257+0200 I CONTROL [initandlisten] distarch: x86_64\n2023-06-06T19:20:26.257+0200 I CONTROL [initandlisten] target_arch: x86_64\n2023-06-06T19:20:26.257+0200 I CONTROL [initandlisten] options: { config: \"/etc/mongod.conf\", net: { bindIp: \"127.0.0.1\", port: 27017 }, processManagement: { timeZoneInfo: \"/usr/share/zoneinfo\" }, security: { authorization: \"enabled\" }, storage: { dbPath: \"/var/lib/mongodb\", engine: \"mmapv1\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongod.log\" } }\n2023-06-06T19:20:26.257+0200 I STORAGE [initandlisten] exception in initAndListen: Location28662: Cannot start server. 
Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, but the specified storage engine was 'mmapv1'., terminating\n2023-06-06T19:20:26.257+0200 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2023-06-06T19:20:26.257+0200 I NETWORK [initandlisten] removing socket file: /tmp/mongodb-27017.sock\n2023-06-06T19:20:26.257+0200 I CONTROL [initandlisten] now exiting\n2023-06-06T19:20:26.257+0200 I CONTROL [initandlisten] shutting down with code:100\n2023-06-06T19:20:31.495+0200 I CONTROL [main] ***** SERVER RESTARTED *****\n2023-06-06T19:20:31.505+0200 I CONTROL [initandlisten] MongoDB starting : pid=714680 port=27017 dbpath=/var/lib/mongodb 64-bit host=vmi693782.contaboserver.net\n2023-06-06T19:20:31.505+0200 I CONTROL [initandlisten] db version v3.6.8\n2023-06-06T19:20:31.505+0200 I CONTROL [initandlisten] git version: 8e540c0b6db93ce994cc548f000900bdc740f80a\n2023-06-06T19:20:31.505+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020\n2023-06-06T19:20:31.505+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2023-06-06T19:20:31.505+0200 I CONTROL [initandlisten] modules: none\n2023-06-06T19:20:31.505+0200 I CONTROL [initandlisten] build environment:\n2023-06-06T19:20:31.505+0200 I CONTROL [initandlisten] distarch: x86_64\n2023-06-06T19:20:31.505+0200 I CONTROL [initandlisten] target_arch: x86_64\n2023-06-06T19:20:31.505+0200 I CONTROL [initandlisten] options: { config: \"/etc/mongod.conf\", net: { bindIp: \"127.0.0.1\", port: 27017 }, processManagement: { timeZoneInfo: \"/usr/share/zoneinfo\" }, security: { authorization: \"enabled\" }, storage: { dbPath: \"/var/lib/mongodb\", engine: \"mmapv1\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongod.log\" } }\n2023-06-06T19:20:31.506+0200 I STORAGE [initandlisten] exception in initAndListen: Location28662: Cannot start server. 
Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, but the specified storage engine was 'mmapv1'., terminating\n2023-06-06T19:20:31.506+0200 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2023-06-06T19:20:31.506+0200 I NETWORK [initandlisten] removing socket file: /tmp/mongodb-27017.sock\n2023-06-06T19:20:31.506+0200 I CONTROL [initandlisten] now exiting\n2023-06-06T19:20:31.506+0200 I CONTROL [initandlisten] shutting down with code:100\n2023-06-06T19:20:36.743+0200 I CONTROL [main] ***** SERVER RESTARTED *****\n2023-06-06T19:20:36.751+0200 I CONTROL [initandlisten] MongoDB starting : pid=714703 port=27017 dbpath=/var/lib/mongodb 64-bit host=vmi693782.contaboserver.net\n2023-06-06T19:20:36.751+0200 I CONTROL [initandlisten] db version v3.6.8\n2023-06-06T19:20:36.751+0200 I CONTROL [initandlisten] git version: 8e540c0b6db93ce994cc548f000900bdc740f80a\n2023-06-06T19:20:36.751+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020\n2023-06-06T19:20:36.751+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2023-06-06T19:20:36.751+0200 I CONTROL [initandlisten] modules: none\n2023-06-06T19:20:36.751+0200 I CONTROL [initandlisten] build environment:\n2023-06-06T19:20:36.751+0200 I CONTROL [initandlisten] distarch: x86_64\n2023-06-06T19:20:36.751+0200 I CONTROL [initandlisten] target_arch: x86_64\n2023-06-06T19:20:36.751+0200 I CONTROL [initandlisten] options: { config: \"/etc/mongod.conf\", net: { bindIp: \"127.0.0.1\", port: 27017 }, processManagement: { timeZoneInfo: \"/usr/share/zoneinfo\" }, security: { authorization: \"enabled\" }, storage: { dbPath: \"/var/lib/mongodb\", engine: \"mmapv1\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongod.log\" } }\n2023-06-06T19:20:36.752+0200 I STORAGE [initandlisten] exception in initAndListen: Location28662: Cannot start server. 
Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, but the specified storage engine was 'mmapv1'., terminating\n2023-06-06T19:20:36.752+0200 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2023-06-06T19:20:36.752+0200 I NETWORK [initandlisten] removing socket file: /tmp/mongodb-27017.sock\n2023-06-06T19:20:36.752+0200 I CONTROL [initandlisten] now exiting\n2023-06-06T19:20:36.752+0200 I CONTROL [initandlisten] shutting down with code:100\n2023-06-06T19:20:41.997+0200 I CONTROL [main] ***** SERVER RESTARTED *****\n2023-06-06T19:20:42.007+0200 I CONTROL [initandlisten] MongoDB starting : pid=714731 port=27017 dbpath=/var/lib/mongodb 64-bit host=vmi693782.contaboserver.net\n2023-06-06T19:20:42.007+0200 I CONTROL [initandlisten] db version v3.6.8\n2023-06-06T19:20:42.007+0200 I CONTROL [initandlisten] git version: 8e540c0b6db93ce994cc548f000900bdc740f80a\n2023-06-06T19:20:42.007+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020\n2023-06-06T19:20:42.007+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2023-06-06T19:20:42.007+0200 I CONTROL [initandlisten] modules: none\n2023-06-06T19:20:42.007+0200 I CONTROL [initandlisten] build environment:\n2023-06-06T19:20:42.007+0200 I CONTROL [initandlisten] distarch: x86_64\n2023-06-06T19:20:42.007+0200 I CONTROL [initandlisten] target_arch: x86_64\n2023-06-06T19:20:42.007+0200 I CONTROL [initandlisten] options: { config: \"/etc/mongod.conf\", net: { bindIp: \"127.0.0.1\", port: 27017 }, processManagement: { timeZoneInfo: \"/usr/share/zoneinfo\" }, security: { authorization: \"enabled\" }, storage: { dbPath: \"/var/lib/mongodb\", engine: \"mmapv1\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongod.log\" } }\n2023-06-06T19:20:42.007+0200 I STORAGE [initandlisten] exception in initAndListen: Location28662: Cannot start server. Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, but the specified storage engine was 'mmapv1'., terminating\n2023-06-06T19:20:42.007+0200 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2023-06-06T19:20:42.007+0200 I NETWORK [initandlisten] removing socket file: /tmp/mongodb-27017.sock\n2023-06-06T19:20:42.007+0200 I CONTROL [initandlisten] now exiting\n2023-06-06T19:20:42.007+0200 I CONTROL [initandlisten] shutting down with code:100\n",
"text": "From this log I understand that it moved to the wiredtiger filesystem, but I have no idea why, and since then it has been completely screwed up…",
"username": "levi.chviv770"
},
{
"code": "2023-06-06T18:43:29.196+0200 E STORAGE [initandlisten] WiredTiger error (-31802) [1686069809:196286][702081:0x7fcb204a6ac0], file:WiredTiger.wt, connection: unable to read root page from file:WiredTiger.wt: WT_ERROR: non-specific WiredTiger error\n2023-06-06T18:43:29.196+0200 E STORAGE [initandlisten] WiredTiger error (0) [1686069809:196409][702081:0x7fcb204a6ac0], file:WiredTiger.wt, connection: WiredTiger has failed to open its metadata\n2023-06-06T18:43:29.196+0200 E STORAGE [initandlisten] WiredTiger error (0) [1686069809:196426][702081:0x7fcb204a6ac0], file:WiredTiger.wt, connection: This may be due to the database files being encrypted, being from an older version or due to corruption on disk\n2023-06-06T18:43:29.196+0200 E STORAGE [initandlisten] WiredTiger error (0) [1686069809:196434][702081:0x7fcb204a6ac0], file:WiredTiger.wt, connection: You should confirm that you have opened the database with the correct options including all encryption and compression options\n2023-06-06T18:43:29.198+0200 E - [initandlisten] Assertion: 28595:-31802: WT_ERROR: non-specific WiredTiger error src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 421\n",
"text": "Look like wiredtiger can’t recognize the data file. Maybe data corrupted, or incompatible format, whatever.If you don’t have critical/important data, (or if you have a full backup), you can try deleting the db files and start from a “clean state”",
"username": "Kobe_W"
},
{
"code": "",
"text": "/var/lib/mongodbI don’t have a backup, what I do have is the files of the db that I copied from /var/lib/mongodb after it was knocked out…\nMy question is why did it happen, why did it get screwed?\nThe data is very, very important!",
"username": "levi.chviv770"
},
{
"code": "",
"text": "There is a difference between starting mongod with and without config file\nIn your first screenshot you have started mongod issuing just mongod\nThis brought up your mongod successfully on default port 27017 and default dirpath /data/db\nIt appears to be hung but it is not.Since your mongod was started on foreground (without fork) the session appears to be hung but log clearly shows waiting for connections\nYou have to open another session and connect by issuing mongoBut above is not what you want\nYour data was in /var/lib/mongod\nSecond attempt you tried starting your service but there is a mismatch in your config file and the data files that are existing in your dbpath\nYour log clearly says wired tiger data files detected but config file is having mmapv1(incompatibility between what you have under that directory vs what is mentioned in config file)\nWhat all changes made to your config file?\nDid you take a backup of it before making changes\nMay be putting back the original values you can start your mongod and get back your data\nHopefully there is no corruption to the files",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I didn’t take a backup.\nWhat things do I need to change in conf for it to be good?",
"username": "levi.chviv770"
},
{
"code": "",
"text": "How can I best change the file system to xfs and prevent wiredtiger storage?\nI got very confused, but this is probably my solution.\nDo I need to create a new partition and set it to xfs?\nI would appreciate it if someone could help me with this!\nThanks!",
"username": "levi.chviv770"
},
{
"code": "in initAndListen: Location28662: Cannot start server. Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, but the specified storage engine was 'mmapv1'., terminating\n2023-06-06T19:20:15.754+0200 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n",
"text": "What changes you made to your config file?\nDo you have a snapshot of it?\nin initAndListen: Location28662: Cannot start server. Detected data files in /var/lib/mongodb created by the ‘wiredTiger’ storage engine, but the specified storage engine was ‘mmapv1’., terminating\n2023-06-06T19:20:15.754+0200 I NETWORK [initandlisten] shutdown: going to close listening sockets…\nAll files ending with .wt are WT files\nWhy the storage was changed to mmapv1 if the .wt files already existed in the dbpath directory?\ncomment/disable this parameter in your config file and enable wiredtiger and see if you can start your mongod",
"username": "Ramachandra_Tummala"
}
] | Filesystem turned into WiredTiger and went crazy | 2023-06-06T17:29:51.786Z | Filesystem turned into WiredTiger and went crazy | 841 |
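For reference, a sketch of how the storage section of /etc/mongod.conf could look so that it no longer conflicts with the existing WiredTiger data files. It follows the advice above and is based on the config excerpts shown earlier in the thread; it resolves the "Detected data files ... created by 'wiredTiger' ... but the specified storage engine was 'mmapv1'" error, though it will not by itself repair the separate "unable to read root page" corruption reported in the earlier log output:

# /etc/mongod.conf -- storage section only; leave the rest of the file as it was
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
  # engine: mmapv1   <- remove or comment out this line so mongod uses the
  #                     WiredTiger engine that created the files in dbPath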
|
null | [
"production",
"golang"
] | [
{
"code": "decimal128RewrapManyDataKeymasterKeyproviderint64Cursor.SetBatchSize",
"text": "The MongoDB Go Driver Team is pleased to release version 1.11.7 of the MongoDB Go Driver.This release fixes various bugs, including:It also adds the Cursor.SetBatchSize API, which allows changing the document batch size requested for subsequent cursor iterations.For more information please see the 1.11.7 release notes.You can obtain the driver source from GitHub under the v1.11.7 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,The Go Driver Team",
"username": "Matt_Dale"
},
{
"code": "'mongodb://admin:[email protected]:27017/?tls=true&tlsCAFile=/some/path/to/selfsigned.crt'2023/06/06 17:16:28 No .env file found\n2023/06/06 17:16:58 server selection error: server selection timeout, current topology: { Type: Unknown, Servers: [{ Addr: someserver.somedomain:27017, Type: Unknown, Last error: x509: certificate relies on legacy Common Name field, use SANs instead }, ] }\nmongoshcompassmongoshcompass%2F",
"text": "On this version of the driver, I’m having a problem with a self-signed cert.I use a URI of the form:'mongodb://admin:[email protected]:27017/?tls=true&tlsCAFile=/some/path/to/selfsigned.crt'My application fails as follows:I know it’s loading my local copy of the self-signed cert because if I munge the path, my application does a panic: … no such file or directory.The URI I am using works with mongosh and compass except that for mongosh and compass I urlencode the leading-slashes in the certificate path as %2F’s.Any tips, please? Obviously it’s being fussy about the Common Name field, but is there a switch to make it happy, or is this the new strict?",
"username": "Jack_Woehr"
},
{
"code": "GODEBUG=x509ignoreCN=0",
"text": "@Jack_Woehr that’s an interesting error. That error seems to be returned by the crypto/x509 package starting with Go 1.15. The error could be disabled with the environment variable GODEBUG=x509ignoreCN=0 until the override was removed with Go 1.17. I’m not aware of any recent intentional change in the Go driver that would surface that error.I have some questions that could help get to the bottom of the error:",
"username": "Matt_Dale"
},
{
"code": "mongoshcompass",
"text": "In the past I have used Go for MongoDB very infrequently.In fact, the last time I tried Go with MongoDB, the development MongoDB database was not configured for TLS.Since that last time, I have configured our development database for TLS and we have been happily accessing it that way, CN and all, via mongosh, compass, and via the PHP Driver, PHP being the language which I am using most for MongoDB development.So I am afraid I cannot provide helpful details. I asked the question assuming I had omitted some variable or URI component. And the answer, from what you say, would have been “Yes” in earlier versions of the crypto/x509 indirect requirement.So I guess I should simply comply by updating my self-signed certificates rather than wrestle with the programming interface.FWIW my Go version is 1.18.1.",
"username": "Jack_Woehr"
},
{
"code": "tlsInsecure=true",
"text": "Thanks for the additional info! The Go TLS standard libraries tend to be relatively aggressive at enforcing secure configuration and deprecating legacy behavior. Since the Go driver uses the TLS standard libraries, it also inherits those strict requirements.As far as a workaround, you could try using the tlsInsecure=true URI option. However, use that with extreme caution because it disables TLS hostname checking, so it may break the security you hoped to add by enabling TLS. Otherwise, as you also concluded, there’s not really a way to override that specific behavior in Go 1.17+ and you’re stuck updating your certs.",
"username": "Matt_Dale"
},
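For readers landing here from other drivers: tlsInsecure=true is a standard MongoDB connection-string option rather than something Go-specific. A minimal sketch with pymongo, shown only to illustrate the URI options involved (host, credentials, and CA-file path are the placeholder values from this thread):

```python
from pymongo import MongoClient

# Normal case: trust the self-signed CA and keep hostname verification on.
uri = (
    "mongodb://admin:[email protected]:27017/"
    "?tls=true&tlsCAFile=/some/path/to/selfsigned.crt"
)

# Workaround discussed above: relax certificate checks entirely.
# Use with extreme caution -- it also disables hostname verification.
insecure_uri = (
    "mongodb://admin:[email protected]:27017/"
    "?tls=true&tlsCAFile=/some/path/to/selfsigned.crt&tlsInsecure=true"
)

client = MongoClient(insecure_uri)
print(client.admin.command("ping"))  # simple connectivity check
```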
{
"code": "tlsInsecure=true",
"text": "tlsInsecure=true does the trick, @Matt_Dale\nIt’s the Dev installation, not exposed outside, so I’m not worried about the insecurity.\nThe Prod installation will have a genuine certificate.\nThanks for your help.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Go Driver 1.11.7 Released | 2023-06-06T22:14:52.007Z | MongoDB Go Driver 1.11.7 Released | 696 |
null | [] | [
{
"code": "",
"text": "Hi there,I’m currently going through M312 to prepare for my DBA Exam.\nUnfortunately, whole lessons are completely outdated. Especially the videos using Mtools to analyze the logs cannot be played along locally with current installations of MongoDB, since Mtools do not support the current log format, which has been the standard (and only available option) for quite some time now.\nAre these tasks going to be relevant in the exam? If so, what resources can you recommend for this type of work?\nSince debugging is IMHO a fairly important task in the life of a DBA, are there any plans to revise M312? Judging on some of the content that’s shown in the videos, the course was recorded in 2014, which is ages ago.Best regards\nMax",
"username": "MaxR"
},
{
"code": "",
"text": "Hey @MaxR,Thanks for flagging this.Since debugging is IMHO a fairly important task in the life of a DBA, are there any plans to revise M312?The University team is aware that the contents of the course are outdated. We assure you, they are working hard on creating and testing new content that will be relevant to the DBAs.Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "Thanks! Looking forward to it ",
"username": "MaxR"
},
{
"code": "",
"text": "Hi @SatyamThanks for your answer!Are the outdated contents relevant for the DBA exam? What resources do you recommend for learning these topics?Best regards\nMax",
"username": "MaxR"
},
{
"code": "",
"text": "Hello @MaxRat least when I did the DBA exam there was no question about mtools, however this is no guarantee.\nIn case you want to analyse the new log format have a look at hatchet and keyhole.\nI am pretty sure that this is not content for the DBA exam but very handy when it come to the point to get the hands dirty. Regards,\nMichael",
"username": "michael_hoeller"
}
] | Outdated content for M312 | 2023-06-02T07:18:22.362Z | Outdated content for M312 | 790 |
null | [
"queries",
"dot-net"
] | [
{
"code": "\nMongoBsonRespository<BsonDocument> DbBsonRepository = new MongoBsonRespository<BsonDocument>(collectionName);\nDbBsonRepository.MapFindOptions(reportQuery);\n\npublic void MapFindOptions(ReportQuery reportQuery)\n{\n\tint columnCount = 0;\n\tvar findOptions = new FindOptions<TDocument>();\n\tList<ProjectionDefinition<TDocument>> projectionList = new List<ProjectionDefinition<TDocument>>();\n\tif (reportQuery.Columns != null && reportQuery.Columns.Count > 0)\n\t{\n\t\tforeach (ColumnBase columnInfo in reportQuery.Columns)\n\t\t{\n\t\t\t//build the projection definition for select\n\t\t\tFieldDefinition<TDocument> fieldDefinition = columnInfo.Name;\n\t\t\tprojectionList.Add(Builders<TDocument>.Projection.Include(fieldDefinition));\n\t\t\tcolumnCount++;\n\t\t}\n\t}\n findOptions.Projection = Builders<TDocument>.Projection.Combine(projectionList);\n\tFindOptions = findOptions;\n}\n\n using (IAsyncCursor<BsonDocument> cursor = DbBsonRepository.FindSync())\n {\n //the logic to read data\n }\n",
"text": "I am trying to find a way to define the formatting for date and decimal types while I read data from Mongo using C# driver with Repository pattern. Below is the code which I use to define the FindOptions.My query here is, how do I specify formatting for date or decimal fields in the FieldDefinition itself ? or is there any other to solve this ? Thanks in advance.",
"username": "Saravanaram_Kumarasamy"
},
{
"code": "BsonDocumentsBsonDateTimeBsonDecimal128DateTimedecimal{price:C2}BsonDecimal128decimalToDecimal()BsonDocumentsBsonClassMap<T>",
"text": "Hi, @Saravanaram_Kumarasamy,Welcome to the MongoDB Community Forums. I understand that you are wondering how to specify date and decimal formatting when working with BsonDocuments.The data returned from MongoDB will be of type BsonDateTime or BsonDecimal128 (assuming that is how it is stored in the database). These are convertible to C# types such as DateTime or decimal. So you can format the data using standard .NET/C# constructs such as format strings and format specifiers. For example, to format a decimal as currency, you could use {price:C2}. (Note that you would first have to convert the BsonDecimal128 to a decimal (via ToDecimal()) in order to use C# format specifiers.NOTE: Rather than working with BsonDocuments, you can map your documents to C# classes (AKA POCOs) and the serialization framework built into the driver will convert the BSON types to C# types for you. This behaviour can be controlled via BSON attributes, BsonClassMap<T>, and conventions.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Hi @James_Kovacs,Thanks for the reply.\nI understand that date or decimal formatting can be applied on the BsonDocument after retrieval. But this I feel is a hit on performance as I will have to additionally loop through the records every time to format the desired fields.And also I get to know from your reply that the type cast can be done by specifying the BSON attribute on the POCOs. This I can, but there are dynamic collections as well, so defining POCOs for all is not possible.But is there a way, where I can specify the format for decimal/date fields, in the FieldDefinition/StringFieldDefinition or in the ProjectionList so that the formatting would have been done already while data retrieval from the database.Thanks again!",
"username": "Saravanaram_Kumarasamy"
},
{
"code": "using MongoDB.Bson;\nusing MongoDB.Driver;\n\nvar client = new MongoClient();\nvar db = client.GetDatabase(\"test\");\nvar coll = db.GetCollection<Event>(\"events\");\n\nvar query = coll.Aggregate().Project(x => new { Start = x.StartTime.ToString(\"%Y-%m-%d\") });\nConsole.WriteLine(query);\n\nrecord Event(ObjectId Id, string Title, DateTime StartTime, DateTime EndTime);\nDateTimeDateTime.ToStringDatetime.ToString$dateToStringaggregate([{ \"$project\" : { \"Start\" : { \"$dateToString\" : { \"date\" : \"$StartTime\", \"format\" : \"%Y-%m-%d\" } }, \"_id\" : 0 } }])\n$dateToString",
"text": "If you want to perform the transformation on the server, you can use code such at the following:Note that my POCO is using DateTime because I need to be able to call DateTime.ToString in my Fluent Aggregate or LINQ query. I am projecting into an anonymous type, but you could just as easily project into a POCO where the date fields are strings.This code will send the following MQL to the server. Note how Datetime.ToString was transformed into $dateToString.The format specifier string is passed through unaltered and uses the $dateToString format specifiers, not the .NET ones.Sincerely,\nJames",
"username": "James_Kovacs"
}
] | Define formatting dates and decimals in FindOptions in C# driver | 2023-06-05T09:31:14.252Z | Define formatting dates and decimals in FindOptions in C# driver | 681 |
null | [] | [
{
"code": "",
"text": "We need to know how we can restrict having access to MongoDB even when we have admin access. we as devops create MongoDB then we get admin access, using these admin access we can see data in all database in MongoDB . How can we restrict read and write data in all database in MongoDB This user requires to perform all admin tasks and access to infra.But, as xyz being GDPR complaint, We should not have access to customer’s data . So, we need solution that this user can perform any admin activity but it should not able to access to customer’s data and can you provide links for this as well.",
"username": "Zinkal_Desai"
},
{
"code": "",
"text": "You can check the full list of built in roles from manual. And if no such role exists, you can try creating your own role with proper permissions for admin and read/write access.",
"username": "Kobe_W"
}
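A minimal sketch of the custom-role approach suggested above, using pymongo. The role name, user name, and connection URI are hypothetical, and the exact privilege actions you grant should be reviewed against your own compliance requirements -- this is a sketch of the idea, not a complete GDPR solution:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumption: adjust URI/credentials
admin = client.admin

# Create a role with operational/diagnostic actions only -- note that no
# data-reading or data-writing actions (find, insert, update, remove) are granted.
admin.command(
    "createRole",
    "opsWithoutData",  # hypothetical role name
    privileges=[
        {"resource": {"db": "", "collection": ""},
         "actions": ["listCollections", "listIndexes", "dbStats", "collStats"]},
        {"resource": {"cluster": True},
         "actions": ["serverStatus", "listDatabases"]},
    ],
    roles=[],
)

# Assign the custom role instead of a broad built-in admin role.
admin.command(
    "createUser",
    "devops_user",  # hypothetical user name
    pwd="change-me",
    roles=[{"role": "opsWithoutData", "db": "admin"}],
)
```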
] | MongoDB how can we have all the admin privileges plus restriction to access the data inside the database | 2023-06-07T12:59:10.437Z | MongoDB how can we have all the admin privileges plus restriction to access the data inside the database | 266 |
null | [
"next-js"
] | [
{
"code": "",
"text": "Hi In this article about vercel integraiont (https://www.mongodb.com/docs/atlas/reference/partner-integrations/vercel/#std-label-vercel-access-lists), it is mentioned to open all ip address list to allow traffic from vercel deployment. Does this also mean that traffic from Vercel to Mongodb atlas is over internet? Is there any way to connect it over a private network?",
"username": "Ashish_Sheth"
},
{
"code": "",
"text": "Hi @Ashish_Sheth,I believe the traffic does go over the internet. There are some more details from the vercel page:When using third-party services such as databases, you may encounter the option to restrict incoming traffic to your resource to a specific IP address or address range.\nVercel deployments use dynamic IP addresses due to the dynamic nature of the platform. As a result, it is not possible to determine the deployment IP address or address range because the IP may change at any time as the deployment scales instances or across regions.I found the following as well regarding VPC / VPN’s but you may wish to contact the vercel sales / support instead:If your current security and compliance obligations require more than dedicated IP addresses, contact us for guidance related to your specific needs.\nNote: If you require support for VPC peering or VPN connections Contact SalesRegards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "It’s useful to emphasize that MongoDB Atlas requires TLS/SSL network encryption over the wire for all connections to the database",
"username": "Andrew_Davidson"
}
] | Does traffic from vercel to mongodb atlas goes over internet? | 2023-06-06T06:01:06.193Z | Does traffic from vercel to mongodb atlas goes over internet? | 956 |
null | [
"kotlin"
] | [
{
"code": "",
"text": "When I try to use the inMemory() setting, I run into various problems.Roughly it looks like this:RealmConfiguration.Builder(\nschema = setOf(\n…RealmObject declarations\n)\n).inMemory().build()In Unit tests it causes an ExceptionInInitializerError with NullPointerException.\nIn Robolectric or Instrumentation tests it causes the following:\nkotlin.UninitializedPropertyAccessException: lateinit property filesDir has not been initialized.What am I doing wrong?",
"username": "Zsolt_Bertalan"
},
{
"code": "",
"text": "The issue is that Android unittests are running in a non-Android JVM environment, but the infrastructure in AGP is pulling in Android dependencies for the test. The dependencies are not properly initialized when not running on an device/emulator.We are in the process of migrating to the new Android source set layout from Kotlin 1.8 and you can track the progress in Migrate to new Android source sets for tests introduced in Kotlin 1.8 by cmelchior · Pull Request #1399 · realm/realm-kotlin · GitHub.Until that you will have to run Android tests (both unit tests and device test) on a device/emulator.",
"username": "Claus_Rorbech"
},
{
"code": "",
"text": "That’s what I’m trying to do. That’s where I got the second exception.",
"username": "Zsolt_Bertalan"
},
{
"code": "",
"text": "I’m working on an Android only project, so the ticket is not relevant to me, I think.",
"username": "Zsolt_Bertalan"
},
{
"code": "",
"text": "Hmm, okay. Then it might be that our initializer, that picks up the context, just isn’t triggered correctly. We use the App Startup infrastruture as described in App Startup | Android Developers for that. Might be that Robolectric has an initializer that is running before ours. Would you be able to inspect the app manifest to see if Robolectric involves any initializers (App Startup | Android Developers). If so then we might be able to control the ordering of them by providing a custom initializer that depends on both.",
"username": "Claus_Rorbech"
},
{
"code": "",
"text": "That makes sense now, because we use Initializers, and when I set up the project, I had to add RealmInitializer as a dependency for a few Initializers to make it work and remove the very same error message with the filesDir.\nHowever, now I run into this in normal Ui tests as well, even when I turn HiltAndroidRule off. Do I need to pass context somehow in tests?",
"username": "Zsolt_Bertalan"
},
{
"code": " val appInitializer = AppInitializer.getInstance(this)\n appInitializer.initializeComponent(RealmInitializer::class.java)\n",
"text": "I think I solved it. We still used (even with Hilt off) a custom AndroidJUnitRunner with a custom Application. I added RealmInitializer to this like this:And it works! Thanks.",
"username": "Zsolt_Bertalan"
},
{
"code": "",
"text": "Great to hear.… Claus",
"username": "Claus_Rorbech"
},
{
"code": "com.getkeepsafe.relinker.MissingLibraryException: Could not find 'librealmc.dylib'. Looked for: [armeabi-v7a], but only found: [].\nandroidx.startup.StartupException: com.getkeepsafe.relinker.MissingLibraryException: Could not find 'librealmc.dylib'. Looked for: [armeabi-v7a], but only found: [].\n\tat androidx.startup.AppInitializer.doInitialize(AppInitializer.java:187)\n\tat androidx.startup.AppInitializer.doInitialize(AppInitializer.java:138)\n\tat androidx.startup.AppInitializer.initializeComponent(AppInitializer.java:117)\n\tat com.reachplc.data.news.InstrumentationTestApp.onCreate(InstrumentationTestApp.kt:15)\n\tat android.app.Instrumentation.callApplicationOnCreate(Instrumentation.java:1154)\n\tat org.robolectric.android.internal.RoboMonitoringInstrumentation.callApplicationOnCreate(RoboMonitoringInstrumentation.java:127)\n\tat org.robolectric.android.internal.AndroidTestEnvironment.lambda$installAndCreateApplication$2(AndroidTestEnvironment.java:368)\n\tat app//org.robolectric.util.PerfStatsCollector.measure(PerfStatsCollector.java:86)\n\tat org.robolectric.android.internal.AndroidTestEnvironment.installAndCreateApplication(AndroidTestEnvironment.java:366)\n\tat org.robolectric.android.internal.AndroidTestEnvironment.lambda$createApplicationSupplier$0(AndroidTestEnvironment.java:245)\n\tat app//org.robolectric.util.PerfStatsCollector.measure(PerfStatsCollector.java:53)\n\tat org.robolectric.android.internal.AndroidTestEnvironment.lambda$createApplicationSupplier$1(AndroidTestEnvironment.java:242)\n\tat com.google.common.base.Suppliers$NonSerializableMemoizingSupplier.get(Suppliers.java:183)\n\tat org.robolectric.RuntimeEnvironment.getApplication(RuntimeEnvironment.java:72)\n\tat org.robolectric.android.internal.AndroidTestEnvironment.setUpApplicationState(AndroidTestEnvironment.java:210)\n\tat app//org.robolectric.RobolectricTestRunner.beforeTest(RobolectricTestRunner.java:331)\n\tat app//org.robolectric.internal.SandboxTestRunner$2.lambda$evaluate$2(SandboxTestRunner.java:278)\n\tat app//org.robolectric.internal.bytecode.Sandbox.lambda$runOnMainThread$0(Sandbox.java:99)\n\tat [email protected]/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat [email protected]/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat [email protected]/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat [email protected]/java.lang.Thread.run(Thread.java:833)\nCaused by: com.getkeepsafe.relinker.MissingLibraryException: Could not find 'librealmc.dylib'. 
Looked for: [armeabi-v7a], but only found: [].\n\tat com.getkeepsafe.relinker.ApkLibraryInstaller.installLibrary(ApkLibraryInstaller.java:175)\n\tat com.getkeepsafe.relinker.ReLinkerInstance.loadLibraryInternal(ReLinkerInstance.java:180)\n\tat com.getkeepsafe.relinker.ReLinkerInstance.loadLibrary(ReLinkerInstance.java:136)\n\tat com.getkeepsafe.relinker.ReLinker.loadLibrary(ReLinker.java:70)\n\tat com.getkeepsafe.relinker.ReLinker.loadLibrary(ReLinker.java:57)\n\tat io.realm.kotlin.internal.AndroidUtilsKt.loadAndroidNativeLibs(AndroidUtils.kt:14)\n\tat io.realm.kotlin.internal.RealmInitializer.create(RealmInitializer.kt:42)\n\tat io.realm.kotlin.internal.RealmInitializer.create(RealmInitializer.kt:30)\n\tat androidx.startup.AppInitializer.doInitialize(AppInitializer.java:180)\n\tat androidx.startup.AppInitializer.doInitialize(AppInitializer.java:138)\n\tat androidx.startup.AppInitializer.initializeComponent(AppInitializer.java:117)\n\tat com.reachplc.data.news.InstrumentationTestApp.onCreate(InstrumentationTestApp.kt:15)\n\tat android.app.Instrumentation.$$robo$$android_app_Instrumentation$callApplicationOnCreate(Instrumentation.java:1154)\n\tat android.app.Instrumentation.callApplicationOnCreate(Instrumentation.java)\n\t... 17 more\n",
"text": "My last comment above solved the problems with Instrumentation tests. Now as the last part of our migration, I delete SQLite DataSources and convert their Robolectric integration Unit tests to target the Realm DataSources.\nHowever, I run into a similar problem as before. Here is the stack trace:Important to note, that we have normal Unit tests with mocked Realm, but these are integration tests with in-memory Realm.\nI found some related questions on the internet, but they are mostly old and probably outdated. I found this ticket too, which is outstanding, but this is for realm-java, and it seems there is no more blockers to add Robolectric support:Referring to: https://github.com/robolectric/robolectric/issues/1389\n\nCan we ple…ase get an update on this issue. We have tons of integration test we need to run outside of an emulator/device and roboelectric is the only way for us to shadow the Android classes. Without the required support on Realm, we are pretty much blocked before we make the move over from DB4O to Realm.\n\nThe other two major sticking points are support for null and the lack of a high level API list for performing migrations.\n\nThanks a lot for the support.So ultimately the question is: is Robolectric supported for in-memory Realm for realm-kotlin?",
"username": "Zsolt_Bertalan"
}
] | How to use the inMemory() setting? | 2023-05-26T12:08:04.859Z | How to use the inMemory() setting? | 860 |
[
"queries",
"cxx",
"c-driver"
] | [
{
"code": "mongocxx::instance instance{}; // This should be done only once.\nmongocxx::uri uri(\"mongodb://localhost:27017\");\nmongocxx::client client(uri);\n\nauto db = client[\"test\"];\nauto collection = db[\"inventory\"];\n\n// Find All Documents in a Collection\n{\n auto cursor_all = collection.find({});\n for (auto&& doc : cursor_all) {\n // Do something with doc\n std::cout << bsoncxx::to_json(doc) << std::endl;\n }\n}\n",
"text": "Data stored in MongoDB as follows:\n\nFollowed the approach as mentioned Getting Started with MongoDB and C++ | MongoDB to build C++ drivers for MongoDB.\nHere is my C++ code snippet\n#include \n#include <bsoncxx/builder/basic/document.hpp>\n#include <bsoncxx/json.hpp>\n#include <mongocxx/client.hpp>\n#include <mongocxx/instance.hpp>\n#include <mongocxx/stdx.hpp>\n#include <mongocxx/uri.hpp>int main()\n{\nstd::cout << “Hello World!\\n”;}Expected output should be the data available in the MongoDB collection. Unfortunately the actual result displayed as the following\n\nimage948×444 36.4 KB\nIn the same example,\nbsoncxx::document::element model_bson = doc[“model”];\nstd::string modelName = std::string(model_bson .get_utf8().value);\nstd::cout << modelName << std::endl; // value prints good and it is as per the databasebsoncxx::document::element price_bson = doc[“price”];\ndouble dVal = price_bson .get_double(); // ----- application crashing\nstd::cout << dVal << std::endl;Looking for the help in these issues please.",
"username": "Raja_S1"
},
{
"code": "",
"text": "This seems same as https://jira.mongodb.org/browse/CXX-1388 - can you try the suggestions shared in that ticket? You could also wrap to_json function in a try-catch block and check if it generates an exception.\nReference API doc - MongoDB C++ Driver: bsoncxx Namespace Reference",
"username": "Rishabh_Bisht"
}
] | Junk value being read from bsoncxx::to_json | 2023-06-07T13:16:53.880Z | Junk value being read from bsoncxx::to_json | 746 |
|
null | [] | [
{
"code": "",
"text": "Hi everyone,\nI have huge amounts of data to insert and want to use a data pipeline which is called “Vector”. Does anyone have any idea how to connect it to mongodb? Or use any other datapipline that is compatible with mongodb.",
"username": "Farbod_Seidali"
},
{
"code": "",
"text": "Hi @Farbod_Seidali ,Looking at the Vector “sinks” documentation I don’t see a direct MongoDB option:https://vector.dev/docs/reference/configuration/sinkHowever, there are a few options you can consider:Ty\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Stay tuned for MongoDB 7 product announcements",
"username": "Jeffery_Schmitz"
}
] | Connecting Vector to mongodb | 2022-10-26T05:59:35.483Z | Connecting Vector to mongodb | 2,954 |
[
"installation"
] | [
{
"code": "",
"text": "Hi, jest is a test framework with almost 4 million public repos on github.Jest has instructions to use MongoMemoryServer for testing:With the Global Setup/Teardown and Async Test Environment APIs, Jest can work smoothly with MongoDB.Unfortunately, it doesn’t work on Ubuntu 22.04 because of a libcrypto problem:Hi, cool project. I tried using it, but got an error. I googled for some ways …to install libcrypto, but nothing worked. I'm using Ubuntu 22.04\n\nHere's the error:\n```\nDetermining test suites to run...Starting the MongoMemoryServer Instance failed, enable debug log for more information. Error:\n StdoutInstanceError: Instance failed to start because a library is missing or cannot be opened: \"libcrypto.so.1.1\"\n at MongoInstance.checkErrorInLine (/home/michael/casefile/node_modules/mongodb-memory-server-core/lib/util/MongoInstance.js:368:62)\n at MongoInstance.stderrHandler (/home/michael/casefile/node_modules/mongodb-memory-server-core/lib/util/MongoInstance.js:290:14)\n at Socket.emit (node:events:513:28)\n at addChunk (node:internal/streams/readable:324:12)\n at readableAddChunk (node:internal/streams/readable:297:9)\n at Readable.push (node:internal/streams/readable:234:10)\n at Pipe.onStreamRead (node:internal/stream_base_commons:190:23)\n\n\n ● Test suite failed to run\n\n Jest: Got error running globalSetup - /home/michael/casefile/node_modules/@shelf/jest-mongodb/lib/setup.js, reason: Instance failed to start because a library is missing or cannot be opened: \"libcrypto.so.1.1\"\n\n at MongoInstance.checkErrorInLine (../node_modules/mongodb-memory-server-core/lib/util/MongoInstance.js:368:62)\n at MongoInstance.stderrHandler (../node_modules/mongodb-memory-server-core/lib/util/MongoInstance.js:290:14)\n```\n\nThis repo is a good reproduction of the libcrypto problem on Ubuntu 22.04. I tried changing the version to 5.0.14 and same-same.\n\nhttps://github.com/renatops1991/clean-code-api \n\nWhat am I missing?\n\nI have mongo running in a docker instance - that would be so much easier and more secure to deploy than downloading the images - is that a possibility?\n\nThank you!\n\np.s. I cross posted here: https://www.mongodb.com/community/forums/t/jest-mongo-and-ubuntu-22-04/208722How can I get a libcrypto that Mongo likes on Ubuntu?Or what is jest-mongo missing to work?Here is someone elses github project that reproduces the issue: GitHub - renatopsdev/clean-code-api: This is an API developed during the course \"Rest API NodeJs using TDD, Clean Architecture and Typescript\"Thank you!",
"username": "Michael_Cole"
},
{
"code": "",
"text": "MongoDB 6.0.3 supports Ubuntu 22.04 but the documentation has not been updated to reflect this.",
"username": "chris"
},
{
"code": "jest-mongodb-config.jsmodule.exports = {\n mongodbMemoryServerOptions: {\n binary: {\n version: '6.0.6',\n skipMD5: true,\n },\n autoStart: false,\n instance: {},\n },\n};\n",
"text": "Problem:\nMongoMemoryServer uses v5.0.13 which doesn’t support Ubuntu 22.04.Solution:\nYou can use latest mongodb version by configuring jest-mongodb using jest-mongodb-config.jsThis works perfectly fine in Ubuntu 22.04References:Jest preset for MongoDB in-memory server. Contribute to shelfio/jest-mongodb development by creating an account on GitHub.Spinning up mongod in memory for fast tests. If you run tests in parallel this lib helps to spin up dedicated mongodb servers for every test file in MacOS, *nix, Windows or CI environments (in most...",
"username": "Abdul_Rauf"
}
] | Jest-mongo and Ubuntu 22.04 | 2023-01-15T02:01:25.788Z | Jest-mongo and Ubuntu 22.04 | 4,330 |
|
null | [
"java",
"atlas-device-sync",
"android",
"kotlin",
"flexible-sync"
] | [
{
"code": "\"someField\": {\n \"additionalProperties\": {\n \"bsonType\": \"int\"\n },\n \"bsonType\": \"object\"\n }\n",
"text": "We’re using the Kotlin SDK in our Android app and in MongoDB we have a field that is a map/dictionary of Strings to Integer values. It has the following bson definition:In the old Java SDK, there was a class called RealmDictionary that could represent such a data structure. Unfortunately, that doesn’t exist in the Kotlin SDK anymore.\nWhat is the suggested way of representing this when using the Kotlin SDK?Thanks for your help in advance.",
"username": "Sebastian_Dombrowski"
},
{
"code": "RealmDictionaryimport io.realm.RealmDictionary\n\n...\n\nopen class DetailInfo() : RealmObject() {\n ...\n var socialIds: RealmDictionary<String> = RealmDictionary()\n",
"text": "What version of the SDK are you using?We’re using 10.10.0 on our android/kotlin app and I’m using RealmDictionary in a bunch of places.",
"username": "Alex_Tang1"
},
{
"code": "",
"text": "Hi Alex,I think you might be referring to the Java SDK form Realm. MongoDB has actually 2 different SDKs for accessing realm on the JVM:Realm is a mobile database: a replacement for SQLite & ORMs - GitHub - realm/realm-java: Realm is a mobile database: a replacement for SQLite & ORMsKotlin Multiplatform and Android SDK for the Realm Mobile Database: Build Better Apps Faster. - GitHub - realm/realm-kotlin: Kotlin Multiplatform and Android SDK for the Realm Mobile Database: Buil...The Java SDK is at version 10.11.1 and the Kotlin SDK is at version 1.1.0.\nDue to some limitations of the Java SDK when it comes to multi-module setups we have to use the Kotlin SDK in our project. And that doesn’t have the RealmDictionary class anymore.\nSo I’m looking for ways to use maps/dictionaries when using the Kotlin SDK.",
"username": "Sebastian_Dombrowski"
},
{
"code": "",
"text": "Well well well. TIL.You’re right, we’re using realm-java. I feel like we should move to the Kotlin library, but if it doesn’t have RealmDictionary, that’ll prevent us from doing so for now.Looking more. Frozen objects might change the entirety of how we use realm, so yeah, there’s that too. Anyway thanks for the insight.",
"username": "Alex_Tang1"
},
{
"code": "",
"text": "There is still no RealmDictionary. I wonder what could be preventing the developers from adding it to Kotlin SDK?",
"username": "TheHiddenDuck"
},
{
"code": "",
"text": "G’Day @Sebastian_Dombrowski , @Alex_Tang1 , @TheHiddenDuck ,Thank you for raising your concerns. RealmDictionary was added in Kotlin SDK 1.7.0. You can read more on the Kotlin Data Types Section.You can also subscribe to Realm Newsletter for product and community updates.Cheers, \nhenna",
"username": "henna.s"
},
{
"code": "",
"text": "Ok, my mistake, I was looking at the list of supported types here and failed to notice it being listed further in the article. Although I think it would be nice to add it to the list there as well.",
"username": "TheHiddenDuck"
},
{
"code": "",
"text": "Hi @TheHiddenDuck,Thanks a lot for pointing that out. I have raised this with the docs team and they will revise the section asap.Cheers, \nhenna",
"username": "henna.s"
}
] | No RealmDictionary in Kotlin SDK | 2022-08-30T15:59:21.946Z | No RealmDictionary in Kotlin SDK | 3,171 |
[] | [
{
"code": "",
"text": "Database names that contains the underscore symbol are screwed up in the Traditional Chinese subtitles:\nScreenshot_2023-06-01-15-34-57-458_org.mozilla.firefox_Screenshot_2023-06-01-15-37-14-308_org.mozilla.firefox.merged2568×3096 392 KB\nCheers! ",
"username": "brlin"
},
{
"code": "",
"text": "Hey @brlin,It’s great to see you back in the community! Thanks for flagging this. We’ll take this up with the concerned team.Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error in the Traditional Chinese subtitles of the "Creating and Deploying at Atlas Cluster" lesson in the "Getting Started with MongoDB Atlas" course | 2023-06-01T07:42:43.225Z | Error in the Traditional Chinese subtitles of the “Creating and Deploying at Atlas Cluster” lesson in the “Getting Started with MongoDB Atlas” course | 727 |
|
null | [
"queries"
] | [
{
"code": "",
"text": "How the image_collection works with findAndModify. Is this affect CPU performance of the secondary.",
"username": "ram_Kumar3"
},
{
"code": "config.image_collectionfindAndModifystoreFindAndModifyImagesInSideCollectionretryable",
"text": "Hey @ram_Kumar3,Thank you for reaching out to the MongoDB Community forums!How the image_collection works with findAndModify.The config.image_collection is used for storing the retryable findAndModify images. Starting from MongoDB 5.1 onwards, when the storeFindAndModifyImagesInSideCollection feature is enabled, primaries processing a retryable findAndModify will write a document to this collection rather than the oplog.Is this affect CPU performance of the secondary.It may affect CPU and other resources of the node, however this will highly depend on the workload. The best way to know for sure with regard to your specific case is to experiment with your expected workload, and compare the node’s resource consumption with this feature enabled or disabled.Hope it answers your questions. In case you have any further questions please feel free to reach out to us.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
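If you want to check whether this behaviour is enabled on your own deployment, here is a minimal sketch with pymongo (the connection URI is hypothetical) that reads the server parameter mentioned above:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumption: adjust URI

# Read the current value of the server parameter controlling this feature.
result = client.admin.command(
    {"getParameter": 1, "storeFindAndModifyImagesInSideCollection": 1}
)
print(result.get("storeFindAndModifyImagesInSideCollection"))
```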
{
"code": "",
"text": "Thanks @Kushagra_Kesav\nHow does the config.image_collection collection work and what is its behavior?",
"username": "ram_Kumar3"
},
{
"code": "config.image_collection",
"text": "Hello @ram_Kumar3,How does the config.image_collection collection work and what is its behavior?In MongoDB 5.1 onwards, this collection is used to store the pre-image and post-image of a document, to put less burden on the oplog. However, this is the implementation details that may change from time to time without any further notice.May I ask if you are seeing any issues specifically due to this particular implementation? If so, please provide additional details so that we can better understand the issue.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | What is image_collection in config db | 2023-06-05T16:16:48.718Z | What is image_collection in config db | 449 |
null | [
"aggregation",
"indexes",
"performance"
] | [
{
"code": "{\n \"_id\": \"644932e30ce7a25b142bca1c\",\n \"data\": {\n \"Assistenza\": {\n \"Eventi\": {\n \"Erogazione\": [\n {\n \"TipoOperatore\": \"8\",\n \"data\": \"1672963200000\",\n \"Prestazioni\": [\n {\n \"TipoPrestazione\": 3,\n \"numPrestazione\": 1\n }\n ]\n }\n ],\n \"PresaInCarico\": {\n \"Id_Rec\": \"1902092023-01-05CF_0000000000001\",\n \"data\": \"1672876800000\"\n }\n },\n \"Erogatore\": {\n \"CodiceRegione\": \"190\",\n \"CodiceASL\": \"209\"\n },\n \"Trasmissione\": {\n \"tipo\": \"I\"\n }\n }\n },\n \"idFlusso\": \"644932cf0ce7a25b142bc632\",\n \"idTracciato\": \"574\",\n \"idUtente\": \"2\",\n \"dataCreazione\": \"2023-04-26T14:18:30.495Z\",\n \"jobId\": \"18221\",\n \"stato\": \"VERSIONED\",\n \"metadata\": {\n \"nomeFile\": \"SIAD_APS_600000_I.xml\",\n \"periodoRiferimento\": \"01\",\n \"annoRiferimento\": \"2023\",\n \"idRegione\": \"33\",\n \"valoreRegione\": \"190\",\n \"periodoRiferimentoInTrimestre\": \"1\",\n \"periodoRiferimentoInSemestre\": \"1\",\n \"periodoRiferimentoInAnno\": \"1\",\n \"Assistenza_Eventi_PresaInCarico_Id_Rec_encrypted\": \"\",\n \"idAzienda\": \"485\",\n \"valoreAzienda\": \"190209\"\n },\n \"key\": {\n \"Assistenza-Erogatore-CodiceASL\": \"209\",\n \"Assistenza-Erogatore-CodiceRegione\": \"190\",\n \"Assistenza-Eventi-Erogazione[0]-TipoOperatore\": \"8\",\n \"Assistenza-Eventi-Erogazione[0]-data\": \"1672963200000\",\n \"Assistenza-Eventi-PresaInCarico-Id_Rec\": \"1902092023-01-05CF_0000000000001\",\n \"Assistenza-Eventi-PresaInCarico-data\": \"1672876800000\",\n \"stato\": \"VERSIONED\"\n },\n \"progressivo\": \"2\",\n \"flagVersioneMassima\": true,\n \"progressivoMassimo\": \"2\",\n \"sessioniControllo\": [\n {\n \"jobId\": \"18453\",\n \"idUtente\": \"2\",\n \"dataElaborazioneControllo\": \"2023-05-25T09:10:31.331Z\",\n \"errori\": [\n {\n \"_id\": \"646f27193a95004ca8858aac\",\n \"idUtente\": \"2\",\n \"dataElaborazioneControllo\": \"2023-05-25T09:10:31.331Z\",\n \"risultatoOperazione\": [\n {\n \"idControllo\": \"4613\",\n \"codControllo\": \"FAKE1\",\n \"descControllo\": \"FAKE1\",\n \"idErrore\": \"6603\",\n \"codErrore\": \"ANOMALIA\",\n \"descErrore\": \"Anomalia\",\n \"ambitoErrore\": \"A\",\n \"gravitaErrore\": \"LIEVE\",\n \"riferimentoErrore\": \"CAMPO\",\n \"tipoErrore\": \"ANOMALIA\",\n \"idCampo\": \"44426\"\n }\n ]\n },\n {\n \"_id\": \"646f28e53a95004ca88eb26e\",\n \"idUtente\": \"2\",\n \"dataElaborazioneControllo\": \"2023-05-25T09:10:31.331Z\",\n \"risultatoOperazione\": [\n {\n \"idControllo\": \"4620\",\n \"codControllo\": \"FAKE2\",\n \"descControllo\": \"FAKE2\",\n \"idErrore\": \"6603\",\n \"codErrore\": \"ANOMALIA\",\n \"descErrore\": \"Anomalia\",\n \"ambitoErrore\": \"A\",\n \"gravitaErrore\": \"LIEVE\",\n \"riferimentoErrore\": \"CAMPO\",\n \"tipoErrore\": \"ANOMALIA\",\n \"idCampo\": \"44426\"\n }\n ]\n },\n {\n \"_id\": \"646f2aa93a95004ca897da2e\",\n \"idUtente\": \"2\",\n \"dataElaborazioneControllo\": \"2023-05-25T09:10:31.331Z\",\n \"risultatoOperazione\": [\n {\n \"idControllo\": \"4621\",\n \"codControllo\": \"FAKE3\",\n \"descControllo\": \"FAKE3\",\n \"idErrore\": \"6603\",\n \"codErrore\": \"ANOMALIA\",\n \"descErrore\": \"Anomalia\",\n \"ambitoErrore\": \"A\",\n \"gravitaErrore\": \"LIEVE\",\n \"riferimentoErrore\": \"CAMPO\",\n \"tipoErrore\": \"ANOMALIA\",\n \"idCampo\": \"44426\"\n }\n ]\n }\n ]\n },\n {\n \"jobId\": \"18455\",\n \"idUtente\": \"2\",\n \"dataElaborazioneControllo\": \"2023-05-25T13:21:31.870Z\"\n }\n ]\n}\ndb.flussi_dettagli.aggregate(\n [\n\t\t\t{\n\t\t\t\t\"$match\" : 
{\n\t\t\t\t\t\"idTracciato\" : Long(\"574\")\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"sessioniControllo.jobId\" : Long(\"18453\")\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$unwind\" : {\n\t\t\t\t\t\"path\" : \"$sessioniControllo\",\n\t\t\t\t\t\"preserveNullAndEmptyArrays\" : false\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"sessioniControllo.jobId\" : Long(\"18453\")\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$unwind\" : {\n\t\t\t\t\t\"path\" : \"$sessioniControllo.errori\",\n\t\t\t\t\t\"preserveNullAndEmptyArrays\" : false\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$unwind\" : {\n\t\t\t\t\t\"path\" : \"$sessioniControllo.errori.risultatoOperazione\",\n\t\t\t\t\t\"preserveNullAndEmptyArrays\" : false\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$count\" : \"count\"\n\t\t\t}\n\t\t]\n )\n{\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"query\" : {\n\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"idTracciato\" : 574\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : 18453\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"fields\" : {\n\t\t\t\t\t\"sessioniControllo\" : 1,\n\t\t\t\t\t\"_id\" : 0\n\t\t\t\t},\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"siact.flussi_dettagli\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"idTracciato\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 574\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 18453\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"queryHash\" : \"6E14D4E2\",\n\t\t\t\t\t\"planCacheKey\" : \"1F663360\",\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : 1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"indexName\" : \"idx_idTracciato_sessioneControllo\",\n\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : [ \"sessioniControllo\" ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : [ \"[18453, 18453]\" ]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 18453\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\t\"stato\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.informazioniRicovero.codiceIstitutoDiCura\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.informazioniRicovero.progressivoSDO\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"idx_discard_no_encrypted_573_572\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"stato\" 
: [ ],\n\t\t\t\t\t\t\t\t\t\"data.informazioniRicovero.codiceIstitutoDiCura\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.informazioniRicovero.progressivoSDO\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.informazioniRicovero.codiceIstitutoDiCura\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.informazioniRicovero.progressivoSDO\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 18453\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\t\"dataCreazione\" : 1,\n\t\t\t\t\t\t\t\t\t\"stato\" : 1,\n\t\t\t\t\t\t\t\t\t\"flagVersioneMassima\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"idx_max_version_with_data\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"dataCreazione\" : [ ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"flagVersioneMassima\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\t\"dataCreazione\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"flagVersioneMassima\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 18453\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"idTracciato_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 18453\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\t\"stato\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.PrestazioniSR.tempoParziale\" : 
1,\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.tipoPrestazione\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Data\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Erogatore.CodiceStruttura\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Erogatore.CodiceASL\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.CodiceRegione\" : 1,\n\t\t\t\t\t\t\t\t\t\"metadata.FlsResSemires_2_Chiave_ID_REC_encrypted\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Dimissione.Data\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.PrestazioniSR.tempoPieno\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"idx_discard_encrypted_567_566\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.PrestazioniSR.tempoParziale\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.tipoPrestazione\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Data\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Erogatore.CodiceStruttura\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Erogatore.CodiceASL\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.CodiceRegione\" : [ ],\n\t\t\t\t\t\t\t\t\t\"metadata.FlsResSemires_2_Chiave_ID_REC_encrypted\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Dimissione.Data\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.PrestazioniSR.tempoPieno\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.PrestazioniSR.tempoParziale\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.tipoPrestazione\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Data\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Erogatore.CodiceStruttura\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Erogatore.CodiceASL\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.CodiceRegione\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"metadata.FlsResSemires_2_Chiave_ID_REC_encrypted\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Dimissione.Data\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.PrestazioniSR.tempoPieno\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 18453\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\t\"stato\" : 1,\n\t\t\t\t\t\t\t\t\t\"flagVersioneMassima\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"idx_max_version\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"flagVersioneMassima\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : 
false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"flagVersioneMassima\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 18453\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\t\"stato\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"idTracciato_1_stato_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"executionStats\" : {\n\t\t\t\t\t\"executionSuccess\" : true,\n\t\t\t\t\t\"nReturned\" : 600000,\n\t\t\t\t\t\"executionTimeMillis\" : 14552,\n\t\t\t\t\t\"totalKeysExamined\" : 600000,\n\t\t\t\t\t\"totalDocsExamined\" : 600000,\n\t\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\"nReturned\" : 600000,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 408,\n\t\t\t\t\t\t\"works\" : 600001,\n\t\t\t\t\t\t\"advanced\" : 600000,\n\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 5572,\n\t\t\t\t\t\t\"restoreState\" : 5572,\n\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\"docsExamined\" : 600000,\n\t\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\"nReturned\" : 600000,\n\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 115,\n\t\t\t\t\t\t\t\"works\" : 600001,\n\t\t\t\t\t\t\t\"advanced\" : 600000,\n\t\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\"saveState\" : 5572,\n\t\t\t\t\t\t\t\"restoreState\" : 5572,\n\t\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : 1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"indexName\" : \"idx_idTracciato_sessioneControllo\",\n\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : [ \"sessioniControllo\" ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : [ \"[18453, 18453]\" ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"keysExamined\" : 600000,\n\t\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\t\"dupsTested\" : 600000,\n\t\t\t\t\t\t\t\"dupsDropped\" : 
0,\n\t\t\t\t\t\t\t\"indexDef\" : {\n\t\t\t\t\t\t\t\t\"indexName\" : \"idx_idTracciato_sessioneControllo\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : [ \"sessioniControllo\" ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$unwind\" : {\n\t\t\t\t\"path\" : \"$sessioniControllo\"\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$match\" : {\n\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\"$eq\" : 18453\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$unwind\" : {\n\t\t\t\t\"path\" : \"$sessioniControllo.errori\"\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$unwind\" : {\n\t\t\t\t\"path\" : \"$sessioniControllo.errori.risultatoOperazione\"\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$group\" : {\n\t\t\t\t\"_id\" : {\n\t\t\t\t\t\"$const\" : null\n\t\t\t\t},\n\t\t\t\t\"count\" : {\n\t\t\t\t\t\"$sum\" : {\n\t\t\t\t\t\t\"$const\" : 1\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$project\" : {\n\t\t\t\t\"_id\" : false,\n\t\t\t\t\"count\" : true\n\t\t\t}\n\t\t}\n\t],\n\t\"ok\" : 1\n}\n",
"text": "Hi to all,\non a standalone server of MongoDB 4.2i have a collection with 23 Millions of document like thisand execute aggregationthe first match on sessioniControllo.jobId is for force mongo to use index that i have on {idTracciato:1,sessioniControllo.jobId: 1}the count ran in about 17 sec.below the explain:",
"username": "ilmagowalter"
},
{
"code": "",
"text": "Just some thoughts.1 - may be you can merge your 2 initial $match into a single one\n2 - your document seems pretty big so it might be useful to $project only what you $unwind before $unwind so that it reduce the memory usage\n3 - you $unwind sessioniControllo and then $match, may be you could $filter and then $unwind and forgo the $match, this way only what $match’es is $unwind’ed.\n4 - you could possibly replace the last $unwind with a $reduce and then $sum the result rather than $count",
"username": "steevej"
},
{
"code": "db.flussi_dettagli.aggregate([\n {\n \"$match\": {\n \"idTracciato\": Long(\"574\"),\n \"sessioniControllo.jobId\": Long(\"18453\") // if not put this condition not exclude documents that not have \"sessioniControllo\"\n }\n },\n {\n \"$project\": {\n \"sessioniControllo\": {\n $filter: {\n input: \"$sessioniControllo\",\n as: \"elem\",\n cond: { $eq: [\"$$elem.jobId\", Long(\"18453\")] }\n }\n },\n }\n },\n {\n $project: {\n \"totale\": {\n \"$reduce\": {\n input: \"$sessioniControllo\",\n initialValue: 0,\n in: {\n $add: [\"$$value\",\n {\n \"$reduce\": {\n input: \"$$this.errori\",\n initialValue: 0,\n in: {\n $add: [\"$$value\", { $size: \"$$this.risultatoOperazione\" }]\n }\n }\n }\n ]\n }\n }\n }\n }\n },\n {\n $group: {\n _id: null,\n total: {\n $sum: \"$totale\" // campo da sommare\n }\n }\n }\n]\n)\n ;\n\n",
"text": "Hi Steeve,\ni tried withthe query ran in 10 sec., so is better but slowthe result documents before unwind are 600.000, after all unwind are 1800000",
"username": "ilmagowalter"
},
{
"code": "db.flussi_dettagli.aggregate(\n [\n {\n \"$match\": {\n \"idTracciato\": Long(\"574\"),\n \"sessioniControllo.jobId\": Long(\"18453\")\n }\n },\n {\n \"$count\": \"count\"\n }\n ])\n ;\ndb.flussi_dettagli.aggregate(\n [\n {\n \"$match\": {\n \"idTracciato\": Long(\"574\"),\n \"sessioniControllo.jobId\": Long(\"18453\")\n }\n },\n {\n \"$project\": {\n \"sessioniControllo\": {\n $filter: {\n input: \"$sessioniControllo\",\n as: \"elem\",\n cond: { $eq: [\"$$elem.jobId\", Long(\"18453\")] }\n }\n },\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$sessioniControllo\",\n \"preserveNullAndEmptyArrays\": false\n }\n },\n {\n \"$count\": \"count\"\n }\n ])\n ;\ndb.flussi_dettagli.aggregate(\n [\n {\n \"$match\": {\n \"idTracciato\": Long(\"574\"),\n \"sessioniControllo.jobId\": Long(\"18453\")\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$sessioniControllo\",\n \"preserveNullAndEmptyArrays\": false\n }\n },\n {\n \"$match\": {\n \"sessioniControllo.jobId\": Long(\"18453\")\n }\n },\n {\n \"$count\": \"count\"\n }\n ])\n ;\ndb.flussi_dettagli.aggregate(\n [\n {\n \"$match\": {\n \"idTracciato\": Long(\"574\"),\n \"sessioniControllo.jobId\": Long(\"18453\")\n }\n },\n {\n \"$project\": {\n \"sessioniControllo\": {\n $filter: {\n input: \"$sessioniControllo\",\n as: \"elem\",\n cond: { $eq: [\"$$elem.jobId\", Long(\"18453\")] }\n }\n },\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$sessioniControllo\",\n \"preserveNullAndEmptyArrays\": false\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$sessioniControllo.errori\",\n \"preserveNullAndEmptyArrays\": false\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$sessioniControllo.errori.risultatoOperazione\",\n \"preserveNullAndEmptyArrays\": false\n }\n },\n {\n \"$count\": \"count\"\n }\n ])\n ;\n",
"text": "some attempts:656 millisecond8,5 second11 second11 second.it seems that the most expensive stage is the first unwind",
"username": "ilmagowalter"
},
{
"code": "{\n $project: {\n \"totale\": {\n \"$reduce\": {\n input: { $filter: {\n input: \"$sessioniControllo\",\n as: \"elem\",\n cond: { $eq: [\"$$elem.jobId\", Long(\"18453\")] }\n } }\n initialValue: 0,\n in: {\n $add: [\"$$value\",\n {\n \"$reduce\": {\n input: \"$$this.errori\",\n initialValue: 0,\n in: {\n $add: [\"$$value\", { $size: \"$$this.risultatoOperazione\" }]\n }\n }\n }\n ]\n }\n }\n }\n }\n }\n",
"text": "it seems that the most expensive stage is the first unwindThe $unwind stage is expensive because as you have seendocuments before unwind are 600.000, after all unwind are 1800000that is why avoid it reduce the memory consumption and work to do.The fact that with $match and $count is really fast:656 millisecondmeans that you indexes work.One thing you could try is to move the $filter of the first $project into the input: value of the second $project. So you would $match and then:It is the same code but run in a single stage. I usually prefer simpler stages as it is easier to code, to debug, to read and to understand. But if my preferences hinder performance, then hell with my preferences.A slightly more complex alternative of the above would be to still use $sessioniControllo as the top input: but use the $cond expression (from the $filter) to only $add the matching jobId.",
"username": "steevej"
},
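For reference, the same per-document total can also be written with $map and $sum instead of nested $reduce. This is only an equivalent sketch using the field names from this thread; the $ifNull guards are an assumption added in case some elements are missing the nested arrays:

    { $project: { totale: { $sum: { $map: {
        input: { $filter: { input: "$sessioniControllo", as: "s",
                            cond: { $eq: [ "$$s.jobId", Long("18453") ] } } },
        as: "s",
        in: { $sum: { $map: {
            input: { $ifNull: [ "$$s.errori", [] ] },
            as: "e",
            in: { $size: { $ifNull: [ "$$e.risultatoOperazione", [] ] } }
        } } }
    } } } } }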
{
"code": "",
"text": "i tried with one stage project, but performance is not betterA slightly more complex alternative of the above would be to still use $sessioniControllo as the top input: but use the $cond expression (from the $filter) to only $add the matching jobId.i don’t understand, please can you explain?",
"username": "ilmagowalter"
},
{
"code": "",
"text": "I give some more informationthe result of query is used two timesthe user want that total records is always showed ( don’t want “1 of more” )i tried $facet operator too, but don’t have relevant changes",
"username": "ilmagowalter"
},
{
"code": "{\n $project: {\n \"totale\": {\n \"$reduce\": {\n input: \"$sessioniControllo\" ,\n initialValue: 0 ,\n in: {\n $add: [ \"$$value\",\n { \"cond\" : [\n { \"$ne\" : [ \"$$this.jobId\", Long(\"18453\") ] } ,\n 0 ,\n { \"$reduce\" : {\n input: \"$$this.errori\" ,\n initialValue: 0 ,\n in: {\n $add: [ \"$$value\" , { $size: \"$$this.risultatoOperazione\" } ]\n }\n } }\n ] } \n ]\n }\n }\n }\n }\n }\n",
"text": "A slightly more complex alternative of the above would be to still use $sessioniControllo as the top input: but use the $cond expression (from the $filter) to only $add the matching jobId.Basically, the top level $reduce $add (using $cond) 0 if the jobId does not match but $add the inner most $reduce if it does.",
"username": "steevej"
},
{
"code": "",
"text": "unfortunatly no better performance maybe i would consider to store information already unwinded in others collection… but… i’m not sure…",
"username": "ilmagowalter"
},
{
"code": "\t\t\t{\n\t\t\t\t\"$skip\" : 0\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$limit\" : 16\n\t\t\t}\n",
"text": "finally I agreed with the user to only count the first 100,000 recordsif the records are more i will not show the total page number.But i have another question, if i ran the same query without $count but withif i have at least 16 records as result, the query ran in 162msif result is 0 records, the query ran in 25 seconds… i think because mongo fetch all 600.000 records…looking example i put in first post,\nwith jobId 18453 found records with sessioniControllo.errori … and is fast\nwith jobId 18455 not found records with sessioniControllo.errori … and is slow",
"username": "ilmagowalter"
},
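Since the requirement was relaxed to counting at most the first 100,000 records, one simple way to bound the counting work is to put a $limit before the $count. This is just a sketch assuming the same initial $match as in the queries above:

    db.flussi_dettagli.aggregate([
      { $match: { idTracciato: Long("574"), "sessioniControllo.jobId": Long("18453") } },
      { $limit: 100000 },   // stop examining documents once the cap is reached
      { $count: "count" }
    ])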
{
"code": "db.flussi_dettagli.aggregate([\n {\n \"$match\": {\n \"idTracciato\": Long(\"574\"),\n \"sessioniControllo.jobId\": Long(\"18455\") // if not put this condition not exclude documents that not have \"sessioniControllo\"\n }\n },\n {\n \"$project\": {\n \"sessioniControllo\": {\n $filter: {\n input: \"$sessioniControllo\",\n as: \"elem\",\n cond: { $eq: [\"$$elem.jobId\", Long(\"18455\")] }\n }\n },\n }\n },\n {\n \"$match\": {\n \"sessioniControllo.errori\": {\n \"$exists\": true\n }\n }\n },\n]\n)\n ;\n{\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"query\" : {\n\t\t\t\t\t\"idTracciato\" : 574,\n\t\t\t\t\t\"sessioniControllo.jobId\" : 18455\n\t\t\t\t},\n\t\t\t\t\"fields\" : {\n\t\t\t\t\t\"sessioniControllo\" : 1,\n\t\t\t\t\t\"_id\" : 1\n\t\t\t\t},\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"siact.flussi_dettagli\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"idTracciato\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 574\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 18455\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"queryHash\" : \"6E14D4E2\",\n\t\t\t\t\t\"planCacheKey\" : \"39D753AB\",\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : 1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"indexName\" : \"idx_idTracciato_sessioneControllo\",\n\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : [ \"sessioniControllo\" ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : [ \"[18455, 18455]\" ]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 18455\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"idTracciato_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 18455\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : 
\"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\t\"stato\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.PrestazioniSR.tempoParziale\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.tipoPrestazione\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Data\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Erogatore.CodiceStruttura\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Erogatore.CodiceASL\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.CodiceRegione\" : 1,\n\t\t\t\t\t\t\t\t\t\"metadata.FlsResSemires_2_Chiave_ID_REC_encrypted\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Dimissione.Data\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.PrestazioniSR.tempoPieno\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"idx_discard_encrypted_567_566\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.PrestazioniSR.tempoParziale\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.tipoPrestazione\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Data\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Erogatore.CodiceStruttura\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Erogatore.CodiceASL\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.CodiceRegione\" : [ ],\n\t\t\t\t\t\t\t\t\t\"metadata.FlsResSemires_2_Chiave_ID_REC_encrypted\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Dimissione.Data\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.PrestazioniSR.tempoPieno\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.PrestazioniSR.tempoParziale\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.tipoPrestazione\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Data\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Erogatore.CodiceStruttura\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Chiave.Erogatore.CodiceASL\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.CodiceRegione\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"metadata.FlsResSemires_2_Chiave_ID_REC_encrypted\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.Dimissione.Data\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.FlsResSemires_2.PrestazioniSR.tempoPieno\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 18455\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\t\"stato\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.informazioniRicovero.codiceIstitutoDiCura\" : 1,\n\t\t\t\t\t\t\t\t\t\"data.informazioniRicovero.progressivoSDO\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : 
\"idx_discard_no_encrypted_573_572\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.informazioniRicovero.codiceIstitutoDiCura\" : [ ],\n\t\t\t\t\t\t\t\t\t\"data.informazioniRicovero.progressivoSDO\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.informazioniRicovero.codiceIstitutoDiCura\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"data.informazioniRicovero.progressivoSDO\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 18455\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\t\"dataCreazione\" : 1,\n\t\t\t\t\t\t\t\t\t\"stato\" : 1,\n\t\t\t\t\t\t\t\t\t\"flagVersioneMassima\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"idx_max_version_with_data\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"dataCreazione\" : [ ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"flagVersioneMassima\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\t\"dataCreazione\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"flagVersioneMassima\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 18455\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\t\"stato\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"idTracciato_1_stato_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : 
18455\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\t\"stato\" : 1,\n\t\t\t\t\t\t\t\t\t\"flagVersioneMassima\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"idx_max_version\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"flagVersioneMassima\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\t\"stato\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\t\t\"flagVersioneMassima\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : 1,\n\t\t\t\t\t\t\t\t\t\"sessioniControllo.errori\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"idx_idTracciato_sessioneControllo_errori\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : [ \"sessioniControllo\" ],\n\t\t\t\t\t\t\t\t\t\"sessioniControllo.errori\" : [ \"sessioniControllo\", \"sessioniControllo.errori\" ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : [ \"[18455, 18455]\" ],\n\t\t\t\t\t\t\t\t\t\"sessioniControllo.errori\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"executionStats\" : {\n\t\t\t\t\t\"executionSuccess\" : true,\n\t\t\t\t\t\"nReturned\" : 600000,\n\t\t\t\t\t\"executionTimeMillis\" : 21741,\n\t\t\t\t\t\"totalKeysExamined\" : 600000,\n\t\t\t\t\t\"totalDocsExamined\" : 600000,\n\t\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\"nReturned\" : 600000,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 333,\n\t\t\t\t\t\t\"works\" : 600001,\n\t\t\t\t\t\t\"advanced\" : 600000,\n\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 6977,\n\t\t\t\t\t\t\"restoreState\" : 6977,\n\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\"docsExamined\" : 600000,\n\t\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\"nReturned\" : 600000,\n\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 131,\n\t\t\t\t\t\t\t\"works\" : 600001,\n\t\t\t\t\t\t\t\"advanced\" : 600000,\n\t\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\"saveState\" : 6977,\n\t\t\t\t\t\t\t\"restoreState\" : 6977,\n\t\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : 
1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"indexName\" : \"idx_idTracciato_sessioneControllo\",\n\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : [ \"sessioniControllo\" ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\"idTracciato\" : [ \"[574, 574]\" ],\n\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : [ \"[18455, 18455]\" ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"keysExamined\" : 600000,\n\t\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\t\"dupsTested\" : 600000,\n\t\t\t\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\t\t\t\"indexDef\" : {\n\t\t\t\t\t\t\t\t\"indexName\" : \"idx_idTracciato_sessioneControllo\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : [ ],\n\t\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : [ \"sessioniControllo\" ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\t\t\t\t\"sessioniControllo.jobId\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$project\" : {\n\t\t\t\t\"_id\" : true,\n\t\t\t\t\"sessioniControllo\" : {\n\t\t\t\t\t\"$filter\" : {\n\t\t\t\t\t\t\"input\" : \"$sessioniControllo\",\n\t\t\t\t\t\t\"as\" : \"elem\",\n\t\t\t\t\t\t\"cond\" : {\n\t\t\t\t\t\t\t\"$eq\" : [\n\t\t\t\t\t\t\t\t\"$$elem.jobId\",\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"$const\" : 18455\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$match\" : {\n\t\t\t\t\"sessioniControllo.errori\" : {\n\t\t\t\t\t\"$exists\" : true\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t],\n\t\"ok\" : 1\n}\n",
"text": "i tried not use $unwind ( only to find if i have results or not )\nbut the result is the same ( 27 seconds )explain:",
"username": "ilmagowalter"
}
] | Improve performance aggregation count | 2023-05-25T15:04:09.778Z | Improve performance aggregation count | 1,028 |
null | [
"queries",
"indexes"
] | [
{
"code": "db.test.createIndex( {\"beneAccount\": 1, \"createdDate\": 1} )\ndb.test.find({ \"beneAccount\" : \"345678901In\", \"createdDate\" : { \"$gt\" : { \"$date\" : \"2023-04-05T16:28:28.139Z\"}}}, { _id: 0, createdDate:1, beneAccount: 1}).hint(\"beneAccount_1_createdDate_1\").explain('executionStats');\n{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'db1.test',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [\n {\n beneAccount: {\n '$eq': '345678901In'\n }\n },\n {\n createdDate: {\n '$gt': 2023-04-05T16:28:28.139Z\n }\n }\n ]\n },\n queryHash: '7A509538',\n planCacheKey: '1DCD7324',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'PROJECTION_COVERED',\n transformBy: {\n _id: 0,\n createdDate: 1,\n beneAccount: 1\n },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: {\n beneAccount: 1,\n createdDate: 1\n },\n indexName: 'beneAccount_1_createdDate_1',\n isMultiKey: false,\n multiKeyPaths: {\n beneAccount: [],\n createdDate: []\n },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n beneAccount: [\n '[\"345678901In\", \"345678901In\"]'\n ],\n createdDate: [\n '({ $date: \"2023-04-05T16:28:28.139Z\" }, [])'\n ]\n }\n }\n },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 0,\n executionTimeMillis: 0,\n totalKeysExamined: 0,\n totalDocsExamined: 0,\n executionStages: {\n stage: 'PROJECTION_COVERED',\n nReturned: 0,\n executionTimeMillisEstimate: 0,\n works: 1,\n advanced: 0,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n transformBy: {\n _id: 0,\n createdDate: 1,\n beneAccount: 1\n },\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 0,\n executionTimeMillisEstimate: 0,\n works: 1,\n advanced: 0,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n keyPattern: {\n beneAccount: 1,\n createdDate: 1\n },\n indexName: 'beneAccount_1_createdDate_1',\n isMultiKey: false,\n multiKeyPaths: {\n beneAccount: [],\n createdDate: []\n },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n beneAccount: [\n '[\"345678901In\", \"345678901In\"]'\n ],\n createdDate: [\n '({ $date: \"2023-04-05T16:28:28.139Z\" }, [])'\n ]\n },\n keysExamined: 0,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n }\n },\n command: {\n find: 'test',\n filter: {\n beneAccount: '345678901In',\n createdDate: {\n '$gt': 2023-04-05T16:28:28.139Z\n }\n },\n projection: {\n _id: 0,\n createdDate: 1,\n beneAccount: 1\n },\n hint: 'beneAccount_1_createdDate_1',\n '$db': 'db1'\n },\n serverInfo: {\n host: '11111',\n port: 27017,\n version: '6.0.4',\n gitVersion: '441'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1685111492, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: 0\n }\n },\n operationTime: Timestamp({ t: 1685111492, i: 1 })\n}\ndb.test.find({ 
\"$or\" : [{ \"beneAccount\" : \"345678901In\", \"createdDate\" : { \"$gt\" : { \"$date\" : \"2023-04-05T16:28:28.139Z\"}}}, { \"beneAccount\" : \"145678901In\", \"createdDate\" : { \"$gt\" : { \"$date\" : \"2023-04-05T16:28:28.14Z\"}}}]}, { _id: 0, createdDate:1, beneAccount: 1}).explain('executionStats');\n{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'db1.test ',\n indexFilterSet: false,\n parsedQuery: {\n '$or': [\n {\n '$and': [\n {\n beneAccount: {\n '$eq': '345678901In'\n }\n },\n {\n createdDate: {\n '$gt': 2023-04-05T16:28:28.139Z\n }\n }\n ]\n },\n {\n '$and': [\n {\n beneAccount: {\n '$eq': '145678901In'\n }\n },\n {\n createdDate: {\n '$gt': 2023-04-05T16:28:28.140Z\n }\n }\n ]\n }\n ]\n },\n queryHash: '058FF00C',\n planCacheKey: '01157F11',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'SUBPLAN',\n inputStage: {\n stage: 'PROJECTION_DEFAULT',\n transformBy: {\n _id: 0,\n createdDate: 1,\n beneAccount: 1\n },\n inputStage: {\n stage: 'OR',\n inputStages: [\n {\n stage: 'IXSCAN',\n keyPattern: {\n beneAccount: 1,\n createdDate: 1\n },\n indexName: 'beneAccount_1_createdDate_1',\n isMultiKey: false,\n multiKeyPaths: {\n beneAccount: [],\n createdDate: []\n },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n beneAccount: [\n '[\"345678901In\", \"345678901In\"]'\n ],\n createdDate: [\n '({ $date: \"2023-04-05T16:28:28.139Z\" }, [])'\n ]\n }\n },\n {\n stage: 'IXSCAN',\n keyPattern: {\n beneAccount: 1,\n createdDate: 1\n },\n indexName: 'beneAccount_1_createdDate_1',\n isMultiKey: false,\n multiKeyPaths: {\n beneAccount: [],\n createdDate: []\n },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n beneAccount: [\n '[\"145678901In\", \"145678901In\"]'\n ],\n createdDate: [\n '({ $date: \"2023-04-05T16:28:28.14Z\" }, [])'\n ]\n }\n }\n ]\n }\n }\n },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 0,\n executionTimeMillis: 0,\n totalKeysExamined: 0,\n totalDocsExamined: 0,\n executionStages: {\n stage: 'SUBPLAN',\n nReturned: 0,\n executionTimeMillisEstimate: 0,\n works: 2,\n advanced: 0,\n needTime: 1,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n inputStage: {\n stage: 'PROJECTION_DEFAULT',\n nReturned: 0,\n executionTimeMillisEstimate: 0,\n works: 2,\n advanced: 0,\n needTime: 1,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n transformBy: {\n _id: 0,\n createdDate: 1,\n beneAccount: 1\n },\n inputStage: {\n stage: 'OR',\n nReturned: 0,\n executionTimeMillisEstimate: 0,\n works: 2,\n advanced: 0,\n needTime: 1,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n dupsTested: 0,\n dupsDropped: 0,\n inputStages: [\n {\n stage: 'IXSCAN',\n nReturned: 0,\n executionTimeMillisEstimate: 0,\n works: 1,\n advanced: 0,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n keyPattern: {\n beneAccount: 1,\n createdDate: 1\n },\n indexName: 'beneAccount_1_createdDate_1',\n isMultiKey: false,\n multiKeyPaths: {\n beneAccount: [],\n createdDate: []\n },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n beneAccount: [\n '[\"345678901In\", \"345678901In\"]'\n ],\n createdDate: [\n '({ $date: \"2023-04-05T16:28:28.139Z\" }, [])'\n ]\n },\n keysExamined: 0,\n seeks: 1,\n dupsTested: 
0,\n dupsDropped: 0\n },\n {\n stage: 'IXSCAN',\n nReturned: 0,\n executionTimeMillisEstimate: 0,\n works: 1,\n advanced: 0,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n keyPattern: {\n beneAccount: 1,\n createdDate: 1\n },\n indexName: 'beneAccount_1_createdDate_1',\n isMultiKey: false,\n multiKeyPaths: {\n beneAccount: [],\n createdDate: []\n },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n beneAccount: [\n '[\"145678901In\", \"145678901In\"]'\n ],\n createdDate: [\n '({ $date: \"2023-04-05T16:28:28.14Z\" }, [])'\n ]\n },\n keysExamined: 0,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n ]\n }\n }\n }\n },\n command: {\n find: 'test ',\n filter: {\n '$or': [\n {\n beneAccount: '345678901In',\n createdDate: {\n '$gt': 2023-04-05T16:28:28.139Z\n }\n },\n {\n beneAccount: '145678901In',\n createdDate: {\n '$gt': 2023-04-05T16:28:28.140Z\n }\n }\n ]\n },\n projection: {\n _id: 0,\n createdDate: 1,\n beneAccount: 1\n },\n '$db': 'sbcbfbmy'\n },\n serverInfo: {\n host: '111111',\n port: 27017,\n version: '6.0.4',\n gitVersion: '44ff59461c1353638a71e710f'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1685111862, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: 0\n }\n },\n operationTime: Timestamp({ t: 1685111862, i: 1 })\n}\n",
"text": "I have a collection with almost 2 million docs.I have created a compound index as below:I run simple query below:Below is the executionStats, here it shown PROJECTION_COVERED which means it use my created compound index.The I run below OR query:Then I got below executionStats which stated it used PROJECTION_DEFAULT and not PROJECTION_COVERED.May I know what I am doing wrong and how I can make $OR query to use PROJECTION_DEFAULT? Appreciated and thanks in advance for all the help given.",
"username": "Emrul_Haikal"
},
{
"code": "PROJECTION_COVERED$and$or",
"text": "Hi @Emrul_Haikal and welcome to MongoDB community forums!!The PROJECTION_COVERED stage represents the utilisation of all fields mentioned in the create index commands by the query. In the case of the first query, which is an $and operation and utilises the fields “beneAccount” and “createdDate,” it qualifies as a covered query. However, in the second scenario, where an $or operation is used, the query does not meet the criteria to be considered a covered query.how I can make $OR query to use PROJECTION_DEFAULT?The above statement is a bit unclear to me as the the query with $or operator makes use of the PROJECTION_DEFAULT in the explain output.The PROJECTION_COVERED query is an index covers a query only when both all the fields in the query are part of an index, and all the fields returned in the results are in the same index, which generally means we’d be filtering out _id.To simplify,Let me know if you have any questions.Regards\nAasawari",
"username": "Aasawari"
}
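If keeping the query covered matters more than making a single round trip, one possible workaround (a sketch, not the only option) is to run each $or branch as its own find; each branch then has the same covered shape as the first query in this thread:

    // each branch on its own is covered by beneAccount_1_createdDate_1
    db.test.find(
      { beneAccount: "345678901In", createdDate: { $gt: ISODate("2023-04-05T16:28:28.139Z") } },
      { _id: 0, beneAccount: 1, createdDate: 1 }
    )
    db.test.find(
      { beneAccount: "145678901In", createdDate: { $gt: ISODate("2023-04-05T16:28:28.140Z") } },
      { _id: 0, beneAccount: 1, createdDate: 1 }
    )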
] | MongoDb not using compound index for an $or query | 2023-05-26T14:43:16.261Z | MongoDb not using compound index for an $or query | 714 |
null | [] | [
{
"code": "/* 1 */\n{\n \"_id\" : ObjectId(\"6197a78308591026b742cbc7\"),\n \"coordinates\" : [ \n -8.88180350512795, \n 38.5628716186268\n ]\n}\n\n/* 2 */\n{\n \"_id\" : ObjectId(\"6199798916317b0c2dcab874\"),\n \"coordinates\" : [ \n -9.15904389999993, \n 38.7235087\n ]\n}\n\n/* 3 */\n{\n \"_id\" : ObjectId(\"6199798916317b0c2dcab874\"),\n \"coordinates\" : [ \n -8.6923178, \n 41.1846394\n ]\n}\ndb.getCollection('users').aggregate([\n{ $unwind: '$addresses' },\n{ $project: { coordinates: [ '$addresses.region.longitude', '$addresses.region.latitude' ] } },\n{\n $lookup: {\n from: 'deliveryareas',\n let: { userCoordinates: '$coordinates' },\n pipeline: [\n { $match: {\n area: {\n $geoIntersects: {\n $geometry: {\n type: 'Point',\n coordinates: '$userCoordinates',\n },\n },\n }\n } },\n ],\n as: 'inRegion',\n },\n},\n])\n",
"text": "Can anyone help me with this, if i run this query:\ndb.getCollection(‘users’).aggregate([\n{ $unwind: ‘$addresses’ },\n{ $project: { coordinates: [ ‘$addresses.region.longitude’, ‘$addresses.region.latitude’ ] } },\n])I get this results:Then if i add a lookup with a pipeline:I get the error: i have the error: Point must be an array or object!But if i replace the variable $$userCoordinates by a fixed value, for example [ -8.88180350512795, 38.5628716186268 ] (that i have on the users) the query run with success.",
"username": "Hugo_Ferreira"
},
{
"code": "",
"text": "I’m getting the exact issue, how can we resolve this",
"username": "sindhu_N1"
}
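As far as I know, $geoIntersects does not evaluate aggregation variables such as $$userCoordinates, which is why only a literal point works. A minimal two-step workaround, consistent with the observation above, is to resolve the coordinates first and then pass them as a literal; the _id and the use of the first address below are illustrative assumptions:

    const user = db.users.findOne({ _id: ObjectId("6197a78308591026b742cbc7") });
    const point = [ user.addresses[0].region.longitude, user.addresses[0].region.latitude ];
    db.deliveryareas.find({
      area: { $geoIntersects: { $geometry: { type: "Point", coordinates: point } } }
    })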
] | How to mongo aggregate with lookup and geoIntersects | 2021-11-21T20:43:35.199Z | How to mongo aggregate with lookup and geoIntersects | 2,789 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hi Everyone,I have query like:{ $and: [{“status”: “published”},{“enableLocale.en”: true},{“modelSlug”:{$ne: “abc_abc”} }, {“enableListing.en”: true}, {$or: [ { “countryTagsName.en”: “Pakistan” }, {“agencyTagsName.en”: “Abcde”}, {“agencyTagsName.en”: “Abcdef”}, {“focusTagsName.en”: “Education”}, {“tagsName.en”: “|Abcdefg”}]}]}how can we calculate score for documents that match the query?",
"username": "abdul_faizan"
},
{
"code": "",
"text": "Please read Formatting code and log snippets in posts and update your query.Please provide sample documents.Please define by what you mean bycalculate score for documents",
"username": "steevej"
},
{
"code": "[\n {\n \"countryTagsName\":{\n \"en\": \"Pakistan\"\n },\n \"agencyTagsName\": {\n \"en\": \"Abcde\"\n },\n \"score\": 2\n },\n {\n \"countryTagsName\":{\n \"en\": \"Pakistan\"\n },\n \"agencyTagsName\": {\n \"en\": \"Ab\"\n },\n \"score\": 1\n }\n]\n",
"text": "Ok @steevej\nlet say I’m running this simple query in my dataset\n\nCapture1060×170 24.6 KB\nand it work fine but I want to show those document on top which match both $OR condition and those which match one condition show on last so for that I want to add score to each document if it match both condition then add score = 2 and if match one condition then score = 1 and then i’ll sort it with score.but problem is how to calculate score for each document ?I want my query to return this type of document like:I read this https://www.mongodb.com/docs/atlas/atlas-search/scoring/\nlink suggest to create index but I have very long query of $or which match 9 to 10 fields and creating such large compound index is I thing not good so that I don’t wanna create index for that.I have one other solution to get record from query and add score and sort it from JavaScript code but I want to do that from query.can you please help me out from this.Thank you in advance.",
"username": "abdul_faizan"
},
{
"code": "",
"text": "Atlas search and queries like you shared are 2 different things.The only way I can see how you may achieve that with aggregation is to",
"username": "steevej"
},
{
"code": "",
"text": "@steevej can you please explain or write query for how to use $addFields and $sum $or $cond together for my case ?",
"username": "abdul_faizan"
},
{
"code": "match_1 = { \"countryTagsName.en\" : \"Pakistan\" }\n...\nmatch_N = { \"focusTagsName.en\" , \"Education\" }\nmatch = { $match : { $or : [ match_1 , match_2 , ... , match_N ] } }\nscore = { $sum : [ { $cond : {match_1,1,0} , { $cond : {match_2,1,0}} , ... , { $cond : {match_N,1,0}} ] }\naddFields = { $addFields : score }\nsort = { $sort : { score : -1 } }\npipeline = [ match , addFields , sort ]\n",
"text": "What have you tried so far?Did you look at the many examples in the documentation for $addFields, $sum, $or and $cond.",
"username": "steevej"
},
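A concrete version of the sketch above, using just two of the conditions from this thread (the collection name posts is a placeholder assumption), could look like this:

    db.posts.aggregate([
      { $match: {
          status: "published", "enableLocale.en": true, "enableListing.en": true,
          modelSlug: { $ne: "abc_abc" },
          $or: [ { "countryTagsName.en": "Pakistan" }, { "agencyTagsName.en": "Abcde" } ]
      } },
      { $addFields: { score: { $sum: [
          { $cond: [ { $eq: [ "$countryTagsName.en", "Pakistan" ] }, 1, 0 ] },
          { $cond: [ { $eq: [ "$agencyTagsName.en", "Abcde" ] }, 1, 0 ] }
      ] } } },
      { $sort: { score: -1 } }
    ])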
{
"code": "",
"text": "A post was split to a new topic: Atlas Search - scoring",
"username": "Jason_Tran"
}
] | How to calculate score for query of $and $or document? | 2022-09-28T11:22:32.961Z | How to calculate score for query of $and $or document? | 4,267 |
null | [
"indexes"
] | [
{
"code": "",
"text": "Hello Team,Does mongo support creating of indexes with partial expression having a condition CreatedDate: $gte and $lte <CurrentDate+6months>. Potentially index only documents in my collection having CreatedDate in -6 to +6 months.Regards,\nLaks",
"username": "Laks"
},
{
"code": "",
"text": "Hello @Laks, Welcome back to the MongoDB community forum,I would suggest you take a look at the Partial Index doc,",
"username": "turivishal"
},
{
"code": "",
"text": "CurrentDateCurrentDate is a fixed value once processed. So yes, but only for “today” + 6 and “today” - 6.i don’t think mongodb server will re-evaluate this expression every day.",
"username": "Kobe_W"
}
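To illustrate that point, a partial index along these lines (a sketch; the collection name and dates are placeholders) would only ever cover the window that was current when it was built, so it would have to be dropped and recreated periodically to keep a rolling ±6-month window:

    // the boundaries are evaluated once, at index-creation time
    db.myCollection.createIndex(
      { CreatedDate: 1 },
      { partialFilterExpression: {
          CreatedDate: { $gte: ISODate("2022-12-06T00:00:00Z"), $lte: ISODate("2023-12-06T00:00:00Z") }
      } }
    )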
] | Partial Filter Expression with varying date range | 2023-06-06T10:13:33.583Z | Partial Filter Expression with varying date range | 695 |
null | [
"free-monitoring"
] | [
{
"code": "",
"text": "Hi, We are installing MongoDB (Community Edition) in our ENV. I am looking to setup monitoring of the instance using Prometheus * Grafana dashboards.\nPlease guide me to a mongoDB exporter and its compatible/corresponding predefined/canned Grafana Dashboard.",
"username": "Viswa_Rudraraju"
},
{
"code": "",
"text": "A Prometheus exporter for MongoDB including sharding, replication and storage engines - GitHub - percona/mongodb_exporter: A Prometheus exporter for MongoDB including sharding, replication and stor...",
"username": "Kobe_W"
}
] | Looking for compatible mongoDB exporter and Grafana Dashboard | 2023-06-06T14:14:39.999Z | Looking for compatible mongoDB exporter and Grafana Dashboard | 1,163 |
null | [
"android",
"flutter"
] | [
{
"code": "Launching lib/main.dart on iPhone 14 Pro Max in debug mode...\nXcode build done. 9.6s\n[VERBOSE-2:FlutterDarwinContextMetalImpeller.mm(35)] Using the Impeller rendering backend.\nConnecting to VM Service at ws://127.0.0.1:51396/Gs4bnXxaf68=/ws\nflutter: 2023-06-05T09:38:47.939773: [INFO] Realm: Realm sync client ([realm-core-13.12.0])\nflutter: 2023-06-05T09:38:47.949838: [INFO] Realm: Platform: iOS Darwin 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:52:24 PDT 2023; root:xnu-8796.121.2~5/RELEASE_ARM64_T6000 x86_64\nflutter: 2023-06-05T09:38:47.951404: [INFO] Realm: Connection[1]: Session[1]: Binding '/Users/joshuawhitehouse/Library/Developer/CoreSimulator/Devices/AAA219F3-7055-4AA6-A6E9-BEF251DB60F8/data/Containers/Data/Application/56E93498-9D98-412E-8D3F-912E3B451867/Documents/mongodb-realm/stellaevents-iplin/63d7d4954db30956f6ae2080/default.realm' to ''\nflutter: 2023-06-05T09:38:47.951590: [INFO] Realm: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, client reset = false\nflutter: 2023-06-05T09:38:47.951823: [INFO] Realm: Connection[1]: Connecting to 'wss://ws.realm.mongodb.com:443/api/client/v2.0/app/stellaevents-iplin/realm-sync'\nflutter: 2023-06-05T09:38:47.952130: [INFO] Realm: Connected to endpoint '34.227.4.145:443' (from '192.168.1.10:51402')\nflutter: 2023-06-05T09:38:47.955877: [INFO] Realm: Connection[1]: Connected to app services with request id: \"647de56764136a45a9a47cd6\"\nflutter: 2023-06-05T09:38:49.191796: [INFO] Realm: Connection[1]: Session[1]: Received: ERROR \"Invalid query (IDENT, QUERY): failed to parse query: query contains table not in schema: \"StellaEvent\"\" (error_code=226, try_again=false, error_action=ApplicationBug)\nflutter: 2023-06-05T09:38:49.198855: [ERROR] Realm: SyncSessionError message: Invalid query (IDENT, QUERY): failed to parse query: query contains table not in schema: \"StellaXXX\" Logs: https://realm.mongodb.com/groups/62a258e7fd33f65229a1f35b/apps/62f155c93587e6e83c5e67c4/logs?co_id=647de56764136a45a9a47cd6 category: SyncErrorCategory.session code: SyncSessionErrorCode.badQuery isFatal: true\nflutter: 2023-06-05T09:38:49.210881: [INFO] Realm: Connection[1]: Disconnected\n\nDoctor summary (to see all details, run flutter doctor -v):\n[✓] Flutter (Channel stable, 3.10.2, on macOS 13.4 22F66 darwin-arm64, locale en-US)\n[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.0)\n[✓] Xcode - develop for iOS and macOS (Xcode 14.3)\n[✓] Chrome - develop for the web\n[✓] Android Studio (version 2021.2)\n[✓] VS Code (version 1.78.2)\n[✓] Connected device (3 available)\n[✓] Network resources\n\n• No issues found!\n",
"text": "using realm package 1.1.0, and when I run my flutter app, it no longer generates a schema (after removing a collection from the Atlas app service, terminating the sync, and restarting in Dev Mode).My flutter dev environment…",
"username": "Josh_Whitehouse"
},
{
"code": "",
"text": "Rolled back to the flutter realm 1.0.3 version, terminated sync deleted the app service schema for the collection, restarted the device sync with Dev Mode enabled, and the schema for this collection was generated by the app with no issues. Seems to be a 1.1.0 issue",
"username": "Josh_Whitehouse"
},
{
"code": "",
"text": "Try rolling forward again. I don’t think this is an issue with 1.1.0, but the… terminated sync deleted the app service schema for the collection, restarted the device sync with Dev Mode enabledfixed it for you.",
"username": "Kasper_Nielsen1"
},
{
"code": "StellaXXXflutter: 2023-06-05T09:38:49.198855: [ERROR] Realm: SyncSessionError message: Invalid query (IDENT, QUERY): failed to parse query: query contains table not in schema: \"StellaXXX\" Logs: https://realm.mongodb.com/groups/62a258e7fd33f65229a1f35b/apps/62f155c93587e6e83c5e67c4/logs?co_id=647de56764136a45a9a47cd6 category: SyncErrorCategory.session code: SyncSessionErrorCode.badQuery isFatal: true\n",
"text": "Notice the error. It is complaining about StellaXXX",
"username": "Kasper_Nielsen1"
},
{
"code": "query contains table not in schema: \"StellaXXX\"",
"text": "This error only occurred in the 1.1.0 release of the realm flutter package. This is how I encountered it…after performing this, in 1.0.3 realm flutter package the schema for this collection is rebuilt when I start the app. But in 1.1.0, I get the query contains table not in schema: \"StellaXXX\" exception, and there is no schema rebuilt for the collection like there is in 1.0.3., it’s only occurring with the 1.1.0 release of the realm flutter package.",
"username": "Josh_Whitehouse"
},
{
"code": "",
"text": "Did you try rolling forward again? It is not totally clear to me.",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "The error link is from a iOS device using v1.0.3:SDK:\nDart v1.0.3\nFramework:\nFlutter v2.19.6 (stable) (Tue Mar 28 13:41:04 2023 +0000) on “ios_x64”https://realm.mongodb.com/groups/62a258e7fd33f65229a1f35b/apps/62f155c93587e6e83c5e67c4/logs?co_id=647de56764136a45a9a47cd6",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "I rolled forward to 1.1.0, and it’s working fine, after the schema for the collection was recreated using the 1.0.3 package.1.1.0 fails when there is no schema for the collection present in the app services. 1.0.3 creates the schema (I have Development Mode switched on in the Device Sync for this app service, but it looks like 1.1.0 fails to.To recreate -at this point,1.0.3 - successfully generates the missing schema for the collection\n1.1.0 - reports the table not in schema, no schema for the collection is generated (like it is in 1.0.3)I hope this clears up any confusion, will be glad to answer more questions. Again, downgrading to 1.0.3 and running the app generated the missing schema, and upgrading to us 1.1.0 after using 1.0.3 (and successfully generating the missing schema) runs fine.",
"username": "Josh_Whitehouse"
},
{
"code": "",
"text": "We will look into this tomorrow. One of the differences between v1.0.3 and v1.1.0 is the version of realm-core that they use.I will get back when I know more.",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "It seems the subscription queries are uploaded prior to the schema, possibly due to changes regarding automatic migration from partition based to flexible sync.Has absolutely no relevance for Flutter, as we never supported the former, but we still use the same version of realm-core.We will investigate further.",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "Hi @Josh_Whitehouse,\nWe managed to reproduce this issue using the steps that you’ve reported. So, thank you for reporting the issue.\nThe only workaround that we found for now was deleting also the local realm file before running the app. Then the schema is generated on the App service. You can use this workaround, if it is acceptable for you to delete the data, until we find better solution.\nWe will investigate further.",
"username": "Desislava_St_Stefanova"
},
{
"code": " realm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.clear();\n });\n",
"text": "@Josh_Whitehouse,\nAfter the investigation we recall that we don’t force re-creating the server schema, once the realm exists locally and has subscriptions added. And this is not from the new version. I suspect you have deleted the realm file when you have downgraded to 1.0.3.\nYou can add some code to clear the subscriptions just before closing the app or realmand to add the subscriptions again once the app is started. On this way you will force schema re-generation on the server next time you open the app. You can do this for developing purpose. But it is not recommended for the production.",
"username": "Desislava_St_Stefanova"
},
{
"code": "",
"text": "To elaborate. We are still working on this. The client should receive a client reset, when this situation happens.",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "ok, for development purposes, I’ll clear the subscriptions when closing the realm. They were being cleared after opening the realm, but now I’ve added clearing to to closing the realm as well for dev purposes.",
"username": "Josh_Whitehouse"
}
] | Dev mode enabled for Realm App Service, but Schema not generated by app | 2023-06-05T13:43:40.294Z | Dev mode enabled for Realm App Service, but Schema not generated by app | 1,561 |
null | [
"production",
"c-driver"
] | [
{
"code": "aligned_allocRewrapManyDataKeyprovidermasterKey",
"text": "Announcing 1.23.5 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.No changes since 1.23.3. Version incremented to match the libmongoc version.Fixes:Thanks to everyone who contributed to this release.",
"username": "eramongodb"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB C Driver 1.23.5 Released | 2023-06-06T18:23:33.474Z | MongoDB C Driver 1.23.5 Released | 639 |
null | [
"production",
"cxx"
] | [
{
"code": "",
"text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.7.2.Please note that this version of mongocxx requires MongoDB C Driver 1.22.1 or higher.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.NOTE: The mongocxx 3.7.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions on the MongoDB Community forum in the Drivers, ODMs, and Connectors category tagged with cxx. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.Sincerely,The C++ Driver Team",
"username": "eramongodb"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB C++11 Driver 3.7.2 Released | 2023-06-06T20:01:35.013Z | MongoDB C++11 Driver 3.7.2 Released | 606 |
[
"queries"
] | [
{
"code": "",
"text": "I recently created program for esp8266 that is reading data from temperature sensor. Sometimes the date obtained from ntp server is bad and it sends it to server and then to db. Records with this invalid date are not visible on MongoDB Atlas. To manage them I need to use MongoShell. It should be possible to manage this via website.How it looks on website and on mongo shell:\n\ncombined2718×1002 395 KB\nThere are visible empty divs which should contain data",
"username": "Marcin_R1"
},
{
"code": ".find()",
"text": "@Marcin_R1,Can you copy and paste the results from your .find() command here? I’m going to try replicate this behaviour in my own test environment so I will copy and paste the documents to insert them.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "Atlas atlas-6nd759-shard-0 [primary] Cluster0> db.measure.find({ stationId: \"2767224\" })\n[\n {\n _id: ObjectId(\"6474e17f1b4da771c3d2e9a9\"),\n stationId: '2767224',\n date: ISODate(\"2023-09-26T21:23:54.785Z\"),\n temp: Decimal128(\"20.97999954\"),\n humidity: Decimal128(\"57.99000168\"),\n pressure: Decimal128(\"99342.46094\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"6474e1e6fe31b663ef403ff5\"),\n stationId: '2767224',\n date: ISODate(\"2023-09-26T21:23:54.785Z\"),\n temp: Decimal128(\"20.97999954\"),\n humidity: Decimal128(\"57.99000168\"),\n pressure: Decimal128(\"99342.46094\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647a311b3be07117fd1f56f9\"),\n stationId: '2767224',\n date: ISODate(\"2023-06-02T18:12:41.000Z\"),\n temp: Decimal128(\"21.18000031\"),\n humidity: Decimal128(\"62.83000183\"),\n pressure: Decimal128(\"993.1099854\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647a2a11ea628d0c97947052\"),\n stationId: '2767224',\n date: ISODate(\"2023-06-02T17:42:39.000Z\"),\n temp: Decimal128(\"20.97999954\"),\n humidity: Decimal128(\"64.83999634\"),\n pressure: Decimal128(\"992.5200195\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647a29f9ac10446e49226371\"),\n stationId: '2767224',\n date: ISODate(\"2023-06-02T17:42:15.000Z\"),\n temp: Decimal128(\"21.29999924\"),\n humidity: Decimal128(\"64.66999817\"),\n pressure: Decimal128(\"992.5700073\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647a273e2155c9771458b11f\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"21.54999924\"),\n humidity: Decimal128(\"64.36000061\"),\n pressure: Decimal128(\"99231.79688\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647b54bb3f56fc2b8ecc2f87\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"20.70000076\"),\n humidity: Decimal128(\"45.99000168\"),\n pressure: Decimal128(\"99633.72656\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647a27fe3cc3a06ef79d3c53\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"21.01000023\"),\n humidity: Decimal128(\"64.73999786\"),\n pressure: Decimal128(\"99236.9375\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647a27e83cc3a06ef79d3c52\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"21.35000038\"),\n humidity: Decimal128(\"64.44000244\"),\n pressure: Decimal128(\"99238.36719\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647a27503cc3a06ef79d3c51\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"21.21999931\"),\n humidity: Decimal128(\"64.47000122\"),\n pressure: Decimal128(\"99228.45313\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647a26bec69dca134631f619\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"20.97999954\"),\n humidity: Decimal128(\"65.20999908\"),\n pressure: Decimal128(\"99231.4375\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647a26a1c69dca134631f618\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"21.28000069\"),\n humidity: Decimal128(\"65.01000214\"),\n pressure: Decimal128(\"99230.9375\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: 
ObjectId(\"647a25925835c010029f14a4\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"20.97999954\"),\n humidity: Decimal128(\"64.91999817\"),\n pressure: Decimal128(\"99215.38281\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647a25855835c010029f14a3\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"21.29000092\"),\n humidity: Decimal128(\"64.72000122\"),\n pressure: Decimal128(\"99221.71875\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647a2371af63a071e34458e7\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"20.97999954\"),\n humidity: Decimal128(\"65.12000275\"),\n pressure: Decimal128(\"99200.52344\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647a235faf63a071e34458e6\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"21.29000092\"),\n humidity: Decimal128(\"65.15000153\"),\n pressure: Decimal128(\"99204.32031\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647a22bc3ef30a66f882244c\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"20.73999977\"),\n humidity: Decimal128(\"66.05999756\"),\n pressure: Decimal128(\"99193.78125\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"647a20d4bb55e94322cacb36\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"20.69000053\"),\n humidity: Decimal128(\"66.06999969\"),\n pressure: Decimal128(\"99184.96875\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"6474fcf496474f714a6b0e59\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"20.63999939\"),\n humidity: Decimal128(\"66.30999756\"),\n pressure: Decimal128(\"99288.35938\"),\n _class: 'com.weather.server.domain.entity.Measure'\n },\n {\n _id: ObjectId(\"6474f5e9047d8d60cfd5cb35\"),\n stationId: '2767224',\n date: Invalid Date,\n temp: Decimal128(\"20.46999931\"),\n humidity: Decimal128(\"65.48000336\"),\n pressure: Decimal128(\"99278.59375\"),\n _class: 'com.weather.server.domain.entity.Measure'\n }\n]\n",
"text": "Here are results of find() command:I don’t know if it is possible to insert document with invalid date via cli, I know that on website it’s not possible.\nYou might have to use example server app to reproduce issue - I have spring boot server which connects to Mongodb. If you would like to use it I can provide link to github.Example bad date send to server which results in not-displayable document is:\n-5149169-04-23T14:56:57ZCurrently I fixed my esp8266 program so I now have right date, but still it should be possible to delete documents with bad data via website.",
"username": "Marcin_R1"
},
{
"code": "mongoshInvalid Datetest> var test = new Date(\"-5149169-04-23T14:56:57Z\")\n\ntest> test\nInvalid Date\n ISODate(\"1970-01-01T00:00:00.000Z\")Invalid DateInvalid DateInvalid Date",
"text": "I don’t know if it is possible to insert document with invalid date via cli, I know that on website it’s not possible.Thanks for providing those examples. I can see to some degree what you mean now. I was not able to insert via mongosh an invalid date for example but was able to bring up the Invalid Date value:Inserting documents with this value seem to revert to a default of ISODate(\"1970-01-01T00:00:00.000Z\") instead of Invalid Date.Currently I fixed my esp8266 program so I now have right date, but still it should be possible to delete documents with bad data via website.Great - My initial thoughts were to perhaps filter / cleanse or implement a validator to have the correct date formats to limit results with Invalid Date from being inserted which you have done so. I will investigate the document(s) not displaying in the Atlas Data Explorer UI when a document contains a field with the value Invalid Date.Could you also share the spring boot driver versions and command used to insert the document?I’ll update here accordingly.Regards,\nJason",
"username": "Jason_Tran"
},
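For the cleanup itself, something along these lines should work from mongosh; the 2000–2100 window is only an assumption about what counts as a plausible date, and the optional validator only affects future inserts and updates:

    // remove the documents whose date is outside a plausible window
    db.measure.deleteMany({
      $or: [ { date: { $lt: ISODate("2000-01-01T00:00:00Z") } },
             { date: { $gt: ISODate("2100-01-01T00:00:00Z") } } ]
    })

    // optionally reject such dates at write time
    db.runCommand({
      collMod: "measure",
      validator: { date: { $gte: ISODate("2000-01-01T00:00:00Z"),
                           $lte: ISODate("2100-01-01T00:00:00Z") } },
      validationAction: "error"
    })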
{
"code": "@Override\n public boolean saveMeasure(NewMeasureDto newMeasureDto) {\n if(verifyStationId(newMeasureDto.getStationId())) {\n Measure measure = newMeasureMapper.mapToEntity(newMeasureDto);\n measureRepository.save(measure);\n return true;\n }\n else{\n return false;\n }\n }\n",
"text": "It looks according to pom.xml that I have springboot in version 2.6.4, same for spring-boot-starter-data-mongodb artifact. Command that I use to insert document is just save() called on repository, here is method from MeasureServiceImpl.java:If you would like to take a look at code, here is link to repo: https://github.com/4meters/weather-serverI checked the console and there is range error for each document with invalid date:\n\n2023-06-05 19_51_09-Data _ Cloud_ MongoDB Cloud — Mozilla Firefox1680×271 6.79 KB\n",
"username": "Marcin_R1"
},
{
"code": "measureRepository.save(measure);measure\"-5149169-04-23T14:56:57Z\"",
"text": "measureRepository.save(measure);Do you have an example of the value of measure here?I use to insert document is just save()I’m utilising save in the spring boot test environment but when using \"-5149169-04-23T14:56:57Z\" the document does not get inserted (instead there is a server error).Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "@Component\npublic class NewMeasureMapper {\n\n public Measure mapToEntity(NewMeasureDto newMeasureDto){\n Measure measure = new Measure();\n measure.setStationId(newMeasureDto.getStationId());\n measure.setDate(Date.from(Instant.parse(newMeasureDto.getDate())));\n\n if(newMeasureDto.getTemp()!=null){\n measure.setTemp(Decimal128.parse(newMeasureDto.getTemp()));\n }\n if(newMeasureDto.getHumidity()!=null){\n measure.setPressure(Decimal128.parse(newMeasureDto.getPressure()));\n }\n if(newMeasureDto.getPressure()!=null){\n measure.setHumidity(Decimal128.parse(newMeasureDto.getHumidity()));\n }\n if(newMeasureDto.getPm10()!=null){\n measure.setPm10(Decimal128.parse(newMeasureDto.getPm10()));\n }\n if(newMeasureDto.getPm25()!=null){\n measure.setPm25(Decimal128.parse(newMeasureDto.getPm25()));\n }\n if(newMeasureDto.getPm25Corr()!=null){\n measure.setPm25Corr(Decimal128.parse(newMeasureDto.getPm25Corr()));\n }\n \n return measure;\n }\n}\nmeasure.setDate(Date.from(Instant.parse(newMeasureDto.getDate())));\nMeasure{_id='null', stationId='2767224', date=Wed Jan 17 15:56:57 CET 5149064, temp=20.78000069, humidity=72.36000061, pressure=992.15, pm25=null, pm25Corr=null, pm10=null}\n",
"text": "I’m posting mapToEntity function, it may be the key to solution:The most important line is this:Before setting date in document it’s parsed, maybe if you use it that way, you will manage to save document.\nYou can replace “newMeasureDto.getDate()” with “-5149169-04-23T14:56:57Z”I have added System.out.println(measure) to saveMeasure method, example value of measure:It looks that after parsing date change from negative number to positive.",
"username": "Marcin_R1"
}
] | Records with invalid date are not displayed in MongoDB Atlas but on Mongo Shell they do | 2023-06-02T16:19:26.049Z | Records with invalid date are not displayed in MongoDB Atlas but on Mongo Shell they do | 554 |
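A follow-up sketch for the thread above: a mongosh range query (and, optionally, a query-style validator) can surface or block the out-of-range dates discussed here. The collection name "measures" and the year bounds are assumptions; the "date" field comes from the posts.

```javascript
// Find documents whose "date" parsed to an implausible value (e.g. the year -5149169 case above).
db.measures.find({
  $or: [
    { date: { $lt: ISODate("2000-01-01T00:00:00Z") } },
    { date: { $gt: ISODate("2100-01-01T00:00:00Z") } }
  ]
})

// Optionally reject such values at insert time with a query-operator validator.
db.runCommand({
  collMod: "measures",
  validator: {
    date: {
      $gte: ISODate("2000-01-01T00:00:00Z"),
      $lte: ISODate("2100-01-01T00:00:00Z")
    }
  }
})
```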
|
null | [
"storage"
] | [
{
"code": "\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"__wt_block_read_off:226:WiredTigerHS.wt: potential hardware corruption, read checksum error for 4096B block at offset 172032: block header checksum of 0x63755318 doesn't match expected checksum of 0x22b37ec4\"\n\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"__wt_block_read_off:235:WiredTigerHS.wt: fatal read error\",\"error_str\":\"WT_ERROR: non-specific WiredTiger error\",\"error_code\":-31802\n\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"__wt_block_read_off:235:the process must exit and restart\",\"error_str\":\"WT_PANIC: WiredTiger library panic\",\"error_code\":-31804\nFatal assertion\",\"attr\":{\"msgid\":50853,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp\",\"line\":712\n\\n\\n***aborting after fassert() failure\\n\\n\nWriting fatal message\",\"attr\":{\"message\":\"Got signal: 6 (Aborted).\\n\n",
"text": "Backuping a MongoDB cluster composed of three replicated MongoDB instances on a Kubernetes on-premise cluster using Velero and MinIO with Restic, triggers this fatal error of one of them after restoring the backup:Please note that we tested it using versions 4.4.11 and 6.0.5The restore works well for all our application (including two MongoDB nodes) except one (or two sometimes) MongoDB node which is most of the time in a “Back-off restarting failed container” state (even after having triggered a manual “mongod --repair” on it).We think that doing the backup of the three replicated MongoDB instances, maybe when some MongoDB synchronisation is ongoing (the services connected to MongoDB are all off during the backup), causes the backup to be seen as corrupted during the restore. Do you know what could cause this issue and how we could solve it?",
"username": "Eric_Hemmerlin"
},
{
"code": "potential hardware corruption, read checksum error",
"text": "Hi @Eric_Hemmerlin and welcome to MongoDB community forums!!potential hardware corruption, read checksum errorThe error message in the log you posted seem to indicate that the backup is corrupt, or the hardware is corrupt.\nCould you share a few details of the hardware on which the cluster is deployed like CPU, RAM, core, free space etc.However, since the cluster is deployed using different technologies mentioned, one of it could also be the possible reason of the failure.\nThe suggestion here would be to debug at each stack and let us know if the issues are specific to MongoDB?\nNote that if the underlying issue was caused by incomplete backup or corrupt hardware, there’s not much a database can do to overcome them.backup of the three replicated MongoDB instances,It seems to me like you are backing up all three nodes separately. Is this correct? Note that for a MongoDB replica set, typically you only need to backup one node, since the set contains identical data. You can restore this data to three nodes as per the restoring replica set documentationRegards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hello @Aasawari and thanks for your answer, I appreciate it. Yes you are right, we backed up all three nodes separately, so we changed our mind after reading your post in order to only backup one node. We hope it’ll fix the issue we had.\nRegards\nEric",
"username": "Eric_Hemmerlin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB backup and restore error using Velero and MinIO (on-premise Kubernetes cluster) | 2023-04-25T12:25:56.201Z | MongoDB backup and restore error using Velero and MinIO (on-premise Kubernetes cluster) | 1,354 |
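Related to the advice above about backing up a single member: when the backup is a file-system copy (as with Velero/Restic volume backups), the documented way to get a consistent copy is to quiesce writes on the member being copied. A minimal mongosh sketch, assuming commands can be run around the snapshot (for example from backup hooks):

```javascript
// On the secondary chosen for backup: flush data files and block writes.
db.fsyncLock()

// ... take the Velero/Restic snapshot of this member's dbPath here ...

// Release the lock once the file copy has finished.
db.fsyncUnlock()
```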
null | [
"replication"
] | [
{
"code": "2023-05-30T09:46:36.408-0400 E REPL [replication-404] Initial sync attempt failed -- attempts left: 0 cause: NetworkInterfaceExceededTimeLimit: error fetching oplog during initial sync: Operation timed out, request was RemoteCommand 16829255 -- target:mongodb02:27017 db:local expDate:2023-05-30T09:46:35.997-0400 cmd:{ find: \"oplog.rs\", filter: { ts: { $gte: Timestamp 1685454318000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 2000 }\n2023-05-30T09:46:36.408-0400 F REPL [replication-404] The maximum number of retries have been exhausted for initial sync.\n2023-05-30T09:46:36.458-0400 E REPL [replication-404] Initial sync failed, shutting down now. Restart the server to attempt a new initial sync.\n2023-05-30T09:46:36.458-0400 I - [replication-404] Fatal assertion 40088 NetworkInterfaceExceededTimeLimit: error fetching oplog during initial sync: Operation timed out, request was RemoteCommand 16829255 -- target:mongodb02:27017 db:local expDate:2023-05-30T09:46:35.997-0400 cmd:{ find: \"oplog.rs\", filter: { ts: { $gte: Timestamp 1685454318000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 2000 } at src/mongo/db/repl/replication_coordinator_impl.cpp 635\n2023-05-30T09:46:36.458-0400 I - [replication-404]\n",
"text": "Hi,We have an existing replica set with 2 node and 1 arbiter. We want to add one more node to the existing replica set which I did and added the node to primary node using “rs.add” command however after running for over 20-30 hrs , the initial sync fails with below error:Target/Newly added node: MongoDB shell version v3.4.24\nSource/existing node: MongoDB shell version v3.4.9I tried to find rpm repo for 3.4.9 but could find only 3.4.24.Size of the data on existing node : 30-35TB\nNo of databases : 32\nNo of collections : most of the DBs have less than 100 collections but 3-6 DBs have close to 5k-10k documents with more than few million documents in them.Limitations:I am not expert in mongo and still trying to learn more and more from various sources so I would request the experts to help me achieve this target.-Onkar",
"username": "Onkarnath_Tiwary"
},
{
"code": "db.adminCommand( { setParameter: 1, oplogInitialFindMaxSeconds: 600 } )",
"text": "Hi @Onkarnath_TiwaryWe have an existing replica set with 2 node and 1 arbiter. We want to add one more node to the existing replica set which I did and added the node to primary node using “rs.add” command however after running for over 20-30 hrs , the initial sync fails with below error:Try increasing oplogInitialFindMax Seconds:\ndb.adminCommand( { setParameter: 1, oplogInitialFindMaxSeconds: 600 } )I tried to find rpm repo for 3.4.9 but could find only 3.4.24.Links for your version are in the archiveMongoDB 3.4 was End of life End of life the current supported versions are 4.4, 5.0 and 6.0 planning to upgrade to 4.4 at a minimum will bring you up to a version still receiving bugfixes and updates.https://learn.mongodb.com/ has many free courses to upskill with MongoDB",
"username": "chris"
},
{
"code": "",
"text": "Thank you for the respose Chris. One clarification, the command you suggested should be executed on primary I believe. Right?I have taken mongoDB university courses but the experience comes only when you start working and that is what I am lacking at this point of time but thank you for the suggestion. I will keep doing that.-Onkar",
"username": "Onkarnath_Tiwary"
},
{
"code": "--setParameter",
"text": "Might be a bit late for your situation by now.I think this parameter should be set on the secondary that is doing the initial sync. You can set this in the configuration file or via the command line flag --setParameter so that is is applied when mongod starts.",
"username": "chris"
},
{
"code": "",
"text": "Chris,I was able to start mongo daemon using oplogInitialFindMaxSeconds but when i tried with initialSyncTransientErrorRetryPeriodSeconds , it got error. Anyways, I started mongod with oplogInitialFindMaxSeconds and reinitiated initial sync. Let see!! Will post the result in any case.-Onkar",
"username": "Onkarnath_Tiwary"
},
{
"code": "",
"text": "The issue seems to have resolved now. Data is replicating without any error so far. Thank you for the help Chris",
"username": "Onkarnath_Tiwary"
}
] | Initial sync is failing | 2023-05-31T03:28:02.681Z | Initial sync is failing | 1,043 |
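For reference, the parameter suggested above can be applied and verified from mongosh on the member performing the initial sync; a value set this way lasts only until the process restarts, so for a restart it has to go into the config file or the --setParameter flag, as discussed in the thread. A small sketch (the value 600 comes from the thread):

```javascript
// Run against the syncing member.
db.adminCommand({ setParameter: 1, oplogInitialFindMaxSeconds: 600 })

// Confirm what is currently in effect.
db.adminCommand({ getParameter: 1, oplogInitialFindMaxSeconds: 1 })
```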
null | [
"aggregation",
"node-js"
] | [
{
"code": "{\n skillId: new ObjectId(\"61a914ac1155e2fb40e8d9c1\"),\n test: [\n {\n skills:[\n {\n description: 'HTML presentation and formatting tags',\n _id: new ObjectId(\"61a914ac1155e2fb40e8d9c0\")\n },\n {\n description: 'HTML layouts, using groups of elements together',\n _id: new ObjectId(\"61a914ac1155e2fb40e8d9c1\")\n },\n {\n description: 'Advanced concepts',\n _id: new ObjectId(\"61a914ac1155e2fb40e8d9c2\")\n \n],\n }\n ],\n}\n\n//aggregation\n descriptions: {\n $map: {\n input: \"$test.skills\",\n as: \"skills\",\n in: {\n $cond: [\n {$eq: [\"$$skills._id\", \"$skillId\"]},\n {$indexOfArray: [\"$$skills\", \"$skillId\"]},\n null\n ]\n }\n }\n },\n",
"text": "I am finding it difficult to output a documentI want a result that does the following thing it matches the Id of skillId with test.skills._id and return its description i wrote the above map but it doesn’t return anything null can someone suggest me proper way of query nested arrays",
"username": "BIKRAM_GHOSH"
},
{
"code": "",
"text": "It is not clear what you want to achieve. It would be easier for us to understand if you supply the desired result that you want.However, I see 2 major issues with your code.1 - The array skills is within an object of the array test and you seem to process test as if it was an object.You need to $map, $filter or $reduce the array test.2 - You use $indexOfArray on the $map variable $$skills.The variable $$skills represents 1 element of the input array. If you want to refer to the input array (which is wrong, see point 1) you need a single $ sign, just like you do with $skillId. The confusion might come from the fact that your code diverge from the habit of using plural for array and singular for an element of the array; like if the array is skills, each element is a skill.",
"username": "steevej"
},
{
"code": "{\n $addFields: {\n idx: {\n $map: {\n input: \"$test.skills\", as: \"skill\", in: {\n index: {\n $indexOfArray: [\"$$skill._id\", \"$skillId\",],\n },\n },\n },\n }, description: {\n $map: {\n input: \"$test.skills\", as: \"skill\", in: {\n des: \"$$skill.description\",\n },\n },\n },\n },\n }, {\n $addFields: {\n index: {$arrayElemAt: [\"$idx.index\", 0]},\n },\n }, {\n $addFields: {\n descriptionzo: {\n $arrayElemAt: [\"$description.des\", 0],\n },\n },\n },\n {\n $project: {\n _id: 0,\n skillId: 1,\n \"test.skills\":1,\n description: {$arrayElemAt: [\"$descriptionzo\", '$index']},\n },\n },\n",
"text": "Hi, steevej thank you for the response. I figured out how to get the desired output. I was looking for a document like this\nThe aggregation I wrote seems so messycan you please suggest me a more optimized way for achieving the description.\nThanks.",
"username": "BIKRAM_GHOSH"
},
{
"code": "",
"text": "As already mentionedThe array skills is within an object of the array test and you seem to process test as if it was an object.You need to $map, $filter or $reduce the array test.How do you handle the test array? In your example, it only has 1 element. You should read about $reduce and $filter.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Aggregation Help | 2023-06-02T17:43:12.746Z | Aggregation Help | 445 |
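For readers landing on this thread, here is one way to express the $reduce/$filter approach suggested above in a single stage. It uses the field names from the posts; the collection name is a placeholder.

```javascript
db.collection.aggregate([
  {
    $set: {
      description: {
        $arrayElemAt: [
          {
            $map: {
              input: {
                $filter: {
                  // "$test.skills" resolves to an array of arrays (one per "test" element),
                  // so flatten it first with $reduce + $concatArrays
                  input: {
                    $reduce: {
                      input: "$test.skills",
                      initialValue: [],
                      in: { $concatArrays: ["$$value", "$$this"] }
                    }
                  },
                  as: "skill",
                  cond: { $eq: ["$$skill._id", "$skillId"] }
                }
              },
              as: "skill",
              in: "$$skill.description"
            }
          },
          0
        ]
      }
    }
  }
])
```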
null | [
"sharding",
"performance"
] | [
{
"code": "\"db-name.collection2\": {\n shardKey: { PartitionKey: \"hashed\" },\n unique: false,\n balancing: true,\n chunkMetadata: [\n { shard: \"atlas-7dz9ng-shard-0\", nChunks: 2 },\n { shard: \"atlas-7dz9ng-shard-1\", nChunks: 2 },\n ],\n chunks: [\n {\n min: { PartitionKey: MinKey() },\n max: { PartitionKey: -4611686018427388000 },\n \"on shard\": \"atlas-7dz9ng-shard-0\",\n \"last modified\": Timestamp({ t: 1, i: 0 }),\n },\n {\n min: { PartitionKey: -4611686018427388000 },\n max: { PartitionKey: 0 },\n \"on shard\": \"atlas-7dz9ng-shard-0\",\n \"last modified\": Timestamp({ t: 1, i: 1 }),\n },\n {\n min: { PartitionKey: 0 },\n max: { PartitionKey: 4611686018427388000 },\n \"on shard\": \"atlas-7dz9ng-shard-1\",\n \"last modified\": Timestamp({ t: 1, i: 2 }),\n },\n {\n min: { PartitionKey: 4611686018427388000 },\n max: { PartitionKey: MaxKey() },\n \"on shard\": \"atlas-7dz9ng-shard-1\",\n \"last modified\": Timestamp({ t: 1, i: 3 }),\n },\n ],\n tags: [],\n },\n \"db-name.collection2\": {\n shardKey: { PartitionKey: \"hashed\" },\n unique: false,\n balancing: true,\n chunkMetadata: [\n { shard: \"atlas-7dz9ng-shard-0\", nChunks: 2 },\n { shard: \"atlas-7dz9ng-shard-1\", nChunks: 2 },\n ],\n chunks: [\n {\n min: { PartitionKey: MinKey() },\n max: { PartitionKey: -4611686018427388000 },\n \"on shard\": \"atlas-7dz9ng-shard-0\",\n \"last modified\": Timestamp({ t: 1, i: 0 }),\n },\n {\n min: { PartitionKey: -4611686018427388000 },\n max: { PartitionKey: 0 },\n \"on shard\": \"atlas-7dz9ng-shard-0\",\n \"last modified\": Timestamp({ t: 1, i: 1 }),\n },\n {\n min: { PartitionKey: 0 },\n max: { PartitionKey: 4611686018427388000 },\n \"on shard\": \"atlas-7dz9ng-shard-1\",\n \"last modified\": Timestamp({ t: 1, i: 2 }),\n },\n {\n min: { PartitionKey: 4611686018427388000 },\n max: { PartitionKey: MaxKey() },\n \"on shard\": \"atlas-7dz9ng-shard-1\",\n \"last modified\": Timestamp({ t: 1, i: 3 }),\n },\n ],\n tags: [],\n }\n",
"text": "Hello everybody.\nLast week we created an account of Mongodb Atlas Service. We are doing do some stress tests to see the performance of one part of our system using Mongo.Context:How the test works?\nWe are processing in parallel 40 items and then we wait 1s to process 40 items more. The total items to process is 1600.Every item is not going to be a register in the database, one sigle item will create more than 100 registers inside the database.The process we followed to create the sharded collections:\n1- sh.enableSharding(“db-name”)\n2- db.createCollection(‘collection-name’)\n3- db.getCollection(“collection-name”).createIndex({ PartitionKey: “hashed” });\n4- sh.shardCollection(“db-name.collection name”, { PartitionKey: “hashed” });\n5- sh.status() to see if the collections are well created and distributed in the 2 shards.After that we run a script in our system that creates several registers inside the collections but the time to process these data doesn’t dicrease if we add a new shard.\nWhat we are doing wrong? Why is the total time of the process not dicreasing with one more shard?\nHere you can see some data from sh.status after we run our script:Thank you for your time",
"username": "Ferran_Gutierrez_Ponce"
},
{
"code": "",
"text": "May be, just may be, your bottleneck is not the server. May be your test driver is too slow compared to the server.Doubling the number of highway lines will not help traffic if everyone transit via exit 35 which only have one line.Why is the total time of the process not decreasing with one more shard?then we wait 1sSo 1600 / 40 = 400, so you are sleeping 400s, so your total time cannot be less than 6.67 minutes.What is the load on the servers with your single shard setup?What is the load on the test driver machine?What do you mean by register? Does 100 registers per item and 1600 items end up being 160000 documents? What is the size of each document? I would really really be surprised if 160000 inserts on M30 even require sharding.When do you call shardCollection(), before or after the inserts?What we are doing wrong?You left out so many details about your setup that there is not much more that we can say.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for your response, finally investigating we found the problem. The problem is on the client where is running the app. We have incresed the reources for the app service adding more cpu and ram and after that is increasing the number of inserts per second.PD: Before to see that the problem was in the app service of the app we have been testing the perfomance with mongoose because we thought that the problem was that they didn’t open engough connections to parallize the operations.",
"username": "Ferran_Gutierrez_Ponce"
}
] | Number of insert by second not increase with 2 shards | 2023-05-30T11:54:12.146Z | Number of insert by second not increase with 2 shards | 788 |
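Two small mongosh checks that can help narrow down this kind of result (a sketch; "collection2" comes from the thread and the batch variable is a placeholder): unordered bulk writes let mongos spread a batch across both shards in parallel, and getShardDistribution() shows whether the hashed key is actually balancing the documents.

```javascript
// Insert a batch without stopping at the first error, so writes fan out in parallel.
db.collection2.insertMany(batchOfDocs, { ordered: false })

// Inspect how documents and data size are split between the two shards.
db.collection2.getShardDistribution()
```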
null | [
"connecting"
] | [
{
"code": "Jun 6 12:58:51 AM server running on port 4500\nJun 6 12:58:54 AM connected to database\nJun 6 12:59:18 AM /opt/render/project/src/node_modules/mongodb/lib/sdam/topology.js:278\nJun 6 12:59:18 AM const timeoutError = new error_1.MongoServerSelectionError(`Server selection timed out after ${serverSelectionTimeoutMS} ms`, this.description);\nJun 6 12:59:18 AM ^\nJun 6 12:59:18 AM \nJun 6 12:59:18 AM MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017\nJun 6 12:59:18 AM at Timeout._onTimeout (/opt/render/project/src/node_modules/mongodb/lib/sdam/topology.js:278:38)\nJun 6 12:59:18 AM at listOnTimeout (node:internal/timers:557:17)\nJun 6 12:59:18 AM at processTimers (node:internal/timers:500:7) {\nJun 6 12:59:18 AM reason: TopologyDescription {\nJun 6 12:59:18 AM type: 'Unknown',\nJun 6 12:59:18 AM servers: Map(1) {\nJun 6 12:59:18 AM 'localhost:27017' => ServerDescription {\nJun 6 12:59:18 AM address: 'localhost:27017',\nJun 6 12:59:18 AM type: 'Unknown',\nJun 6 12:59:18 AM hosts: [],\nJun 6 12:59:18 AM passives: [],\nJun 6 12:59:18 AM arbiters: [],\nJun 6 12:59:18 AM tags: {},\nJun 6 12:59:18 AM minWireVersion: 0,\nJun 6 12:59:18 AM maxWireVersion: 0,\nJun 6 12:59:18 AM roundTripTime: -1,\nJun 6 12:59:18 AM lastUpdateTime: 483067822,\nJun 6 12:59:18 AM lastWriteDate: 0,\nJun 6 12:59:18 AM error: MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017\nJun 6 12:59:18 AM at connectionFailureError (/opt/render/project/src/node_modules/mongodb/lib/cmap/connect.js:370:20)\nJun 6 12:59:18 AM at Socket.<anonymous> (/opt/render/project/src/node_modules/mongodb/lib/cmap/connect.js:293:22)\nJun 6 12:59:18 AM at Object.onceWrapper (node:events:510:26)\nJun 6 12:59:18 AM at Socket.emit (node:events:390:28)\nJun 6 12:59:18 AM at emitErrorNT (node:internal/streams/destroy:157:8)\nJun 6 12:59:18 AM at emitErrorCloseNT (node:internal/streams/destroy:122:3)\nJun 6 12:59:18 AM at processTicksAndRejections (node:internal/process/task_queues:83:21) {\nJun 6 12:59:18 AM cause: Error: connect ECONNREFUSED 127.0.0.1:27017\nJun 6 12:59:18 AM at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1161:16) {\nJun 6 12:59:18 AM errno: -111,\nJun 6 12:59:18 AM code: 'ECONNREFUSED',\nJun 6 12:59:18 AM syscall: 'connect',\nJun 6 12:59:18 AM address: '127.0.0.1',\nJun 6 12:59:18 AM port: 27017\nJun 6 12:59:18 AM },\nJun 6 12:59:18 AM [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\nJun 6 12:59:18 AM },\nJun 6 12:59:18 AM topologyVersion: null,\nJun 6 12:59:18 AM setName: null,\nJun 6 12:59:18 AM setVersion: null,\nJun 6 12:59:18 AM electionId: null,\nJun 6 12:59:18 AM logicalSessionTimeoutMinutes: null,\nJun 6 12:59:18 AM primary: null,\nJun 6 12:59:18 AM me: null,\nJun 6 12:59:18 AM '$clusterTime': null\nJun 6 12:59:18 AM }\nJun 6 12:59:18 AM },\nJun 6 12:59:18 AM stale: false,\nJun 6 12:59:18 AM compatible: true,\nJun 6 12:59:18 AM heartbeatFrequencyMS: 10000,\nJun 6 12:59:18 AM localThresholdMS: 15,\nJun 6 12:59:18 AM setName: null,\nJun 6 12:59:18 AM maxElectionId: null,\nJun 6 12:59:18 AM maxSetVersion: null,\nJun 6 12:59:18 AM commonWireVersion: 0,\nJun 6 12:59:18 AM logicalSessionTimeoutMinutes: null\nJun 6 12:59:18 AM },\nJun 6 12:59:18 AM code: undefined,\nJun 6 12:59:18 AM [Symbol(errorLabels)]: Set(0) {}\nJun 6 12:59:18 AM }\n",
"text": "Here is my error log from my node js API web service on render, it first gets connected and later throws this huge log of error messages.",
"username": "Modou_Mbye"
},
{
"code": "connect ECONNREFUSED 127.0.0.1:27017",
"text": "connect ECONNREFUSED 127.0.0.1:27017",
"username": "Kobe_W"
}
] | Mongodb connection failed from a web service hosted on render | 2023-06-06T01:24:12.783Z | Mongodb connection failed from a web service hosted on render | 978 |
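The log above shows the driver falling back to localhost:27017, which usually means the Atlas connection string never reached the deployed process. A minimal Node.js sketch; the MONGODB_URI environment-variable name is an assumption and would be set in Render's dashboard:

```javascript
const { MongoClient } = require("mongodb");

const uri = process.env.MONGODB_URI; // e.g. the Atlas mongodb+srv:// string
if (!uri) {
  // Fail loudly instead of silently defaulting to mongodb://localhost:27017
  throw new Error("MONGODB_URI is not set");
}

async function connect() {
  const client = new MongoClient(uri, { serverSelectionTimeoutMS: 10000 });
  await client.connect(); // surfaces a clear error if the URI or IP access list is wrong
  return client;
}
```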
[
"devops"
] | [
{
"code": "org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.mongodb.client.MongoClient]: Factory method 'mongoClient' threw exception; nested exception is com.mongodb.MongoConfigurationException: Unable to look up TXT record for host cluster0.xe4in.mongodb.netat org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185)at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653)... 218 moreCaused by: com.mongodb.MongoConfigurationException: Unable to look up TXT record for host cluster0.xe4in.mongodb.netat com.mongodb.internal.dns.DefaultDnsResolver.resolveAdditionalQueryParametersFromTxtRecords(DefaultDnsResolver.java:131)at com.mongodb.ConnectionString.<init>(ConnectionString.java:384)at com.leland.config.MongoConfig.buildMongoClientSettings(MongoConfig.java:49)at com.leland.config.MongoConfig.mongoClient(MongoConfig.java:45)at com.leland.config.MongoConfig$$EnhancerBySpringCGLIB$$df102bd2.CGLIB$mongoClient$14(<generated>)at com.leland.config.MongoConfig$$EnhancerBySpringCGLIB$$df102bd2$$FastClassBySpringCGLIB$$11b79bd1.invoke(<generated>)at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244)at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:331)at com.leland.config.MongoConfig$$EnhancerBySpringCGLIB$$df102bd2.mongoClient(<generated>)at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)at java.base/java.lang.reflect.Method.invoke(Method.java:566)at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154)... 219 moreCaused by: javax.naming.CommunicationException: DNS error [Root exception is java.net.SocketTimeoutException: Receive timed out]; remaining name 'cluster0.xe4in.mongodb.net'at jdk.naming.dns/com.sun.jndi.dns.DnsClient.query(DnsClient.java:313)at jdk.naming.dns/com.sun.jndi.dns.Resolver.query(Resolver.java:81)at jdk.naming.dns/com.sun.jndi.dns.DnsContext.c_getAttributes(DnsContext.java:434)at java.naming/com.sun.jndi.toolkit.ctx.ComponentDirContext.p_getAttributes(ComponentDirContext.java:235)at java.naming/com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:141)at java.naming/com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:129)at java.naming/javax.naming.directory.InitialDirContext.getAttributes(InitialDirContext.java:142)at com.mongodb.internal.dns.DefaultDnsResolver.resolveAdditionalQueryParametersFromTxtRecords(DefaultDnsResolver.java:114)... 232 moreCaused by: java.net.SocketTimeoutException: Receive timed outat java.base/java.net.TwoStacksPlainDatagramSocketImpl.receive0(Native Method)at java.base/java.net.TwoStacksPlainDatagramSocketImpl.receive(TwoStacksPlainDatagramSocketImpl.java:123)at java.base/java.net.DatagramSocket.receive(DatagramSocket.java:814)at jdk.naming.dns/com.sun.jndi.dns.DnsClient.doUdpQuery(DnsClient.java:423)at jdk.naming.dns/com.sun.jndi.dns.DnsClient.query(DnsClient.java:212)... 239 more\n",
"text": "I am using Azure to deploy my service, which would talk to MongoDB.Previously, I am using Azure Linux App Service, and everything could work well.\nHowever, I am trying to deploy my service onto Azure Windows App Service, but it would have DNS issue. Error log is as following:When using SSH onto the host, I am also seeing it not able to resolve the dns record, not matter with 8.8.8.8 or 1.1.1.1:\nMy connection string is like: mongodb+srv://xxxx:[email protected]/db?retryWrites=true&w=majorityDoes anyone have any clues on this?Thank you!",
"username": "williamwjs"
},
{
"code": "",
"text": "I was able to resolve the DNS entries for this cluster with a zero in cluster0 and not an upper O.So it looks like Azure Windows does not serve you well. B-) Being Unix old bearded man, I smiled.This editorial comment out of my chest, I notice a weird formatting in the error message. The is no space between the cluster name and the rest of the error message. May be you have backspace or other invisible character in your configuration file. May be a missing newline.The output of nameresolver.exe seems to indicate that the cluster name is found. Try quering for TXT entry. The default might be A entry.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Steeve,Thank you for your reply!I think the weird formatting is something with Azure logs (For example, if you scroll down to see more logs, it would have “outat”, also without space)Also to point out that the same setting could work well on Azure Linux, but not Azure Windows. So guess it might be something related to windows hahaAlso, do we need to handle specially for “mongodb+srv” for Windows?",
"username": "williamwjs"
},
{
"code": "",
"text": "Also, do we need to handle specially for “mongodb+srv” for Windows?no you do notI would check with Azure customer service to see why.",
"username": "steevej"
},
{
"code": "",
"text": "a possible workaround: com.mongodb.MongoConfigurationException: Failed looking up TXT record for host xcluster.nuncuef.mongodb.net - #3 by bslatam_peru",
"username": "bslatam_peru"
}
] | Unable to look up TXT record for mongo host from Azure Windows App Service | 2021-08-21T02:23:27.377Z | Unable to look up TXT record for mongo host from Azure Windows App Service | 6,903 |
|
null | [
"connecting",
"atlas-cluster",
"kotlin"
] | [
{
"code": "",
"text": "I have mongoDb atlas instance running, but when trying to connect it throws this error com.mongodb.MongoConfigurationException: Failed looking up TXT record for host xcluster.nuncuef.mongodb.net. Connection is successful on windows machine, but not on the osX one. Using Intelijj with ktor and kmongo. Also, I can connect through mongo shell. Does anyone know how to solve this issue?",
"username": "Notte_Puzzle"
},
{
"code": "",
"text": "Check this thread.Issue similiar to your case with Mac & kmongo.May helpUnable to look up TXT record for host ****.mongodb.net from IntelliJ",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Workaround:Reference:\nhttps://www.appsloveworld.com/mongodb/100/53/failed-to-import-the-uri-unable-to-look-up-txt-record-for-host-cluster0-ohzuo-moapplication.properties template:spring.data.mongodb.uri=mongodb://\" + USER + “:” + PASS + “@” + HOST + “:” + PORT + “/” + DB\nspring.data.mongodb.database=DBspring.data.mongodb.uri=mongodb://userA:[email protected]:27017/db_name?ssl=true&replicaSet=atlas-rw4866-shard-0&authSource=admin&retryWrites=true&w=majority\nspring.data.mongodb.database= db_name",
"username": "bslatam_peru"
}
] | com.mongodb.MongoConfigurationException: Failed looking up TXT record for host xcluster.nuncuef.mongodb.net | 2022-08-05T22:30:55.348Z | com.mongodb.MongoConfigurationException: Failed looking up TXT record for host xcluster.nuncuef.mongodb.net | 4,059 |
null | [
"queries",
"compass",
"mongodb-shell"
] | [
{
"code": "",
"text": "Hi all,I am new to mongo db and I created a large collection of ~ 1 TB (since I read the guide and it says there is no limit on # of doc in a collection but suggests a finite number of collections…). After a month of data acquisition, I started to work on it and realized a huge problem, querying data is extremely slow (basically takes forever). I am hosting the data on a dockerized mongo 4.4 running on my NAS with 4 core CPU, and dealing with them with a mongo 6.0 running on my mac m2.Now I have the following questions:Many thanks for any comments/suggestions!",
"username": "Buxuan_Li"
},
{
"code": "",
"text": "",
"username": "Kobe_W"
},
{
"code": "",
"text": "thanks for the comments!\nthe command i used was:\n/opt/homebrew/bin/mongosh --host $REMOTE_HOST:$PORT -u $USER_NAME -p $PASSWORD --authenticationDatabase “admin” --db $REMOTE_DB --eval “db.$REMOTE_COLLECTION.createIndex({timestamp:1,index:‘text’},{ unique: true },{name:‘timestampSymbol’})” >> $LOG_FILE 2>&1I noticed sharding. I am trying to understand how it works and how to deploy.",
"username": "Buxuan_Li"
}
] | Create compound index in an extremely large collection running on standalone mongodb | 2023-06-06T03:27:28.511Z | Create compound index in an extremely large collection running on standalone mongodb | 604 |
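One detail worth flagging from the command quoted above: createIndex() takes the key pattern as its first argument and all options combined in a single second document, so splitting { unique: true } and { name: ... } into separate arguments will not do what was intended. A mongosh sketch, assuming a regular ascending index is wanted on both fields rather than a text index:

```javascript
db.getCollection("myCollection").createIndex(
  { timestamp: 1, index: 1 },                 // compound key pattern
  { name: "timestampSymbol", unique: true }   // all options go in one document
)
```

On a ~1 TB collection the build will still take a long time and compete with other work on the standalone, which is part of why the replies point at resources and sharding.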
null | [
"mongodb-shell",
"installation"
] | [
{
"code": "",
"text": "I used Homebrew to install mongodb and I am getting this errorsudo mkdir -p /data/db\nmkdir: /data: Read-only file system",
"username": "Muneet_Singh"
},
{
"code": "",
"text": "Access to root folder is removed in some Macos flavors\nCheck this link.You have to use another directory for dbpath where mongod can write like your home dir",
"username": "Ramachandra_Tummala"
}
] | Can not install mongodb on macOS ventura | 2023-06-05T18:16:05.665Z | Can not install mongodb on macOS ventura | 707 |
null | [
"dot-net",
"crud"
] | [
{
"code": "var session = db.getMongo().startSession( { readPreference: { mode: \"primary\" } } );\nsession.withTransaction( async() => {\n const sessionCollection = session.getDatabase(dbName).getCollection(collectionName);\n // Check needed values\n var checkFromAccount = sessionCollection.findOne(\n {\n \"customer\": fromAccount,\n \"balance\": { $gte: transferAmount }\n }\n )\n if( checkFromAccount === null ){\n throw new Error( \"Problem with sender account\" )\n }\n var checkToAccount = sessionCollection.findOne(\n { \"customer\": toAccount }\n )\n if( checkToAccount === null ){\n throw new Error( \"Problem with receiver account\" )\n }\n // Transfer the funds\n sessionCollection.updateOne(\n { \"customer\": toAccount },\n { $inc: { \"balance\": transferAmount } }\n )\n sessionCollection.updateOne(\n { \"customer\": fromAccount },\n { $inc: { \"balance\": -1 * transferAmount } }\n )\n}\n",
"text": "Hi, I’m referring to this code sampleSession.withTransaction() — MongoDB ManualThis don’t seem to be able to compile, TResult is unknown. I need to add a return String.Emptty to compile, which seems odd.",
"username": "4f2e3ec58d5a891addcea73b4222362"
},
{
"code": "mongosh",
"text": "The provided code is a mongosh snippet, which is written in JavaScript. You indicated that you are working with the .NET/C# Driver. Please see Drivers API for Transactions and make sure you select C# as your language in the top right corner of the page. Hope that helps!Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "using (IClientSessionHandle session = await MongoClientSystem.StartSessionAsync ().ConfigureAwait (false))\n{\n await session.WithTransactionAsync (async (session, cancellationToken) =>\n {\n\t\t\t// blah\n //THIS IS NEED TO COMPILE\n\t\t\treturn String.Empty;\n\t})\n .ConfigureAwait (false);\n}\n",
"text": "I used the c# version, of course. I pasted the java script version. It doesn’t compile without the return String.Empty",
"username": "4f2e3ec58d5a891addcea73b4222362"
},
{
"code": "MongoClientSystem",
"text": "Please provide the URL that contains the provided code sample as I am unable to locate it in our C# transaction examples. In particular MongoClientSystem is not a class in the .NET/C# Driver and this appears to be third-party code.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "That sample is mine. You don’t have a c# sample that’s why I linked in the javascript version. It is to show why you need a String.Empty return to be able to compile in C# when javascript version don’t.",
"username": "4f2e3ec58d5a891addcea73b4222362"
},
{
"code": "WithTransactionAsyncTResult",
"text": "Thank you for patiently explaining the issue. I believe that I understand the problem now. JavaScript allows an implicit null return whereas C# does not. And because WithTransactionAsync returns a TResult, you must provide a return type to make the C# compiler happy even if you don’t use the return value. This is unfortunately a limitation of the C# language. We will consider adapting this guidance into future C# transaction examples. We appreciate your feedback.",
"username": "James_Kovacs"
}
] | Code sample WithTransaction C# docs wrong? | 2023-05-27T09:57:27.480Z | Code sample WithTransaction C# docs wrong? | 856 |
null | [] | [
{
"code": "",
"text": "I use MongoDB to store user data for a game I work on. There is an xp (experience) field that exists for every user in the game. I have my server retry requests to the database if it fails due to a network outage or if the database is down. If an $inc operation manages to execute on the database but doesn’t respond to the server (network outage), a duplicate $inc could happen due to the server retrying. Is this something I should worry about, and how would I avoid this case if I should?",
"username": "Axillary_Studios"
},
{
"code": "",
"text": "$inc is not an idempotent operation, So if you don’t be cautious, you may end up incrementing something twice.e.g. if you get “timeout”, you won’t know if the operation indeed finishes on server side or not.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thanks, I will try to design something to prevent this case from happening.",
"username": "Axillary_Studios"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is it safe to use the $inc operation? | 2023-06-05T04:38:44.370Z | Is it safe to use the $inc operation? | 322 |
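One retry-safe pattern for the situation discussed above is to make the increment conditional on a client-generated operation id, so replaying the same request cannot apply it twice. This is a sketch with illustrative field names (xp, appliedOps) rather than the poster's actual schema:

```javascript
// opId is generated once per XP grant and reused verbatim on every retry.
db.users.updateOne(
  { _id: userId, appliedOps: { $ne: opId } },            // matches only if this grant was not applied yet
  { $inc: { xp: amount }, $push: { appliedOps: opId } }
)
// A retried or duplicated request simply reports modifiedCount: 0.
```

Because the check and the write touch a single document, the update is atomic; in a real schema the appliedOps array would need periodic pruning.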
[
"flutter",
"flexible-sync"
] | [
{
"code": "",
"text": "Hi, I am doing a simple example and am getting this error when adding a task to the database. I have fully configured on Device Sync.Thank you for your time.\nScreenshot_207689×720 26.5 KB\n\n\nScreenshot_208734×766 29.8 KB\n\nScreenshot_2051413×305 18.1 KB\n",
"username": "Minh_Quang_H_Vu"
},
{
"code": "",
"text": "\nScreenshot_210715×313 8.5 KB\n\n\nScreenshot_211710×172 5.15 KB\n\n\nScreenshot_212814×385 20.6 KB\n\nScreenshot_2091199×236 11.1 KB\n",
"username": "Minh_Quang_H_Vu"
},
{
"code": "realm.write<Tasks>(...)Tasks",
"text": "Hi @Minh_Quang_H_Vu,As the error in the attached screenshot alludes to, before the realm.write<Tasks>(...) you need to add a subscription on the Tasks table. Please see the docs for more information.Jonathan",
"username": "Jonathan_Lee"
},
{
"code": "",
"text": "I’m really sorry but after seeing a docs I still don’t know what to do.\nScreenshot_2131127×458 28.7 KB\n",
"username": "Minh_Quang_H_Vu"
},
{
"code": "Task",
"text": "Can you share the realm model for Task?",
"username": "Kasper_Nielsen1"
},
{
"code": "import 'package:realm/realm.dart';\n\npart 'task.g.dart';\n\n@RealmModel()\nclass _Tasks {\n @PrimaryKey()\n @MapTo(\"_id\")\n late ObjectId id;\n\n late String title;\n late String date;\n late String userId;\n}\n\n",
"text": "",
"username": "Minh_Quang_H_Vu"
},
{
"code": "TasksTasks",
"text": "Ah ok. I didn’t notice at first that you had an asymmetric table setup on Tasks (sorry about that!). Asymmetric / data ingest sync is currently unsupported in the Flutter SDK. Does your use case require making the Tasks table asymmetric?",
"username": "Jonathan_Lee"
},
{
"code": "",
"text": "I do not know. Now what should I do to fix it. This code I was able to run before and query the data and display it, but now it can’t.\nScreenshot_214697×557 24.8 KB\n",
"username": "Minh_Quang_H_Vu"
},
{
"code": "TasksTasksxTasksCollections from your schemaData IngestTasks",
"text": "So data ingest sync is designed and optimized for insert-only workloads (syncing from the device to MongoDB, but not the other way around).Given that you are querying on the data being synced (Tasks), it sounds like data ingest sync is not suited for your use case (let me know if this isn’t the case though, I’m admittedly not a Flutter expert).I would recommend removing Tasks as a data ingest collection by doing the following:After re-enablement, Tasks will be synced as a normal table/collection. For context, the reasoning for the termination and re-enablement of sync is because removing a data ingest collection is a destructive schema change (read here for more information regarding the consequences of these types of changes).Let me know if that helps,\nJonathan",
"username": "Jonathan_Lee"
},
{
"code": "",
"text": "I really don’t understand. Now it works even though I didn’t edit anything. But after adding 4 tasks with the “add” button and manually deleting them directly on the collections, they are not synchronized\nScreenshot_2151350×676 45.7 KB\n\n\nScreenshot_2161567×401 16.8 KB\n",
"username": "Minh_Quang_H_Vu"
},
{
"code": "",
"text": "That is exactly the nature of data ingest sync. Changes will be synchronized from the device to MongoDB Atlas, but not the other way around (I would recommend reading the link I sent above). In your situation, the deletions in Atlas were not propagated back down to the client. I’m actually surprised that synchronization is working at all because as I mentioned above, data ingest is currently unsupported in the Flutter SDK.",
"username": "Jonathan_Lee"
},
{
"code": "",
"text": "I ran as expected. I really thank you for your enthusiastic help.",
"username": "Minh_Quang_H_Vu"
}
] | How to fix Error: Cannot write to class Task when no flexible sync subscription has been created | 2023-06-05T16:18:07.351Z | How to fix Error: Cannot write to class Task when no flexible sync subscription has been created | 1,350 |
|
null | [
"android",
"flutter"
] | [
{
"code": "flutter: 2023-06-04T14:00:05.236643: [ERROR] Realm: Connection[1]: Session[1]: Error integrating bootstrap changesets: Failed to transform received changeset: Schema mismatch: Property 'longitude' in class 'Location' is nullable on one side and not on the other.\n// coordianates are LONGITUDE 1st, then LATITUDE\n@RealmModel(ObjectType.embeddedObject)\nclass _Location {\n @MapTo('type')\n late String type;\n @MapTo('coordinates')\n late List<double> coordinates;\n}\n \"properties\": {\n \"location\": {\n \"title\": \"Location\",\n \"type\": \"object\",\n \"required\": [\n \"type\"\n ],\n \"properties\": {\n \"coordinates\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"double\"\n }\n },\n \"type\": {\n \"bsonType\": \"string\"\n }\n }\n },\n \"location_id\": {\n \"bsonType\": \"objectId\"\n },\nDoctor summary (to see all details, run flutter doctor -v):\n[✓] Flutter (Channel stable, 3.10.2, on macOS 13.4 22F66 darwin-arm64, locale en-US)\n[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.0)\n[✓] Xcode - develop for iOS and macOS (Xcode 14.3)\n[✓] Chrome - develop for the web\n[✓] Android Studio (version 2021.2)\n[✓] VS Code (version 1.78.2)\n[✓] Connected device (3 available)\n[✓] Network resources\n\n• No issues found!\n",
"text": "I am developing a flutter app which uses 2DSphere coordinates in the schema and queries them. I have an index on the coordinates (2dsphere), and the schema and schemas.dart works fine on the iOS simulator and iPhone. When I test on MacOS desktop I get the following exception:and of course, they don’t load into the app…Again, I’m not seeing this under iOS. I’m not sure what changes to schema.dart and/or my realm schema to fix this.If it helps understand the problem, the location object is embedded in another object, which is also embedded in another object.\nBelow is the my schema.dart fragment, and app services Schema (running in dev mode), and flutter environment:",
"username": "Josh_Whitehouse"
},
{
"code": "",
"text": "Well , i face the same issue the schema is miss matched in app service have your schema on app service automatically build or you created it yourself ?",
"username": "33_ANSHDEEP_Singh"
},
{
"code": "@RealmModellongitudelongitudeLocation",
"text": "@Josh_Whitehouse Neither your server-side schema, or your @RealmModel mentions a property called longitude which doesn’t match the error message you have send.Anyway, the error message tells you, that there is a type mismatch between server and client for the property longitude on Location. One side is nullable, the other is not.Developer mode cannot handle that for you. Only additive changes are handled automatically. Changing the type of a property is considered a destructive change.During development I would typically nuke the wrong side (or both) when this happens, since I typically don’t care about any stored data at that point.",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "I am in dev mode, and it was created automatically. I’m just not sure how to resolve the mismatch, which is not a problem with iOS, just macOS.",
"username": "Josh_Whitehouse"
},
{
"code": "",
"text": "this is how MongoDB stores longitude and latitude so their geolocation aggregate pipeline using $geoNear works.\nlocation (object) {\ntype (String) : “Point” // in my case, a point, not an area\ncoordinates (Array, 2 doubles): [ -80.1337, 26.15987 ]\n}Iongitude is the first double in the array, latitude the second double. I am not sure how MongoDB Atlas internally represents these, but is reporting the property as longitude, even tho the schema has no such property defined. I’m also curious how this works fine under iOS, but not MacOS.Also changing the fields to nullable in the schema.dart doesn’t fix the issue for me. I removed the collection schema definition from the app service and still enountered this message.",
"username": "Josh_Whitehouse"
},
{
"code": "longitudeLocation.longitude",
"text": "Yes you have an array of two doubles and a convention that the first is longitude, but there is no property called longitude. The error complains about a type mismatch on on the Location.longitude property.Did you perhaps have that previously?",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "I didn’t have that property previously with this collection, no. And to reiterate, it’s working fine with iOS, but not the MacOS flutter app.",
"username": "Josh_Whitehouse"
}
] | Realm schema coordinates issue only with MacOSt | 2023-06-04T18:09:49.962Z | Realm schema coordinates issue only with MacOSt | 754 |
null | [
"python"
] | [
{
"code": "",
"text": "Hi Please, I appreciate a support to properly migrate the filters used in mongo with the command (getCollectionNames().filter()) in python code. Best thanks",
"username": "Mateus_Saldanha"
},
{
"code": "list_collection_names()>>> print(mongo_db.list_collection_names())\n['collection', 'newCollection', 'test_collection']\n",
"text": "Hi @Mateus_Saldanha,Perhaps pymongo list_collection_names() works for you / is what you are after?Example - Output after connecting to my MongoDB:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks Jason for your replay. Please do you know if it’s possible to filter for only one collection, for example, ‘test_’? thanks.",
"username": "Mateus_Saldanha"
},
{
"code": ">>> client.test.list_collection_names(filter=None)\n['test', 'test3']\n>>> client.test.list_collection_names(filter={'name': 'test'})\n['test']\n",
"text": "Yes you can use list_collection_names with the filter argument:",
"username": "Shane"
},
{
"code": "",
"text": "Great!!! Thank you so much for your replay!!! your comment really helped me. Thanks Shane",
"username": "Mateus_Saldanha"
},
{
"code": "",
"text": "Thanks Jason and Shane for share, your comments helped me a lot.",
"username": "Mateus_Saldanha"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to use command getCollectionNames().filter() similar in python | 2023-06-03T13:17:04.000Z | How to use command getCollectionNames().filter() similar in python | 599 |
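For anyone arriving from the mongo shell side of this question: the same listCollections filter shape works there too, so a prefix match can be pushed to the server instead of filtering getCollectionNames() in JavaScript. A small sketch:

```javascript
// getCollectionInfos() forwards its argument as the listCollections filter,
// so a regex on "name" selects only collections starting with "test_".
db.getCollectionInfos({ name: /^test_/ }).map(info => info.name)
```

The equivalent PyMongo call would pass the same filter shape with a $regex operator.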
null | [] | [
{
"code": "",
"text": "Hello,ist there a list of error codes available?I know WriteConflict is code 112 and a DevideByZero is code 16608. But is there a complete list?The only source I was able to find was this one: error.rs.html -- sourceRegards,Christian",
"username": "Christian_Kutbach"
},
{
"code": "",
"text": "I just found a list of error code in mongo-c-driver (by searching one of thecode in github), it’s inside mongoc-error.h. But really, should have a better error code solution for cxx driver, or at least say something about the error code of c driver a bit.",
"username": "Eric_Jeffrey"
},
{
"code": "",
"text": "Go here mongo-db-error-codes-list",
"username": "Vikram_Rathore"
}
] | List of error codes | 2020-08-28T12:47:57.542Z | List of error codes | 12,982 |
null | [] | [
{
"code": "",
"text": "Hello.I’m very new to Realm, so sorry if this is trivial. I’m trying to achieve the following with the flexible sync: have a user share the document with the group of users, e.g. a teacher shares assignments with all students in the class. I’m trying to figure out security rules for that.I see something similar can be achieved using this approach, but I don’t want to maintain a list of collaborators for each document (e.g. what if there is a new user added?).Is it possible instead of having a list of collaborators in the document have a list of groups and then somehow check it in the security rules if the user trying to sync this document is in one of those groups??",
"username": "TheHiddenDuck"
},
{
"code": "[\n {\n \"name\": \"teacher\",\n \"apply_when\": {\n { \"%%user.custom_data.isTeacher\": %%true }\n },\n \"document_filters\": {\n \"read\": true,\n \"write\": true,\n },\n \"read\": true,\n \"write\": true\n },\n {\n \"name\": \"student\",\n \"apply_when\": {\n { \"%%user.custom_data.isStudent\": %%true }\n },\n \"document_filters\": {\n \"read\": { \"section\": {$in: \"%%user.custom_data.sections\" } },\n \"write\": true,\n },\n \"read\": true,\n \"write\": true\n }\n]\n",
"text": "Hi. It sounds to me like you might want to use the Restricted News Feed or the Tiered Permissions model defined here: https://www.mongodb.com/docs/atlas/app-services/sync/app-builder/device-sync-permissions-guide/#restricted-news-feedThe TLDR is that you can use Custom User Data to define a mapping between users and a document that you own in your database that can be used during permission evaluation. This would let you define fields like “isAdmin” or “groups” in a document in your own cluster and have that data be used within the permissions evaluation functions using something like this:The syntax might be a little off in the above, but I hope it explains how you might be able to leverage custom user data for your application.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hello, thank you for your quick response. I will give it a try.Do I understand correctly that with this approach whenever a user joins/leaves a group, I need to add that group id to an array in users’ custom data, and then I can use that in my rules?",
"username": "TheHiddenDuck"
},
{
"code": "",
"text": "Yes, that is correct. Though that seems like what you were asking for (correct me if I am wrong). The advantage here is that is is data you get to control.",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Yes, that was what I was asking for overall. It’s just a bit unusual for me in terms of it reversing the relationships. I was thinking something like a group having a list of students, but I can see how that can be complicated in security rules to first lookup all groups the document lists, then list all user ids from those groups. I will give it a try. Thank you for help ",
"username": "TheHiddenDuck"
},
{
"code": "",
"text": "Hi, yes that is fair; however, like you mentioned having it the other way would be more difficult for a generalized and performance approach. One thing that you can do is still have your manual interactions be using the conceptual model you want and then use Database Triggers to replicate changes to the user document. IE, if you have a groups collection that you want to modify to add a user to a list for a specific group, you can then have a database trigger setup to listen for that change and replicate it to the user data collection.Best,\nTyler",
"username": "Tyler_Kaye"
}
] | Sharing document with a group | 2023-06-02T20:36:37.096Z | Sharing document with a group | 646 |
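A sketch of the database-trigger idea mentioned at the end of the thread: keep editing membership on the group documents, and let an Atlas Function mirror it into the custom user data that the sync rules read. All database, collection and field names here are assumptions, and a real version would also need a matching $pull path for removals.

```javascript
// Atlas database trigger function, fired on changes to the "groups" collection.
exports = async function (changeEvent) {
  const group = changeEvent.fullDocument;
  if (!group) { return; } // e.g. delete events without a full document

  const users = context.services
    .get("mongodb-atlas")          // default linked data source name
    .db("app")
    .collection("custom_user_data");

  // Record this group's id on every member's custom user data document.
  await users.updateMany(
    { userId: { $in: group.memberIds } },
    { $addToSet: { groups: group._id } }
  );
};
```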
null | [
"replication",
"ops-manager",
"kubernetes-operator"
] | [
{
"code": "",
"text": "I deployed ops manager in local mode and mongodb enterprise kubernetes operator.\nI created a organization and followed the kubernetes setup. I created a secret and config map with the correct api key. However when I am trying to deploy replicaset from the operator I am getting an error message “failed to create/update (ops manager reconciliation phase) status 401\nDetail: put”",
"username": "ori.simhovich"
},
{
"code": "",
"text": "Hi @ori.simhovich, unfortunately that could be down to any number of things. Your best bet is to raise a support case and they’ll be able to help check it all over with you. You can do that through the \nMongoDB Support Portal.",
"username": "Dan_Mckean"
},
{
"code": "",
"text": "Its say the the error accored in the path “Http://[ops manager service]/api/public/v1.0/grups/[project I’d]/automationConfig”",
"username": "ori.simhovich"
}
] | Mongodb enterprise kubernetes operator failed to authenticate with ops manager | 2023-06-05T07:40:36.400Z | Mongodb enterprise kubernetes operator failed to authenticate with ops manager | 692 |
null | [
"compass",
"atlas"
] | [
{
"code": "",
"text": "When I try to load web pages that require a database on a local host, the loading time increases substantially. The same issue happens when I try to browse my collection on MongoDB. The 13 mb collection does not load on the browser and results in a “data explorer operation for request timed out” error since it takes longer than 45 seconds to load. It does load on Compass but takes a long time to do so.I am using the free database tier, but this is my first time experiencing such egregious loading times.",
"username": "Kirill_K"
},
{
"code": "",
"text": "loadwhat’s this? what is your query like?\nAre you trying to find({}) on that 13m collection?",
"username": "Kobe_W"
},
{
"code": "",
"text": "Yes, we are. Our query hasn’t changed since before today, so I don’t understand why it takes a long time to load.I don’t think it has anything to do with our code since it takes about 3 minutes for me to look at a 13mb collection on MongoDB Compass.",
"username": "Kirill_K"
}
] | Database is taking too long to load | 2023-06-04T17:27:56.036Z | Database is taking too long to load | 745 |
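A small illustration of the point raised above about running find({}) on the whole collection: paging with a filter, a projection and a limit keeps both Compass and the web app from pulling every document. The collection and field names below are placeholders.

```javascript
// Return only the fields the page renders, newest first, one page at a time.
db.items.find(
  { status: "active" },              // a real filter, if the page shows a subset
  { name: 1, updatedAt: 1 }          // projection keeps each document small
).sort({ updatedAt: -1 }).limit(50)
```

The sort stays cheap only if an index covers the sorted field.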
null | [
"python",
"compass",
"atlas",
"schema-validation",
"motor-driver"
] | [
{
"code": "{\n required: [\n '_id',\n 'date',\n 'username',\n 'cheatData'\n ],\n properties: {\n _id: {\n bsonType: 'long',\n title: 'User ID',\n description: 'Holds the SteamID64 of an entered user. Is a primary key. Required.'\n },\n date: {\n bsonType: 'date',\n title: 'Addition Date',\n description: 'Holds a MongoDB Date that represents the date and time that the data was added. Required.'\n },\n username: {\n bsonType: 'string',\n title: 'Username',\n description: 'The current username of a given user, obtained via Steam API. Required.'\n },\n aliases: {\n bsonType: 'array',\n title: 'Aliases',\n description: 'All past aliases of the user, obtained via Steam API.',\n items: {\n bsonType: 'string'\n }\n },\n friends: {\n bsonType: 'array',\n title: 'Friends',\n description: 'The current friends of the user, obtained via Steam API.',\n items: {\n bsonType: 'long'\n }\n },\n cheatData: {\n bsonType: 'object',\n title: 'Cheat Data',\n description: 'An object that contains what the user is logged as, reasons why if they\\'re not innocent, and optional evidence links to prove they cheat. Required entries: flag, isBot.',\n required: [\n 'flag, isBot'\n ],\n properties: {\n flag: {\n bsonType: 'string',\n title: 'Flag',\n description: 'What level of suspicion the user is at. Required.',\n 'enum': [\n 'innocent',\n 'watched',\n 'suspicious',\n 'cheater'\n ]\n },\n isBot: {\n bsonType: 'bool',\n title: 'Is Bot',\n description: 'A simple boolean to say if the user is a bot. Required.'\n },\n infractions: {\n bsonType: 'array',\n title: 'Infractions',\n description: 'All possible or confirmed infractions/cheats the user has demonstrated employing. Should only be filled out if the flag is suspicious or cheater.',\n items: {\n bsonType: 'string'\n }\n },\n evidence: {\n bsonType: 'array',\n title: 'Evidence',\n description: 'All evidence that is used to prove the user is a cheater. Not required, heavily encouraged.',\n items: {\n bsonType: 'string'\n }\n }\n }\n },\n overrideName: {\n bsonType: 'string',\n title: 'Custom Name',\n description: 'Overrides the username when displayed on any data visualizer or displayer. Should be used if a cheater changes their name often.'\n }\n }\n}\n{\n \"_id\": {\n \"$numberLong\": \"76561198818675138\"\n },\n \"username\": \"test\",\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1685904702000\"\n }\n },\n \"aliases\": [\n \"test\",\n \"test the code\"\n ],\n \"friends\": [\n {\n \"$numberLong\": \"76561198818675136\"\n },\n {\n \"$numberLong\": \"76561198818675136\"\n }\n ],\n \"cheatData\": {\n \"flag\": \"innocent\",\n \"isBot\": false\n },\n \"overrideName\": \"hello\"\n}\n{\n \"failingDocumentId\": {\n \"$numberLong\": \"76561198818675138\"\n },\n \"details\": {\n \"operatorName\": \"$and\",\n \"clausesNotSatisfied\": [\n {\n \"index\": {\n \"$numberInt\": \"0\"\n },\n \"details\": {\n \"operatorName\": \"$eq\",\n \"specifiedAs\": {\n \"required\": [\n \"_id\",\n \"date\",\n \"username\",\n \"cheatData\"\n ]\n },\n \"reason\": \"field was missing\"\n }\n },\n {\n \"index\": {\n \"$numberInt\": \"1\"\n },\n \"details\": {\n \"operatorName\": \"$eq\",\n \"specifiedAs\": {\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"long\",\n \"title\": \"User ID\",\n \"description\": \"Holds the SteamID64 of an entered user. Is a primary key. Required.\"\n },\n \"date\": {\n \"bsonType\": \"date\",\n \"title\": \"Addition Date\",\n \"description\": \"Holds a MongoDB Date that represents the date and time that the data was added. 
Required.\"\n },\n \"username\": {\n \"bsonType\": \"string\",\n \"title\": \"Username\",\n \"description\": \"The current username of a given user, obtained via Steam API. Required.\"\n },\n \"aliases\": {\n \"bsonType\": \"array\",\n \"title\": \"Aliases\",\n \"description\": \"All past aliases of the user, obtained via Steam API.\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"friends\": {\n \"bsonType\": \"array\",\n \"title\": \"Friends\",\n \"description\": \"The current friends of the user, obtained via Steam API.\",\n \"items\": {\n \"bsonType\": \"long\"\n }\n },\n \"cheatData\": {\n \"bsonType\": \"object\",\n \"title\": \"Cheat Data\",\n \"description\": \"An object that contains what the user is logged as, reasons why if they're not innocent, and optional evidence links to prove they cheat. Required entries: flag, isBot.\",\n \"required\": [\n \"flag, isBot\"\n ],\n \"properties\": {\n \"flag\": {\n \"bsonType\": \"string\",\n \"title\": \"Flag\",\n \"description\": \"What level of suspicion the user is at. Required.\",\n \"enum\": [\n \"innocent\",\n \"watched\",\n \"suspicious\",\n \"cheater\"\n ]\n },\n \"isBot\": {\n \"bsonType\": \"bool\",\n \"title\": \"Is Bot\",\n \"description\": \"A simple boolean to say if the user is a bot. Required.\"\n },\n \"infractions\": {\n \"bsonType\": \"array\",\n \"title\": \"Infractions\",\n \"description\": \"All possible or confirmed infractions/cheats the user has demonstrated employing. Should only be filled out if the flag is suspicious or cheater.\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"evidence\": {\n \"bsonType\": \"array\",\n \"title\": \"Evidence\",\n \"description\": \"All evidence that is used to prove the user is a cheater. Not required, heavily encouraged.\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n }\n }\n },\n \"overrideName\": {\n \"bsonType\": \"string\",\n \"title\": \"Custom Name\",\n \"description\": \"Overrides the username when displayed on any data visualizer or displayer. Should be used if a cheater changes their name often.\"\n }\n }\n },\n \"reason\": \"field was missing\"\n }\n }\n ]\n }\n}\n",
"text": "Very new to MongoDB and tried to make a collection with validation, but I have kept trying and failing to insert some data into it for the first time. Lots of trial and error but continued failures, even from putting the data in via Compass, Motor, and web browser. Here’s the schema, test data I’ve tried inserting, and then the error I get from the web browser version of Atlas.I just don’t understand what’s going wrong or where due to how new I am, so I would appreciate it if I could get an explanation as to what’s going wrong here. Thanks to anyone who attempts to help!SchemaTest DataError\nDocument failed validation:Edited to include @Jack_Woehr’s fix",
"username": "pinheadtf2"
},
{
"code": "\"cheatData\": {\n \"flag\": \"innocent\",\n \"isBot\": \"false\"\n },\n\"isBot\": false",
"text": "Maybe \"isBot\": false ?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "That did fix one issue, but the overarching problem still remains. Sure wouldn’t have caught it, thanks for the help!",
"username": "pinheadtf2"
},
{
"code": "",
"text": "Output still the same?",
"username": "Jack_Woehr"
},
{
"code": "required: [\n 'flag, isBot'\n ]\n{\n \"title\": \"userList\",\n \"required\": [\n \"_id\",\n \"date\",\n \"username\",\n \"cheatData\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\",\n \"title\": \"User ID\",\n \"description\": \"Holds the Steam3 ID of an entered user. Is a primary key. Required.\"\n },\n \"date\": {\n \"bsonType\": \"date\",\n \"title\": \"Addition Date\",\n \"description\": \"Holds a MongoDB Date that represents the date and time that the data was added. Required.\"\n },\n \"username\": {\n \"bsonType\": \"string\",\n \"title\": \"Username\",\n \"description\": \"The current username of a given user, obtained via Steam API. Required.\"\n },\n \"aliases\": {\n \"bsonType\": \"array\",\n \"title\": \"Aliases\",\n \"description\": \"All past aliases of the user, obtained via Steam API.\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"friends\": {\n \"bsonType\": \"array\",\n \"title\": \"Friends\",\n \"description\": \"The current friends of the user, obtained via Steam API.\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"cheatData\": {\n \"bsonType\": \"object\",\n \"title\": \"Cheat Data\",\n \"description\": \"An object that contains what the user is logged as, reasons why if they're not innocent, and optional evidence links to prove they cheat. Required entries: flag, isBot.\",\n \"required\": [\n \"flag\",\n \"isBot\"\n ],\n \"properties\": {\n \"flag\": {\n \"bsonType\": \"string\",\n \"title\": \"Flag\",\n \"description\": \"What level of suspicion the user is at. Required.\"\n },\n \"isBot\": {\n \"bsonType\": \"bool\",\n \"title\": \"Is Bot\",\n \"description\": \"A simple boolean to say if the user is a bot. Required.\"\n },\n \"infractions\": {\n \"bsonType\": \"array\",\n \"title\": \"Infractions\",\n \"description\": \"All possible or confirmed infractions/cheats the user has demonstrated employing. Should only be filled out if the flag is suspicious or cheater.\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"evidence\": {\n \"bsonType\": \"array\",\n \"title\": \"Evidence\",\n \"description\": \"All evidence that is used to prove the user is a cheater. Not required, heavily encouraged.\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n }\n }\n },\n \"overrideName\": {\n \"bsonType\": \"string\",\n \"title\": \"Custom Name\",\n \"description\": \"Overrides the username when displayed on any data visualizer or displayer. 
Should be used if a cheater changes their name often.\"\n }\n }\n}\n{\n \"_id\": \"[U:1:1341943403]\",\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1685904702000\"\n }\n },\n \"username\": \"OMEGATRONIC\",\n \"aliases\": [\n \"OMEGATRONIC\"\n ],\n \"friends\": [\n \"[U:1:1546566598]\",\n \"[U:1:1545378816]\"\n ],\n \"cheatData\": {\n \"flag\": \"cheater\",\n \"isBot\": true,\n \"infractions\": [\n \"cathook\"\n ],\n \"evidence\": [\n \"linkgoeshere\"\n ]\n },\n \"overrideName\": \"hello\"\n}\n{\n \"_id\": \"[U:1:3333333333]\",\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1685904702001\"\n }\n },\n \"username\": \"OMEGATRONIC\",\n \"aliases\": [\n \"OMEGATRONIC\"\n ],\n \"friends\": [\n \"[U:1:1111111111]\",\n \"[U:1:2222222222]\"\n ],\n \"cheatData\": {\n \"flag\": \"cheater\",\n \"isBot\": true,\n \"infractions\": [\n \"cathook\"\n ],\n \"evidence\": [\n \"linkgoeshere\"\n ]\n },\n \"overrideName\": \"hello\"\n}\n{\n \"failingDocumentId\": \"[U:1:3333333333]\",\n \"details\": {\n \"operatorName\": \"$and\",\n \"clausesNotSatisfied\": [\n {\n \"index\": {\n \"$numberInt\": \"0\"\n },\n \"details\": {\n \"operatorName\": \"$eq\",\n \"specifiedAs\": {\n \"title\": \"userList\"\n },\n \"reason\": \"field was missing\"\n }\n },\n {\n \"index\": {\n \"$numberInt\": \"1\"\n },\n \"details\": {\n \"operatorName\": \"$eq\",\n \"specifiedAs\": {\n \"required\": [\n \"_id\",\n \"date\",\n \"username\",\n \"cheatData\"\n ]\n },\n \"reason\": \"field was missing\"\n }\n },\n {\n \"index\": {\n \"$numberInt\": \"2\"\n },\n \"details\": {\n \"operatorName\": \"$eq\",\n \"specifiedAs\": {\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\",\n \"title\": \"User ID\",\n \"description\": \"Holds the Steam3 ID of an entered user. Is a primary key. Required.\"\n },\n \"date\": {\n \"bsonType\": \"date\",\n \"title\": \"Addition Date\",\n \"description\": \"Holds a MongoDB Date that represents the date and time that the data was added. Required.\"\n },\n \"username\": {\n \"bsonType\": \"string\",\n \"title\": \"Username\",\n \"description\": \"The current username of a given user, obtained via Steam API. Required.\"\n },\n \"aliases\": {\n \"bsonType\": \"array\",\n \"title\": \"Aliases\",\n \"description\": \"All past aliases of the user, obtained via Steam API.\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"friends\": {\n \"bsonType\": \"array\",\n \"title\": \"Friends\",\n \"description\": \"The current friends of the user, obtained via Steam API.\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"cheatData\": {\n \"bsonType\": \"object\",\n \"title\": \"Cheat Data\",\n \"description\": \"An object that contains what the user is logged as, reasons why if they're not innocent, and optional evidence links to prove they cheat. Required entries: flag, isBot.\",\n \"required\": [\n \"flag\",\n \"isBot\"\n ],\n \"properties\": {\n \"flag\": {\n \"bsonType\": \"string\",\n \"title\": \"Flag\",\n \"description\": \"What level of suspicion the user is at. Required.\"\n },\n \"isBot\": {\n \"bsonType\": \"bool\",\n \"title\": \"Is Bot\",\n \"description\": \"A simple boolean to say if the user is a bot. Required.\"\n },\n \"infractions\": {\n \"bsonType\": \"array\",\n \"title\": \"Infractions\",\n \"description\": \"All possible or confirmed infractions/cheats the user has demonstrated employing. 
Should only be filled out if the flag is suspicious or cheater.\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"evidence\": {\n \"bsonType\": \"array\",\n \"title\": \"Evidence\",\n \"description\": \"All evidence that is used to prove the user is a cheater. Not required, heavily encouraged.\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n }\n }\n },\n \"overrideName\": {\n \"bsonType\": \"string\",\n \"title\": \"Custom Name\",\n \"description\": \"Overrides the username when displayed on any data visualizer or displayer. Should be used if a cheater changes their name often.\"\n }\n }\n },\n \"reason\": \"field was missing\"\n }\n }\n ]\n }\n}\n",
"text": "So a combination of things has happened:The AS validator states no errors:\n\nimage1657×886 45.9 KB\nHowever, if I attempt to use Compass, with the same pasted schema mind you:\n\nimage1133×601 44.1 KB\nI genuinely don’t understand this stuff, I’ll post the updated schemas, test files, and errors but this is truly pain inducing at this point.Updated SchemaUpdated Test File Numero Uno y Dos\nFile 1 (Currently In Database):File 2:Error",
"username": "pinheadtf2"
},
{
"code": "",
"text": "Further Update:\nEven if I removed the required arguments on a fresh off the printer collection, it still continues to error stating I have missing fields.???",
"username": "pinheadtf2"
},
{
"code": "",
"text": "Are you saying one of those two files passes validation and the other does not?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I temporarily disabled validation through Compass by switching it to warn instead of error which let it go through, to let me test it with App Services’ schema validator. It passed the tests there, and the other file does as well.Attempting to insert after switching back to error results in failure.",
"username": "pinheadtf2"
},
{
"code": "",
"text": "I went thru something like this once with MongoDB validation, and it turned out that I had spelled something 2 different ways in 2 different places. ISTR I looked at it for a week before I finally spotted it.",
"username": "Jack_Woehr"
}
] | Validation states "field was missing" despite them existing | 2023-06-04T20:55:45.768Z | Validation states “field was missing” despite them existing | 806 |
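A side note on the thread above: the Compass error output shows $eq checks against fields literally named "title", "required", and "properties" ("field was missing"), which can happen if a schema is applied as a plain match-expression validator instead of being wrapped in $jsonSchema; the first schema's nested required list ['flag, isBot'] is also a single string rather than two entries. Below is a minimal sketch of applying a wrapped validator and inserting a passing document. It assumes pymongo; the connection string and database name are placeholders, not taken from the thread.

```python
# Sketch only: pymongo assumed; URI and db name are placeholders.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["test_db"]

validator = {
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["_id", "date", "username", "cheatData"],
        "properties": {
            "_id": {"bsonType": "string"},
            "date": {"bsonType": "date"},
            "username": {"bsonType": "string"},
            "cheatData": {
                "bsonType": "object",
                "required": ["flag", "isBot"],   # two separate strings, not "flag, isBot"
                "properties": {
                    "flag": {"enum": ["innocent", "watched", "suspicious", "cheater"]},
                    "isBot": {"bsonType": "bool"},
                },
            },
        },
    }
}

# Apply the validator to an existing collection (or pass validator= to create_collection).
db.command("collMod", "userList",
           validator=validator,
           validationLevel="strict",
           validationAction="error")

# A document that should pass: every required field present and of the declared BSON type.
db["userList"].insert_one({
    "_id": "[U:1:1341943403]",
    "date": datetime.now(timezone.utc),     # stored as a BSON date, matching bsonType "date"
    "username": "OMEGATRONIC",
    "cheatData": {"flag": "cheater", "isBot": True},
})
```

With strict validation in place, re-running the insert with any required field missing or mistyped (for example isBot as the string "false") reproduces the "Document failed validation" error shown in the thread.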
null | [
"atlas-cluster",
"rust"
] | [
{
"code": " DnsResolve { message: \"No connections available\" }use mongodb::{bson::doc, options::ClientOptions, Client};\n\nuse std::error::Error;\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn Error>> {\n let uri = \"mongodb+srv://<uname>:<pw>@cluster0.cugioru.mongodb.net/?retryWrites=true&w=majority\";\n let mut client_options = ClientOptions::parse(uri).await.unwrap();\n\n client_options.app_name = Some(\"My App\".to_string());\n\n ERROR HERE >>> let client: Client = Client::with_options(client_options).unwrap();\n let mut client_options = ClientOptions::parse(uri).await.unwrap();\n\n ... Do stuff with client\n\n \n Ok(())\n}\n",
"text": "Hello,I’ve been trying to run a very simple program, to test out mongo with rust. However, I can’t seem to connect at allI keep getting DnsResolve { message: \"No connections available\" }\nMy coworkers, running the exact same code are getting no errors. So it must be a problem on my side but there is no documentation or info anywhere to help me find out what this problem could be.Here is the code, with credentials obstructed obvHelp is appreciated. Thanks in advance",
"username": "FQuark"
},
{
"code": " DnsResolve { message: \"No connections available\" }nslookup -type=srv cluster0.cugioru.mongodb.net\nnslookup -type=txt cluster0.cugioru.mongodb.net\n",
"text": "Hi @FQuark,I keep getting DnsResolve { message: \"No connections available\" }\nMy coworkers, running the exact same code are getting no errors. So it must be a problem on my side but there is no documentation or info anywhere to help me find out what this problem could be.I’m not too familiar with rust but based off the error, it might be related to the DNS resolution of the SRV record. Are your co-workers connecting to the cluster using a different network?Could you also try the following from the same client and advise the output?From my own network I was able to resolve the 3 hostnames associated with the above record.Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hello,Sorry for the very long delay. I was using a workaround using a different machine which isn’t very convenient.My co-workers are indeed working from a different network. This seems to be machine specific as other devices on my network are able to connect.Running those lookups, none of them are connecting on my work machine, but they all work on my personal one.I’m running Arch linux-lts 6.1.31-1 on Wayland, and iwd on its own for network stuff.Thanks for the help, sorry again for the delay",
"username": "FQuark"
},
{
"code": "",
"text": "My co-workers are indeed working from a different network. This seems to be machine specific as other devices on my network are able to connect.Running those lookups, none of them are connecting on my work machine, but they all work on my personal one.I do agree with your assessment here regarding the issue being specific to the problem machine. In saying so, it doesn’t appear that this is a MongoDB related issue. Unfortunately I am not too familiar with the environment you have specified below and how to troubleshoot DNS issues here:I’m running Arch linux-lts 6.1.31-1 on Wayland, and iwd on its own for network stuff.Generally the Google DNS servers are able to resolve the Atlas DNS SRV records - I am not sure of the steps of how to force your machine to use this DNS but is it something you have tried from a troubleshooting perspective?Regards,\nJason",
"username": "Jason_Tran"
}
] | Rust driver : DnsResolve { message: "No connections available" } | 2023-05-18T18:19:25.517Z | Rust driver : DnsResolve { message: “No connections available” } | 865 |
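For anyone hitting the same machine-specific failure, the nslookup checks suggested above can also be scripted. A rough sketch follows; it assumes the dnspython package (not mentioned in the thread), uses the cluster hostname from the thread, and shows Google's public resolvers purely to test whether the locally configured DNS is at fault.

```python
# Sketch only: assumes "pip install dnspython"; resolver addresses are optional overrides.
import dns.resolver

host = "cluster0.cugioru.mongodb.net"  # hostname from the thread

resolver = dns.resolver.Resolver()
# Uncomment to bypass the machine's configured DNS and use a public resolver instead:
# resolver.nameservers = ["8.8.8.8", "8.8.4.4"]

try:
    srv = resolver.resolve(f"_mongodb._tcp.{host}", "SRV")
    for record in srv:
        print("SRV ->", record.target, record.port)

    txt = resolver.resolve(host, "TXT")
    for record in txt:
        print("TXT ->", record.to_text())
except Exception as exc:
    print("DNS lookup failed:", exc)
```

If this script fails only on the problem machine, the issue is the machine's DNS configuration rather than the driver or the cluster.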
[] | [
{
"code": "",
"text": "It must be an object. While I am testing its never return null.",
"username": "Sharkman_N_A"
},
{
"code": "",
"text": "I am inserting like that",
"username": "Sharkman_N_A"
},
{
"code": "",
"text": "Ho @Sharkman_N_A,\nI think it’s because you’re indicating a field anyway, without an associated value, so it automatically initializes the value to null.BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "I think it’s because you’re indicating a field anyway, without an associated value, so it automatically initializes the value to null.Its indicated field. Sometimes buyer object is returning null\n\nimage1078×62 9.43 KB\n",
"username": "Sharkman_N_A"
}
] | Sometimes object writing null but app is not sending null value | 2023-06-04T17:48:20.798Z | Sometimes object writing null but app is not sending null value | 392 |
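If the unexpected nulls come from the writing side, one common cause is sending the key with an empty value, as the reply above suggests. A minimal defensive sketch, assuming pymongo; the collection and field names (including buyer) are illustrative, not taken from the app in the thread.

```python
# Sketch only: pymongo assumed; names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

def update_order(order_id, buyer=None, status=None):
    # Only include keys whose value is actually present, so an absent value
    # never overwrites the stored field with null.
    update = {k: v for k, v in {"buyer": buyer, "status": status}.items() if v is not None}
    if update:
        orders.update_one({"_id": order_id}, {"$set": update})

# A collection validator can additionally reject explicit nulls at write time:
client["shop"].command(
    "collMod", "orders",
    validator={"$jsonSchema": {"properties": {"buyer": {"bsonType": "object"}}}},
)
```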
|
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "",
"text": "I would greatly appreciate any insights, suggestions, or solutions from the community to help resolve this problem. Thank you in advance for your assistance!",
"username": "saka_oluwasola"
},
{
"code": "",
"text": "Hello @saka_oluwasola and welcome to the community!Have you tried connecting with mongosh? This is the simplest health check for the network connection.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Thanks so much have tried to connecting with mongodb but I can’t be able to do it",
"username": "saka_oluwasola"
},
{
"code": "mongosh",
"text": "Not mongodb … the mongosh command.",
"username": "Jack_Woehr"
}
] | MongoNetworkError: connection to <your MongoDB Atlas cluster address> closed | 2023-06-03T19:57:06.528Z | MongoNetworkError: connection to <your MongoDB Atlas cluster address> closed | 888 |
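The mongosh suggestion above can also be reproduced from code. Here is a minimal connectivity health check; it assumes pymongo, and the URI shown is a placeholder for the actual Atlas connection string.

```python
# Sketch only: pymongo assumed; replace the placeholder URI with the real connection string.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

uri = "mongodb+srv://<user>:<password>@<cluster-address>/?retryWrites=true&w=majority"

client = MongoClient(uri, serverSelectionTimeoutMS=5000)
try:
    client.admin.command("ping")   # same sanity check as connecting with mongosh
    print("Connection OK")
except PyMongoError as exc:
    print("Connection failed:", exc)
```

If the ping fails with a server-selection timeout, the cause is usually network access (Atlas IP access list, firewall, or DNS) rather than the application code.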
[
"delhi-mug"
] | [
{
"code": "Partner Solutions Architect at MongoDBSoftware Engineer, Community @ MongoDBLead - MUG Delhi NCR | Software Engineer @ SAP LabsLead - MUG Delhi NCR | Founder @CosmoCloud Lead - MUG Delhi NCR",
"text": "\nMUG Delhi-NCR - Event Deck960×540 88.4 KB\nDelhi-NCR MongoDB User Group is hosting a meetup on 3rd June 2023 @ MongoDB Office, Gurugram for MongoDB Community in the region.RSVP to join the Waitlist: Please click on the “ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you RSVPed. Join us for some amazing tech sessions, networking, and fun. Meet other MongoDB Developers, Enthusiasts, Customers, and Experts to get all the required knowledge and ideas you need to build your giant idea.RSVP to join the Waitlist: Please click on the “ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you’ve RSVPed. Stay tuned for more updates! In the meantime make sure you join the Delhi-NCR Group to introduce yourself and stay abreast with future meetups and discussions.Event Type: In-Person\n Location: 8th Floor, MongoDB Office, Gurugram .\n Floor 8th, Building - 10C, DLF Cyber City, Sector 24, Gurugram, Haryana 122001Please Note: We have limited seats available for the event. RSVP on the event page to express your interest and enter the waitlist. We will contact you to collect more information and confirm your attendance.Event Type: In-Person\nLocation: 8th Floor, MongoDB Office, Gurugram \nutsav_talwar1339×1675 372 KB\nPartner Solutions Architect at MongoDB\nLuv607×762 73.2 KB\n\nKushagra_Kesav_1632×2190 359 KB\nSoftware Engineer, Community @ MongoDBLead - MUG Delhi NCR | Software Engineer @ SAP LabsLead - MUG Delhi NCR | Founder @CosmoCloud Lead - MUG Delhi NCR",
"username": "Priyanka_Taneja"
},
{
"code": "",
"text": "Really looking forward to attend, learn and connect/network with the speakers and other attendees.\nGreat opportunity!",
"username": "Divyansh_Agrawal"
},
{
"code": "",
"text": "Wow Excited for the event . Last Mongo DB Event at Mongo DB Office was just Amazing . Looking forward for again wonderful Experience!! :))",
"username": "Yash_Sisodia27"
},
{
"code": "",
"text": "Last time I wasn’t able to join the meet. Let’s meet this time. Excited to meet some great personalities.",
"username": "Ratin_Tech"
},
{
"code": "",
"text": "Would be my first time attending this event. looking forward to it",
"username": "HARSHIT_RAJ_21BCG10006"
},
{
"code": "",
"text": "Wasn’t able to get shortlisted last time :(, Looking forward to attending the meetup this time!",
"username": "Narayan_Soni"
},
{
"code": "",
"text": "This will be my first offline event. Hoping for get great experience and create more connections",
"username": "Abhay_Mishra"
},
{
"code": "",
"text": "Hello, by when we can get confirmation mail for the event so that we can book our train tickets and take permission form our college. Getting delayed will make it more difficult, so please provide confirmation mail as quickly as possible. Thank you.",
"username": "HARSHIT_RAJ_21BCG10006"
},
{
"code": "",
"text": "By when we will get confirmation mail???",
"username": "Harshit_Raj2"
},
{
"code": "",
"text": "Hey Harshit,\nWe will be rolling out a form tomorrow to have everyone who registered confirm if they are planning to attend. Based on the responses we will share the confirmation emails by mid-next week.",
"username": "Harshit"
},
{
"code": "",
"text": "But isn’t the event happening on 3rd june itself. Shouldn’t an important thing like a confirmation mail be send out a week prior before the event so that students can plan accordingly.",
"username": "HARSHIT_RAJ_21BCG10006"
},
{
"code": "",
"text": "Can’t we get confirmation mail a bit earlier because we have to take permission to leave college and it takes time.",
"username": "Harshit_Raj2"
},
{
"code": "",
"text": "The form is not yet received, Could you please send the confirmation mail as soon as possible so that we can take permission from college.",
"username": "Sudhir_Venkatesh"
},
{
"code": "",
"text": "I didn’t receive form till now, Could you please send the confirmation mail as soon as possible so that we can take permission from college.",
"username": "Abhay_Mishra"
},
{
"code": "",
"text": "Hey @Sudhir_Venkatesh, @Harshit_Raj2 and @HARSHIT_RAJ_21BCG10006,\nThe email to confirm your attendance has been sent out. Please express your interest in the form linked in the email. We understand your concerns regarding seeking permission from the college authorities.We send “emails to confirm” only when RSVPs exceed the venue’s capacity, accounting for expected dropouts. If the threshold is not reached, we directly confirm everyone on the waitlist. Thus, we wait until the number of RSVPs reaches a certain count before sending out “emails to confirm.”Furthermore, we noticed a significant decline in participation when confirmation was obtained a couple of weeks before the event. To improve attendance and planning, we now request confirmations one week before the event to accommodate any last-minute changes anyone might have.Looking forward to seeing you at the event Feel free to DM - if you need anything else to help expedite your college permission!",
"username": "Harshit"
},
{
"code": "",
"text": "It will be my first time attending this event. looking forward to it",
"username": "Shubham_Jaiswal2"
},
{
"code": "",
"text": "I didn’t receive form yet , Can you please send the confirmation mail or any information about any slots left?",
"username": "Shubham_Jaiswal2"
},
{
"code": "",
"text": "Hey Shubham,\nWe don’t see you RSVPed for the event and unfortunately, at the moment, we are overbooked for the event Please join the group so that we can keep you informed about the upcoming events: https://www.mongodb.com/community/forums/delhi-mug",
"username": "Harshit"
},
{
"code": "",
"text": "I didn’t recieve any form as well for the confirmation. Please help. I rsvp’d in top 100",
"username": "Narayan_Soni"
},
{
"code": "",
"text": "Is there any entry pass or mail is the only confirmation",
"username": "Ujjwal_Gupta"
}
] | Delhi-NCR MUG: MongoDB Delhi NCR June Meetup | 2023-05-17T08:39:53.703Z | Delhi-NCR MUG: MongoDB Delhi NCR June Meetup | 4,586 |
|
null | [
"aggregation"
] | [
{
"code": "A: 1-n :B\nA: 1-n :C\nA: 1-1 :D\nA: 1-1 :E\n[\n {\n \"$match\": {\n //Fields from collection A\n }\n },\n {\n \"$lookup\": {\n \"from\": \"B\",\n \"localField\": \"BId\",\n \"foreignField\": \"_id\",\n \"as\": \"B\"\n }\n },\n {\n \"$lookup\": {\n \"from\": \"C\",\n \"localField\": \"CId\",\n \"foreignField\": \"_id\",\n \"as\": \"C\"\n }\n },\n {\n \"$lookup\": {\n \"from\": \"D\",\n \"localField\": \"_id\",\n \"foreignField\": \"leadId\",\n \"as\": \"D\"\n }\n },\n {\n \"$lookup\": {\n \"from\": \"E\",\n \"localField\": \"_id\",\n \"foreignField\": \"leadId\",\n \"as\": \"E\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$B\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$C\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$D\",\n \"preserveNullAndEmptyArrays\": true\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$E\",\n \"preserveNullAndEmptyArrays\": true\n }\n },\n {\n \"$match\": {\n //Fields from collection B,C,D,E\n }\n },\n {\n \"$sort\": {\n //Fields from all the collections\n }\n },\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 20\n }\n]\n[\n {\n \"$lookup\": {\n \"from\": \"B\",\n \"localField\": \"BId\",\n \"foreignField\": \"_id\",\n \"as\": \"B\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$B\",\n \"preserveNullAndEmptyArrays\": false\n }\n },\n {\n \"$match\": {\n //Fields from collection B\n }\n },\n {\n \"$sort\": {\n //Fields from collection B\n }\n },\n {\n \"$lookup\": {\n \"from\": \"C\",\n \"localField\": \"CId\",\n \"foreignField\": \"_id\",\n \"as\": \"C\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$C\",\n \"preserveNullAndEmptyArrays\": false\n }\n },\n {\n \"$match\": {\n //Fields from collection C\n }\n },\n {\n \"$sort\": {\n //Fields from collection C\n } \n },\n {\n \"$lookup\": {\n \"from\": \"D\",\n \"localField\": \"_id\",\n \"foreignField\": \"leadId\",\n \"as\": \"D\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$D\",\n \"preserveNullAndEmptyArrays\": true\n }\n },\n {\n \"$match\": {\n //Fields from collection D\n }\n },\n {\n \"$sort\": {\n //Fields from collection D\n }\n },\n {\n \"$lookup\": {\n \"from\": \"Es\",\n \"localField\": \"_id\",\n \"foreignField\": \"leadId\",\n \"as\": \"E\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$E\",\n \"preserveNullAndEmptyArrays\": true\n }\n },\n {\n \"$match\": {\n //Fields from collection E\n }\n },\n {\n \"$sort\": {\n //Fields from collection E\n }\n },\n {\n \"$skip\": 20\n },\n {\n \"$limit\": 20\n }\n]\n",
"text": "Hi\nI’m having some trouble with a specific aggregation that have a poor performance and would really appreciate help with optimizing it.my db is consist with 4 collections, for example, A,B,C,D,E They are connected as follow:In my application i need to return an array of documents from collection A with all the fields from all the other collections, and will need to filer + sort by every field within the 5 collections (according to a user operation)\nMy aggregation was initially build as follows:after some reading i figure out i need to match + sort fields as soon as possible in order to eliminates document, so i rewrote my aggregation as follows:It did improved the performance in some of the cases, but i’ve also upgraded mongo from 4.4 to 6.0.\nBecause of the slot-based execution mechanism in mongo 6.0 my previous aggregation actually having better performance in some cases, so i’m kind of back to square one.My question is:Thanks for the help",
"username": "Shaked_Hadas"
},
{
"code": "",
"text": "A few things1 - all your $match stages should be move into the corresponding $lookup stage\n2 - you should remove all $sort stages except the last one\n3 - you do not need to $unwind before doing $lookup on an arrayRequiring 4 $lookup for a paging use-case seems abusing.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you for your help! i will look into you suggestions!\none question about the unwind, could you elaborate on that? i’m using $unwind after $lookup (not before) because in 1:1 or 1:N scenarios lookup returns an array with 1 item.EDIT: I just tried moving the $match into the $lookups but having a $pipeline in a $lookup stage also break the slot-based execution mechanism",
"username": "Shaked_Hadas"
},
{
"code": "",
"text": "question about the unwindI misread the code. I thought you were using the result array of 1 $lookup to perform the next $lookup and this is not the case.the slot-based execution mechanismI will have to read about that since I am not familiar.",
"username": "steevej"
}
] | Help with aggregation optimization | 2023-06-01T12:46:24.630Z | Help with aggregation optimization | 447 |
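To make the advice above concrete, here is a rough sketch of the restructured pipeline for just the A-to-B join. It assumes pymongo and MongoDB 5.0+ (which allows combining localField/foreignField with an embedded pipeline in $lookup); the filter values, sort key, and connection string are placeholders, not taken from the thread.

```python
# Sketch only: pymongo assumed; field names follow the A/B example above, values are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]

pipeline = [
    {"$match": {"status": "open"}},                 # filter on A's own fields first
    {"$lookup": {
        "from": "B",
        "localField": "BId",
        "foreignField": "_id",
        "pipeline": [
            {"$match": {"region": "EU"}},           # filter on B inside the join itself
        ],
        "as": "B",
    }},
    {"$unwind": "$B"},                              # drops A documents whose joined B was filtered out
    {"$sort": {"B.createdAt": -1}},                 # a single sort, as late as possible
    {"$skip": 0},
    {"$limit": 20},
]

results = list(db["A"].aggregate(pipeline))
```

The same pattern repeats for C, D, and E. Whether this shape or the original one wins under the 6.0 slot-based engine is workload-dependent, so it is worth comparing both with explain output.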
null | [
"swift"
] | [
{
"code": "self.observerfunc showConnectionStatus(realm: Realm) {\n let session = realm.syncSession\n\n // Observe connectionState for changes using KVO\n self.observer = session!.observe(\\.connectionState, options: [.initial]) { (syncSession, change) in\n switch syncSession.connectionState {\n case .connecting:\n print(\" -> Connecting...\")\n case .connected:\n print(\" -> Connected\")\n case .disconnected:\n print(\" -> Disconnected\")]\n default:\n break\n }\n }\n}\n",
"text": "Our project needs to know if it’s got a connection to Realm (Flexible Sync) or not. There are some tasks that should not happen if the app is offline (deleting a certain object for example).We are using KVO to determine connection status - however, when the app goes from connected to disconnected it can sometimes take up a minute or longer for the event to fire (the opposite happens almost instantly)Any suggestions on how to know if the app has gone offline/disconnected faster?Here’s the code we’re using for tracking connection status. self.observer is a NSKeyValueObservation",
"username": "Jay"
},
{
"code": "",
"text": "Would you mind clarifying what you mean by deleting an object while it’s offline? Is that due to a local aggregation or index on the client?Would you mind giving some more detail in regards to this use case? Is this an IoT device or service per chance? There’s other ways to do this would be to logically construct a dispatch queue and have that queue just sever the connection as necessary or open the Realm connection.That’s what I’ve seen done and have actually done on an iOS and Android App for a devices remote control system.",
"username": "Brock"
},
{
"code": "if let syncSession = realm.syncSession {\n syncSession.suspend()\n} else {\n print(\" no syncSession\")\n}\n",
"text": "@Brock Thanks for taking a look.This use case is a multi-user contacts app. Contact details can be added, and while they are being added the contact is considered “in-use”. Details would be “called contact” or “texted contact”. Contacts can also be deleted if they are not 'in-use\". In the UI, a user can see a list of contacts and then select one to get details. From there, additional details can be added.The objective is preventing a contact from being deleted while its in the process of being added to.What we do in code, when the user selects a contact, we flag it as “in-use” which syncs to the server. That makes it so other users can see it’s in use, and prevents other users from deleting it while it’s “in-use”Users that are offline cannot add to or delete a contact. That would be a syncing nightmare if an offline user deleted a bunch of contacts that were actually in use by other users that were online. Delete’s always win so when that user went back online, poof - there goes the contacts everyone else were using.When a user is offline, the “in-use” status of a contact is unknown. So in code we prevent the user from deleting anything.The above code works perfectly if we get a syncSession and suspend it in code like this…it fires immediately. But that’s not the use case. This use case is if the device disconnects from the server due to dropped internet.For example; if I am running that code on the hard-wired iMac I am sitting in front of and pull out the ethernet jack (simulating a dropped internet connection). The code does not fire immediately - it can take over a minute for Realm to know it’s offline. We’re looking for something to indicate offline status faster - say within a few seconds. Many other databases have that ability.Edit:In testing, when an internet connection drops it can take a minute+ for Realm to realize it’s offline. However, when it’s offline and then goes back online, it recognizes that with a couple of seconds.",
"username": "Jay"
},
{
"code": "contact.deleted = nowcontact.lastUpdated = nowlastDeleted > lastUpdatedlastDeleted > lastUpdatedlastDeleted",
"text": "Hey Jay,It sounds like you are using “in-use” as a lock, and you want to gate behavior on this lock. I think you want to be very careful doing this, because this behavior is inherently racy. For instance, consider the case where client A and client B are both online, and client A deletes a contact at the same time client B updates it.\nIn this scenario, setting the “in-use” flag will race with the delete, and it would be likely that the delete would go through anyway.Adding a dependency around online status will only exacerbate this problem, even if you could detect online status with zero latency (which you can’t).Instead, I’d recommend the following:",
"username": "Sudarshan_Muralidhar"
},
{
"code": "",
"text": "@Sudarshan_MuralidharThanks for the response and insight. The “in-use” property is definitely racey and I want to avoid that.One solution is for the app to know whether it’s online/connected or not. If it’s not connected, disallow objects to be deleted, per my question.We have also considered a soft-delete mechanism as you suggest but are having a hard time making it work for offline situations. Let me provide an example.Suppose we have an app that stores scientific articles, they can be added, edited and removed. One such article is called “A Solution for Faster Than Light Travel” but within the body there are no solutions (yet).Two users are online and are looking at the article in their UI. One user drops connection. Meanwhile the other user adds an equation to the article of how to achieve faster than light travel (article.lastUpdated = now)Meanwhile the offline user says “Faster than light is not possible” and soft-deletes it (article.deleted = now).Then the offline user reconnects. Realm sync’s and deleted > lastUpdated so the article disappears from view and on the next Cron cycle, it’s removed entirely.Obviously that would be bad.If Realm knows it’s not connected, then the app could prevent the user from soft-deleting in the first place, preventing the issue entirely.The ultimate goal is to give user(s) the flexibility to add, edit objects and remove old or unused objects but at the same time keeping objects that are being used intact. “used” would be objects that are being viewed or in the process of being edited by another user, hence the “in-use” reference.",
"username": "Jay"
},
{
"code": "",
"text": "Understood. Unfortunately, any “is connected” approach would be racy as well - online status can change at any time (and indeed, can’t always be determined reliably in a circumstance with spotty connection).Further, the situation you describe can happen in an entirely online case, if a device simply hasn’t seen the latest changes yet.You could build a better solution for presence - for instance, you could create a scheduled trigger that updates a “heartbeat” document on the cloud regularly, and listen for that change on the client. If the change does not come through, you would know you’re offline. We are also thinking about better presence solutions internally.However, I would not recommend building any application logic that depends on online/offline state, because it can’t always be reliably determined and will be a source of future bugs.Maybe one way to solve this could be to automatically consider an article “undeleted” if the number of solutions > 1?",
"username": "Sudarshan_Muralidhar"
},
{
"code": "",
"text": "Thanks for the suggestions @Sudarshan_MuralidharSeeing that MongoDB (Realm) is a serverless, multi-user platform, it’s hard to imagine an app or use case where the online state of a user is not considered, record/object locking doesn’t exist and data is “never” deleted (generally speaking)How are those processes generally handled in a multi-user MongoDB environment where apps are offline and online all the time?A situation where user A is working with an object and User B deletes it seems like it would be a common occurrence. Perhaps it’s not?Soft-deletes “work” but are obviously not ideal and fail with offline/online issues. Heartbeat is something we’ve done but it’s a lot of back and forth communication the developer has to code into the app. Works, but again, not ideal.How are other developers handling these issues? Realm or otherwise? Is there perhaps a white paper or general guidance? Or have we just come upon a edge case that just doesn’t happen?I understand the scope of the the ask, but working through Parse, Couchbase, Firebase and a few others, it wasn’t really an issue.Any insight/direction is greatly appreciated.Jay",
"username": "Jay"
},
{
"code": "",
"text": "Jay, I apologize for the very long delay here.\nI will say that many Realm users who expect concurrent writes on the same objects don’t allow deletes, or are okay with the current semantics. For completeness, those semantics are:I’d suggest looking into potential solutions that could limit deletions (soft-deleting or archiving for example) of objects that could be touched by multiple devices and must be able to “survive” a delete in the way you’re describing.",
"username": "Sudarshan_Muralidhar"
}
] | Realm Connection Status | 2023-04-19T21:09:50.064Z | Realm Connection Status | 1,196 |
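As a sketch of the soft-delete direction discussed above (rather than gating behavior on connection state), a server-side purge can be written so that a soft delete never wins over a later edit. This assumes pymongo running against the synced Atlas collection as a scheduled cleanup job; the deleted/lastUpdated field names and the collection name are illustrative, not from the thread.

```python
# Sketch only: pymongo assumed; names and the grace period are placeholders.
from datetime import datetime, timedelta, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
articles = client["app"]["articles"]

grace_period = datetime.now(timezone.utc) - timedelta(days=7)

result = articles.delete_many({
    "deleted": {"$lt": grace_period},                  # soft-deleted long enough ago
    "$expr": {"$gt": ["$deleted", "$lastUpdated"]},    # and not edited after the soft delete
})
print("purged", result.deleted_count, "articles")
```

With this rule, an article that was soft-deleted by an offline device but later updated by another user survives the purge, which matches the "consider it undeleted if it was touched afterwards" idea in the replies above.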