image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | []
| [
{
"code": "AutoEncryptionSettings autoEncryptionSettings = AutoEncryptionSettings.builder()\n .keyVaultNamespace(keyVaultNamespace)\n .kmsProviders(kmsProviders)\n .schemaMap(new HashMap<String, BsonDocument>() {{\n put(\"admin\" + \".\" + \"school\",\n // Need a schema that references the new data key\n BsonDocument.parse(\"{\"\n + \" properties: {\"\n + \" student: {\"\n + \" encrypt: {\"\n + \" keyId: [{\"\n + \" \\\"$binary\\\": {\"\n + \" \\\"base64\\\": \\\"\" + base64DataKeyId + \"\\\",\"\n + \" \\\"subType\\\": \\\"04\\\"\"\n + \" }\"\n + \" }],\"\n + \" bsonType: \\\"object\\\",\"\n + \" algorithm: \\\"AEAD_AES_256_CBC_HMAC_SHA_512-Random\\\"\"\n + \" }\"\n + \" }\"\n + \" },\"\n + \" \\\"bsonType\\\": \\\"object\\\"\"\n + \"}\"));\n }}).build();\ncom.mongodb.MongoException: HMAC validation failure\n at com.mongodb.MongoException.fromThrowableNonNull(MongoException.java:83)\n at com.mongodb.client.internal.Crypt.fetchKeys(Crypt.java:286)\n at com.mongodb.client.internal.Crypt.executeStateMachine(Crypt.java:244)\n at com.mongodb.client.internal.Crypt.decrypt(Crypt.java:128)\n at com.mongodb.client.internal.CryptConnection.command(CryptConnection.java:121)\n at com.mongodb.client.internal.CryptConnection.command(CryptConnection.java:131)\n at com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:345)\n at com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:336)\n at com.mongodb.internal.operation.CommandOperationHelper.executeCommandWithConnection(CommandOperationHelper.java:222)\n at com.mongodb.internal.operation.FindOperation$1.call(FindOperation.java:658)\n at com.mongodb.internal.operation.FindOperation$1.call(FindOperation.java:652)\n at com.mongodb.internal.operation.OperationHelper.withReadConnectionSource(OperationHelper.java:583)\n at com.mongodb.internal.operation.FindOperation.execute(FindOperation.java:652)\n at com.mongodb.internal.operation.FindOperation.execute(FindOperation.java:80)\n at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:170)\n at com.mongodb.client.internal.MongoIterableImpl.execute(MongoIterableImpl.java:135)\n at com.mongodb.client.internal.MongoIterableImpl.iterator(MongoIterableImpl.java:92)\n at com.mongodb.client.internal.MongoIterableImpl.forEach(MongoIterableImpl.java:121)\n at com.mongodb.client.internal.MongoIterableImpl.into(MongoIterableImpl.java:130)\n at Test.main(Test.java:165)\nCaused by: com.mongodb.crypt.capi.MongoCryptException: HMAC validation failure\n at com.mongodb.crypt.capi.MongoCryptContextImpl.throwExceptionFromStatus(MongoCryptContextImpl.java:145)\n at com.mongodb.crypt.capi.MongoCryptContextImpl.throwExceptionFromStatus(MongoCryptContextImpl.java:151)\n at com.mongodb.crypt.capi.MongoCryptContextImpl.addMongoOperationResult(MongoCryptContextImpl.java:83)\n at com.mongodb.client.internal.Crypt.fetchKeys(Crypt.java:282)\n ... 18 more\n",
"text": "Hi, I recently started using CSFLE with enterprise edition. We are using the following AutoEncryptionSettings settings to create the client:Whenever our App restarts per server, we create a new data encryption key using “clientEncryption.createDataKey(“local”, new DataKeyOptions())” command as we didn’t want to use a single data encryption key(DEK). The output of the above createDataKey command is base64 encoded and pass to the schemaMap as “base64DataKeyId” as shown in above autoEncryptionSettings. Problem is, we inserted data into “school” collection using 3 App servers(that is, we currently created and used 3 DEKs in 3 servers to insert entries) and when we try to fetch all these records using a 4th server, we are getting the following exception:But everything works fine when we use a single DEK across all servers.\nSo, I would like to understand what could be the issue when using multiple DEKs or if it is recommended to use a single DEK.",
"username": "Vicky"
},
{
"code": "",
"text": "The only time I’ve seen a HMAC validation error is when I used the wrong local key for the local kms provider. Check you’re using the right keys!",
"username": "James_Ivings"
},
{
"code": "",
"text": "Same problem, is possible use more than one key right?\nI´m starting to thing that is not possible.",
"username": "Christopher_Duran"
},
{
"code": "",
"text": "Well, and another is question is:Following this example of FLE, i noticed that one .bin file decrypt all users, even if i created new user generating new .bin file with a new key.In other words, would be great have an example of multiple keys!",
"username": "Christopher_Duran"
}
]
| CSFLE: Getting HMAC validation error when using multiple data encryption keys | 2021-05-15T15:41:39.108Z | CSFLE: Getting HMAC validation error when using multiple data encryption keys | 3,687 |
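James_Ivings' answer above points at the usual cause of this HMAC validation failure: the "local" KMS master key differs between servers, so one server cannot unwrap data encryption keys (DEKs) created elsewhere. Multiple DEKs are fine as long as every app server loads the same 96-byte local master key. Below is a minimal, hedged sketch of that idea using the Node.js driver and mongodb-client-encryption (the thread itself uses the Java driver; the file path, key-vault namespace and key alt names are illustrative assumptions, not from the thread):

```javascript
// Sketch only: each server may create its own DEK, but every DEK must be
// wrapped with the SAME shared local master key, never one generated per server.
const fs = require("fs");
const { MongoClient } = require("mongodb");
const { ClientEncryption } = require("mongodb-client-encryption");

async function createDataKeyForThisServer(uri) {
  // 96-byte secret distributed to every app server (path is hypothetical).
  const localMasterKey = fs.readFileSync("/etc/secrets/csfle-master-key");
  const kmsProviders = { local: { key: localMasterKey } };
  const keyVaultNamespace = "encryption.__keyVault";

  const client = new MongoClient(uri);
  await client.connect();
  const clientEncryption = new ClientEncryption(client, {
    keyVaultNamespace,
    kmsProviders,
  });

  // Any server can later decrypt data written with this DEK, because the DEK
  // itself is wrapped with the shared master key.
  const dataKeyId = await clientEncryption.createDataKey("local", {
    keyAltNames: [`server-${process.env.HOSTNAME || "unknown"}`],
  });

  await client.close();
  return dataKeyId; // BSON Binary (subtype 4) to reference in the schemaMap
}
```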
null | [
"node-js"
]
| [
{
"code": "",
"text": "*name is uniquecollection : {\n[{\n_id: ‘1’,\nname: “Bob”,\n},\n{\n_id: ‘2’,\nname: “sara”,\n},\n{\n_id: ‘3’,\nname: “jon”,\n},\n],\n}\narrayToFind = [“Bob”,“sara”]need to find all matching names in one call.expected result => {\n_id: ‘1’,\nname: “Bob”,\n},\n{\n_id: ‘2’,\nname: “sara”,\n},",
"username": "Dev_INX"
},
{
"code": "",
"text": "Please enjoy the fine documentation of $in.",
"username": "steevej"
},
{
"code": "",
"text": "$in returns only one doc and not all of them",
"username": "Dev_INX"
},
{
"code": "mongosh> c.find()\n{ _id: '1', name: 'Bob' }\n{ _id: '2', name: 'sara' }\n{ _id: '3', name: 'jon' }\nmongosh> arrayToFind = [ \"Bob\" , \"sara\" ]\nmongosh> c.find( { \"name\" : { \"$in\" : arrayToFind } } )\n{ _id: '1', name: 'Bob' }\n{ _id: '2', name: 'sara' }\n",
"text": "$in returns only one docIn does not. See the many examples in from the documentation.Or this one:",
"username": "steevej"
},
{
"code": "",
"text": "thanks I find the bug in my code",
"username": "Dev_INX"
},
{
"code": "",
"text": "I find the bugPlease share so others do not fall on the same trap.",
"username": "steevej"
}
]
| Find multiple ids in collection (one line query) | 2022-12-08T13:52:24.415Z | Find multiple ids in collection (one line query) | 11,047 |
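The accepted answer shows the query in mongosh; since the thread is tagged node-js, here is a hedged sketch of the same single-call $in query with the Node.js driver (connection string, database and collection names are placeholders):

```javascript
const { MongoClient } = require("mongodb");

// One round trip: every document whose name appears in the array is returned,
// not just the first match.
async function findByNames(names) {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  try {
    return await client
      .db("test")
      .collection("c")
      .find({ name: { $in: names } })
      .toArray();
  } finally {
    await client.close();
  }
}

// findByNames(["Bob", "sara"]) resolves to both matching documents.
```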
null | [
"queries",
"node-js",
"data-modeling"
]
| [
{
"code": "{\n name: \"Tesla Inc.\",\n category: \"Automotive\",\n contact: {\n state : {\n name: \"Texas\",\n city: \"Austin\", \n address: {\n streetName: \"Tesla Road\",\n number: '1'\n } \n }\n }\n} \nfindOne({ name : \"Tesla\"}) {_id: '637e4397f6723844191aa03d', name: 'Tesla', category: \n 'Automotive', contact: {…}} \nundefinedstoreRoutes.route(\"/enterprise\").get(function (req, res) {\n let db_connect = dbo.getDb(\"res\");\n\n const query = { name : \"Tesla\"};\n \n db_connect\n .collection(\"stores\")\n .findOne(query,function (err, result) {\n if (err) throw err;\n res.json(result);\n });\n});\nhttp://localhost:5000/enterprise{\"_id\":\"637e4397f6723844191aa03d\",\"name\":\"Tesla\",\"category\":\"Automotive\",\"contact\":{\"state\":{\"name\":\"Texas\",\"city\":\"Austin\",\"address\":{\"streetName\":\"Tesla Road\",\"number\":\"1\"}}}}\n function GetEnterprise() {\n const [store, setStore] = useState({\n })\n useEffect(() => {\n async function fetchData() {\n const response = await fetch(`http://localhost:5000/enterprise`);\n \n if (!response.ok) {\n const message = `An error has occurred: ${response.statusText}`;\n window.alert(message);\n return;\n }\n const record = await response.json();\n if (!record) {\n // window.alert(`Record with id ${id} not found`);\n window.alert(`Record with id not found`);\n return;\n } \n setStore(record);\n } \n fetchData(); \n return;\n }, [1]); \n \n //debugging\n console.log('tesla: ' + store);\n window.store = store;\n let res_json = JSON.stringify(store);\n console.log('res_json :' + res_json);\n \n return store;\n } \nstore console.log('tesla: ' + store);\n window.store = store;\n let res_json = JSON.stringify(store);\n console.log('res_json :' + res_json);\n \n[object Object]{_id: '637e4397f6723844191aa03d', name: 'Tesla', category: 'Automotive', contact: {…}} \ncontactundefined let res_json = JSON.stringify(store);\n console.log('res_json :' + res_json);\n {\"_id\":\"637e4397f6723844191aa03d\",\"name\":\"Tesla\",\"category\":\"Automotive\",\"contact\":{\"state\":{\"name\":\"Texas\",\"city\":\"Austin\",\"address\":{\"streetName\":\"Tesla Road\",\"number\":\"1\"}}}}\n",
"text": "I am using the MERN stack for my current project. So I am facing this problem:Let’s say that I have the following document in MongoDB :What I get as response after using findOne({ name : \"Tesla\"}) is :As you can see contact object is undefinedFollows my coding processResult: After typing browser url http://localhost:5000/enterprise returns the expected value:Result:\nBefore GetEnterprise() function returns store I have added these 4 lines of code for debugging:1st line logs [object Object] which is not that informative for what I am getting back as a response.\nSo I came up with 2nd line which enables to debug directly from the browser console.\nAfter I type store my console logs:So my contact object is missing(undefined).Now fun fact is the 3rd and 4rd lines :My console logs the whole object as expected:Which is really weird.\nI guess it has something to do with the async and await functions. But I am not sure.\nWhat am I doing wrong?\nAny suggestions …?",
"username": "nikos_anastasiou"
},
{
"code": "undefined{…}contact: {…}",
"text": "I do not think yourcontact object is undefinedI think that{…}is just the way console.log outputs complex object.The fact that JSON.stringify prints it correctly as you wroteconsole logs the whole object as expectedconfirms that it is not missing(undefined).I guess it has something to do with the async and await functionsWrong. Your object is not missing or undefined. If it was you would get an exception when you try to access it.Any suggestions …?Use the contact object since it is not missing or undefined.What am I doing wrong?You are simply misunderstanding the meaning of the outputcontact: {…}If it was really missing or undefined it would not even show up.",
"username": "steevej"
},
{
"code": "",
"text": "Please make a post as the solution for your other post.",
"username": "steevej"
},
{
"code": " console.log('enterprise contact' + enterprise.contact.state.name);app.js:210 Uncaught TypeError: Cannot read properties of undefined (reading 'state')",
"text": "You are right! “contact” is a complex object and “{…}” is the way complex objects are displayed on the console. Although, my problem still remains as I can’t get the embedded document. From example, in my app.js console.log('enterprise contact' + enterprise.contact.state.name); prints app.js:210 Uncaught TypeError: Cannot read properties of undefined (reading 'state')",
"username": "nikos_anastasiou"
},
{
"code": "",
"text": "Rather thanconsole.log(‘enterprise contact’ + enterprise.contact.state.name)try to output the top level object with\nconsole.log('enterprise ’ + enterprise)",
"username": "steevej"
}
]
| Can't retrieve embedded object from a MongoDB document - MERN stack | 2022-12-05T12:58:45.253Z | Can’t retrieve embedded object from a MongoDB document - MERN stack | 2,667 |
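steevej's point is that `contact: {…}` is only how the console collapses a nested object, and that concatenating an object into a string yields `[object Object]`. A small sketch of that logging behaviour (the data is a stripped-down stand-in for the document in the thread):

```javascript
const store = {
  name: "Tesla",
  contact: { state: { name: "Texas", city: "Austin" } },
};

// String concatenation coerces the object to the string "[object Object]".
console.log("tesla: " + store);

// Passing the object as its own argument lets the console expand it,
// and JSON.stringify shows every nested field.
console.log("tesla:", store);
console.log("res_json:", JSON.stringify(store));

// The nested field is reachable because it really is there.
console.log(store.contact.state.name); // "Texas"
```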
null | [
"aggregation"
]
| [
{
"code": "",
"text": "I have a a mongo db that stores user information as a document in a collection and periodically receives a document in a different collection with a value that needs to be added to one of the user documents.is it a good idea to use Kafka connect to facilitate this?I can grab the data from the update doc easily but sinking it to its proper field has been harder.",
"username": "Dylan_Campopiano"
},
{
"code": "_idxyz",
"text": "Hi @Dylan_Campopiano and welcome in the MongoDB Community !If you are using MongoDB Atlas, I would just use an Atlas Trigger in the Realm App Services. It’s like 3 or 4 lines of code.If not, then I think you have to use Kafka to retrieve the docs as they arrive in your collection and setup a Kafka consumer so you can consume these documents and send an update command to MongoDB to update the related document in the other collection.I think Kafka connector are just capable of reading or writing entire docs into a collection. Not deal with specific tasks like \"get just this field and the _id and send an update to collection xyz\".Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Using kafka connect to grab data from a newly uploaded document and update an existing one | 2022-12-07T22:12:17.155Z | Using kafka connect to grab data from a newly uploaded document and update an existing one | 1,237 |
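MaBeuLux88 mentions that an Atlas (App Services) database trigger only needs a few lines of code. A hedged sketch of what such a trigger function might look like; the linked data source name "mongodb-atlas", the database/collection names and the field names are assumptions, not taken from the thread:

```javascript
// Database trigger function: fires on inserts into the "updates" collection
// and applies the incoming value to the matching user document.
exports = async function (changeEvent) {
  const update = changeEvent.fullDocument; // the newly inserted document

  const users = context.services
    .get("mongodb-atlas")
    .db("mydb")
    .collection("users");

  // Add the incoming value to the user's running total.
  return users.updateOne(
    { _id: update.userId },
    { $inc: { total: update.value } }
  );
};
```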
null | [
"compass"
]
| [
{
"code": "",
"text": "I am using visual studio code, which is connected to my mongodb compass.\nI can create new databases, collection and insert documents through visual studio code but they are not showing up on my compass app.\nSimilarly, I can create new databases, collection and insert documents on my compass app but they are not showing up on my visual studio.\nI checked the connection and my IP is registered on atlas.\nPlease help.",
"username": "Mayank_Rana"
},
{
"code": "",
"text": "I am using visual studio code, which is connected to my mongodb compass.The above does not make sense. Visual studio is a client connected to a database server. Compass is a client connected to a database server. Both can be connected to the same server or connected to different servers.If you do something successfully on one client it is saved in the server. If you do not see what in the other client you have done successfully in the other client, then it means both clients are not connected to the same server.Compass does not continually update the data you see. You have to run queries and update the screen.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Databases, collection and documents not showing up on visual studio and on compass app | 2022-12-08T12:18:11.016Z | Databases, collection and documents not showing up on visual studio and on compass app | 2,096 |
[
"node-js",
"crud"
]
| [
{
"code": "",
"text": "\nScreenshot from 2022-12-08 16-38-11686×674 53.2 KB\n",
"username": "Aravindh_k"
},
{
"code": "type MySchema = {\n\tid: number;\n\tscores: number[];\n};\n\nconst collection = db.collection<MySchema>('array');\n\nawait collection.updateMany(\n\t{},\n\t{ $push: { scores: 95 } }\n);\nawaitupdateMany()",
"text": "Hi, yes I agree, it looks like there’s something wrong with the TypeScript types.If you define a type for your collection it works:Tip: You should also await the updateMany() call.",
"username": "Nick"
}
]
| Push operation is not working in updateMany | 2022-12-08T11:08:59.981Z | Push operation is not working in updateMany | 1,325 |
|
null | [
"node-js"
]
| [
{
"code": "",
"text": "MongoDB NodeJS associate developer practice exam : no way to find correct answer for the wrong answers. I’ve attempted the practice exam twice. However, not able to figure out where I did wrong and why?\nThere should be some way out to know the correct answers of the wrong questions. Otherwise the practice test might be not able to serve its purpose.",
"username": "neeraj"
},
{
"code": "",
"text": "Hi @neeraj,no way to find the correct answer for the wrong answers. I’ve attempted the practice exam twice. However, not able to figure out where I did wrong and why?At the moment, we do not display the correct answer to a practice question that you have answered incorrectly. However, we appreciate your feedback and will work with the concerned team to address it.If you have any doubts or concerns, please feel free to reach out to us.Thanks,\nKushagra Kesav",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Sure @Kushagra_Kesav\nThank you for your response. This will help. Looking forward for some way for the candidate to know the feedback of their attempt which is beyond just right/wrong option.",
"username": "neeraj"
},
{
"code": "",
"text": "The fact that they do not supply a way to find the correct answers might be based on the following information.Harsh editorial comment next, please be aware.A lot of people aiming for certification, are aiming just for certification and are not deploying the efforts to really learn the subject. They study the sample exams rather than the subject until they know enough good answers to pass the certification. They do not know much, they cannot do much but they are certified. Then they go on some medium, and read article like 10 JS questions and answers to land a job or 10 interview questions for python. They know nothing except than how to get a job. They cannot do the job but they got the job. Not providing answers to practice exam is one way to weed the field of unskilled certified people.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks @steevej for your replies.\nJust to share that I’ve cleared my certification.\nHowever, I still believe that there must be some option in the exam or there should be some way out to help a candidate.",
"username": "neeraj"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB NodeJS associate developer practice exam : no way to find correct answer for the wrong answers | 2022-11-26T08:18:14.033Z | MongoDB NodeJS associate developer practice exam : no way to find correct answer for the wrong answers | 2,503 |
null | [
"node-js"
]
| [
{
"code": "",
"text": "Hi,\nI took the [Associate Developer Node.js Practice Questions], after answered 28 questions, the page automatically showed how many questions I have passed.\nhowever, there are no “review option” for me to review the correct answers or incorrect answers to the questions,\nif anyone knows how to review the questions once the practice questions completed? thanks!",
"username": "ChangChien.Chang"
},
{
"code": "",
"text": "Hi @ChangChien.Chang,Welcome back to MongoDB Community forums Kindly post the screenshot of the page you are experiencing issues with! It will help us to narrow down the specific reason behind that.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "\nimage940×532 111 KB\n\nimage940×519 111 KB\nNot able to figure out what wrong I am doing here.",
"username": "neeraj"
},
{
"code": "",
"text": "There are many such questions in the practice test. Sharing screenshots.\n\nimage940×470 62.9 KB\n\n\nimage940×463 76.5 KB\n\n\nimage940×520 124 KB\n\n\nimage940×475 57.8 KB\n\n\nimage940×480 57.2 KB\n\n\nimage1253×646 25.4 KB\n\n\nimage1194×478 15.7 KB\n",
"username": "neeraj"
},
{
"code": "",
"text": "\nimage940×481 43.6 KB\n\n\nimage940×435 34.4 KB\n\n\nimage940×473 37.1 KB\n",
"username": "neeraj"
},
{
"code": "",
"text": "About What are two valid method names for MongoClient class?They ask for 2 but you select 3.According to documentation, there is no method named open() for MongoClient.",
"username": "steevej"
},
{
"code": "",
"text": "About the 1KB question. Your answer seems to generates a syntax error for size : 1KB.About *What command deletes document which book is EFF and user is T.B., and returns the deleted document. According to documentation, deleteOne returns Promise, not the delete document. According to same documentation, findOneAndDelete might be more appropriate.No answer for the movies question, so it is hard to point you in the right direction.About, What schema IS the most effective? I would guess that by what is the most, they only want one. You selected 4. Having all orders of a product inside the product document can easily create a massive array. Same with reviews. Historical prices is hardly interesting for most product use-cases. Current price and availability is really useful.",
"username": "steevej"
},
{
"code": "",
"text": "As a bonus tip, when the questions are about API, just trying the API might gives you some clues about what is wrong.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you @steevej for reverting.\nI found only on of your answers helpful.\nI’ve tried selecting only one option as well, however, it is not working. I’m not able to figure out the correct options.",
"username": "neeraj"
},
{
"code": "",
"text": "@steevej\n\nimage1191×628 13.4 KB\n\nI hope, the deleteone is the right method of deleting a document. However, it shows error.\ndelete_one and remove_one are not valid as per my understanding.\nremoveone seems invalid as well.",
"username": "neeraj"
},
{
"code": "",
"text": "I found only one of your answers helpful.Which one was helpful? I can certainly add more details to the ones that did not help.",
"username": "steevej"
},
{
"code": "",
"text": "@steevej I’m referring to this one.\n\nimage1191×628 13.4 KB\n\nI hope, the deleteone is the right method of deleting a document. However, it shows error.\ndelete_one and remove_one are not valid as per my understanding.\nremoveone seems invalid as well.",
"username": "neeraj"
},
{
"code": "",
"text": "The one about deleteOne was the only one useful? Really?What about the syntax size:1KB? It was not helpful, really? You selected twice answers with size:1KB and got an error twice and it is not helpful to indicate that size:1KB generates a syntax error.What about the schema question? The hints about massive array for orders inside documents, reviews inside documents and historical prices inside documents not being good schema are not useful?",
"username": "steevej"
},
{
"code": "",
"text": "See the other thread.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @neeraj,Welcome to the MongoDB Community forums I hope, the deleteone is the right method of deleting a document. However, it shows error.\ndelete_one and remove_one are not valid as per my understanding.I assume you are attempting the practice question for Python language, if yes then please refer to the official documentation of the python driver to learn more.I would encourage you to re-attempt the practice question with the correct options.If you have any further questions or concerns, please don’t hesitate to contact us.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "@Kushagra_Kesav I had attempted in NodeJS language.\nNow, I’ve cleared the actual certifications. Thank you so much for all your efforts and patiently answering my queries.\nThanks to @steevej also.\n",
"username": "neeraj"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
]
| How to get the results - Associate Developer Node.js Practice Questions | 2022-11-22T04:43:25.498Z | How to get the results - Associate Developer Node.js Practice Questions | 3,773 |
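One of the disputed practice questions above hinges on the difference between deleteOne (which resolves with a result object, not the document) and findOneAndDelete (which removes the document and gives it back). A hedged Node.js sketch; the filter values follow the question wording, everything else is a placeholder:

```javascript
const { MongoClient } = require("mongodb");

async function demo() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  const issues = client.db("test").collection("issues");

  // deleteOne resolves with { acknowledged, deletedCount }, not the document.
  const result = await issues.deleteOne({ book: "EFF", user: "T.B." });
  console.log(result.deletedCount);

  // findOneAndDelete removes the document AND returns it (directly in driver
  // v6+, or as the .value field of the result in older driver versions).
  const removed = await issues.findOneAndDelete({ book: "EFF", user: "T.B." });
  console.log(removed);

  await client.close();
}
```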
null | []
| [
{
"code": "",
"text": "hi, i have a one-to-many relationship, need to ask some pointer in ff:Any sample code will greatly help. Thanks@Mohit_Sharma",
"username": "Eman_Nollase"
},
{
"code": "",
"text": "…this is using realm-kotlin",
"username": "Eman_Nollase"
},
{
"code": "realm.write {\n val person = copyToRealm(Person())\n val children: RealmList<Child> = person.children\n val cousin: RealmList<Child> = realmListOf()\n\n // Insert object\n children.add(Child(\"Jane\"))\n\n // Remove object\n children.remove(0) \n}\n",
"text": "Hey @Eman_Nollase! Welcome to the forums.You use RealmList something like this :",
"username": "Mohit_Sharma"
},
{
"code": "\n \n orders.forEach { order ->\n checkObj.orderDetails.add(order)\n }\n copyToRealm(checkObj)\n }\n _checkState.value = CheckEntryState.CheckSuccess\n }\n }\n }\n \n fun updateCheckDetail(newCheck: Check, oldCheck: Check, orders: List<OrderDetail>) {\n viewModelScope.launch {\n realm.write {\n val check = findLatest(oldCheck)\n check?.tableNumber = newCheck.tableNumber\n check?.checkNumber = newCheck.checkNumber\n check?.serverId = newCheck.serverId\n check?.totalAmount = newCheck.totalAmount\n check?.tipAmount = newCheck.tipAmount\n check?.taxAmount = newCheck.taxAmount\n check?.baseAmount = newCheck.baseAmount\n \n ",
"text": "Hi @Mohit_Sharma ,Thanks for the reply: please check my code hereMy issue is the i am having issue on updating the list of new entry order/s to the checks.Any idea? thanks",
"username": "Eman_Nollase"
},
{
"code": "",
"text": "Can please share what issue you are getting?",
"username": "Mohit_Sharma"
}
]
| Add/remove item to one-to-many relationship | 2022-12-05T10:33:03.820Z | Add/remove item to one-to-many relationship | 1,708 |
null | [
"connecting"
]
| [
{
"code": " let url = 'mongodb+srv://{AWS_ACCESS_KEY}:{AWS_SECRET_KEY}' +\n '@cluster0.blahblah.mongodb.net/mydb?authSource=%24external&authMechanism=MONGODB-AWS' +\n '&retryWrites=true&w=majority&authMechanismProperties=AWS_SESSION_TOKEN:{AWS_TOKEN}',;\n\n const ec2IMDCredsProvider = fromInstanceMetadata();\n const { accessKeyId, secretAccessKey, sessionToken } = await ec2IMDCredsProvider();\n\n url = url.replace('{AWS_ACCESS_KEY}', accessKeyId);\n url = url.replace('{AWS_SECRET_KEY}', encodeURIComponent(secretAccessKey));\n url = url.replace('{AWS_TOKEN}', encodeURIComponent(sessionToken || ''));\n",
"text": "Hello,I want to connect my application server to MongoDB Atlas using AWS STS. Currently, I can get it to connect just fine by doing this:After the assumeRole session times out, my application disconnects and I don’t know what hook to use in order to re-invoke ec2 instance metadata service in order to get a new role id/secret/token. The mongodb driver sits there trying and retrying the stale invalid credentials.My current workaround is to give my servers a short lifespan and have my ASG automatically replace them over and over. This is not a great solution. I’d prefer for my application code to handle reconnecting gracefully. Has anyone done this before?",
"username": "Victor_Moreno"
},
{
"code": "[13:48:21 ERR[] Connection id \"0HMCSB6U64G5J\", Request id \"0HMCSB6U64G5J:00000003\": An unhandled exception was thrown by the application.\n\nMongoDB.Driver.MongoAuthenticationException: Unable to authenticate using sasl protocol mechanism MONGODB-AWS.\n\n ---> MongoDB.Driver.MongoCommandException: Command saslContinue failed: Authentication failed..\n\n at MongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1.ProcessReply(ConnectionId connectionId, ReplyMessage`1 reply)\n\n at MongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1.ExecuteAsync(IConnection connection, CancellationToken cancellationToken)\n\n at MongoDB.Driver.Core.Authentication.SaslAuthenticator.AuthenticateAsync(IConnection connection, ConnectionDescription description, CancellationToken cancellationToken)\n\n --- End of inner exception stack trace ---\n\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken cancellationToken)\n\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.PooledConnection.OpenAsync(CancellationToken cancellationToken)\n\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.ConnectionCreator.CreateOpenedInternalAsync(CancellationToken cancellationToken)\n\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.ConnectionCreator.CreateOpenedOrReuseAsync(CancellationToken cancellationToken)\n\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquireConnectionHelper.EnteredPoolAsync(Boolean enteredPool, CancellationToken cancellationToken)\n\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquireConnectionAsync(CancellationToken cancellationToken)\n\n at MongoDB.Driver.Core.Servers.Server.GetChannelAsync(CancellationToken cancellationToken)\n\n at MongoDB.Driver.Core.Operations.RetryableReadContext.InitializeAsync(CancellationToken cancellationToken)\n\n at MongoDB.Driver.Core.Operations.RetryableReadContext.CreateAsync(IReadBinding binding, Boolean retryRequested, CancellationToken cancellationToken)\n\n at MongoDB.Driver.Core.Operations.FindOperation`1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\n\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult[](IReadBinding binding, IReadOperation`1 operation, CancellationToken cancellationToken)\n\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteReadOperationAsync[TResult[](IClientSessionHandle session, IReadOperation`1 operation, ReadPreference readPreference, CancellationToken cancellationToken)\n\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult[](Func`2 funcAsync, CancellationToken cancellationToken)\n\n at MongoDB.Driver.IAsyncCursorSourceExtensions.FirstOrDefaultAsync[TDocument[](IAsyncCursorSource`1 source, CancellationToken cancellationToken)\n\n at MongoDbGenericRepository.DataAccess.Read.MongoDbReader.GetOneAsync[TDocument,TKey[](Expression`1 filter, String partitionKey, CancellationToken cancellationToken)\n\n at MongoDbGenericRepository.ReadOnlyMongoRepository.GetOneAsync[TDocument,TKey[](Expression`1 filter, String partitionKey, CancellationToken cancellationToken)\n\n at Microsoft.AspNetCore.Identity.UserManager`1.FindByNameAsync(String userName)",
"text": "I am having same problem. I am confused how AWS Assumerole can work in current driver implementation (I am using .Net). Credentials provided in string to MongoDB will always expire. And if Mongo Clients try to restablish connection, it will never be possible to authenticate because STS authentication is not possible with expired ACCESS KEY and SECRET. There is no way to provide new key and secret.Can someone in Mongo help please ?I am getting below error after some time:",
"username": "Vinay"
},
{
"code": "",
"text": "All MongoDB drivers do not implement calling AssumeRole, and therefore cannot “re-AssumeRole” by themselves. Instead, they expect us to provide the temporary credentials directly but do not support refreshing them.\nI recommend adding the IAM role of your EC2 instance or ECS Task as a database user directly to the MongoDB Atlas cluster.",
"username": "Boris_Figovsky"
},
{
"code": "",
"text": "Hi Boris, how to reslove the session timeout issue after adding the role of EC2 instance as database user? could you introduce more detail? thanks",
"username": "Wang_Huanjing"
},
{
"code": "",
"text": "Hi!I want to add a +1 to this thread.Looks like someone else had the same problem and fixed it themselves for typescript: GitHub - scaleapi/mongodb-auth-aws-improvedAnd MongoDB fixed it in the java driver: https://jira.mongodb.org/browse/JAVA-4292It would be great if the official MongoDB librairies all supported it.",
"username": "Leo_Ferlin-Sutton1"
}
]
| Re-establish AWS Assumerole connection | 2021-10-05T13:40:43.293Z | Re-establish AWS Assumerole connection | 5,023 |
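Boris' suggestion above is to map the EC2 instance or ECS task IAM role straight to an Atlas database user. With that in place, the connection string does not need to carry temporary credentials at all: the Node.js driver's MONGODB-AWS mechanism can pull credentials from the environment or from the instance/task metadata endpoints when it authenticates, so there is nothing baked into the URI to go stale when the STS session rotates. A hedged sketch (the host name is a placeholder):

```javascript
const { MongoClient } = require("mongodb");

// No access key, secret or session token in the URI; the driver resolves AWS
// credentials itself (env vars, ECS task role, or EC2 instance metadata).
const uri =
  "mongodb+srv://cluster0.example.mongodb.net/mydb" +
  "?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority";

const client = new MongoClient(uri);

async function run() {
  await client.connect();
  console.log(await client.db("admin").command({ ping: 1 }));
  await client.close();
}
```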
[
"data-api"
]
| [
{
"code": "",
"text": "Hello, Am having an issue adding my endpoint.\nError: resource name can only contain ASCII letters, numbers, and underscores\nKindly help. Attached is the file with the error.\n\nScreenshot from 2022-12-07 13-05-271697×759 46.9 KB\n",
"username": "brian_murithi"
},
{
"code": "/resultsresults",
"text": "Can you clarify the question?resource name can only contain ASCII letters, numbers, and underscoresand your route has a forward slash in it/resultsif that’s what the question is about, change it toresults",
"username": "Jay"
},
{
"code": "",
"text": "I actually spotted the issue. I saved it before I could name my function.\nThank you so much for taking your part to respond",
"username": "brian_murithi"
}
]
| Resource name can only contain ASCII letters, numbers, and underscores | 2022-12-07T11:24:27.628Z | Resource name can only contain ASCII letters, numbers, and underscores | 2,796 |
|
null | [
"charts"
]
| [
{
"code": "",
"text": "I am probably missing the documentation here but I am using charts in my app. I have setup a fairly involved set of charts that I am embedding in my Js based UI. But this has been on my Dev data base. Is there a way to extract this as code or a script so I can basically recreate this for my Prod environment?\nRegards,\nVibha",
"username": "Vibha_Gopal"
},
{
"code": "",
"text": "There sure is! The ellipsis button on the dashboard has a command “Export dashboard”. This will export the dashboard definition to a .charts file, which you can then reimport into a new project.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Hello @tomhollander\nbeside exporting a dashboard, is there somewhere a description around how to integrate chats development into a CI/CD pipeline?\nRegards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Right now it’s not possible to fully automate CI/CD with Charts (e.g. activating Charts, importing a dashboard, configuring embedding, etc). However we know the scenario is important and it’s something we’re looking into.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Thank you! This really helps!",
"username": "Vibha_Gopal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Is it possible to migrate my charts to a new DB/environment | 2022-12-05T13:52:23.853Z | Is it possible to migrate my charts to a new DB/environment | 2,173 |
null | [
"queries"
]
| [
{
"code": " \"command\": {\n \"getMore\": 7229634113631845000,\n \"collection\": \"data\",\n \"batchSize\": 4899,\n\n.......\n\n\n \"originatingCommand\": {\n \"find\": \"data\",\n \"filter\": {\n \"accountId\": \"AAA-367YTGSA\",\n \"customIterator\": {\n \"$gte\": {\n \"$date\": \"2072-11-05T01:41:58.041Z\"\n }\n },\n \"startTime\": {\n \"$lte\": {\n \"$date\": \"2022-12-06T17:00:00Z\"\n }\n },\n \"type\": {\n \"$in\": [\n \"TYPE_A\",\n \"TYPE_B\"\n ]\n }\n },\n \"sort\": {\n \"accountId\": 1,\n \"customIterator\": 1\n },\n \"limit\": 5000,\n \"maxTimeMS\": 300000,\n\n.....\n\n\n\n \"planSummary\": [\n {\n \"IXSCAN\": {\n \"accountId\": 1,\n \"customIterator\": 1,\n \"startTime\": 1,\n \"type\": 1\n }\n }\n ],\naccountId_customIterator_startTime_type\naccountId:1 customIterator:1 startTime:1 type:1 \n\naccountId_type_customIterator_startTime\naccountId:1 type:1 customIterator:1 startTime:1 \n\n\"planSummary\": [\n {\n \"IXSCAN\": {\n \"accountId\": 1,\n \"customIterator\": 1,\n \"startTime\": 1,\n \"type\": 1\n }\n",
"text": "Hi below query has been executed in MongoI have two indexes as below:First Index:Second Index:As per my understanding, the query should be using the second Index as per ESR rule but plan summary states the story otherwise.What I am missing here?",
"username": "Abhinav_27971"
},
{
"code": "\"accountId\": 1,\n \"customIterator\": 1,\n{\n \"accountId\": 1,\n \"customIterator\": 1,\n \"type\": 1,\n \"startTime\": 1\n\n }\n",
"text": "Hi @Abhinav_27971 ,We show the ESR rule as a good starting point, but its not necessarily the only consideration an optimiser takes.If a query shape can be fulfilled with an index the database will try to exmine it. Possibly it will also be chosen over another index even if it results with worse performance eventually or over time (as plans get cached for queries).In your case I believe the database treated the type $in as a range operator while respecting your sorting fields which requireIf you will use an index that does not have those fields consecutively, it will not be able to use it for the sort. This is a heavy penalty to sort in memory compare to sort on index and I beilive this is the main incentive of choosing the index.I would say the best index for this query is actually:So we have the equality and sort bundled as first part then we go to any other equality/range fields.Again, to make sure that this index is the best one you can use a hint clause and run it to check if performance is indeed better. Otherwise, use the most performant index even if it is not exactly according to ESR…Thanks\nPavel",
"username": "Pavel_Duchovny"
}
]
| Why Query doesnt use index with ESR rule here? | 2022-12-08T08:03:19.428Z | Why Query doesnt use index with ESR rule here? | 1,207 |
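Pavel's advice is to compare the candidate indexes with a hint and check which one actually performs better. A mongosh sketch of that comparison; the collection name and query values are taken from the slow-query log in the question, while the database context is assumed:

```javascript
// Candidate index suggested above: equality and sort fields first.
db.data.createIndex({ accountId: 1, customIterator: 1, type: 1, startTime: 1 });

const query = {
  accountId: "AAA-367YTGSA",
  customIterator: { $gte: ISODate("2072-11-05T01:41:58.041Z") },
  startTime: { $lte: ISODate("2022-12-06T17:00:00Z") },
  type: { $in: ["TYPE_A", "TYPE_B"] },
};
const sort = { accountId: 1, customIterator: 1 };

// Compare totalKeysExamined / totalDocsExamined / executionTimeMillis
// between the existing index and the suggested one.
db.data.find(query).sort(sort).limit(5000)
  .hint({ accountId: 1, customIterator: 1, startTime: 1, type: 1 })
  .explain("executionStats");

db.data.find(query).sort(sort).limit(5000)
  .hint({ accountId: 1, customIterator: 1, type: 1, startTime: 1 })
  .explain("executionStats");
```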
null | [
"compass"
]
| [
{
"code": "MongodbUbuntu 16.04Mongodbuse admin\ndb.createUser(\n {\n user: 'myuser',\n pwd: 'password',\n roles: [ { role: 'readWrite', db: 'mydb' } ]\n }\n);\nmongod.confnet:\n port: 27017\n bindIp: 127.0.0.1,<server_ip>\n\nsecurity:\n authorization: 'enabled'\nmongodb://myuser:password@server_ip:27017/mydb\nconnection timed outLaravel Forge",
"text": "I installed Mongodb on my remote server using this documentation. I have Ubuntu 16.04 on my remote server. Mongodb got installed successfully. I added the user like this:I also made changes in the mongod.conf like this:Now when I try to connect to mongodb using conneciton string like this:It gives me the following error:connection timed outWhat am I doing wrong here? I am using Laravel Forge to manage sever.",
"username": "Ehsan_Elahi"
},
{
"code": "",
"text": "Can you connect by shell remotely as well as locally using same srv connect string?\nIs port 27017 open to connections\nCould be firewall blocking your connection",
"username": "Ramachandra_Tummala"
},
{
"code": "telnet 139.162.147.195 27017\nTrying 139.162.147.195...\ntelnet: Unable to connect to remote host: Connection timed out\n",
"text": "I am not using the srv string as you can see in the question. Aftter ssh-ing into the server, I can connect to mongodb using the same string. There is no firewall blockage as I can connect to server from the same host but cant connect to mongodb. And this is the result of telnet:",
"username": "Ehsan_Elahi"
},
{
"code": "",
"text": "Just for testing use 0.0.0.0 for bindIp and see if you can connect\nDo you have the option to fill in individual params instead of connect string?\nUse advance options and check ssh tunnel\nTry different options",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "As it turned out the port was not open and that was the only issue. Opened the port and now its working fine.",
"username": "Ehsan_Elahi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongodb Compass: Connection timed out | 2022-12-08T06:02:27.629Z | Mongodb Compass: Connection timed out | 3,185 |
null | []
| [
{
"code": "",
"text": "Getting this when trying to insert log expression tree\nMongoDB.Driver.MongoCommandException: Command insert failed: BSONObj exceeded maximum nested object depth: 200.",
"username": "Amit_Upadhyay2"
},
{
"code": "",
"text": "any Suggestio in this.",
"username": "Amit_Upadhyay2"
}
]
| BSONObj exceeded maximum nested object depth | 2022-12-07T12:01:02.522Z | BSONObj exceeded maximum nested object depth | 1,265 |
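The thread ends without a resolution; the error itself says the document's nesting went past a depth limit of 200 while serializing the expression tree. A small, hedged sketch (generic JavaScript, since the thread uses the .NET driver) for measuring how deeply a document is nested before attempting the insert:

```javascript
// Returns the nesting depth of a value: scalars are 0, each object/array
// level adds 1. Useful for spotting expression trees that are too deep to
// store as a single document.
function nestingDepth(value) {
  if (value === null || typeof value !== "object") return 0;
  let max = 0;
  for (const child of Object.values(value)) {
    max = Math.max(max, nestingDepth(child));
  }
  return 1 + max;
}

console.log(nestingDepth({ a: { b: { c: [1, 2, { d: 3 }] } } })); // 5

// If the depth approaches the limit reported in the error, the tree has to be
// flattened first, e.g. stored as a list of nodes with parent references.
```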
[]
| [
{
"code": "",
"text": "Hello! One of my courses has not loaded after the migration, and the process of my dba path does not load, in the new university, i don’t see the path previously enrolled, it appears on my dashboard, but it doesn’t have anything inside, so i re-enrolled the dba path, but only recognize 1 course, and the other course that it doesn’t recognize is the mongodb basic course, i tried to take the mongodb basic course again but, don’t let me advance from unit, there seems to be an error since i complete all exercises and only appears that i have met 50% and don’t let me continue.\n\nCaptura de Pantalla 2022-12-07 a la(s) 10.10.56 a. m.1477×447 54.3 KB\n\n\nCaptura de Pantalla 2022-12-07 a la(s) 10.11.13 a. m.1477×447 63.3 KB\n\n\nCaptura de Pantalla 2022-12-07 a la(s) 10.10.56 a. m.1477×447 54.3 KB\n\n\nCaptura de Pantalla 2022-12-07 a la(s) 10.12.06 a. m.1477×810 105 KB\n",
"username": "Heidi_Esteves"
},
{
"code": "",
"text": "Hi @Heidi_Esteves,Welcome to the MongoDB Community forums Can you please email us at [email protected]? The University team will be happy to help you out.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Course(and DBA PATH). has not loaded at the new university | 2022-12-07T21:46:21.066Z | Course(and DBA PATH). has not loaded at the new university | 1,379 |
|
null | [
"python"
]
| [
{
"code": "",
"text": "Hello Team,\nI have a very large JSON file which i have to store in MongoDB. But when i tried to push that JSON file it threw me an “Document Too Large Error”.After few research i found that GridFS helps us in storing a large JSON documents which are more than 16MB in Binary Format. But no where it’s given how to maintain the data in chunks with multiple fields in it. The data is stored in Binary Format everywhere without the multiple fields that is in the JSON file.Please do help me understand how to store the below mentioned structure of the data in chunks using GridFS.Tools Used: pymongo, Mongo version 6Structure of data:\n{ title : “Sample Title”,\ndata : [“text1”,“text2”,“text3”…],\nlink : “Sample href link” }Thank You.",
"username": "Jagadeesh_Reddy_A_L"
},
{
"code": "",
"text": "It is not clearSo share the command you are using. The text version of the command is nice to have so that we can cut-n-paste and a screenshot of the environment where you run the command is also pertinent as the context might reveal a few issues.You shared the structure of the data but it is not clear how many things have the field title. Is there only 1 title in the whole file? Sharing the document by providing a link to it might help us help you.If you have JSON, using GridFS will definitively be the last and worst solution. You won’t be able to use normale queries, you won’t be able to use aggregation.",
"username": "steevej"
},
{
"code": "requesting = []\nrequesting.append(data[i]) -> where data[i] is a dictionary\nresult = mycollection.bulk_write(requesting) \n",
"text": "Hello Steeve,\nI have multiple JSON documents which have multiple fields in it. The JSON document has many dictionaries in it with the multiple fields Currently I’m pushing the data using bulk_write() command.The structure of each dictionary is as given above. The JSON file has this multiple dictionaries.",
"username": "Jagadeesh_Reddy_A_L"
}
]
| Storing a large JSON file containing Textual information with mulitple keys in MongoDB | 2022-12-07T13:12:15.946Z | Storing a large JSON file containing Textual information with mulitple keys in MongoDB | 2,270 |
[
"atlas-cluster"
]
| [
{
"code": "",
"text": "Unable to initiate connection to sandbox. NSLOOKUP fails.\nimage1888×137 11.6 KB\nMy connection string:\nmongo “mongodb+srv://sandbox.mty4ywf.mongodb.net/myFirstDatabase” --username m001-student\nimage1909×345 12.9 KB\nConnection info from Mongo Atlas portal:\n\nimage968×177 12 KB\n",
"username": "Antony_Arokia_Raj_M"
},
{
"code": "mongosh --quiet \"mongodb+srv://sandbox.mty4ywf.mongodb.net/myFirstDatabase\" --eval 'db.hello().ok'\n1\n",
"text": "Switch up your dns resolver. Its working fine:",
"username": "chris"
},
{
"code": "nslookup -type=any sandbox.mty4ywf.mongodb.net\nServer:\t\t127.0.0.53\nAddress:\t127.0.0.53#53\n\nNon-authoritative answer:\nsandbox.mty4ywf.mongodb.net\ttext = \"authSource=admin&replicaSet=atlas-1ar142-shard-0\"\nsandbox.mty4ywf.mongodb.net\tservice = 0 0 27017 ac-nhc98eg-shard-00-00.mty4ywf.mongodb.net.\nsandbox.mty4ywf.mongodb.net\tservice = 0 0 27017 ac-nhc98eg-shard-00-01.mty4ywf.mongodb.net.\nsandbox.mty4ywf.mongodb.net\tservice = 0 0 27017 ac-nhc98eg-shard-00-02.mty4ywf.mongodb.net.\n\nAuthoritative answers can be found from:\n",
"text": "The drivers look up SRV and TXT records in dns and connect using the hosts, ports and connection options returned from them:",
"username": "chris"
},
{
"code": "nslookup -type=any sandbox.mty4ywf.mongodb.net",
"text": "nslookup -type=any sandbox.mty4ywf.mongodb.netThank You. This was a good suggestion. I updated to use Google DNS and it works now.",
"username": "Antony_Arokia_Raj_M"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Unable to initiate connection to sandbox. NSLOOKUP fails | 2022-12-07T19:00:28.349Z | Unable to initiate connection to sandbox. NSLOOKUP fails | 2,015 |
|
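chris' answer shows the SRV and TXT records a driver resolves for a mongodb+srv:// URI. A hedged Node.js sketch that performs the same lookups by hand, which is a quick way to check whether the local DNS resolver can see them (the hostname is the one from the thread):

```javascript
const dns = require("dns").promises;

async function checkSrv(host) {
  // Drivers look up the SRV record under _mongodb._tcp.<host> ...
  const srv = await dns.resolveSrv(`_mongodb._tcp.${host}`);
  // ... and a TXT record on the host itself for extra URI options.
  const txt = await dns.resolveTxt(host);
  console.log(srv); // the shard hosts and port 27017
  console.log(txt); // e.g. authSource / replicaSet options
}

checkSrv("sandbox.mty4ywf.mongodb.net").catch(console.error);
```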
null | [
"java",
"connecting"
]
| [
{
"code": "",
"text": "The mongo-java-driver seems to have some issues with how the connection pool is managed. Eventually leading to a timeout due to the pool reaching the max no. of connections. (I believe it has been fixed in Fix deadlock and couple more problems in `DefaultConnectionPool` by stIncMale · Pull Request #699 · mongodb/mongo-java-driver · GitHub)",
"username": "Alexandru_B"
},
{
"code": "",
"text": "Hi there.That PR is a fix to another PR that has not been included yet in a release, so it would be unrelated to any issues you are seeing in a released version of the Java driver.Can you elaborate on the issues that you are seeing (driver version, stack trace, logs)?Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Hello,Thanks for coming back so quickly.This is the error message that we got back.Timeout waiting for a pooled connection after 120000 MILLISECONDSWe enabled the JMXConnectionPoolListener and noticed the pool reaches the maximum size before this happens.Running version 4.2.3. (Connecting to a replica set hosted in MongoAtlas)",
"username": "Alexandru_B"
},
{
"code": "",
"text": "This is not uncommon, and the root cause is not a bug in the driver. Rather, it’s usually one of:Some possibilities to debug:Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "The default size pool is 100 and we increased to 200 and the same thing happened.2.Executing operations that are not optimized, e.g. a query on an unindexed field in a collectionI’ll double-check this. I’m pretty sure all queries are a run against a suitable index.3.Networking issuesI’ve used netcat to try to test the connectivity to all instances in the replica set and all seemed well.I’ll try switching to DEBUG and see if anything useful shows up. ",
"username": "Alexandru_B"
},
{
"code": "",
"text": "I managed to reproduce it locally.\nAt some point, the pool size reaches a maximum of 100. I stopped the load test on the app and then I ran a request towards the app that involves reading from MongoDB. The response is a timeout.\nI get this timeout even though there is no more load in the application.\nI did a thread dump and noticed a thread called AsyncGetter being in a TIMED_WAITING state after which I get the timeout.\nE.g.“AsyncGetter-4-thread-1” - Thread t@82\njava.lang.Thread.State: TIMED_WAITING\nat sun.misc.Unsafe.park(Native Method)\n- parking to wait for <4a5f1261> (a java.util.concurrent.Semaphore$FairSync)\nat java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)\nat java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)\nat java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)\nat java.util.concurrent.Semaphore.tryAcquire(Semaphore.java:409)\nat com.mongodb.internal.connection.ConcurrentPool.acquirePermit(ConcurrentPool.java:197)\nat com.mongodb.internal.connection.ConcurrentPool.get(ConcurrentPool.java:140)\nat com.mongodb.internal.connection.DefaultConnectionPool.getPooledConnection(DefaultConnectionPool.java:284)\nat com.mongodb.internal.connection.DefaultConnectionPool.access$200(DefaultConnectionPool.java:63)\nat com.mongodb.internal.connection.DefaultConnectionPool$1.run(DefaultConnectionPool.java:164)\nat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\nat java.util.concurrent.FutureTask.run(FutureTask.java:266)\nat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\nat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\nat java.lang.Thread.run(Thread.java:748)\nLocked ownable synchronizers:\n- locked <1c600001> (a java.util.concurrent.ThreadPoolExecutor$Worker)Any reason why a connection couldn’t be retrieved when the pool has 100 opened connections ?",
"username": "Alexandru_B"
},
{
"code": "",
"text": "We are also observing a similar issue with Mongo driver 3.12. Has there been any fix suggested for this issue ?\nFrom the thread dump I see few AsyncGetter threads are stuck in TIMED_WAITING. It is causing the application to get stuck and not recovering by itself until the application is restarted. We are using Java driver 3.12",
"username": "Venky_Chowdary"
}
]
| Any chance to get a new Java driver release with latest bug fixes? | 2021-05-10T14:56:51.482Z | Any chance to get a new Java driver release with latest bug fixes? | 5,511 |
null | [
"queries",
"node-js",
"crud",
"mongoose-odm"
]
| [
{
"code": "{\n _id: ObjectId(\"78391283\"),\n name: \"Colors company\",\n shops: [\n {shopID: ObjectId(\"123456a1cb\"), income: 0},\n {shopID: ObjectId(\"2a1cb67890\"), income: 0},\n {shopID: ObjectId(\"2a1cb010111\"), income: 0},\n ...\n ],\n}\n{\n companyID: \"78391283\",\n shopID: \"123456a1cb\",\n amountToAdd: 200,\n}\nconst ObjectId = require(\"mongoose\").Types.ObjectId;\n\nShops.findOneAndUpdate(\n { _id: parsedBody.shopID},\n {\n set: {\n // <-- missing loop here\n \"shops.$.income\": { parsedBody.amountToAdd + income }\n }\n }\n)\n",
"text": "Hello people, in the following example I have a company named Colors Company that owns many shopsI need to :\n1- Loop on the shops array, find the requested array using {parsedBody.shopID}\n2- Get use of the amount stored in “shop.$.income”\n2- Increase the income using {parsedBody.amountToAdd}This is the POST request parsedBody :What i’ve triedI’m using Next js 13.Thanks for your help !",
"username": "Slog_Go"
},
{
"code": "DB>db.test.find()\n[\n {\n _id: ObjectId(\"639122b0136ffe0bdd036073\"),\n name: 'Colors company',\n shops: [\n { shopID: ObjectId(\"639122b0136ffe0bdd036072\"), income: 0 },\n { shopID: ObjectId(\"639122b2136ffe0bdd036074\"), income: 0 },\n { shopID: ObjectId(\"639122b3136ffe0bdd036075\"), income: 0 },\n { shopID: ObjectId(\"639122b4136ffe0bdd036076\"), income: 0 },\n { shopID: ObjectId(\"639122b5136ffe0bdd036077\"), income: 0 },\n { shopID: ObjectId(\"639122b5136ffe0bdd036078\"), income: 0 }, /// Last 3 elements with same shopID value\n { shopID: ObjectId(\"639122b5136ffe0bdd036078\"), income: 0 }, /// Last 3 elements with same shopID value\n { shopID: ObjectId(\"639122b5136ffe0bdd036078\"), income: 0 } ///Last 3 elements with same shopID value\n ]\n }\n]\nshops\"shopID\"$[<identifier>]DB> db.test.updateOne(\n {_id: ObjectId(\"639122b0136ffe0bdd036073\"), 'shops.shopID': ObjectId(\"639122b5136ffe0bdd036078\")},\n {$inc: {'shops.$[element].income': 200}},\n {arrayFilters:[{'element.shopID':ObjectId(\"639122b5136ffe0bdd036078\")}]}\n)\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0\n}\nDB>db.test.find()\n[\n {\n _id: ObjectId(\"639122b0136ffe0bdd036073\"),\n name: 'Colors company',\n shops: [\n { shopID: ObjectId(\"639122b0136ffe0bdd036072\"), income: 0 },\n { shopID: ObjectId(\"639122b2136ffe0bdd036074\"), income: 0 },\n { shopID: ObjectId(\"639122b3136ffe0bdd036075\"), income: 0 },\n { shopID: ObjectId(\"639122b4136ffe0bdd036076\"), income: 0 },\n { shopID: ObjectId(\"639122b5136ffe0bdd036077\"), income: 0 },\n { shopID: ObjectId(\"639122b5136ffe0bdd036078\"), income: 200 },\n { shopID: ObjectId(\"639122b5136ffe0bdd036078\"), income: 200 },\n { shopID: ObjectId(\"639122b5136ffe0bdd036078\"), income: 200 }\n ]\n }\n]\n\"income\"shopID: ObjectId(\"639122b5136ffe0bdd036078\")mongosh",
"text": "Hi @Slog_Go - Welcome to the community.I’m not too sure if this suits your use case exactly, but i’ve created the following test document in my test environment:Note: the last 3 objects within the shops array have the same \"shopID\" value.I’ve done this to try demonstrate the behaviour of the $[<identifier>] positional operator.Running the following update command:The test document after the above update:\"income\" incremented by 200 for each of the elements where shopID: ObjectId(\"639122b5136ffe0bdd036078\")The above was done in mongosh but you can alter it accordingly to test for your driver.Please note that this was only briefly tested on a single test document so I would advise you test thoroughly in your own test environment to verify it suits all your use case(s) and requirement(s) before moving to production in the case you believe this is what you are after.If you still require further assistance, please advise what the expected output is to be.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Loop on array, then update an object field using ObjectId | 2022-12-07T14:30:37.181Z | Loop on array, then update an object field using ObjectId | 3,911 |
null | []
| [
{
"code": "",
"text": "I encountered the error below trying to upgrade maximum cluster tier from m20 → m30.\n‘Additional permissions are required to access the requested resource.’I’m using aws account to use mongodb atlas, but it doesn’t have billing permission.\nAnd I wonder which exact permission is required to adjust the option because\nthe admin denied to give my account billing permission without specific reason.So I’d like to know which exact permission is needed to do that.Thank you.",
"username": "Joonghun_Park"
},
{
"code": "Project Cluster Manager",
"text": "Hi @Joonghun_Park - Welcome to the community.Based off the title and post description I presume you’re trying to change the auto-scaling max tier. Please correct me if I am wrong here. In saying so, users with the Project Cluster Manager can perform the following tasks:I would refer to the Atlas User Roles documentation for more detailed descriptions of each of the Atlas Roles.Please contact the Atlas chat support if you’re still having troubles trying to change the maximum cluster tier.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Error while trying to adjust maximum cluster tier | 2022-12-07T07:34:57.613Z | Error while trying to adjust maximum cluster tier | 1,022 |
null | [
"mongodb-shell",
"golang",
"containers"
]
| [
{
"code": "conn := mongo.connect()\nsess, _ := conn.StartSession()\n*mongo.clientmongo.Session*mongo.clientmongo.session",
"text": "We have a docker based architecture for the product , And I do the below when I bring up all the docker containersSo at this point in time I have the *mongo.client & mongo.Session I know *mongo.client is go routine safe and we use that for susbsequent operations , Also we do connect only once when we try to bring the docker instance .So my question is when we bring down all the containers ? Do I need a teardown mechanism to say disconnect the client and mongo.session ? If I don’t do that what will be the problem . And if I can get a recommended way of how to close the session and client that would be helpful .",
"username": "karthick_d"
},
{
"code": "DisconnectClientContextDisconnectClientconn, err := mongo.Connect(...)\nif err != nil {\n\tpanic(err)\n}\n\nctx, cancel := context.WithTimeout(context.Background, 30*time.Second)\ndefer cancel()\ndefer conn.Disconnect(ctx)\n\n// Run operations with the Client...\nStartSessionEndSessionsess, err := conn.StartSession()\nif err != nil {\n\tpanic(err)\n}\n\n// Use the Session to run causally consistent\n// operations or transactions...\n\nsess.EndSession(context.TODO())\n",
"text": "It’s best practice to call Disconnect on any connected Client when you no longer need it or when the application is shutting down. If you pass a Context with a timeout to Disconnect, it will disconnect the Client gracefully, including waiting up to the configured timeout for any in-progress operations to finish.Example of disconnecting a client, waiting up to 30 seconds for in-progress operations to complete:If you’re starting sessions with StartSession, you should eventually end that session by calling EndSession to avoid leaking sessions, which can lead to a “TooManyLogicalSessions” error. Sessions also have a maximum lifetime of 30 minutes, at which point the database will end the session.Example of starting and ending a session:",
"username": "Matt_Dale"
}
]
| Do I need to disconnect the *mongo.client and mongo.Session after I bring down the docker containers? | 2022-12-04T05:47:52.320Z | Do I need to disconnect the *mongo.client and mongo.Session after I bring down the docker containers? | 2,894 |
null | [
"aggregation",
"queries",
"python",
"transactions"
]
| [
{
"code": "{\n \"_id\": {\n \"$oid\": \"637ac16a7c31adec64511551\"\n },\n \"User\": \"0\",\n \"Card\": \"0\",\n \"Year\": \"2002\",\n \"Month\": \"11\",\n \"Day\": \"26\",\n \"Time\": \"11:21\",\n \"Amount\": \"$379.73\",\n \"Use Chip\": \"Swipe Transaction\",\n \"Merchant Name\": \"6515854639642454768\",\n \"Merchant City\": \"Calexico\",\n \"Merchant State\": \"CA\",\n \"Zip\": \"92231.0\",\n \"MCC\": \"3066\",\n \"Errors?\": \"\",\n \"Is Fraud?\": \"No\"\n}\nproject_cost = {\n \"$project\": {\n \"MCC\": 1,\n \"cost_split\": {\n \"$split\": [\n \"$Amount\", \"$\"\n ]\n }\n }\n }\npymongo.errors.OperationFailure: Invalid $project :: caused by :: '$' by itself is not a valid FieldPath, full error: {'ok': 0.0, 'errmsg': \"Invalid $project :: caused by :: '$' by itself is not a valid FieldPath\", 'code': 16872, 'codeName': 'Location16872'}\n",
"text": "I’m trying to split my amount field(String) and remove dollar symbol, for grouping and sum of amount, but I’m getting error saying $ is not valid path, I’m new to MongoDB aggregations and not able to figure out how to split it.Sample Json:Query:Error:",
"username": "karthik_tvs"
},
{
"code": "{\"$literal\": \"$\"}$",
"text": "$ is often used to identify a stage or operator. Try {\"$literal\": \"$\"} for a literal $Many of the fields could be using better data types to begin with and will serve you better as you need to perform different and more complex queries.",
"username": "chris"
},
{
"code": "db.collection.aggregate([\n {\n \"$project\": {\n \"MCC\": 1,\n \"cost_split\": {\n \"$split\": [\n \"$Amount\",\n {\n $literal: \"$\"\n }\n ]\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"MCC\": 1,\n \"cost_split\": 1\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$cost_split\",\n \"includeArrayIndex\": \"arrayIndex\"\n }\n },\n {\n \"$match\": {\n \"arrayIndex\": 1\n }\n },\n {\n \"$group\": {\n \"_id\": \"$MCC\",\n \"Total_cost\": {\n \"$sum\": {\n \"$convert\": {\n \"cost_split\": \"$moop\",\n \"to\": \"int\"\n }\n }\n }\n }\n }\n])\npymongo.errors.OperationFailure: $convert found an unknown argument: cost_split, full error: {'ok': 0.0, 'errmsg': '$convert found an unknown argument: cost_split', 'code': 9, 'codeName': 'FailedToParse'}\n",
"text": "Thanks $literal query is succeeding and I have further pipeline query as below and below is the errorbelow is the error I’m getting\nError",
"username": "karthik_tvs"
},
{
"code": "",
"text": "Use double or decimal.You can see how using a correct data type would make this much simpler!",
"username": "chris"
}
]
| $split operator with $ symbol | 2022-12-07T21:31:28.094Z | $split operator with $ symbol | 1,746 |
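For the thread above: the FailedToParse error comes from passing the value under the key cost_split instead of the $convert operator's input argument. A minimal mongosh sketch of a corrected pipeline, using double as chris suggests; the collection name transactions is an assumption, not something given in the thread:

```javascript
// Strip the leading "$" from Amount, convert to a number, then sum per MCC.
db.transactions.aggregate([
  {
    $project: {
      _id: 0,
      MCC: 1,
      amountNumber: {
        $convert: {
          // "$379.73" -> ["", "379.73"]; take the second element
          input: { $arrayElemAt: [{ $split: ["$Amount", { $literal: "$" }] }, 1] },
          to: "double",          // double/decimal keeps the cents, unlike "int"
          onError: null          // tolerate malformed amounts instead of failing
        }
      }
    }
  },
  { $group: { _id: "$MCC", Total_cost: { $sum: "$amountNumber" } } }
])
```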
null | [
"atlas-cluster",
"golang",
"containers",
"rust"
]
| [
{
"code": "mongodb+srv://hosteldb.e3ayhyn.mongodb.net/?retryWrites=true&w=majority&authSource=%24external&authMechanism=MONGODB-X509&tlsCertificateKeyFile=X509-cert-application.pem\nConnected successfully.\nthread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: ServerSelection { message: \"Server selection timeout: No available servers. Topology: { Type: ReplicaSetNoPrimary, Servers: [ { Address: ac-hikpf5h-shard-00-02.e3ayhyn.mongodb.net:27017, Type: Unknown, Error: No such file or directory (os error 2) }, { Address: ac-hikpf5h-shard-00-01.e3ayhyn.mongodb.net:27017, Type: Unknown, Error: No such file or directory (os error 2) }, { Address: ac-hikpf5h-shard-00-00.e3ayhyn.mongodb.net:27017, Type: Unknown, Error: No such file or directory (os error 2) }, ] }\" }, labels: {}, wire_version: None, source: None }', src/main.rs:16:70\nstack backtrace:\n 0: 0x558c9867b9a0 - std::backtrace_rs::backtrace::libunwind::trace::h32eb3e08e874dd27\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5\n 1: 0x558c9867b9a0 - std::backtrace_rs::backtrace::trace_unsynchronized::haa3f451d27bc11a5\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5\n 2: 0x558c9867b9a0 - std::sys_common::backtrace::_print_fmt::h5b94a01bb4289bb5\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/sys_common/backtrace.rs:66:5\n 3: 0x558c9867b9a0 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hb070b7fa7e3175df\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/sys_common/backtrace.rs:45:22\n 4: 0x558c986a10de - core::fmt::write::hd5207aebbb9a86e9\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/fmt/mod.rs:1202:17\n 5: 0x558c98675a15 - std::io::Write::write_fmt::h3bd699bbd129ab8a\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/io/mod.rs:1679:15\n 6: 0x558c9867d1d3 - std::sys_common::backtrace::_print::h7a21be552fdf58da\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/sys_common/backtrace.rs:48:5\n 7: 0x558c9867d1d3 - std::sys_common::backtrace::print::ha85c41fe4dd80b13\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/sys_common/backtrace.rs:35:9\n 8: 0x558c9867d1d3 - std::panicking::default_hook::{{closure}}::h04cca40023d0eeca\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:295:22\n 9: 0x558c9867cebf - std::panicking::default_hook::haa3ca8c310ed5402\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:314:9\n 10: 0x558c9867d87a - std::panicking::rust_panic_with_hook::h7b190ce1a948faac\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:698:17\n 11: 0x558c9867d777 - std::panicking::begin_panic_handler::{{closure}}::hbafbfdc3e1b97f68\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:588:13\n 12: 0x558c9867be4c - std::sys_common::backtrace::__rust_end_short_backtrace::hda93e5fef243b4c0\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/sys_common/backtrace.rs:138:18\n 13: 0x558c9867d492 - rust_begin_unwind\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:584:5\n 14: 0x558c97b3d373 - core::panicking::panic_fmt::h8d17ca1073d9a733\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/panicking.rs:142:14\n 15: 0x558c97b3d4c3 - 
core::result::unwrap_failed::hfaddf24b248137d3\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/result.rs:1785:5\n 16: 0x558c97b6c8b8 - core::result::Result<T,E>::unwrap::h7cfaf9f63865626d\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/result.rs:1107:23\n 17: 0x558c97b4e432 - rust_consumer::main::hab342cdb996c6973\n at /workspaces/HostelMan/document-verification/rust-consumer/src/main.rs:16:20\n 18: 0x558c97b3d9db - core::ops::function::FnOnce::call_once::ha36fa24bcea7fbf0\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/ops/function.rs:248:5\n 19: 0x558c97b4ea3e - std::sys_common::backtrace::__rust_begin_short_backtrace::hf46ea52f7fc1b996\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/sys_common/backtrace.rs:122:18\n 20: 0x558c97b4fed1 - std::rt::lang_start::{{closure}}::h9ec016e26b03fa68\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/rt.rs:166:18\n 21: 0x558c986722af - core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once::hb69be6e0857c6cfb\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/core/src/ops/function.rs:283:13\n 22: 0x558c986722af - std::panicking::try::do_call::h396dfc441ee9c786\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:492:40\n 23: 0x558c986722af - std::panicking::try::h6cdda972d28b3a4f\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:456:19\n 24: 0x558c986722af - std::panic::catch_unwind::h376039ec264e8ef9\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panic.rs:137:14\n 25: 0x558c986722af - std::rt::lang_start_internal::{{closure}}::hc94720ca3d4cb727\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/rt.rs:148:48\n 26: 0x558c986722af - std::panicking::try::do_call::h2422fb95933fa2d5\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:492:40\n 27: 0x558c986722af - std::panicking::try::h488286b5ec8333ff\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panicking.rs:456:19\n 28: 0x558c986722af - std::panic::catch_unwind::h81636549836d2a25\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/panic.rs:137:14\n 29: 0x558c986722af - std::rt::lang_start_internal::h6ba1bb743c1e9df9\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/rt.rs:148:20\n 30: 0x558c97b4feaa - std::rt::lang_start::h65289c0e3cdf7e56\n at /rustc/897e37553bba8b42751c67658967889d11ecd120/library/std/src/rt.rs:165:17\n 31: 0x558c97b4ea21 - main\n 32: 0x7f20c1cbed0a - __libc_start_main\n 33: 0x558c97b3d6aa - _start\n 34: 0x0 - <unknown>\nfn db_init() -> Client {\n // Path to certificate\n let mongo_certificate = env::var(\"MONGODB_CERTIFICATE\").unwrap();\n\n // MongoDB connection URI\n let uri = \"mongodb+srv://hosteldb.e3ayhyn.mongodb.net/?retryWrites=true&w=majority&authSource=%24external&authMechanism=MONGODB-X509&tlsCertificateKeyFile=\".to_owned()+ &mongo_certificate;\n\n println!(\"{}\", uri);\n\n let client_options = ClientOptions::parse(uri).unwrap();\n\n // Get a handle to the cluster\n let client = Client::with_options(client_options).unwrap();\n\n // List the names of the databases in that cluster\n for db_name in client.list_database_names(None, None) {\n println!(\"{:?}\", db_name.len());\n }\n\n // Ping the server to see if you can connect to the cluster\n client\n .database(\"University\")\n .run_command(doc! 
{\"ping\": 1}, None)\n .unwrap();\n println!(\"Connected successfully.\");\n\n client\n}\n",
"text": "I’m getting this error on trying to ping my DB.Here’s my code:I have it running in a docker container. Go driver in same setting and on same network works perfectly fine. I don’t understand the issue.",
"username": "Azanul_Haque"
},
{
"code": "",
"text": "Is the certificate referenced built in to the image or correctly bind mounted in the container ?",
"username": "chris"
},
{
"code": "",
"text": "I know different drivers evaluate relative paths of certificate files differently. The Go driver evaluates relative paths based on the current working directory of the running binary (usually the current directory of the shell used to run the binary). I’m not sure how the Rust driver evaluates relative paths of certificate files.",
"username": "Matt_Dale"
}
]
| Rust driver connection error | 2022-12-05T16:33:55.135Z | Rust driver connection error | 2,258 |
[
"node-js"
]
| [
{
"code": "",
"text": "I have a small webapp that runs locally with a local MongoDB database just fine. I moved it to Render, and created a DB cluster on mongoDB atlas, and the webapp appears to build on render, but every time I start the webapp on Render, it says it can’t connect with MongoDB.Here’s my app.js connection code:\n\nScreen Shot 2022-10-08 at 2.38.09 AM2129×1283 311 KB\n",
"username": "Daniel_Winterbottom"
},
{
"code": "",
"text": "Here’s the Render log entry informing me that it couldn’t connect. to the MongoDB Atlas DB. :\n\nScreen Shot 2022-10-08 at 2.36.43 AM2052×588 164 KB\nNote that I am able to connect manually using mongosh from my Terminal app on my Mac.",
"username": "Daniel_Winterbottom"
},
{
"code": "",
"text": "For some reason, the connection code screencap is not visible in the first post. Here it is again:\n\nScreen Shot 2022-10-08 at 2.38.09 AM2129×1283 311 KB\n\nI should also say that I have whitelisted both my ip address and all ip addresses (via whitelisting 0.0.0.0/0).",
"username": "Daniel_Winterbottom"
},
{
"code": "http://portquiz.net:27017/\n",
"text": "From the Render machine running your app try to access:If it fails then you have to communicate with the Render people since they are blocking outgoing traffic on the given port.",
"username": "steevej"
},
{
"code": "",
"text": "@Daniel_Winterbottom I just ran into a similar problem (connecting Mongo Atlas to render web service). I had a slightly more vague error, but my issue was resolved by adding my render project’s outbound IP addresses to mongodb project’s network access.Posting on the off chance that this is helpful. I had found this guide, but had already been following those steps to no avail.",
"username": "Billy_Littlefield"
},
{
"code": "",
"text": "Thanks Billy, I was migrating an app from Heroku and this resolved my issues to access my mongo Data Base.",
"username": "Alejandro_Valdiviezo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Can't connect to Mongodb Atlas from Render Web-Hosted App | 2022-10-08T07:01:19.822Z | Can’t connect to Mongodb Atlas from Render Web-Hosted App | 7,748 |
|
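The connection code in the thread above is only visible as a screenshot, but a minimal Node.js sketch of the usual pattern for a Render web service — reading the Atlas URI from an environment variable and logging connection failures — might look like the following. The MONGODB_URI variable name and the use of the native driver are assumptions, not details from the original post:

```javascript
// app.js excerpt (hypothetical): connect to Atlas using a URI supplied via
// the Render dashboard's environment settings rather than a hard-coded string.
const { MongoClient } = require('mongodb');

const uri = process.env.MONGODB_URI;
const client = new MongoClient(uri);

async function start() {
  try {
    await client.connect();
    await client.db('admin').command({ ping: 1 });
    console.log('Connected to MongoDB Atlas');
  } catch (err) {
    // Surfaces network-access / IP allowlist problems in the Render logs.
    console.error('MongoDB connection error:', err);
    process.exit(1);
  }
}

start();
```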
null | [
"migration"
]
| [
{
"code": "",
"text": "Hi,I am new with monogodb. My Question is how we can move local mongo db changes to production mongodb without loosing data. For Instance I have already running monogodb which my client is using 24/7, Now I have added two more Fields in user collection, added and removed indexing on different collection e.t.c. Now I want to apply the changes to production db.Is it a good praticeMongodb is running in container.Thanks",
"username": "Trial_Work"
},
{
"code": "",
"text": "Hi,\ni donot know it’s the right channel to ask but any help will higly appericated.\nThanks",
"username": "Trial_Work"
},
{
"code": "",
"text": "Now I have added two more Fields in user collectionMongoDB uses flexible schema, meaning you can change fields freely. unless you need strict schema in your API as many frameworks need that. you need versioning if you change too many things.if those two fields (or any new field) are not mandatory inputs (meaning they can be null), then you don’t need any change in the production database. that is the flexibility of document databases.otherwise, you need to add a versioning field to your new schema (model) as well as a second schema to process old documents. this introduces extra (inescapable) work to your application as you have to work with 2 schemas every time you read a document. you should also start a migration logic about old documents because at some point you will have many versions to deal with any time you add a mandatory field.added and removed indexing on different collectionindex contents are based on the current documents in the database and changed with each insert/delete/update operation. so even if they use the same keys, production indexes are different than your development indexes because documents are different. to keep your customer’s data intact, you need to script your way for removing/adding indexes.",
"username": "Yilmaz_Durmaz"
}
]
| Move Mongodb Dev Changes to Running Production MongoDB | 2022-12-06T16:56:14.851Z | Move Mongodb Dev Changes to Running Production MongoDB | 2,522 |
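A minimal mongosh sketch of the "script your way" advice in the reply above: applying index changes and stamping a schema version on existing documents in the production database. Collection, index, and field names here are placeholders, not the poster's actual schema:

```javascript
// migration-001.js — run once against production, e.g.:
//   mongosh "mongodb://prod-host:27017/appdb" migration-001.js
const users = db.getCollection('users');

// 1. Create the index added during development.
users.createIndex({ email: 1 }, { name: 'email_1' });

// 2. Drop an index that was removed during development.
users.dropIndex('obsolete_field_1');

// 3. Backfill the new optional fields and record a schema version so the
//    application can distinguish old documents from new ones.
users.updateMany(
  { schemaVersion: { $exists: false } },
  { $set: { schemaVersion: 2, newFieldA: null, newFieldB: null } }
);
```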
null | [
"queries",
"performance"
]
| [
{
"code": "",
"text": "Manual presented at https://www.mongodb.com/docs/manual/reference/explain-results/ shows winning plan structure and executionStats structure , where each stage passes its resulting documents or index keys to the parent node.I have a query about ‘executionTimeMillis’ field shown at executionStats and ‘executionTimeMillisEstimate’ shown at every stage hierarchy.I have observed that ‘executionTimeMillis’ = ‘executionTimeMillis’ of Shard_Merge stage.\nAnother observation is ‘executionTimeMillis’ of shard_merge stage is maximum of all shard’s ‘executionTimeMillis’. That may be due to parallel of execution at all shards. I can understand this logic.\nMy query is why ‘executionTimeMillis’ of each shard is neither equal to outer stage of that shard nor sum of all stages of that shard?Can an y one explain‘executionTimeMillis’ is not equal to most outer stage’s ‘executionTimeMillisEstimate’ nor sum of every stage.Can any one explain me why?",
"username": "Prof_Monika_Shah"
},
{
"code": "",
"text": "Please answer that is ‘executionTimeMillis’ of Shard_Merge equal to ‘executionTimeMillisEstmate’ of input stage?",
"username": "Prof_Monika_Shah"
}
]
| Regarding executionTime shown at explain plan | 2022-12-05T06:16:12.296Z | Regarding executionTime shown at explain plan | 1,566 |
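For reference, the fields discussed in the thread above can be reproduced with explain at executionStats verbosity; on a sharded cluster the per-shard timings appear under the shards array of the execution stages. The collection name and filter below are placeholders:

```javascript
// Compare the top-level time with the per-stage estimates.
const plan = db.orders.find({ status: 'A' }).explain('executionStats');

// Total time measured by the server for the winning plan, in milliseconds.
print(plan.executionStats.executionTimeMillis);

// Per-stage tree; each stage carries an executionTimeMillisEstimate, which is
// an estimate and, as observed in the thread, need not sum to the total above.
printjson(plan.executionStats.executionStages);
```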
null | [
"java",
"production",
"scala"
]
| [
{
"code": "",
"text": "The 4.8.1 MongoDB Java & JVM Drivers release is a patch to the 4.8.0 release.The documentation hub includes extensive documentation of the 4.8 driver.You can find a full list of bug fixes here .",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Java Driver 4.8.1 Released | 2022-12-07T18:08:06.996Z | MongoDB Java Driver 4.8.1 Released | 2,331 |
null | [
"queries",
"indexes",
"performance",
"transactions"
]
| [
{
"code": "{\"Type\": \"Retail\", \"ArchivalStatus\": {\"$exists\": false}, \"CreatedDate\": ISO Date }{\"ArchivalStatus\": \"COMPLETED\"}{\"name\" : \"ABC\", \"details\" : {...}, \"ArchivalStatus\": \"COMPLETE\", \"CreatedDate\": ISO Date() }db.coll.createIndex(\n { \"Type\": 1},\n { partialFilterExpression: { \"ArchivalStatus\": { $eq: null } } }\n);\n$exists : falsedb.RV.createIndex(\n { \"Type\": 1},\n { partialFilterExpression: { \"match_res.enrichmentstatus\": { $exists: false } } }\n);\n",
"text": "Hello Team,Requirement is to identify transactions with the\n{\"Type\": \"Retail\", \"ArchivalStatus\": {\"$exists\": false}, \"CreatedDate\": ISO Date } and update a new field → {\"ArchivalStatus\": \"COMPLETED\"} .\nBelow is the sample structure.{\"name\" : \"ABC\", \"details\" : {...}, \"ArchivalStatus\": \"COMPLETE\", \"CreatedDate\": ISO Date() }I’m planning to create the below partial index to pick only those records not having ArchivalStatus field.Below are the queries with respect to partial indexes in mongo version 4.4.Please help me understand the pre requisite to create partial indexes with ‘$and’ operator at the top-level only (https://www.mongodb.com/docs/v5.0/core/index-partial/) with an example.How to create partial index with multiple fields in mongo version 4.4Regards,\nLaks",
"username": "Laks"
},
{
"code": "",
"text": "Hi Team,Do you any updates on the above query?Regards,\nLaks",
"username": "Laks"
},
{
"code": "$exists : falsec.createIndex( { last_updated : 1 } , { partialFilterExpression: { \"$and\" : [ { name : \"Tips\" } , { type : \"generic\"} ] }})\n",
"text": "why is the below index with $exists : false not supported ?Because the documentation says that $exists:true is allowed. If $exists:false would be allow the documentation will have mentioned. There might be a technical reason why it cannot be supported. One thing about partial index is that the query must specify the fields used in the expression. This makes me think how would you specify a query that specify a field that does not exists.You probably could reverse your logic and have start with enrichmentstatus:null rather than non existing status and have your partial index expression to be enrichmentstatus:null.the requisite to create partial indexes with ‘$and’ operator at the top-levelexample:By top-level only, I could only assume that $and has to come first. But since the operators $not and $or are not allowed, you could probably re-write any expression with $and somewhere else as an $and at the top level.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Partial Indexes | 2022-11-30T11:56:12.331Z | Partial Indexes | 2,566 |
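Expanding the one-line example from the reply above into a runnable mongosh sketch: a partial index whose partialFilterExpression uses $and at the top level, together with a query that can use it (the query must include the filter-expression fields, as the reply notes). Collection and field names are illustrative only:

```javascript
// Partial index: only documents matching both conditions are indexed.
db.coll.createIndex(
  { last_updated: 1 },
  { partialFilterExpression: { $and: [{ name: "Tips" }, { type: "generic" }] } }
);

// Eligible query: it restates the partial-filter conditions, so the planner
// knows every matching document is covered by the partial index.
db.coll.find({
  name: "Tips",
  type: "generic",
  last_updated: { $gte: ISODate("2022-01-01") }
});
```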
null | [
"atlas-search",
"text-search"
]
| [
{
"code": "",
"text": "How can I configure a custom analyzer in order to search for an exact match of a phrase inside a long text?For example, given the following text:Lorem ipsum dolor sit amet. Aut libero dolorem qui quia nesciunt et pariatur officiis sed error architecto qui ipsum fuga sed placeat quia vel cupiditate architecto. Ea autem veritatis eum ipsa distinctio qui distinctio velit!I want to have a match when searching “nesciunt et pariatur officiis”, but not when searching “ipsum amet” (since those two words appear in text and in that order, but not next to each other).",
"username": "German_Medaglia"
},
{
"code": "phrase",
"text": "Hi @German_Medaglia , you should be able to achieve this by using a predefined analyzer (e.g. lucene.standard) in your index and the phrase operator in your query. Check out the docs here!",
"username": "amyjian"
},
{
"code": "",
"text": "Hi @amyjian! Thanks a lot for your reply. I had already tried that, but when searching for example “this is a test”, I get as result one item having the text “Is this email displaying correctly?”. Besides the order of “this” and “is” being inverted, I also would like to return only results having the full input query (not only any of the words).",
"username": "German_Medaglia"
},
{
"code": "",
"text": "Can you share an example of the index definition and query that you are using?",
"username": "amyjian"
},
{
"code": "",
"text": "Sorry, I think that the unwanted results were being returned as a match for other conditions in the aggregation pipeline. Thanks for your help! I will mark your first answer as solution.",
"username": "German_Medaglia"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Searching exact phrase inside long text | 2022-11-23T20:01:00.213Z | Searching exact phrase inside long text | 2,206 |
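A minimal sketch of the accepted approach in the thread above, assuming a collection named articles with the text in a description field and a search index named default built with the lucene.standard analyzer; the names are placeholders, not taken from the thread:

```javascript
// Index definition (created in the Atlas UI or via the API):
// {
//   "mappings": {
//     "dynamic": false,
//     "fields": { "description": { "type": "string", "analyzer": "lucene.standard" } }
//   }
// }

// The phrase operator only matches the terms in order and adjacent, so
// "nesciunt et pariatur officiis" matches but "ipsum amet" does not.
db.articles.aggregate([
  {
    $search: {
      index: "default",
      phrase: { query: "nesciunt et pariatur officiis", path: "description" }
    }
  },
  { $limit: 5 }
])
```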
null | [
"node-js",
"mongodb-shell"
]
| [
{
"code": "npm install -g lerna\nnpm install -g typescript\nnpm run bootstrap\nexport SEGMENT_API_KEY=\"dummy\"\nnpm run compile-exec\n./mongosh[80020]: ../src/node_contextify.cc:792:static void node::contextify::ContextifyScript::New(const FunctionCallbackInfo<v8::Value> &): Assertion `(sandbox) != nullptr' failed.\n 1: 0x10081832c node::Abort() [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n 2: 0x100818160 node::AppendExceptionLine(node::Environment*, v8::Local<v8::Value>, v8::Local<v8::Message>, node::ErrorHandlingMode) [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n 3: 0x10080ceb8 node::contextify::ContextifyScript::New(v8::FunctionCallbackInfo<v8::Value> const&) [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n 4: 0x1009def8c v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<true>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, unsigned long*, int) [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n 5: 0x1009deaec v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*) [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n 6: 0x1012a73ec Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n 7: 0x101221914 Builtins_JSBuiltinsConstructStub [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n 8: 0x10134ce7c Builtins_ConstructHandler [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n 9: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n10: 0x101221700 construct_stub_create_deopt_addr [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n11: 0x10134ce7c Builtins_ConstructHandler [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n12: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n13: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n14: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n15: 0x1084ce2d0\n16: 0x1084e2dec\n17: 0x1084e01d0\n18: 0x10850c138\n19: 0x1084e23ec\n20: 0x1084e1d9c\n21: 0x1084dec4c\n22: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n23: 0x1084d6858\n24: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n25: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n26: 0x1084d6858\n27: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n28: 0x1084d6858\n29: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n30: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n31: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n32: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n33: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n34: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n35: 0x101224064 Builtins_InterpreterEntryTrampoline 
[/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n36: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n37: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n38: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n39: 0x1084ce2d0\n40: 0x1084d086c\n41: 0x101224064 Builtins_InterpreterEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n42: 0x1012224f0 Builtins_JSEntryTrampoline [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n43: 0x101222184 Builtins_JSEntry [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n44: 0x100ab3c88 v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n45: 0x100ab3300 v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, int, v8::internal::Handle<v8::internal::Object>*) [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n46: 0x10098df6c v8::Function::Call(v8::Local<v8::Context>, v8::Local<v8::Value>, int, v8::Local<v8::Value>*) [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n47: 0x1007fde74 node::builtins::BuiltinLoader::CompileAndCall(v8::Local<v8::Context>, char const*, node::Realm*) [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n48: 0x100886450 node::Realm::ExecuteBootstrapper(char const*) [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n49: 0x10076208c std::__1::__function::__func<node::LoadEnvironment(node::Environment*, char const*)::$_0, std::__1::allocator<node::LoadEnvironment(node::Environment*, char const*)::$_0>, v8::MaybeLocal<v8::Value> (node::StartExecutionCallbackInfo const&)>::operator()(node::StartExecutionCallbackInfo const&) [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n50: 0x1007e1b28 node::StartExecution(node::Environment*, std::__1::function<v8::MaybeLocal<v8::Value> (node::StartExecutionCallbackInfo const&)>) [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n51: 0x100760898 node::LoadEnvironment(node::Environment*, char const*) [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n52: 0x1015df808 main [/Users/h3n4l/Documents/mycode/mongosh/dist/mongosh]\n53: 0x19e55be50 start [/usr/lib/dyld]\n",
"text": "Hi everyone, I am trying to build the mongosh into an executable.\nOS: Darwin\nArch: ARM64\nVersion: Ventura 13.0\nXCode: 14.0.1\nFollow the README on mongosh GitHub repo, I running:After 10 mins, I get the executable, but I encounter the following error when I try to run it:",
"username": "oysterdays"
},
{
"code": "",
"text": "@oysterdays What are your Node.js/npm versions?",
"username": "Anna_Henningsen"
},
{
"code": "",
"text": "Fwiw, I can reproduce this issue with Node.js 18. mongosh currently only builds with Node.js 16, but we will ensure that we are able to upgrade to a more recent Node.js version in the upcoming months.",
"username": "Anna_Henningsen"
}
]
| Build mongosh successfully but encounter Assertion `(sandbox) != nullptr' failed. if running the executable | 2022-12-07T03:46:40.244Z | Build mongosh successfully but encounter Assertion `(sandbox) != nullptr’ failed. if running the executable | 1,274 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "[\n {\n \"HistogramData\": {\n \"HistogramDataColumnTitle\": {\n \"ColumnTitle\": [\n \"Tran PPV in/s\",\n \"Tran Freq Hz\",\n \"Vert PPV in/s\",\n \"Vert Freq Hz\",\n \"Long PPV in/s\",\n \"Long Freq Hz\",\n \"Geophone PVS in/s\"\n ]\n },\n \"HistogramDataIntervals\": {\n \"Time\": \"2021-05-20 12:22:32\",\n \"Row\": {\n \"Cell\": [\n \"0.1462\",\n \"46.5\",\n \"0.0525\",\n \">100\",\n \"0.5494\",\n \"25.6\",\n \"0.5577\"\n ]\n }\n }\n }\n },\n {\n \"HistogramData\": {\n \"HistogramDataColumnTitle\": {\n \"ColumnTitle\": [\n \"Tran PPV in/s\",\n \"Tran Freq Hz\",\n \"Vert PPV in/s\",\n \"Vert Freq Hz\",\n \"Long PPV in/s\",\n \"Long Freq Hz\",\n \"Geophone PVS in/s\"\n ]\n },\n \"HistogramDataIntervals\": [\n {\n \"Time\": \"2021-05-22 19:15:51\",\n \"Row\": {\n \"Cell\": [\n \"0.0050\",\n \"19.7\",\n \"0.0056\",\n \"7.4\",\n \"0.0050\",\n \"28.4\",\n \"0.0063\"\n ]\n }\n },\n {\n \"Time\": \"2021-05-22 19:30:51\",\n \"Row\": {\n \"Cell\": [\n \"0.0050\",\n \"60.2\",\n \"0.0044\",\n \"20.5\",\n \"0.0050\",\n \"48.8\",\n \"0.0063\"\n ]\n }\n }\n ]\n }\n }\n]\n[\n {\n \"Time\": \"2021-05-20 12:22:32\",\n \"Max PPV in/s\": 0.5494,\n \"Max Freq Hz\": 25.6\n },\n {\n \"Time\": \"2021-05-22 19:15:51\",\n \"Max PPV in/s\": 0.0056,\n \"Max Freq Hz\": 7.4\n },\n {\n \"Time\": \"2021-05-22 19:30:51\",\n \"Max PPV in/s\": 0.0050,\n \"Max Freq Hz\": 60.2\n }\n]\n",
"text": "Hi All! I’ve got a doozy… I need to reduce some data from a large collection. The data I am focusing on looks like this:and I somehow need to reduce it to this:What is going on here? The values in the “Cell” array correspond to the “ColumnTitle” array. The values and dates are stored as strings for various reasons that cannot be addressed, let’s take it as a given. A document may have one set of values as you see in the first document, or multiple sets of values as you see in the second document. Regardless, for each set of values, I must compare all “PPV in/s” values, find the max, and pair with the corresponding “Freq Hz”. Add time and that’s what I present as the desired outcome. Presenting the results in a new array after “Cell” is also acceptable.I’ve been going around in circles trying to figure out how to implement different stages in an aggregation pipeline. Any help would be much appreciated. Thank you in advance!",
"username": "Nick_Mac"
},
{
"code": "",
"text": "Are your ColumnTitle always the same and in the same orders?When you ask for Max PPV, do you mean the max between Tran PPV, Vert PPV and Long PPV?When HistogramDataIntervals is an array, do you want the Time and Values of the row that has the max as defined in the previous sentence?Your input values seem to be string and your result shows number. What to do if the Freq of the Max PPV is the string value >100 as you have in one of your input documents?",
"username": "steevej"
},
{
"code": "[\n {\n \"Time\": \"2021-05-20 12:22:32\",\n \"Tran PPV in/s\": \"0.1462\",\n \"Tran Freq Hz\": \"46.5\",\n \"Vert PPV in/s\": \"0.0525\",\n \"Vert Freq Hz\": \">100\",\n \"Long PPV in/s\": \"0.5494\",\n \"Long Freq Hz\": \"25.6\",\n \"Geophone PVS in/s\": \"0.5577\"\n },\n {\n \"Time\": \"2021-05-22 19:15:51\",\n \"Tran PPV in/s\": \"0.0050\",\n \"Tran Freq Hz\": \"19.7\",\n \"Vert PPV in/s\": \"0.0056\",\n \"Vert Freq Hz\": \"7.4\",\n \"Long PPV in/s\": \"0.0050\",\n \"Long Freq Hz\": \"28.4\",\n \"Geophone PVS in/s\": \"0.0063\"\n },\n {\n \"Time\": \"2021-05-22 19:30:51\",\n \"Tran PPV in/s\": \"0.0050\",\n \"Tran Freq Hz\": \"60.2\",\n \"Vert PPV in/s\": \"0.0044\",\n \"Vert Freq Hz\": \"20.5\",\n \"Long PPV in/s\": \"0.0050\",\n \"Long Freq Hz\": \"48.8\",\n \"Geophone PVS in/s\": \"0.0063\"\n }\n]\n",
"text": "I’ve identified three types of sensors. The “combo” sensors have all the seven channels I presented above and always report out in that order. The other type does not have the “Geophone” channel. Lastly there are sensors that only have a “Geophone” channel.Yes, max PPV is the max between Tran PPV, Vert PPV and Long PPV.If you mean presenting the max values alongside the original values then yes that would be good. Time must be presented, not optional.Good catch. When frequency is >100 I plot is as 100. It could be >200 too and I plot that as 200.I’ve been contemplating whether having an intermediate step like this would be beneficial:Thank you so much, Steeve!",
"username": "Nick_Mac"
},
{
"code": "{\n \"Time\": \"2021-05-22 19:30:51\",\n \"Max PPV in/s\": 0.0050,\n \"Max Freq Hz\": 60.2\n }\n_unwind = { \"$unwind\" : \"$HistogramData.HistogramDataIntervals\" }\n_project = { \"$project\" : {\n \"_id\" : 0 ,\n \"time\" : \"$HistogramData.HistogramDataIntervals.Time\" ,\n \"row\" : \"$HistogramData.HistogramDataIntervals.Row.Cell\"\n} }\n{ time: '2021-05-20 12:22:32',\n row: [ '0.1462', '46.5', '0.0525', '>100', '0.5494', '25.6', '0.5577' ] }\n{ time: '2021-05-22 19:15:51',\n row: [ '0.0050', '19.7', '0.0056', '7.4', '0.0050', '28.4', '0.0063' ] }\n{ time: '2021-05-22 19:30:51',\n row: [ '0.0050', '60.2', '0.0044', '20.5', '0.0050', '48.8', '0.0063' ] }\n_set_PPV = { \"$set\" : {\n \"_PPV\" : { \"$map\" : {\n \"input\" : \"$_ppv_indices\" ,\n \"as\" : \"index\" ,\n \"in\" : { \"$arrayElemAt\" : [ \"$$row\" , \"$$index\" ] }\n } }\n} }\n_set_Data = { \"$set\" : { \n \"_Data\" : { \"$cond\" : {\n \"if\" : { \"$isArray\" : \"$HistogramData.HistogramDataIntervals\" } ,\n \"then\" : \"$HistogramData.HistogramDataIntervals\" ,\n \"else\" : [ \"$HistogramData.HistogramDataIntervals\" ]\n } } ,\n} }\n_set_indices = { \"$set\" : {\n \"_ppv_indices\" : [ 0 , 2 , 4 ] ,\n \"_freq_indices\" : [ 1 , 3 , 5 ]\n} }\n_set_PPV = { \"$set\" : {\n \"_PPV\" : { \"$map\" : {\n \"input\" : \"$_Data\" ,\n \"as\" : \"data\" ,\n \"in\" : {\n \"Time\" : \"$$data.Time\" , \n \"ppv\" : { \"$map\" : {\n \"input\" : \"$_ppv_indices\" ,\n \"as\" : \"index\" ,\n \"in\" : { \"$arrayElemAt\" : [ \"$$data.Row.Cell\" , \"$$index\" ] }\n } }\n }\n } }\n} }\n_set_Freq = { \"$set\" : {\n \"_Freq\" : { \"$map\" : {\n \"input\" : \"$_Data\" ,\n \"as\" : \"data\" ,\n \"in\" : {\n \"Time\" : \"$$data.Time\" , \n \"freq\" : { \"$map\" : {\n \"input\" : \"$_freq_indices\" ,\n \"as\" : \"index\" ,\n \"in\" : { \"$arrayElemAt\" : [ \"$$data.Row.Cell\" , \"$$index\" ] }\n } }\n }\n } }\n} }\n_convert_PPV = { \"$set\" : {\n \"_PPV_converted\" : { \"$map\" : {\n \"input\" : \"$_PPV\" ,\n \"as\" : \"data\" ,\n \"in\" : { \n \"Time\" : \"$$data.Time\" , \n \"ppv\" : { \"$map\" : {\n \"input\" : \"$$data.ppv\" ,\n \"as\" : \"value\" ,\n \"in\" : { \"$convert\" : {\n \"input\" : \"$$value\" ,\n \"to\" : \"double\" ,\n \"onError\" : { \"$cond\" : [ { \"$eq\" : [ \">100\" , \"$$value\" ] } , 100.0 , 200.0 ] }\n } }\n } }\n }\n } }\n} }\n_convert_Freq = { \"$set\" : {\n \"_Freq_converted\" : { \"$map\" : {\n \"input\" : \"$_Freq\" ,\n \"as\" : \"data\" ,\n \"in\" : {\n \"Time\" : \"$$data.Time\" , \n \"freq\" : { \"$map\" : {\n \"input\" : \"$$data.freq\" ,\n \"as\" : \"value\" ,\n \"in\" : { \"$convert\" : {\n \"input\" : \"$$value\" ,\n \"to\" : \"double\" ,\n \"onError\" : { \"$cond\" : [ { \"$eq\" : [ \">100\" , \"$$value\" ] } , 100.0 , 200.0 ] }\n } }\n } }\n }\n } }\n} }\n_max_ppv = { \"$set\" : {\n \"_max_ppv\" : { \"$map\" : {\n \"input\" : \"$_PPV_converted\" ,\n \"as\" : \"data\" ,\n \"in\" : {\n \"Time\" : \"$$data.Time\" , \n \"ppv\" : \"$$data.ppv\" ,\n \"max\" : { \"$max\" : \"$$data.ppv\" }\n }\n } }\n} }\n_max = { \"$set\" : {\n \"_max\" : { \"$max\" : \"$_max_ppv.max\" }\n} }\n",
"text": "You may skip reading this post as a correct solution is in the next postLast minute note: You may read after COMPLEX AND WRONG SOLUTION BELOW but I found that I err and I could not producein the result set because I was going for a single PPV max per input documents. And it looks you want a PPV max per Row.Cell. This is simpler because a lot of $map can be removed from the solution below once you $unwind.I’ve been contemplating whether having an intermediate step like this would be beneficialI find it always beneficial. But I would start withto simply the data to:Some of the stages from the wrong solution below can be use but in a much simpler form. In most only the inner $map is required now that it is $unwind. The _set_indices stays the same. For example _set_PPV with the double $map becomesI will publish a real solution, after I sleep. Good Night.COMPLEX AND WRONG SOLUTION BELOWThe fact that some times HistogramDataIntervals is an object and at other times it is an array.My first stage would be a $set to unify everything to an array. Something like:Then another set stage which could be combine with the above but I like to separate them. This set just set fields to then index needed to get the ppv and freq values in different arrays.Divide and ConquerNext transformation would use $map on _Data to extract PPV values into a separate array using _ppv_indices for the Row.Cell embedded array. Something like:Doing the same as above but for Freq.My next transformation would be to user $map to convert the values of _Freq.freq and _PPV.ppv to numbers.Ditto for Freq.Now the data (_PPV_converted and _Freq_converted) is in a workable format to get the max ppv.Now that we know the max within which time row we can find the max of the document.The rest gonna be for another day. I am done for today. As a final note a lot of those $set can be done in a single stage. But when developing I find it easy to keep them separate. Optimization can come after accuracy.",
"username": "steevej"
},
{
"code": "_unwind = { \"$unwind\" : \"$HistogramData.HistogramDataIntervals\" }\n_project = { \"$project\" : {\n \"_id\" : 0 ,\n \"time\" : \"$HistogramData.HistogramDataIntervals.Time\" ,\n \"row\" : \"$HistogramData.HistogramDataIntervals.Row.Cell\"\n} }\n_set_PPV = { \"$set\" : {\n \"_PPV\" : { \"$map\" : {\n \"input\" : [ 0 , 2 , 4 ] ,\n \"as\" : \"index\" ,\n \"in\" : { \"$arrayElemAt\" : [ \"$row\" , \"$$index\" ] }\n } }\n} }\n_convert_PPV = { \"$set\" : {\n \"_PPV_converted\" : { \"$map\" : {\n \"input\" : \"$_PPV\" ,\n \"as\" : \"ppv\" ,\n \"in\" : { \"$convert\" : {\n \"input\" : \"$$ppv\" ,\n \"to\" : \"double\"\n } }\n } }\n} }\n_max_ppv = { \"$set\" : {\n \"_max_ppv\" : { \"$max\" : \"$_PPV_converted\" }\n} }\n_index = { \"$set\" : {\n \"_index\" : { \"$indexOfArray\" : [ \"$_PPV_converted\" , \"$_max_ppv\" ] }\n} }\n_freq = { \"$set\" : {\n \"_freq\" : { \"$arrayElemAt\" : [ \"$row\" , { \"$add\" : [ { \"$multiply\" : [ \"$_index\" , 2 ] } , 1 ] } ] }\n} }\n[ _unwind , _project , _set_PPV , _convert_PPV , _max_ppv , _index , _freq ]\n{ time: '2021-05-20 12:22:32',\n row: [ '0.1462', '46.5', '0.0525', '>100', '0.5494', '25.6', '0.5577' ],\n _PPV: [ '0.1462', '0.0525', '0.5494' ],\n _PPV_converted: [ 0.1462, 0.0525, 0.5494 ],\n _max_ppv: 0.5494,\n _index: 2,\n _freq: '25.6' }\n{ time: '2021-05-22 19:15:51',\n row: [ '0.0050', '19.7', '0.0056', '7.4', '0.0050', '28.4', '0.0063' ],\n _PPV: [ '0.0050', '0.0056', '0.0050' ],\n _PPV_converted: [ 0.005, 0.0056, 0.005 ],\n _max_ppv: 0.0056,\n _index: 1,\n _freq: '7.4' }\n{ time: '2021-05-22 19:30:51',\n row: [ '0.0050', '60.2', '0.0044', '20.5', '0.0050', '48.8', '0.0063' ],\n _PPV: [ '0.0050', '0.0044', '0.0050' ],\n _PPV_converted: [ 0.005, 0.0044, 0.005 ],\n _max_ppv: 0.005,\n _index: 0,\n _freq: '60.2' }\n",
"text": "Let’s start again.First 2 stages:Then a simple stage that extracts the PPV values in a separate array.Then a stage to convert the new PPV.Then we get the _max_PPVThe we use $indexOfArray to find the index of _max_ppv inside the _PPV_converted array.My next stage would be to find the corresponding Freq in row which is the index of max ppv + 1All this for a pipeline that looks like:That provides the following:All the data is there (even the intermediate values for debug purpose), what is missing is some cosmetic $project to present the fields time, _max_ppv, and _freq in the format you wish. I will leave that to the reader as an exercise.",
"username": "steevej"
}
]
| How to use conditionals (and more?) to combine and reduce data | 2022-12-05T07:27:46.537Z | How to use conditionals (and more?) to combine and reduce data | 1,462 |
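One possible version of the cosmetic $project stage that the final reply above leaves as an exercise, written in the same style as the other stages; the >100 handling mirrors the $convert/onError trick used earlier in the thread, and the fallback value 200.0 is an assumption:

```javascript
_format = { "$project" : {
    "_id" : 0 ,
    "Time" : "$time" ,
    "Max PPV in/s" : "$_max_ppv" ,
    "Max Freq Hz" : { "$convert" : {
        "input" : "$_freq" ,
        "to" : "double" ,
        // ">100" is plotted as 100, any other unparsable value as 200
        "onError" : { "$cond" : [ { "$eq" : [ ">100" , "$_freq" ] } , 100.0 , 200.0 ] }
    } }
} }

// pipeline: [ _unwind, _project, _set_PPV, _convert_PPV, _max_ppv, _index, _freq, _format ]
```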
null | [
"aggregation",
"atlas-functions"
]
| [
{
"code": "Failed to optimize pipeline :: caused by :: $regexMatch needs 'regex' to be of type string or regexconst regExAdPriceExtractStage = {\n $set: {\n regexAdPrice: {\n $cond: {\n if: {\n $regexMatch: {\n input: '$price_text',\n regex: /pw|week|pcm|month|psm|mth|p\\.c\\.m|pm|p\\/m|p\\.m|sqm|m2/,\n options: 'i',\n },\n },\n then: '$$REMOVE',\n else: {\n $regexFind: {\n input: '$price_text',\n regex: /(?:\\d{1,3}(?:,\\d{3})*|\\d+)(?:\\.\\d{1,2})?(?!\\.?\\d)/,\n options: 'i',\n },\n },\n },\n },\n },\n };\nprice_text/<pattern>/",
"text": "Hi AllI have a large aggregation pipeline which runs perfectly fine when tested in VSCode Playground, however when I transfer this into a App Services Function, which would normally be executed using a scheduled trigger I am getting errors with the use of $regexThe error I am receiving is:Failed to optimize pipeline :: caused by :: $regexMatch needs 'regex' to be of type string or regexThe pipeline stage is as followsIt looks at a price_text field and will either remove the field if some patters match, or it will extract a monetary amount from the field.Reading the docs https://www.mongodb.com/docs/manual/reference/operator/aggregation/regexMatch/ it says that I can use the pattern /<pattern>/ but for some reason this does not run when in an App Services Function.Thanks heaps.",
"username": "Gary_Jarrel"
},
{
"code": "BSON.BSONRegExpconst regExAdPriceExtractStage = {\n $set: {\n regexAdPrice: {\n $cond: {\n if: {\n $regexMatch: {\n input: '$price_text',\n // eslint-disable-next-line prefer-regex-literals\n regex: new BSON.BSONRegExp('pw|week|pcm|month|psm|mth|p\\\\.c\\\\.m|pm|p\\\\/m|p\\\\.m|sqm|m2'),\n options: 'i',\n },\n },\n then: '$$REMOVE',\n else: {\n $regexFind: {\n input: '$price_text',\n // eslint-disable-next-line prefer-regex-literals\n regex: new BSON.BSONRegExp('(?:\\\\d{1,3}(?:,\\\\d{3})*|\\\\d+)(?:\\\\.\\\\d{1,2})?(?!\\\\.?\\\\d)'),\n options: 'i',\n },\n },\n },\n },\n },\n };\n",
"text": "So I think I figured it out. The only way I could get it to work is to use BSON.BSONRegExp as per this note here: https://www.mongodb.com/docs/atlas/app-services/functions/mongodb/read/#evaluate-a-regular-expression. The pipeline stage now being:",
"username": "Gary_Jarrel"
}
]
| Error using $regexMatch and $regexFind in Aggregation Pipeline within Atlas App Sync Function | 2022-12-07T01:28:32.925Z | Error using $regexMatch and $regexFind in Aggregation Pipeline within Atlas App Sync Function | 1,978 |
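For contrast with the workaround in the thread above: the same $set stage keeps working with plain /…/ regex literals when run from mongosh (or the VSCode Playground), so only the App Services Function needs the BSON.BSONRegExp form with doubled backslashes. The collection name listings is a placeholder; the stage itself is taken from the question:

```javascript
// mongosh version of the same stage: regex literals work here, no BSONRegExp needed.
db.listings.aggregate([
  {
    $set: {
      regexAdPrice: {
        $cond: {
          if: {
            $regexMatch: {
              input: "$price_text",
              regex: /pw|week|pcm|month|psm|mth|p\.c\.m|pm|p\/m|p\.m|sqm|m2/,
              options: "i"
            }
          },
          then: "$$REMOVE",
          else: {
            $regexFind: {
              input: "$price_text",
              regex: /(?:\d{1,3}(?:,\d{3})*|\d+)(?:\.\d{1,2})?(?!\.?\d)/,
              options: "i"
            }
          }
        }
      }
    }
  }
])
```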
null | [
"installation"
]
| [
{
"code": "processor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 1599.428\ncache size : 12288 KB\nphysical id : 0\nsiblings : 12\ncore id : 0\ncpu cores : 6\napicid : 0\ninitial apicid : 0\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.24\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 1\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 2931.397\ncache size : 12288 KB\nphysical id : 1\nsiblings : 12\ncore id : 0\ncpu cores : 6\napicid : 32\ninitial apicid : 32\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.35\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 2\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 1599.780\ncache size : 12288 KB\nphysical id : 0\nsiblings : 12\ncore id : 8\ncpu cores : 6\napicid : 16\ninitial apicid : 16\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.24\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 3\nvendor_id : 
GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 1821.824\ncache size : 12288 KB\nphysical id : 1\nsiblings : 12\ncore id : 8\ncpu cores : 6\napicid : 48\ninitial apicid : 48\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.35\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 4\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 1599.641\ncache size : 12288 KB\nphysical id : 0\nsiblings : 12\ncore id : 2\ncpu cores : 6\napicid : 4\ninitial apicid : 4\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.24\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 5\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 1926.516\ncache size : 12288 KB\nphysical id : 1\nsiblings : 12\ncore id : 2\ncpu cores : 6\napicid : 36\ninitial apicid : 36\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.35\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 6\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 
44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 1599.679\ncache size : 12288 KB\nphysical id : 0\nsiblings : 12\ncore id : 10\ncpu cores : 6\napicid : 20\ninitial apicid : 20\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.24\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 7\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 1726.326\ncache size : 12288 KB\nphysical id : 1\nsiblings : 12\ncore id : 10\ncpu cores : 6\napicid : 52\ninitial apicid : 52\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.35\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 8\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 1599.458\ncache size : 12288 KB\nphysical id : 0\nsiblings : 12\ncore id : 1\ncpu cores : 6\napicid : 2\ninitial apicid : 2\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.24\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 9\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 
@ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 2658.628\ncache size : 12288 KB\nphysical id : 1\nsiblings : 12\ncore id : 1\ncpu cores : 6\napicid : 34\ninitial apicid : 34\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.35\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 10\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 1599.634\ncache size : 12288 KB\nphysical id : 0\nsiblings : 12\ncore id : 9\ncpu cores : 6\napicid : 18\ninitial apicid : 18\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.24\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 11\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 2703.682\ncache size : 12288 KB\nphysical id : 1\nsiblings : 12\ncore id : 9\ncpu cores : 6\napicid : 50\ninitial apicid : 50\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.35\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 12\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 
0x1f\ncpu MHz : 1599.808\ncache size : 12288 KB\nphysical id : 0\nsiblings : 12\ncore id : 0\ncpu cores : 6\napicid : 1\ninitial apicid : 1\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.24\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 13\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 2932.234\ncache size : 12288 KB\nphysical id : 1\nsiblings : 12\ncore id : 0\ncpu cores : 6\napicid : 33\ninitial apicid : 33\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.35\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 14\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 1599.051\ncache size : 12288 KB\nphysical id : 0\nsiblings : 12\ncore id : 8\ncpu cores : 6\napicid : 17\ninitial apicid : 17\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.24\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 15\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 2778.503\ncache size : 
12288 KB\nphysical id : 1\nsiblings : 12\ncore id : 8\ncpu cores : 6\napicid : 49\ninitial apicid : 49\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.35\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 16\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 1599.882\ncache size : 12288 KB\nphysical id : 0\nsiblings : 12\ncore id : 2\ncpu cores : 6\napicid : 5\ninitial apicid : 5\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.24\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 17\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 2422.676\ncache size : 12288 KB\nphysical id : 1\nsiblings : 12\ncore id : 2\ncpu cores : 6\napicid : 37\ninitial apicid : 37\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.35\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 18\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 1599.681\ncache size : 12288 KB\nphysical id : 0\nsiblings : 
12\ncore id : 10\ncpu cores : 6\napicid : 21\ninitial apicid : 21\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.24\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 19\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 2009.673\ncache size : 12288 KB\nphysical id : 1\nsiblings : 12\ncore id : 10\ncpu cores : 6\napicid : 53\ninitial apicid : 53\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.35\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 20\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 1599.583\ncache size : 12288 KB\nphysical id : 0\nsiblings : 12\ncore id : 1\ncpu cores : 6\napicid : 3\ninitial apicid : 3\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.24\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 21\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 2809.128\ncache size : 12288 KB\nphysical id : 1\nsiblings : 12\ncore id : 1\ncpu cores : 6\napicid : 
35\ninitial apicid : 35\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.35\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 22\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 1600.054\ncache size : 12288 KB\nphysical id : 0\nsiblings : 12\ncore id : 9\ncpu cores : 6\napicid : 19\ninitial apicid : 19\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.24\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 23\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 44\nmodel name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz\nstepping : 2\nmicrocode : 0x1f\ncpu MHz : 2902.321\ncache size : 12288 KB\nphysical id : 1\nsiblings : 12\ncore id : 9\ncpu cores : 6\napicid : 51\ninitial apicid : 51\nfpu : yes\nfpu_exception : yes\ncpuid level : 11\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d\nvmx flags : vnmi preemption_timer invvpid ept_x_only ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips : 5865.35\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n",
"text": "Hey, I’m trying to start MongoDB and I’m getting a error “Illegal instruction (core dumped)”, It has worked before on this machine but recently stopped working after a CPU & Ram UpgradeThe CPU Info is below",
"username": "DarkerInk"
},
{
"code": "",
"text": "Have you tried a re-install?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Looks like you potentially got a CPU downgrade.That processor is in the Westmere generation it does not support the avx feature which is required for MongoDB 5.0+If you don’t have a backup or another member in a replica set this puts you in a difficult position.",
"username": "chris"
},
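A quick way to confirm whether a host exposes the AVX instructions that MongoDB 5.0+ needs (hypothetical commands run on the affected machine; AVX is simply absent from the flags lines in the cpuinfo dump above):

    grep -o -m1 avx /proc/cpuinfo   # prints "avx" once if the CPU supports it, nothing otherwise
    lscpu | grep -i avx             # alternative view of the same CPU flags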
{
"code": "",
"text": "Ah, I see. I’ll just move my data to another server that supports AVX. Thanks for the response",
"username": "DarkerInk"
},
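One hedged way to move the data (hostnames and paths are placeholders): copy the dbPath to an AVX-capable machine and start the same mongod version there, then optionally reseed a clean deployment with a dump/restore:

    # on the old machine, with mongod stopped
    rsync -a /var/lib/mongodb/ newhost:/var/lib/mongodb/
    # on the new (AVX-capable) machine
    mongod --dbpath /var/lib/mongodb --port 27017
    # optional: dump from the new host and restore elsewhere
    mongodump --host newhost:27017 --out /backups/dump
    mongorestore --host otherhost:27017 /backups/dump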
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Cannot Start MongoDB | 2022-12-06T14:32:33.282Z | Cannot Start MongoDB | 2,226 |
null | [
"document-versioning"
]
| [
{
"code": "",
"text": "I’m aware of the document versioning pattern when it comes to versioning documents but I can’t seem to find anything official on the MongoDB resources regarding document versioning lots of large documents with frequent changes.I’ve had a look at the data modelling course on the university site and it doesn’t cover it.Are there any patterns that can be used to version lots of large documents with frequent changes? Or is MongoDB a bad use case for this?",
"username": "Imran_Azad"
},
{
"code": "",
"text": "Hi @Imran_Azad ,What do you mean with frequent changes? Does each change create a new version document?How do you consider keeping the versions? It sounds that if the document is large and change is frequent its best to not embed old versions but create a new document…Can you show case the use case in more detail.Thanks",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Be aware that an updated document is completely written back to permanent storage, even the unmodified values.So if your document is large and is frequently updated, you might suffer write starvation. In this case, a variation of the outlier pattern might be appropriated if only a few fields of the large documents are frequently updated. You would keep the stable fields in the main large document and store the frequent modifications in a separate outlier document or documents. This would reduce the write starvation since the frequently updated and written parts are much smaller that the stable main large document.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @Pavel_DuchovnyThe specific problem is we have a document with multiple fields and we need to track the changes made to each field e.g.Does that help clarify?",
"username": "Imran_Azad"
},
{
"code": "{\n field1 : { latest: \"x\" , prev : [ \"y\", \"z\" ]}\n filed2 : { latest : \"a\" , prev : [ ] }\n}\n",
"text": "@Imran_Azad ,Will the following design work:if you know that some fields are immutable you can just keep them as is, otherwise consider the above module.If that cause the documents grow to very large sizes, I will need to understand the query pattern to suggested a better module.For example:Ty",
"username": "Pavel_Duchovny"
},
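A minimal mongosh sketch of how an update could maintain that latest/prev shape (the collection name, itemId and newValue are placeholders, not part of the original posts); it moves the current value into prev and sets the new latest in a single pipeline update:

    db.items.updateOne(
      { _id: itemId, "field1.latest": { $ne: newValue } },   // skip no-op updates
      [
        { $set: {
            "field1.prev":   { $concatArrays: [ "$field1.prev", [ "$field1.latest" ] ] },
            "field1.latest": newValue
        } }
      ]
    )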
{
"code": "",
"text": "@Pavel_Duchovny Thank you so much for this. I’ve got further clarification of the requirements. Essentially, each time a change is made to a document an entire snapshot needs to be taken of the previous version not the tracking of individual fields as I said previously. There will always be a “main” version of a document and all it’s previous versions each time it changed. There’s a possibility any aspect of the document could be changed.What design would you recommend for this?",
"username": "Imran_Azad"
},
{
"code": "{\n _id : some generic unique,\n ItemId : \"xyz\" ,\n timestamp : latest timestamp\n ...\n}\n{ itemId: 1, timestamp: -1}collection.findOne({itemId : \"xyz\"}).sort({timestamp : -1})\n",
"text": "Hi @Imran_Azad ,In that case it sounds like you may have the benefit of splitting the data into a “latest” collection and a history collection.In the latest collection you will store the most recent version and have it queried and indexed by id.While the history collection will receive the privious state. So essentially an update is a “transaction” of delete => insert new with same _id => insert old.If updates are so frequent and the critical path is the write and not the read you may consider the following alternative:Insert a new version into the main collection keeping the history in the same one (you may offload the history as a batch process)In this design I Invision the collection as follows:The index for lookup by id is now { itemId: 1, timestamp: -1}When you lookup by id you do:This will get only latest version.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
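A hedged sketch of the single-collection variant described above (collection and field values are assumptions): index by itemId/timestamp, insert every new version, and read back only the newest one:

    db.items.createIndex({ itemId: 1, timestamp: -1 })
    db.items.insertOne({ itemId: "xyz", timestamp: new Date(), /* ...rest of the document... */ })
    db.items.find({ itemId: "xyz" }).sort({ timestamp: -1 }).limit(1)   // latest version only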
{
"code": "",
"text": "Wow! Thank you so much, I really appreciate this Pavel!",
"username": "Imran_Azad"
}
]
| Document Versioning Large Documents With Frequent Changes | 2022-12-02T10:52:39.965Z | Document Versioning Large Documents With Frequent Changes | 2,546 |
null | [
"aggregation"
]
| [
{
"code": "{\n index: 'userNameCity',\n compound:{\n \n filter:{\n \n equals: {\nvalue: ObjectId [ '6271f2bb79cd80194c81f631', '62cf35ce62effdc429efa9b0' ],\n path: '_id'\n },\n \n }\n ,\n",
"text": "The index for the path (_id) is of datatype ObjectID\nI want to filter by an array of objectId.\nThe above code works for single objectId but does not work for array\nNote:\nThe array will be a dynamic listCan you please suggest",
"username": "Manoranjan_Bhol"
},
{
"code": "",
"text": "The syntaxObjectId [ ‘6271f2bb79cd80194c81f631’, ‘62cf35ce62effdc429efa9b0’ ]is most likely wrong.Try [ObjectId(‘627…631’),ObjectIs(‘62c…9b0’)]",
"username": "steevej"
},
{
"code": "",
"text": "\nimage1553×705 38.9 KB\n@steevej Thanks for the suggestion however it does not work",
"username": "Manoranjan_Bhol"
},
{
"code": "",
"text": "Hi @Manoranjan_Bhol ,Thanks for your response.Please note that, values in the equals filter accept either boolean or objectId as a parameter. Please refer to the documentation. Hence you should mention either boolean or ObjectId value only but cannot mention Array or string there.Thanks,\nDarshan",
"username": "DarshanJayarama"
},
{
"code": "",
"text": "Take a look at https://www.mongodb.com/docs/atlas/atlas-search/compound/, may be you can use it with filter to achieve what you want to do.",
"username": "steevej"
},
{
"code": "",
"text": "@steevej @DarshanJayarama\nThank you for the help againMy goal is to",
"username": "Manoranjan_Bhol"
},
{
"code": "_idtype: objectIdstring",
"text": "Hi @Manoranjan_Bhol ,Thanks for your reply.\nPlease note that, in index definition If you have created _id with type: objectId then in the equals you can not have more than 1 objectId as that accept either only single objectId in the query.Further, I have noticed you are mentioning 2 index in the query, that create ambiguity, Do you mind sharing the index definition of this string index.Thanks,\nDarshan",
"username": "DarshanJayarama"
},
{
"code": "",
"text": "@DarshanJayarama\n\nimage1241×574 17.3 KB\nPlease refer to the index defined",
"username": "Manoranjan_Bhol"
},
{
"code": "",
"text": "So any thoughts? Or workarounds on how to implement functionality like described above? Is the only way is use Compound with hundreds of equals operators??",
"username": "Nikita_Prokopev"
}
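One possible workaround sketch for filtering on several ObjectIds with that index (the collection name is a placeholder; whether a dedicated array operator is available depends on your Atlas Search version): nest a compound.should of equals clauses inside the filter and require at least one to match:

    db.users.aggregate([
      { $search: {
          index: "userNameCity",
          compound: {
            filter: [ {
              compound: {
                should: [
                  { equals: { path: "_id", value: ObjectId("6271f2bb79cd80194c81f631") } },
                  { equals: { path: "_id", value: ObjectId("62cf35ce62effdc429efa9b0") } }
                ],
                minimumShouldMatch: 1
              }
            } ]
          }
      } }
    ])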
]
| Compound Query filter by multiple objectId | 2022-08-13T11:18:27.400Z | Compound Query filter by multiple objectId | 5,247 |
null | [
"aggregation",
"queries",
"node-js",
"data-modeling",
"mongoose-odm"
]
| [
{
"code": "",
"text": "Hello, I got hired (non paid, basically I got chosen and I love what I do) to do a website used to browse logs for a videogame, so currently I have multiple documents each containing different fields, each document is a log object (since they contain different fields I have to create one document for each type of log), my question is simple but the answer is quite the opposite I’m afraid.I want to be able to, query all documents and sort every single result based on a field that is a timestamp, so I know I can query each collection and sort by timestamp, the problem is I cannot do a sort for all of the collections, I’ve tried google and came across some threads about using JS’s sort function to do it but I never get accurate results it’s just a mess (also mongoose .sort is slower than JS’s sort??)The other problem comes with filtering data, I can easily regex for input data but not type data, without having a complete mess on the backend to check if filter contains value then query this collection, which would lead to ridiculous wait times when trying to gather data, I’m guessing this is not the first time this problem has occurred? How do big companies do it? Am I missing something critical? Also if there’s a better alternative, which? Thanks for your time!",
"username": "lexi_hvh"
},
{
"code": "{\n logType : \"one\",\n \"game\" : \"one\",\n \"playerDetails\" : {\n name: \"...\"\n }\n timestamp : 11111\n...\n},\n{\n logType : \"two\",\n \"game\" : \"two\",\n \"races\" : [ ... ],\n timestamp : 222222\n...\n},\ndb.logs.find({}).sort({timestamp : -1});\n",
"text": "Hi @lexi_hvh ,I am a bit confused by the paragraphs, in the first one you mentioned that each document in the “logs” collection is different as each log may have different set of fields.However, later on you look intoquery each collection and sort by timestamp, the problem is I cannot do a sort for all of the collectionsSo do you store all logs in a single collection? Or you have a collection per log type?All logs can sit on the same collection and have different fields in one collection as long as the predicates are common:No we can query all logs and sort by timestamp:Does that help your questions?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
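A small sketch of the filtering side under that single-collection model (the index is an assumption; field names follow the example documents above):

    db.logs.createIndex({ logType: 1, timestamp: -1 })
    db.logs.find({ logType: "one" }).sort({ timestamp: -1 })                 // one log type, newest first
    db.logs.find({ "playerDetails.name": /abc/i }).sort({ timestamp: -1 })   // regex filter on a nested field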
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Need a hand to help me with some logic | 2022-12-05T17:07:35.357Z | Need a hand to help me with some logic | 870 |
null | [
"aggregation",
"queries",
"node-js"
]
| [
{
"code": "filterByDate(startDate,endDate){\n startDate = new Date('2022-01-01');\n endDate = new Date();\nreturn this.dataModel.aggregate([\n{\n$match: { $gte:startDate,$lte:endDate }\n}\n])\n}\n\n",
"text": "I want to initialize variable in the beginning by date than change the variable with the date comes from the request for example :I got only data with matched date initialized in startDate endDate\nIf I console the passed data in the request it shows what I put in startDate endDate\nany help please",
"username": "skander_lassoued"
},
{
"code": "",
"text": "Can you explain what is getting returned, or the error you get, when this query runs? I’m not a NodeJS developer, but this doesn’t look like a valid query to me. From the looks of things I would think you would get an error as it looks like you’re missing a field to match on.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Your match statement needs the field names.",
"username": "Steve_Hand1"
},
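Putting the two remarks together, a hedged corrected version of the pipeline (the field name createdAt is an assumption, and the defaults are only applied when no dates are passed in):

    filterByDate(startDate, endDate) {
      startDate = startDate ?? new Date('2022-01-01');   // keep the caller's value when provided
      endDate = endDate ?? new Date();
      return this.dataModel.aggregate([
        { $match: { createdAt: { $gte: startDate, $lte: endDate } } }
      ]);
    }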
{
"code": "",
"text": "I solved this by initiate variables in front side",
"username": "skander_lassoued"
}
]
| $gte and $lte doesn't with variable | 2022-10-04T20:08:22.875Z | $gte and $lte doesn’t with variable | 2,778 |
null | [
"server",
"security",
"configuration"
]
| [
{
"code": " net:\n port: 27017\n bindIp: 0.0.0.0\n tls:\n mode: requireTLS\n certificateSelector: thumbprint=0f******************************************9\n",
"text": "trying to configure mongoDB server to use TLS on Windows Server 2019\nconfiguration file:get Error:code:140 InvalidSSLConfiguration Could not read private key attached to the selected certificate, ensure it exists and check the private key permissionsand my cert says there is a private key detected within.Question:What to do next? How to solve the problem?",
"username": "Xi_Chen"
},
{
"code": "",
"text": "Hello\nJust if somebody else stumbles over this issue: You have to make sure, that your mongodb service is run with a user who actually has permissions to the key in question",
"username": "Andreas_Dim"
}
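A hedged PowerShell sketch for granting that access (the thumbprint and service account are placeholders; this path covers CSP/RSA keys, while for CNG keys the certlm.msc > All Tasks > Manage Private Keys dialog is the usual route):

    $cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Thumbprint -eq '0F...9' }
    $keyName = $cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName
    icacls "$env:ProgramData\Microsoft\Crypto\RSA\MachineKeys\$keyName" /grant 'NT AUTHORITY\NETWORK SERVICE:R'   # use the account your mongod service actually runs as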
]
| [Windows Server | need help on TLS Config] With Error in log on starting service - code:140 InvalidSSLConfiguration Could not read private key attached to the selected certificate, ensure it exists and check the private key permissions | 2021-11-09T05:23:19.892Z | [Windows Server | need help on TLS Config] With Error in log on starting service - code:140 InvalidSSLConfiguration Could not read private key attached to the selected certificate, ensure it exists and check the private key permissions | 3,920 |
null | [
"transactions"
]
| [
{
"code": "",
"text": "New to Mongo Atlas. Is there a way to set the auto scaling (say M20 to M30) to happen in less than an hour? We have a transactional based system that uses Mongo, and an hour of CPU issues would be an issue for the system. Ideally I’d like to set something that if it spikes to 95% say 3 times in 5 minutes it scales up, then if it’s quiet for 2 hours is scales back down.",
"username": "Dennis_Herbert"
},
{
"code": "",
"text": "Hi Dennis! I’m a product manager on the Atlas team here at MongoDB. First of all, welcome to the platform - we’re glad to have you. For context, there is no way to configure how long it takes to auto-scale because Atlas handles all the resources on the backend and scales up as quickly as possible. Our system checks for 75%+ CPU and RAM utilization within an hour before scaling up, and 50%- CPU and RAM utilization within 24 hours before scaling down.You can read more about the specifics of how auto-scaling works here: https://www.mongodb.com/docs/atlas/cluster-autoscaling/However, I’d love to better understand your use case for your application as we’re always working on improving our auto-scaling functionality. If you’re open to it, send me an email at [email protected] so we can learn more and ensure your feedback gets surfaced.",
"username": "Lori_Berenberg"
},
{
"code": "",
"text": "Hi Lori,We are using Mongo for the back end of a OLTP system that is 24/7. If the system needs a resource increase, waiting an hour to get those resources via automation could mean the system is overwhelmed until it scales up, so having the ability to set a faster scale up time would be beneficial in our scenario.",
"username": "Dennis_Herbert"
},
{
"code": "",
"text": "Thanks Dennis! Would you say that the time to trigger is more important than the actual time it takes to scale? And what is your tolerance for unnecessary scaling as a result of false bursts in activity? (i.e. a 5 minute burst that immediately goes back down within 10 mins)",
"username": "Lori_Berenberg"
},
{
"code": "",
"text": "I’d think that for OLTP if it sees a sustained bust of X CPU or memory % (settable, but I’d think 85% would be what we would set) 2 minutes start the scaling and maintain it until there is an hour of activity that is reduced enough to trigger a scale down.",
"username": "Dennis_Herbert"
},
{
"code": "",
"text": "Hi. No only I couldn’t agree more with the request to scale up way faster than an hour but also I’m having an issue where “steal cpu” is compromising the stability of our system and Auto-Scaling functionality is not actually scaling up properly, so the cluster simply stays unresponsive.\nWhat we need is the ability to configure custom rules based on alerts for auto-scaling. Just like in AWS auto-scaling.",
"username": "Tomas_Gonzalez1"
},
{
"code": "",
"text": "Hi Dennis and Tomas,What we need is the ability to configure custom rules based on alerts for auto-scaling. Just like in AWS auto-scaling.Firstly, thank you for providing your feedback regarding the current auto-scaling situations you are experiencing for your environments.Although the ability to configure custom auto-scaling policies is not available, there is currently a feedback post regarding the configuration of the duration for how the auto-scaling is evaluated / monitored in which you can vote for but I do understand perhaps you would also like the ability to also change the actual CPU/Memory utilisation percentage as well.If you would like, you could also create another feedback post specifically regarding customisable averages for the CPU/Memory percentages (rather than the duration) over the rolling period where others and yourself can vote for.I have raised this feedback internally as well in hopes that there are some improvements in future for auto-scaling.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Auto scaling question | 2022-07-05T13:29:29.927Z | Auto scaling question | 5,813 |
[
"compass",
"atlas-cluster"
]
| [
{
"code": "",
"text": "I am using the following connection string mongodb+srv://m001-student:[email protected]/test\nimage1920×1074 84.3 KB\n",
"username": "Maryana_Holubinska"
},
{
"code": "",
"text": "Hello @Maryana_Holubinska ,Welcome to The MongoDB Community Forums! Please go through below documentation and make sure that your Atlas Cluster satisfies all the required prerequisites. The most common reason for such error is not adding your IP address to your Atlas Project’s IP access list.Regards,\nTarun",
"username": "Tarun_Gaur"
}
]
| When trying to connect to Mongo DB via Compass, I am getting "connection <monitor> to 52.57.108.148:27017 closed" | 2022-12-06T11:02:55.252Z | When trying to connect to Mongo DB via Compass, I am getting “connection <monitor> to 52.57.108.148:27017 closed” | 3,974 |
|
null | []
| [
{
"code": "",
"text": "i facing one issue with logs its generating too many files in few mints and consume gb’s in one day so i want to reduce the size of logs file i just want one file with error logs only not informational logs\ncan someone please help on this really appericated!!",
"username": "Nabeel_Qadri"
},
{
"code": "systemLog.quietsystemLog.quiet",
"text": "You can configure mongod to run with systemLog.quiet systemLog.quiet is not recommended for production systems as it may make tracking problems during particular connections much more difficult.Investigating centralised logging systems or shipping archived logs off system may server you better in the long term.",
"username": "chris"
}
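If you do decide to try it, a minimal mongod.conf sketch (the log path is a placeholder) would be:

    systemLog:
      destination: file
      path: /var/log/mongodb/mongod.log
      logAppend: true
      quiet: true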
]
| Write Error Logs only in mongodb.log | 2022-12-05T10:16:32.835Z | Write Error Logs only in mongodb.log | 1,343 |
null | [
"java"
]
| [
{
"code": "waitqueuemultipleConnection string contains unsupported option 'waitqueuemultiple'.",
"text": "Hi,We’ve just upgraded the Java Driver (mongodb-driver-sync & bson) from 4.2.3 to 4.7.0. Within the connection string we have the option waitqueuemultiple. After the driver upgrade while the service is starting up, we can see on the log console a warning message which is like the following:Connection string contains unsupported option 'waitqueuemultiple'.The service is still working, but it is working under non performance scenarios, I’d like to know which alternative and why this options was removed. I tried to investigate within the source code where is this validation made, but there is nothing about this on mongodb-driver-core-4.7.0 nor mongodb-driver-sync-4.7.0Thanks",
"username": "Miguel_Angel_Alcantara_Ayre"
},
{
"code": "",
"text": "I’m not sure why you appear to be seeing a difference between 4.2 and 4.7, as support for waitQueueMultiple was removed in the 4.0 major release. See Remove connection pool wait queue event class and properties · mongodb/mongo-java-driver@564a468 · GitHub. Can you double check the version of the driver that you upgraded from?",
"username": "Jeffrey_Yemin"
},
{
"code": "3.10 (development) -> 4.2 (upgrading Spring version) -> 4.7 (performance issues)waitqueuemultiple",
"text": "I see, it was my fault because my working root branch already had the version 4.2 (this upgrade is still on process) my branch was born from this previous one, and upgraded java driver to 4.7 (upgraded for performance issues), but the current version (on development branch) is 3.10.2. So basically is like3.10 (development) -> 4.2 (upgrading Spring version) -> 4.7 (performance issues)So, it seems nobody noticed that warning message for the waitqueuemultiple while doing the 4.2 upgrade. I wasn´t able to see any recommendation regarding this option removal. Is there any at all? Or a brief explanation of why it was removed?Thanks",
"username": "Miguel_Angel_Alcantara_Ayre"
},
{
"code": "",
"text": "It is mentioned in the 4.0 upgrading guide. There is no justification given, but I can tell you that the large majority of users have upgraded to a 4.x driver release by now and we haven’t received a single report from our users that this has caused a problem in any production application.If you see something, we would certainly like to hear about it.",
"username": "Jeffrey_Yemin"
},
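For sizing the connection pool without waitQueueMultiple, a hedged Java 4.x sketch (the numbers are placeholders, not recommendations) uses maxSize and maxWaitTime on the pool settings instead:

    import com.mongodb.ConnectionString;
    import com.mongodb.MongoClientSettings;
    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import java.util.concurrent.TimeUnit;

    MongoClientSettings settings = MongoClientSettings.builder()
        .applyConnectionString(new ConnectionString("mongodb://host:27017"))
        .applyToConnectionPoolSettings(b -> b
            .maxSize(200)                         // upper bound on pooled connections
            .maxWaitTime(2, TimeUnit.MINUTES))    // how long a thread waits for a free connection
        .build();
    MongoClient client = MongoClients.create(settings);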
{
"code": "",
"text": "Thanks a lot Jeffrey for your support. We have plans in a near future to run performance test, after that I can update using this same thread if all went smooth or not.",
"username": "Miguel_Angel_Alcantara_Ayre"
}
]
| Deprecated Connection string option: waitqueuemultiple | 2022-12-06T16:10:58.111Z | Deprecated Connection string option: waitqueuemultiple | 1,703 |
null | [
"connecting",
"atlas-cluster",
"atlas"
]
| [
{
"code": "aquino.pv2hrqc.mongodb.netnslookup aquino.pv2hrqc.mongodb.net",
"text": "Hello,I am having some problems connecting to the database from a server hosted at cloud and they asked me about the MongoDB server IP address so they can check more. Is there I way I can find out which is the IP address of the server where my DB is located?Another question is the address of my cluster:aquino.pv2hrqc.mongodb.net , I can connect from my local machine to it.nslookup aquino.pv2hrqc.mongodb.netWhy this DNS is not resolved. I tried to nslookup it and it doesn’t return anything.I would like to better understand how it works, I have other databases in other clouds (RDS PostgreSQL, Redis, ScaledGrid) All endpoints that are passed to me resolve.Atlas endpoint does not resolve the name.Best Regards,",
"username": "oliveira"
},
{
"code": ";QUESTION\naquino.pv2hrqc.mongodb.net. IN ANY\n;ANSWER\naquino.pv2hrqc.mongodb.net. 60 IN TXT \"authSource=admin&replicaSet=atlas-ljmqsp-shard-0\"\naquino.pv2hrqc.mongodb.net. 60 IN SRV 0 0 27017 ac-tfd8xpu-shard-00-00.pv2hrqc.mongodb.net.\naquino.pv2hrqc.mongodb.net. 60 IN SRV 0 0 27017 ac-tfd8xpu-shard-00-01.pv2hrqc.mongodb.net.\naquino.pv2hrqc.mongodb.net. 60 IN SRV 0 0 27017 ac-tfd8xpu-shard-00-02.pv2hrqc.mongodb.net.\n",
"text": "Why this DNS is not resolved.It is. See:But you need to query something else than A records. Read about it at SRV record - Wikipedia.",
"username": "steevej"
},
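For reference, the records behind a mongodb+srv hostname can be pulled with either tool; the SRV name is the cluster host prefixed with _mongodb._tcp. (cluster name taken from the question above):

    nslookup -type=SRV _mongodb._tcp.aquino.pv2hrqc.mongodb.net
    nslookup -type=TXT aquino.pv2hrqc.mongodb.net
    dig +short SRV _mongodb._tcp.aquino.pv2hrqc.mongodb.net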
{
"code": "",
"text": "Thanks for the information. It was what I needed.Good to know, so Atlas Works with DNS SRV. I thought it was Type A.One other question:If I have an IP from Atlas, can I know which name it resolves to?For example, this IP is from an Atlas cluster 35.180.52.239, but I would like to know which DNS it resolves to.I say this because I have traffic monitoring on my network and the DNS is not shown, only the outgoing IPs and the amount of traffic that comes from each one.Best Regards,",
"username": "oliveira"
},
{
"code": "",
"text": "The tool you are already using, nslookup, can do that.You may also use Dig (DNS lookup).",
"username": "steevej"
},
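For mapping an outgoing IP back to a name, a reverse (PTR) lookup is the usual sketch, keeping in mind that cloud provider IPs do not always carry a PTR record pointing at the *.mongodb.net hostname:

    nslookup 35.180.52.239
    dig +short -x 35.180.52.239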
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| I need to know the IP of my cluster, Atlas does not resolve the domain name | 2022-12-06T13:28:41.252Z | I need to know the IP of my cluster, Atlas does not resolve the domain name | 2,841 |
null | [
"production",
"golang"
]
| [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to release version 1.9.4 of the MongoDB Go Driver.This release contains a bugfix for heartbeat buildup with streaming protocol when the Go driver process is paused in an FAAS environment (e.g. AWS Lambda). For more information please see the 1.9.4 release notes.You can obtain the driver source from GitHub under the v1.9.4 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,The Go Driver Team",
"username": "benjirewis"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Go Driver 1.9.4 Released | 2022-12-06T19:53:13.503Z | MongoDB Go Driver 1.9.4 Released | 1,442 |
null | [
"monitoring"
]
| [
{
"code": "",
"text": "Hi and good day,I’m kind of new with MongoDB , i have noticed on version 4 and higher , storage size is smaller than the datasize as shown with db.stats command. i found out that it is because of the Wiredtiger engine compression being done on the storage size…My question is, Is it possible to get the actual size of the storage without disabling the engine ? I’m using it for our reporting of database status.",
"username": "Daniel_Inciong"
},
{
"code": "db.adminCommand('listDatabases').totalSizedb.adminCommand('listDatabases').databases.forEach(\n function(x){\n d = db.getSiblingDB(x.name).stats(1024**3); \n print(tojson(\n {\n \"db\": d.db, \n \"size\": d.dataSize,\n \"sizeOnDisk\": d.storageSize\n }\n ))\n }\n)\n",
"text": "From the mongo shell.Total size on disk (bytes):\ndb.adminCommand('listDatabases').totalSizeData Size and Size on Disk for each Database (GiB) (Does not include Indexes):",
"username": "chris"
},
{
"code": "",
"text": "Thanks for the feedback , will try this out and revert back ",
"username": "Daniel_Inciong"
},
{
"code": "",
"text": "hi chris , i’ve done the said above and compared the result from db.status command , i think its also the same result. storage compression is still applied on the storage size.",
"username": "Daniel_Inciong"
},
{
"code": "",
"text": "What are you asking for in particular?The snippet I provided gives both Data Size and Storage Size.",
"username": "chris"
},
{
"code": "",
"text": "yes the script provides the storage and data size but the storage size i’m getting is compressed base from my comparison with the stats provided using db.stats command.is there a way to get the storage size that is uncompressed?",
"username": "Daniel_Inciong"
},
{
"code": "",
"text": "The uncompressed size is the data size.",
"username": "chris"
},
{
"code": "",
"text": "hi chris ,Sorry for the very late feedback… already got it… thank you",
"username": "Daniel_Inciong"
},
{
"code": "",
"text": "Hi Chris, How do I get actual storage size of indexes?",
"username": "Lakshmi_Jeedigunta"
}
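For later readers, a short mongosh sketch for the index side of the same question ("collection" is a placeholder name; like storageSize, these figures are as stored on disk):

    db.stats(1024*1024).indexSize           // total index size for the current database, in MiB
    db.collection.stats().totalIndexSize    // index size for one collection, in bytes
    db.collection.stats().indexSizes        // size of each individual index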
]
| Getting actual storage size | 2021-02-09T04:50:27.721Z | Getting actual storage size | 5,234 |
null | [
"kafka-connector"
]
| [
{
"code": "copy.existing{\n \"name\":\"mongodb-natvie-source-connector\",\n \"config\":{\n \"mongo.errors.deadletterqueue.topic.name\":\"mongodb.dev-api.dlq\",\n \"connector.class\":\"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"errors.log.include.messages\":\"true\",\n \"mongo.errors.log.enable\":\"true\",\n \"publish.full.document.only\":\"true\",\n \"tasks.max\":\"50\",\n \"batch.size\":\"100\",\n \"copy.existing.namespace.regex\":\"collection_x_namespace\",\n \"heartbeat.interval.ms\":\"3000\",\n \"collection\":\"collection_y_namespace\",\n \"mongo.errors.tolerance\":\"none\",\n \"key.converter.schemas.enable\":\"false\",\n \"database\":\"dev-api\",\n \"copy.existing.max.threads\":\"50\",\n \"poll.await.time.ms\":\"100\",\n \"connection.uri\":\"***\",\n \"value.converter.schemas.enable\":\"false\",\n \"name\":\"mongodb-natvie-source-connector\",\n \"copy.existing\":\"true\",\n \"value.converter\":\"org.apache.kafka.connect.json.JsonConverter\",\n \"errors.log.enable\":\"true\",\n \"key.converter\":\"org.apache.kafka.connect.json.JsonConverter\",\n \"poll.max.batch.size\":\"1000\"\n },\n \"tasks\":[\n {\n \"connector\":\"mongodb-natvie-source-connector\",\n \"task\":0\n }\n ],\n \"type\":\"source\"\n}\ncopy.existing.namespace.regextopic not found",
"text": "Hi, I’m running the kafka source connector version 1.8.1 on strimzi connect. However, the copy.existing doesn’t work.\nHere is my configurationWhen I run the connector and check the metrics the getmore_commands_successful(copy.existing) metric is equal to 2 but there’s nothing written to the topic. The CDC works as expected.There’s also no error. Is there anything missing or wrong in the configuration?I’ve also tried running this with regex copy.existing.namespace.regex: “dev-api.(collection_x|collection_y)”\nand a topic prefix, none of them worksI also tried replacing the copy.existing.namespace.regex with the collection whose related topic doesn’t exist. I expected that it’d throw a topic not found exception but it didn’t. It seems that these configurations are ignored altogether",
"username": "Tara_Firoozian"
},
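One hedged thing to check in the configuration above (names taken from the post): copy.existing.namespace.regex is matched against the full database.collection namespace being watched, so a regex of collection_x_namespace will never match dev-api.collection_y_namespace, and nothing is selected for the copy phase. A matching pair would look like:

    "database": "dev-api",
    "collection": "collection_y_namespace",
    "copy.existing": "true",
    "copy.existing.namespace.regex": "dev-api\\.collection_y_namespace"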
{
"code": "",
"text": "I’m encountering similar behavior, though using the connector through Confluent Cloud. Commenting here to subscribe to replies.",
"username": "Tarek_ZIADE"
}
]
| Kafka source connector copy.existing doesn't work | 2022-11-28T16:39:23.934Z | Kafka source connector copy.existing doesn’t work | 1,953 |
null | [
"aggregation"
]
| [
{
"code": "db.sightings.aggregate([\n {\n $sort: {\n 'location.latitude': -1\n }\n }, {\n $limit: 4\n }\n])\n",
"text": "Hello everyone.\nIn the lab practice there is a wrong solved code. The solved code in the section “Review and Solved Code” is:but in the collection sightings, the field “location” is an array with two elements, “type” and “coordinates”. The location.latitude doesn’t exist.\nIt is possible to check?\nBest regards\nEnrico",
"username": "Enrico_Sarais"
},
{
"code": "db.sightings.find().count()",
"text": "I agree something is broken because there are only 6 documents and none has a “latitude/longitude” distinction: db.sightings.find().count()It is either:",
"username": "Yilmaz_Durmaz"
},
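If the intent of the lab really is to sort by latitude, one hedged sketch against the GeoJSON shape (latitude is the second element of coordinates, so this assumes positional dot notation matches what the lab expects):

    db.sightings.aggregate([
      { $sort: { "location.coordinates.1": -1 } },   // coordinates are stored as [longitude, latitude]
      { $limit: 4 }
    ])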
{
"code": "",
"text": "This topic was automatically closed after 60 days. New replies are no longer allowed.",
"username": "system"
}
]
| Introduction to MongoDB-MongoDB Aggregation-Lesson3 | 2022-12-06T08:41:52.395Z | Introduction to MongoDB-MongoDB Aggregation-Lesson3 | 1,415 |
[
"node-js",
"mongoose-odm",
"connecting",
"atlas-cluster",
"server"
]
| [
{
"code": "`const mongoose = require('mongoose');\nconst mySecret = process.env['mongoUrl']\nconst intialDbConnection = async () => {\n try {\n await mongoose.connect(mySecret, {\n useNewUrlParser: true,\n useUnifiedTopology: true\n })\n console.log(\"db connected\")\n \n }\n catch (error) {\n console.error(error);\n }\n}\n\nmodule.exports = { intialDbConnection }`\nMongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/\n at NativeConnection.Connection.openUri (/home/runner/store-cipher-backend-1/node_modules/mongoose/lib/connection.js:824:32)\n at /home/runner/store-cipher-backend-1/node_modules/mongoose/lib/index.js:380:10\n at /home/runner/store-cipher-backend-1/node_modules/mongoose/lib/helpers/promiseOrCallback.js:32:5\n at new Promise (<anonymous>)\n at promiseOrCallback (/home/runner/store-cipher-backend-1/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:10)\n at Mongoose._promiseOrCallback (/home/runner/store-cipher-backend-1/node_modules/mongoose/lib/index.js:1225:10)\n at Mongoose.connect (/home/runner/store-cipher-backend-1/node_modules/mongoose/lib/index.js:379:20)\n at intialDbConnection (/home/runner/store-cipher-backend-1/db/db.connect.js:5:20)\n at Object.<anonymous> (/home/runner/store-cipher-backend-1/index.js:15:1) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'cluster0-shard-00-00.fkvic.mongodb.net:27017' => [ServerDescription],\n 'cluster0-shard-00-01.fkvic.mongodb.net:27017' => [ServerDescription],\n 'cluster0-shard-00-02.fkvic.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-on2j4f-shard-0',\n logicalSessionTimeoutMinutes: undefined\n },\n code: undefined\n}\n",
"text": "I keep getting this error although I have allowed access from anywhere in the Network access section of the cluster. (Screenshot of the same is attached below)\n\nimage1672×603 19.2 KB\nhere is the connection code**And the entire code is here https://replit.com/@KumaraswamyA/store-cipher-backend-1**(store-cipher-backend-1 - Replit)",
"username": "Kumaraswamy_A"
},
{
"code": "http://portquiz.net:27017/\n",
"text": "What do you get when you try to go at:",
"username": "steevej"
},
{
"code": "",
"text": "This server listens on all TCP ports, allowing you to test any outbound TCP port.You have reached this page on port 27017 (from http host header).Your network allows you to use this port. (Assuming that your network is not doing advanced traffic filtering.)Network service: unknown\nYour outgoing IP: 59.91.26.158",
"username": "Kumaraswamy_A"
},
{
"code": "",
"text": "You have reached this page on port 27017 (from http host header).did you do this from replit console or your pc? where do you run your app? replit, work pc or home pc?I have tried and replit has no problem connecting Atlas clusters.One possibility is that you just haven’t waited long enough when you added “0.0.0.0” to your network list. for me, it took about 15-20 seconds for the change to take effect.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "",
"username": "Kumaraswamy_A"
},
{
"code": "curl http://portquiz.net:27017/",
"text": "backend is in replitis that free or powered? free repls are stopped when you left the page or after a while.or ports got a problem somehow. try this on repl’s “shell”: curl http://portquiz.net:27017/it should show “Port test successful!” and give the IP of that repl. try running your backend after this test and see if it connects.edit: do this also when you get that error and see if the port test succeeds.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "do this also when you get that error and see if the port test succeeds",
"username": "Kumaraswamy_A"
},
{
"code": "",
"text": "Today, I could finally get the same problem you are experiencing.Then, I did a side-by-side test with replit and Compass. Removing and re-adding the access from everywhere. Compass had no problem connecting.It seems the problem arises from the container (virtual machine) our repl is started. I got two different addressses that both succeeded portquiz.net test but one failed to connect mongodb:23.251.145.77 → Failed\n35.203.153.221 → ConnectedI have sent a bug report within the repl. I can’t say what happens next. I suggest you close/reopen the repl by going back to “My Repls” page, waiting a bit, and re-open it. check if you connect. if not, try again.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Struggling for 2 days with the same connection issue. Tried suggested test methods and all of them are passed however still receiving error from mongodb.\nWaiting for a solution impatiently (my certification got stuck halfway).",
"username": "Valikhan_Dumshebayev"
},
{
"code": "",
"text": "Struggling for 2 days with the same connection issue. Tried suggested test methods and all of them are passed however still receiving error from mongodb.Hi @Valikhan_Dumshebayev, Although the error is generic, the problem here turned out to be related to replit.com container. If your case is also related to a repl, please read my post just above yours. Else read on.The root of the problem is the strict access rules on the host network your app runs. You might be behind a strict school/work network that prevents certain connections to the internet. In these cases, you need to move to a more permissive network such as home or mobile.PS: It is still a mystery to me why we can connect to portquiz.net but not to mongodb servers.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I am on my home network with no restriction. Actually, for me it has started at Saturday, then all of a sudden it got connected on Monday for some time and get disconnected again. Didn’t check on Sunday, though.",
"username": "Valikhan_Dumshebayev"
},
{
"code": "",
"text": "Actually, for me it has started at Saturday, then all of a sudden it got connected on Monday for some time and get disconnected againCan you please make a new post, include the full error message you get, and include a link here (because of irregularities)? That will allow both us community helpers and the mongodb staff to look with fresh sight.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Sure.\nNo connection to MongoDB from Replit - MongoDB Atlas - MongoDB Developer Community Forums",
"username": "Valikhan_Dumshebayev"
},
{
"code": "",
"text": "I can’t say what happens next. I suggest you close/reopen the repl by going back to “My Repls” page, waiting a bit, and re-open it. check if you connect. if not, try again.I have been doing this and sometimes it just doesn’t work.",
"username": "Kumaraswamy_A"
},
{
"code": "",
"text": "I have been doing this and sometimes it just doesn’t work.I know it is a pain to get a working container just continue with your luck.Please also consider sending a bug report from within the repl (help button on bottom-left) about this problem so to make replit team aware of the situation.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Yep. I just did that.\nThanks for taking the time and helping me out. \nI will update you once the issue gets resolved.",
"username": "Kumaraswamy_A"
},
{
"code": "",
"text": "If you are following this thread, here is the temporary solution provided by replit support team.“You can run “kill 1” in the shell of your Repl which will reboot it and may fix the issue, although it isn’t a 100% reliable workaround.”Apart from this the solution that has worked for me is changing the file name and running, the DB gets connected. In case you have already used the APIs in the frontend, after doing the above step change back your file name to the old one and the DB gets connected without any issues.Cheers.",
"username": "Kumaraswamy_A"
},
{
"code": "",
"text": "Apart from this the solution that has worked for me is changing the file name and running, the DB gets connectedthat is a weird one.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "maybe the repl becomes active with different IP when we change the repl file name.",
"username": "Kumaraswamy_A"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted | 2022-11-28T07:18:39.161Z | MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you’re trying to access the database from an IP that isn’t whitelisted | 18,444 |
|
null | [
"aggregation"
]
| [
{
"code": "$geoNear: {\n near: { type: \"Point\", coordinates: [lon, lat] },\n distanceField: \"dist.calculated\",\n maxDistance: distance,\n includeLocs: \"dist.location\",\n spherical: true,\n query: {\n type: type,\n },\n },\n{\n $project: {\n _id: 1,\n type: 1,\n distance: \"$dist.calculated\", // <- this returns the correct value\n position: \"$dist.location\", // <- if i add this line i get server error (500)\n },\n },\n",
"text": "I followed the $geoNear documentation and created a $geoNear query that works correctly:this is connected to a function enabled on a custom https endpoint. i get the results without any problem.the problem is in the next stage where i want to project the location in my results:i also tried “location: 1” and get same error. i need the coordinates in the project stage because i want to use them in my results view.the solution i found now is to run another query that loops the results and adds the location to each item but i wish to use the location during query stages but any attempt leads to error.any suggestions about it?",
"username": "Antonio_Gioia"
},
{
"code": "\"accept-encoding\": \"null\",\n",
"text": "addingto the headers in the request to the endpoint seems to fix the problem",
"username": "Antonio_Gioia"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Cannot project $dist.location after $geoNear query on atlas custom https endpoint | 2022-12-06T15:35:10.789Z | Cannot project $dist.location after $geoNear query on atlas custom https endpoint | 670 |
null | [
"queries",
"node-js",
"data-modeling"
]
| [
{
"code": "{\n _id: ObjectId('2456177899'),\n name: ABC.\n age: 28,\n}\n{\n _id: anthing\n userId: '2456177899'\n}\n\"let\": { \"userId\": { \"$toObjectId\": \"$userId\" } },\n_id: '-Msdae23124asd'\n{\n _id: anthing\n userId: ObjectId('2456177899')\n}\n",
"text": "First of all, I would like to thank this awesome community that has helped me immensely wrt each query I have posted here. I have a query that pertains to ObjectId in MongoDB, especially when I am migrating database from Firebase to MongoDB. I am using aggregation pipeline in my queries, and due to the nature of my queries I am facing a dilemma wrt _id of each document and their references in other collections.Consider a user documentand now consider an “Employment” document that has reference to user _idand as expected, it dosen’t work in $lookup, I applied the following method, in order for it to workand it works, but I am migrating data from firebase database, and during migration process, I am inserting firebase unique key as _id instead of ObjectId for exampleFirst option is to ensure that any new entry in migrated database conforms to ObjectId, for that to happen I will have to put checks in every controller, and MongoDB queries like I am doing with aggregation pipeline.Second option is to migrate firebase data with key as ObjectId, and save each reference id as ObjectId, for example “Employement” document will look like thisThis will save a lot of checks and conditions (which in some cases might be missed wrt future data entry), what is the best approach for my scenario, save each reference to _id as ObjectId in other collections or add checks etc in order for everything to work.Thanks in advance",
"username": "Daniyal_Khan"
},
{
"code": "",
"text": "Hey, unfortunately I have no answer for you but a question.\nI’m facing more or less the same problem as you and wanted to ask you how you referenced the objectId from user to userId of employment?",
"username": "CHH_N_A"
}
]
| Working with ObjectId in reference documents | 2021-09-14T16:43:24.010Z | Working with ObjectId in reference documents | 6,223 |
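A hedged sketch of the $lookup workaround mentioned in this thread, assuming an "employments" collection (name assumed) whose userId is stored as a string that must be converted before joining on users._id. Note that $toObjectId throws if the string is not a valid 24-character hex ObjectId, which is exactly why Firebase-style keys such as '-Msdae23124asd' push toward the second option of storing the reference as a real ObjectId at migration time.

// mongosh sketch: join string userId references onto users._id
db.employments.aggregate([
  {
    $lookup: {
      from: "users",
      let: { userId: { $toObjectId: "$userId" } },   // convert string -> ObjectId
      pipeline: [
        { $match: { $expr: { $eq: ["$_id", "$$userId"] } } }
      ],
      as: "user"
    }
  }
])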
null | [
"aggregation",
"queries",
"data-modeling"
]
| [
{
"code": "{\n \"_id\" : \"123\", \n \"C\" : \"C1\", \n \"K\" : \"K1\" ,\n.....(much more fields)\n \"Fs\" : [ \n \"F1\",\"F3\" \n ]\n },\n{\n \"_id\" : \"264\", \n \"C\" : \"C1\", \n \"K\" : \"K1\" ,\n.....(much more fields)\n \"Fs\" : [ \n \"F2\",\"F3\" \n ]\n }\n\n{\n \"_id\" : { \"C\" : \"C1\", \"K\" : \"K1\" }\n.....(much more fields)\n \"Fs\":[\n \"F1\",\"F2\",\"F3\"\n ]\n}\n",
"text": "Hello everyone,I’m trying to write an aggregation to return information about my sources and I can’t find a simple way to union values of arrays of the documents I group.Source docsWanted DocI managed to obtain this result using an $unwind before my $group; but my question is:There’s a simple way to use just one $group step? Something equivalent to a SQL Union.Thanks",
"username": "MauSamba"
},
{
"code": "",
"text": "What is the issue with using$unwindHow would you do that use case withSQL UnionTo me it looks like SQL UNION is not related at all to this kind of grouping.",
"username": "steevej"
},
{
"code": "",
"text": "I fear that, in production, it will take too long to execute. We’ll find out.In my head SQL Union was the better example, if you prefer c# it will be\nl.SelectMany(x=>x.Fs).Distinct()",
"username": "MauSamba"
},
{
"code": "",
"text": "l.SelectMany(x=>x.Fs).Distinct()Where is the equivalent of having “_id” : { “C” : “C1”, “K” : “K1” } as a group key in the above c# SelectMany? I feel the above do a lot less work that your Wanted Doc.I fear that, in production, it will take too long to execute. We’ll find out.Optimizing too early is not good because the performance issues might be elsewhere than where we fear they will be. Implements using the simpler code and optimize only if you have issues.",
"username": "steevej"
}
]
| Union simple arrays in $group stage | 2022-12-06T11:06:58.628Z | Union simple arrays in $group stage | 1,221 |
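A hedged sketch of the approach described above (collection name assumed): $unwind the array first, then $addToSet inside the $group gives the distinct union of the Fs values per { C, K } key. An alternative that avoids $unwind is to $push the arrays in the $group and merge them afterwards with $reduce and $setUnion, but the version below is usually the simpler starting point.

// mongosh sketch
db.sources.aggregate([
  { $unwind: "$Fs" },
  {
    $group: {
      _id: { C: "$C", K: "$K" },
      Fs: { $addToSet: "$Fs" }
    }
  }
])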
null | [
"node-js",
"mongoose-odm"
]
| [
{
"code": "",
"text": "What happened?\nwhen I run mongod I can access normally through the shell and mongoDbCompass\nbut my node application does not grant access\nand it shows this errorMongooseServerSelectionError: connect ECONNREFUSED ::1:27017\nat Connection.openUri (C:\\nodeTeste\\node_modules\\mongoose\\lib\\connection.js:825:32)What have you already done to try to solve it?\nran the command => mongod --ipv6\noh yes, my node application can access the database and fetch data … but the database does not appear in the shell or in mongoDbCompass\nand I can’t even access the banks I see in mongoDbCompass\nit looks like it’s creating a bank somewhere else",
"username": "Mardoqueu_Oliveira"
},
{
"code": "",
"text": "Try with mongodb://127.0.0.1:27017",
"username": "steevej"
},
{
"code": "",
"text": "thanks a lot it worked i was trying localhost .\nI’ve been trying for 5 hours lol",
"username": "Mardoqueu_Oliveira"
},
{
"code": "",
"text": "A recent change in the DNS resolver library (unrelated to MongoDB) where entries from /etc/hosts were sorted before and are now not sorted anymore resolves localhost to its IPv6 flavor ::1 rather than IPv4 127.0.0.1.",
"username": "steevej"
}
]
| Connection with mongoDB | 2022-12-06T14:33:13.681Z | Connection with mongoDB | 1,147 |
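A minimal sketch of the fix described above, pointing the driver at the IPv4 loopback address explicitly instead of relying on how localhost resolves (the database name is an assumption):

// Node.js / Mongoose
const mongoose = require("mongoose");

mongoose.connect("mongodb://127.0.0.1:27017/test")
  .then(() => console.log("Connected to database"))
  .catch((err) => console.log("Connection failed", err));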
null | [
"app-services-data-access"
]
| [
{
"code": "",
"text": "Whenever I create a new collection I need to apply read and write rules by default without manually doing it. Is there any way?",
"username": "Rabeeh_Ebrahim"
},
{
"code": "",
"text": "Hi, we introduced the concept of “Default Rules” recently to handle this very inconvenience. It simply means that if there are no collection-level rules for a new collection then it falls back to using the default rule: https://www.mongodb.com/docs/atlas/app-services/rules/#std-label-default-rules",
"username": "Tyler_Kaye"
}
]
| How to apply rules (read and write) by default when creating a new collections? | 2022-12-06T12:42:12.140Z | How to apply rules (read and write) by default when creating a new collections? | 1,728 |
null | [
"queries"
]
| [
{
"code": "",
"text": "“AU”: “[‘SHERMAN T’, ‘MARGALIT I’, ‘COREM S’]”\n——I want to convert this data from string to array\ndb.collection.find({“AU.$”:{$type:2}}).forEach(function(x){x.AU=Array(x.AU);db.collection.save(x)})[Error] TypeError: db.collection.find(…).forEach(…) is undefined",
"username": "M_M-m"
},
{
"code": "",
"text": "The function db.collection.find() does not return a container. It returns a cursor. If you want to use forEach() you must call something like toArray().",
"username": "steevej"
}
]
| Cannot convert data type | 2022-12-06T09:59:49.401Z | Cannot convert data type | 792 |
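A hedged sketch of one way to do the conversion (collection name assumed; the parsing below assumes the string always has the form "['A', 'B', …]" with no commas inside the values): materialize the cursor with toArray() as suggested in the reply, parse each string, and write the real array back.

// mongosh sketch
db.articles.find({ AU: { $type: "string" } }).toArray().forEach(function (doc) {
  const values = doc.AU
    .replace(/^\[|\]$/g, "")                      // drop the surrounding [ ]
    .split(",")
    .map(s => s.trim().replace(/^'|'$/g, ""));    // drop the quotes around each name
  db.articles.updateOne({ _id: doc._id }, { $set: { AU: values } });
});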
null | []
| [
{
"code": "",
"text": "how to check Data modification Source\nExample :- In My case I want to create Diff between API end point call vs Direct access for collections, So basically I want to check Source of call for Collections(API, Direct)\nPlease suggest me if there is any possible mechanism.",
"username": "Amit_Upadhyay2"
},
{
"code": "",
"text": "There is nothing built in for your use case.Most of us do not want to have to have performance penalty to have mongo keep track of that.It is easy to add a field when you insert documents in your collection. But what ever field you use in your API to identify the source, a good hacker can easily update any document with said field to mimic an API change in order to cover its track.As per your other postit seems you have people that have write access to your database that wreck havoc with your data. Random people should not have direct write access to your database and the security model of mongo Atlas allows a fine grain control of who does what.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @steevej ,\nThank you for your Quick response . can we achieve this type of Use case using any other technique .Thanks\nAmit",
"username": "Amit_Upadhyay2"
},
{
"code": "",
"text": "can we achieve this type of Use case using any other techniqueNot that I know. We just have to wait and see if someone share something.",
"username": "steevej"
}
]
| How to check Data modification Source | 2022-12-05T14:31:48.642Z | How to check Data modification Source | 1,170 |
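A minimal sketch of the suggestion above (collection and field names are assumptions): have the API layer stamp every write it performs, so writes that arrive through the API can later be distinguished from direct writes. As noted in the thread, this is not tamper-proof, since anyone with write access could set the same field.

// set by the API layer on every insert/update it performs
db.orders.insertOne({
  item: "abc123",
  qty: 1,
  source: "api",
  sourceRecordedAt: new Date()
});

// later: find documents that did not come through the API
db.orders.find({ source: { $ne: "api" } });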
[
"atlas-functions",
"data-api"
]
| [
{
"code": "Request ValidationNo Additional AuthorizationVerify Payload SignatureApplication Authentication",
"text": "Hi all,I am trying to setup a few HTTPS Endpoints to perform user management functions. The backing functions have been defined and are working as expected when Request Validation is set to No Additional Authorization. The problem comes in when the validation is set to Verify Payload Signature.When I try to call the endpoint using curl as described in the example I get the following error back “expected signature method to be sha256”.\nimage1109×111 23 KB\nI have confirmed that the secret being used to generate the hash is the same as the one configured on Atlas (in the test app it was simply 12345). The hash was generated using the function described in the documentation, other languages were also tested but the same end result.The function is configured with System authentication and was tested both as Private and Public. I have also tested with the function configured to use Application Authentication and an api key but still no change.Any assistance would be appreciated.Regards\nChris",
"username": "Chris_Snyders"
},
{
"code": "Endpoint-Signature::sha256=<hex encoded hash>:Endpoint-Signature:sha256=<hex encoded hash>",
"text": "Solved my issues, the documentation says to use Endpoint-Signature::sha256=<hex encoded hash> but this is wrong, there should only be a single : e.g Endpoint-Signature:sha256=<hex encoded hash>",
"username": "Chris_Snyders"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| HTTPS Endpoint "Verify Payload Signature" authorization failing | 2022-10-10T07:25:04.172Z | HTTPS Endpoint “Verify Payload Signature” authorization failing | 2,345 |
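A hedged sketch of producing the header from the accepted answer in Node.js, assuming (as the thread describes) that the signature is a hex-encoded HMAC-SHA256 of the raw request body computed with the endpoint secret; the secret "12345" and the example payload are only placeholders.

const crypto = require("crypto");

const secret = "12345";                                      // endpoint secret (placeholder)
const body = JSON.stringify({ email: "user@example.com" });  // raw request body (placeholder)

const hash = crypto.createHmac("sha256", secret).update(body).digest("hex");

// note the single colon between the header name and "sha256=..."
console.log(`Endpoint-Signature: sha256=${hash}`);
// with curl:  curl -H "Endpoint-Signature: sha256=<hash>" -d "$body" <endpoint url>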
|
null | [
"aggregation"
]
| [
{
"code": "let userId = any userId in this group\nlet groupId = any groupId\nlet result = await groupChatModel.aggregate([\n\n {\n\n $match: {\n\n groupId: groupId\n\n }\n\n },\n\n\n\n {\n\n \"$sort\": {\n\n \"createdAt\": -1\n\n }\n\n },\n\n {\n\n\n $project: {\n\n LastmessageCreatedAt: {\n\n\n $cond: {\n\n\n\n if: {\n\n $eq: [userId, \"$senderId\"]\n\n },\n\n then: $createdAt\",\n else: \"$$REMOVE\" // i want nothing here but cant get rid of this else condition\n\n\n\n }\n\n\n\n }\n\n }\n\n }\n]),\n",
"text": "i want to get createdAt field of last message of a specific user in agroup\nsuppose here userId is any user . i want to fetch his last message in a group which he sent and want to get createdAt field of that message. then i want to compare all documents whose createdAt time is less then this document and want to fetch 1st document in that List.in below code 1st i sorted documents reversely now i want to get createdAt field of last message of user i put if condition but remain unsuccesfull .try with arrays but still no success.\nit returns createdAt field of last document processed. i want createdAt field of that document where senderId field value is equal to userId\nI dont want another query .Can anyone help me to get createdAt field of that document?\nplease dont put simple match condition because match will give one document and all other document doesnt go to next stage . after that i also want compare that createdAt field with other documents createdAt field.",
"username": "Saad_Tanveer"
},
{
"code": "",
"text": "Hi @Saad_Tanveer,It’s really hard to understand your need just based on that pipeline.Could you please provide a bunch of documents (just the fields we need) that represent your problem and the expected output based on these sample documents?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "{\n \"_id\" : ObjectId(\"633ebce93642850dfa00dbb7\"),\n \"messageType\" : 0,\n \"groupId\" : group-A,\n \"message\" : \"my 1st message\",\n \"senderId\" : user-A,\n \"createdAt\" : ISODate(\"2022-11-06T16:32:57.000Z\"),\n \"__v\" : 0,\n \"updatedAt\" : ISODate(\"2022-11-06T14:28:03.478Z\")\n}\n\n/* 2 */\n{\n \"_id\" : ObjectId(\"633ebd553642850dfa00dbb9\"),\n \"messageType\" : 0,\n \"groupId\" : group-A,\n \"message\" : \"UserB -1st message\",\n \"senderId\" : user-B,\n \"createdAt\" : ISODate(\"2022-11-13T16:34:45.000Z\"),\n \"__v\" : 0,\n \"updatedAt\" : ISODate(\"2022-11-13T14:28:03.478Z\")\n}\n\n/* 3 */\n{\n \"_id\" : ObjectId(\"633fd4b5cdf14144b040d63b\"),\n \"messageType\" : 0,\n \"groupId\" : group-X,\n \"message\" : \"User x 1st message in X-group\",\n \"senderId\" : User-X,\n \"createdAt\" : ISODate(\"2022-11-15T12:26:45.000Z\"),\n \"__v\" : 0,\n \"updatedAt\" : ISODate(\"2022-11-15T14:28:03.478Z\")\n}\n\n/* 4 */\n{\n \"_id\" : ObjectId(\"6343c6c0dbfa6356d476a642\"),\n \"messageType\" : 0,\n \"groupId\" : group-A\n \"message\" : \"userA last message in groupA\",\n \"senderId\" : user-A,\n \"createdAt\" : ISODate(\"2022-11-19T12:16:16.000Z\"),\n \"__v\" : 0,\n \"updatedAt\" : ISODate(\"2022-11-19T14:28:03.478Z\")\n}\n\n/* 5 */\n{\n \"_id\" : ObjectId(\"63451926177a364ce8909d33\"),\n \"messageType\" : 0,\n \"groupId\" : group-A\n \"message\" : \"secong mesage\",\n \"senderId\" : user-B,\n \"createdAt\" : ISODate(\"2022-11-23T12:20:06.000Z\"),\n \"__v\" : 0,\n \"updatedAt\" : ISODate(\"2022-11-23T14:28:03.478Z\")\n}\n{\n \"_id\" : ObjectId(\"633ebd553642850dfa00dbb9\"),\n \"messageType\" : 0,\n \"groupId\" : group-A,\n \"message\" : \"UserB -1st message\",\n \"senderId\" : user-B,\n \"createdAt\" : ISODate(\"2022-11-13T16:34:45.000Z\"),\n \"__v\" : 0,\n \"updatedAt\" : ISODate(\"2022-11-13T14:28:03.478Z\")\n}\n\n",
"text": "@MaBeuLux88\nSample documents\nMy groupchat collectionSuppose i want to get last message of any user in group A before user A latest message in group A.\nin this case latest message of user A in group A is message: “userA last message in groupA”\nso last message in group A before userA latest message is message: “UserB -1st message”Expected outcome if we pass user A and group ATry to solve in 1 query or 1 aggregation query",
"username": "Saad_Tanveer"
},
{
"code": "",
"text": "I dont want another queryI am sorry I couldn’t read your whole post, but deduced a few bit from your example data and output. I don’t have the answer but correct me if I am wrong: get the document with time stamp just below the last document of that user. That seems a “match group, project only required fields, sort all first, maybe put them in an array in a $lookup, find first index of matching user, get post _id from index+1” query.the reason I quoted your above statement is that the reason we have “aggregation” pipeline is to write many queries but execute them in a single attempt. you just need to ask the right questions for your queries ",
"username": "Yilmaz_Durmaz"
},
{
"code": "user = \"user-A\" ; \ngroup = \"group-A\" ;\n_match = { \"senderId\" : user , \"groupId\" : group }\n_sort = { \"$sort\" : { \"createdAt\" : -1 } }\n_limit = { \"$limit\" : 1 }\n_lookup = { \"$lookup\" : {\n \"from\" : \"group_chat\" ,\n \"localField\" : \"groupId\" ,\n \"foreignField\" : \"groupId\" ,\n \"as\" : \"_result\" ,\n \"let\" : { \"last_createdAt\" : \"$createdAt\" } ,\n \"pipeline\" : [\n { \"$match\" : {\n \"senderId\" : { \"$ne\" : user } ,\n \"$expr\" : { \"$lte\" : [ \"$createdAt\" , \"$$last_createdAt\" ] }\n } } ,\n _sort ,\n _limit\n ]\n} }\ndb.group_chat.aggregate( [ _match , _sort , _limit , _lookup ] )\n/* The above provides the following */\n{ _id: ObjectId(\"6343c6c0dbfa6356d476a642\"),\n messageType: 0,\n groupId: 'group-A',\n message: 'userA last message in groupA',\n senderId: 'user-A',\n createdAt: 2022-11-19T12:16:16.000Z,\n __v: 0,\n updatedAt: 2022-11-19T14:28:03.478Z,\n _result: \n [ { _id: ObjectId(\"633ebd553642850dfa00dbb9\"),\n messageType: 0,\n groupId: 'group-A',\n message: 'UserB -1st message',\n senderId: 'user-B',\n createdAt: 2022-11-13T16:34:45.000Z,\n __v: 0,\n updatedAt: 2022-11-13T14:28:03.478Z } ] }\n\n/* you then may add a cosmetic state such as */\n_replace = { \"$replaceRoot\" : { \"newRoot\" : { \"$arrayElemAt\" : [ \"$_result\" , 0 ] } } }\ndb.group_chat.aggregate( [ _match , _sort , _limit , _lookup , _replace ] )\n\n/* to exactly get */\n{ _id: ObjectId(\"633ebd553642850dfa00dbb9\"),\n messageType: 0,\n groupId: 'group-A',\n message: 'UserB -1st message',\n senderId: 'user-B',\n createdAt: 2022-11-13T16:34:45.000Z,\n __v: 0,\n updatedAt: 2022-11-13T14:28:03.478Z }\n",
"text": "Here is my go at it.",
"username": "steevej"
},
{
"code": "",
"text": "@Yilmaz_Durmaz\nMy problem is simple.\nSuppose you are a part of Digital team whatsapp group. So your latest message in group is “Hy its my last message till now”. I want the latest message in group before your last message sent in group .",
"username": "Saad_Tanveer"
},
{
"code": "",
"text": "I am not sure if you examined @steevej 's answer above. I put it in a playground at this link: Mongo playground.He uses shell-like commands (adapt and use in any language) and it is better to understand than a bit fat query as used in the playground. if you need a bit of explanation: the first few stages find the last message of a user, and the lookup finds messages older than that one that does not belong to that user, the replaceroot returns the first of those.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "@steevej Thanks. That worked. i marked it as solution",
"username": "Saad_Tanveer"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Get a field value on the basis of condition in aggregation | 2022-12-02T10:53:55.569Z | Get a field value on the basis of condition in aggregation | 4,452 |
null | [
"queries",
"node-js",
"mongoose-odm",
"api"
]
| [
{
"code": "const express =require(\"express\");\nconst bodyParser = require(\"body-parser\");\nconst mongoose = require(\"mongoose\");\nconst Promise = require('bluebird');\nPromise.promisifyAll(mongoose);\n\nconst app = express();\n\nmongoose.connect(\"mongodb://<Username>:<Password>@<RemoteHost>:<Port>/user\")\n .then(function(){\n console.log(\"Connected to database\");\n })\n .catch(function(err){\n console.log(\"Connection failed\", err);\n });\n\napp.use(bodyParser.json());\napp.use(bodyParser.urlencoded({extended: true}));\napp.use(express.static(\"public\"));\n\n\n// Below is the schema and the database model\n\nconst userSchema = new mongoose.Schema({\n\n role_id: {type:Number},\n name:{ firstName:{type:String},\n middleName: {type:String},\n lastName: {type:String}\n },\n contact:{\n mobileNo:{type:Number},\n mail_id:{ \n username:{type:String},\n password:{type:String}\n }\n },\n address: {type:String},\n pincode: {type:Number},\n city: {type:String},\n state: {type:String}\n \n});\n\nconst User = mongoose.model(\"users\",userSchema);\n\n// Below is the get,post method\n\napp.get(\"/getusers\",function(req,res){\n User.find(function(err,foundUsers){ \n if(err){\n res.send(err);\n } else {\n res.send(foundUsers);\n }\n });\n});\n\napp.post(\"/getusers\",function(req,res){\n try {\n const newUser = new User({\n role_id: req.body.role_id ,\n name:{\n firstName: req.body.name.firstName,\n middleName: req.body.name.middleName,\n lastName: req.body.name.lastName\n },\n contact:{\n mobileNo: req.body.contact.mobileNo,\n mail_id:{\n username: req.body.contact.mail_id.username,\n password: req.body.contact.mail_id.password\n }\n },\n address: req.body.address,\n pincode: req.body.pincode,\n city: req.body.city,\n state: req.body.state\n });\n newUser.save();\n \n } catch (error) {\n console.log(error);\n }\n})\n/Documents/mProject/node_modules/mongodb/lib/cmap/connection.js:207\n callback(new error_1.MongoServerError(document));\n ^\n\nMongoServerError: command insert requires authentication\n at Connection.onMessage (/Documents/mProject/node_modules/mongodb/lib/cmap/connection.js:207:30)\n at MessageStream.<anonymous> (/Documents/mProject/node_modules/mongodb/lib/cmap/connection.js:60:60)\n at MessageStream.emit (node:events:513:28)\n at processIncomingData (/Documents/mProject/node_modules/mongodb/lib/cmap/message_stream.js:132:20)\n at MessageStream._write (/Documents/mProject/node_modules/mongodb/lib/cmap/message_stream.js:33:9)\n at writeOrBuffer (node:internal/streams/writable:391:12)\n at _write (node:internal/streams/writable:332:10)\n at MessageStream.Writable.write (node:internal/streams/writable:336:10)\n at Socket.ondata (node:internal/streams/readable:754:22)\n at Socket.emit (node:events:513:28)\n at addChunk (node:internal/streams/readable:315:12)\n at readableAddChunk (node:internal/streams/readable:289:9)\n at Socket.Readable.push (node:internal/streams/readable:228:10)\n at TCP.onStreamRead (node:internal/stream_base_commons:190:23) {\n ok: 0,\n code: 13,\n codeName: 'Unauthorized',\n [Symbol(errorLabels)]: Set(0) {}\n}\n",
"text": "I’m able to connect to the remote serverNow after calling API from software or postman I’m getting below errorI’ve tried below combinations for mongoDB connectionsSo please respond if you found any answers",
"username": "R_V"
},
{
"code": " codeName: 'Unauthorized',\nunauthorized",
"text": "Hi @R_V - Welcome to the communityI’ve tried below combinations for mongoDB connections\nmongoose.connect(“mongodb://${Username}:${Password}@${RemoteHost}:${Port}/user”)\nmongoose.connect(“mongodb://${RemoteHost}:${Port}/user”)\nmongoose.connect(“mongodb://${RemoteHost}/user”)\nmongoose.connect(“mongodb://${Username}:${Password}@${RemoteHost}/user”)\nmongoose.connect(“mongodb+srv://${Username}:${Password}@${RemoteHost}:${Port}/user”)\nmongoose.connect(“mongodb+srv://${RemoteHost}:${Port}/user”)\nmongoose.connect(“mongodb+srv://${RemoteHost}/user”)\nmongoose.connect(“mongodb+srv://${Username}:${Password}@${RemoteHost}/user”)I do not believe this is due to the above combinations of connection strings used. The unauthorized error generally indicates that the associated database user does not have sufficient privileges to perform a particular action. You may find the following Built-In Roles documentation useful as a reference point.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thank you for the solution. It did helped.",
"username": "R_V"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Command insert requires authentication when connecting to remote server of database | code: 13, codeName: Unauthorized | 2022-11-23T07:38:45.422Z | Command insert requires authentication when connecting to remote server of database | code: 13, codeName: Unauthorized | 11,522 |
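A minimal sketch of granting the privileges the reply points to, run from mongosh with an account that is allowed to manage users; the user name and password are placeholders, and readWrite on the target database is enough for inserts.

// create an application user with readWrite on the "user" database
db.getSiblingDB("user").createUser({
  user: "appUser",          // placeholder
  pwd: "changeMe",          // placeholder
  roles: [{ role: "readWrite", db: "user" }]
});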
null | [
"react-native"
]
| [
{
"code": " \"title\": \"Media\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"access\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"collectionType\": {\n \"bsonType\": \"string\"\n },\n \"media\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"name\": \"Media_media\",\n \"properties\": {\n \"author\": {\n \"bsonType\": \"string\"\n },\n \"createdAt\": {\n \"bsonType\": \"string\"\n },\n \"genre\": {\n \"bsonType\": \"string\"\n },\n \"language\": {\n \"bsonType\": \"string\"\n },\n \"length\": {\n \"bsonType\": \"int\"\n },\n \"link\": {\n \"bsonType\": \"string\"\n },\n \"mediaName\": {\n \"bsonType\": \"string\"\n },\n \"note\": {\n \"bsonType\": \"string\"\n },\n \"releaseDate\": {\n \"bsonType\": \"string\"\n },\n \"type\": {\n \"bsonType\": \"string\"\n }\n }\n }\n },\n \"userId\": {\n \"bsonType\": \"string\"\n }\n }\n}\n",
"text": "Hey, im using this Template to build my app upon. The only problem is, that i need an array of objects in my schema, which i can’t seem to get working.\nMy mongoDB schema looks like this:I would appreciate any help",
"username": "Silas_Jeydo"
},
{
"code": "",
"text": "Hello @Silas_Jeydo ,I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, could you please provide me with below details to help me understand your use-case?Regards,\nTarun",
"username": "Tarun_Gaur"
}
]
| Embed an array of objects with the latest template examples | 2022-11-23T20:36:24.167Z | Embed an array of objects with the latest template examples | 1,378 |
null | [
"data-modeling",
"indexes"
]
| [
{
"code": "truefalsefalse",
"text": "In a collection, there is a boolean field. For this field, I want to store the bare minimum data. For example, I only plan to store the value true. For documents where this field has the value of false, it is simply not stored at all. This scheme contains just enough information to make the distinction. If, in addition, I also store the value of false, it would be superfluous data.What are the pros and cons of this minimalist strategy?\nIn what scenarios is it used?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "is_active : true\nactivation_date : 2022-12-05\n",
"text": "I am a big fan of only storing true values for Boolean fields. This, with partial index seems to be a good way to reduce the amount of data stored while making some queries (the one including field:true) in their conditions more efficient.I often go further and rather than using field:true, I use a timestamp that indicates when the field became true.Rather thanI haveThis way I have more information in a single field. The existence of the field activation_date indicates that the document is_active and when it became active. Two birds with one stone.",
"username": "steevej"
}
]
| Storing bare minimum data in MongoDB | 2022-12-06T01:06:44.461Z | Storing bare minimum data in MongoDB | 1,239 |
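A sketch of the pattern steevej describes, pairing the presence-only field with a partial index (collection name assumed): only documents that are "active" carry the field at all, the index only contains those documents, and the lookup stays index-backed.

// index only the documents that actually have the field
db.users.createIndex(
  { activation_date: 1 },
  { partialFilterExpression: { activation_date: { $exists: true } } }
);

// "is_active == true" becomes "the field exists"
db.users.find({ activation_date: { $exists: true } });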
null | [
"queries",
"dot-net"
]
| [
{
"code": "OS: win10 x64\n.NET: 7.0\nTARGET APP:WIN x64\nMONGODB SERVER VERSION:3.4.2\nMONGODB DRIVER VERSION:2.13.3\npublic class MongoDbObject\n{\n [JsonConstructor]\n public MongoDbObject() { }\n public ObjectId _id { get; set; }\n public string Product { get; set; }\n public string Production_Phase { get; set; }\n public string TestLine { get; set; }\n public string SN { get; set; }\n [BsonDateTimeOptions(Kind = DateTimeKind.Local)]\n public DateTime Test_Datetime { get; set; }\n public string Station { get; set; }\n public string Test_Result { get; set; }\n [BsonDateTimeOptions(Kind = DateTimeKind.Local)]\n public DateTime Upload_Datetime { get; set; }\n public string FileName { get; set; }\n public string[] Files { get; set; }\n public string Equipment_NO { get; set; }\n public string Slot { get; set; }\n public string Fail_Item { get; set; }\n public TestItemObject[] TestItems { get; set; }\n}\n\npublic class TestItemObject\n{ \n public TestItemObject() { }\n public string TestItem { get; set; }\n public float Usl { get; set; }\n public float Lsl { get; set; }\n public float TestValue { get; set; }\n public bool Result { get; set; }\n}\n\npublic async static Task DownloadLogfileByTestdate_StationAsync(DateTime startDate,DateTime endDate,string station)\n{ \n try\n {\n ProcessLogfile.config = config;\n var client = new MongoClient(config.DataBaseConfig.db_conn_string);\n var db = client.GetDatabase(config.DataBaseConfig.db_name);\n var collections = db.GetCollection<MongoDbObject>(config.DataBaseConfig.collection);\n Console.WriteLine(\"数据库建立连接并打开连接...\");\n List<MongoDbObject>? items = new();\n if(config.DataBaseConfig.only_online_data)\n items = collections.Find(x =>x.SN.Length>config.DataBaseConfig.filter_sn_length && x.FileName.ToUpper().Contains(\"ONLINE\") && x.SN!=string.Empty && x.Test_Datetime > startDate && x.Test_Datetime < endDate && x.Station.ToUpper().Contains(station.ToUpper().Trim())).ToList();\n else\n items = collections.Find(x => x.SN != string.Empty && x.SN.Length > config.DataBaseConfig.filter_sn_length && x.Test_Datetime > startDate && x.Test_Datetime < endDate && x.Station.ToUpper().Contains(station.ToUpper().Trim())).ToList();\n\n if (items.Count > 0)\n {\n Console.WriteLine($\"查找到符合条件的数据:\\t{items.Count} 条\");\n await ProcessLogfile.SaveToLocalFileAsync(items.ToArray(), filePath);\n }\n else\n {\n Console.Error.WriteLine(\"No valid data be found!\");\n } \n }\n catch (Exception ex)\n {\n Console.Error.WriteLine(ex.Message + \"\\r\\n\" + ex.StackTrace);\n }\n}\n\nNo suitable constructor found for serializer type: 'MongoDB.Bson.Serialization.Serializers.DateTimeSerializer'.\n\nat MongoDB.Bson.Serialization.BsonSerializationProviderBase.CreateSerializer(Type, IBsonSerializerRegistry) + 0x1a8\nat MongoDB.Bson.Serialization.PrimitiveSerializationProvider.GetSerializer(Type, IBsonSerializerRegistry) + 0x92 at MongoDB.Bson.Serialization.BsonSerializerRegistry.CreateSerializer(Type type) + 0x91 at System.Collections.Concurrent.ConcurrentDictionary2.GetOrAdd(TKey, Func2) + 0x82 at MongoDB.Bson.Serialization.BsonSerializerRegistry.GetSerializer(Type) + 0x5e at MongoDB.Bson.Serialization.BsonMemberMap.GetSerializer() + 0x176 at MongoDB.Bson.Serialization.Attributes.BsonSerializationOptionsAttribute.Apply(BsonMemberMap) + 0x16 at MongoDB.Bson.Serialization.Conventions.AttributeConventionPack.AttributeConvention.Apply(BsonMemberMap) + 0x16d at MongoDB.Bson.Serialization.Conventions.ConventionRunner.Apply(BsonClassMap) + 0x146 at 
MongoDB.Bson.Serialization.BsonClassMap.AutoMapClass() + 0x33 at MongoDB.Bson.Serialization.BsonClassMap.LookupClassMap(Type) + 0x18c at MongoDB.Bson.Serialization.BsonClassMapSerializationProvider.GetSerializer(Type, IBsonSerializerRegistry) + 0xdb at MongoDB.Bson.Serialization.BsonSerializerRegistry.CreateSerializer(Type type) + 0x91 at System.Collections.Concurrent.ConcurrentDictionary`2.GetOrAdd(TKey, Func`2) + 0x82 at MongoDB.Bson.Serialization.BsonSerializerRegistry.GetSerializer(Type) + 0x5e at MongoDB.Bson.Serialization.BsonSerializerRegistry.GetSerializer`T + 0x25 at MongoDB.Driver.MongoCollectionImpl`1..ctor(IMongoDatabase, CollectionNamespace, MongoCollectionSettings, ICluster, IOperationExecutor) + 0x6d at MongoDB.Driver.MongoDatabaseImpl.GetCollection[TDocument](String, MongoCollectionSettings) + 0xa5 at CpkMongoDbDownload.MongoDbController.<DownloadLogfileByTestdate_StationAsync>d__4.MoveNext() + 0x158\n\n",
"text": "I created a testing app in .NET Core 7. This is my development environmentThis is my DataObject class:This is my testing property:It works in debug or release mode, but there is a exception when published to an AOT app.This is the exception:How to solve this issue?",
"username": "Huang_YF"
},
{
"code": "",
"text": "Anybody can help me?",
"username": "Huang_YF"
}
]
| There is a MONGODB's exception when publish a AOT app in .net core 7 | 2022-11-30T04:37:22.570Z | There is a MONGODB’s exception when publish a AOT app in .net core 7 | 1,242 |
null | [
"graphql"
]
| [
{
"code": "",
"text": "So if i decide to use the Atlas Device Relationships for Realm and GraphQL there is an open question for me. How are rules and roles applied to this?\nFor example i have a collection contact with a forgein key to the collection Dataset, how do the rules apply to this. So i have sync rules for the contact that it allows only to read this data as an owner, does this also automaticly include the reference to the dataset? Or do i need to define there additional rules and these one aply than? So because the dataset so far only is allowed to be read an write by the owner.\nThe owner of the contactdata is a different one then the owenr of the dataset. So does the owner of the contact data know have the permission to read this refernec data or not?I hope you understand what i mean and sorry for bad englisch ",
"username": "Jakob_Wusten"
},
{
"code": "",
"text": "For all services, if you have a cross-collection link then implicitly it is just storing a foreign key to the linking collection. This means that if you query collection 1, you will get all of the primary keys for collection 2 (the one being linked to). For sync, this means that if you query on a collection you will get all of the foreign keys but none of the actual objects that they refer to. The SDK’s will handle this under the hood by letting you know that the list is “empty” in this case and they treat links as null unless they actually have the object being linked to. Therefore, in order to effectively use links you need to query on both collections and have permissions to view both collections (which makes sense since you shouldn’t be able to bypass permissions just because you have a link to an object)This is why it is often a better fit to use embedded objects when there truly is a “has-a” or “has-many” relationship. However, it still is very possible to use relationships effectively with sync, you just need to have access to both collections.",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Relationships and Rules | 2022-12-05T10:19:39.270Z | Relationships and Rules | 1,632 |
null | [
"dot-net"
]
| [
{
"code": "public class MainClass\n{\n public Guid Id { get; set; }\n public NestedClass Detailed { get; set; }\n}\n\npublic class NestedClass\n{\n public string Value { get; set; }\n}",
"text": "Given the simple nested class below odata call fails with error messageOData: $filter=detailed/value eq ‘test’Exception has been thrown by the target of an invocation.\nSystem.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.\n—> System.InvalidOperationException: IIF(({document}{Detailed} == null), null, {Detailed.Value}) is not supported.I understand you do not have a full OData with queryable support but i’m looking for ways forward for us since we try to adopt at least basic OData support in all our APIs.Questions:Versions\nMongoDB.Driver: 2.12.0\nMongoDB 4.4.4 Community\nMicrosoft.AspNetCore.OData: 5.x",
"username": "David_Ernstsson"
},
{
"code": "",
"text": "Did you managed to solve this?",
"username": "Stefan_Niculescu"
},
{
"code": "",
"text": "Is there any news about it?\nWe would really need to have this issue resolved in order to move forward with our solution.",
"username": "faramos"
}
]
| OData .Net Core simple nested query fails | 2021-03-10T18:53:09.738Z | OData .Net Core simple nested query fails | 3,571 |
null | []
| [
{
"code": "",
"text": "Hello everyone.What is the best type to use for price value? Is it Double or Float? I am using Realm objects.\nThe price should contain only 2 decimals. (10.55$)Have a nice day!",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Hi @Ciprian_Gabor,For prices you want to use the Decimal128 type so all the calculations are absolutely exact (and not rounded to the next float or double) because of the IEEE 754.The Realm SDK 10.0 is now Generally Available with new capabilities such as Cascading Deletes and new types like Decimal128.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks for you answer, what types should I use for Kotlin and SwiftUI",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "@Ciprian_Gabor: May I know which SDK you are using?",
"username": "Mohit_Sharma"
},
{
"code": "id(\"io.realm.kotlin\") version \"1.4.0\"",
"text": "id(\"io.realm.kotlin\") version \"1.4.0\"",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "That’s what I suspected earlier.Currently, Kotlin SDK doesn’t support Decimal128 which is why you couldn’t find the type but would be available soon. With this said and considering this limitation I would use Double as they are more accurate when doing price calculations.",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "Thanks!\nDo you have any exact date when is going to be available?",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Sorry, I don’t have an update regarding that.",
"username": "Mohit_Sharma"
}
]
| Float vs Double type for Price value | 2022-12-02T10:32:31.291Z | Float vs Double type for Price value | 2,031 |
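Where Decimal128 is available (for example in mongosh or in SDKs that already expose it), a price can be stored exactly; this is a small illustration only, since the thread's conclusion for the Kotlin SDK is to fall back to Double until Decimal128 support lands.

// mongosh: store the price as an exact decimal
db.products.insertOne({ name: "example item", price: NumberDecimal("10.55") });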
null | [
"node-js",
"production",
"transactions",
"typescript"
]
| [
{
"code": "Filteranyinterface CircularSchema {\n name: string;\n nestedSchema: CircularSchema;\n}\n\n// we have a collection of type Collection<CircularSchema>\n\n// below a depth of 9, type checking is enforced\ncollection.findOne({ 'nestedSchema.nestedSchema.nestedSchema.name': 25 }) // compilation error - name must be a string\n\n// at a depth greater than 9, code compiles but is not type checked (11 deep)\ncollection.findOne({\n 'nestedSchema.nestedSchema.nestedSchema.nestedSchema.nestedSchema.nestedSchema.nestedSchema.nestedSchema.nestedSchema.nestedSchema.name': 25\n}) // NO compilation error\n",
"text": "The MongoDB Node.js team is pleased to announce version 4.11.0 of the mongodb package!Version 4.3.0 of the Node driver added Typescript support for dot notation into our Filter type but\nin the process it broke support for recursive schemas. In 4.11.0, we now support recursive schemas and\nprovide type safety on dot notation queries up to a depth of 9. Beyond a depth of 9, code still compiles\nbut is no longer type checked (it falls back to a type of any).Note that our depth limit is a product of Typescript’s recursive type limitations.Many thanks to those who contributed to this release!We invite you to try the mongodb library immediately, and report any issues to the NODE project.",
"username": "Bailey_Pearson"
},
{
"code": "Filterinterface Author {\n name: string;\n bestBook: Book;\n}\n\ninterface Book {\n title: string;\n author: Author;\n}\n \nlet authors: Collection<Author>\n\n// below a depth of 8, type checking is enforced\nauthors.findOne({ 'bestBook.author.bestBook.title': 25 }}) \n// ✅ expected compilation error is thrown: \"title must be a string\"\n\n// at a depth greater than 8 code compiles but is not type checked (9 deep in this example)\nauthors.findOne({ 'bestBook.author.bestBook.author.bestBook.author.bestBook.author.name': 25 }) \n// ⛔️ perhaps unexpected, no compilation error is thrown because the key is too deeply nested\n",
"text": "There is a small limitation in type checking for recursive types in 4.11.0. Recursive types which reference themselves will compile safely but do not have type checking.The code example in the release notes will successfully compile, but it will not type check the Filter predicate.The following is an example that shows a scenario where type checking is enforced.",
"username": "Bailey_Pearson"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB NodeJS Driver 4.11.0 Released | 2022-10-19T16:36:50.737Z | MongoDB NodeJS Driver 4.11.0 Released | 2,700 |
[
"containers"
]
| [
{
"code": "apt install ./mongodb-database-tools-ubuntu2004-arm64-100.6.1.debThe following packages have unmet dependencies:\n mongodb-database-tools:arm64 : Depends: libc6:arm64 but it is not installable\n Depends: libgssapi-krb5-2:arm64 but it is not installable\n Depends: libkrb5-3:arm64 but it is not installable\n Depends: libk5crypto3:arm64 but it is not installable\n Depends: libcomerr2:arm64 but it is not installable\n Depends: libkrb5support0:arm64 but it is not installable\n Depends: libkeyutils1:arm64 but it is not installable\nE: Unable to correct problems, you have held broken packages.\n",
"text": "Hello I tried to install Mongo databasetools\nI use Ubuntu 20.04 arm64,\nAnd when I execute …\napt install ./mongodb-database-tools-ubuntu2004-arm64-100.6.1.deb\nI get the following errorAll of these packages are ‘Ubuntu key packages’, so I tried to update them … but it’s not possible to remove them (can harm Ubuntu …)\nI have also tried to run it in a Docker container (Ubuntu & Debian, it fails too …)I have tried to update Ubuntu apt tools with the following tuto apt - How do I resolve unmet dependencies after adding a PPA? - Ask Ubuntu ?\nBut it does not work neither;\nanyone as an idea to solve this issue?",
"username": "Sebastien_Laloo"
},
{
"code": "sudo apt update",
"text": "Your apt cache should be up to date beforehand.So run sudo apt update first.",
"username": "chris"
}
]
| Problem installing database-cli-tools | 2022-12-05T17:26:39.135Z | Problem installing database-cli-tools | 1,298 |
|
[
"app-services-user-auth",
"react-js"
]
| [
{
"code": "Error: Request failed (POST https://realm.mongodb.com/api/client/v2.0/app/xxxxxxxx/auth/providers/local-userpass/register): invalid json (status 400)\nimport { Button, TextField } from \"@mui/material\";\nimport { useContext, useState } from \"react\";\nimport { Link, useLocation, useNavigate } from \"react-router-dom\";\nimport { UserContext } from \"../contexts/user.context\";\n\nconst Signup = () => {\n const navigate = useNavigate();\n const location = useLocation();\n\n const { emailPasswordSignup } = useContext(UserContext);\n const [form, setForm] = useState({\n email: \"\",\n password: \"\"\n });\n\n const redirectNow = () => {\n const redirectTo = location.search.replace(\"?redirectTo=\", \"\");\n navigate(redirectTo ? redirectTo : \"/\");\n };\n\n const onFormInputChange = (event) => {\n const { name, value } = event.target;\n setForm({ ...form, [name]: value });\n };\n\n const onSubmit = async() => {\n try { \n const user = await emailPasswordSignup(form.email, form.password);\n if (user) {\n redirectNow();\n } \n } catch (error) { \n alert(error);\n }\n };\n\n return <form style={{ display: \"flex\", flexDirection: \"column\", maxWidth: \"300px\", margin: \"auto\" }}>\n <h1>Signup</h1>\n <TextField\n label=\"Email\"\n type=\"email\"\n variant=\"outlined\"\n name=\"email\"\n value={form.email}\n onInput={onFormInputChange}\n style={{ marginBottom: \"1rem\" }}\n />\n <TextField\n label=\"Password\"\n type=\"password\"\n variant=\"outlined\"\n name=\"password\"\n value={form.password}\n onInput={onFormInputChange}\n style={{ marginBottom: \"1rem\" }}\n />\n <Button variant=\"contained\" color=\"primary\" onClick={onSubmit}>\n Signup\n </Button>\n <p>Have an account already? <Link to=\"/login\">Login</Link></p>\n </form>\n}\n\nexport default Signup;\n const emailPasswordLogin = async (email, password) => {\n const credentials = Credentials.emailPassword(email, password);\n const authenticatedUser = await app.logIn(credentials);\n setUser(authenticatedUser);\n return authenticatedUser;\n };\n\n const emailPasswordSignup = async (email, password) => {\n try {\n await app.emailPasswordAuth.registerUser(email, password);\n return emailPasswordLogin(email, password);\n } catch (error) {\n throw error; \n }\n };\n",
"text": "Hi, I am currently trying to follow this guide below to create a React app that creates a site that can log in using the Atlas realm user feature.Build a full-stack app using MongoDB Realm GraphQL (without worrying about servers at all)\nReading time: 3 min read\nHowever, whenever I make a request to create a user, it shows the “invalid json (status 400)” error message. I’m fairly new to this, any help would be great! Thanks a lot in advance.Here’s the exact error message that i got (the xxxxxxxx part is replaced with my App ID)Here’s my code for the registration pageAnd here’s the emailPasswordSignup UserContext",
"username": "Storm_Thomason"
},
{
"code": "npm install [email protected]",
"text": "Just run npm install [email protected]",
"username": "Kevin_Daniel"
}
]
| app.emailPasswordAuth.registerUser() invalid json (status 400) | 2022-11-06T19:56:06.389Z | app.emailPasswordAuth.registerUser() invalid json (status 400) | 2,832 |
|
[
"atlas-cluster",
"database-tools",
"backup"
]
| [
{
"code": "",
"text": "Hi guys,I am trying to clone a database using mongodump and mongorestore, however, when specifying the connection string I constantly receive the following error:\nimage1161×48 3.2 KB\nI specify it the following way:mongodump --uri = ‘mongodb+srv://:@cluster0.mv5da.mongodb.net/?retryWrites=true&w=majority’If I use the same connection string to connect mongoshell it does work.Can somebody help me out?",
"username": "Jan_van_Dorth"
},
{
"code": "",
"text": "Try without extra spaces between the parameter name, the equal sign and the parameter value.",
"username": "steevej"
},
{
"code": "",
"text": "Doesn’t work. I came here with the same problem.I hate technology.",
"username": "Karolis_Strazdas"
},
{
"code": "",
"text": "I hate technology.Yes. Technology is not advanced enough for us to really determine what is your problem. So you will have to use the human skill of writing in order to describe your problem with more details. Since a picture is worth a thousand words, a screenshot that shows exactly what your are doing and the error you are getting will be very useful.Too bad @Jan_van_Dorth did not have the courtesy to followup. May be the issue was something else but we won’t know.",
"username": "steevej"
}
]
| Error parsing command line options | 2022-03-16T10:08:46.897Z | Error parsing command line options | 10,775 |
|
null | [
"aggregation",
"queries",
"atlas-search"
]
| [
{
"code": "Song\"isPopular\": \"true\"\"isPopular\": \"true\"isPopular{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": [\n {\n \"foldDiacritics\": false,\n \"maxGrams\": 7,\n \"minGrams\": 3,\n \"tokenization\": \"edgeGram\",\n \"type\": \"autocomplete\"\n }\n ]\n }\n }\n}\ncompoundshouldisPopularlet results = await Song.aggregate([\n {\n $search: {\n index: \"song_autocomplete\",\n compound: {\n must: [{\n autocomplete: {\n query: req.query.name,\n path: \"name\",\n tokenOrder: \"sequential\",\n }\n }],\n should: [{\n text: {\n query: \"true\",\n path: \"isPopular\"\n }\n }],\n }\n },\n },\n {\n $project: {\n name: 1,\n _id: 1,\n },\n },\n {\n $limit: 10,\n },\n ]);\nisPopular",
"text": "I am using the atlas search autocomplete feature for a collection of Songs in MongoDb.If I query the song “Santa Claus Is Comin’ To Town” using autocomplete, I get back many versions of this song by many different artists.A Song in my case has a \"isPopular\": \"true\" attribute, if the song is a popular version of the song.I am looking to prioritize documents that have \"isPopular\": \"true\" so that they are returned first in the autocomplete result list. The way I currently have this search configured does not pay attention to this isPopular attribute. How might I boost these popular records?Here is the index that I defined in Atlas Search:And here is the query I am using to fetch the results using a compound should query factoring the isPopular.Would appreciate any workaround here so that results that have isPopular show up as the first results.",
"username": "Michael_Murphy"
},
{
"code": "boostconstant\"isPopular\": \"true\"",
"text": "Hi @Michael_Murphy, welcome to the MongodB community! You should be able to achieve this by customizing the score of results - try playing around with the boost or constant options. This should allow you to increase the score of documents which satisfy \"isPopular\": \"true\" so that they get returned at the top of the results list.Let me know if this helps!",
"username": "amyjian"
},
{
"code": "score$projectscoreisPopular: \"true\"$search: {\n index: \"song_autocomplete\",\n compound: {\n must: [{\n autocomplete: {\n query: req.query.name,\n path: \"name\",\n tokenOrder: \"sequential\",\n }\n }],\n should: [{\n text: {\n query: \"true\",\n path: \"isPopular\",\n score: {\n // \"constant\": { value: 5 }\n \"boost\": { value: 5 }\n }\n }\n }],\n }\n },\n",
"text": "So I’ve been playing around with score priror, but just did so again and it does not seem to be effecting the results or their scores when I $project the scores with the result. The value is the same with or without the score boost or constant for records where isPopular: \"true\"Am I configuring this incorrectly?",
"username": "Michael_Murphy"
}
]
| Prioritize certain records over others with Autocomplete | 2022-12-05T16:24:37.520Z | Prioritize certain records over others with Autocomplete | 1,728 |
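One thing worth checking with the index shown above: it is defined with dynamic mapping off and only maps the name field, so a text clause on the isPopular path most likely cannot match (and therefore cannot add to the score) until isPopular is added to the index mapping, for example as a string field. Below is a hedged sketch for verifying whether the should clause contributes, by projecting the search score next to each result (the query text is a placeholder).

// mongosh sketch: compare scores with and without the should clause
db.songs.aggregate([
  {
    $search: {
      index: "song_autocomplete",
      compound: {
        must: [{ autocomplete: { query: "santa", path: "name" } }],
        should: [{
          text: { query: "true", path: "isPopular", score: { boost: { value: 5 } } }
        }]
      }
    }
  },
  { $project: { name: 1, isPopular: 1, score: { $meta: "searchScore" } } },
  { $limit: 10 }
])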
null | [
"queries"
]
| [
{
"code": "{\n\t\"_id\" : ObjectId(\"6364ee0b94521b3d7ec62cd0\"),\n\t\"groupId\" : ObjectId(\"61a7c52f97d24b6a5eb75252\"),\n\t\"insertedAt\" : \"2022-11-04T10:48:43.670370Z\",\n\t\"isActive\" : false,\n\t\"likes\" : 0,\n\t\"text\" : \"Hh\",\n\t\"title\" : \"Hh\",\n\t\"type\" : \"groupPost\",\n\t\"uniquePostId\" : \"123\",\n\t\"updatedAt\" : \"2022-11-04T10:48:43.670403Z\",\n\t\"userId\" : ObjectId(\"61e1160c97d24b2673d7f136\")\n}\n{\n\t\"_id\" : ObjectId(\"6364eab794521b3d7efad66d\"),\n\t\"groupId\" : ObjectId(\"61a7c52f97d24b6a5eb75252\"),\n\t\"insertedAt\" : \"2022-11-04T10:34:31.935725Z\",\n\t\"isActive\" : false,\n\t\"likes\" : 0,\n\t\"teamId\" : ObjectId(\"61e13b4e97d24b4901a923e2\"),\n\t\"text\" : \"7y\",\n\t\"title\" : \"Uyy\",\n\t\"type\" : \"teamPost\",\n\t\"uniquePostId\" : \"123\",\n\t\"updatedAt\" : \"2022-11-04T10:34:31.935756Z\",\n\t\"userId\" : ObjectId(\"61e1160c97d24b2673d7f136\")\n}\n{\n\t\"_id\" : ObjectId(\"6364eab794521b3d7efad66b\"),\n\t\"groupId\" : ObjectId(\"61a7c52f97d24b6a5eb75252\"),\n\t\"insertedAt\" : \"2022-11-04T10:34:31.935725Z\",\n\t\"isActive\" : false,\n\t\"likes\" : 0,\n\t\"text\" : \"7y\",\n\t\"title\" : \"Uyy\",\n\t\"type\" : \"groupPost\",\n\t\"uniquePostId\" : \"456\",\n\t\"updatedAt\" : \"2022-11-04T10:34:31.935756Z\",\n\t\"userId\" : ObjectId(\"61e1160c97d24b2673d7f136\")\n}\n\ndb.posts.distinct(\"uniquePostId\",{})\n\n",
"text": "DocumentsuniquPostId “123” is repeated I just want to get the documents only once. If I use distinct() in the query I’m getting only distinct values in the array, not the entire document, is there any way to get entire documentsoutput\n[\n“123”,\n“456”\n]\nI want the entire document not only the values",
"username": "Prathamesh_N"
},
{
"code": "",
"text": "We need closure in your other posts. Share the solution you found or mark one of the post as the solution.",
"username": "steevej"
},
{
"code": "",
"text": "when you stored many documents with that field having the same value, you violated your condition of uniqueness. now you need to decide what makes the documents you want back to be unique among others with the same value.\nwithout this decision, it is hard to make a query. so please make one. (newest, oldest, likes, date, etc.). then the rest would possibly follow an aggregation of “distinct, lookup, sort, limit 1, unwind”. sorting is impossible without criteria.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to filter only unique id documents? | 2022-12-05T11:51:19.163Z | How to filter only unique id documents? | 2,995 |
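A hedged sketch of the aggregation shape suggested above: decide on a tie-break rule (here, the most recently inserted document per uniquePostId), keep one full document per key, and restore the original document shape.

// mongosh sketch
db.posts.aggregate([
  { $sort: { insertedAt: -1 } },                                   // tie-break rule is up to you
  { $group: { _id: "$uniquePostId", doc: { $first: "$$ROOT" } } },
  { $replaceRoot: { newRoot: "$doc" } }
])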
[
"atlas-device-sync"
]
| [
{
"code": "",
"text": "Hey everyone,On our test environment, I noticed an inconsistency between the schema of an object on the Backend (seen in the App Services’ Schema UI) and in our Client app (iOS). On the backend, it’s an array of String, while on the client side, it’s an array of another object.\nWeirdly enough, this hasn’t caused any Sync issue (?!).Realizing I don’t need this property, I decided to simply remove it from the schema on the client side.\nI turned on development mode, and ran the client app to update the schema.\nWent back to the Schema (viewed from the App Services’ Schema UI), but nothing has changed.Now, I decided to remove it manually from the Schema UI, but the following error pops up when I try to save:\n\nScreenshot 2022-12-05 at 15.18.201686×106 24.2 KB\nError is the following: “cannot define relationship for field which does not exist in schema”Cryptic! Is this a bug, or am I missing something?Thanks!",
"username": "Baptiste_Malaguti"
},
{
"code": "",
"text": "Hi @Baptiste_Malaguti,Thanks for posting! A few things here:I turned on development mode, and ran the client app to update the schemaYour local schema can actually be a subset of your server-side schema. So, removing the table from your local data model would not trigger the table to be removed from your server-side schema. Similarly when removing fields from a schema in your local data model. Development mode changes are additive-onlyError is the following: “cannot define relationship for field which does not exist in schema”This likely means that you still have a relationship that references the property you’re trying to remove. From the schemas tab, there’s should be a toggle labeled “Expand relationships”, if you turn that on you’ll be able to see the relationships defined on your table, and remove the ones that you no longer need. See https://www.mongodb.com/docs/atlas/app-services/schemas/relationships for additional details.It’s also worth noting that removing a property on a synced table is a destructive schema change, so you will be prompted to terminate + re-enable sync when trying to save this schema change. See https://www.mongodb.com/docs/atlas/app-services/sync/data-model/update-schema/#std-label-destructive-changes-synced-schema for additional details.",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Property of Realm Schema cannot be removed from the Backend UI ("cannot define relationship for field which does not exist in schema") | 2022-12-05T14:20:21.721Z | Property of Realm Schema cannot be removed from the Backend UI (“cannot define relationship for field which does not exist in schema”) | 1,645 |
|
null | [
"aggregation",
"queries"
]
| [
{
"code": "// Booking\n{\n title: \"Booking 1\",\n days: [\n { start: \"2022-12-01T08:30:00.000+00:00\", end: \"2022-12-03T05:30:00.000+00:00\" }\n ]\n}\n\nconst start = \"2022-12-01\";\nconst end = \"2022-12-07\";\n\nBookingsBooking.aggregate([\n{\n $match: {\n $and: [\n {\n \"days.start\": {\n $gte: new Date(start),\n\t },\n \"days.start\": {\n $lte: new Date(end),\n },\n },\n ],\n }\n }\n]);\n",
"text": "I have a collection of bookings, each booking has an array of days the booking covers, each day has a start and end date.I want to return only the bookings that have atleast 1 day within a set of dates. The below feels like it should work but it returns nothing all the time.I have tried quite a few variations of the above but can’t seem to figure it out.Thanks!",
"username": "Tim_Horwood"
},
{
"code": "",
"text": "Are your start and end dates in Booking Date object or strings?To be able to compare them the types must matches. You are using new Date() in your aggregation so start and end must be Date, not string. From here it looks like they are strings.Your query is wrong because your are query days.start twice, once with new Date( start ) and the second time with new Date( end ).Look at $elemMatch because I suspect you want your 2 conditions to be true for the same array element.",
"username": "steevej"
},
{
"code": "$elemMatch$match: {\n\t\t\t\t\tdays: {\n\t\t\t\t\t\t$elemMatch: {\n\t\t\t\t\t\t\tstart: {\n\t\t\t\t\t\t\t\t$gte: new Date(start).toISOString(),\n\t\t\t\t\t\t\t\t$lte: new Date(end).toISOString(),\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n",
"text": "Thanks for the reply!$elemMatch is one of the variations I have attempted but you were correct about the date format being the issue.The below gives me exactly what I want:Thanks!",
"username": "Tim_Horwood"
},
{
"code": "",
"text": "Mark one of the post as the solution so that other forum’s users know.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to perform an aggregate, returning only documents where an array of subdocuments have a date that falls between 2 dates | 2022-12-05T13:41:03.890Z | How to perform an aggregate, returning only documents where an array of subdocuments have a date that falls between 2 dates | 2,182 |
null | [
"queries",
"node-js",
"data-modeling",
"react-js"
]
| [
{
"code": " {\n name: \"Tesla Inc.\",\n category: \"Automotive\",\n contact: {\n state : {\n name: \"Texas\",\n city: \"Austin\", \n address: {\n streetName: \"Tesla Road\",\n number: '1'\n } \n }\n }\n }\nlet address = response.contact.state.city.addressresponse.nameTesla Inc.console.log(response.contact.state.city.address.streetName)Uncaught TypeError: Cannot read properties of undefined (reading 'state')storeRoutes.route(\"/enterprise\").get(function (req, res) {\n let db_connect = dbo.getDb(\"res\");\n\n const query = { name : \"Tesla\"};\n \n db_connect\n .collection(\"stores\")\n .findOne(query,function (err, result) {\n if (err) throw err;\n res.json(result);\n });\n });\nfunction GetEnterprise() {\n const [store, setStore] = useState({\n });\n \n useEffect(() => {\n async function fetchData() {\n const response = await fetch(`http://localhost:5000/enterprise`);\n \n if (!response.ok) {\n const message = `An error has occurred: ${response.statusText}`;\n window.alert(message);\n return;\n }\n \n const record = await response.json();\n if (!record) {\n window.alert(`Record with id not found`);\n return;\n } \n\n setStore(record);\n }\n \n fetchData();\n \n return;\n }, [1]); \n \n return store;\n }\nlet enterprise = GetEnterprise();\n console.log( enterprise.name );\nTesla Incconsole.log( enterprise.contact.state.adress.number ); Uncaught TypeError: Cannot read properties of undefined (reading 'state') let res_json = JSON.stringify(enterprise); {\"_id\":\"637e4397f6723844191aa03d\",\"name\":\"Tesla\",\"category\":\"Automotive\",\"contact\":{\"state\":{\"name\":\"Texas\",\"city\":\"Austin\",\"address\":{\"streetName\":\"Tesla Road\",\"number\":\"1\"}}}} ",
"text": "I am using the MERN stack for my current project. So I am facing this problem:Let’s say that I have the following document in MongoDB :After I get the response from the server I want to use the address subdocument in a format like:\nlet address = response.contact.state.city.addressWhile for response.name console logs : Tesla Inc.\nif I use console.log(response.contact.state.city.address.streetName), I get :\nUncaught TypeError: Cannot read properties of undefined (reading 'state')Follows my coding processThis is my Express route for quering the database :This a Retrieve Data Function that returns the data object:Now lets take a look at the code in my App.js. I declare a variable called “enterprise” which is supposed to be the entire document as JavaScript object.\nNote that if a property of “enterprise” object is just a key-value pair it is been retrieved fine. For example :Logs Tesla Inc . Fine!But if I try to retrieve an embedded document or even a property of this, I get wrong cause it has not been defined. For example :\nconsole.log( enterprise.contact.state.adress.number ); \nLogs :\nUncaught TypeError: Cannot read properties of undefined (reading 'state'). As mentioned before.\nFor some reason seems like I can not use the full response as a JavaScript object.Fun fact if i implement :\n let res_json = JSON.stringify(enterprise); \nMy console logs the whole object as expected:\n {\"_id\":\"637e4397f6723844191aa03d\",\"name\":\"Tesla\",\"category\":\"Automotive\",\"contact\":{\"state\":{\"name\":\"Texas\",\"city\":\"Austin\",\"address\":{\"streetName\":\"Tesla Road\",\"number\":\"1\"}}}} \nI guess it has something to do with the async and await functions. That is because if I make an irrelevant change and save my document works fine. But if I refresh the page from the browser get back to its previous state.\nWhat am I doing wrong?\nAny suggestions on retrieving a document and convert it to JS object without any “cuts” in my app?\nThanks…",
"username": "nikos_anastasiou"
},
{
"code": "",
"text": "The object address is not a field from the object response.contact.state.city. It is a field from the object response.contact.state.response.contact.state.city.address.streetName",
"username": "steevej"
},
{
"code": "",
"text": "Yes, i know. That’s not the case though. I just haven’t found a way to edit the post yet",
"username": "nikos_anastasiou"
},
{
"code": "",
"text": "If this issue is resolved, please mark one of the post as the solution. You could also make a new post with the solution.",
"username": "steevej"
}
]
| Response embedded object logs "undefined". MERN stack | 2022-12-01T11:43:52.059Z | Response embedded object logs “undefined”. MERN stack | 3,321 |
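A minimal sketch of the fix implied by this thread, assuming the GetEnterprise() hook shown above: on the first render the state is still the empty object passed to useState, so any nested path such as contact.state is undefined until the fetch resolves. Guard for that render (or use optional chaining); the component name below is invented for illustration.

```js
function EnterpriseCard() {
  const enterprise = GetEnterprise();

  // First render: the fetch has not resolved yet, so state is still {}
  if (!enterprise || !enterprise.contact) {
    return <p>Loading…</p>;
  }

  // Safe once the document has arrived (address lives under contact.state)
  const street = enterprise.contact?.state?.address?.streetName;
  return <p>{enterprise.name}: {street}</p>;
}
```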
null | [
"queries",
"java"
]
| [
{
"code": "{ \"_id\" : ObjectId(\"634675b9aa49cf504848e414\"), \"userId\" : \"5a26886656330579dc22009c\", \"userFullName\" : \"MarioRossi\", \"startDate\" : ISODate(\"2022-11-02T23:00:00Z\"), \"endDate\" : ISODate(\"2022-11-03T23:00:00Z\"), \"requestDate\" : ISODate(\"2022-10-12T08:07:19.931Z\"), \"approved\" : true }",
"text": "I’m unable to understand how to create a query with dates to get documents. I’ve the following document:{ \"_id\" : ObjectId(\"634675b9aa49cf504848e414\"), \"userId\" : \"5a26886656330579dc22009c\", \"userFullName\" : \"MarioRossi\", \"startDate\" : ISODate(\"2022-11-02T23:00:00Z\"), \"endDate\" : ISODate(\"2022-11-03T23:00:00Z\"), \"requestDate\" : ISODate(\"2022-10-12T08:07:19.931Z\"), \"approved\" : true }if I query using:db.plan.find({“startDate” : {$gte: ISODate(“2022-11-02T00:00:00Z”)}, “endDate”: {$lte: ISODate(“2022-11-03T23:00:00Z”)}})I obtain the document. But if I add a day to startDatedb.plan.find({“startDate” : {$gte: ISODate(“2022-11-03T00:00:00Z”)}, “endDate”: {$lte: ISODate(“2022-11-03T23:00:00Z”)}})I don’t obtain the document. In my mind, I think that 2022-11-03T00:00:00Z is greater than 2022-11-02T23:00:00Z. Where am I wrong?",
"username": "morellik"
},
{
"code": "",
"text": "In my mind, I think that 2022-11-03T00:00:00Z is greater than 2022-11-02T23:00:00ZIt is. That is why you do not get the document. In the document startDate is2022-11-02T23:00:00Zand your query means find document where the startDate (of the document) is greater or equal to 2022-11-03T00:00:00Z.",
"username": "steevej"
},
{
"code": "",
"text": "You are ready. My mistake.I’ve to check if every day of the month is in the interval between startDate and EndDate. How can I do that?",
"username": "morellik"
},
{
"code": "",
"text": "I do not understand exactly what you want. What do you mean by every day of the month? Is the month the query or the store dates? Is startDate and endDate fields of your document or input for your query? Share sample documents and query dates.",
"username": "steevej"
},
{
"code": "{ \"_id\" : ObjectId(\"634675b9aa49cf504848e414\"), \"userId\" : \"5a26886656330579dc22009c\", \"userFullName\" : \"MarioRossi\", \"startDate\" : ISODate(\"2022-11-02T23:00:00Z\"), \"endDate\" : ISODate(\"2022-11-03T23:00:00Z\"), \"requestDate\" : ISODate(\"2022-10-12T08:07:19.931Z\"), \"approved\" : true }for(int day=1; day<= howManyDays; day++) {\n LocalDateTime date1 = LocalDateTime.of(year, month, day, 00, 00).minusDays(1);\n LocalDateTime date2 = LocalDateTime.of(year, month, day, 23, 00).plusDays(1);\n Date d1 = Date.from(date1.atZone(defaultZoneId).toInstant());\n Date d2 = Date.from(date2.atZone(defaultZoneId).toInstant());\n List<Plan> plan = planRepository.findByDateRange(userId, d1, d2);\n....\n}\n@Query(\"{'userId': ?0, 'startDate': {$gte: ?1}, 'endDate': {$lte: ?2}}\") List<Plan> findByDateRange(String userId, Date date1, Date date2);",
"text": "The document is the following:\n{ \"_id\" : ObjectId(\"634675b9aa49cf504848e414\"), \"userId\" : \"5a26886656330579dc22009c\", \"userFullName\" : \"MarioRossi\", \"startDate\" : ISODate(\"2022-11-02T23:00:00Z\"), \"endDate\" : ISODate(\"2022-11-03T23:00:00Z\"), \"requestDate\" : ISODate(\"2022-10-12T08:07:19.931Z\"), \"approved\" : true }In my program, I loop over all days in the current month and I’ve to check if the date is between startDate and EndDate.The following is a peace of code:The wrong query is the following:@Query(\"{'userId': ?0, 'startDate': {$gte: ?1}, 'endDate': {$lte: ?2}}\") List<Plan> findByDateRange(String userId, Date date1, Date date2);",
"username": "morellik"
},
{
"code": "",
"text": "Why do you loop over all days to see if each is between startDate and endDate?I think that if the first of the month is $gte than startDate and the last of the month is $lte than endDate then all days are between startDate and endDate.Dates are stored UTC and you use LocalDateTime may this is why you do not get the result you wish.I added JAVA tag to your post since this code looks like JAVA.",
"username": "steevej"
},
{
"code": "",
"text": "I’ve two documents: plan and presence. When a user wants to go on holiday, the program register the days range in plan as: { “_id” : ObjectId(“634675b9aa49cf504848e414”), “userId” : “5a26886656330579dc22009c”, “userFullName” : “MarioRossi”, “startDate” : ISODate(“2022-11-02T23:00:00Z”), “endDate” : ISODate(“2022-11-03T23:00:00Z”), “requestDate” : ISODate(“2022-10-12T08:07:19.931Z”), “approved” : true }in presence the user register his timework as:\n{ “_id” : ObjectId(“637b947c4e70fa4c14f0d2e6”), “userId” : “5a26886656330579dc22009c”, “month” : 11, “day” : 9, “year” : 2022, “entranceTime” : “08:30”, “breakStart” : “12:30”, “breakEnd” : “13:00”, “exitTime” : “17:00”, “totalWorkHours” : \"8:0 }So, for each day of the month, I’ve to check if the user is on holiday or not. I do that recreating the date as\nLocalDateTime date1 = LocalDateTime.of(year, month, day, 00, 00).minusDays(1);\nLocalDateTime date2 = LocalDateTime.of(year, month, day, 23, 00).plusDays(1);and querying if the date1 and date2 are in the range of startDate and endDate in plan document.",
"username": "morellik"
},
{
"code": "",
"text": "I solved using another approach.",
"username": "morellik"
},
{
"code": "",
"text": "It would be nice if you could share how you solved it.That would benefit all including the new user that has something similar in",
"username": "steevej"
},
{
"code": "",
"text": "When users ask for holidays, the program adds one or more entries in the presence collection also. So I haven’t to query for dates range in the plan collection because I’ve already one or more documents with the information in presence collection.",
"username": "morellik"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Query between two dates | 2022-11-15T15:19:17.792Z | Query between two dates | 4,536 |
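For readers with the same requirement as this thread: the repository query quoted above inverts the overlap test. A plan document covers a given day when the plan starts on or before the end of that day and ends on or after its start, i.e. startDate needs $lte and endDate needs $gte. A hedged Spring Data sketch along those lines (the method name is invented, and the thread's author ultimately solved the problem a different way, as described above):

```java
// Plans whose [startDate, endDate] interval covers the day [dayStart, dayEnd]:
// the plan must start on/before the day's end and end on/after the day's start.
@Query("{'userId': ?0, 'startDate': {$lte: ?1}, 'endDate': {$gte: ?2}}")
List<Plan> findPlansCoveringDay(String userId, Date dayEnd, Date dayStart);
```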
null | [
"node-js",
"compass",
"connecting",
"atlas-cluster"
]
| [
{
"code": "",
"text": "Hi there,I am able to connect to Mongo DB through Compass and code on my local system but when I try to deploy the code to an Ubuntu server, it is giving me this error:ERROR IN CONNECTING TO DATABASE Error: querySrv ENOTFOUND _mongodb._tcp.tha.ovfl8.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (internal/dns/promises.js:172:17) {\nerrno: undefined,\ncode: ‘ENOTFOUND’,\nsyscall: ‘querySrv’,\nhostname: ‘_mongodb._tcp.tha.ovfl8.mongodb.net’\n}I am quite sure that this is a problem with Ubuntu because I’ve tried running it on Windows and Mac both and it was working fine.I’ve whitelisted the IP and my Mongo DB is not dormant.Can anybody please help? I’m quite new to Mongo DB atlas",
"username": "Bhuwan_Devshali"
},
{
"code": "",
"text": "Hi @Bhuwan_Devshali and welcome in the MongoDB Community !Before anything else, can you try to connect from this Ubuntu server to MongoDB Atlas using mongosh using the command line provided when you click “connect” in Atlas?This will make sure that it’s not an issue related to network, VPN, firewalls, etc but currently my guess is that your Ubuntu server can’t connect to port 27017 for some internal security reasons.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Head back to the network settings of your cluster and set it to “allow access anywhere” and try again. If it still fails to connect it is DNS/Firewall/VPN settings of your Ubuntu server as @MaBeuLux88 mentioned. Else, you are setting wrong IP address in your whitelist.",
"username": "Yilmaz_Durmaz"
},
{
"code": ";QUESTION\ntha.ovfl8.mongodb.net. IN ANY\n;ANSWER\n;AUTHORITY\nmongodb.net. 900 IN SOA ns-761.awsdns-31.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 60\n;ADDITIONAL\n",
"text": "You URI is wrong. The cluster tha.ovfl8.mongodb.net does not exist.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @steevej ,The URL exists, I’ve changed the URL slightly to post in a public forum.",
"username": "Bhuwan_Devshali"
},
{
"code": "",
"text": "@Yilmaz_Durmaz I’ve allowed access from Anywhere. Still the same error",
"username": "Bhuwan_Devshali"
},
{
"code": "",
"text": "Hey @MaBeuLux88 , I am able to connect to Mongo DB from my Ubuntu Server using Mongosh.",
"username": "Bhuwan_Devshali"
},
{
"code": "curl http://portquiz.net:27017",
"text": "you can tag/answer in a single post If it still fails to connect it is DNS/Firewall/VPN settings of your Ubuntu serverWe see this error a lot and it is almost always related to the host environment. let me highlight them.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "The port was already open.\nIf I am able to connect through MONGOSH, i should be able to connect via Node js code also. It is not a network issue, it is something else.\nCould you please explain the meaning of this error message so that I can debug the issue on my ownERROR IN CONNECTING TO DATABASE Error: querySrv ENOTFOUND _mongodb._tcp.thoughtsutra.ovfl8.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (internal/dns/promises.js:172:17) {\nerrno: undefined,\ncode: ‘ENOTFOUND’,\nsyscall: ‘querySrv’,\nhostname: ‘_mongodb._tcp.thoughtsutra.ovfl8.mongodb.net’\n}",
"username": "Bhuwan_Devshali"
},
{
"code": "",
"text": "It is hard to help if you hide critical details.The URL exists, I’ve changed the URL slightly to post in a public forum.When the error is DNS ENOTFOUND, it is kind of critical to see what is the connection string.The cluster thoughtsutra.ovfl8.mongodb.net does not exist so you get ENOTFOUND.If I am able to connect through MONGOSH, i should be able to connect via Node js code also.Yes you should, but I am pretty sure you cannot with thoughtsutra.ovfl8.mongodb.net in your URI.It is not a network issue.If your URI really has thoughtsutra.ovfl8.mongodb.net as the cluster, it is a network issue.",
"username": "steevej"
},
{
"code": "mongodb://mongodb+srv://mongodb://",
"text": "another reason that comes to mind is that you are using an older driver version (which we keep forgetting to ask). What driver version are you trying to use?there are two connection string versions: old mongodb:// and new mongodb+srv://. in the old format, you need to give the address of each cluster member. in the new one, only cluster address is given and it is resolved to member addresses with an SRV record resolver. old drivers do not understand this new string format and fail to resolve cluster addresses.from the same location you get your connection string (connect with application) select the oldest nodejs driver version and use that mongodb:// connection string and report back so we would know if that is the issue.",
"username": "Yilmaz_Durmaz"
},
{
"code": "mongodb+srv://",
"text": "Since the code is ENOTFOUND for syscall querySrv with a host name that starts with _mongodb._tcp the URI is of the formmongodb+srv://",
"username": "steevej"
}
]
| Not able to connect MongoDB through Node.js on Ubuntu | 2022-12-01T15:45:12.561Z | Not able to connect MongoDB through Node.js on Ubuntu | 4,141 |
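Two hedged checks that follow from the answers in this thread, to be run on the Ubuntu server itself; the host names are placeholders and should be copied from the Atlas Connect dialog:

```sh
# 1. Does the SRV record resolve from this machine? (querySrv ENOTFOUND points at DNS)
nslookup -type=SRV _mongodb._tcp.<cluster>.<hash>.mongodb.net

# 2. If SRV lookups are blocked, try the long-form (non-SRV) string offered for
#    older Node.js driver versions in Atlas, e.g.:
# mongodb://user:pass@<host-00>:27017,<host-01>:27017,<host-02>:27017/?ssl=true&replicaSet=<rs>&authSource=admin
```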
[
"server",
"installation"
]
| [
{
"code": "",
"text": "\nindex1057×335 75.9 KB\n",
"username": "Mari_Mikhaleva"
},
{
"code": "",
"text": "What is your os?\nWhat command you used to start the service?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi! My OS is Ubuntu.\nI used these commands to install Mongo 4.4wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -\necho “deb [ arch=amd64,arm64 ] MongoDB Repositories focal/mongodb-org/4.4 multiverse” | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list\nsudo apt-get update\nsudo apt-get install mongodb-org=4.4.8 mongodb-org-server=4.4.8 mongodb-org-shell=4.4.8 mongodb-org-mongos=4.4.8 mongodb-org-tools=4.4.8\nsystemctl start mongod\nsystemctl enable mongod\nsystemctl status mongod",
"username": "Mari_Mikhaleva"
},
{
"code": "mongod --logpath ~/mongocrashes.log",
"text": "I suspect two things for now:try running mongod --logpath ~/mongocrashes.log in the terminal. it should give a better log so we can examine it. use “upload” button in your answer and attach it.",
"username": "Yilmaz_Durmaz"
},
{
"code": "Error reading config file: No such file or directory\nsystemctl cat mongodb | grep ExecStart\nsystemctl cat mongodb | grep User\n",
"text": "The error message isUsually the configuration file is /etc/mongodb.conf but since it could be different you can verify the actual name used by the service using:What ever is the file specified by -f or –config, that file needs to exist and be readable by the user specified in the service file. You may find the user with:Next time you publish logs, screens output or code, please doing using the formatting specified in Formatting code and log snippets in posts. This way we can cut-n-paste in our reply or experimentation rather than typing over.",
"username": "steevej"
},
{
"code": "",
"text": "I suspect since sudo is not used with systemctl command it is unable to start mongod(not able to read mongod.conf file)",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "That could be the case but my understanding a little bit different.The configuration file is read by mongod, not by systemctl. The process mongod is started as the User specified in the service file, so only this user needs to have read access to the file.But I will test your sudo theory once I am on my linux machine.",
"username": "steevej"
},
{
"code": ": steevej ; systemctl status mongodb\n: steevej ; systemctl cat mongodb | grep ExecStart\n: steevej ; systemctl cat mongodb | grep User\n: steevej ; ls -l /etc/mongodb.conf\n: steevej ; systemctl start mongodb\n: steevej ; systemctl status mongodb\n\n\n: steevej ; systemctl status mongodb\n○ mongodb.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongodb.service; disabled; preset: disabled)\n Active: inactive (dead)\n Docs: https://docs.mongodb.org/manual\n\nNov 25 08:55:36 xps13 systemd[1]: Started MongoDB Database Server.\nNov 25 08:58:10 xps13 systemd[1]: Stopping MongoDB Database Server...\nNov 25 08:58:10 xps13 systemd[1]: mongodb.service: Deactivated successfully.\nNov 25 08:58:10 xps13 systemd[1]: Stopped MongoDB Database Server.\nNov 25 08:58:10 xps13 systemd[1]: mongodb.service: Consumed 2.309s CPU time.\n\n: steevej ; systemctl cat mongodb | grep ExecStart\nExecStart=/usr/bin/mongod --config /etc/mongodb.conf\n\n: steevej ; systemctl cat mongodb | grep User\nUser=mongodb\n\n: steevej ; ls -l /etc/mongodb.conf\n-rw------- 1 mongodb mongodb 678 Jul 13 10:42 /etc/mongodb.conf\n\n: steevej ; systemctl start mongodb\n● mongodb.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongodb.service; disabled; preset: disabled)\n Active: active (running) since Fri 2022-11-25 09:00:46 EST; 40ms ago\n Docs: https://docs.mongodb.org/manual\n Main PID: 192273 (mongod)\n Memory: 2.0M\n CPU: 23ms\n CGroup: /system.slice/mongodb.service\n └─192273 /usr/bin/mongod --config /etc/mongodb.conf\n\nNov 25 09:00:46 xps13 systemd[1]: Started MongoDB Database Server.\n: steevej ; mongod --version\ndb version v5.0.3\n",
"text": "Here comes the result.My bash prompt is: steevej ;So even if only mongodb user has read access systemctl still starts correctly. As a note, when I do systemctl start|stop without sudo, I have a dialog that ask for my passwd. So it looks like systemctl does a sudo behind the scene.More over, if I change the owner of /etc/mongodb.conf so that user mongodb does not have read access, it fails to start with the error Error opening config file: Permission denied.If I remove the configuration, it fails with Error opening config file ‘/etc/mongodb.conf’: No such file or directory. The error message is a little bit more explicit but I am using a more recent version:So the only conclusion I can make is that the configuration file does not exist.",
"username": "steevej"
},
{
"code": "systemctl cat mongodb | grep ExecStart\n",
"text": "@Mari_Mikhaleva , above command is the crucial part we need from your side. can you please provide us its output.",
"username": "Yilmaz_Durmaz"
},
{
"code": "/usr/lib/systemd/system/mongod.serviceExecStart=/usr/bin/mongod --config /etc/mongod.conf",
"text": "mongod.conf.txt (667 Bytes)here is the default config file you can use. if you somehow deleted your config file, copy this one. but remember, you will still need the path we were asking for.or, put this file where ever you want (and accessible by systemctl) and edit the following line in /usr/lib/systemd/system/mongod.service (at least ubuntu 20 uses this path)",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "This is my file mongocrashes.log\nmongocrashes.log (1.2 KB)",
"username": "Mari_Mikhaleva"
},
{
"code": "txt",
"text": "the file itself gives an error. edit your last post, rename the file to have txt extension, and upload/attach again.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "docs.txt (1.3 KB)",
"username": "Mari_Mikhaleva"
},
{
"code": "/data/dbmongodsudo chmod 777 /data/db",
"text": "Attempted to create a lock file on a read-only directory: /data/dbthis one is now related to folder permissions over /data/db. I am not a Linux guru, so please check how you give write access to mongod process.assuming it is your personal computer, just for the moment, give full access to it with sudo chmod 777 /data/db so we can see if you get any more errors.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "This is a output of command:ExecStart=/usr/bin/mongod --quiet --config /etc/mongod.conf",
"username": "Mari_Mikhaleva"
},
{
"code": "",
"text": "I am guessing you corrected config file issue because your new log file says you don’t have write permission for the path to store database files. please check my above post and share again if you still have errors.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "hi again @Mari_Mikhaleva, forum posts behave a bit strangely about sending new posts. sometimes just gives notifications, sometimes sends posts in an email. your last one (the one you seems deleted) came in an email, and I checked the log file.Initializing full-time diagnostic data capture with directory ‘/data/db/diagnostic.data’\nwaiting for connections on port 27017this means your config file is now fine and the data folder has the correct permissions so that the server is good to go. it is something nice to see.with that in hand, if you decide to reinstall mongodb you have now a way to check a few possible locations.Good Luck ",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "forum posts behave a bit strangely about sending new posts. sometimes just gives notifications, sometimes sends posts in an emailThat is something you may configure in your account preferences. I was happy when I was able to configure my account to not receive email anymore.",
"username": "steevej"
}
]
| I can't start mongodb server, because it status=2/INVALIDARGUMENT | 2022-11-24T15:11:19.619Z | I can’t start mongodb server, because it status=2/INVALIDARGUMENT | 11,215 |
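One hedged follow-up for future readers of this thread: chmod 777 above was only a quick diagnostic. Once the server is confirmed working, a more conventional setup is to hand the data directory to the user the service runs as (mongodb, per the unit file shown earlier) and restart through systemd:

```sh
sudo chown -R mongodb:mongodb /data/db   # ownership instead of world-writable
sudo chmod 750 /data/db
sudo systemctl restart mongod
sudo systemctl status mongod             # should report "active (running)"
```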
|
null | []
| [
{
"code": "",
"text": "I am rather new to MongoDB, and worked a lot with Firebase before.\nIn Firebase you can create collections within a collection by adding one to an existing document.\nMy question: is it possible to have collections within a collection in MongoDB Atlas Dataservice?",
"username": "Daniel_Brunner"
},
{
"code": "",
"text": "Is it possible to have collections within a collection in MongoDB Atlas Dataservice?No you cannot since a collection is part of a database.Is it possible to have collections within a collection in MongoDB Atlas Dataservice?Yes you can if you think that a collection is a list of document. You may see an array of documents as a collection. But it won’t be a collection as we know it in MongoDB.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Nested collections | 2022-12-05T07:43:16.451Z | Nested collections | 1,964 |
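A short illustration of the second point in the answer above: modelling a Firebase-style sub-collection as an array of embedded documents inside a parent document. The collection and field names are invented for the example.

```js
db.customers.insertOne({
  name: "Acme Ltd.",
  // plays the role of a nested "collection": an array of embedded documents
  orders: [
    { orderId: 1, total: 120, placedAt: new Date("2022-12-01") },
    { orderId: 2, total: 80,  placedAt: new Date("2022-12-03") }
  ]
})

// querying inside the embedded "collection"
db.customers.find({ "orders.total": { $gt: 100 } })
```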
[
"singapore-mug"
]
| [
{
"code": "MongoDB Solutions ArchitectMongoDB Solutions Architect",
"text": "\nimage1130×637 155 KB\nWelcome back to the second community gathering. The theme of this gathering is focused on MongoDB document model and query language. To make things fun, we will be playing an escape room game to test your MQL skills (do not worry if you are not familiar, the purpose is for us all to learn together). As usual, there will be food and prizes!Event Type: In-Person\nLocation: Workshop@Lavender, Aperia Mall, 12 Kallang Ave, #01-56, Singapore 339511MongoDB Solutions Architect\navatar726×705 82.8 KB\nMongoDB Solutions Architect",
"username": "DerrickChua"
},
{
"code": "",
"text": "Dear community, we have received feedback that many people have started going away for their holiday plans and will not be able to join in. With the view of making every SG MUG gathering a vibrant and enjoyable one, we have decided to postpone this gathering till after the new year. We sincerely apologize to those who have already RSVPed and blocked our your calendar for this event. We will keep everyone updated when we have a new date.Happy holidays everyone and see you in the new year!",
"username": "DerrickChua"
}
]
| [Postponed] Singapore MUG: Escape Game Edition | 2022-11-21T13:05:05.528Z | [Postponed] Singapore MUG: Escape Game Edition | 3,077 |
|
null | [
"aggregation",
"queries",
"node-js"
]
| [
{
"code": "Users {\nname : \"XXX\",\nage:\"20\"\n}\n{\nname : \"XXX\",\nage:\"26\"\n}\n\n{\nname : \"XXX\",\nage:\"23\"\n}\n\nthis.userMoadel.aggregation([\n{\n$match: {age: !23}\n}\n\n])\n",
"text": "I have collection of docs that looks like (Dummy data)I want to to query all users except that one has 23 years old\nI implement function like this :It doesn’t work is there any solution ?\nI have to work with aggregation cuz this query should be after a lookup stage",
"username": "skander_lassoued"
},
{
"code": "",
"text": "What you need is https://www.mongodb.com/docs/manual/reference/operator/query/ne/.Please followup on your other threads",
"username": "steevej"
},
{
"code": "",
"text": "One more observation, your sample data age field is of type String, it wont work for numeric Operation",
"username": "psram"
},
{
"code": "",
"text": "$nin was the missed param thank you all",
"username": "skander_lassoued"
}
]
| How can $match all data except one | 2022-11-01T13:12:35.366Z | How can $match all data except one | 2,913 |
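For completeness, a hedged sketch of the two operators mentioned in this thread inside a $match stage; the ages are kept as strings because that is how they appear in the sample documents above.

```js
// exclude a single value with $ne
db.users.aggregate([ { $match: { age: { $ne: "23" } } } ])

// or exclude one or more values with $nin
db.users.aggregate([ { $match: { age: { $nin: ["23"] } } } ])
```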
null | [
"mongodb-shell",
"golang"
]
| [
{
"code": "NewRegistryBuilder()NewRespectNilValuesRegistryBuilder()tM := reflect.TypeOf(bson.M{})\nregistry := bson.NewRegistryBuilder().RegisterTypeMapEntry(bsontype.EmbeddedDocument, tM).Build()\nclientOpts := options.Client().ApplyURI(URI).SetAuth(info).SetRegistry(registry)\nclient, err := mongo.Connect(ctx, clientOpts)\nNewRespectNilValuesRegistryBuilder().Build()SetRegistry()tM := reflect.TypeOf(bson.M{})\nregistry := bson.NewRegistryBuilder().RegisterTypeMapEntry(bsontype.EmbeddedDocument, tM).Build()\nclientOpts := options.Client().ApplyURI(SOMEURI).SetAuth(info).SetRegistry(registry)\n//not sure if the below line is of right usage ??\nclientOpts.SetRegistry(NewRespectNilValuesRegistryBuilder().Build())\nclient, err := mongo.Connect(ctx, clientOpts)\n\n",
"text": "Currently I already use NewRegistryBuilder() in the mongo client options , Somehow i want to make use of NewRespectNilValuesRegistryBuilder() from mongocompact so that I can set the nilvalues in the models as empty array .Currently I have something like below , To map the old mgo behaviorNow somehow I want to incorporate and set the NewRespectNilValuesRegistryBuilder().Build() to true in the same connect options , Not sure how to do that . Also the other problem that i see is i’m not able to see the mongocompat registry file in my vendor directorycan i do something like this , SetRegistry() two times ? like below ?can anyone help me with this ?related issue here",
"username": "karthick_d"
},
{
"code": "mgocompat.NewRegistryBuilder()mgocompat.NewRespectNilValuesRegistryBuilder()SetRegistryClientOptionsConnectSetRegistryClientOptionsregistry := mgocompat.NewRegistryBuilder().Build()\nclientOpts := options.Client().ApplyURI(URI).SetAuth(info).SetRegistry(registry)\nclient, err := mongo.Connect(ctx, clientOpts)\nEmbeddedDocumentbson.Mmgocompat",
"text": "Hello, @karthick_d . Thanks for your question.We definitely recommend using the mgocompat registries to mimic the encoding and decoding behavior of mgo (see docs here). If you’re trying to have nil, uninitialized Go slices Marshal to BSON empty arrays and not to BSON null, you should just use mgocompat.NewRegistryBuilder(). If you want the other features of the mgocompat registry, but you would like nil, uninitialized Go slices to Marshal to BSON null, you should use mgocompat.NewRespectNilValuesRegistryBuilder() (from your comment “nilvalues in the models as empty array”, this might not be your desired behavior).You’ll want to call SetRegistry a single time on the ClientOptions passed to Connect. SetRegistry will change the registry used internally when we Marshal and Unmarshal values to and from BSON. Calling it multiple times on the same ClientOptions struct will simply reset the value.Given the context of your code; I would suggest something like:I can see you’re trying to register a type map entry of EmbeddedDocument for bson.M. The mgcompat registries do that by default, so you shouldn’t need that logic.Also the other problem that i see is i’m not able to see the mongocompat registry file in my vendor directoryI believe the vendor directory will only contain the packages that are used in your application or the packages imported by those packages. You’re likely not referring to mgocompat anywhere in your code, so it’s not vendored.",
"username": "benjirewis"
},
{
"code": "mgocompat",
"text": "mgocompatThanks for the workaround , this works just fine . But is it going to be an expensive operation and is it safe to do this ?",
"username": "karthick_d"
},
{
"code": "",
"text": "But is it going to be an expensive operation and is it safe to do this ?While the mgocompat registries change encoding and decoding behavior, they should have more or less the same performance as the default registry. The mgocompat registry is also public, stable API within the driver, so, if by “safe” you mean stable, then yes. There should be no backward-breaking changes to their behavior in the 1.x versions of the driver.",
"username": "benjirewis"
},
{
"code": "",
"text": "@benjirewis Thanks for the help , Appreciate it.",
"username": "karthick_d"
},
{
"code": "",
"text": "encoding and decoding bIn future if i want to set the empty array back to nil , will that be doable ? if yes how can I do that ?",
"username": "karthick_d"
}
]
| How to use mongocompact package in mongoclient connect options | 2022-12-02T09:23:01.085Z | How to use mongocompact package in mongoclient connect options | 2,134 |
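On the last question in this thread (mapping back to nil later): the respect-nil-values builder mentioned earlier is the intended switch, so nil slices and maps marshal to BSON null instead of empty arrays/documents. A hedged sketch reusing the URI, info and ctx variables from the snippets above:

```go
import (
	"go.mongodb.org/mongo-driver/bson/mgocompat"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// nil slices/maps now marshal to BSON null rather than an empty array/document
registry := mgocompat.NewRespectNilValuesRegistryBuilder().Build()
clientOpts := options.Client().ApplyURI(URI).SetAuth(info).SetRegistry(registry)
client, err := mongo.Connect(ctx, clientOpts)
```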
[]
| [
{
"code": "{\n \"error\": \"no authentication methods were specified\",\n \"error_code\": \"InvalidParameter\",\n \"link\": \"https://realm.mongodb.com/groups/...[truncated]\n}\n",
"text": "Problem setting up a Webhook / HTTPS EndpointFollowing along with this video (which, despite being created this year, is apparently really out of date – application is seemingly undergoing massive change):Learn how to create a data api with MongoDB Atlas in 10 minutes or lessAt the 7:00 mark, Michael Lynn describes the settings for setting up a “3rd party service” (or “webhook”) (now apparently known as “HTTPS Endpoints”) - specifically, choosing an authentication method (Application | System | UserId | Script).I am configuring my own HTTPS Endpoint, and nowhere am I given the option to configure this.when I try to test via Postman, I get the errorI thought maybe I had missing config within Realm - and found that I had no auth providers enabled. I enabled and deployed “API Keys”, then created an API key…I am trying to post form data to that endpoint. In Postman, I am:…but I get the same error.Any thoughts? Thank you!",
"username": "Greg_Hammond"
},
{
"code": "x-api-key{\n \"api-key\": \"<User's API Key>\"\n}\n",
"text": "@Greg_Hammond ,It sounds like the webhook definition is set of “application authentication” on the webhook definition. This method requires to provide at least one of your enabled authentication methods when calling the weebhook (for example a user/password via HTTP basic authetntication).The API KEY requires you to specify a specific field either in the body or header of the call and its not x-api-key but :If you do not need auth for the webhook you must set the authentication method to “SYSTEM”. Like in the tutorial :\n\nimage1533×789 54.7 KB\nBest regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you Pavel, I really appreciate the response. Changing the auth field from ‘x-api-key’ to ‘api-key’ in the header worked for me.Apparently my endpoint was indeed configured to “application authentication”, as sending the api-key worked. Again, though, the current UI for configuring this stuff doesn’t even show me the Authentication section (as it does in the tutorial) – so I have no way of changing that setting, or even knowing what it is. Is that currently on a dev list?Thanks again,\nGreg",
"username": "Greg_Hammond"
},
{
"code": "",
"text": "Going to answer my own question here, for others’ benefit:The authentication method is exposed on the Settings tab within Edit Function. This is also where the “Can Evaluate” JSON expression can be specified.Sorry for the confusion, hopefully this helps others going forward.Greg",
"username": "Greg_Hammond"
},
{
"code": "",
"text": "Greg, you’re an amazing guy, and I wish you nothing but happiness in life.This small hidden setting had me searching for about 1.5 hours, and I thought my saving grace would be finding that video that was only 10 months old.Thank you so much for helping me find this setting, and understanding that was the reason why my API wasn’t working.Also, thanks so much for coming back to post the update even after you found the resolution yourself.",
"username": "SPat"
},
{
"code": "",
"text": "Hi @Greg_Hammond and @SPat ,I see the confusion and I will highlight this UI change that might confuse users for improvement.Thanks for sharing your experience.Pavel",
"username": "Pavel_Duchovny"
},
{
"code": "no authentication methods were specified\nAuthorization: `Bearer ${_accessToken}`,\n",
"text": "Hi @Pavel_DuchovnyI’m getting the same errorand I use email/password authentication and yes my function auth is ** application authentication**\nwhen I do http request from web browser using axios, I get that error, though I have passed the auth headerI think it should work if I use the realm-web SDK but I wanna use it with axios, in case I need to change the backend provider later.",
"username": "Mohammed_Ramadan"
},
{
"code": "curl --location --request POST 'XXX' \\\n--header \"Content-Type: application/json\" \\\n--header \"jwtTokenString: <JWT_TOKEN>\"\ncurl --location --request POST 'XXX' \\\n--header \"Content-Type: application/json\" \\\n--data '{\"username\": \"XXX\", \"password\": \"XXX\"}'\n",
"text": "HI @Mohammed_Ramadan,For more details on other authentication providers (i.e, email/password and jwt token), you can follow our documentation found here. As the documentation is for our deprecated 3rd party service, please bear in mind that this is only temporary and we will also include a similar outline in the new HTTPS endpoint documentation.JWT TokenEmail / PasswordI hope this clears things up.Cheers,\nGiuliano",
"username": "giulianocelani"
},
{
"code": "",
"text": "Hi @giulianocelani ,Thank you for your comment. But I have tried the email/password approach and it didn’t work plus I want to access the app through HTTPS endpoint and not the CLI.",
"username": "Mohammed_Ramadan"
},
{
"code": "",
"text": "This is super useful. Thanks for solving. I’m new to MongoDB and working with backend data. The guides and documents from MongoDB are not very useful and quite confusing as there’s lots of assumed knowledge.Even where to select Authentication Option of System is not very intuitive where you can only change it by selecting the word within your HTTPS endpoint “Linked Function” to go to the right section where this option is enabled.We are there though, thanks again all!",
"username": "c4cf1c8579128651ea6456083ba7af8"
},
{
"code": "",
"text": "Thank you @Greg_Hammond …your contribution was helpful",
"username": "Moonsmile_Enterprise"
}
]
| Webhook / HTTPS Endpoint - error returned from Postman "no authentication methods were specified" | 2021-12-05T04:25:37.211Z | Webhook / HTTPS Endpoint - error returned from Postman “no authentication methods were specified” | 11,006 |
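To collect the working call from this thread in one place: with Application Authentication plus the API Key provider, the credential goes in an api-key header (or body field), not x-api-key. A hedged example with a placeholder URL and key:

```sh
curl --location --request POST 'https://<your-endpoint-url>' \
  --header 'Content-Type: application/json' \
  --header 'api-key: <YOUR_API_KEY>' \
  --data '{"example": "payload"}'
```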
|
null | []
| [
{
"code": "",
"text": "I am Lahcene from Algeria Mentor in Data Visualization since 2018 with Illinois University\nBest wishes,\nLahcene",
"username": "Lahcene_Ouled_Moussa"
},
{
"code": "",
"text": "Hi @Lahcene_Ouled_Moussa,Welcome to the MongoDB Community forums Here are some quicks links that might be beneficial for you:Explore free resources for educators crafted by MongoDB experts to prepare learners with in-demand database skills and knowledge.MongoDB Atlas Charts | MongoDBI would encourage you to visit learn.mongodb.com to learn more about MongoDB.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
}
]
| Hello, I'm Lahcene | 2022-12-02T16:01:26.710Z | Hello, I’m Lahcene | 1,518 |
[]
| [
{
"code": "",
"text": "I got this error when trying to start mongodb. Please help me quickly\nimage888×195 9.49 KB\n",
"username": "Kubister_11"
},
{
"code": "",
"text": "What is your os and what version of mongodb you installed.\nILL means illegal instruction\nCheck if you are meeting minimum micro architecture for your platform\nAlso search our forum threads",
"username": "Ramachandra_Tummala"
},
{
"code": "/etc/mongod.confsystemLog.path",
"text": "Also share log file as it may have clues. open /etc/mongod.conf file and note the systemLog.path. make a copy with txt extension, remove sensitive parts if exists, and upload/attach to your answer.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I using Ubuntu 20.04.5 LTS and MongoDB 6.0",
"username": "Kubister_11"
},
{
"code": "",
"text": "I don’t have any logs. the folder is empty",
"username": "Kubister_11"
},
{
"code": "",
"text": "Most like it is as @Ramachandra_Tummala has said.One recent thread:",
"username": "chris"
},
{
"code": "",
"text": "installing using system commands should not allow installing packages of different architectures.so, if you downloaded the package manually (tgz?), it is highly possible your system is 32bit, but mongodb requires 64bit system.or you downloaded x64 package, but your system is an ARM system. you need arm64 package then",
"username": "Yilmaz_Durmaz"
}
]
| Failed with result 'core-dump' | 2022-12-04T02:10:57.913Z | Failed with result ‘core-dump’ | 2,718 |
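Two hedged checks that usually settle the core-dump question above: the pre-built MongoDB 5.0+ x86_64 packages expect a CPU with AVX support, and the package architecture must match the machine.

```sh
uname -m                          # x86_64 vs aarch64 vs i686 (32-bit)
grep -c avx /proc/cpuinfo         # 0 usually means the CPU lacks AVX
dpkg -l | grep mongodb-org        # confirm which package/version is installed
```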
|
[
"node-js"
]
| [
{
"code": "",
"text": "Hello,\ncan someone please give me a hint which one is correct answer and why? thanks!\nthis is Node.js\n\nimage1277×635 45.8 KB\n\nand this one,\n\nimage1271×616 20.8 KB\n",
"username": "ChangChien.Chang"
},
{
"code": "{}",
"text": "You asked for hints so here are they Which function name do we use to delete a single document (first match)? Delete Documents — MongoDB Shellwhat does an empty object, {}, mean in a multi-document update? Update Documents — MongoDB Shellwhat if field does not exists in the document when you use “$set”? $set — MongoDB Manual",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "thank you for the hints, Yilmaz Durmaz,\nso, can I ask questions further, why the answers are incorrect?",
"username": "ChangChien.Chang"
},
{
"code": "delete_one()",
"text": "I really don’t know but a possible explanation is the use of a mix of questions with python exams. python uses delete_one(). Is that the correct answer you are given?",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thank you Yilmaz Durmaz for answering my question.\nthat was what I was guessing (mix of questions with python) when I reviewed my test results.\nHopefully, this will NOT happen during the REAL EXAM.\nthanks again!",
"username": "ChangChien.Chang"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
]
| Associate developer NodeJS practice questions - incorrect answers? | 2022-12-03T03:47:29.209Z | Associate developer NodeJS practice questions - incorrect answers? | 1,997 |
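For anyone revising with this thread, a small mongosh illustration of the three hints given above; the collection and field names are invented.

```js
// 1. Node.js / mongosh use deleteOne() (delete_one() is the Python spelling)
db.products.deleteOne({ sku: "ABC-123" })

// 2. an empty filter {} in updateMany() matches every document
db.products.updateMany({}, { $inc: { views: 1 } })

// 3. $set creates the field if it does not exist yet
db.products.updateMany({ sku: "ABC-123" }, { $set: { discontinued: true } })
```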
|
null | [
"dot-net",
"time-series"
]
| [
{
"code": "1.BulkWriteAsync(IClientSessionHandle session, IEnumerable1.UsingImplicitSessionAsync[TResult](Func1.ReplaceOneAsync(FilterDefinition3 bulkWriteAsync) --- End of inner exception stack trace --- at MongoDB.Driver.MongoCollectionBase1 filter, TDocument replacement, ReplaceOptions options, Func",
"text": "I am working on the timeseries collection, need to update some of the fields using ObjectId. But getting the exception in updating the document\nerror: MongoDB.Driver.MongoWriteException: A write operation resulted in an error. WriteError: { Category : “Uncategorized”, Code : 72, Message : “Cannot perform a non-multi update on a time-series collection” }.A bulk write operation resulted in one or more errors. WriteErrors: [ { Category : “Uncategorized”, Code : 72, Message : “Cannot perform a non-multi update on a time-series collection” } ].\nat MongoDB.Driver.MongoCollectionImpl1.BulkWriteAsync(IClientSessionHandle session, IEnumerable1 requests, BulkWriteOptions options, CancellationToken cancellationToken)\nat MongoDB.Driver.MongoCollectionImpl1.UsingImplicitSessionAsync[TResult](Func2 funcAsync, CancellationToken cancellationToken)\nat MongoDB.Driver.MongoCollectionBase1.ReplaceOneAsync(FilterDefinition1 filter, TDocument replacement, ReplaceOptions options, Func3 bulkWriteAsync) --- End of inner exception stack trace --- at MongoDB.Driver.MongoCollectionBase1.ReplaceOneAsync(FilterDefinition1 filter, TDocument replacement, ReplaceOptions options, Func3 bulkWriteAsync)My question is do we have any provision to update the timeseries collection?",
"username": "Rishabh_soni"
},
{
"code": "multi: trueupdateMany()",
"text": "Hello @Rishabh_soni ,Welcome to The MongoDB Community Forums!Message : “Cannot perform a non-multi update on a time-series collection”As the error suggests, update command must not limit the number of documents to be updated. Set multi: true or use the updateMany() method. This is a requirement to run update command in timeseries collection. To learn more about this please visit Updates and Deletes in Time Series Collection .Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to update timeseries collection Data using C# | 2022-10-31T11:43:04.223Z | How to update timeseries collection Data using C# | 2,462 |
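A hedged C# sketch of what the answer above implies for the original stack trace: use a multi update (UpdateManyAsync) instead of ReplaceOneAsync. The type and field names are placeholders, and older server versions add further restrictions on time series updates (for example, only the metaField may be filtered on and modified), so check the linked documentation for your version.

```csharp
// time series collections reject non-multi updates, so use UpdateMany
var filter = Builders<Reading>.Filter.Eq(r => r.Metadata.SensorId, sensorId);
var update = Builders<Reading>.Update.Set(r => r.Metadata.Location, "lab-2");

await collection.UpdateManyAsync(filter, update);
```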
null | [
"golang",
"containers",
"devops"
]
| [
{
"code": "",
"text": "HiI have a question about security vulnerabilities (cves) with mongodb container images.We are seeing cves (file attached) with some components of mongodb that are packaged into container image. Just want to check with the community and get some inputs on how evey one else is remediating these vulnerabilities. Our scanning tool is a combination of generating SBOM and then running it via OWasp Dependency-Track.|openssl| 1.1.1f-1ubuntu2.16| NVD CVE-2021-3711| Critical|\n|gopkg.in/yaml.v2| v2.4.0| NVD CVE-2022-28948| High|\n|golang.org/x/text|v0.3.7|NVD CVE-2022-32149|High|\n|tar|1.30+dfsg-7ubuntu0.20.04.2|NVD CVE-2019-9923|High|\n|gnupg| 2.2.19-3ubuntu2.2| NVD CVE-2022-34903|Medium|\n|apt| 2.0.9| NVD CVE-2020-3810|Medium|\n|procps| 2:3.3.16-1ubuntu2.3|NVD CVE-2018-1121|Medium|\n|passwd| 1:4.8.1-1ubuntu5.20.04.2|NVD CVE-2009-2360|Medium|",
"username": "Lavanya_Nutakki"
},
{
"code": "",
"text": "The Docker community is the party responsible for the mongodb containers.Docker Official Image packaging for MongoDB. Contribute to docker-library/mongo development by creating an account on GitHub.I see you have already raised an issue there.",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Questions about mongodb container image cves | 2022-12-03T08:42:49.110Z | Questions about mongodb container image cves | 2,176 |
null | [
"aggregation",
"node-js",
"atlas-search"
]
| [
{
"code": "db.companies.aggregate([\n { '$search': { index: 'company_index', compound: { should: [\n { autocomplete: { path: 'companyName', query: 'fresh'\n }\n },\n { embeddedDocument: { path: 'produces', operator: { \n compound: { must: [\n { equals: {\"path\": \"produces.deleted\", \"value\": false}}\n ]}, \n compound: { must: [\n { autocomplete: {\"path\": \"produces.name\",\n \"query\": 'fresh',\n }\n }\n ]\n }\n }\n }\n }\n ]\n }\n }\n },\n { '$match': { status: 'VERIFIED'\n }\n },\n { '$sort': { companyName: 1\n }\n },\n { '$skip': 0\n },\n { '$limit': 24\n },\n { '$project': { companyName: 1, status: 1, score: {$meta: 'searchScore'}\n }\n }\n],\n{ collation: { locale: 'en'\n }\n})\n",
"text": "As you can see in the code i am performing an aggregation in my companies collection. What i want to do is get produces with given query to but order is wrong.\nfor example;\nCompanyName: Frida lmt. produces: ‘fresh apple’\nCompanyName: Fresh garden, produces ‘orange’\nCompanyName: garden Fresh, produces ‘orange twit’=Fresh garden\ngarden Fresh\nFrida lmt.I want to get companyName matches first then people who has produces by that query. Can someone tell me what i am doing wrong?",
"username": "Atakan_Yildirim"
},
{
"code": "companyNameautocompletescore",
"text": "Hi @Atakan_Yildirim,What i want to do is get produces with given query to but order is wrong.Every document returned by an Atlas Search query is assigned a score based on relevance, and the documents included in a result set are returned in order from highest score to lowest.I want to get companyName matches first then people who has produces by that query. Can someone tell me what i am doing wrong?You can possibly consider modifying the score accordingly. My guess is that in this case, you’d want to increase the score from the companyName search. The autocomplete operator can be specified with a score option.for example;\nCompanyName: Frida lmt. produces: ‘fresh apple’\nCompanyName: Fresh garden, produces ‘orange’\nCompanyName: garden Fresh, produces ‘orange twit’\n=\nFresh garden\ngarden Fresh\nFrida lmt.Is this the current output or your expected output?Regards,\nJason",
"username": "Jason_Tran"
}
]
| How to order search while using nested compounds? | 2022-11-30T11:54:20.890Z | How to order search while using nested compounds? | 1,044 |
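A hedged sketch of the scoring suggestion above: boost the companyName clause so that name matches outrank produce matches (the multiplier is arbitrary and needs tuning against real data). Note also that the $sort on companyName in the original pipeline orders results alphabetically, so to keep relevance ordering that stage would need to be removed or changed to sort on the search score metadata.

```js
{
  autocomplete: {
    path: "companyName",
    query: "fresh",
    score: { boost: { value: 5 } }
  }
}
```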
null | []
| [
{
"code": "",
"text": "I’m working on a project where I’m trying to setup an application in Google Cloud Run which then accesses a Mongo database. I’m no the project owner, so we thought it was easiest to use the plugin on the Google Cloud Marketplace. However, it never prompted me with the yellow button as described in this page on mongoDBs documentation: https://www.mongodb.com/docs/atlas/billing/gcp-self-serve-marketplace/\nIt wasn’t me who enabled the plugin, since I didn’t have the rights to do so. I’m expecting they were the only person who got the option of being referred to the “Select an Organization to Link to Your GCP Billing Account” page on MongoDB, but since the company’s structure is quite slow, I’d rather double check this before asking him.Also, if this is the case, isn’t there a way for me to access the “Select an Organization to Link to Your GCP Billing Account” page myself? I’ve been looking everywhere in MongoDB Atlas, but never found it.Thanks in advance!",
"username": "Stan_Frinking"
},
{
"code": "",
"text": "Hey @Stan_Frinking - Welcome to the community.This question might be better suited for the Atlas support team which you can reach via the in-app chat. You can send them any relevant screenshots as well which may be of use Regards,\nJason",
"username": "Jason_Tran"
}
]
| MongoDB Atlas API on GCloud Marketplace not referring me to the page "Select an Organization to Link to Your GCP Billing Account" | 2022-11-30T14:48:41.238Z | MongoDB Atlas API on GCloud Marketplace not referring me to the page “Select an Organization to Link to Your GCP Billing Account” | 1,038 |
[
"queries",
"atlas-search",
"text-search"
]
| [
{
"code": "{$text: {$search: 'IN'}}{$text: {$search: 'US'}}'country':'IN'",
"text": "Hey guys, I am currently blocked in a weird situation, I am using full text search for a backend module I am currently building where I faced this issue. Whenever I am trying to query via country naming code like US for United States, IN for India, etc. The search query for {$text: {$search: 'IN'}} was not returning any data but there was data stored inside the database, even if I change the input of the query to US from IN {$text: {$search: 'US'}} the query works. And the problem is whenever I am trying to query 'country':'IN', I am receiving output. These are the 3 types of output for anyone who might be interested in helping me out here.\n\nMerged_document919×1594 128 KB\n",
"username": "Arnab_Ray"
},
{
"code": "{$text: {$search: 'IN'}}{$text: {$search: 'US'}}\"IN\"default_language$textmyFirstDatabase> db.text.find()\n[\n { _id: ObjectId(\"638d229c86e26bbc448fd096\"), country: 'IN' },\n { _id: ObjectId(\"638d22a286e26bbc448fd097\"), country: 'US' },\n { _id: ObjectId(\"638d245886e26bbc448fd099\"), country: 'india' },\n { _id: ObjectId(\"638d245c86e26bbc448fd09a\"), country: 'and' },\n { _id: ObjectId(\"638d249686e26bbc448fd09b\"), country: 'the' }\n]\n\nmyFirstDatabase> db.text.createIndex({country:\"text\"})\ncountry_text\n\nmyFirstDatabase> db.text.find({$text:{$search:\"IN\"}})\n/// No documents returned\n\"text\"{default_language: \"none\"}$textmyFirstDatabase> db.text.dropIndexes()\n{\n nIndexesWas: 2,\n msg: 'non-_id indexes dropped for collection',\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1670194676, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"eae9fced3efdda5d4c1a91b9fb13bd39c49ffd4f\", \"hex\"), 0),\n keyId: Long(\"7115111615044780034\")\n }\n },\n operationTime: Timestamp({ t: 1670194676, i: 1 })\n}\n\nmyFirstDatabase> db.text.createIndex({country:\"text\"},{default_language:\"none\"})\ncountry_text\n\nmyFirstDatabase> db.text.find({$text:{$search:\"IN\"}})\n[ { _id: ObjectId(\"638d229c86e26bbc448fd096\"), country: 'IN' } ]\n",
"text": "Hi @Arnab_Ray - Welcome to the community I would just like to clarify the issue you’re experiencing - My interpretation is that when you query for {$text: {$search: 'IN'}}, no documents are returned. However, when the query is {$text: {$search: 'US'}}, the correct documents are returned? If so, this is possibly due to \"IN\" being a stop word and perhaps the below default_language specification on the text index may help your use case.Please also note that as per the Text Search Languages documentation:If you specify a language value of “none”, then the text search uses simple tokenization with no list of stop words and no stemming.Test documents, text index and initial $text query yielding no results:Dropping all indexes on the test collection, re-creating the \"text\" index with {default_language: \"none\"} and performing the same $text query above:Hope this helps.On a side note, if you’re running on Atlas then perhaps it may be worth taking a look into Atlas Search.Regards,\nJason",
"username": "Jason_Tran"
}
]
| Text Search query error | 2022-12-01T13:32:27.679Z | Text Search query error | 2,055 |
|
null | [
"dot-net",
"replication",
"compass",
"connecting",
"atlas-cluster"
]
| [
{
"code": "mongodb+srvSystem.TimeoutException: A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : \"1\", ConnectionMode : \"ReplicaSet\", Type : \"ReplicaSet\", State : \"Disconnected\", Servers : [] }. \nat MongoDB.Driver.Core.Clusters.Cluster.ThrowTimeoutException(IServerSelector selector, ClusterDescription description) \nat MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedHelper.HandleCompletedTask(Task completedTask) \nat MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedAsync(IServerSelector selector, ClusterDescription description, Task descriptionChangedTask, TimeSpan timeout, CancellationToken cancellationToken) \nat MongoDB.Driver.Core.Clusters.Cluster.SelectServerAsync(IServerSelector selector, CancellationToken cancellationToken) \nat MongoDB.Driver.MongoClient.AreSessionsSupportedAfterServerSelectionAsync(CancellationToken cancellationToken) \nat MongoDB.Driver.MongoClient.AreSessionsSupportedAsync(CancellationToken cancellationToken) \nat MongoDB.Driver.MongoClient.StartImplicitSessionAsync(CancellationToken cancellationToken) \nat MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\nMongoDB.DriverMongoDB-SDAM Verbose: 405 : connection[1:xxxxxxxxx-dev-shard-00-02.yyyy.mongodb.net:27017:6-155102]: sent heartbeat in 60059.3808ms.\n DateTime=2022-02-17T14:48:01.0366223Z\nMongoDB-SDAM Verbose: 404 : connection[1:xxxxxxxxx-dev-shard-00-02.yyyy.mongodb.net:27017:6-155102]: sending heartbeat.\n DateTime=2022-02-17T14:48:01.0370828Z\nMongoDB-SDAM Verbose: 405 : connection[1:xxxxxxxxx-dev-shard-00-00.yyyy.mongodb.net:27017:2-155256]: sent heartbeat in 61040.5318ms.\n DateTime=2022-02-17T14:48:02.0370829Z\nMongoDB-SDAM Verbose: 404 : connection[1:xxxxxxxxx-dev-shard-00-00.yyyy.mongodb.net:27017:2-155256]: sending heartbeat.\n DateTime=2022-02-17T14:48:02.0373158Z\nMongoDB-SDAM Verbose: 405 : connection[1:xxxxxxxxx-dev-shard-00-01.yyyy.mongodb.net:27017:5-161874]: sent heartbeat in 60914.9487ms.\n DateTime=2022-02-17T14:48:02.0385639Z\nMongoDB-SDAM Verbose: 404 : connection[1:xxxxxxxxx-dev-shard-00-01.yyyy.mongodb.net:27017:5-161874]: sending heartbeat.\n DateTime=2022-02-17T14:48:02.0396348Z\ntelnet xxx 27017",
"text": "Hi,We are using mongodb atlas using a mongodb+srv connection. When our webapp is starting up and handling requests - everything is working just fine. When our webapp gets inactive/idle a few hours we a getting timeouts on new requests:We are using MongoDB.Driver version 2.14.1.The heartbeats are also running just fine, even tough the app reports the timeouts:Meanwhile the app is reporting timeouts, we can;but we can only get the webapp running again by restarting it.We have 3 different test environments, they all having the same problem.Any suggestions, or know ways of doing further diagnostics?",
"username": "FrankNielsen"
},
{
"code": "Servers : []\nmongodb://",
"text": "Hi, @FrankNielsen,Welcome to the MongoDB Community! Thank you for providing the exception with stack trace. I notice that your cluster topology contains no servers:We have seen this problem in containerized environments due to a DNS issue. I suspect that you are running your app servers in Kubernetes, AKS, or similar environment.The root cause of the problem is a bug in DnsClient.NET. DnsClient.NET is a third-party dependency that we use for SRV and TXT lookups. This bug has been resolved and will be included in the upcoming 2.15.0 release. See CSHARP-4001 for more information on the fix.Prior to the release of the 2.15.0 driver, you can avoid this problem by using the mongodb:// connection string as A, AAAA, and CNAME lookups are not affected by this bug.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Hi @James_Kovacs ,Thx for the reply, it was worth waiting for.I have looked around your Jira but i am unable to find any roadmap for the v2.15.0 release, can you reveal anything - is it within weeks or months?Cheers, Frank",
"username": "FrankNielsen"
},
{
"code": "",
"text": "Hi, @FrankNielsen,Glad that I could be of assistance. We are wrapping up some features that we want to include in the 2.15.0 release. Insert usual disclaimer about forward-looking statements, plans could change due to unforeseen circumstances, etc., etc. With that said, the 2.15.0 release is weeks away, not months, though I don’t know for sure exactly when.James",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Hello,Is there a fix for this issue? I am also facing the same issue. I am using AWS + DocumentDB + SSH Tunnel. When I trying to connect toHowever when I try to connect to AWS DocumentDB from .Net application, I get below exception. I am using the same connection string in Compass and .Net code.A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : “1”, ConnectionMode : “Direct”, Type : “Unknown”, State : “Disconnected”, Servers : [{ ServerId: “{ ClusterId : 1, EndPoint : “Unspecified/documentdb-test.cluster-c7jldipe45vl.ap-south-1.docdb.amazonaws.com:27017” }”, EndPoint: “Unspecified/documentdb-test.cluster-c7jldipe45vl.ap-south-1.docdb.amazonaws.com:27017”, ReasonChanged: “NotSpecified”, State: “Disconnected”, ServerVersion: , TopologyVersion: , Type: “Unknown”, HeartbeatException: “MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. —> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 172.31.27.19:27017\nat System.Net.Sockets.Socket.Connect(IPAddress[] addresses, Int32 port)\nat System.Net.Sockets.Socket.Connect(String host, Int32 port)\nat MongoDB.Driver.Core.Connections.TcpStreamFactory.Connect(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.SslStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n— End of inner exception stack trace —\nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Connections.BinaryConnection.Open(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnection(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.ServerMonitor.Heartbeat(CancellationToken cancellationToken)”, LastHeartbeatTimestamp: “2022-04-07T05:48:53.4181477Z”, LastUpdateTimestamp: “2022-04-07T05:48:53.4241461Z” }] }.Pleas help.Thanks,\nYogesh",
"username": "Yogesh_Satpute"
},
{
"code": "SRV",
"text": "A short follow up, since mognodb driver release v2.15.0 we have not experienced any problems running with the SRV connection string format.Cheers, FrankPS: @Yogesh_Satpute i think your problem is a general first time connection problem!?",
"username": "FrankNielsen"
},
{
"code": "",
"text": "The exception is there while using 2.17.1 version of the driver, and following the connection string format given by Atlas MongoDB (by checking the Connect->Connect from Application option). I am also trying to get a solution for this exception.\nFollowing is the exception:\nA timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : “1”, Type : “Unknown”, State : “Disconnected”, Servers : }.In this exception it is mentioned that the type is unknown, state is disconnected, server list is empty etc). The servers etc are not mentioned as part of the connection string which I obtained from the AtlasDB->Connect option. I do not know how to proceed and what to do at the moment.",
"username": "Developer_BB"
},
{
"code": "",
"text": "Hello, we also use the 2.17.1 version and have the same problem.",
"username": "Yaroslav_Vasilenko"
},
{
"code": "mongoshmongodb+srv://mongodb://",
"text": "Some things to check:Given this sounds like a network connectivity/configuration issue rather than a driver bug, I would recommend reaching out to our Atlas Support Team by clicking the in-app chat icon in the lower right corner of MongoDB Atlas.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "mongomongoshCompass",
"text": "Anytime a “timeout” error occurs, the first thing to check should be the “network access” control on the server. Most of the time, a timeout is related to an improperly set IP list. to test this possibility, login to your Atlas cluster (or find your private server config files) and head to the security section to allow access from anywhere and test the app that way. also check other security settings so you can at least connect with mongo or mongosh from console, or with Compass.Next is the IP address of the host pc of the application. Having a static domain name does not guarantee having a static IP. your cloud provider might be changing the IP address of the host where your app resides if you don’t have a static IP subscription. they are, after all, virtual PCs or containers and will most possibly be restarted with different IP addresses regularly (disaster recovery, load balancing, etc.). If you have set strict network access in your MongoDB server, then your app will have no chance but give a timeout. In case you host the app on a PC in your own infrastructure, then make sure you give static IP to that PC.if you can eliminate these two possibilities, please add these details to your posts so help can find you faster.",
"username": "Yilmaz_Durmaz"
},
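A quick way to run the connectivity test suggested above from the machine that hosts the app is a mongosh one-liner (the cluster address and user here are placeholders):

```
mongosh "mongodb+srv://cluster0.example.mongodb.net/test" --username appUser
```

If this times out from the same host while Compass works elsewhere, the problem is almost certainly network access rather than the driver.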
{
"code": "",
"text": "The IP is given as 0.0.0.0 inside Atlas Mongo DB network access option.\nI can not set static ip because of work place restrictions. Also there are no proxy settings enforced.\nThe strange thing now is that the access to MongoDB in Atlas is possible since Friday. There has been no changes in the source code, it is the same as when I posted the issue in this forum. It is all strange to me. Does it have anything to do with the fact that I am using shared/free server/cluster option hosted somewhere, and there could be network congestion due to multiple requests from different places to this setup?",
"username": "Developer_BB"
},
{
"code": "",
"text": "“access anywhere” kinda removes all restrictions but there are still many ways to cripple the network access on the app side (before finally blaming a driver). your app seems to work now but may stop again, so best to understand where else the network may diverge expectations. unfortunately, it is like debugging a program and it won’t be apparent immediately what causes the problem.you have said “since Friday”, can you please test your app today/tomorrow again!? it now is pretty possible that your network has a week-long restriction on ports so employees would not surf distracting websites. If your app will again give timeouts today and tomorrow, you need to speak to your administration.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thank you for the reply.\nThere was no intention to blame anything or anyone, thought this was a support forum.\nI already explained there are IT restrictions as per company policy, but do not know up to what extent and where all those restrictions are. Had I known, then I did not have to think much.I close my further communication on this topic here.",
"username": "Developer_BB"
},
{
"code": "",
"text": "@Developer_BB Please take a breath first. I am sorry you have offended, but I had/have no intention to offend anyone here.I do not have any way to test your own app and infrastructure on-premise, so I was trying to say what else you can test. we need to first eliminate probable causes, then check for unexpected ones.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Have you resolved this issue. I have struggling to find solution for this.\nI am facing the same issue in aws lambda. Please find the details below:Application details: .NET Core application (lambda). .NET mongodb driver (2.18.0), Serverless mongodb instance.Personal Laptop or Local environment\nI have .NET core lamda application and was not working, so I changed DNS to google. It started working in local laptop.Deployed Lambda function\nWhen I deployed the application to aws lambda, it is not working. I have added DataAccess and network access also in Atlas.\nI tried to connect to serverless using mongosh in Cloud 9 machine from the same VPC where lambda is condfigured. I am able to connect to serverless instance.Please help me why I am unable to connect to Serverless mongo instance from lambda.Please let me know if you need more details.\n@James_Kovacs",
"username": "Veerakondalu_Pallati"
},
{
"code": "",
"text": "Have you find the solution for this problem?",
"username": "Veerakondalu_Pallati"
}
]
| Timeout when selecting a server after inactivity | 2022-02-17T15:14:58.953Z | Timeout when selecting a server after inactivity | 13,720 |
null | [
"indexes"
]
| [
{
"code": "",
"text": "Mongo Atlas is recommending a high impact index that should improve 173 queries/hour. But when I create it, the index statistics don’t show any usage after several days. I’m just wondering if this is something that is not uncommon.Thanks!",
"username": "Steven_Kong"
},
{
"code": "",
"text": "Hi @Steven_Kong ,Hope you are doing great. Please note that as explained in limitations, index suggestion is created based on the previous 200k lines. There may be chances that these queries are ran during that time or Index usage is calculated on the output of indexStats which also resets when the mongod restart or index dropped.So please check if the query is still running or any recent mongod restart happened which might have reset this stats.should improve 173 queries/hour.Further, I believe you are aware that above statement refers that the query was running 173 times per hour and does not mean 173 different queries per hour.Thanks,\nDarshan",
"username": "Darshan_j"
}
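To see the usage counters referred to above, the $indexStats aggregation stage can be run directly against the collection (the collection name below is a placeholder); note that these counters reset on a mongod restart or when the index is dropped and rebuilt:

```js
// Returns one document per index, including accesses.ops (operations that
// used the index) and accesses.since (when the counter was last reset)
db.myCollection.aggregate([ { $indexStats: {} } ])
```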
]
| Are Mongo Atlas Index Recommendations Reliable? | 2021-10-22T19:33:36.408Z | Are Mongo Atlas Index Recommendations Reliable? | 2,041 |
[
"atlas-device-sync"
]
| [
{
"code": "history__realm_synchistory",
"text": "I recently noticed that the history collection under __realm_sync has quickly taken up ALL the storage in my cluster, which seems excessive. The actual data that I’m syncing is extremely minimal (not even 1 MB).Is there a way to handle realm’s history better so it doesn’t consume so much storage?See here (history)\n\nScreen Shot 2021-07-12 at 1.38.22 PM2044×372 52.3 KB\nAnd here are the 2 collections for my actual data:\n\nScreen Shot 2021-07-12 at 1.40.10 PM1890×524 39.1 KB\n",
"username": "Annie_Sexton"
},
{
"code": "",
"text": "@Annie_Sexton Realm Sync stores all the operations that have mutated any data over the course of the lifetime of your Realm Sync app. You can see more details here - https://docs.mongodb.com/realm/sync/protocol/#changesetWe do have a compaction feature that we have rolled out to some users which will trim some operations from the history collection. If you’d like to have it enabled for your Realm Sync app please send me an email with the URL of your Realm App dashboard and I can get it enabled for you - [email protected]",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Yes - this has been a problem for me as well. I’ll send you a note to get the compaction features rolled out on my dev and prod instances.What are the implications if I just go in and drop all from the collection? I presume this feature allows for some rolling back, which I guess leads to another question of how you can actually roll it back?",
"username": "Roger_Cheng"
},
{
"code": "",
"text": "If you drop a collection you will break sync and will need to terminate and re-enable sync - https://docs.mongodb.com/realm/reference/terminating-and-reenabling-realm-sync/",
"username": "Ian_Ward"
},
{
"code": "",
"text": "I am facing this same issue in Aug 2022 on a Shared Cluster. Has this this compaction feature already rolled out publicly? Or do we still need to reach out to you to enable it on your end?",
"username": "NightNight"
},
{
"code": "",
"text": "I am facing the same issue and it seems flexible sync could be a solution because it supports “trimming” but I could not find a migration guide to migrate partition sync to flexible sync.",
"username": "NightNight"
}
]
| __realm_sync history taking up all the storage on Atlas cluster | 2021-07-12T18:43:30.529Z | __realm_sync history taking up all the storage on Atlas cluster | 4,384 |
|
null | [
"queries"
]
| [
{
"code": "",
"text": "I have been searching, and I find it rather complicated to get this done, how do I find all documents where the field = “yes” ? Is the result shown in a array of documents?ps. I find it strange that I can’t use the db.collection.find({active: yes}).then(result=>{ console.log(result); });\nSo, I cannot use a promise there, I am confused… how do I get these results?I want to update all fields on a function with a timer, so that e.g. every 5 seconds it decreases 5000…\nSo every 5 seconds it queries and verifies the fields…(its a timer function)heelp",
"username": "Zoo_Zaa"
},
{
"code": "",
"text": "I don’t understand what exactly you want here. please check this example playground: Mongo playground. Isn’t that what you want? the query is simple.next, “find” is a cursor method. you need to understand what a cursor is, and how to use it. Cursor Methods — MongoDB Manual. its use in a driver changes, so also check the manuals of whichever driver you use.alternatively, you can use aggregations for multi-step queries (pipeline). Aggregation Operations — MongoDB Manualquerying and updating are two separate things. timers are the easy parts. you need to decide how you update your documents.Anyways, you seem pretty new to MongoDB. I suggest you visit MongoDB University. M001 and M121, along with M220xx (select language) and M320 would be nice for starters. (Or follow the “new university” link for a bit more guided paths)\nFree MongoDB Official Courses | MongoDB University",
"username": "Yilmaz_Durmaz"
}
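As a rough sketch of the cursor usage described above, with the Node.js driver the promise resolves once the cursor is drained, e.g. with toArray(). The collection name below is made up for illustration, and the value is quoted as a string:

```js
// Inside an async function, with `db` obtained from a connected MongoClient
const docs = await db.collection('items')
  .find({ active: 'yes' })  // match documents whose field equals the string "yes"
  .toArray();               // drain the cursor into an array of documents
console.log(docs);
```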
]
| Finding all documents with field = yes | 2022-12-03T22:02:44.251Z | Finding all documents with field = yes | 813 |
[]
| [
{
"code": "",
"text": "In my opinion, a release is the most joyful moment for the team, and at least, people can reach it and appreciate them without ignoring it, I would suggest,\nA simple icon can be more eye-catching in a release post, before/below the title,\n\nimage1180×75 9.09 KB\nA good example is when we see any event/meeting RESP post, this catches my eye,\nMore good examples like Accepted posts, Bookmarked posts,\nThanks.",
"username": "turivishal"
},
{
"code": "",
"text": "Hi @turivishal,Thank you for the suggestion! Perhaps we can add or similar for topics in the Release Announcements categories.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I’d like to second @turivishal and extend by the idea to add links to related blog posts describing new features?\nRegards,\nMichael",
"username": "michael_hoeller"
}
]
| Can we highlight new release post or put a eye-catching icon around title? | 2022-12-01T11:07:53.800Z | Can we highlight new release post or put a eye-catching icon around title? | 1,983 |
|
null | []
| [
{
"code": "db.collection.findMany({ 'QUERY_FIELD', { $in: ['a', 'b', 'c'] } })\nQUERY_FIELDabc$in|db.collection.findMany({ 'QUERY_FIELD', /^(a|b|c)/ })OR$in",
"text": "My task is to query a rather big dataset (4M+ entries) for several hundred matching values.\nThe queried field is indexed and preliminary profiling on a smaller dataset (100k entries) has shown that the query is quite performant (few ms).\nCurrently, my query is setup like this:Where QUERY_FIELD is the indexed field and a, b, c, … are the searched values. Up to 1.000 searched values are expected for a single query.The documentation for $in says:It is recommended that you limit the number of parameters passed to the $in operator to tens of values. Using hundreds of parameters or more can negatively impact query performance.So now I am wondering if query performance will degrade when moving to the larger dataset (it’s not available to me, yet).Another approach I know of is using a regex query with the values combined with |. (e.g: db.collection.findMany({ 'QUERY_FIELD', /^(a|b|c)/ })).\nHowever, it feels awkward building a regex string with hundreds of OR conditions. Additionally, I would expect that approach to perform worse than $in, because mongodb has to parse the (very long) regex string before performing the query.Can you suggest another approach to my task?",
"username": "cabus"
},
{
"code": "",
"text": "My gut feeling is that regex will be slower. But I am pretty sure if you have access to your data you may make a copy and test.Are the values strings or numbers?Can you share a little bit about your use case? I am always worry when a query requires such a big number of parameters. I feel there might be some flaw in the data schema or in the data access pattern. Except of course it if is a one time query. But for one time query you should not worry about performance.I am pretty sure it is not a human typing the 1000 searched values. Where do they come from? Are they the result of another query? Are they coming from another system? Could you permanently tag documents with those values with a field that will tie them together so your query becomes a single value (the tag) search?If the search values come from another query, could you use aggregation with $lookup rather than retrieving 1000 values and the use the 1000 values in the findMany? About findMany, which driver has the method findMany() rather than find()?Querying 1000 values could result in 1000 documents and many more, what do you with so many documents? Could you use aggregation to do what ever computation you intend to do?",
"username": "steevej"
}
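For reference, the two approaches being compared look roughly like this in the shell. Note that the snippet in the question needs a colon rather than a comma between the field and the operator, and find() is used here since findMany() is not a standard driver method:

```js
// $in: exact matches against the indexed field
db.collection.find({ QUERY_FIELD: { $in: ['a', 'b', 'c'] } })

// Regex alternation on the same values; generally expected to be slower
// than $in for plain equality-style matching
db.collection.find({ QUERY_FIELD: /^(a|b|c)/ })
```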
]
| Performance of $in vs regex for several hundred values | 2022-12-03T17:07:30.842Z | Performance of $in vs regex for several hundred values | 1,923 |
null | []
| [
{
"code": " \"touchPointsDate\" : [\n [\n ISODate(\"2022-09-26T00:00:00.000+0000\")\n ],\n [\n ISODate(\"2022-03-07T00:00:00.000+0000\")\n ],\n [\n ISODate(\"2022-09-05T00:00:00.000+0000\")\n ],\n [\n ISODate(\"2022-03-21T00:00:00.000+0000\")\n ],\n [\n ISODate(\"2022-05-30T00:00:00.000+0000\")\n ]\n \"touchPointsDate\" : [\n ISODate(\"2022-09-26T00:00:00.000+0000\"),\n ISODate(\"2022-03-07T00:00:00.000+0000\"),\n ISODate(\"2022-09-05T00:00:00.000+0000\"),\n ISODate(\"2022-03-21T00:00:00.000+0000\"),\n ISODate(\"2022-05-30T00:00:00.000+0000\")\n ]\n",
"text": "I feel like the answer to this should be simple but I am stumped. I have millions of documents with a few fields that are arrays, and have arrays with one element inside of it. Is there a way to simplify the array field? There is no consistent amount of single element arrays within the touchPointsDate array field.This is how a field looks:And I would like to simplify it to:",
"username": "Geoff_Materna"
},
{
"code": "[\n {\n '$set': {\n 'touchPointsDate': {\n '$reduce': {\n 'input': '$touchPointsDate', \n 'initialValue': [], \n 'in': {\n '$concatArrays': [\n '$$value', '$$this'\n ]\n }\n }\n }\n }\n }\n]\n",
"text": "Hi @Geoff_Materna and welcome in the MongoDB Community !This is a job for $reduce.\nimage1059×659 80.1 KB\nThe mistake here would be to use $unwind + $group by _id with $push to rebuild the array properly.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
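Since the goal is to fix the stored documents rather than just reshape them at read time, the same pipeline can be applied in place with an update that uses an aggregation pipeline (available since MongoDB 4.2); the collection name below is a placeholder:

```js
// Flatten the nested single-element arrays for every document in the collection
db.myCollection.updateMany({}, [
  {
    $set: {
      touchPointsDate: {
        $reduce: {
          input: '$touchPointsDate',
          initialValue: [],
          in: { $concatArrays: ['$$value', '$$this'] }
        }
      }
    }
  }
])
```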
{
"code": "[\n {\n '$set': {\n 'touchPointsDate': {\n '$reduce': {\n 'input': '$touchPointsDate', \n 'initialValue': [], \n 'in': {\n '$concatArrays': [\n '$value', '$this'\n ]\n }\n }\n }\n }\n }\n]\n",
"text": "Amazing!! Yes this is exactly what I was hoping for. Thank you so much!",
"username": "Geoff_Materna"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Arrays with one element within an array | 2022-12-02T17:20:07.281Z | Arrays with one element within an array | 1,232 |
[
"aggregation",
"queries"
]
| [
{
"code": "[{$set: {\nChildren: {\n $map: {\n input: '$Children',\n 'in': {\n DateOfBirthDateTime: {\n $cond: [\n {\n $ne: [\n '$$this.DateOfBirth',\n ''\n ]\n },\n {\n $toDate: '$$this.DateOfBirth'\n },\n null\n ]\n },\n Name: '$$this.Name'\n }\n }\n}\n}}, {$match: {\n'Children.DateOfBirthDateTime': {\n $gte: ISODate('2017-01-01'),\n $lte: ISODate('2018-01-01')\n}\n}}]\n",
"text": "How would I write a query which checks the Children.DateOfBirth and finds all documents between 01-01-2017 and 01-01-2018?\nI am storing the DOB as a string, transforming them in the pipeline, then applying the filtering for the DOB. However I am getting back documents that are outside the given date range.Here is how my documents look like:\nThis is the aggregation pipeline that I am using:",
"username": "Laura_Mazarini"
},
{
"code": "",
"text": "Hi @Laura_Mazarini and welcome in the MongoDB Community !I think your pipeline works as intended but I suspect that you are getting results that you don’t expect because you are working on an array and not a single value.With your current pipeline if ONE child within the array is in the range, you get the entire doc (with the entire array of children in it).I illustrated this here in Compass:\nimage1342×1010 121 KB\nThe doc with a single child born in 2008 is discarded while the one with 2 children born in 2008 and 2010 is selected.If you only want the 2nd child, you must $unwind the array first to break the array.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi Maxime! Thanks for the answer, however is still not getting me the wanted result.In your example the date range filter that you are using is “2009-01-01” to “2010-01-01”. However you are getting back a document in which none of the dates from the Children array is between the given ones, since the DOB for child1 is “2008-05-05” and the DOB for child2 is “2010-05-05”. The second child matches the year, but it is out of the date range with more than 4 months. This is exactly my issue as well.If any of the Children’s DOB form the array falls within the date range I want the entire document, so I don’t think the unwind is wanted for my scenario. I need to search the entire array, but like in your example I get back some false positives.",
"username": "Laura_Mazarini"
},
{
"code": "test [direct: primary] test> db.coll.drop()\ntrue\ntest [direct: primary] test> db.coll.insertMany([{Children: [{DateOfBirth: \"2010-05-05\"}, {DateOfBirth: \"2012-06-06\"}]},{Children: [{DateOfBirth: \"2008-02-02\"}]}])\n{\n acknowledged: true,\n insertedIds: {\n '0': ObjectId(\"638a6ffcbd6d1ee1c17b6ce2\"),\n '1': ObjectId(\"638a6ffcbd6d1ee1c17b6ce3\")\n }\n}\ntest [direct: primary] test> db.coll.find()\n[\n {\n _id: ObjectId(\"638a6ffcbd6d1ee1c17b6ce2\"),\n Children: [ { DateOfBirth: '2010-05-05' }, { DateOfBirth: '2012-06-06' } ]\n },\n {\n _id: ObjectId(\"638a6ffcbd6d1ee1c17b6ce3\"),\n Children: [ { DateOfBirth: '2008-02-02' } ]\n }\n]\ntest [direct: primary] test> db.coll.aggregate([\n... {\n... '$set': {\n... 'Children': {\n... '$map': {\n... 'input': '$Children', \n... 'in': {\n... 'DateOfBirthDateTime': {\n... '$cond': [\n... {\n... '$ne': [\n... '$$this.DateOfBirth', ''\n... ]\n... }, {\n... '$toDate': '$$this.DateOfBirth'\n... }, null\n... ]\n... }, \n... 'Name': '$$this.Name'\n... }\n... }\n... }\n... }\n... }, {\n... '$match': {\n... 'Children': {\n... '$elemMatch': {\n... 'DateOfBirthDateTime': {\n... '$gte': new Date('Sun, 01 Jan 2017 00:00:00 GMT'), \n... '$lte': new Date('Wed, 01 Jan 2020 00:00:00 GMT')\n... }\n... }\n... }\n... }\n... }\n... ])\n\ntest [direct: primary] test> db.coll.aggregate([ { '$set': { 'Children': { '$map': { 'input': '$Children', 'in': { 'DateOfBirthDateTime': { '$cond': [ { '$ne': [ '$$this.DateOfBirth', ''] }, { '$toDate': '$$this.DateOfBirth' }, null] }, 'Name': '$$this.Name' } } } } }, { '$match': { 'Children': { '$elemMatch': { 'DateOfBirthDateTime': { '$gte': new Date('Sun, 01 Jan 2007 00:00:00 GMT'), '$lte': new Date('Wed, 01 Jan 2009 00:00:00 GMT') } } } } }])\n[\n {\n _id: ObjectId(\"638a6ffcbd6d1ee1c17b6ce3\"),\n Children: [ { DateOfBirthDateTime: ISODate(\"2008-02-02T00:00:00.000Z\") } ]\n }\n]\ntest [direct: primary] test> db.coll.aggregate([ { '$set': { 'Children': { '$map': { 'input': '$Children', 'in': { 'DateOfBirthDateTime': { '$cond': [ { '$ne': [ '$$this.DateOfBirth', ''] }, { '$toDate': '$$this.DateOfBirth' }, null] }, 'Name': '$$this.Name' } } } } }, { '$match': { 'Children': { '$elemMatch': { 'DateOfBirthDateTime': { '$gte': new Date('Sun, 01 Jan 2009 00:00:00 GMT'), '$lte': new Date('Wed, 01 Jan 2011 00:00:00 GMT') } } } } }])\n[\n {\n _id: ObjectId(\"638a6ffcbd6d1ee1c17b6ce2\"),\n Children: [\n { DateOfBirthDateTime: ISODate(\"2010-05-05T00:00:00.000Z\") },\n { DateOfBirthDateTime: ISODate(\"2012-06-06T00:00:00.000Z\") }\n ]\n }\n]\ntest [direct: primary] test> db.coll.aggregate([ { '$set': { 'Children': { '$map': { 'input': '$Children', 'in': { 'DateOfBirthDateTime': { '$cond': [ { '$ne': [ '$$this.DateOfBirth', ''] }, { '$toDate': '$$this.DateOfBirth' }, null] }, 'Name': '$$this.Name' } } } } }, { '$match': { 'Children': { '$elemMatch': { 'DateOfBirthDateTime': { '$gte': new Date('Sun, 01 Jan 2007 00:00:00 GMT'), '$lte': new Date('Wed, 01 Jan 2011 00:00:00 GMT') } } } } }])\n[\n {\n _id: ObjectId(\"638a6ffcbd6d1ee1c17b6ce2\"),\n Children: [\n { DateOfBirthDateTime: ISODate(\"2010-05-05T00:00:00.000Z\") },\n { DateOfBirthDateTime: ISODate(\"2012-06-06T00:00:00.000Z\") }\n ]\n },\n {\n _id: ObjectId(\"638a6ffcbd6d1ee1c17b6ce3\"),\n Children: [ { DateOfBirthDateTime: ISODate(\"2008-02-02T00:00:00.000Z\") } ]\n }\n]\n",
"text": "Oops my bad ! I went a little too fast on this one !You need a $elemMatch to make sure that your filter applies to the same array element.I tested again with a few examples to prove it’s working fine this time. Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
]
| Query between two dates returns unexpected documents | 2022-12-02T14:53:23.159Z | Query between two dates returns unexpected documents | 1,248 |
|
null | [
"atlas-search"
]
| [
{
"code": "",
"text": "Hi All expert,\nI have a document which has one attribute is an object contains two fields. Hierarchy as following. I need provide wildcard search for inco1 and inco2. How should be the atlas index definition for them?\nThanks in advance.\nDocument\n->deliveryNumber(string)\n->incoterm(object)\ninco1(string)\ninco2(string)",
"username": "yichao_wang"
},
{
"code": "",
"text": "It seems like embeddedDocument’s might be a good path forward for you.",
"username": "Elle_Shwer"
}
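As a sketch only (field names taken from the question, not a confirmed answer from this thread): since incoterm is a single embedded object rather than an array, one option is a static mapping that indexes the two nested strings, after which the wildcard operator can target the paths incoterm.inco1 and incoterm.inco2. Depending on how literal the wildcard matching needs to be, a keyword analyzer on those fields may also be worth considering.

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "deliveryNumber": { "type": "string" },
      "incoterm": {
        "type": "document",
        "fields": {
          "inco1": { "type": "string" },
          "inco2": { "type": "string" }
        }
      }
    }
  }
}
```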
]
| How to create atlas index for object's property | 2022-11-23T11:59:48.219Z | How to create atlas index for object’s property | 1,103 |
null | [
"aggregation",
"atlas-search"
]
| [
{
"code": "",
"text": "Hi all,Is there any way to have the equivalent of multiple groups of logical OR in Atlas Search? I have a query case where a result must satisfy at least one condition of one group of criteria and at least one condition of another group of criteria. Having two “should” clauses would not work. Also it is very limiting that there cannot be a $match before the $search stage. One of the groups of OR requires that a document field is in an array of ObjectIds, but we cannot use $equals in Atlas Search as it accepts only one ObjectId, not an array.Many thanks for help.Kind regards,\nGueorgui",
"username": "Gueorgui_58194"
},
{
"code": "",
"text": "Can you share a sample document?",
"username": "Elle_Shwer"
},
{
"code": "{\n \"_id\": {\n \"$oid\": \"636bf48947c869baecdd19f6\"\n },\n \"title\": \"Fairy Tale\",\n \"authors\": [\n \"Stephen King\"\n ],\n \"price\": {\n \"$numberInt\": \"100\"\n },\n \"genres\": [\n \"Fiction\"\n ],\n \"identifiers\": [\n \"1668002175\",\n \"9781668002179\"\n ],\n \"averageRating\": {\n \"$numberDouble\": \"3.5\"\n },\n \"imageLinks\": {\n \"smallThumbnail\": \"http://books.google.com/books/content?id=jPzjzgEACAAJ&printsec=frontcover&img=1&zoom=5&source=gbs_api\",\n \"thumbnail\": \"http://books.google.com/books/content?id=jPzjzgEACAAJ&printsec=frontcover&img=1&zoom=1&source=gbs_api\"\n },\n \"quantityAvailableForReservation\": {\n \"$numberInt\": \"1\"\n },\n \"totalQuantityListed\": {\n \"$numberInt\": \"1\"\n },\n \"quantityReserved\": {\n \"$numberInt\": \"0\"\n },\n \"description\": \"Legendary storyteller ...\",\n \"shopSuppliedIdentifier\": \"9781668002179\",\n \"infoLink\": \"http://books.google.co.uk/books?id=jPzjzgEACAAJ&dq=isbn:9781668002179&hl=&source=gbs_api\",\n \"bookLibraryId\": {\n \"$oid\": \"636bf45047c869baecdd19ee\"\n },\n \"dateListedForSale\": {\n \"$date\": {\n \"$numberLong\": \"1668019337373\"\n }\n },\n \"shopInfo\": {\n \"shopId\": {\n \"$oid\": \"635b93aaa86789aa76c2bc8a\"\n },\n \"charityName\": \"PDSA\",\n \"addressLine1\": \"307 High Street\",\n \"addressLine2\": \"\",\n \"city\": \"Orpington\",\n \"postcode\": \"BR6 0NN\",\n \"phone\": \"01689 871243\",\n \"geolocation\": {\n \"type\": \"Point\",\n \"coordinates\": [\n {\n \"$numberDouble\": \"0.098295\"\n },\n {\n \"$numberDouble\": \"51.373795\"\n }\n ]\n },\n \"email\": \"[email protected]\"\n },\n \"bestseller\": [\n {\n \"bestsellerSource\": \"New York Times\",\n \"reviewLinks\": []\n }\n ]\n}\n",
"text": "Thank you Elle.A sample document looks like this:A user needs to be able to search for documents (books) that are in several shops by supplied array of shopIds (any of them should match) and also for books that fall in any one of the three categories (bestsellers, recommended - essentially 4 star or over rating, and new arrivals - essentially listed in the last 30 days) - the user may select one, two or all three categories and any should match (a book may be in all three). So both searches use logical OR. I can include a separate $match stage after $search, which seems the only option available given the constraints of Atlas Search. Is there any other way to do this in the $search stage?Many thanks.Kind regards,\nGueorgui",
"username": "Gueorgui_58194"
},
{
"code": " \"shopInfo\": {\n \"shopId\": {\n \"$oid\": \"635b93aaa86789aa76c2bc8a\"\n }\n",
"text": "Could you use filter ?So to clarify your specific ask is:Is there ever more oid’s listed?",
"username": "Elle_Shwer"
},
{
"code": "if (shopIds) {\n for (let shopId of shopIds) {\n searchStage.$search.compound.should.push(\n {\n equals: {\n path: \"shopInfo.shopId\",\n value: ObjectId(shopId)\n }\n }\n )\n }\n searchStage.$search.compound.minimumShouldMatch = 1;\n }\nif (category && category !== 'all') {\n if (category === \"bestsellers\") {\n searchStage.$search.compound.filter.push({\n exists: {\n path: \"bestseller\"\n }\n });\n } else if (category === \"recommended\") {\n searchStage.$search.compound.filter.push({\n exists: {\n path: \"recommended\"\n }\n });\n } else if (category === \"new arrivals\") {\n // new arrivals - 30 days before todays date; use 31 to account for time\n afterDate.setDate(todaysDate.getDate() - 31);\n searchStage.$search.compound.filter.push({\n range: {\n path: \"dateListedForSale\",\n gt: afterDate\n }\n });\n }\n }\n",
"text": "The front-end supplies an array of shopIds. So my original solution was:I am not sure how I can do it with filter for arrays of values (any should match)? The problem seems to me is that “equals” does not accept an array of values (similar to $in).For categories, I have used “exists” + “range”.But now I need to use “should” too because I need mutiple select of several categories at the same time.Many thanks.Kind regards,\nGueorgui",
"username": "Gueorgui_58194"
}
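One pattern worth trying here (a sketch, not something confirmed in this thread): compound operators can be nested, so each OR-group can become its own inner compound with should clauses and minimumShouldMatch: 1, and the groups are then combined under a top-level filter so that every group must be satisfied. The shopId value below is simply the one from the sample document; in practice each element of the supplied shopIds array would be pushed as a separate equals clause:

```js
// Sketch of a $search stage with two independent OR-groups
{
  $search: {
    compound: {
      filter: [
        {
          // group 1: at least one of the supplied shopIds must match
          compound: {
            should: [
              { equals: { path: 'shopInfo.shopId', value: ObjectId('635b93aaa86789aa76c2bc8a') } }
            ],
            minimumShouldMatch: 1
          }
        },
        {
          // group 2: at least one of the selected categories must match
          compound: {
            should: [
              { exists: { path: 'bestseller' } },
              { exists: { path: 'recommended' } },
              { range: { path: 'dateListedForSale', gt: new Date(Date.now() - 31 * 24 * 3600 * 1000) } }
            ],
            minimumShouldMatch: 1
          }
        }
      ]
    }
  }
}
```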
]
| How to implement multiple should clauses or match on the basis of array of ObjectIds in Atlas Search | 2022-12-02T13:27:54.340Z | How to implement multiple should clauses or match on the basis of array of ObjectIds in Atlas Search | 1,816 |