image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"node-js",
"next-js"
]
| [
{
"code": "",
"text": "Hello Guys\nwhen i create the project with ( npx create-next-app - - example mongodb Name of the project ) it will create the project for me but automatically the language is TypeScript how to change language to JavaScript ?im not a huge fan of TypeScript i still love JavaScript to use",
"username": "rebaz_jabar"
},
{
"code": "npm uninstall typescript @types/node @types/reactrm -rf types tsconfig.jsonmv pages/index.tsx pages/index.jsx:InferGetServerSidePropsType<typeof getServerSideProps>Homemv lib/mongodb.ts lib/mongodb.js:Promise<MongoClient>clientPromise",
"text": "The example does not have heavy use of typescript. so there are only a few occurrences you need to check.",
"username": "Yilmaz_Durmaz"
},
{
"code": "next-11npx create-next-app --example https://github.com/vercel/next.js/tree/next-11/examples/with-mongodb nextmongo\n",
"text": "an alternative is to crawl old examples in old versions in the github repository and use that link as the example.this following will install example in next-11 from a year ago:This got me the example code, but failed to install dependencies due to version changes to React. You will need to fix those dependencies by yourself.PS: even though the size is small, this will be a bit slower to read files from GitHub.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "what a beautiful explanation thanks alot dear",
"username": "rebaz_jabar"
}
]
| TypeScript to JavaScript | 2022-12-17T21:01:22.030Z | TypeScript to JavaScript | 3,136 |
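A hedged aside for readers of the thread above: once the TypeScript annotations are stripped, a plain-JavaScript lib/mongodb.js could look roughly like the sketch below. This is a simplified illustration, not the official with-mongodb example (which also caches the client promise on `global` in development), and the environment variable name is an assumption.

```js
// lib/mongodb.js — simplified sketch, assumes MONGODB_URI is set
import { MongoClient } from "mongodb";

const uri = process.env.MONGODB_URI;
const client = new MongoClient(uri);

// no ": Promise<MongoClient>" annotation needed in plain JavaScript
const clientPromise = client.connect();

export default clientPromise;
```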
null | [
"java",
"python",
"compass",
"spark-connector"
]
| [
{
"code": " mypkey = paramiko.RSAKey.from_private_key_file(ssh_file)\n tunnel = SSHTunnelForwarder((ssh_host,int(ssh_port)), ssh_username=ssh_user, ssh_pkey=mypkey, remote_bind_address=(jdbcHostname, int(jdbcPort))) \n tunnel.start()\n \n jdbcUrl = \"mongodb+srv://{0}:{1}@{2}/?retryWrites=true&w=majority\".format(jdbcUsername, jdbcPassword, jdbcHostname)\n\nmongodb+srv://***:***@mongodb_host/?retryWrites=true&w=majority\n\ncom.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]\n",
"text": "Hi, I can connect to MongoDB through SSH tunnel using MongoDB Compass, but when I try to connect from pyspark, setting up the tunnel using sshtunnel libraries it crashes.Without the tunnel I can connect OK, but not using the tunnel. Although the tunnel works fine with other datasources like MySQL but it doesn’t work when try to connect to Altlas.My code is:The error message is a Timeout… it seems like it doesn’t find the redirected port, but mongodb+srv does not allow port specification:Any help would be appreciated. Thanks in advance!",
"username": "Abel_Martinez1"
},
{
"code": "",
"text": "Hi @Abel_Martinez1 and welcome to the MongoDB community forum!!Thank you for sharing the above information. However, to understand the issue better, could you help me with a few understandings related to the issue being observed:I can connect to MongoDB through SSH tunnel using MongoDB Compass, but when I try to connect from pyspark, setting up the tunnel using sshtunnel libraries it crashes.Is this a local MongoDB deployment or an Atlas deployment ?\nCould you also help to provide the actual error messages you observed if there’s any?Although the tunnel works fine with other datasources like MySQL but it doesn’t work when try to connect to Altlas.Is the MySQL deployment and on-prem or is this deployed on a cloud provider ?\nAlso, can you confirm if you are successfully able to connect without the SSH tunnelling.Please refer to the official documentation on Spark Connector With MongoDB for more information.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]",
"text": "Hi, sorry for the delay in the response.BR,Abel",
"username": "Abel_Martinez2"
}
]
| Connect to MongoDB Atlas through SSH Tunnel from Spark | 2022-11-22T15:57:53.839Z | Connect to MongoDB Atlas through SSH Tunnel from Spark | 2,760 |
null | [
"queries",
"node-js"
]
| [
{
"code": "I have document like this in a collection called **Template** :\n",
"text": "I retrieve template collection by _id and by RevisionNum value.{\n_id: ObjectId(‘613f16eda156d84fd428510f’),\nTemplates: [\n{\nRevisionNum: “210719”,\nEffectiveDate: 2021-07-19T17:43:06.693+00:00,\nHardwareVer: “A”,\nSoftwareVer: “1.0”,\nWorkTasks: [“ObjectId(‘abce16eda156d84fd428510f’)”],\nHasTraveller: true\n},\n{\nRevisionNum: “210908”,\nEffectiveDate: 2021-07-19T17:43:06.693+00:00,\nHardwareVer: “AA”,\nSoftwareVer: “11.0”,\nWorkTasks: [ “ObjectId(‘wdfe16eda156d84fd428510f’)”,],\nHasTraveller: true\n},\n]\n}\nThe id inside the WorkTasks object array is from a separate collection called worktasks, see sample document below :{\n_id: ObjectId(‘abce16eda156d84fd428510f’),\nName: “Box Assembly”,\nCanPerform: [ “Benjamin”, “Shirley” ],\nInstructionFiles: [\n{\nName: “Assembly.pdf”,\nMimeType: “application/pdf”,\nPath: “path_to_the_file”,\n},\n]\n}I am trying to create a mongodb query using the native node.js driver to get a result like this:{\n“TemplateDetails”: {\n“HardwareVer”: “A”,\n“SoftwareVer”: “1.0”,\n“RevisionNum”: “210719”,\n“EffectiveDate”: 2021-07-19T17:43:06.693+00:00,\n“HasTraveller”: true,\n“WorkTasks”: [\n{\nName: “Box Assembly”,\nCanPerform: [ “Benjamin”, “Shirley” ],\n“InstructionFiles”: [\n{\n“Name”: “Assembly.pdf”,\n“MimeType”: “application/pdf”,\n“Path”: “path_to_the_file”\n},\n]\n},…//other workTask\n] },\n“RevisionNums”: [ “210719”, “210908” ]\n}",
"username": "Min_Thein_Win"
},
{
"code": "",
"text": "Please read Formatting code and log snippets in posts and update your sample documents so that we can cut-n-paste into our system.",
"username": "steevej"
}
]
| Join data collection MongoDB inside an array and two element result add array element | 2022-12-23T05:48:48.289Z | Join data collection MongoDB inside an array and two element result add array element | 2,222 |
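For readers of the thread above, a rough sketch of the kind of pipeline being asked about. Field and collection names are taken from the post, but it assumes WorkTasks holds real ObjectIds (the sample shows strings), and it has not been validated against the poster's documents, which is exactly why properly formatted samples were requested.

```js
// Hypothetical sketch: join one revision's WorkTasks ids to the worktasks collection
db.Template.aggregate([
  { $unwind: "$Templates" },
  { $match: { "Templates.RevisionNum": "210719" } },
  {
    $lookup: {
      from: "worktasks",
      localField: "Templates.WorkTasks",   // array of worktask _ids
      foreignField: "_id",
      as: "WorkTaskDocs"
    }
  },
  {
    $project: {
      _id: 0,
      TemplateDetails: {
        $mergeObjects: ["$Templates", { WorkTasks: "$WorkTaskDocs" }]
      }
    }
  }
])
```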
null | [
"queries",
"node-js",
"mongoose-odm"
]
| [
{
"code": "Fee.find(\n{\n_id: feeDocId,\n},\n(err, fee) => {\nif (fee[0] !== undefined) {\nif (!err && fee[0] != “”) {\nres.json(fee[0][“gstno”]);\n}\n}\n}\n)\nfired at api 2022-05-11T12:45:03+05:30 {}\nresponse at api{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'myFirstDatabase.fee',\n indexFilterSet: false,\n parsedQuery: { _id: [Object] },\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: { stage: 'IDHACK' },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 1,\n executionTimeMillis: 1,\n totalKeysExamined: 1,\n totalDocsExamined: 1,\n executionStages: {\n stage: 'IDHACK',\n nReturned: 1,\n executionTimeMillisEstimate: 0,\n works: 2,\n advanced: 1,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n keysExamined: 1,\n docsExamined: 1\n },\n allPlansExecution: []\n },\n command: {\n find: 'fee',\n filter: { _id: new ObjectId(\"623042ce5fc371ac74c9b371\") },\n projection: {},\n '$db': 'myFirstDatabase'\n },\n \n}\nresponse at api 2022-05-11T12:48:41+05:30\nAPI took this much time 217829\n",
"text": "Hi Everyone,\nI have a simple api using the mongoose find method, along with an _id filter:The problem is that the response time of this api keeps on increasing exponentially with time (high latency).Any ideas as to why the latency increases progressively? Thanks in advance…",
"username": "ROSHAN_NAIR"
},
{
"code": "_id_idfind()",
"text": "Hi @ROSHAN_NAIRIt’s been a while since you posted this question. Have you found out what’s causing this?Since your query is a simple find by _id, I wonder if the problem is not due to MongoDB (find by _id is highly optimized), but more to do with how the find() was called in your code?If this is still an issue, could you post a small, self contained code that can reproduce this situation consistently, along with your MongoDB version and your Mongoose version?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi @ROSHAN_NAIR\nDid you manage to find the solution. i am facing same issue.",
"username": "Faisal_Anwar"
}
]
| High latency (delayed response) in mongoose as api is called consecutively over time | 2022-05-11T13:55:54.671Z | High latency (delayed response) in mongoose as api is called consecutively over time | 2,756 |
null | [
"java"
]
| [
{
"code": "public String processQuery(String query, String endPoint) {\n try {\n OkHttpClient client = new OkHttpClient().newBuilder().build();\n MediaType mediaType = MediaType.parse(\"application/json\");\n RequestBody body = RequestBody.create(query, mediaType);\n String url = \"https://data.mongodb-api.com/app/data-HIDDEN/endpoint/data/v1/action/%s\".formatted(endPoint);\n Request request = new Request.Builder()\n .url(url)\n .method(\"POST\", body)\n .addHeader(\"content-type\", \"application/json\")\n .addHeader(\"Access-Control-Request-Headers\", \"*\")\n .addHeader(\"api-key\", \"HIDDEN\")\n .build();\n Response response = client.newCall(request).execute(); // execute the request\n LOGGER.debug(\"Process Query for Endpoint %s Called for %s %s - \\nQuery: \\n%s\".formatted(endPoint, serverName, serverId, query));\n\n String tempResponse = Objects.requireNonNull(response.body()).string();\n response.close();\n return tempResponse;\n } catch (IOException e) {\n LOGGER.warn(e.getMessage());\n return defaultEmpty;\n }\n}\n10:11:13.451 [Thread-1] DEBUG QueryHandler.class - Process Query for Endpoint insertOne Called for Test Room 939915140835467315 - \nQuery: \n{\"collection\":\"Jars\",\n \"database\":\"TikoJarTest\",\n \"dataSource\":\"PositivityJar\",\n \"document\": {\n \"serverID\" : \"9399514467315\",\n \"openingCondition\" : {\n \"hasMessageLimit\" : true,\n \"messageLimit\" : 3,\n \"creationDate\" : \"2022-09-21\",\n \"openingDate\" : \"2022-09-21\",\n \"serverChannelID\" : \"10211009899320\"\n },\n \"messages\" : [ ]\n}}\n\n10:11:13.451 [Thread-1] DEBUG QueryHandler.class - -- Check if Jar Created Post Response --\n\"Header missing: please add content-type: application/json or application/ejson to specify payload data types\"\n",
"text": "I am working with a classmate’s code from a project we worked on together last semester. Everything was working fine back then, but now that I come back to it, I’m having issues with our MongDB database.Here is the particular method where I’m getting hung up:And here’s what I’m seeing in the console:The project is using okhttp3. I will be happy to provide any other details you may need to know. I have almost zero experience with this, so please bear with me.",
"username": "Matthew_Brown1"
},
{
"code": "",
"text": "Oh, I should note:In MongoDB, the logs DO show that the database was pinged with a request.The database remains empty; the POST request is never completed, apparently.",
"username": "Matthew_Brown1"
},
{
"code": "",
"text": "Hi, have you already found a solution? I am facing the same problem.",
"username": "Elisabeth_Mayrhuber"
},
{
"code": "",
"text": "did you find the solution? I am facing exactly the same problem. Please let me know if you resolved it. I have created a SO question in case you want to answer it there. Thanks",
"username": "Syed_Ali1"
},
{
"code": "RequestBody body = RequestBody.create(json.getBytes(StandardCharsets.UTF_8 ));\nRequest request = new Request.Builder().get().url(httpUrl)\n .addHeader(\"content-type\", \"application/json\") // set content-type.\n .addHeader(\"api-key\", atlasToken) // the Atlas api key.\n .post(body) // Do a POST request with the given contents.\n .build(); // build the Request.\nResponse response = client.newCall(request).execute();\n",
"text": "I had this same problem with OkHttp3 as well as Apache HttpClient. The issue was fixed by doing the following:save the json as bytes with no MediaTypemodify the the RequestBuilderthen executeThe response code is now 201 indicating that an insert was successful.",
"username": "Richard_Hart1"
}
]
| Unable to POST, strange response | 2022-09-21T14:32:44.674Z | Unable to POST, strange response | 2,784 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "[\n {\n \"pid\": \"102\",\n \"e_date\": ISODate(\"2017-04-01T00:00:00.000+00:00\"),\n \"h_val\": 4,\n \n },\n {\n \"pid\": \"102\",\n \"e_date\": ISODate(\"2005-04-01T00:00:00.000+00:00\"),\n \"h_val\": 5,\n \n },\n {\n \"pid\": \"102\",\n \"e_date_1\": ISODate(\"2017-05-01T00:00:00.000+00:00\"),\n \"s_val\": 87,\n \"d_val\": 58\n },\n {\n \"pid\": \"102\",\n \"e_date_1\": ISODate(\"2016-09-01T00:00:00.000+00:00\"),\n \"s_val\": 81,\n \"d_val\": 62\n },\n {\n \"pid\": \"102\",\n \"e_date_1\": ISODate(\"2010-09-01T00:00:00.000+00:00\"),\n \"s_val\": 81,\n \"d_val\": 62\n },\n {\n \"pid\": \"101\",\n \"e_date\": ISODate(\"2016-04-01T00:00:00.000+00:00\"),\n \"h_val\": 5,\n \n },\n {\n \"pid\": \"101\",\n \"e_date_1\": ISODate(\"2011-05-01T00:00:00.000+00:00\"),\n \"s_val\": 87,\n \"d_val\": 58\n }, \n]\ndb.collection.aggregate([\n {\n \"$match\": {\n $expr: {\n $lte: [\n {\n $abs: {\n $dateDiff: {\n startDate: \"$e_date\",\n endDate: \"$e_date_1\",\n unit: \"year\"\n }\n }\n },\n 1\n ]\n }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$pid\",\n \"actions\": {\n \"$push\": \"$$ROOT\"\n },\n \"total\": {\n \"$sum\": 1\n }\n }\n }\n])\n",
"text": "I wonder if it it possible to group documents based on a common field and a condition.\nIn the following collection:I am trying to group documents based on pid and check if e_date_1 is within 1 year of e_date. I have tried match and group aggregates:But I get all documents grouped by pid not the ones that are within 1 year.",
"username": "WilJoe"
},
{
"code": "e_datee_date_1$dateDiffnull1$addFields{ \n \"$addFields\": {\n \"yearDiff\": {\n \"$dateDiff\": {\n \"startDate\": \"$e_date\",\n \"endDate\": \"$e_date_1\",\n \"unit\": \"year\"\n }\n },\n \"isWithinYear\": { \"$lte\": [\"$yearDiff\", 1]}\n }\n}\n",
"text": "Hello @WilJoe!Based on your example data, none of the documents have both an e_date and a e_date_1 field.So the result of the $dateDiff will be null … which happens to be less than or equal to 1 … so it is matching all your documents.Is that example data complete?For debugging (or even perhaps as part of your final pipeline) you could add these as fields, to make it easier to see what they are … using $addFields something like:",
"username": "Justin_Jenkins"
},
{
"code": "",
"text": "Hi Justin,\nThanks for checking this out. Yes, this is a sample of my collection. The point is comparing e_date and e_date_1 which are coming from different sources and group them by pid. I am trying to find the temporal relationship between the documents, that is why I have chosen different names for the date fields.",
"username": "WilJoe"
},
{
"code": "$dateDiff2_id: \"102\"total5_id: \"101\"total27pid$matche_datee_date_1totalactions",
"text": "I’m not really understanding your exact problem @WilJoe? Your query looks like it is probably okay.But I get all documents grouped by pid not the ones that are within 1 year.If the documents had both fields … so $dateDiff had something to compare, you’d get different results.Given your example data you’d get back 2 documents:Which are your 7 documents grouped by the pid since the $match isn’t doing anything.If you make sure at least one of the documents has both e_date and e_date_1 and those dates are more than a year apart you’d get a different total and different documents inside actions.",
"username": "Justin_Jenkins"
},
{
"code": "[\n {\n \"pid\": \"102\",\n \"e_date\": ISODate(\"2017-04-01T00:00:00.000+00:00\"),\n \"h_val\": 4, \n },\n{\n \"pid\": \"102\",\n \"e_date_1\": ISODate(\"2017-05-01T00:00:00.000+00:00\"),\n \"s_val\": 87,\n \"d_val\": 58\n },\n{\n \"pid\": \"102\",\n \"e_date_1\": ISODate(\"2016-09-01T00:00:00.000+00:00\"),\n \"s_val\": 81,\n \"d_val\": 62\n }\n]\n",
"text": "I am trying to first find the ones with same pid then create (possibly multiple) groups for the ones that are within 1 year of each other:\nThe output should be like:",
"username": "WilJoe"
}
]
| How to match by a condition and group by common field? | 2022-12-23T03:26:46.433Z | How to match by a condition and group by common field? | 1,059 |
[
"node-js",
"field-encryption"
]
| [
{
"code": "",
"text": "I am implementing Client-side field-level encryption (auto encryption) in my node-js application. Creating a secure client it was taking more than 30 seconds. Is there any documentation present related to this?",
"username": "Pruthwiraj_Nayak"
},
{
"code": "",
"text": "PS: I use a high-speed internet connection, so it is not an internet issue.",
"username": "Pruthwiraj_Nayak"
}
]
| Why secure connection for CSFLE taking so much of time? | 2022-12-21T15:35:25.621Z | Why secure connection for CSFLE taking so much of time? | 1,532 |
|
null | [
"queries",
"mongodb-shell"
]
| [
{
"code": "",
"text": "Team, in the old mongo shell you were able to pretty print using .pretty()I jsut installed the new version, which is mongosh, and pretty printing is by default. When I am looking at multiple records, I find it inconvenient. Does anyone know how to turn off pretty print by default in mongosh …or at all?Thank you",
"username": "Peter_Ebeid"
},
{
"code": "inspectCompact",
"text": "You might try the configuration variable inspectCompact documented here",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hmm, I just tried my suggestion and it’s not terribly effective.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I didn’t know about the mongo shell settings. I spent all of yesterday looking, so thank you very much!\nThere doesn’t seem to be an option that does exactly that. Perhaps the new shell design doesn’t allow for it",
"username": "Peter_Ebeid"
},
{
"code": "",
"text": "Yes, I think it may be so that there’s no way to format better the output in the mongosh interactive shell.\nPerhaps writing your own function in JS within the shell would output the way you want.",
"username": "Jack_Woehr"
}
]
| Stop Pretty default Mongo Shell (mongosh) | 2022-12-22T03:32:10.401Z | Stop Pretty default Mongo Shell (mongosh) | 2,293 |
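Building on the suggestion above about writing your own JS function in the shell, a small untested sketch that prints one document per line instead of the default pretty-printed output (the helper name is made up; EJSON and print are available in mongosh):

```js
// paste into mongosh; "compact" is a hypothetical helper name
function compact(cursor) {
  cursor.forEach(doc => print(EJSON.stringify(doc)));
}

compact(db.mycollection.find({ status: "active" }))
```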
null | [
"indexes"
]
| [
{
"code": "",
"text": "Hiwe have 3M records in collection. we want to populate array fields called “array1” and “array2” if doesn’t exist in document.\nwhat would be the best way to make search faster for documents which don’t have array fields “array1” and “array2”. We can use “$exist” but it’s taking longer timeThanks\nDhruvesh",
"username": "Dhruvesh_Patel"
},
{
"code": "db.collection.createIndex( { _id : 1 }, { \"partialFilterExpression\": { \"array1\": { \"$exists\": false }, \"array2\": { \"$exists\": false } } } )\n",
"text": "Hi @Dhruvesh_PatelTry to create a partial index :And do a batch update on the located _id from this index.",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "My gut feeling is that creating such an index would as slow as just doing the query for the update. I think that it could even be slower as the index has to be written to disk and then updated when the arrays are populated.It may however distribute the work over a longer period and reduce the CPU spike.But as with any performance issues, gut feeling is not always right.It would be nice to know.",
"username": "steevej"
}
]
| Best way to search documents if array field does not exist in document | 2022-12-22T21:27:56.467Z | Best way to search documents if array field does not exist in document | 2,491 |
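To illustrate the "batch update on the located _id" step mentioned in the thread above, a hedged sketch of what that update pass could look like (collection and field names are placeholders, and whether this is faster than a plain query is exactly the open question above):

```js
// populate the missing arrays only on documents lacking both fields
db.collection.updateMany(
  { array1: { $exists: false }, array2: { $exists: false } },
  { $set: { array1: [], array2: [] } }
)
```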
null | [
"aggregation"
]
| [
{
"code": "{\n _id: ObjectId,\n colors: {\n blue: true,\n red: true,\n green: true,\n yellow: false,\n }\n ...restOfData\n}\n[ 'Blue', 'Green', 'Blue' ]{\n _id: ObjectId,\n colors: [ 'Blue', 'Green', 'Blue' ],\n ...restOfData\n}\n$facet:{\n products_facet: [\n {\n $group: {\n _id: null,\n products: {\n $push: \"$$ROOT\",\n *This is when I want to manipulate colors\"\n },\n },\n },\n ]\n}\n",
"text": "In the middle of $facet aggregation, I am having an array of objects, each object looks like this:while I’m going through all records, I want to access the colors key and make it an array of strings equal to the value that is true like this: [ 'Blue', 'Green', 'Blue' ].End result should look like this:aggregation code:",
"username": "Aviv_azulay"
},
{
"code": "$facet: {\n products_facet: [\n {\n $group: {\n _id: null,\n products: {\n $push: {\n $mergeObjects: [\n \"$$ROOT\",\n {\n colors: {\n $filter: {\n input: {\n $objectToArray: \"$colors\"\n },\n as: \"color\",\n cond: {\n $eq: [ \"$$color.v\", true ]\n }\n }\n }\n }\n ]\n }\n }\n }\n }\n ]\n}\n",
"text": "Hi @Aviv_azulayTo transform the colors field into an array of strings, you can use the $objectToArray and $filter stages in the aggregation pipeline.Here is an example of how you can modify your aggregation pipeline to achieve the desired result:The $objectToArray stage converts the colors field from an object to an array of key-value pairs. The $filter stage then filters this array to only include the key-value pairs where the value is true. Finally, the $mergeObjects stage merges the resulting array of key-value pairs back into the original document, replacing the colors field with the desired array of strings.This will result in the colors field being transformed into an array of strings, as shown in the example in your question.",
"username": "Pavel_Duchovny"
}
]
| Returning an array of strings if the value of the key is true | 2022-12-22T17:51:32.076Z | Returning an array of strings if the value of the key is true | 782 |
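A follow-up note on the thread above: if plain strings are wanted rather than key/value pairs, the filtered array from the answer can additionally be mapped to its keys. A small sketch, not verified against the original pipeline:

```js
// colors ends up as e.g. ["blue", "red", "green"] — only keys whose value is true
colors: {
  $map: {
    input: {
      $filter: {
        input: { $objectToArray: "$colors" },
        as: "color",
        cond: { $eq: ["$$color.v", true] }
      }
    },
    as: "color",
    in: "$$color.k"
  }
}
```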
null | [
"spark-connector"
]
| [
{
"code": "",
"text": "During restart spark streaming job mongo db source is failing with following error. I am using azure cosmos db version 4.0Caused by: com.mongodb.MongoCommandException: Command failed with error 9 (FailedToParse): ‘Unrecognized field: startAfter’ on server xxxxxx.mongo.cosmos.azure.com:10255. The full response is {“ok”: 0.0, “errmsg”: “Unrecognized field: startAfter”, “code”: 9, “codeName”: “FailedToParse”}",
"username": "Luca_chengretta"
},
{
"code": "",
"text": "The MongoDB Spark Connector is not designed or tested against CosmosDB.",
"username": "Robert_Walters"
}
]
| Error using checkpoint with mongo spark connector 10.xx | 2022-12-22T19:18:32.950Z | Error using checkpoint with mongo spark connector 10.xx | 1,857 |
null | [
"swift"
]
| [
{
"code": "",
"text": "Hey everyone,I apologize if this is something that has been covered add nauseam here but I am absolutely confused after reading various tutorials and realm documentation. I should also offer for context that I am new(ish) to swiftUI and less than a week new to Realm and Mongo DB.I understand how to have the app create data on the fly (I think) but in my case I want to provide the data as either read only or download it from Atlas or Mongo DB one first run and then populate it into a local Realm DB.I have tried to load a bundled file into memory (incredibly horribly I am sure) and it gave an error that it needed to be updated. I saved out a new DB and updated the code and it solved that but now its throwing another error about it needed a valid location (which confuses me because if it cleared the error and its a valid realm file then how would it not know the location?)Anyway is there any tutorials floating around in relation to Swift or SwiftUI and bundling or pulling it in on first run? This might sound silly but I feel like I have read so much and tried so many different approaches that now I am very confused as to what to do.",
"username": "Matt_Ross"
},
{
"code": "",
"text": "This is a very good question! Either method will work but they have their trade-offs and use cases.First though, bundling a Realm is covered in the documentation Bundle A Realm and those docs work so you can almost copy and paste the code.The advantage of bundling a Realm with static data is… it’s static data. It makes ensuring the user has that data is easy, as it’s built-in and it’s fast (local) and re-usable. It requires no connection to the internet and there’s no cost involved. The downside is that it’s well…static data, so refreshing it is not possible without a new release of the app.Syncing the data would require the data to be pre-populated in MongoDB Atlas data so when the user first connects it syncs and downloads the data. Nice thing there is you can update and change that data on the fly. Downside is there’s a cost involved and every install of the App will be banging on the server to get said data. That could drive up costs significantly based on the number of users and quantity of data.If it’s truly static data, like State names or a list of wine grapes - those are not likely to change frequently, if ever, so bundling them makes the most sense. Also if you are relying on that data for say, a popup menu or some other static UI element, bundling is the way to go and has no cost.On the other hand, if the data may change or is in any way dynamic, updating the server data is a quick an easy way to update everyones app and push out those changes - but there is a cost involved.Hope that helps conceptually.If you’re stuck on a coding question, please create a question here or on StackOverflow and we’ll take a look. Keep the code short, include you troubleshooting a description of the specific issue.Jay",
"username": "Jay"
},
{
"code": "",
"text": "Thank you for the link. I does make it a lot more clear about using a bundled realm. This is more of a swift or apple question but when it says App Services ID is it referring to something on Apples end like an App ID or is it referring to something like an App ID within Mongo?If it’s the first option I have no idea where to find that lol as the few things I can find don’t seem to work. If it is the later isn’t the Realm DB a local thing so why would it need a Mongo App ID?",
"username": "Matt_Ross"
},
{
"code": "",
"text": "Replying to my own comment here.I was able to sort out the issue and I was just overlooking the fine print that says use realm.configure for a local version and was taking the code for a synced file. I did however get the sync to work though I don’t plan on using it. And I say I got it to work in that it opens the realm correctly but then terminated the app due to a permissions error lol.Either way I am going to try and find a document with the local file code to use that.",
"username": "Matt_Ross"
},
{
"code": "realm.configure",
"text": "I was able to sort out the issueYay!If you’re not using Sync the everything in the sync’ing section can be ignored. Also, if that’s the case you don’t have to use realm.configure - that will establish the location of the Realm file if you want it somewhere else, OR if you are using multiple Realm files, which in the case of a local-only use case may not be needed at all.Using a Local Only Realm is document hereAnd is just a matter oflet realm = try! Realm()\n…\nlet results = realm.objects(SomeObject.self) //get some objectsortry! realm.write {\nmyObject.someProperty = someValue //update an object\n}",
"username": "Jay"
},
{
"code": "class RealmManager: ObservableObject {\n \n func openBundledSyncedRealm() {\n let identifier = \"UnitsV2\"\n let config = Realm.Configuration(inMemoryIdentifier: identifier)\n let realm = try! Realm(configuration: config)\n print(\"The bundled realm URL is: \\(FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!.path)\")\n print(\"Successfully opened the bundled realm\")\n }\n}\n",
"text": "Man that is even easier… Thank you for all your help. I saw some of your stuff on SO and was hoping you would respond here. I have loads of questions but I will try and solve them my self first Next up I need to figure out using the data. I have confirmed it opened via the consle and I can link it to a view where it comes up as it should thing.value all without error but nothing shows. So now I need to figure out if it is my model that is off or what would cause it to not throw errors but now show the data.I am using this for the code to set the bundled realm in the app…Is that correct?",
"username": "Matt_Ross"
},
{
"code": "",
"text": "I am not sure if that’s correct or not as you have two different things going on.I believe the goal is to bundle a realm with your app - that would entail creating a Realm file locally and then dragging that file into your XCode project - therefore bundling it. See Bundle A RealmIf you want to use an In Memory realm that’s a different use case; if allows you to work with Realm data in a temporary fashion while the app is active; it does not persist the data. See In Memory Realm",
"username": "Jay"
},
{
"code": "",
"text": "Okay.That makes more sense. I took in memory as it loaded the bundled database into memory to manipulate (like sorting) or display but if that’s not the same then I very well might have things backwards.I did create the realm file and drag it into the project so I can see it there. It’s in its own folder labeled database and has the .real extension.I will check that link and see where I might have gotten confused.",
"username": "Matt_Ross"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Confused about Bundling Verses Downloading an Existing Realm DB | 2022-12-21T18:11:57.830Z | Confused about Bundling Verses Downloading an Existing Realm DB | 2,297 |
null | [
"node-js",
"production"
]
| [
{
"code": "",
"text": "The MongoDB Node.js team is pleased to announce version 4.13.0 of the mongodb package!NODE-4447: disable causal consistency in implicit sessions (#3479) (6566fb5)NODE-4834: ensure that MessageStream is destroyed when connections are destroyed (#3482) (8338bae)Reference: https://docs.mongodb.com/drivers/node/current/API: mongodbChangelog: HISTORY.mdWe invite you to try the mongodb driver immediately, and report any issues to the NODE project.",
"username": "neal"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Node.js Driver 4.13.0 Released | 2022-12-19T15:44:08.174Z | MongoDB Node.js Driver 4.13.0 Released | 2,101 |
null | [
"connector-for-bi"
]
| [
{
"code": "",
"text": "I’m trying to setup MongoDB BI Connector to use it for Tableau. I have already running mongosqld and it is connecting successfully from windows cmd to aws documentdb, but when I try to setup MongoDB ODBC Data Source it shows Connection failed [MongoDB][ODBC 1.4(w) Driver] can’t connect to MySQL server on ‘xxxxx’. Perhaps anyone has had the same issue and can advise how to solve it?\nThanks.",
"username": "Lina_Valiokaite"
},
{
"code": "",
"text": "Already found a solution by myself.",
"username": "Lina_Valiokaite"
},
{
"code": "",
"text": "If you ask the community, it would be helpful for others to show how you solved your problem. Thx.",
"username": "Karl_Fritz"
},
{
"code": "",
"text": "Hi Lina,\nCan you share what the solution is?\nI have the same issue.\nCheers\nPete",
"username": "Peter_Li"
},
{
"code": "",
"text": "Could you share what is the solution, please? I have the same problem. Thanks.",
"username": "Miguel_Rodriguez"
}
]
| MongoDB ODBC configuration | 2021-10-01T11:58:33.360Z | MongoDB ODBC configuration | 5,579 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "[\n{\n$lookup: {\n....,\nas: \"user\"\n\n},\n{\n$unwind: {\npath: \"$user\",\npreserveNullAndEmptyArrays: true\n}\n},\n{\n$project: {\nisValid: {\n$cond: [{$eq: [\"$user\", null], '123', 'abc']\n}\n}\n}\n]\n$cond: [ {'user': {exitst: true}}, true, false]",
"text": "How so I check if the unwind field is null?The example above is just to print 123 is the unwinded $user is null / undefined / does not exist, and ‘abc’ is if it exists. (it’s an example, as the MongoDB always returns ‘123’).I have also tried $cond: [ {'user': {exitst: true}}, true, false] and it didn’t work, always returned true alsoe",
"username": "Thiago_Bernardes1"
},
{
"code": "set_stage = { \"$set\" : {\n \"isValid\" : { \"$ne\" : [ { \"$size\" : \"$user\"} , 0 ] }\n} }\n",
"text": "I do not have an answer to your specific question. But a way to do what I think you want to do is compute isValid before $unwind using a $set stage and the $size of the array. Something like:",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to check if the $unwind field is null? | 2022-12-20T13:03:50.319Z | How to check if the $unwind field is null? | 1,159 |
null | [
"aggregation"
]
| [
{
"code": "{\n $lookup: {\n from: collection_name,\n let: { idToSearchFor: '$_id' },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n { $eq: ['$$idToSearchFor', '$another_id'] },\n { $eq: ['dummy', '$another_field'] },\n ],\n },\n },\n },\n { $limit: 1 },\n ],\n as: 'nice_name',\n },\n },\n",
"text": "Hi I came across with this discussion where it was advised to limit the lookup only to the first occurrence.\nIt got me wondering if I have a lookup with a limit set to one:Is Mongo clever enough to terminate after the first match, or still will search through the whole collection, collecting all the matches and then returning with the very first one?If it is searching through the whole collection, is there a way to terminate after the first match and save the time spent on looking through all the documents?Many thanks in advance!",
"username": "Lorand_Veger"
},
{
"code": "",
"text": "where it was advised to limit the lookup only to the first occurrencePlease read carefully. It is totally dependant on the use-case you are implementing. The use-case of the other post was to determine if there was 0 or at least one related document in the $lookup-ed collection. So you do not need to $lookup at all the related documents. So it is wise to stop when you found one. Not all use-case only needs to $lookup at only one. But if you only need one then $limit:1 is helpful.Is Mongo clever enough to terminate after the first matchYes it is.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you, yes that is how I meant it as well.\nThanks again for the confirmation :))",
"username": "Lorand_Veger"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Lookup performance with limit: 1 | 2022-12-21T14:55:05.974Z | Lookup performance with limit: 1 | 2,290 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "",
"text": "Hi everyone! I have a collection ‘users’ which contains documents having fields ‘name’ and ‘schoolid’. I have another collection ‘schools’ ,which contains ‘schoolid’ and ‘name’ (schoolid is not mongodb generated objectid). Now I want output which will have ‘name’ from users and another field ‘school’ which will contain ‘name’ from schools collection corresponding to the schoolid. Basically I have to join these collections based on schoolid(I am not using REF so I cannot use .populate). I am able to do so by following code: db.user.aggregate([{ $lookup:{from:“schools”,as:“school”,let:{schoolid:\"$schoolid\"},pipeline:[{$match:{$expr:{$and:[{$eq:[\"$schoolid\",\"$$schoolid\"]}]}}},{$project:{name:1,_id:0}}]}},])\nBut in this output I am getting ‘school’ as an array, containing object with a key ‘name’ and then value as the school name from ‘schools’ collection. But instead I want ‘school’ as a normal string which will directly display the school name and not along with the key ‘name’. I tried to use unwind but it simply converts that array into object. Please help me with the same. Thank you",
"username": "Nikita_Damani"
},
{
"code": "",
"text": "Please provide sample documents from both collections so that we can experiment with your data.Also provide documents showing the expected result.",
"username": "steevej"
},
{
"code": "",
"text": "Hi steevej, Thanks for replying.\nhere are the collections:Users collection\n[{\"_id\": ObjectId\n“name”: “Nikita”\n“schoolid”: “1”},\n{_id: ObjectId\n“name”: “Sunita”\n“schoolid”: “2”}]Schools collection\n[{\"_id\": ObjectId\n“name”: “Sheel niketan”\n“schoolid”: 1},{\"_id\": ObjectId\n“name”: “Vidya sagar”\n“schoolid”: “2”}3.Output I want\n[{“name”: “Nikita”, “schoolname”:“Sheel niketan”},\n{“name”: “Sunita”, “schoolname”:“Vidyasagar”}]",
"username": "Nikita_Damani"
},
{
"code": "",
"text": "Please read Formatting code and log snippets in posts and format your data in a usable format.",
"username": "steevej"
}
]
| Unwind for array of objects | 2022-12-20T13:31:02.239Z | Unwind for array of objects | 2,526 |
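For readers of the thread above, a hedged alternative sketch that returns the school name as a plain string via $lookup and $first. It assumes the schoolid values have matching types in both collections (the samples mix "1" and 1) and has not been validated against the poster's data:

```js
db.users.aggregate([
  {
    $lookup: {
      from: "schools",
      localField: "schoolid",
      foreignField: "schoolid",
      as: "school"
    }
  },
  {
    $project: {
      _id: 0,
      name: 1,
      schoolname: { $first: "$school.name" }   // first matching school's name as a string
    }
  }
])
```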
null | [
"dot-net"
]
| [
{
"code": "",
"text": "Hello everyone!I’m looking for a solution to the question above: how to automatically identify that my app lost connection with MongoDB and auto-reconnect to it?There is any event generated by the MongoDB C# library that helps with that? Or some kind of trigger to connection lost/connection failed?Any suggestions?\nThank you!",
"username": "Joao_Victor_Souza_Lima_Santos"
},
{
"code": "",
"text": "In my experience the C# Mongo driver handles lost connections/reconnections fine. Is there something specific/more complex you are looking to do?",
"username": "John_Wiegert"
},
{
"code": "",
"text": "I was wonder if there is any event triggered correlated to a disconnection or a connection failed, so in that way, I can handle to reconnect again to my server.I’m new at MongoDB, I don’t know if it already handle it. I couldn’t notice that on my debug and in my researches about the matter.Can you help me?",
"username": "Joao_Victor_Souza_Lima_Santos"
},
{
"code": "",
"text": "Here is where you can read about events in the driver. The main thing you might want to watch out for is retrying a command if it fails due to a network error, but overall the driver itself handles network blips fine in my experience.",
"username": "John_Wiegert"
},
{
"code": "",
"text": "Thank you John, I will take a look at it",
"username": "Joao_Victor_Souza_Lima_Santos"
}
]
| How to auto-reconnect to the MongoDB? C# | 2022-12-09T13:49:01.546Z | How to auto-reconnect to the MongoDB? C# | 2,464 |
[]
| [
{
"code": "",
"text": "Hello, im trying to get the audit logs from a database via API,\nI know i can manually download them on the UI but i want to automate the process.\nSo far I came up with this information:\nbut there’s some information that is required and I cannot find\nNeed to know where to find the HOSTNAME, PUBLIC KEY and PRIVATE KEY.\nIf anyone could point me in the right direction, it would be very appreciated.",
"username": "Juan_Fernandez_Bernt"
},
{
"code": "",
"text": "Hi @Juan_Fernandez_Bernt welcome to the community!I think you need to follow the steps outlined in Get Started with the Atlas Administration API.In the section titled Manage Programmatic Access to an Organization, it mentioned how to create the public & private keys. You would need to follow all the prerequisites outlined in that page before arriving at this step though.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "got the keys but im getting an error 400, invalid hostname.\nWhere am i supposed to find the hostname? I was trying with the name of the actual database.",
"username": "Juan_Fernandez_Bernt"
},
{
"code": "",
"text": "When im checking the metrics for my deployments i saw this:\n[databasename-shard-xx-xx.xxxxxxx.gcp.mongodb.net:27017] and i thought that was the hostname, cant seem to find anything that would point me in the right direction",
"username": "Juan_Fernandez_Bernt"
},
{
"code": "",
"text": "Hi @Juan_Fernandez_Bernt , I’m sorry you’re having that experience. Best way to get the hostname is to use Get Processes endpoint or “atlas process list -o json” command in the Atlas CLI. Host name example: atlas-vpjr9d-shard-00-03.hbmflhz.mongodb.netThanks,\nJakub",
"username": "Jakub_Lazinski"
},
{
"code": "",
"text": "It works now, but its getting the logs from the last 24 hours.\nHow to make it pull the last 5 minutes?",
"username": "Juan_Fernandez_Bernt"
},
{
"code": "",
"text": "I’m so sorry for getting back to you late. I’ve missed this follow up question. Currently there’s no way to filter out results by timeframe. You’d need to do some post processing after downloading the full logs. I’ll pass it as an improvement idea to my colleagues.Thanks,\nJakub",
"username": "Jakub_Lazinski"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to pull Audit logs with API? | 2022-10-05T17:33:13.187Z | How to pull Audit logs with API? | 3,017 |
|
null | [
"sharding"
]
| [
{
"code": "atlas logs download <hostname> <mongodb.gz|mongos.gz|mongosqld.gz|mongodb-audit-log.gz|mongos-audit-log.gz> [options]curl --user '{PUBLIC-KEY}:{PRIVATE-KEY}' --digest \\ --header 'Accept: application/gzip' \\ --request GET \"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/clusters/{HOSTNAME}/logs/mongodb.gz\" \\ --output \"mongodb.gz\"hostname",
"text": "Hi there,\nI am currently trying to automate my log-downloading process from Atlas. I am using atlas logs download <hostname> <mongodb.gz|mongos.gz|mongosqld.gz|mongodb-audit-log.gz|mongos-audit-log.gz> [options] for CLI\nand curl --user '{PUBLIC-KEY}:{PRIVATE-KEY}' --digest \\ --header 'Accept: application/gzip' \\ --request GET \"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/clusters/{HOSTNAME}/logs/mongodb.gz\" \\ --output \"mongodb.gz\" for API.My question is how can I get the hostname parameter using API and CLI?Thanks!",
"username": "Jie_Dong"
},
{
"code": "",
"text": "Hi @Jie_Dong,\nI’m sorry the experience is sub-optimal here and not as straight forward as we’d like to. The best way to get the hostname is to use Get Processes endpoint or “atlas process list -o json” command in the Atlas CLI. Host name example: atlas-vpjr9d-shard-00-03.hbmflhz.mongodb.net.\nThanks,\nJakub",
"username": "Jakub_Lazinski"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to get Atlas hostnames using CLI and API | 2022-12-20T12:13:56.216Z | How to get Atlas hostnames using CLI and API | 2,783 |
null | [
"transactions"
]
| [
{
"code": "",
"text": "Hello guys.I have a question regarding read preference in a transaction.Why can’t read preference be secondary when executing a transaction?I noticed in the documentation that it should be set the ‘primary’ read preference.In my code I tried to run the transaction with read preference ‘secondary’ to see the result.The result was the following exception:\ncom.mongodb.MongoClientException: Read preference in a transaction must be primaryWhy does it happen ? Why does it always need to be ‘primary’ in a transaction?Regards,\nCaio",
"username": "morcelicaio"
},
{
"code": "",
"text": "Hi @morcelicaioIn a replica set (i.e. a distributed system), things are eventually consistent, except in the primary. The transaction realistically doesn’t get replicated to the secondaries until it’s committed.I noticed in the documentation that it should be set the ‘primary’ read preference.Not really “should”, but “must” Please see Read Preference Use Cases for more details and examples.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi @kevinadi .I understand that the primary node receives the writes and then replicates to the secondary nodes.The documentation doesn’t say anything about transactions other than using read preference ‘primary’.That’s why I had this doubt.What is the reason for not being able to execute a transaction with read preference ‘secondary’ ?regards,\nCaio",
"username": "morcelicaio"
},
{
"code": "",
"text": "What is the reason for not being able to execute a transaction with read preference ‘secondary’ ?This was a deliberate design decision (see SERVER-33580). I’m not privy to all the technical details regarding this decision, but if I have to guess, allowing multi-document transactions on secondaries would be difficult from causality point of view, would be of lower performance, for not much gain in functionality since all writes must go to the primary anyway Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Why can’t read preference be ‘secondary’ in a transaction? | 2022-12-14T23:31:40.617Z | Why can’t read preference be ‘secondary’ in a transaction? | 1,976 |
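For reference, a minimal Node.js sketch of where the (required) primary read preference is passed when running a transaction. Database, collection, and document names are made up; treat it as an illustration of the constraint discussed above rather than a drop-in snippet:

```js
const session = client.startSession();
try {
  await session.withTransaction(async () => {
    // all reads inside the transaction are served by the primary
    await client.db("test").collection("orders").insertOne({ status: "new" }, { session });
    await client.db("test").collection("orders").findOne({ status: "new" }, { session });
  }, {
    readPreference: "primary",           // anything else is rejected by the driver
    readConcern: { level: "local" },
    writeConcern: { w: "majority" }
  });
} finally {
  await session.endSession();
}
```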
null | []
| [
{
"code": "",
"text": "Server:\nOracle Linux 8.5After installing “unixODBC” and downloading the files “libmdbodbca.so” and “libmdbodbcw.so” to the directory “/usr/local/lib”.Changed “/etc/odbc.ini” with connection information and parameter “DRIVER = /usr/local/lib/libmdbodbcw.so”.But the error is occurring:[unixODBC][Driver Manager]Can’t open lib ‘/usr/local/lib/libmdbodbcw.so’ : file not found\n[ISQL]ERROR: Could not SQLDriverConnect-rwxrwxrwx. 1 root root 35557176 Jul 12 23:21 libmdbodbca.so\n-rwxrwxrwx. 1 root root 35557632 Jul 12 23:21 libmdbodbcw.so",
"username": "Rafael_Carvalho1"
},
{
"code": ".",
"text": "Hi @Rafael_Carvalho1 ,The . at the end of the permissions suggest SElinux is at play.I would generally set SELinux permissive, test it, revert to enforcing and then work out what context needs to be applied.There are ways with more finesse I’m sure, but hopefully this can help.",
"username": "chris"
},
{
"code": "",
"text": "I did the total deactivation of SELINUX and the same error persisted.",
"username": "Rafael_Carvalho1"
},
{
"code": "ldd /usr/local/lib/libmdbodbcw.so",
"text": "Okay could be missing some dependencies.Try ldd /usr/local/lib/libmdbodbcw.so and you’ll likely see some missing shared libraries.I’m guessing it will be libssl1.0.0 but could be others.",
"username": "chris"
},
{
"code": "odbcinst.iniodbc.iniodbcinst",
"text": "this might be related to an older prodecure you might be following. newer systems seems to do it like this: driver name and file path are set in odbcinst.ini, then that name is used in odbc.ini, and also odbcinst command is executed at last. check this link and see if it applies to your issue:\n[zLinux] RHEL and SUSE: Configuring an Oracle datasource to use ODBC - IBM DocumentationOr, it might just be an issue of using spaces in that line. examples in this following link do not use spaces:\nConfigure the ODBC Source of Data (Linux) (oracle.com)",
"username": "Yilmaz_Durmaz"
},
{
"code": "ldd /usr/local/lib/libmdbodbcw.soldd /usr/local/lib/libmdbodbcw.so",
"text": "ldd /usr/local/lib/libmdbodbcw.so\nreturn:\nlinux-vdso.so.1 (0x00007ffdcb09f000)\nlibrt.so.1 => /lib64/librt.so.1 (0x00007f9a071fe000)\nlibpthread.so.0 => /lib64/libpthread.so.0 (0x00007f9a06fde000)\nlibm.so.6 => /lib64/libm.so.6 (0x00007f9a06c5c000)\nlibgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f9a06a07000)\nlibssl.so.10 => not found\nlibcrypto.so.10 => not found\nlibstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f9a06672000)\nlibdl.so.2 => /lib64/libdl.so.2 (0x00007f9a0646e000)\nlibgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f9a06255000)\nlibc.so.6 => /lib64/libc.so.6 (0x00007f9a05e8f000)\n/lib64/ld-linux-x86-64.so.2 (0x00007f9a094b4000)\nlibkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f9a05ba5000)\nlibk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f9a0598e000)\nlibcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f9a0578a000)\nlibkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f9a05579000)\nlibkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f9a05375000)\nlibcrypto.so.1.1 => /lib64/libcrypto.so.1.1 (0x00007f9a04e8c000)\nlibresolv.so.2 => /lib64/libresolv.so.2 (0x00007f9a04c74000)\nlibselinux.so.1 => /lib64/libselinux.so.1 (0x00007f9a04a49000)\nlibz.so.1 => /lib64/libz.so.1 (0x00007f9a04831000)\nlibpcre2-8.so.0 => /lib64/libpcre2-8.so.0 (0x00007f9a045ad000)Than i do:\nyum whatprovides libcrypto.so.10\nLast metadata expiration check: 1:42:20 ago on Wed 21 Dec 2022 09:07:32 AM -03.\ncompat-openssl10-1:1.0.2o-3.el8.i686 : Compatibility version of the OpenSSL library\nRepo : ol8_appstream\nMatched from:\nProvide : libcrypto.so.10compat-openssl10-1:1.0.2o-4.el8_6.i686 : Compatibility version of the OpenSSL library\nRepo : ol8_appstream\nMatched from:\nProvide : libcrypto.so.10ldd /usr/local/lib/libmdbodbcw.so\nlinux-vdso.so.1 (0x00007fffb2172000)\nlibrt.so.1 => /lib64/librt.so.1 (0x00007fdd07316000)\nlibpthread.so.0 => /lib64/libpthread.so.0 (0x00007fdd070f6000)\nlibm.so.6 => /lib64/libm.so.6 (0x00007fdd06d74000)\nlibgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007fdd06b1f000)\nlibssl.so.10 => /lib64/libssl.so.10 (0x00007fdd068b0000)\nlibcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007fdd0644e000)\nlibstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007fdd060b9000)\nlibdl.so.2 => /lib64/libdl.so.2 (0x00007fdd05eb5000)\nlibgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007fdd05c9c000)\nlibc.so.6 => /lib64/libc.so.6 (0x00007fdd058d6000)\n/lib64/ld-linux-x86-64.so.2 (0x00007fdd095cc000)\nlibkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007fdd055ec000)\nlibk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007fdd053d5000)\nlibcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007fdd051d1000)\nlibkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007fdd04fc0000)\nlibkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007fdd04dbc000)\nlibcrypto.so.1.1 => /lib64/libcrypto.so.1.1 (0x00007fdd048d3000)\nlibresolv.so.2 => /lib64/libresolv.so.2 (0x00007fdd046bb000)\nlibz.so.1 => /lib64/libz.so.1 (0x00007fdd044a3000)\nlibselinux.so.1 => /lib64/libselinux.so.1 (0x00007fdd04278000)\nlibpcre2-8.so.0 => /lib64/libpcre2-8.so.0 (0x00007fdd03ff4000)Now I don’t have the error anymore, thanks!",
"username": "Rafael_Carvalho1"
},
{
"code": "",
"text": "You’re welcome.I haven’t looked into why linux gives the File Not Found when the shared library has a missing dependency, just have hit it often enough in the past. Had to rule out SELinux first, although it probably gives a different error in any case.",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| [unixODBC][Driver Manager]Can't open lib '/usr/local/lib/libmdbodbcw.so' : file not found | 2022-12-19T18:50:35.099Z | [unixODBC][Driver Manager]Can’t open lib ‘/usr/local/lib/libmdbodbcw.so’ : file not found | 5,070 |
null | [
"compass",
"atlas-cluster"
]
| [
{
"code": "",
"text": "I am getting connection error when attempting to connect to Atlas from MongoDB Compass:\nHostname/IP does not match certificate’s altnames: Host: cluster0-shard-00-02.acib9.mongodb.net. is not in the cert’s altnames: DNS:*.mongodb.net, DNS:mongodb.netThe connection worked fine previously, I am using Compass Version 1.34.2 (1.34.2) on MacOS Monterey V12.6.Can anyone assist? Thanks\nGeorge",
"username": "George_89197"
},
{
"code": "mongosh",
"text": "@George_89197 would you be able to share a screenshot of your connection screen and/or more of your connection string? that error makes us think you are connecting with an invalid certificate but it’s hard to tell without more details.What version of Compass was working for you? Did you get this error when using a favorite connection after upgrading Compass? And finally, are you able to connect from the command line with mongosh?",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "Hi - many thanks for your response. I am using Compass Version 1.34.2 (1.34.2) on MacOS Monterey 12.6.The connection string copied from Atlas:\nmongodb://george:%3Cpassword%[email protected]:27017/test?readPreference=primaryPreferred\nScreen Shot 2022-12-21 at 9.41.57 am1412×1312 114 KB\nI dont have mongosh installed, so I have tried connecting from the command line.regards,\nGeorge",
"username": "George_89197"
},
{
"code": "",
"text": "The connect string from Atlas does not seem to be correct for the Compass version you are using\nCheck/tick appropriate version while choosing the connect using Compass method\nIt would have given you SRV type connect string",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I selected v1.12 or later which gives the connect string:\nmongodb+srv://george:@cluster0.jrzcmr8.mongodb.net/test\nbut that also does not work:\n\nScreen Shot 2022-12-21 at 8.05.12 pm1490×1390 198 KB\n",
"username": "George_89197"
},
{
"code": "",
"text": "May be some dns or firewall issue\nDid you try from another location or diferent network,wifi or mobile hotspot",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Many thanks for your input. I tried connecting via ExpressVPN and it works! I spent hours trying to resolve this issue, and I still dont understand why re-directing via ExpressVPN solves the problem.",
"username": "George_89197"
},
{
"code": "",
"text": "You have a uncooperative DNS server. Either the one you use directly or one upstream from it if it forwards queries. Not an uncommon problem seen here.If you want to NOT use a VPN then get the DNS server fixed, updated, replaced.For work, get the IT team involved, these are basic DNS records that have been around for years and should be used.For home, check that routers have up to date firmware/software. Try different DNS servers; google’s, OpenDNS’s. Or you can run your own DNS caching server.If you have a particularly obnoxious ISP who intercepts DNS then VPN might be the only recourse to using mongodb+srv:// URIs/",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Problem connecting to Atlas MongoDB from Compass | 2022-12-19T07:25:28.106Z | Problem connecting to Atlas MongoDB from Compass | 2,203 |
[]
| [
{
"code": "> atlas backups exports buckets create <bucketName> --cloudProvider AWS --iamRoleId <roleId>\n\nError: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/6398ab379ab5a87ebccd2144/backup/exportBuckets: 400 (request \"EXPORT_BUCKET_INVALID_BUCKET\") Export Bucket with ID null does not exist or is inaccessible from the role specified.\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Action\": [\n \"s3:GetBucketLocation\",\n \"s3:PutObject\"\n ],\n \"Resource\": <bucketARN>,\n \"Effect\": \"Allow\"\n }\n ]\n}\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"arn:aws:iam::536727724300:root\"\n },\n \"Action\": \"sts:AssumeRole\",\n \"Condition\": {\n \"StringEquals\": {\n \"sts:ExternalId\": \"c094be3b-5aca-49a0-838c-f30acfb95788\"\n }\n }\n }\n ]\n}\nGetBucketLocation",
"text": "I’m following the instructions to export backup snapshots to s3:I set up and authorized the iam role, and I’m trying to run this command:The role has the following statement:And the following trust relationsips:Through cloudwatch, I can see a successful GetBucketLocation event, and the assumed role is correct. So it seems that atlas is successfully assuming the role and finding the bucket.So I’m wondering what’s missing? Is there another permission other than the ones listed in the documentation that is needed?",
"username": "Denis_Lantsman1"
},
{
"code": " \"Resource\": [\"<bucketARN>\", \"<bucketARN>/*\"]\n",
"text": "Ah, after some experimentation I my own question:The IAM permission needs to apply to all the contents of the bucket, so you need to change the statement granting s3 permissions to:",
"username": "Denis_Lantsman1"
},
{
"code": "",
"text": "Hi @Denis_Lantsman1 ,Thank you for reaching out about the error you saw. You are correct that the permission needs to apply to all of the contents. I am glad that you were able to resolve that quickly and please let us know if you have any other questions in the future!Best regards and happy holidays,\nEvin",
"username": "Evin_Roesle"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Error when trying to set up export bucket | 2022-12-16T20:01:43.640Z | Error when trying to set up export bucket | 1,616 |
|
null | [
"node-js",
"crud"
]
| [
{
"code": "{\n acknowledged: true,\n modifiedCount: 1,\n upsertedId: null,\n upsertedCount: 0,\n matchedCount: 1\n}\nreturn client.db(infos.db).collection(infos.collection).updateOne(\n {machine_number: 192512},\n \n {\n $set: {comments : \"comm1\", designation : \"designation1\", \"new field\" : \"field\"},\n },\n { upsert: false, returnDocument: 'after' },\n \n )\n\"dependencies\": {\n \"express\": \"^4.18.2\",\n \"mongodb\": \"^4.13.0\"\n },\n \"devDependencies\": {\n \"nodemon\": \"^2.0.20\"\n }\n",
"text": "Hi,I’m in front of the following issue.\nWhen I use findOneAndUpdate() the result is an interfaceBelow the code and the package.json:Is it the expected result? On the documentation, I thought I heard the result should be the updated document.Thanks",
"username": "zak_aria"
},
{
"code": "",
"text": "@zak_aria the query you are running uses updateOne, afaik this doesnt return the new doc. However, findOneAndUpdate does: FindOneAndUpdateOptions | mongodb",
"username": "santimir"
},
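For completeness, a minimal sketch of the same update written with findOneAndUpdate in the Node driver 4.x used above; the filter and field values are taken from the question:

```js
const result = await client
  .db(infos.db)
  .collection(infos.collection)
  .findOneAndUpdate(
    { machine_number: 192512 },
    { $set: { comments: "comm1", designation: "designation1", "new field": "field" } },
    { upsert: false, returnDocument: "after" }
  );

// With driver 4.x the updated document is in result.value (null when nothing matched)
console.log(result.value);
```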
{
"code": "",
"text": "@santimir thanks. It’s bad copy/paste ",
"username": "zak_aria"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| findOneAndUpdate return interface and not the document | 2022-12-20T16:58:09.719Z | findOneAndUpdate return interface and not the document | 1,374 |
null | [
"node-js",
"mongoose-odm",
"compass"
]
| [
{
"code": "Shutting down... Error: Error connecting to db: connect ECONNREFUSED ::1:27017\nconst dotenv = require('dotenv');\nconst mongoose = require('mongoose');\nconst server = require('./app');\n\nprocess.on('uncaughtException', (err) => {\n console.log('UNCAUGHT EXCEPTION! Shutting down...', err);\n process.exit(1);\n});\n\nprocess.on('unhandledRejection', (err) => {\n console.log('UNHANDLED REJECTION! Shutting down...', err);\n process.exit(1);\n});\n\ndotenv.config({path: './config.env'});\nconst PORT = process.env.PORT || 5000;\n\nconst DB_PASS = process.env.DB_PASSWORD;\nconst DB_URL = process.env.DB_URL.replace('<password>', DB_PASS);\n\nmongoose\n .connect(DB_URL, {\n useNewUrlParser: true,\n useCreateIndex: true,\n useFindAndModify: false,\n useUnifiedTopology: true\n })\n .then(() => console.log('Database sucessfully connected'));\n\n exports.sessionUrl = DB_URL; \n\nserver.listen(PORT, () => {\n console.log(`Server running in ${process.env.NODE_ENV} mode on port ${PORT}`);\n});\n\n",
"text": "I am having an error in my node server. I am using mongoose as an ORM with node.js and I am trying to connect to MongoDB Atlas and at first I am able to connect to my cluster but after some seconds I am getting this error in my console as,I am also able to connect to my cluster with Compass. I am really getting confused why I am getting this error as this has nothing to do with my localhost MongoDB sever but still I have enabled my server at localhost too just to check whether it can solve anything or not but really it didn’t For more specifications:\nNode js version: v18.12.1\nMongoDB version: MongoDB shell version v4.4.18\nMongoose version: ^5.13.14,\nOS: Pop_os(22.04)here is my code of my server.js where I am trying to connect to my MongoDB cluster.",
"username": "Harit_Joshi"
},
{
"code": "",
"text": "It is one of the reason already mentioned in this forum.See Search results for 'ECONNREFUSED IPv6' - MongoDB Developer Community Forums",
"username": "steevej"
},
{
"code": "",
"text": "But I am trying to connect to my remote cluster what do IPv6 and this error have to do with that can you brief me on this?",
"username": "Harit_Joshi"
},
{
"code": ".env",
"text": "The error you are showing us seems to come from a local server, can you make sure that:Or just add more information. We will help you out.If everything is correct the error could still be mongoosejs.",
"username": "santimir"
}
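A small sketch of the kind of check being asked for here, based on the code in the question: fail fast if the variables from config.env are missing, and log the URL (password masked) that mongoose will actually use, so a silent fallback to a localhost connection cannot go unnoticed.

```js
dotenv.config({ path: './config.env' });

if (!process.env.DB_URL || !process.env.DB_PASSWORD) {
  throw new Error('DB_URL or DB_PASSWORD missing - check the path passed to dotenv.config()');
}

const DB_URL = process.env.DB_URL.replace('<password>', process.env.DB_PASSWORD);
console.log('Connecting to:', DB_URL.replace(/:[^@]+@/, ':****@')); // hide the password in logs

mongoose
  .connect(DB_URL, { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => console.log('Database successfully connected'))
  .catch((err) => console.error('Initial connection failed:', err));
```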
]
| MongoDB connection error | 2022-12-20T13:09:42.507Z | MongoDB connection error | 1,989 |
null | []
| [
{
"code": "new Schema({\n organisation_name: {\n type: String,\n unique: true,\n required: true,\n },\n members: [\n {\n user_id: {\n type: Schema.Types.ObjectId,\n required: true,\n unique: true,\n ref: \"User\",\n },\n role:{\ntype:String\n},\n _id: false,\n },\n ],\n});\nOrganisationSchema.create({\n organisation_name,\n members: [{ user_id: id, role: \"owner\" }],\n });\n E11000 duplicate key error collection: pmt.organisations index: members.user_id_1 dup key: { members.user_id: ObjectId('12331231231') }\n",
"text": "Hi, I’m getting a duplicate error when creating a new record.ModelCreating a doc queryErrorThank You",
"username": "sai_reddy"
},
{
"code": "user_iddb.collections.insertOne({_id:1})_idmembers.user_id: ObjectId('12331231231')user_id",
"text": "Indexed fields like user_id can not be duplicated.The same will happen if you run db.collections.insertOne({_id:1}), because _id is an index.From the value it seems to have been created manually (members.user_id: ObjectId('12331231231')).What you should do is to have the user_id value created automatically, and not accepting it from user input.Maybe someone else can complete this with some extra help.The MongoDB article for indexes is also quite good: https://www.mongodb.com/docs/manual/indexes/",
"username": "santimir"
},
{
"code": "",
"text": "Yeah, I understood the issue.It’s because of indexes, which create conflicts. so I resolved it by removing the index on that reference document.and ‘12331231231’ this id is manually written while creating above issueThanks",
"username": "sai_reddy"
},
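For anyone hitting the same error, the fix described above boils down to removing the unique constraint from the embedded reference and dropping the index that was already created; the collection and index names below are taken from the error message in the question.

```js
// In the Mongoose schema: keep the reference but drop `unique: true`
// user_id: { type: Schema.Types.ObjectId, required: true, ref: "User" }

// Mongoose does not remove existing indexes on its own, so drop the old one once, e.g. in mongosh:
db.organisations.dropIndex("members.user_id_1");
```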
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Duplicate Error | 2022-12-21T04:07:27.592Z | Duplicate Error | 1,859 |
null | [
"queries"
]
| [
{
"code": ".find(){\n <field>: 1000000\n}\ndb.<collection>.find({<field>: 1_000_000}); \n",
"text": "Hi, folks.I’m new to MongoDB, and had a question about using the .find() method with a query.If a value is stored within a document field as an integer, and that integer is large enough that it becomes hard to read; am I able to query for it with an integer that has underscore separators even if the value itself within document field doesn’t have underscore separators?Example Document:Example Query:",
"username": "Drew_Bentrott"
},
{
"code": "",
"text": "If you are using mongodb inside Node (like Mongo Shell), or anything that will be interpreted as JS, the answer is yes.If you query from the browser, should check the support in caniuse.com though. Also this is a nice little post about it. Numeric Separators in JavaScript - DEV Community 👩💻👨💻",
"username": "santimir"
},
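In other words, the separator exists only in the source code; the value that reaches the server is the same number, so the two queries below are identical (collection and field names are placeholders):

```js
1_000_000 === 1000000; // true - the underscore is purely visual

db.products.find({ price: 1_000_000 });
db.products.find({ price: 1000000 }); // same query
```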
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Underscores for Readability with Number Types | 2022-12-21T03:11:43.403Z | Underscores for Readability with Number Types | 911 |
[
"app-services-user-auth",
"kotlin",
"data-api"
]
| [
{
"code": "",
"text": "Dears,I have made Google Sign in, i been getting sometime Error Illegal base64 data at input byte 360. However, now i started getting the attached image.\n\nimage1872×582 114 KB\n\nCould Someone Help in resolving this issue.\nTo note i am using MongoDB Atlas Data API and Kotlin Language.\nThank you.",
"username": "Laith_Ayyat1"
},
{
"code": "",
"text": "Could someone help, i have rest the Keys in Google Cloud. I tried sigining in with different users, tried to delete the users on MongoDB nothing Worked??? please urgent help as i am stuck.Thank you",
"username": "Laith_Ayyat1"
},
{
"code": "",
"text": "Hello @Laith_Ayyat1I am in the same situation as you.\nDid you manage to find a solution/workaround?Best regards,\nAndrei",
"username": "Rasvan_Andrei_Dumitr"
}
]
| Google Auth Service Error 47 | 2022-11-06T10:41:54.619Z | Google Auth Service Error 47 | 2,088 |
|
null | [
"java"
]
| [
{
"code": "loginAsyncprivate void signInWithGoogle() {\n GoogleSignInOptions gso = new GoogleSignInOptions\n .Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)\n .requestIdToken(\"MY-KEY.apps.googleusercontent.com\")\n .build();\n GoogleSignInClient googleSignInClient = GoogleSignIn.getClient(this, gso);\n Intent signInIntent = googleSignInClient.getSignInIntent();\n ActivityResultLauncher<Intent> resultLauncher =\n registerForActivityResult(\n new ActivityResultContracts.StartActivityForResult(),\n new ActivityResultCallback<ActivityResult>() {\n @Override\n public void onActivityResult(ActivityResult result) {\n Task<GoogleSignInAccount> task =\n GoogleSignIn.getSignedInAccountFromIntent(result.getData());\n handleSignInResult(task);\n }\n });\n resultLauncher.launch(signInIntent);\n}\n\nprivate void handleSignInResult(Task<GoogleSignInAccount> completedTask) {\n try {\n if (completedTask.isSuccessful()) {\n GoogleSignInAccount account = completedTask.getResult(ApiException.class);\n String token = account.getIdToken();\n Credentials googleCredentials =\n Credentials.google(token, GoogleAuthType.ID_TOKEN);\n app.loginAsync(googleCredentials, it -> {\n if (it.isSuccess()) {\n Log.v(\"AUTH\",\n \"Successfully logged in to MongoDB Realm using Google OAuth.\");\n } else {\n Log.e(\"AUTH\",\n \"Failed to log in to MongoDB Realm: \", it.getError());\n }\n });\n } else {\n Log.e(\"AUTH\", \"Google Auth failed: \"\n + completedTask.getException().toString());\n }\n } catch (ApiException e) {\n Log.w(\"AUTH\", \"Failed to log in with Google OAuth: \" + e.getMessage());\n }\n}\nFailed to log in to MongoDB Realm: \n AUTH_ERROR(realm::app::ServiceError:47): illegal base64 data at input byte 345\n at io.realm.internal.jni.OsJNIResultCallback.onError(OsJNIResultCallback.java:60)\n at io.realm.internal.objectstore.OsApp.nativeLogin(Native Method)\n at io.realm.internal.objectstore.OsApp.login(OsApp.java:99)\n at io.realm.mongodb.App.login(App.java:360)\n at io.realm.mongodb.App$3.run(App.java:411)\n at io.realm.mongodb.App$3.run(App.java:408)\n at io.realm.internal.mongodb.Request$1.run(Request.java:57)\n at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)\n at java.util.concurrent.FutureTask.run(FutureTask.java:266)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)\n at java.lang.Thread.run(Thread.java:923)\n\n E/REALM_JNI: jni: ThrowingException 6, [json.exception.type_error.316] invalid UTF-8 byte at index 1012: 0xFF in /tmp/realm-java/realm/realm-library/src/main/cpp/io_realm_internal_objectstore_OsApp.cpp line 195, .\nE/REALM_JNI: Exception has been thrown: Unrecoverable error. [json.exception.type_error.316] invalid UTF-8 byte at index 1012: 0xFF in /tmp/realm-java/realm/realm-library/src/main/cpp/io_realm_internal_objectstore_OsApp.cpp line 195\n\nFailed to log in to MongoDB Realm: \n UNKNOWN(unknown:-1): Unexpected error\n io.realm.exceptions.RealmError: Unrecoverable error. 
[json.exception.type_error.316] invalid UTF-8 byte at index 1012: 0xFF in /tmp/realm-java/realm/realm-library/src/main/cpp/io_realm_internal_objectstore_OsApp.cpp line 195\n at io.realm.internal.objectstore.OsApp.nativeLogin(Native Method)\n at io.realm.internal.objectstore.OsApp.login(OsApp.java:99)\n at io.realm.mongodb.App.login(App.java:360)\n at io.realm.mongodb.App$3.run(App.java:411)\n at io.realm.mongodb.App$3.run(App.java:408)\n at io.realm.internal.mongodb.Request$1.run(Request.java:57)\n at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)\n at java.util.concurrent.FutureTask.run(FutureTask.java:266)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)\n at java.lang.Thread.run(Thread.java:923)\n \n at io.realm.internal.mongodb.Request$1.run(Request.java:61)\n at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)\n at java.util.concurrent.FutureTask.run(FutureTask.java:266)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)\n at java.lang.Thread.run(Thread.java:923)\n",
"text": "I followed the Docs of Google Sign-in (here: https://www.mongodb.com/docs/realm/sdk/java/examples/authenticate-users/#google-user)\nIt seems fine, but sometimes (1 of 4 on average), loginAsync fails.\nHere is my code:After choosing Google account to login with,\nSome of the errors I receive:or:",
"username": "Yarin_Belker"
},
{
"code": "",
"text": "Same here … did you manage to fix it ?",
"username": "Rasvan_Andrei_Dumitr"
},
{
"code": "",
"text": "Hello @Yarin_Belker ,I am having the same sporadical problem.\nDid you manage to find a solution?Best regards,\nAndrei",
"username": "Rasvan_Andrei_Dumitr"
}
]
| Google Auth sometimes doesn't work- base64 issues? | 2022-07-01T03:23:49.438Z | Google Auth sometimes doesn’t work- base64 issues? | 2,112 |
null | [
"dot-net",
"xamarin"
]
| [
{
"code": "Realms.Exceptions.RealmInvalidObjectException: 'Attempted to access detached row'",
"text": "Hello , I’m having problem when deleting from Realm, I’m using Xamarin Forms with ReactiveUI. Adding , Updating works fine , but Deleting not , I’m getting Realms.Exceptions.RealmInvalidObjectException: 'Attempted to access detached row' , I have a service class that have all methods for CRUD operations . There I’m creating instance of Realm object. My service class is singleton.",
"username": "Dragan_Blanusa"
},
{
"code": ".freeze()realm:yg/core6realm:ni/frozen-objects",
"text": "@Dragan_Blanusa I think you need to use .NET’s .freeze() method for this, especially for ReactiveUI. You can see a code snippet here:\nhttps://www.mongodb.com/article/realm-database-and-frozen-objectsA more detailed example can be seen here -<!--\n Make sure to assign one and only one Type (`T:`) label.\n Select reviewer…s if ready for review. Our bot will automatically assign you.\n -->\n\n## Description\n\nThis introduces support for \"frozen objects\". These are objects, collections, or Realms that have been pinned at a particular version. This allows moving them between threads, but also means that they are now immutable and will not be updated as changes to the database happen. Trying to open a write transaction on a frozen Realm or subscribe for changes on an object/collection will throw an exception.\n\nBelow is a high level overview of the newly added API\n\n```csharp\nvar liveRealm = Realm.GetInstance();\n\n// Get a frozen Realm from a live one\nvar frozenRealm = liveRealm.Freeze();\n\n// Check if a Realm is frozen\nfrozenRealm.IsFrozen; // true\n\n// Frozen Realms can be queried as normal\nvar frozenFoos = frozenRealm.All<Foo>().Where(f => f.Bar > 5);\n\n// Queries in frozen Realms are frozen themselves\nfrozenFoos.IsFrozen(); // true\n\n// Writing to a frozen Realm or subscribing for changes is not allowed\n// frozenRealm.Write(...) <-- throws\n// frozenRealm.RealmChanged += (...) <-- throws\n\n// Get a frozen query from a live one\nvar frozenFoos = liveRealm.All<Foo>().Where(f => f.Bar > 5).Freeze(); // extension method on IQueryable<T>\nfrozenFoos.IsFrozen(); // extension method on IQueryable<T> -> true\n\n// Objects in a frozen query are frozen themselves\nfrozenFoos().First().IsFrozen; // true\n\n// Get a frozen list from a live one\nvar liveFoo = liveRealm.Find<Foo>(\"123\");\nvar frozenBars = liveFoo.Bars.Freeze(); // extension method on IList<T>\nfrozenBars.IsFrozen(); // extension method on IList<T> -> true\n\n// Get a frozen object from a live one\nvar frozenFoo = liveFoo.Freeze();\nfrozenFoo.IsFrozen; // true\n\n// RealmObjects can be frozen in place too if you no longer need the live version\nliveFoo.FreezeInPlace();\nliveFoo.IsFrozen; // true\n```\n\nHaving multiple frozen Realms at different versions will cause file growth. To control the maximum number of pinned versions, a new property - `MaxNumberOfActiveVersions` is added to all `RealmConfiguration` classes. When set, Realm will keep track of the number of active versions and throw an exception if the limit is exceeded.\n\nFixes #1945, RNET-184\n\n## TODO\n\n* [x] Wire up MaxNumberOfActiveVersions\n* [x] Changelog entry\n* [x] Tests for collections\n* [x] Tests for MaxNumberOfActiveVersionsIts hard to say without seeing your code but this will likely help you",
"username": "Ian_Ward"
},
{
"code": "",
"text": "What should I do with frozen object if I can’t write and subscribe to it ?? My problem is when I remove something from server side and my mobile app is in background , and goes from background to foreground , app just crash with exception I wrote .",
"username": "Dragan_Blanusa"
},
{
"code": "RealmInvalidObjectException\n \n public override object Invoke(object obj, BindingFlags invokeAttr, Binder binder, object[] parameters, CultureInfo culture)\n {\n var ro = obj as RealmObjectBase;\n if (ro == null || ro.IsValid)\n {\n return _mi.Invoke(obj, invokeAttr, binder, parameters, culture);\n }\n \n if (ReturnType.IsValueType)\n {\n return Activator.CreateInstance(ReturnType);\n }\n \n return null;\n }\n \n ",
"text": "Can you isolate a small repro case that we can use to investigate the issue? Generally what happens is that when an object is deleted, the collection containing the object emits a collection changed event that will tell the container to remove an element. Unfortunately, some binding engines will attempt to read data from the removed item prior to the removal, which then results in RealmInvalidObjectException. We have some custom logic to handle this when using Xamarin.Forms:Essentially, what this does is provide a custom property getter that the XF binding engine uses that will return the property’s default value instead of throwing an exception, if the object is deleted. I imagine whatever framework you’re using for ReactiveUI is doing things slightly differently, which is why it doesn’t go through that piece of code. In any case, having a small repro project will allow us to explore ways to support it.",
"username": "nirinchev"
},
{
"code": "",
"text": "@Dragan_Blanusa You can open an issue in this repo if you are are able to provide a reproduction case:Realm is a mobile database: a replacement for SQLite & ORMs - Issues · realm/realm-dotnet",
"username": "Ian_Ward"
},
{
"code": "",
"text": "I just raised a feature request about this exact kind of behaviour -Using Realm with reactive extensions, 3rd party libraries like DynamicData, and UI binding allows one to create beautifully dynamic, responsive apps with very few lines of clean code. This works fantastically for us in all scenarios *except for when...It’s interesting to see that the Realm team went down the route of adding custom handling to ‘fix’ this issue for Xamarin Forms binding. To me that’s 1) Indicative of a larger issue, and 2) Doesn’t help scenarios where Xamarin Forms binding isn’t the cause (seemingly in @Dragan_Blanusa’s case, and also in scenarios I refer to in the feature request).Realm’s behaviour of completely invalidating objects on deletion to the point where you can’t even access properties (and hence can’t call any overriden GetHashCode() or Equals() methods, for starters) makes it fundamentally incompatible with various reactive components/libraries, and also certain UI binding approaches. Can we please (optionally!) allow a more pragmatic approach that would make life much easier?",
"username": "Matt_Craig"
},
{
"code": "",
"text": "We can create a mode of operation where accessing deleted object properties returns the default value for that type and is something we’re evaluating in relation to compiled data bindings. It is unlikely we’ll freeze objects prior to the deletion in the short-medium term as that is a fairly involved task with a lot of potential to be a footgun.",
"username": "nirinchev"
},
{
"code": "",
"text": "That may help in the specific data binding scenario, but unfortunately doesn’t help at all for the reactive pipeline scenarios.The workaround we have to employ currently is to duplicate all necessary properties on each Realm object, where the copied properties are [Ignored], and we try to ensure they are kept in sync with the persisted properties, and then use those duplicated properties in Equals() and GetHashCode() so that they may still be called by reactive library implementations without blowing up the whole app when an item is deleted.It’s an ugly, imperfect hack, but I can’t see a better way to do it. I’d love to hear any suggestions!Thanks.",
"username": "Matt_Craig"
},
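For anyone looking for the shape of that workaround, here is a rough sketch (class and property names are made up) of mirroring a persisted property into an [Ignored] copy so that Equals/GetHashCode stay usable after the object is deleted:

```csharp
public class Booking : RealmObject
{
    [PrimaryKey]
    public string BookingId { get; set; }

    // In-memory copy used by reactive pipelines; Realm does not persist it.
    [Ignored]
    public string BookingIdSnapshot { get; set; }

    protected override void OnManaged()
    {
        // Capture the value once the object becomes managed, while it is still valid.
        BookingIdSnapshot = BookingId;
    }

    public override bool Equals(object obj) =>
        obj is Booking other && BookingIdSnapshot == other.BookingIdSnapshot;

    public override int GetHashCode() => BookingIdSnapshot?.GetHashCode() ?? 0;
}
```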
{
"code": "",
"text": "Why does the reactive library need to access properties of deleted objects? If you do have a small example that highlights the issues, I’d be happy to take it for a spin and see if there’s something clever we can do to support it.",
"username": "nirinchev"
},
{
"code": "",
"text": "I wholeheartedly concur. I’ve used Realm on 5 projects now, and this issue shows up again and again in the app center crash logs. The only “solution” is to do as Matt says, to copy properties from the model to the view model.The question as to why something like ReactiveUI needs access to properties of a detached/delated object is mute. It does, as I can confirm from the unit testing exception screen in front of me.Please give us the option of not throwing an exception when we access the properties of a detached row. Really. It’s the only thing I dread when I recommend Realm for mobile projects.Thanks.\nTim",
"username": "Void"
},
{
"code": "",
"text": "As I mentioned above, happy to try and do something about it, if I can get a simple repro case. Right now we have a limitation that we can’t access deleted objects, meaning that even if we did remove the exception, all properties of the deleted object would be set to their default values, meaning we’re likely going to hit a different issue.If we can get a repro project with the concrete reactive framework you use, that would enable us to investigate a fully fledged solution rather than a band-aid.",
"username": "nirinchev"
},
{
"code": "",
"text": "If I’m using database I would like to at least be able to delete data from it. Realm just throw some awkward exceptions so I should really dig under the hood to undestand what to do with this “Attempted to access detached row”.",
"username": "Alexey_Shavrukov"
},
{
"code": "",
"text": "Not sure how this is related. You can delete data from the database and accessing a detached row means you’re trying to read data from the database, that is no longer there. If you want to be able to reference realm object data after deleting it, you need to extract this data into an in-memory object.",
"username": "nirinchev"
},
{
"code": " private async Task UpdateRecentlyBookingIdList(string bookingId)\n {\n var recentlyOpenedBookings = RealmInstance.All<RecentlyOpenedBookingModel>().AsEnumerable().ToList();\n if (recentlyOpenedBookings?.Any(s => s.BookingId == bookingId)??false) return;\n var model = new RecentlyOpenedBookingModel()\n {\n BookingId = bookingId\n };\n await RealmInstance.WriteAsync(() => RealmInstance.Add(model, true));\n if (recentlyOpenedBookings is {Count:>=MaxRecentlyBookingCount})\n await RealmInstance.WriteAsync(() => RealmInstance.Remove(recentlyOpenedBookings[0]));\n }\n",
"text": "Well that’s my code. So from time to time I’m getting this exception. Maybe you could give me a clue at least on what I’m doing wrong? I don’t use realm objects in the app I always convert them to non realm objects cause I don’t need all this “smartish” sync thing Realm does. I’m using it as a plain database (not sure that it’s really possible).",
"username": "Alexey_Shavrukov"
},
{
"code": "",
"text": "Can you post the stacktrace of the exception you’re getting as well? The code for deleting a booking seems reasonable to me, though there are multiple optimizations that’ll make it more efficient. Are you getting the exception from this part of the code or from somewhere else?PS: you can totally use Realm as a dumb database, though you’ll be missing out on a lot of the benefits it brings to the table.",
"username": "nirinchev"
},
{
"code": "SIGABRT: Attempted to access detached rowCrashVersion 1.8.1896 (1.8.1896)1 user1 report \n\n* NativeException.ThrowIfNecessary (System.Func`2[T,TResult] overrider)\n\n* ObjectHandle.RemoveFromRealm (Realms.SharedRealmHandle realmHandle)\n\n* Realm.Remove (Realms.IRealmObjectBase obj)\n\n* PersistentStorage+<>c__DisplayClass9_0`1[T].<RemoveAsync>b__0 ()\n\n* Realm+<>c__DisplayClass71_0.<WriteAsync>b__0 ()\n\n* Realm.WriteAsync[T] (System.Func`1[TResult] function, System.Threading.CancellationToken cancellationToken)\n\n* PersistentStorage.RemoveAsync[T] (T item)\n\n* RecentlySearchBookingsStorage.UpdateRecentlyBookingIdList (System.String bookingId)\n\n* RecentlySearchBookingsStorage.AddRecentlyOpenedBookingIdAsync (System.String bookingId)\n\n* ArchiveViewModel.NavigateToBookingDetails (System.String id, System.String paxName)\n\n* ArchiveViewModel.ShowDetails (CTeleport.Mobile.Core.ViewModels.Archive.BaseBookingViewModel booking)\n\n* AsyncMethodBuilderCore+<>c.<ThrowAsync>b__7_0 (System.Object state)\n\n* NSAsyncSynchronizationContextDispatcher.Apply ()\n\n* (wrapper managed-to-native) UIKit.UIApplication.xamarin_UIApplicationMain(int,string[],intptr,intptr,intptr&)\n\n* UIApplication.UIApplicationMain (System.Int32 argc, System.String[] argv, System.IntPtr principalClassName, System.IntPtr delegateClassName)\n\n* UIApplication.Main (System.String[] args, System.String principalClassName, System.String delegateClassName)\n\n* Application.Main (System.String[] args)\n",
"text": "It’s really hard to reproduce it so that’s all what I have for now is a stacktrace from our AppCenter logs:p.s. yes the code requires optimiztion indeed. But the goal for now to make it stable.",
"username": "Alexey_Shavrukov"
},
{
"code": "MaxRecentlyBookingCountrecentlyOpenedBookings[0]",
"text": "There seem to be two major points of vulnerability from a quick look at your code snippet.",
"username": "Andy_Dent"
},
{
"code": "",
"text": "",
"username": "Alexey_Shavrukov"
},
{
"code": "",
"text": "How can it be incorrect? I’ve just read it from the dbThis is async code. It seems you’re assuming only ONE invocation of this can be running but my understanding of async/await is there is no guarantee you only have one thread doing it.\nEven within a single mobile app, can users inadvertently “stutter” a button or could animation code cause enough delay that more than on delete happens?I would be logging thread identifiers to eliminate this.",
"username": "Andy_Dent"
}
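If overlapping invocations do turn out to be the cause, one common way to rule them out is to serialize the write path with a SemaphoreSlim. This is only a sketch around the method from the earlier post:

```csharp
private readonly SemaphoreSlim _recentBookingsLock = new SemaphoreSlim(1, 1);

private async Task UpdateRecentlyBookingIdList(string bookingId)
{
    await _recentBookingsLock.WaitAsync();
    try
    {
        // Re-read recentlyOpenedBookings and perform the add/remove here, so two rapid taps
        // can never race on the same RecentlyOpenedBookingModel instance.
    }
    finally
    {
        _recentBookingsLock.Release();
    }
}
```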
]
| Deleting From RealmDB - XamarinForms | 2020-11-02T07:16:25.702Z | Deleting From RealmDB - XamarinForms | 7,770 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "[\n {\n region: \"EMEA\",\n startZipCode: 0,\n endZipCode: 10,\n lastDelivery: ISODate(\"2020-01-01T10:58:18Z\")\n },\n {\n region: \"EMEA\",\n startZipCode: 10,\n endZipCode: 20,\n lastDelivery: ISODate(\"2020-12-01T11:13:26Z\")\n },\n {\n region: \"EMEA\",\n startZipCode: 20,\n endZipCode: 30,\n lastDelivery: ISODate(\"2020-11-01T11:13:26Z\")\n },\n {\n region: \"NA\",\n startZipCode: 30,\n endZipCode: 40,\n lastDelivery: ISODate(\"2020-01-01T11:13:26Z\")\n },\n {\n region: \"NA\",\n startZipCode: 40,\n endZipCode: 50,\n lastDelivery: ISODate(\"2020-01-01T11:13:26Z\")\n },\n {\n region: \"EMEA\",\n startZipCode: 50,\n endZipCode: 60,\n lastDelivery: ISODate(\"2020-01-01T11:13:26Z\")\n }\n]\n// Range [0-30) belonging to EMEA\n// Grouping together contiguous zip codes (3 first docs)\n// And choosing the higher delivery date in the group\n {\n region: \"EMEA\",\n startZipCode: 0,\n endZipCode: 30,\n lastDelivery: ISODate(\"2020-12-01T11:13:26Z\")\n },\n// Range [30, 50) belonging to NA\n// Grouping together contiguous zip codes (2 docs)\n// And choosing the higher delivery date in the group (they're the same)\n {\n region: \"NA\",\n startZipCode: 30,\n endZipCode: 50,\n lastDelivery: ISODate(\"2020-01-01T11:13:26Z\")\n },\n// Range [50, 60) belonging to EMEA (single document in the range, no grouping needed)\n {\n region: \"EMEA\",\n startZipCode: 50,\n endZipCode: 60,\n lastDelivery: ISODate(\"2020-01-01T11:13:26Z\")\n }\n",
"text": "Hi, I tried so long to figure out how to aggregate some documents via a pipeline but I was not able to get close to the result.I have documents representing zip code ranges belonging to a region and a “last delivery date” for each range of zip codes. I would like to obtain one document for each contiguous range belonging to a region, with the date of the last delivery that happened for that range.Currently I am doing it programmatically in my app, but doing that with an aggregation would bring to a major performance improvement given that it is much more efficient to don’t have to process all the documents on the client side.Here a sample of my dataset:Here an example of the output I would expect:Thank you very much in advance for your help.",
"username": "Leyo_Cory"
},
{
"code": "[\n {\n '$group': {\n '_id': '$region', \n 'starts': {\n '$push': '$startZipCode'\n }, \n 'ends': {\n '$push': '$endZipCode'\n }\n }\n }, {\n '$set': {\n 'mins': {\n '$filter': {\n 'input': '$starts', \n 'as': 's', \n 'cond': {\n '$not': {\n '$in': [\n '$$s', '$ends'\n ]\n }\n }\n }\n }\n }\n }, {\n '$lookup': {\n 'from': 'coll', \n 'let': {\n 'id': '$_id', \n 'mins': '$mins'\n }, \n 'pipeline': [\n {\n '$match': {\n '$expr': {\n '$and': [\n {\n '$eq': [\n '$region', '$$id'\n ]\n }, {\n '$in': [\n '$startZipCode', '$$mins'\n ]\n }\n ]\n }\n }\n }\n ], \n 'as': 'test'\n }\n }, {\n '$unwind': {\n 'path': '$test'\n }\n }, {\n '$replaceRoot': {\n 'newRoot': '$test'\n }\n }, {\n '$facet': {\n 'EMEA': [\n {\n '$match': {\n 'region': 'EMEA'\n }\n }, {\n '$graphLookup': {\n 'from': 'coll', \n 'startWith': '$startZipCode', \n 'connectFromField': 'endZipCode', \n 'connectToField': 'startZipCode', \n 'as': 'zips', \n 'restrictSearchWithMatch': {\n 'region': 'EMEA'\n }\n }\n }\n ], \n 'NA': [\n {\n '$match': {\n 'region': 'NA'\n }\n }, {\n '$graphLookup': {\n 'from': 'coll', \n 'startWith': '$startZipCode', \n 'connectFromField': 'endZipCode', \n 'connectToField': 'startZipCode', \n 'as': 'zips', \n 'restrictSearchWithMatch': {\n 'region': 'NA'\n }\n }\n }\n ]\n }\n }, {\n '$set': {\n 'aze': {\n '$concatArrays': [\n '$EMEA.zips', '$NA.zips'\n ]\n }\n }\n }, {\n '$unwind': {\n 'path': '$aze'\n }\n }, {\n '$set': {\n 'result': {\n '$reduce': {\n 'input': '$aze', \n 'initialValue': [], \n 'in': {\n 'region': '$$this.region', \n 'startZipCode': {\n '$concatArrays': [\n '$$value.startZipCode', [\n '$$this.startZipCode'\n ]\n ]\n }, \n 'endZipCode': {\n '$concatArrays': [\n '$$value.endZipCode', [\n '$$this.endZipCode'\n ]\n ]\n }, \n 'lastDelivery': {\n '$concatArrays': [\n '$$value.lastDelivery', [\n '$$this.lastDelivery'\n ]\n ]\n }\n }\n }\n }\n }\n }, {\n '$replaceRoot': {\n 'newRoot': '$result'\n }\n }, {\n '$set': {\n 'startZipCode': {\n '$min': '$startZipCode'\n }, \n 'endZipCode': {\n '$max': '$endZipCode'\n }, \n 'lastDelivery': {\n '$max': '$lastDelivery'\n }\n }\n }\n]\n[\n {\n region: 'EMEA',\n startZipCode: 0,\n endZipCode: 30,\n lastDelivery: ISODate(\"2020-12-01T11:13:26.000Z\")\n },\n {\n region: 'EMEA',\n startZipCode: 50,\n endZipCode: 60,\n lastDelivery: ISODate(\"2020-01-01T11:13:26.000Z\")\n },\n {\n region: 'NA',\n startZipCode: 30,\n endZipCode: 50,\n lastDelivery: ISODate(\"2020-01-01T11:13:26.000Z\")\n }\n]\n",
"text": "Hi @Leyo_Cory,First of all, thank you for the brain teasing exercice! I have to say that this has been a challenge!My solution IS NOT perfect! I couldn’t find a solution to generalise the regions. They are hardcoded in my pipeline… Hopefully you don’t have 765 regions and you can generate the pipeline from your regions if they are dynamic.Buckle up, here we go:Result:Just to explain a bit the pipeline…I’m not going to lie, I was like this!It is 99% sure that there is a more direct and optimized solution (with generic regions) but it’s the best I could do with the limited time I have!Cheers,\nMaxime ",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thank you very much Maxime! It required a lot of time for me to digest this solution, a real artwork! The regions are indeed not so many hence I could integrate this aggregation quite nicely into the application. A great performance improvement thanks to the reduced data transfer latency (~213%).",
"username": "Leyo_Cory"
},
{
"code": "",
"text": "I’ll take the 213% optimization !And it would be awesome to see the actual optimized version with generic regions. I’d gladly read that pipeline.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "$facet",
"text": "And it would be awesome to see the actual optimized version with generic regions.I ended up manually pasting region names in the $facet stage (21 regions in total). It would be great to have a more generic stage but from my side the aggregation looks acceptable atm.",
"username": "Leyo_Cory"
}
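Since the region names end up hardcoded in the $facet stage, one way to keep the pipeline maintainable is to generate that stage from the list of regions before running the aggregation. This is only a sketch reusing the $graphLookup from Maxime's pipeline; the later concatenation of the per-region results can be generated from the same list.

```js
const regions = ["EMEA", "NA" /* ...the other region names */];

const facetStage = {
  $facet: Object.fromEntries(
    regions.map((region) => [
      region,
      [
        { $match: { region } },
        {
          $graphLookup: {
            from: "coll",
            startWith: "$startZipCode",
            connectFromField: "endZipCode",
            connectToField: "startZipCode",
            as: "zips",
            restrictSearchWithMatch: { region },
          },
        },
      ],
    ])
  ),
};
// splice facetStage into the pipeline in place of the hardcoded EMEA/NA facets
```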
]
| Pipeline to group zip codes by region | 2022-12-14T15:12:37.868Z | Pipeline to group zip codes by region | 1,464 |
null | [
"golang"
]
| [
{
"code": "",
"text": "Hey, I am implementing a pagination query which contains huge data from different collection(s). I have used lookup for joining the data. The problem I am facing currently is that, When I perform count operation on the data it takes around 1s40min. Is there a way to perform pagination count in less time.Looking forward for the replyThank you!",
"username": "Sahildeep_Kaur"
},
{
"code": "",
"text": "@Sahildeep_Kaur could you share your queries & indices details?",
"username": "K.V"
}
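Without the actual query it is hard to be specific, but a pattern that often helps slow paginated $lookup queries is to fetch one page and the total count in a single aggregation, and to run the $lookup only on the page being returned. Collection and field names below are placeholders:

```js
db.orders.aggregate([
  { $match: { status: "active" } },   // filter as early as possible so an index can be used
  {
    $facet: {
      data: [
        { $skip: 0 },
        { $limit: 20 },               // only the current page...
        { $lookup: { from: "customers", localField: "customerId",
                     foreignField: "_id", as: "customer" } }  // ...pays for the join
      ],
      total: [{ $count: "count" }]    // total matching documents, without the $lookup
    }
  }
]);
```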
]
| How to reduce the execution time for the pagination query? | 2022-12-21T04:10:37.988Z | How to reduce the execution time for the pagination query? | 1,409 |
null | [
"queries"
]
| [
{
"code": "primary_idssecondary_idswildcard_ids$addToSet",
"text": "Hi, I want to update three fields namely primary_ids, secondary_ids and wildcard_ids. All are of type array of int. I am doing this using $addToSet operator. In case, If any of the field does not exist or is null, then the query is returning error.Is there a way to check which field is null before $addToSet or any other method to implement this.Thank you!",
"username": "Sahildeep_Kaur"
},
{
"code": "$addToSet$addToSet",
"text": "Hi @Sahildeep_Kaur,I am doing this using $addToSet operator. In case, If any of the field does not exist or is null, then the query is returning error.Could you provide the following information:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hey, Thank you but I have done it.",
"username": "Sahildeep_Kaur"
},
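For anyone else running into this, one way to make such an update tolerate missing or null arrays is an update with an aggregation pipeline (MongoDB 4.2 or newer), coalescing each field to an empty array before the set union, which behaves like $addToSet. The filter and values below are placeholders:

```js
db.collection.updateOne({ _id: someId }, [
  {
    $set: {
      primary_ids:   { $setUnion: [{ $ifNull: ["$primary_ids",   []] }, [101]] },
      secondary_ids: { $setUnion: [{ $ifNull: ["$secondary_ids", []] }, [202]] },
      wildcard_ids:  { $setUnion: [{ $ifNull: ["$wildcard_ids",  []] }, [303]] }
    }
  }
]);
```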
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| Update elements in array | 2022-11-21T12:30:54.426Z | Update elements in array | 1,303 |
null | [
"indexes"
]
| [
{
"code": "{$and: [ \n {\n $and: [...], $or: [...]\n },\n {\n $or: [\n {\n a:{$in: [...]}, \n b:{$in: [...]}\n }\n //.... etc\n ]}\n]}\n",
"text": "Hello,\nIn the process of diagnosing query slowness I detected that the wrong index is being used.\nHinting towards the right index yielded the wanted performance, Trying building different variations of the wanted index had no impact on the index selection.\nAs a temporary attempt - I have deleted the ‘wrong’ index and then the index was selected correctly. What happened next had left me confused, as when I restored the ‘wrong’ index, the ‘right’ index is still selected.I am familiar with the fundamental concept in MongoDB that fields order within an index affects index selection, query planning and execution, but I cannot find any mention to that order of indexes affects index selection - which is my conclusion from my observation in this case.Is this a documented behavior of MongoDB? Is it a bug? I will be grateful for any help.Some additional info",
"username": "Daniel_Pleshkov"
},
{
"code": "hint()",
"text": "Hi @Daniel_Pleshkov welcome to the community!As you’ve discovered, it is entirely possible for the query planner to chosse a suboptimal index. This could be the case when two or more indexes can satisfy the query, and the planner can’t tell for sure which one will be more performant than others since superficially they look similar. See my answer on StackOverflow for details on this.Once an index is selected, it will be cached. What you’re seeing is the effect of this cache. Removing and readding the index basically invalidates the cache. Note that you can manually flush the plan cache as well. See Query Plan Cache Methods.To recap, either remove the unecessary index, or use hint() if you know something that the query planner doesn’t Best regards\nKevin",
"username": "kevinadi"
},
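For reference, the two operations mentioned above look like this in mongosh; the collection, query, and index names are placeholders:

```js
// Force a specific index for one query
db.items.find({ a: { $in: [1, 2] }, b: { $in: [3, 4] } }).hint("a_1_b_1");

// Flush the cached plans for the collection so the planner re-evaluates all candidate indexes
db.items.getPlanCache().clear();
```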
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Unexpected index selection behavior - order of indexes matters? | 2022-12-20T15:00:12.406Z | Unexpected index selection behavior - order of indexes matters? | 1,280 |
null | []
| [
{
"code": ".project.findOne.findfindOnefind,findfindOnefindOne",
"text": "I need just one property of one document.I see that I can’t .project after .findOne, only after .find.I’m concerned about the data egress (although it is small) and the performance of the request.With my understanding of how MongoDB queries a collection - once it finds a satisfying record in findOne it immediately returns. But using find, it needs to search through the whole collection - right?If it matters, I am searching on _id (a single _id), so since I’m searching on an index, how would it affect the performance to use find (and then I can use project) vs findOne.On the other hand, the size of the full document which would be returned from the findOne query is very small.So I’m just not sure which is best. Thanks for any help.Edit to add: Digging around, I found the findOptions and set the limit to 1 (and went ahead and just used the project in the findOptions). I think this will work well for me.",
"username": "Michael_Jay2"
},
{
"code": "findOnefind,findOne()find()find()",
"text": "With my understanding of how MongoDB queries a collection - once it finds a satisfying record in findOne it immediately returns. But using find, it needs to search through the whole collection - right?No it doesn’t if you have the right index backing the query (although it also depends on what your query is). See Create Indexes to Support Your QueriesAs you have discovered, findOne() is basically a shorthand to do a find() with limit 1, execute the cursor, and return the single document. See this line in particular for the Node driver for the actual implementation.In contrast, find() returns a cursor. The query is not executed until you act on the cursor.Best regards\nKevin",
"username": "kevinadi"
}
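In the Node driver the projection can also be passed in the options of findOne itself, which is equivalent to the find + limit(1) + project approach found above; the names here are illustrative:

```js
const doc = await db.collection("users").findOne(
  { _id: userId },
  { projection: { email: 1, _id: 0 } }   // return only the property that is needed
);
```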
]
| Performance of find & project vs findOne and then grabbing the property on the result | 2022-12-15T00:34:46.272Z | Performance of find & project vs findOne and then grabbing the property on the result | 2,999 |
null | [
"aggregation",
"dot-net",
"android",
"xamarin"
]
| [
{
"code": "string connString = \"mongodb+srv://\" + user + \":\" + secret + \"@\" + server + \"/\"+ project +\"?retryWrites=true&w=majority\";\n\nvar settings = MongoClientSettings.FromConnectionString(connString);\nsettings.ServerApi = new ServerApi(ServerApiVersion.V1);\nvar mongoClient = new MongoClient(settings);\nstring connString = \"mongodb+srv://\" + user + \":\" + secret + \"@\" + server + \"/\"+ project +\"?retryWrites=true&w=majority\";\nvar mongoClient = new MongoClient(connString);\n",
"text": "Hi, I am finding an error whenever I try to instantiate a MongoClient or a MongoClientSettings object (from MongoDB.Driver) when passing the connection string to their constructor.The whole error message that I can get a hold off says:{System.TypeInitializationException: The type initializer for ‘MongoDB.Driver.Core.Misc.DnsClientWrapper’ threw an exception. —> System.AggregateException: Error resolving name servers (Object reference not set to an instance of an object.) (Could not find file “/etc/resolv.conf”) —> System.NullReferenceException: Object reference not set to an instance of an object. at DnsClient.NameServer.QueryNetworkInterfaces () [0x0004c] in <519bb9af32234e5dba6bd0b076a88151>:0 at DnsClient.NameServer.ResolveNameServers (System.Boolean skipIPv6SiteLocal, System.Boolean fallbackToGooglePublicDns) [0x0005e] in <519bb9af32234e5dba6bd0b076a88151>:0 — End of inner exception stack trace — at DnsClient.NameServer.ResolveNameServers (System.Boolean skipIPv6SiteLocal, System.Boolean fallbackToGooglePublicDns) [0x00192] in <519bb9af32234e5dba6bd0b076a88151>:0 at DnsClient.LookupClient…ctor (DnsClient.LookupClientOptions options, DnsClient.DnsMessageHandler udpHandler, DnsClient.DnsMessageHandler tcpHandler) [0x000bc] in <519bb9af32234e5dba6bd0b076a88151>:0 at DnsClient.LookupClient…ctor (DnsClient.LookupClientOptions options) [0x00000] in <519bb9af32234e5dba6bd0b076a88151>:0 at DnsClient.LookupClient…ctor () [0x00006] in <519bb9af32234e5dba6bd0b076a88151>:0 at MongoDB.Driver.Core.Misc.DnsClientWrapper…ctor () [0x00006] in :0 at MongoDB.Driver.Core.Misc.DnsClientWrapper…cctor () [0x00000] in :0 — End of inner exception stack trace — at MongoDB.Driver.Core.Configuration.ConnectionString…ctor (System.String connectionString) [0x00000] in :0 at MongoDB.Driver.MongoUrlBuilder.Parse (System.String url) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0 at MongoDB.Driver.MongoUrlBuilder…ctor (System.String url) [0x00006] in <27273b0202ea4c34867b683ed7b21818>:0 at MongoDB.Driver.MongoUrl…ctor (System.String url) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0 at MongoDB.Driver.MongoClientSettings.FromConnectionString (System.String connectionString) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0 at --REDACTED FILENAME–The lines of code that cause this error are:It will break at MongoClientSettings.FromConnectionString(connString);\nOr this will also break:This issue is only happening when I run this code in a Xamarin project. I have the exact same code in a .NET 6 project and everything works fine there. The Xamarin project on the other hand targets .NET Standard 2.0 (also tried 2.1 with same issue) and I’ve been debugging it in an Android 12 device. The driver versions I’ve tested are 2.4.4 and 2.18.0. IDE is Visual Studio Version 17.2.0I appreciate any help I could get here. Thank you.",
"username": "Santiago_Suarez"
},
{
"code": "",
"text": "Hi @Santiago_Suarez, as this appears to be environmental (Xamarin/Android) I’d recommend opening a new ticket at https://jira.mongodb.org/projects/CSHARP/ so the Driver team can investigate this as a potential bug.",
"username": "alexbevi"
},
{
"code": "{System.TypeInitializationException: The type initializer for ‘MongoDB.Driver.Core.Misc.DnsClientWrapper’ threw an exception. —> \n System.AggregateException: Error resolving name servers (Object reference not set to an instance of an object.) (Could not find file “/etc/resolv.conf”) —> \n System.NullReferenceException: Object reference not set to an instance of an object. \n at DnsClient.NameServer.QueryNetworkInterfaces () [0x0004c] \n in <519bb9af32234e5dba6bd0b076a88151>:0 \n at DnsClient.NameServer.ResolveNameServers (System.Boolean skipIPv6SiteLocal, System.Boolean fallbackToGooglePublicDns) [0x0005e] \n in <519bb9af32234e5dba6bd0b076a88151>:0 \n— \nEnd of inner exception stack trace \n— \n at DnsClient.NameServer.ResolveNameServers (System.Boolean skipIPv6SiteLocal, System.Boolean fallbackToGooglePublicDns) [0x00192] \n in <519bb9af32234e5dba6bd0b076a88151>:0 at DnsClient.LookupClient…ctor (DnsClient.LookupClientOptions options, DnsClient.DnsMessageHandler udpHandler, DnsClient.DnsMessageHandler tcpHandler) [0x000bc] \n in <519bb9af32234e5dba6bd0b076a88151>:0 at DnsClient.LookupClient…ctor (DnsClient.LookupClientOptions options) [0x00000] \n in <519bb9af32234e5dba6bd0b076a88151>:0 at DnsClient.LookupClient…ctor () [0x00006] \n in <519bb9af32234e5dba6bd0b076a88151>:0 at MongoDB.Driver.Core.Misc.DnsClientWrapper…ctor () [0x00006] in :0 \n at MongoDB.Driver.Core.Misc.DnsClientWrapper…cctor () [0x00000] in :0 — End of inner exception stack trace — at MongoDB.Driver.Core.Configuration.ConnectionString…ctor (System.String connectionString) [0x00000] in :0 \n at MongoDB.Driver.MongoUrlBuilder.Parse (System.String url) [0x00000] \n in <27273b0202ea4c34867b683ed7b21818>:0 at MongoDB.Driver.MongoUrlBuilder…ctor (System.String url) [0x00006] in <27273b0202ea4c34867b683ed7b21818>:0 \n at MongoDB.Driver.MongoUrl…ctor (System.String url) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0 \n at MongoDB.Driver.MongoClientSettings.FromConnectionString (System.String connectionString) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0\n at --REDACTED FILENAME–\n",
"text": "You may want to format errors as code blocks as it will improve readability.Error resolving name servers … (Could not find file “/etc/resolv.conf”)I am not expert over android and just wandering around. could that be the problem? and if may I ask: why don’t you use string extrapolation to form your connection url?",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thank you, I’ve opened this ticket and hopefully someone will take up on it.\nhttps://jira.mongodb.org/browse/CSHARP-4436",
"username": "Santiago_Suarez"
},
{
"code": "",
"text": "Thank you for the suggestion, I didn’t know how it would end up looking. Shame I can’t edit the top entry to make it more readable.Regarding the “/etc/resolv.conf” I have no idea since that error comes from some MongoDriver component I don’t have access to, nor can I control. I don’t create nor need that file myself, but it is most likely at least very close to the issue underneath.As for the string interpolation, that was just a fast and dirty example, but I’m pretty sure the connString variable works since the same code works in dotnet 6.",
"username": "Santiago_Suarez"
},
{
"code": "mongodb://mongodb+srv://",
"text": "Hi, @Santiago_Suarez,The .NET/C# Driver uses DnsClient.NET, a third-party DNS library, for resolving SRV and TXT records. Unfortunately it appears that increased security restrictions around DNS introduced in Android Oreo prevent DnsClient.NET from working correctly. See issue #17 in DnsClient.NET’s issue tracker. Given that the issue is closed, it doesn’t appear that a fix is forthcoming.You can work around this issue by using the standard connection string format (AKA mongodb://) rather than the DNS seedlist format (AKA mongodb+srv://). A and CNAME record lookups use .NET’s built-in capabilities and don’t require any third-party libraries. (Unfortunately these built-in capabilities do not include SRV and TXT record lookups, which is why we depend on DnsClient.NET for these record types.)Please let us know if this workaround is successful for you.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "var client = new MongoClient(\"mongodb://user:[email protected]:27017\");",
"text": "Hi @James_Kovacs, thank you for your reply. I have tried to no avail the suggested workaround and I keep getting the same exception when running this line:var client = new MongoClient(\"mongodb://user:[email protected]:27017\");I apologize if I’m misunderstanding something from the provided documentation.",
"username": "Santiago_Suarez"
},
{
"code": "mongodb://name:password@machine1:port1,machine2:port2,machine3:port3\nmongo+srv://main_set_address",
"text": "127.0.0.1:27017this will try to connect tolocalhost, in this case to your Android’s own network. it won’t work. you need to give addresses to your mongodb servers. something like this:you can say, DNS resolution is basically resolves mongo+srv://main_set_address to this format to find each member’s address.check this for more info on connection strings: Connection String URI Format — MongoDB Manual",
"username": "Yilmaz_Durmaz"
},
{
"code": "mongodb://",
"text": "The fastest way to get working connaction string in mongodb:// format:",
"username": "Yilmaz_Durmaz"
},
{
"code": "mongo+srv://main_set_address+srv",
"text": "Yes, I know the local address won’t work, I also tried the actual address. But even if I use the local address it shouldn’t break the execution with the System.TypeInitializationException, just throw a connection error.I cannot use mongo+srv://main_set_address because that would not be a standard connection string format as mentioned in the workaround from @ James_Kovacs. As I understood from his reply it should not include the +srv but I’m not sure of what else it entails. Also, just removing it from the one provided from the cluster’s Atlas page does not work in normal .dotnet client where it otherwise does work.",
"username": "Santiago_Suarez"
},
{
"code": "",
"text": "please read my answer, just one above your last response to get the “standard” connection string you need. that should get your app up and running. report back if your problem continues even with that.",
"username": "Yilmaz_Durmaz"
},
{
"code": "string standardString = $\"mongodb://{user}:{secret}@{shard0},{shard1},{shard2}/?ssl=true&replicaSet={replicaShard}&authSource=admin&retryWrites=true&w=majority\";\nvar settings = MongoClientSettings.FromConnectionString(standardString);\n_client = new MongoClient(settings);\n",
"text": "Sorry, I skipped the select version when reading your reply. Thank you for your help.So the code ends up looking like this:This still works fine in dotnet 6, but still throws System.TypeInitializationException in Xamarin.",
"username": "Santiago_Suarez"
},
{
"code": "",
"text": "the error you are getting now might have something different. can you share what comes out now? try formatting to look nicers as I did before ",
"username": "Yilmaz_Durmaz"
},
{
"code": "{System.TypeInitializationException: The type initializer for 'MongoDB.Driver.Core.Misc.DnsClientWrapper' threw an exception. ---> System.AggregateException: Error resolving name servers (Object reference not set to an instance of an object.) (Could not find file \"/etc/resolv.conf\") ---> System.NullReferenceException: Object reference not set to an instance of an object.\n at DnsClient.NameServer.QueryNetworkInterfaces () [0x0004c] in <519bb9af32234e5dba6bd0b076a88151>:0 \n at DnsClient.NameServer.ResolveNameServers (System.Boolean skipIPv6SiteLocal, System.Boolean fallbackToGooglePublicDns) [0x0005e] in <519bb9af32234e5dba6bd0b076a88151>:0 \n --- End of inner exception stack trace ---\n at DnsClient.NameServer.ResolveNameServers (System.Boolean skipIPv6SiteLocal, System.Boolean fallbackToGooglePublicDns) [0x00192] in <519bb9af32234e5dba6bd0b076a88151>:0 \n at DnsClient.LookupClient..ctor (DnsClient.LookupClientOptions options, DnsClient.DnsMessageHandler udpHandler, DnsClient.DnsMessageHandler tcpHandler) [0x000bc] in <519bb9af32234e5dba6bd0b076a88151>:0 \n at DnsClient.LookupClient..ctor (DnsClient.LookupClientOptions options) [0x00000] in <519bb9af32234e5dba6bd0b076a88151>:0 \n at DnsClient.LookupClient..ctor () [0x00006] in <519bb9af32234e5dba6bd0b076a88151>:0 \n at MongoDB.Driver.Core.Misc.DnsClientWrapper..ctor () [0x00006] in <b4b75168888d44e1ac6b514244ab7a7d>:0 \n at MongoDB.Driver.Core.Misc.DnsClientWrapper..cctor () [0x00000] in <b4b75168888d44e1ac6b514244ab7a7d>:0 \n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Configuration.ConnectionString..ctor (System.String connectionString) [0x00000] in <b4b75168888d44e1ac6b514244ab7a7d>:0 \n at MongoDB.Driver.MongoUrlBuilder.Parse (System.String url) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0 \n at MongoDB.Driver.MongoUrlBuilder..ctor (System.String url) [0x00006] in <27273b0202ea4c34867b683ed7b21818>:0 \n at MongoDB.Driver.MongoUrl..ctor (System.String url) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0 \n at MongoDB.Driver.MongoClientSettings.FromConnectionString (System.String connectionString) [0x00000] in <27273b0202ea4c34867b683ed7b21818>:0 \n at --FILENAME-- }\n",
"text": "Sure,It seems to me that it is still using a dns lookup, despite the standard connection string format.",
"username": "Santiago_Suarez"
},
{
"code": "Could not find file \"/etc/resolv.conf\"",
"text": "Could not find file \"/etc/resolv.conf\"this line still boils hot. parts of libraries tries to fetch from that file and Android does not have that. it is not something specific to MongoDB.Driver either. I have met an issue on github related to MAUI. anyway, I get to this topic from a year and a half ago: c# - Last version of MongoDB.Driver not working for Android 8+: Could not find file “/etc/resolv.conf” - Stack Overflow.Interpreting that post, they resolved with an older driver version at the time, I suggest you try lowering your driver version, a major version at a time, and see if you can find a working one within your project’s dotnet version. and also from @James_Kovacs answer above, depending on another library, you should not expect a resolution on newer versions anytime soon (maybe never, but I feel optimistic)",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I’ve rolled back the driver version to 2.4.4 and it seems to work using the standard format. By 2.5.0 it doesn’t break but doesn’t work (timeouts). I’ll keep an eye on any further releases but I am a bit inclined to rework this into using the Atlas Data API. Still not sure if I’ll change it I but can say at least that the 2.4.4 version of the driver does work with the standard format and Android 12 and Xamarin forms 5",
"username": "Santiago_Suarez"
},
{
"code": "",
"text": "Hey did you ever find another fix to this? I cannot roll back my drivers but am running into the same issue now. Did you just wind up using the Atlas Data API instead of the Drivers?",
"username": "George_Ely"
},
{
"code": "",
"text": "Hi, I’d recommend the Data API if rolling back is not an option… I moved to working on some other features in the meantime but I do want to eventually move to using http calls to the Data API. I find it more platform agnostic in case I ever need to move away from Xamarin for some reason. For now my connection is still the Mongo Driver in 2.4.4.",
"username": "Santiago_Suarez"
},
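For reference, the Data API route is plain HTTPS, so it sidesteps the driver's DNS code path entirely. A rough C# sketch of a findOne call, where the app id, API key, and names are all placeholders:

```csharp
using var http = new HttpClient();
http.DefaultRequestHeaders.Add("api-key", "<DATA_API_KEY>");

var body = new StringContent(
    "{\"dataSource\":\"Cluster0\",\"database\":\"mydb\",\"collection\":\"items\",\"filter\":{}}",
    System.Text.Encoding.UTF8,
    "application/json");

var response = await http.PostAsync(
    "https://data.mongodb-api.com/app/<APP_ID>/endpoint/data/v1/action/findOne",
    body);

string json = await response.Content.ReadAsStringAsync();
```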
{
"code": "",
"text": "Okay gotcha, I appreciate the response. I think I have gotten the Realm SDK to work in Xamarin w/ Device Syncing which was my end goal so I’ll just ignore the .NET Drivers for now and possibly look at the Data API later on if need be. Thanks",
"username": "George_Ely"
}
]
| Error System.TypeInitializationException: ‘The type initializer for ‘MongoDB.Driver.Core.Misc.DnsClientWrapper’ threw an exception.’ | 2022-12-01T06:19:04.321Z | Error System.TypeInitializationException: ‘The type initializer for ‘MongoDB.Driver.Core.Misc.DnsClientWrapper’ threw an exception.’ | 5,078 |
null | [
"kotlin"
]
| [
{
"code": "",
"text": "I am getting reports that sign in is not working properly for a large number of users in a live app. The error is reporting as “illegal base64 data at byte…”. The problem seems to be intermittent, sometimes they can log in, and sometimes they can’t. Sometimes they can log in immediately after a failure with no problem.I am using Google sign in, using ID_TOKEN, in Kotlin.This was causing a crash before, but I have tracked down the problem with this and handled it, but I still get the error.Thanks for your help!",
"username": "Ada_Lovelace"
},
{
"code": "",
"text": "I think I may have solved this!Google seems to sometimes return extra characters at the end of their token, ending with * and some Unicode characters. Stripping these before passing them to Mongo allows it to sign in as expected.",
"username": "Ada_Lovelace"
},
{
"code": "",
"text": "This is in fact not solved.Still getting “illegal base64 data at byte 342” even when the token looks fine - no Unicode escape characters, only AZaz09_-.",
"username": "Ada_Lovelace"
},
{
"code": "",
"text": "Is no one else having this problem? I thought I could just rerun the sign in, but if I get a token that Mongo fails to recognise, I’m stuck with that token until Google decides to invalidate it. Calling GoogleSignInClient.signOut() does not invalidate the token.My app is live, and intermittently broken for a good number of users, because of something seemingly out of my control. Any suggestions at all would be very gratefully received.",
"username": "Ada_Lovelace"
},
{
"code": "",
"text": "G’Day @Ada_Lovelace,Thank you for raising your concern. I have reported this internally.Please allow me some time to debug this and I will soon be back with any findings. Could you please confirm your forum registration email is also the email for your cloud project?I appreciate your patience with us and I look forward to your response Warm Regards,",
"username": "henna.s"
},
{
"code": "illegal base64 data at input byte 344crypto/rsa: verification error",
"text": "Hello @Ada_Lovelace,I noticed multiple auth request errors of the form illegal base64 data at input byte 344 and crypto/rsa: verification error.It is possible that somewhere in the code the token is getting invalid. I found a post on SO for Google SignIn validation. Please let me know if that is helpful in your case.I look forward to hearing from you.Cheers, ",
"username": "henna.s"
},
{
"code": "",
"text": "Thanks for your reply. I took a look at the post you mentioned, but it doesn’t really help I’m afraid. I take the token from Google, and I pass it to MongoDb, there’s nothing really in between, so either Google is providing me with bad tokens, or Mongo is not accepting valid tokens.Could I send you a token that is not working, so you can check this?",
"username": "Ada_Lovelace"
},
{
"code": "",
"text": "Hello @Ada_Lovelace ,Thank you for your response. Sorry to hear the post was not helpful.Could you share the full stack trace from the client application with me when this error is thrown? Please also share the valid and invalid tokens that allow and do not allow authentication to Cloud. I will share this with the engineering team internally to verify the reasons for not-accepting the tokens.I look forward to your response.Cheers, ",
"username": "henna.s"
},
{
"code": "AUTH_ERROR(realm::app::ServiceError:47): illegal base64 data at input byte 349\n\tat io.realm.internal.network.NetworkRequest.onError(NetworkRequest.java:68)\n\tat io.realm.internal.objectstore.OsJavaNetworkTransport.nativeHandleResponse(Native Method)\n\tat io.realm.internal.objectstore.OsJavaNetworkTransport.handleResponse(OsJavaNetworkTransport.java:98)\n\tat io.realm.internal.network.OkHttpNetworkTransport.lambda$sendRequestAsync$0$io-realm-internal-network-OkHttpNetworkTransport(OkHttpNetworkTransport.java:100)\n\tat io.realm.internal.network.OkHttpNetworkTransport$$ExternalSyntheticLambda0.run(Unknown Source:14)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)\n\tat java.lang.Thread.run(Thread.java:920)\n",
"text": "Sure thing, the stack trace is:Do you have an email address I can send the tokens to?",
"username": "Ada_Lovelace"
},
{
"code": "",
"text": "I have pasted one of the failing tokens into https://jwt.io/ and it tells me it’s a valid token",
"username": "Ada_Lovelace"
},
{
"code": "",
"text": "Hello Ada,Thank you for sharing the requested information. Could you please also share the respective lines of code when this error is thrown?Looking forward to your response.Cheers, ",
"username": "henna.s"
},
{
"code": " private fun initGoogleSignIn() {\n val gso = GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)\n .requestIdToken(RealmConsts.clientIdWeb.value)\n .build()\n\n mGoogleSignInClient = GoogleSignIn.getClient(this, gso)\n resultLauncher = registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result ->\n val task: Task<GoogleSignInAccount> =\n GoogleSignIn.getSignedInAccountFromIntent(result.data)\n onSignInResult(task)\n }\n }\n\nprivate fun onSignInResult(completedTask: Task<GoogleSignInAccount>) {\n try {\n Log.d(\"SIGN IN\", \"onSignInResult\")\n val account = completedTask.getResult(ApiException::class.java)\n signInUser(account, // callback to UI here)\n } catch (e: ApiException) {\n // error handling removed for brevity\n }\n }\nfun signInUser (account: GoogleSignInAccount?, callback: ((Boolean) -> Unit)?) {\n viewModelScope.launch {\n\n val token = validateToken(account?.idToken) // this checks for non-unicode characters\n if (token == \"\") {\n // error handling\n return@launch\n }\n\n signInGoogle(token) // callback that handles the user here\n }\n\nfun signInGoogle (token: String, callback: (User?, String?) -> Unit) {\n signIn(Credentials.google(token, GoogleAuthType.ID_TOKEN), callback)\n }\n\nprivate fun signIn(credentials: Credentials, callback: (User?, String?) -> Unit){\n try {\n app.loginAsync(credentials) {\n Log.d(\"HERE\", \"Async complete ${it.isSuccess}\")\n if (it.isSuccess) {\n callback(it.get(), null)\n } else {\n // FAILS HERE\n }\n }\n } catch (e: Exception) {\n // error handling\n }\n }\n }\n",
"text": "This is the code I’m using:",
"username": "Ada_Lovelace"
},
{
"code": "",
"text": "Thank you for sharing the requested information, Ada. I have raised this internally.I will update you once I have more information to share.Appreciate your patience with us on this.Cheers, ",
"username": "henna.s"
},
{
"code": "",
"text": "Thanks for your help on this.",
"username": "Ada_Lovelace"
},
{
"code": "",
"text": "Hello @Ada_Lovelace ,I wanted to update you that the engineering team is looking into this. There is a possibility that this may be an issue on Google’s side but I will be able to confirm once more information is available.Appreciate your patience with us on this.Regards,\nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "Hello @Ada_Lovelace,Could you share details on how intermittently is this happening? The engineering team reported that the JWT can be parsed but cannot be verified due to a corrupt signing key.The specific error is:I look forward to your response.Cheers, ",
"username": "henna.s"
},
{
"code": "try {\n val urlDecoded = java.util.Base64.getUrlDecoder().decode(secret)\n Log.d(\"JWT\", \"URL Decoded: $urlDecoded\")\n } catch (e: Exception) {\n Log.d(\"JWT\", \"e: $e\");\n } // WORKS\n\n try {\n val decoded = java.util.Base64.getDecoder().decode(secret)\n Log.d(\"JWT\", \"Decoded: $decoded\")\n } catch (e: Exception) {\n Log.d(\"JWT\", \"e: $e\");\n } // FAILS\n\n try {\n val androidDecoded = android.util.Base64.decode(secret, android.util.Base64.DEFAULT)\n Log.d(\"JWT\", \"Android Decoded DEFAULT: $androidDecoded\")\n } catch (e: Exception) {\n Log.d(\"JWT\", \"e: $e\");\n } // FAILS\n\n try {\n val urlAndroidDecoded = android.util.Base64.decode(secret, android.util.Base64.URL_SAFE)\n Log.d(\"JWT\", \"Android Decoded URL_SAFE: $urlAndroidDecoded\")\n } catch (e: Exception) {\n Log.d(\"JWT\", \"e: $e\");\n } // WORKS\n",
"text": "With my login, it happens every single time now. I am unable to log in at all.Thanks for the code example. I’ve made my own test in Kotlin, decoding just the secret from the token:The secret decoded fine when using URL encoding using both java.util.Base64 and android.util.Base64 libraries, although when I put the same secret into the Go example, it gives an error.",
"username": "Ada_Lovelace"
},
{
"code": "",
"text": "OK, after a little testing (and learning Go!), I think I’ve found why the error is occuring - the token is presented without padding, meaning that base64.URLEncoding.DecodeString will not work (if the token length is not divisible exactly by the byte length), but base64.RawURLEncoding.DecodeString will.",
"username": "Ada_Lovelace"
},
{
"code": "private fun padToken (nonNullToken: String, padDivisor: Int = 4): String {\n val comps = nonNullToken.split(\".\")\n if (comps.size < 3) return \"\"\n var token = comps[2]\n val size = token.length\n val padAmount = ((ceil(size.toDouble() / padDivisor) * padDivisor) - size).toInt()\n token = token.padEnd(size + padAmount, '=')\n return \"${comps[0]}.${comps[1]}.$token\"\n }\n",
"text": "I worked on this a little more, because currently I cannot use my app.Adding padding to the token using:generates a string that passes the Go playground above. However, it still does not allow me to sign in, and is also against the JWT spec (jwt.io reports that padded tokens are invalid)",
"username": "Ada_Lovelace"
},
{
"code": "",
"text": "Did u ever find a solution?\nI am having the exact same issue!Best regards,\nAndrei",
"username": "Rasvan_Andrei_Dumitr"
}
]
| URGENT help needed with Realm sign in | 2022-05-22T09:12:15.813Z | URGENT help needed with Realm sign in | 7,030 |
null | [
"aggregation",
"indexes"
]
| [
{
"code": "test> db.events.find({}).limit(3)\n[\n {\n _id: ObjectId(\"63a086c116b1985489f8ccaf\"),\n data: { x: 'X' },\n meta: {\n key: 'nk0',\n schema: { tenant: 't1', name: 'type#0', version: '2.0.22' }\n }\n },\n {\n _id: ObjectId(\"63a086c116b1985489f8ccb0\"),\n data: { x: 'X' },\n meta: {\n key: 'nk1',\n schema: { tenant: 't1', name: 'type#1', version: '2.0.22' }\n }\n },\n {\n _id: ObjectId(\"63a086c116b1985489f8ccb1\"),\n data: { x: 'X' },\n meta: {\n key: 'nk2',\n schema: { tenant: 't1', name: 'type#4', version: '2.0.22' }\n }\n }\n]\ntest> db.events.getIndices()\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n {\n v: 2,\n key: { 'meta.key': 1, 'meta.schema.name': 1, 'meta.schema.tenant': 1 },\n name: 'meta.key_1_meta.schema.name_1_meta.schema.tenant_1',\n unique: true\n }\n]\ntest> db.events.explain().aggregate([{$project: {'_id':0, 'meta.key': 1, 'meta.schema.name': 1, 'meta.schema.tenant': 1}}, {$out: 'out_test'}])\n{\n explainVersion: '1',\n stages: [\n {\n '$cursor': {\n queryPlanner: {\n namespace: 'test.events',\n indexFilterSet: false,\n parsedQuery: {},\n queryHash: 'E7D9C058',\n planCacheKey: 'E7D9C058',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'PROJECTION_DEFAULT',\n transformBy: {\n meta: { key: true, schema: { tenant: true, name: true } },\n _id: false\n },\n inputStage: { stage: 'COLLSCAN', direction: 'forward' }\n },\n rejectedPlans: []\n }\n }\n },\n { '$out': { db: 'test', coll: 'out_test' } }\n ],\n serverInfo: {\n host: '02295ffc4e95',\n port: 27017,\n version: '6.0.2',\n gitVersion: '94fb7dfc8b974f1f5343e7ea394d0d9deedba50e'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n command: {\n aggregate: 'events',\n pipeline: [\n {\n '$project': {\n _id: 0,\n 'meta.key': 1,\n 'meta.schema.name': 1,\n 'meta.schema.tenant': 1\n }\n },\n { '$out': 'out_test' }\n ],\n cursor: {},\n '$db': 'test'\n },\n ok: 1\n}\n",
"text": "Hi, could you help find out why the projection stage doesn’t use an index that contains all the fields to be included? And any advice on how to avoid full scan to get the projection.Objects have the following schema:I’ve created a unique compound index that includes ‘meta.key’, ‘meta.schema.name’, ‘meta.schema.tenant’I want to extract these fields into a separate collection. I expect that aggregation pipeline will use the index to retrieve data, but it doesn’t.What is wrong?Using MongoDB: 6.0.2\nUsing Mongosh: 1.5.0",
"username": "Vitaly_Velyamidov"
},
{
"code": "{$match:{'meta.key':{$gte:Minkey}}}",
"text": "You need to add a match stage.Something like {$match:{'meta.key':{$gte:Minkey}}} should work for you.",
"username": "chris"
},
{
"code": "",
"text": "That works. Thanks.\nCould you explain or share a link to understand why and how it’s working? And why is not working without match stage?",
"username": "Vitaly_Velyamidov"
},
{
"code": "db.collection.find({}, {'_id':0, 'meta.key': 1, 'meta.schema.name': 1, 'meta.schema.tenant': 1})db.collection.find({'meta.key': value}, {'_id':0, 'meta.key': 1, 'meta.schema.name': 1, 'meta.schema.tenant': 1})",
"text": "It because before the project your query is doing a match on all of the documents, similar to a find({}) with no arguments then the project is showing or suppressing the fields that you want from those documents. When you do an empty find it is a COLLSCAN and can’t use an indexSo by adding a match on an indexed field you get the benefit of an index (IXSCAN) and then the projection on those documents.It would be similar to doing db.collection.find({}, {'_id':0, 'meta.key': 1, 'meta.schema.name': 1, 'meta.schema.tenant': 1})VS\ndb.collection.find({'meta.key': value}, {'_id':0, 'meta.key': 1, 'meta.schema.name': 1, 'meta.schema.tenant': 1})",
"username": "tapiocaPENGUIN"
},
{
"code": "> db.product.find({}, {'_id':0}).explain()\n{\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"products.product\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n\n },\n \"queryHash\" : \"8B3D4AB8\",\n \"planCacheKey\" : \"8B3D4AB8\",\n \"winningPlan\" : {\n \"stage\" : \"PROJECTION_DEFAULT\",\n \"transformBy\" : {\n \"_id\" : 0\n },\n \"inputStage\" : {\n \"stage\" : \"COLLSCAN\",\n \"direction\" : \"forward\"\n }\n },\n \"rejectedPlans\" : [ ]\n },\n \"serverInfo\" : {\n \"host\" : \"\",\n \"port\" : 27017,\n \"version\" : \"4.4.15\",\n \"gitVersion\" : \"bc17cf2c788c5dda2801a090ea79da5ff7d5fac9\"\n },\n \"ok\" : 1\n}\n> db.product.find({'sku': 20000026}, {'_id':0}).explain()\n{\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"products.product\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"sku\" : {\n \"$eq\" : 20000026\n }\n },\n \"queryHash\" : \"5B7E4F14\",\n \"planCacheKey\" : \"FD7B6BBF\",\n \"winningPlan\" : {\n \"stage\" : \"PROJECTION_DEFAULT\",\n \"transformBy\" : {\n \"_id\" : 0\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"sku\" : 1\n },\n \"indexName\" : \"sku_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"sku\" : [ ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"sku\" : [\n \"[20000026.0, 20000026.0]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\" : [ ]\n },\n \"serverInfo\" : {\n \"host\" : \"\",\n \"port\" : 27017,\n \"version\" : \"4.4.15\",\n \"gitVersion\" : \"bc17cf2c788c5dda2801a090ea79da5ff7d5fac9\"\n },\n \"ok\" : 1\n}\n\n",
"text": "Here is an example with find + project and we see similar results you get",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Projection doesn't use index to make query covered | 2022-12-19T16:15:56.499Z | Projection doesn’t use index to make query covered | 2,165 |
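For readers who want the full shape of the fix from this thread in one place: adding an always-true but index-bounded predicate on the leading key of the compound index lets the planner run an IXSCAN instead of the COLLSCAN shown in the explain output (it does not necessarily make the query fully covered - a FETCH may still follow - but it avoids scanning the whole collection). A sketch against the same collection and index:

db.events.aggregate([
  // bounds the plan to the { 'meta.key': 1, ... } index while still matching every document
  { $match: { "meta.key": { $gte: MinKey } } },
  { $project: { _id: 0, "meta.key": 1, "meta.schema.name": 1, "meta.schema.tenant": 1 } },
  { $out: "out_test" }
])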
null | [
"aggregation",
"atlas-search"
]
| [
{
"code": "use(\"Marketplace\")\n\ndb.Users.aggregate([\n {\n \"$search\": {\n \"regex\": {\n \"path\": \"Name\",\n \"query\": \"carl\",\n \"allowAnalyzedField\": true\n }\n }\n }\n])\n",
"text": "Hi,\nim trying to use regex using case insensitive but can’t figure out how to use it,\nthis is my currently aggregation:I want to get results like Carl, carl, CARL",
"username": "Henrique_Shoji"
},
{
"code": "Namelucene.keywordlucene.standard",
"text": "@Henrique_Shoji - what is the analyzer you’re using for the Name field? If the analyzer does not lowercase, then queries are case sensitive. Perhaps you’re using lucene.keyword? Try lucene.standard and see if that helps.",
"username": "Erik_Hatcher"
},
{
"code": "",
"text": "It worked, thanks for the help!!",
"username": "Henrique_Shoji"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Regex case insensitive using Atlas Search | 2022-12-20T15:01:39.877Z | Regex case insensitive using Atlas Search | 1,614 |
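For anyone who lands on this thread later: the analyzer is configured in the Atlas Search index definition rather than in the query. A sketch of an index mapping that analyzes (and therefore lowercases) the Name field with lucene.standard - only the Name mapping comes from the thread, the rest of the definition is illustrative:

{
  "mappings": {
    "dynamic": false,
    "fields": {
      "Name": { "type": "string", "analyzer": "lucene.standard" }
    }
  }
}

With that in place, the regex query from the first post (including "allowAnalyzedField": true) matches Carl, carl and CARL.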
null | [
"replication",
"transactions"
]
| [
{
"code": "",
"text": "Hello,I’ve been trying to under stand “replica set” for a long time. We are using MongoDB 4.0.2. We are not replicating anything. We have 1 MongoDB instance running.But, when reading about transactions. It says \" * In version 4.0 , MongoDB supports multi-document transactions on replica sets.\"Our system is now breaking and we need to solve this.ThanksWhat does that mean?EdIt: Documentation https://www.mongodb.com/docs/manual/core/transactions/#transactions-and-atomicity",
"username": "unityworks_carl"
},
{
"code": "",
"text": "You might wish to take the free online course MongoDB Database Administrator (DBA) which will explain things more clearly and completely than you are likely to get by Q&A here in the community forums. You can complete this course in a few hours if you dedicate an afternoon or two.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "This is the one that will really answer your questions:",
"username": "Jack_Woehr"
}
]
| What is a replica set? (MongoDB 4.0.2) | 2022-12-20T14:26:13.585Z | What is a replica set? (MongoDB 4.0.2) | 1,413 |
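To connect the course pointers above back to the original question: multi-document transactions require the mongod to be a member of a replica set, but a single-node replica set is enough for a one-instance deployment like the one described. A sketch of the conversion (the replica set name and paths are placeholders):

// 1. restart the existing mongod with a replica set name, e.g.
//      mongod --replSet rs0 --dbpath <your existing dbpath> ...
//    (or add  replication.replSetName: rs0  to the config file)
// 2. then, from a shell connected to that node:
rs.initiate()        // creates a one-member replica set; the node becomes PRIMARY shortly after
rs.status()          // verify the member state
// drivers can then use sessions and transactions, e.g. client.startSession() / session.startTransaction()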
null | [
"sharding"
]
| [
{
"code": "{time_posted:1}",
"text": "Hello, Folks!We have a collection which has two important fields - userid and date (at which the document was created/inserted)use cases/system behaviourgoalsshard key choicesProblem\nWe’re inserting around 10 k docs each having the same userid but different dates (each differing by an hour compared to the previous doc). All these 10 k docs end up on only one shard even with the composite shard key mentioned aboveshouldn’t MongoDB split the chunks and distribute the data into both the shards? As per the MongoDB blog on-selecting-a-shard-key-for-mongodb, consecutive docs should reside on same shard while other docs may reside on other shards. Please clarify what’s going wrong in our testif “Asya” is a prolific poster, and there are hundreds of chunks with her postings in them, then the {time_posted:1} portion of the shard key will keep consecutive postings together on the same shardSetup details\nWe are running a 2-shard MongoDB v5.0.13 cluster on-premise. More details are posted in this Stackoverflow postPlease ask any question/detail you would like to see. Thank you so much!",
"username": "A_S_Gowri_Sankar"
},
{
"code": "",
"text": "trying to revive this threadsorry for tagging but just wanted to get some visibility on this. any info would be very helpful\n@Aasawari @Stennie_X",
"username": "A_S_Gowri_Sankar"
},
{
"code": "'userid''userid'",
"text": "Hi @A_S_Gowri_Sankar,We’re inserting around 10 k docs each having the same userid but different dates (each differing by an hour compared to the previous doc). All these 10 k docs end up on only one shard even with the composite shard key mentioned aboveBased off those 2 index keys and your test data, the behaviour mentioned appears as expected - All the 10,000 documents having the same 'userid' value would belong to the same shard due to the hashing function against the same 'userid' value. Just to clarify here, are you seeing these 10K documents on one shard or one chunk? Lastly, could you advise if you set a custom chunk size?shouldn’t MongoDB split the chunks and distribute the data into both the shards?Normally, MongoDB splits a chunk after an insert if the chunk exceeds the maximum chunk size. The balancer monitors the number of chunks and migrates accordingly, attempting to keep the number of chunks relatively even amongst shards. More details noted here on the migration threshold documentation for sharded clusters.As per the MongoDB blog on-selecting-a-shard-key-for-mongodb , consecutive docs should reside on same shard while other docs may reside on other shards. Please clarify what’s going wrong in our testThe example on that blog page doesn’t appear to use hashed sharding where as the two shard keys you’ve provided both contain a hashed field which I presumed you based your testing on.I think in this particular scenario, I would advise testing with a more relevant set of data closer to what you are getting on your production environment or even what you have mentioned (multiple users, some users creating more documents than others rather than that of just 10k documents from a single user).Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "db.CompoundHashedShardKeyTest.getShardDistribution()\nShard mongos-01-shard-02 at mongos-01-shard-02/mongos-01-shard-02-01.company.net:27017,mongos-01-shard-02-02.company.net:27017\n{\n data: '1.1MiB',\n docs: 20000,\n chunks: 1,\n 'estimated data per chunk': '1.1MiB',\n 'estimated docs per chunk': 20000\n}\n---\nTotals\n{\n data: '1.1MiB',\n docs: 20000,\n chunks: 1,\n 'Shard mongos-01-shard-02': [\n '100 % data',\n '100 % docs in cluster',\n '58B avg obj size on shard'\n ]\n}\nfor (var i = 1; i <= 100; i++) {\n\tinner: \n\tfor (var j = 1; j <= 100; j++) {\n\t var date = new Date(1640995200000 + j * 1000 * 60 * 60);\n\t db.CompoundHashedShardKeyTest.insert({\"userid\":i,\"created_at\":date});\n\t if (i % 10 == 0 && j >= 10) {\n\t print(\"Inserted \" + j + \" records for user \" + i);\n\t break inner;\n\t }\n\t print(i + \"-\" + date);\n\t}\n}\ndb.CompoundHashedShardKeyTest.getShardDistribution()\nShard mongos-01-shard-02 at mongos-01-shard-02/mongos-01-shard-02-01.company.net:27017,mongos-01-shard-02-02.company.net:27017\n{\n data: '479KiB',\n docs: 9100,\n chunks: 1,\n 'estimated data per chunk': '479KiB',\n 'estimated docs per chunk': 9100\n}\n---\nTotals\n{\n data: '479KiB',\n docs: 9100,\n chunks: 1,\n 'Shard mongos-01-shard-02': [\n '100 % data',\n '100 % docs in cluster',\n '54B avg obj size on shard'\n ]\n}\ndb.CompoundHashedShardKeyTest.getShardDistribution()\nShard mongos-01-shard-01 at mongos-01-shard-01/mongos-01-shard-01-01.company.net:27017,mongos-01-shard-01-02.company.net:27017\n{\n data: '203KiB',\n docs: 3860,\n chunks: 2,\n 'estimated data per chunk': '101KiB',\n 'estimated docs per chunk': 1930\n}\n---\nShard mongos-01-shard-02 at mongos-01-shard-02/mongos-01-shard-02-01.company.net:27017,mongos-01-shard-02-02.company.net:27017\n{\n data: '276KiB',\n docs: 5240,\n chunks: 2,\n 'estimated data per chunk': '138KiB',\n 'estimated docs per chunk': 2620\n}\n---\nTotals\n{\n data: '479KiB',\n docs: 9100,\n chunks: 4,\n 'Shard mongos-01-shard-01': [\n '42.41 % data',\n '42.41 % docs in cluster',\n '54B avg obj size on shard'\n ],\n 'Shard mongos-01-shard-02': [\n '57.58 % data',\n '57.58 % docs in cluster',\n '54B avg obj size on shard'\n ]\n}\n",
"text": "hello, @Jason_Tran . thank you for the response!could you advise if you set a custom chunk size?No, we use the default chunk size of 64 MBJust to clarify here, are you seeing these 10K documents on one shard or one chunk?all these are in the same chunk on the same shard as seen belowThe example on that blog page doesn’t appear to use hashed shardingthat’s correct. After reading the blog initially, I was using the range sharding for both these fields and the distribution wasn’t good either. in fact, it was skewed towards the one shard no matter how many unique userid and date combinations of data are inserted into the collection. That’s when I inclined towards using the hash of the userid field while keeping the date field for the range sharding.I would advise testing with a more relevant set of data closer to what you are getting on your production environment or even what you have mentionedsure. That test was merely done to understand how to distribute documents belonging to one user among all the shards.I have now inserted data corresponding to the example I mentioned. There are a total of 100 users each inserting 100 documents except for every 10th user who inserts 10 records each. So, this test setup simulates 90 users each having 100 docs while the remaining 10 users have 10 docs each.I’m attaching the test script and results from two scenarios. first is with the shard key where there’s no hashing on either of these fields and second where there is hashing on userid field and ranged sharding on date fieldMy observations areI just want to solve this problem of distributing documents belonging to a userid containing different date values across shards to achieve better distribution and read locality when targeting documents belonging to a give input date rangePlease let me know if you need more info. Thank you very much for helping! Much appreciated Test data scriptShard distribution with ranged shard key on both fields as mentioned in the blogShard distribution with hashed-sharding on userid and range-sharding on date",
"username": "A_S_Gowri_Sankar"
},
{
"code": "",
"text": "@Jason_Tran just tagging you again in case you missed this one. thank you!",
"username": "A_S_Gowri_Sankar"
},
{
"code": "numInitialChunks",
"text": "Hi @A_S_Gowri_Sankar,in fact, it was skewed towards the one shard no matter how many unique userid and date combinations of data are inserted into the collectionShard distribution with ranged shard key on both fields as mentioned in the blogWith the ranged sharding, please note the following information which is on the Ranged Sharding documentation:If you shard an empty collection:Shard distribution with hashed-sharding on userid and range-sharding on dateWith the hashed-shard key, please note the following information which is on the Hashed Sharding documentation:Sharding Empty Collection on Single Field Hashed Shard KeyRegards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "hello, @Jason_Tran . thank you for respondingYeah, I’m aware of the chunk allocation/splits mentioned in the documentation. I don’t think it’s the chunk allocation that is causing the skew here. It’s the nature of the shard key chosen no matter how many initial chunks are specifiedI think my question boils down to something like this - what composite shard key should I choose for a collection which hasI guess we only have four shard key choices as mentioned below and none of it has helped the distribution better than choice #5 which is still heavily skewed towards one shard which matches a given userid",
"username": "A_S_Gowri_Sankar"
},
{
"code": "",
"text": "hello, @Jason_TranI did more tests and understood that sizes of the chunks do matter when using a non-hashing shard key for a collection. Whenever there’s less than 64 MB of data in the collection, MongoDB doesn’t bother and keeps the data in one chunk (and thereby one shard). However, the moment we add more data to the collection in the order of GBs, balancer triggers and splits the chunks and distributes the data in a somewhat uniform wayso, the current answer to my question so far is, out of the three meaningful shard key choices I havethe 3rd provides the most uniform distribution of all whereas 1st is relatively less uniform. 2nd one is not suitable in our case because all our read queries are on userid whereas date field is included in only certain cases. So, this shard key will result in scatter-gather and hence ignoredthank you for linking to the MongoDB docs. Though I have read them before, I was under the assumption that routing the documents to different chunks/shards is purely based on the incoming shard key values but the fact that it doesn’t work that way for ranged sharding is made evident in our tests so far. Thank you once again!Please feel free to comment/suggest further",
"username": "A_S_Gowri_Sankar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Choosing a composite shard key involving an identifier and a date field | 2022-11-10T13:22:47.846Z | Choosing a composite shard key involving an identifier and a date field | 3,546 |
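For reference, the command shape behind the third option discussed above (a sketch - the database name is a placeholder, and numInitialChunks only has an effect while the collection is still empty and the shard key has a hashed field):

sh.shardCollection(
  "mydb.CompoundHashedShardKeyTest",          // <database>.<collection>
  { userid: "hashed", created_at: 1 },        // hashed prefix spreads users, range suffix keeps a user's dates ordered
  false,                                      // unique: not supported with hashed shard keys
  { numInitialChunks: 4 }                     // optional: pre-create chunks across the shards up front
)

As the thread concludes, with small data volumes everything may still sit in a single chunk until the data grows past the chunk size and the balancer starts splitting and migrating.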
null | [
"java"
]
| [
{
"code": "// JsonObject json\n// ZonedDateTime date\n\nStringWriter sw = new StringWriter();\n\ntry (JsonWriter writer = Json.createWriter(sw)) {\n writer.writeObject(json);\n writer.close();\n} catch (Exception ex) {\n return null;\n}\n\nDocument document = new Document(BasicDBObject.parse(sw.toString()).toMap());\ndocument.append(\"entryDate\", date)\ncollection.insertOne(document); // <-- ERROR\nEncoding a ZonedDateTime: '2022-12-16T19:35:59.038918-05:00[America/New_York]' failed with the following exception:\nFailed to encode 'ZonedDateTime'. Encoding 'zone' errored with: An exception occurred when encoding using the AutomaticPojoCodec.\nEncoding a ZoneRegion: 'America/New_York' failed with the following exception:\nUnable to get value for property 'id' in ZoneRegion\nA custom Codec or PojoCodec may need to be explicitly configured and registered to handle this type.\nA custom Codec or PojoCodec may need to be explicitly configured and registered to handle this type.\n",
"text": "I’m having trouble finding documentation on how I can convert a ZonedDateTime object to a BSON property on a BSON Document. I’m using driver version 4.5.0.My code is like such:Error:I’ve seen anything like this before. ZonedDateTime is not handled by the driver? If I were to write a PojoCodec, what format would I be converting that for MongoDB to see it as a date?",
"username": "John_Manko"
},
{
"code": "",
"text": "Hi @John_Manko,How do you want a ZonedDateTime serialized into BSON?Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Well, I guess that what I’m trying to figure out. I need to retain time zone information, but also would like it if it’s a mongo date. Does mongo have any such type?",
"username": "John_Manko"
},
{
"code": "{\n \"date\" : <BSON Date value>, // assuming you only need millisecond precision>\n \"zoneId\" : <BSON String value representing the zone>\n}\n",
"text": "MongoDB does not support any such type directly. The closed thing is the BSON date type, which does not have time zone info. You might want to represent it as some sort of nested document that contains all the information you need. Something like:If something like that works, I can show you how to write a custom Codec to accomplish the goal.",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "I would appreciate the guidance on writing a codec. Your solution seems to be the only way.",
"username": "John_Manko"
},
{
"code": "import org.bson.BsonDocument;\nimport org.bson.BsonDocumentReader;\nimport org.bson.BsonDocumentWriter;\nimport org.bson.BsonReader;\nimport org.bson.BsonWriter;\nimport org.bson.codecs.Codec;\nimport org.bson.codecs.DecoderContext;\nimport org.bson.codecs.EncoderContext;\n\nimport java.time.Instant;\nimport java.time.ZoneId;\nimport java.time.ZonedDateTime;\n\npublic class ZonedDateTimeCodec implements Codec<ZonedDateTime> {\n\n @Override\n public ZonedDateTime decode(BsonReader reader, DecoderContext decoderContext) {\n reader.readStartDocument();\n\n // There is some risk in assuming that the order of these fields in the document never changes. \n // The order _could_ change in some circumstances, like if you update one of the fields independently \n // of the other, or if you use some other code in some cases to create these documents. If you're\n // concerned about the risk, then it's not that hard to re-write this code to remove the assumption\n // that the \"date\" field is always first\n long date = reader.readDateTime(\"date\");\n String zoneId = reader.readString(\"zoneId\");\n\n reader.readEndDocument();\n\n return ZonedDateTime.ofInstant(Instant.ofEpochMilli(date), ZoneId.of(zoneId));\n }\n\n @Override\n public void encode(BsonWriter writer, ZonedDateTime value, EncoderContext encoderContext) {\n writer.writeStartDocument();\n\n writer.writeDateTime(\"date\",value.toInstant().toEpochMilli());\n writer.writeString(\"zoneId\", value.getZone().getId());\n\n writer.writeEndDocument();\n }\n\n @Override\n public Class<ZonedDateTime> getEncoderClass() {\n return ZonedDateTime.class;\n }\n\n // Refactor this into some actual unit tests :)\n public static void main(String[] args) {\n var subject = ZonedDateTime.now();\n\n System.out.println(subject);\n\n var doc = new BsonDocument();\n var writer = new BsonDocumentWriter(doc);\n var codec = new ZonedDateTimeCodec();\n\n codec.encode(writer, subject, EncoderContext.builder().build());\n\n System.out.println(doc.toJson());\n\n var reader = new BsonDocumentReader(doc);\n\n var decoded = codec.decode(reader, DecoderContext.builder().build());\n\n System.out.println(decoded);\n }\n}\n\n",
"text": "Sure, no problem:",
"username": "Jeffrey_Yemin"
}
]
| Adding a ZonedDateTime propery to a BSON Document | 2022-12-16T19:39:23.408Z | Adding a ZonedDateTime propery to a BSON Document | 2,240 |
null | [
"crud",
"golang"
]
| [
{
"code": "Error: Found multiple array filters with the same top-level field name x\tfilter := bson.D{primitive.E{Key: \"_id\", Value: e.EventId}}\n\n\tarrayFilters := options.ArrayFilters{\n\t\tFilters: []interface{}{\n\t\t\tbson.M{\"x._id\": tt.TypeId},\n\t\t\tbson.M{\"x.typeAmountUsed\": bson.M{\"$lt\": \"$x.typeAmount\"}}, // <-- multiple filters\n\t\t},\n\t}\n\n\tupsert := true\n\topts := options.UpdateOptions{\n\t\tArrayFilters: &arrayFilters,\n\t\tUpsert: &upsert,\n\t}\n\tupdate := bson.M{\n\t\t\"$inc\": bson.D{{\"eventTicketTypes.$[x].typeAmountUsed\", 1}},\n\t}\n\n if _, err = events.UpdateOne(sessCtx, filter, update, &opts); err != nil {\n\t\treturn nil, err\n\t}\n",
"text": "I’m trying to update a nested document in MongoDB based on certain conditions. The problem is that using arrayFilters on the same top-level field name is not allowed. Any solutions for solving this problem?Error: Found multiple array filters with the same top-level field name x",
"username": "Daan_VDH"
},
{
"code": "\tarrayFilters := options.ArrayFilters{\n\t\tFilters: []interface{}{\n\t\t\tbson.D{\n\t\t\t\t{\"x._id\", tt.TypeId},\n\t\t\t\t{\"x.typeAmountUsed\", bson.M{\"$lt\": \"$x.typeAmount\"}},\n\t\t\t},\n\t\t},\n\t}\n",
"text": "Solved by doing:",
"username": "Daan_VDH"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Using multiple arrayFilters on the same top-level field in MongoDB with Go | 2022-12-20T13:19:58.221Z | Using multiple arrayFilters on the same top-level field in MongoDB with Go | 2,467 |
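One caution worth adding to the accepted answer above: inside an arrayFilters filter document, a value such as "$x.typeAmount" is treated as a literal string rather than as a reference to another field of the same array element, so a genuine field-to-field comparison (typeAmountUsed < typeAmount) generally needs an aggregation-pipeline update instead. A mongosh sketch of that shape - eventId and typeId stand in for the values bound in the Go code:

db.events.updateOne(
  { _id: eventId },
  [ { $set: {
        eventTicketTypes: {
          $map: {
            input: "$eventTicketTypes",
            in: {
              $cond: [
                { $and: [
                    { $eq: ["$$this._id", typeId] },
                    { $lt: ["$$this.typeAmountUsed", "$$this.typeAmount"] }
                ] },
                { $mergeObjects: ["$$this", { typeAmountUsed: { $add: ["$$this.typeAmountUsed", 1] } }] },
                "$$this"
              ]
            }
          }
        }
  } } ]
)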
null | [
"compass",
"mongodb-shell",
"server",
"transactions",
"storage"
]
| [
{
"code": "brew services restart [email protected]\nStopping `mongodb-community`... (might take a while)\n==> Successfully stopped `mongodb-community` (label: homebrew.mxcl.mongodb-community)\n==> Successfully started `mongodb-community` (label: homebrew.mxcl.mongodb-community)\nmongosh\nCurrent Mongosh Log ID: 6399ebcbf7b524ab8380978a\nConnecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.1\nMongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017\n{\"t\":{\"$date\":\"2022-12-14T17:26:52.465+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"thread1\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:52.466+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-12-14T17:26:52.467+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-12-14T17:26:52.480+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:52.480+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:52.480+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:52.480+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-12-14T17:26:52.480+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":4184,\"port\":27017,\"dbPath\":\"/usr/local/var/mongodb\",\"architecture\":\"64-bit\",\"host\":\"MacBook-Pro-2.local\"}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:52.480+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:52.480+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"22.2.0\"}}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:52.480+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command 
line\",\"attr\":{\"options\":{\"config\":\"/usr/local/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\"},\"storage\":{\"dbPath\":\"/usr/local/var/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/usr/local/var/log/mongodb/mongo.log\"}}}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:52.482+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:52.502+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/usr/local/var/mongodb\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:52.502+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=32256M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:53.152+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1671038813:151111][4184:0x7ff845fae8c0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /usr/local/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:53.154+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1671038813:154273][4184:0x7ff845fae8c0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /usr/local/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:53.155+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1671038813:155027][4184:0x7ff845fae8c0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /usr/local/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:53.155+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22347, \"ctx\":\"initandlisten\",\"msg\":\"Failed to start up WiredTiger under any compatibility version. 
This may be due to an unsupported upgrade or downgrade.\"}\n{\"t\":{\"$date\":\"2022-12-14T17:26:53.155+00:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":28595, \"ctx\":\"initandlisten\",\"msg\":\"Terminating.\",\"attr\":{\"reason\":\"13: Permission denied\"}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:53.155+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":28595,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":702}}\n{\"t\":{\"$date\":\"2022-12-14T17:26:53.155+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n",
"text": "Just upgraded (14.12.2022) to macOS Ventura 13.1 (22C65).Using Mongo on /usr/local/Cellar/mongodb-community/6.0.1/bin:Then I run “mongosh”:Now gives:Also unable to use: MongoDB Compass or Studio 3T.Was working fine before I upgraded macOS Ventura 13.0.1 → 13.1. TIA./usr/local/var/log/mongodb/mongo.logBTW - Had to move back to “mongodb-community/5.0.7”:",
"username": "Roger_Lee"
},
{
"code": "port upgrade outdated",
"text": "I assume you used MacPorts to install MongoDB? Try doing port upgrade outdated",
"username": "Jack_Woehr"
},
{
"code": "\"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.\"mongodfeatureCompatibilityVersion",
"text": "Hi @Roger_LeeThe main error is this one:\"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.\"This is a typical error when the mongod binaries are changed from an older version without setting the featureCompatibilityVersion to match. You mentioned that using 5.07 works, so I would assume that this same deployment was also binary-upgraded from the 4.4 series?I suggest you examine the following links for an upgrade path from 4.4 → 5.0 → 6.0Hope this helps!Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "mongodfeatureCompatibilityVersionbrew unpin mongodb-community\n\nbrew upgrade\nRunning `brew update --auto-update`...\n==> Auto-updated Homebrew!\nUpdated 1 tap (homebrew/cask).\n\nYou have 2 outdated formulae installed.\nYou can upgrade them with brew upgrade\nor list them with brew outdated.\n\n==> Upgrading 1 outdated package:\nmongodb/brew/mongodb-community 6.0.3\n==> Fetching mongodb/brew/mongodb-community\n==> Downloading https://fastdl.mongodb.org/osx/mongodb-macos-x86_64-6.0.3.tgz\n######################################################################## 100.0%\n==> Upgrading mongodb/brew/mongodb-community\n -> 6.0.3 \n\n==> Caveats\nTo restart mongodb/brew/mongodb-community after an upgrade:\n brew services restart mongodb/brew/mongodb-community\nOr, if you don't want/need a background service you can just run:\n mongod --config /usr/local/etc/mongod.conf\n==> Summary\n🍺 /usr/local/Cellar/mongodb-community/6.0.3: 10 files, 208.8MB, built in 3 seconds\n==> Running `brew cleanup mongodb-community`...\nDisable this behaviour by setting HOMEBREW_NO_INSTALL_CLEANUP.\nHide these hints with HOMEBREW_NO_ENV_HINTS (see `man brew`).\nRemoving: /usr/local/Cellar/mongodb-community/5.0.7... (11 files, 182.5MB)\nbrew services restart mongodb/brew/mongodb-community\n\nStopping `mongodb-community`... (might take a while)\n\n==> **Successfully stopped `mongodb-community` (label: homebrew.mxcl.mongodb-community)**\n==> **Successfully started `mongodb-community` (label: homebrew.mxcl.mongodb-community)**\nCurrent Mongosh Log ID: 63a1a7085da8f3d98754d6f3\n\nConnecting to: **mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.1**\n\nUsing MongoDB: 5.0.7\n\n**Using Mongosh** : 1.6.1\n",
"text": "This is a typical error when the mongod binaries are changed from an older version without setting the featureCompatibilityVersion to matchI haven’t changed/upgraded MongoDB, still with v.6.0.1. It was caused when I updated macOS Ventura 13.1 (22C65).To get mongoDB running again I had to delete all ‘mongodb’ on brew and reinstall my last version (5.0.7) before 6.0/6.0.1.Update: Just noticed on ‘brew’ that MongoDB 6.0.1 → 6.0.3:Obviously a bug with MongoDB on macOS. The two upgrads of 6.01 to 6.0.2/6.0.3 fixes:BTW - “mongosh” always says it running on “MongoDB 5.0.7”:Using “Studio 3T” to run JS on MongoDB 6.0.3.",
"username": "Roger_Lee"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB 6.x - No longer runs on macOS Ventura Version 13.1 (22C65) | 2022-12-19T09:31:48.944Z | MongoDB 6.x - No longer runs on macOS Ventura Version 13.1 (22C65) | 4,514 |
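Two separate things show up in the log quoted in this thread, for anyone hitting the same symptoms: the "Permission denied" on /usr/local/var/mongodb/WiredTiger.turtle is a file-ownership/permission problem on the dbPath (for example, files previously touched by a different user), while the compatibility warning relates to featureCompatibilityVersion. Once a matching server version is running, the FCV can be checked and raised from the shell - a sketch:

// check the current feature compatibility version
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

// after confirming a supported upgrade path (e.g. 5.0 -> 6.0), raise it explicitly
db.adminCommand({ setFeatureCompatibilityVersion: "6.0" })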
[
"dot-net"
]
| [
{
"code": "",
"text": "I’m trying to create a RealmObject entry with an IList Embedded Object as a nested property. I can’t make it work. The google answers I’ve found shows a single index added from a list, not the whole list.how can i make this work ?the code:\n\nimage416×533 43.2 KB\nthe error :\nCannot implicitly convert type ‘Heals’ to ‘System.Collections.Generic.IList’. An explicit conversion exists (are you missing a cast?)",
"username": "Joshua_Baldos"
},
{
"code": "HealsIList<Heals>HealItems.Add(healing)IList",
"text": "Hi @Joshua_BaldosYou are trying to assign a single Heals value to a IList<Heals>, that is the reason why you are getting that error.\nYou probably wanted to do something like HealItems.Add(healing). You don’t need to (and can’t) assign the IList itself, but you can add and remove elements from it.",
"username": "papafe"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| IList embedded Object for a RealmObject returning with error | 2022-12-20T11:43:28.333Z | IList embedded Object for a RealmObject returning with error | 1,209 |
|
null | []
| [
{
"code": "",
"text": "my documents have a fieldextensionSV= [\n{‘system’: ‘rfrnc_yr’, ‘value’: ‘0003’},\n{‘system’: ‘dual_01’, ‘value’: ‘AA’},\n{‘system’: ‘dual_02’, ‘value’: ‘AA’},\n{‘system’: ‘dual_03’, ‘value’: ‘AA’},\n{‘system’: ‘dual_04’, ‘value’: ‘AA’},\n{‘system’: ‘dual_05’, ‘value’: ‘AA’},\n{‘system’: ‘dual_06’, ‘value’: ‘AA’},\n{‘system’: ‘dual_07’, ‘value’: ‘AA’},\n{‘system’: ‘dual_08’, ‘value’: ‘AA’},\n{‘system’: ‘dual_09’, ‘value’: ‘AA’},\n{‘system’: ‘dual_10’, ‘value’: ‘AA’},\n{‘system’: ‘dual_11’, ‘value’: ‘AA’},\n{‘system’: ‘dual_12’, ‘value’: ‘AA’}\n]further in pipeline I need to use a\nvar value = ?? (where system==dual_09),\nhow can i access array values in that way",
"username": "Vikram_Jindal"
},
{
"code": "db.collection.find({system:\"dual_09\"})\ndb.collection.find({extensionSV.system:\"dual_09\"})\n",
"text": "If that represents your databaseIf that represents a field, mongodb traverses the arrays:Or what exactly are you trying to achieve?",
"username": "santimir"
},
{
"code": "",
"text": "in project stage we need a new key-value, added at document level which is ‘rfrnc_yr’ : ‘0003’ or ‘dual_03’ : ‘AA’As of now these exist in the array",
"username": "Vikram_Jindal"
},
{
"code": "",
"text": "Add a complete sample please:Then we can try to write a pipeline.",
"username": "santimir"
}
]
| Find value in array | 2022-12-19T17:32:35.043Z | Find value in array | 1,111 |
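Since the thread above ends before a pipeline was written, here is one way to promote the system/value pairs to top-level fields, assuming every element of extensionSV has exactly that shape (untested sketch; db.collection stands for the real collection, and $replaceWith needs MongoDB 4.2 or later):

db.collection.aggregate([
  { $replaceWith: {
      $mergeObjects: [
        "$$ROOT",
        { $arrayToObject: {
            $map: { input: "$extensionSV", in: { k: "$$this.system", v: "$$this.value" } }
        } }
      ]
  } }
])

// each document then carries e.g. rfrnc_yr: '0003' and dual_03: 'AA' at the top level,
// alongside the original extensionSV array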
null | [
"node-js"
]
| [
{
"code": "Error [MongoError]: BSON field 'authenticate.nonce' is an unknown field. at Function.MongoError.create (D:\\website\\node_modules\\mongodb-core\\lib\\error.js:31:11) at D:\\website\\node_modules\\mongodb-core\\lib\\connection\\pool.js:497:72 at authenticateStragglers (D:\\website\\node_modules\\mongodb-core\\lib\\connection\\pool.js:443:16) at Connection.messageHandler (D:\\website\\node_modules\\mongodb-core\\lib\\connection\\pool.js:477:5) at TLSSocket.<anonymous> (D:\\website\\node_modules\\mongodb-core\\lib\\connection\\connection.js:333:22) at TLSSocket.emit (node:events:527:28) at TLSSocket.emit (node:domain:475:12) at addChunk (node:internal/streams/readable:315:12) at readableAddChunk (node:internal/streams/readable:289:9) at TLSSocket.Readable.push (node:internal/streams/readable:228:10) at TLSWrap.onStreamRead (node:internal/stream_base_commons:190:23) { ok: 0, errmsg: \"BSON field 'authenticate.nonce' is an unknown field.\", code: 8000, codeName: 'AtlasError' }\"mongodb\": \"^2.2.36\",\n\"mongoose\": \"^5.1.1\",\n\"mongoskin\": \"^2.1.0\",",
"text": "Error [MongoError]: BSON field 'authenticate.nonce' is an unknown field. at Function.MongoError.create (D:\\website\\node_modules\\mongodb-core\\lib\\error.js:31:11) at D:\\website\\node_modules\\mongodb-core\\lib\\connection\\pool.js:497:72 at authenticateStragglers (D:\\website\\node_modules\\mongodb-core\\lib\\connection\\pool.js:443:16) at Connection.messageHandler (D:\\website\\node_modules\\mongodb-core\\lib\\connection\\pool.js:477:5) at TLSSocket.<anonymous> (D:\\website\\node_modules\\mongodb-core\\lib\\connection\\connection.js:333:22) at TLSSocket.emit (node:events:527:28) at TLSSocket.emit (node:domain:475:12) at addChunk (node:internal/streams/readable:315:12) at readableAddChunk (node:internal/streams/readable:289:9) at TLSSocket.Readable.push (node:internal/streams/readable:228:10) at TLSWrap.onStreamRead (node:internal/stream_base_commons:190:23) { ok: 0, errmsg: \"BSON field 'authenticate.nonce' is an unknown field.\", code: 8000, codeName: 'AtlasError' }",
"username": "Kushal_Davda"
},
{
"code": "",
"text": "couldn’t find any data about this error/issue.",
"username": "Kushal_Davda"
},
{
"code": "",
"text": "Does your collection contain a field called authenticate? What is the data type of authenticate.nonce? I guess I am wondering if this field is being represented in the sql/relational schema that BIC needs in place.\nAre you using on-prem BIC or Atlas BIC? It looks like this is Atlas data based on the error message. If you are using Atlas BIC, you could try to resample the data (found in the cluster config).\nWhat tool are using to connect with BIC? This last question is more out of curiosity and might help with further troubleshooting efforts for this issue.Best,\nAlexi Antonino (Product Manager, BIC & Atlas SQL)",
"username": "Alexi_Antonino"
},
{
"code": "",
"text": "no, my collection does not contain a field called authenticate. it seems to be an internal error in node modules.",
"username": "Kushal_Davda"
},
{
"code": "",
"text": "also what’s BIC? its been 5 days I am not able to resolve this issue",
"username": "Kushal_Davda"
},
{
"code": "",
"text": "no, my collection does not contain a field called authenticate. it seems to be an internal error in node modules.",
"username": "Kushal_Davda"
}
]
| BSON field 'authenticate.nonce' is an unknown field | 2022-12-15T10:42:34.656Z | BSON field ‘authenticate.nonce’ is an unknown field | 2,466 |
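A likely root cause for the error in this thread, for future readers: the stack trace runs through mongodb-core from the 2.x-era Node.js driver ("mongodb": "^2.2.36"), which can fall back to the legacy MONGODB-CR handshake whose authenticate command carries a nonce field; modern Atlas clusters only accept SCRAM, so the server rejects that command with the AtlasError shown. The usual fix is upgrading the driver (and mongoose) to a current major version. A sketch of the modern connection shape - versions and connection string values are placeholders:

// package.json (illustrative): "mongodb": "^4.13.0"
const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net/?retryWrites=true&w=majority");

async function run() {
  await client.connect();                     // SCRAM auth is negotiated automatically
  const doc = await client.db("test").collection("example").findOne({});
  console.log(doc);
  await client.close();
}
run().catch(console.error);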
[]
| [
{
"code": "",
"text": "hello\ni am trying to install mongodb community edition on ubuntu 20 through this docs\nwhen i run sudo apt-get update command i am getting these errorsErr:5 MongoDB Repositories focal/mongodb-org/6.0 InRelease\n403 Forbidden [IP: 2600:9000:2240:5800:0:bd83:86c0:93a1 443]i have read about apt-get 403 errors and the answers point to internet restrictions so i tried with vpn but i ran to the same issue",
"username": "Ayin_Mozhdi"
},
{
"code": "",
"text": "Could be firewall issues\nDid you try from another network?\nCheck this link",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "firewall is not even activated yet its a new serveri have done this many times before and i never had this problemi have to mention that i have installed mongodb on my own laptop and its working well but when i try sudo apt-get update iam having the same erroriam not sure about internet ; there are too many restrictions on internet here (in iran) but i tried with 2 different internet providers and also with vpni have tried what was said in the link you mentioned the result was the same still same issue exists",
"username": "Ayin_Mozhdi"
}
]
| Installing issues | 2022-12-20T02:10:55.670Z | Installing issues | 974 |
|
null | [
"queries"
]
| [
{
"code": "",
"text": "I found this article here:Code, content, tutorials, programs and community to enable developers of all skill levels on the MongoDB Data Platform. Join or follow us here to learn more!Looking at the last example connecting with Static Geeration with MongoDBCould anybody provide a sample code for theimport { connectToDatabase } from “…/util/mongodb”;^^^ the …util/mongodb code is not shown in the article.From there I can go to the rest of it like this and start working with the data.export async function getStaticProps() {\n10 const { db } = await connectToDatabase();\n11\n12 const movies = await db\n13 .collection(“movies”)\n14 .find({})\n15 .sort({ metacritic: -1 })\n16 .limit(1000)\n17 .toArray();\n18\n19 return {\n20 props: {\n21 movies: JSON.parse(JSON.stringify(movies)),\n22 },\n23 };\n24 }",
"username": "Edgar_Lindo"
},
{
"code": "import { MongoClient } from 'mongodb'\n\nlet uri = process.env.MONGODB_URI\nlet dbName = process.env.MONGODB_DB\n\nlet cachedClient = null\nlet cachedDb = null\n\nif (!uri) {\n throw new Error(\n 'Please define the MONGODB_URI environment variable inside .env.local'\n )\n}\n\nif (!dbName) {\n throw new Error(\n 'Please define the MONGODB_DB environment variable inside .env.local'\n )\n}\n\nexport async function connectToDatabase() {\n if (cachedClient && cachedDb) {\n return { client: cachedClient, db: cachedDb }\n }\n\n const client = await MongoClient.connect(uri, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n })\n\n const db = await client.db(dbName)\n\n cachedClient = client\n cachedDb = db\n\n return { client, db }\n}\n",
"text": "Hey @Edgar_Lindo. The code is located in this github repo.I have also pasted it below:Cheers!",
"username": "sohailshaikh1413"
},
{
"code": "JSON.parse(JSON.stringify(dbData))",
"text": "My only head scratcher was on how to return props from the getStaticProps().\nI lost 2 days until I saw the JSON.parse(JSON.stringify(dbData)) solution.\nDoes anyone know why can’t I return an array of objects?",
"username": "Branislav_Damjanovic"
}
]
| How to connect to MongoDB using Nextjs getStaticProps? | 2022-05-26T01:22:46.281Z | How to connect to MongoDB using Nextjs getStaticProps? | 3,122 |
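On that last question: getStaticProps props must be plain JSON-serializable values, and driver results contain ObjectId (and often Date) instances, which Next.js refuses to serialize - hence the JSON.stringify/JSON.parse round-trip. A lighter-touch alternative is to convert those fields explicitly; a sketch (the released field is an assumed example, not from the thread):

export async function getStaticProps() {
  const { db } = await connectToDatabase();

  const movies = await db
    .collection("movies")
    .find({})
    .sort({ metacritic: -1 })
    .limit(1000)
    .toArray();

  return {
    props: {
      movies: movies.map(({ _id, released, ...rest }) => ({
        ...rest,
        _id: _id.toString(),                                   // ObjectId -> string
        released: released ? released.toISOString() : null,    // Date -> ISO string (assumed field)
      })),
    },
  };
}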
null | [
"queries",
"java"
]
| [
{
"code": "collection.find().sort(Sorts.descending(\"x\", \"y\")).skip(page * limit).limit(limit)",
"text": "I’m currently using collection.find().sort(Sorts.descending(\"x\", \"y\")).skip(page * limit).limit(limit), but how can I sort by whatever “x” and “y” is added together? (Java)",
"username": "Arbee_N_A"
},
{
"code": "{ _id : 0 ,\n foo : 10 ,\n bar : 20 }\n{ _id : 1 ,\n foo : 5 ,\n bar : 35 }\nset_sum = { \"$set\" : { \"_sum\" : { \"$sum\" : [ \"$foo\" , \"$bar\" ] } } }\nsort_sum = { \"$sort\" : { \"_sum\" : -1 } }\nskip = { \"$skip\" : page * limit }\nlimit = { \"$limit\" : limit }\npipeline = [ set_sum , sort_sum , skip , limit ]\n",
"text": "If I understand correctly your documents have two fields such as:and you want to sort based on the result of adding foo to bar.I do not think you can do that with find() and sort(). You need aggregation:I do not use the Builders so I cannot give you the exact syntax but Compass can take the pipeline and export it to Java code that uses the builders.Since you are sorting on a computed value a collection scan will be performed. You might want to store the sum permanently and have an index if you have performance issues.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks you, that worked… Also thanks for mentioning that Compass can do that… Wouldn’t have been able to figure it out without that ._:",
"username": "Arbee_N_A"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Sort by two fields added together | 2022-12-19T01:56:59.245Z | Sort by two fields added together | 1,331 |
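A sketch of the closing suggestion from the accepted answer - materialise the sum at write time and index it so the sort no longer needs a collection scan (field names follow the question; the backfill uses a pipeline update, available from MongoDB 4.2; page and limit are the same variables as in the Java snippet at the top of the thread):

// one-off backfill of the precomputed field, then keep it updated whenever x or y change
db.collection.updateMany({}, [ { $set: { xy_sum: { $add: ["$x", "$y"] } } } ])
db.collection.createIndex({ xy_sum: -1 })

// the paginated query can then lean on the index directly
db.collection.find().sort({ xy_sum: -1 }).skip(page * limit).limit(limit)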
[
"swift"
]
| [
{
"code": "import SwiftUI\nimport Realm\nimport RealmSwift\n\nlet realmApp = RealmSwift.App(id: \"xxxxxx\")\n\n@main\nstruct RealmDemoApp: SwiftUI.App {\n var body: some Scene {\n WindowGroup {\n if UserDefaults.standard.bool(forKey: \"ISUSERLOGGEDIN\") == true {\n if let user = realmApp.currentUser {\n DetailView(model: DemoModel())\n .environment(\\.realmConfiguration, user.configuration(partitionValue: \"demo\"))\n }\n }\n else {\n LoginView()\n }\n }\n }\n}\nimport SwiftUI\nimport RealmSwift\n\nstruct LoginView: View {\n @State var email: String = \"\"\n @State var password: String = \"\"\n @State private var isSecured: Bool = true\n let realm = try! Realm()\n \n var body: some View {\n NavigationView {\n VStack(alignment: .center, spacing: 30) {\n Spacer()\n Text(\"Login\")\n .font(.largeTitle)\n .bold()\n .foregroundColor(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n \n VStack(spacing: 20){\n VStack(alignment: .leading){\n HStack(){\n Text(\"Email\")\n .font(.title)\n .bold()\n .foregroundColor(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n Spacer()\n }\n TextField(\"Email\", text: $email)\n .textFieldStyle(.roundedBorder)\n .border(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n .cornerRadius(7)\n }\n VStack(alignment: .leading){\n HStack(){\n Text(\"Password\")\n .font(.title)\n .bold()\n .foregroundColor(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n Spacer()\n }\n \n ZStack(alignment: .trailing, content: {\n Group {\n if isSecured {\n SecureField(\"Password\", text: $password)\n .textFieldStyle(.roundedBorder)\n .border(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n .cornerRadius(7)\n }\n else {\n TextField(\"Password\", text: $password)\n .textFieldStyle(.roundedBorder)\n .border(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n .cornerRadius(7)\n }\n }\n Button {\n isSecured.toggle()\n } label: {\n Image(systemName: self.isSecured ? 
\"eye.slash\" : \"eye\")\n .accentColor(.gray)\n .padding(20)\n }\n \n })\n }\n \n \n NavigationLink {\n if UserDefaults.standard.bool(forKey: \"ISUSERLOGGEDIN\") == true {\n if let user = realmApp.currentUser {\n DetailView(model: DemoModel())\n .environment(\\.realmConfiguration, user.configuration(partitionValue: \"pnl\"))\n }\n else {\n Text(\"login view 84th line\")\n }\n }\n } label: {\n Button {\n if(email.isEmpty && password.isEmpty){\n// RealmAuthAnonymous()\n print(\"enter email and password\")\n }\n else {\n RealmAuth(email: email, password: password)\n }\n } label: {\n Text(\"Sign In\")\n }\n .cornerRadius(10)\n .frame(width: UIScreen.main.bounds.width/1.5, height: 50)\n .background(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n .onTapGesture {\n if UserDefaults.standard.bool(forKey: \"ISUSERLOGGEDIN\") == true {\n UserDefaults.standard.set(email, forKey: \"email\")\n print(\"email is: \\($email)\")\n }\n }\n }\n .navigationBarBackButtonHidden(true)\n \n \n \n HStack{\n Button {\n \n \n } label: {\n Text(\"Forgot Password\")\n .underline()\n .font(.caption)\n .bold()\n .foregroundColor(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n }\n \n }\n HStack{\n Text(\"Don't have an account?\")\n .font(.caption)\n .bold()\n .foregroundColor(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n \n NavigationLink {\n SignUpView()\n } label: {\n Text(\"Sign Up\")\n .underline()\n .font(.title2)\n .bold()\n .foregroundColor(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n }\n \n \n }\n }\n Text(\"or\")\n .font(.caption)\n .bold()\n .foregroundColor(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n \n Spacer()\n \n }\n .padding()\n .edgesIgnoringSafeArea(.all)\n .background(.black)\n }\n\n \n // func login(){\n // Task {\n // do{\n // let user = try await app.login(credentials: .anonymous)\n // username = user.id\n // } catch {\n // print(\"Failed to login: \\(error.localizedDescription)\")\n // }\n // }\n // }\n }\n}\n\nstruct LoginView_Previews: PreviewProvider {\n static var previews: some View {\n LoginView()\n }\n}\nimport SwiftUI\n\nstruct SignUpView: View {\n @State var email: String = \"\"\n @State var password: String = \"\"\n @State private var isSecured: Bool = true\n// let userDefaults = UserDefaults.standard\n \n var body: some View {\n \n VStack(alignment: .center, spacing: 30) {\n Spacer()\n Text(\"Sign Up\")\n .font(.largeTitle)\n .bold()\n .foregroundColor(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n \n VStack(spacing: 20){\n VStack(alignment: .leading){\n HStack(){\n Text(\"Email\")\n .font(.title)\n .bold()\n .foregroundColor(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n Spacer()\n }\n TextField(\"Email\", text: $email)\n .textFieldStyle(.roundedBorder)\n .border(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n .cornerRadius(7)\n }\n VStack(alignment: .leading){\n HStack(){\n Text(\"Password\")\n .font(.title)\n .bold()\n .foregroundColor(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n Spacer()\n }\n ZStack(alignment: .trailing, content: {\n Group {\n if isSecured {\n SecureField(\"Password\", text: $password)\n .textFieldStyle(.roundedBorder)\n .border(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n .cornerRadius(7)\n }\n else {\n TextField(\"Password\", text: $password)\n .textFieldStyle(.roundedBorder)\n .border(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n .cornerRadius(7)\n }\n }\n Button {\n 
isSecured.toggle()\n } label: {\n Image(systemName: self.isSecured ? \"eye.slash\" : \"eye\")\n .accentColor(.gray)\n .padding(20)\n }\n \n })\n }\n \n Button {\n if(email.isEmpty && password.isEmpty){\n// RealmAuthAnonymous()\n print(\"enter email and password\")\n }\n else {\n RealmRegister(email: email, password: password)\n }\n } label: {\n Text(\"Sign Up\")\n }\n .cornerRadius(10)\n .frame(width: UIScreen.main.bounds.width/1.5, height: 50)\n .background(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n \n }\n \n Text(\"or\")\n .font(.caption)\n .bold()\n .foregroundColor(Color(.init(srgbRed: 255, green: 215, blue: 0, alpha: 0.9)))\n \n Button {\n //\n } label: {\n Image(\"google\")\n .clipShape(Circle())\n }\n \n Spacer()\n \n }\n .padding()\n .edgesIgnoringSafeArea(.all)\n .background(.black)\n }\n}\n\nstruct SignUpView_Previews: PreviewProvider {\n static var previews: some View {\n SignUpView()\n }\n}\nimport SwiftUI\nimport Realm\nimport RealmSwift\n\nstruct DetailView: View {\n @ObservedRealmObject var model: DemoModel\n \n @State var busy = false\n \n var body: some View {\n ZStack{\n VStack {\n List {\n Text(\"entry\")\n .onAppear(){\n print(model)\n }\n }\n }\n .padding()\n if busy {\n ProgressView()\n }\n }\n .onChange(of: model) { newValue in\n print(\"on change: \")\n print(model)\n }\n }\n}\n\nstruct DetailView_Previews: PreviewProvider {\n static var previews: some View {\n DetailView(model: DemoModel())\n }\n}\nimport Foundation\nimport RealmSwift\n\nclass DemoModel: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId?\n @Persisted var SubModel: List<DemoModel_SubModel>\n @Persisted var date: String?\n \n override static func primaryKey() -> String? {\n return \"_id\"\n }\n \n convenience init(SubModel: List<DemoModel_SubModel>, date: String?) {\n self.init()\n self.SubModel = SubModel\n self.date = date\n }\n \n}\nimport Foundation\nimport RealmSwift\n\nclass DemoModel_SubModel: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var field1: Int?\n @Persisted var field2: Int?\n @Persisted var field3: Int?\n @Persisted var field4: Int?\n @Persisted var field5: Int?\n @Persisted var field6: Int?\n @Persisted var field7: Int?\n @Persisted var field8: Int?\n @Persisted var field9: Int?\n @Persisted var field10: Int?\n @Persisted var field11: Int?\n @Persisted var field12: Int?\n @Persisted var field13: Int?\n @Persisted var field14: Int?\n @Persisted var field15: Int?\n @Persisted var field16: Int?\n @Persisted var field17: Int?\n @Persisted var field18: Int?\n @Persisted var field19: Int?\n @Persisted var field20: Int?\n @Persisted var field21: String?\n @Persisted var field22: Int?\n @Persisted var field23: Int?\n @Persisted var field24: Int?\n @Persisted var field25: Int?\n @Persisted var field26: Int?\n @Persisted var field27: Int?\n @Persisted var field28: Int?\n @Persisted var field29: Int?\n @Persisted var field30: Int?\n @Persisted var field31: Int?\n @Persisted var field32: Int?\n \n convenience init(field1: Int?, field2: Int?, field3: Int?, field4: Int?, field5: Int?, field6: Int?, field7: Int?, field8: Int?, field9: Int?, field10: Int?, field11: Int?, field12: Int?, field13: Int?, field14: Int?, field15: Int?, field16: Int?, field17: Int?, field18: Int?, field19: Int?, field20: Int?, field21: String?, field22: Int?, field23: Int?, field24: Int?, field25: Int?, field26: Int?, field27: Int?, field28: Int?, field29: Int?, field30: Int?, field31: Int?, field32: Int?) 
{\n self.init()\n self.field1 = field1\n self.field2 = field2\n self.field3 = field3\n self.field4 = field4\n self.field5 = field5\n self.field6 = field6\n self.field7 = field7\n self.field8 = field8\n self.field9 = field9\n self.field10 = field10\n self.field11 = field11\n self.field12 = field12\n self.field13 = field13\n self.field14 = field14\n self.field15 = field15\n self.field16 = field16\n self.field17 = field17\n self.field18 = field18\n self.field19 = field19\n self.field20 = field20\n self.field21 = field21\n self.field22 = field22\n self.field23 = field23\n self.field24 = field24\n self.field25 = field25\n self.field26 = field26\n self.field27 = field27\n self.field28 = field28\n self.field29 = field29\n self.field30 = field30\n self.field31 = field31\n self.field32 = field32\n }\n \n}\nimport Foundation\nimport Realm\nimport RealmSwift\n\nfunc RealmRegister(email: String, password: String){\n let client = realmApp.emailPasswordAuth\n client.registerUser(email: email, password: password){ (error) in\n guard error == nil else {\n print(\"Failed to register: \\(error!.localizedDescription)\")\n return\n }\n print(\"successfully registered user\")\n }\n}\n\nfunc RealmAuth(email: String, password: String){\n realmApp.login(credentials: Credentials.emailPassword(email: email, password: password)) { (result) in\n switch result {\n case .failure(let error):\n print(\"Login failed: \\(error.localizedDescription)\")\n case .success(let user):\n UserDefaults.standard.set(true, forKey: \"ISUSERLOGGEDIN\")\n print(\"Successfully logged in as user \\(user)\")\n }\n }\n}\n\nfunc RealmAuthAnonymous() {\n let anonymousCredentials = Credentials.anonymous\n realmApp.login(credentials: anonymousCredentials){ (result) in\n switch result {\n case .failure(let error):\n print(\"Anonymous Login failed: \\(error.localizedDescription)\")\n case .success(let user):\n UserDefaults.standard.set(true, forKey: \"ISUSERLOGGEDIN\")\n print(\"Successfully anonymously logged in as user \\(user)\")\n }\n }\n}\n{\n \"title\": \"DemoModel\",\n \"properties\": {\n \"SubModel\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"field1\": {\n \"bsonType\": \"int\"\n },\n \"field10\": {\n \"bsonType\": \"int\"\n },\n \"field11\": {\n \"bsonType\": \"int\"\n },\n \"field12\": {\n \"bsonType\": \"int\"\n },\n \"field13\": {\n \"bsonType\": \"int\"\n },\n \"field14\": {\n \"bsonType\": \"int\"\n },\n \"field15\": {\n \"bsonType\": \"int\"\n },\n \"field16\": {\n \"bsonType\": \"int\"\n },\n \"field17\": {\n \"bsonType\": \"int\"\n },\n \"field18\": {\n \"bsonType\": \"int\"\n },\n \"field19\": {\n \"bsonType\": \"int\"\n },\n \"field2\": {\n \"bsonType\": \"int\"\n },\n \"field20\": {\n \"bsonType\": \"int\"\n },\n \"field21\": {\n \"bsonType\": \"string\"\n },\n \"field22\": {\n \"bsonType\": \"int\"\n },\n \"field23\": {\n \"bsonType\": \"int\"\n },\n \"field24\": {\n \"bsonType\": \"int\"\n },\n \"field25\": {\n \"bsonType\": \"int\"\n },\n \"field26\": {\n \"bsonType\": \"int\"\n },\n \"field27\": {\n \"bsonType\": \"int\"\n },\n \"field28\": {\n \"bsonType\": \"int\"\n },\n \"field29\": {\n \"bsonType\": \"int\"\n },\n \"field3\": {\n \"bsonType\": \"int\"\n },\n \"field30\": {\n \"bsonType\": \"int\"\n },\n \"field31\": {\n \"bsonType\": \"int\"\n },\n \"field32\": {\n \"bsonType\": \"int\"\n },\n \"field4\": {\n \"bsonType\": \"int\"\n },\n \"field5\": {\n \"bsonType\": \"int\"\n },\n \"field6\": {\n \"bsonType\": \"int\"\n },\n \"field7\": {\n \"bsonType\": \"int\"\n 
},\n \"field8\": {\n \"bsonType\": \"int\"\n },\n \"field9\": {\n \"bsonType\": \"int\"\n }\n }\n }\n },\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"date\": {\n \"bsonType\": \"string\"\n }\n }\n}\n",
"text": "I created an embeddedObject containing many fields that are present if excel as column headings (which are the key), and values are the values in each row accordingly. A list of these embeddedObjects was then created, having an array of embeddedObject so as to contain each day’s data. The object had a date field where a string was given, and an ObjectId as well as the list of these embeddedObjects. Flexible sync I tried, but in documentation it is mentioned that it requires(prerequisites) non shared MongoDB Atlas cluster running MongoDB 5.0 or greater. Also created an ObservedResults variable to get the results for the frontend. While opening the frontend, I also gave an environment after checking if user is logged in. I have also tried partition based sync, which apparently did not work for this project, although it did for another one.Here are the files:\nRealmDemoApp.Swift:LoginView.swift:SignUpView.swift:DetailView.swift:DemoModel.swift:DemoModel_SubModel.swift:Constants.swift:I have given Database access as read and write to any database, network access as 0.0.0.0/0, defined Rules as readAndWriteAll, generated Schema from data as:DemoModel.swift and DemoModel_SubModel.swift have been replicated as suggested by RealmObjectModels.\nI have also enabled Partition-based Device Sync, and the Developer Mode is on.\nThe database looks like this:\nimage2372×712 60 KB\n\nimage1582×608 50.7 KB\n\nimage1548×646 62 KB\nI am getting stuck somewhere still, as the results show this:\n\nimage824×220 15 KB\nThis project is at urgent priority, would be appreciative if somebody could help where I am getting it wrong.",
"username": "Margi_Bhatt"
},
{
"code": "",
"text": "Someone from MongoDB team or other developers, please help me out here.",
"username": "Margi_Bhatt"
}
]
| Connection of Realm with SwiftUI App | 2022-12-02T11:43:50.735Z | Connection of Realm with SwiftUI App | 1,150 |
|
null | [
"node-js",
"mongoose-odm",
"compass",
"atlas-cluster"
]
| [
{
"code": "MongoServerSelectionError: connect ECONNREFUSED SERVER_IP:27017\n at Timeout._onTimeout (/home/node/app/node_modules/mongodb/lib/sdam/topology.js:318:38)\n at listOnTimeout (internal/timers.js:554:17)\n at processTimers (internal/timers.js:497:7) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n '*****-00-02.ajljo.mongodb.net:27017' => [ServerDescription],\n '*****-00-00.ajljo.mongodb.net:27017' => [ServerDescription],\n '*****-00-01.ajljo.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-*****-shard-0',\n logicalSessionTimeoutMinutes: undefined\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n }\n",
"text": "Hey,We are using MongoDB payed version and we started receiving ReplicaSetNoPrimary error randomly this week on local. We are using node.js but when the error appears while we are working, it´s stop answering from Atlas too.Any advice is appreciated. Thank youThis is the error we receive from mongoose",
"username": "dimitar_vasilev"
},
{
"code": "",
"text": "Hi @dimitar_vasilev and welcome to the MongoDB community forum!!Thank you for sharing the above information in detail. However, to understand clearly, could you help with a few more details below:We are using MongoDB payed versionWe are using node.jsIt happens only sometimeswhen it’s about to happen the queries start working really slowPlease let me know if you have any further queries.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi Assawari thank you for your answer.Let me know if there is anything I can provide to help understand the problem better.Thank you for your time!",
"username": "dimitar_vasilev"
},
{
"code": "mongod",
"text": "Hi @dimitar_vasilev and thank you for your reply.It would be helpful if you help me understand further with more details.local dev environment.Does this mean that only the local MongoDB you’re having issues with but not Atlas?\nCould you confirm how the local environment was run? Is it on a laptop, what’s the mongod command line parameters are?\nWhat are the operating system configuration?Could you also provide the connection string that you are using to connect for both the environments?Does you local setup has a different configuration or security setting from the the production environment ?Regards\nAasawari",
"username": "Aasawari"
}
]
| ReplicaSetNoPrimary error after a while | 2022-12-04T20:33:14.431Z | ReplicaSetNoPrimary error after a while | 2,495 |
null | [
"upgrading"
]
| [
{
"code": "",
"text": "Just upgraded a MongoDB cluster and got the message:\" Your cluster upgrade failed during data migration, and in some rare cases, your cluster may be missing data. If this is the case, please contact our support staff for help with data recovery.\"Checked the database and 90%+ of the data is gone. I tried contacting support but didn’t get any reply so far.Also now, 24 hours later, “Disk Usage” is suddenly through the roof for no apparent reason. Data size (collections and indices) is still only around 2GB but “disk usage” is suddenly 16GB and growing fast.Only thing I changed is upgrading the Cluster. Any ideas what’s going on and how to go about fixing things would be greatly appreciated.",
"username": "Jakob_Greenfeld"
},
{
"code": "",
"text": "Hi @Jakob_Greenfeld welcome to the community!Sorry that your experience hasn’t been optimal.I have referred your issue to the Atlas support team, and someone should be (or may have already be) in touch with you regarding this.Hope it gets resolved soon!Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| 90% of all data vanished after upgrading the cluster | 2022-12-18T10:52:54.434Z | 90% of all data vanished after upgrading the cluster | 1,791 |
null | [
"dot-net",
"unity"
]
| [
{
"code": "",
"text": "Im using Realms for my Unity project [offline game], but where do I see the created database ? I’ve searched google and the answer i got is to look for the “.realms” file, but I can’t find one.How do I save the Realms database into a file ?",
"username": "Joshua_Baldos"
},
{
"code": "Realm.GetInstancerealm.Config.DatabasePathRealmConfiguration",
"text": "When you open a Realm instance (i.e. call Realm.GetInstance), you’re already working with the database. That means that any writes you do will be persisted to disk. If you want to know where the database is stored, you can inspect realm.Config.DatabasePath. We’ll generate a default path, typically in the documents folder, but you can always customize that by manually specifying the optional path when creating a RealmConfiguration.",
"username": "nirinchev"
},
{
"code": "",
"text": "thank you very much !!!",
"username": "Joshua_Baldos"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Realm file in a unity project (c# .net) | 2022-12-17T21:00:41.585Z | Realm file in a unity project (c# .net) | 2,179 |
null | [
"atlas-triggers"
]
| [
{
"code": "",
"text": "I want to send email to customer everytime they order something. i will be doing this using database trigger but i need function code so i can write that function.",
"username": "Its_Me"
},
{
"code": "newpickedsenterror",
"text": "We use an alternative workflow:We create messages in a database with all relevant fields and status.status: new -> picked -> sent / errorThen we have an external job (crontab, nodejs, nedemailer) checking every minute for new email docs in status new, setting them to picked, sending them via SMTP, then setting the final status to sent or error.Be aware that sent emails can still bounce. So we have another job checking incoming emails for bounced messages and updating the database. We use a trick and write our own reference into the email header which is included in a returned message. So we know exactly which email was bounced.",
"username": "blue_puma"
},
{
"code": "",
"text": "I have an interest in implement the solution you have identified in your response above.\nHowever, your answer did not have any code snippet or an example for how to write/include/enable the external jobs (crontab, nodejs, nedemailer).\nCan you please provide some code for us to use?",
"username": "Wizzi_Intl"
}
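For reference, a minimal sketch of the worker loop described above, using the Node.js driver, node-cron and nodemailer. The connection string, collection name, SMTP settings and field names are illustrative assumptions, not taken from the original posts.

```js
const cron = require("node-cron");
const nodemailer = require("nodemailer");
const { MongoClient } = require("mongodb");

const client = new MongoClient(process.env.MONGODB_URI);
const transporter = nodemailer.createTransport({
  host: "smtp.example.com",
  port: 587,
  auth: { user: process.env.SMTP_USER, pass: process.env.SMTP_PASS },
});

// every minute: pick up "new" messages, send them, record the outcome
cron.schedule("* * * * *", async () => {
  await client.connect(); // no-op if already connected
  const emails = client.db("app").collection("emails");

  const pending = await emails.find({ status: "new" }).toArray();
  for (const doc of pending) {
    await emails.updateOne({ _id: doc._id }, { $set: { status: "picked" } });
    try {
      await transporter.sendMail({
        from: "[email protected]",
        to: doc.to,
        subject: doc.subject,
        text: doc.body,
      });
      await emails.updateOne({ _id: doc._id }, { $set: { status: "sent", sentAt: new Date() } });
    } catch (err) {
      await emails.updateOne({ _id: doc._id }, { $set: { status: "error", error: String(err) } });
    }
  }
});
```

Bounce handling, as noted above, still needs a second job that parses incoming mail and matches it back to these documents.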
]
| How to Send Email from MongoDB Function | 2021-05-01T18:41:08.120Z | How to Send Email from MongoDB Function | 5,929 |
null | []
| [
{
"code": "",
"text": "i am using kafka connector to get data from Mongodb but the dates in the messages look kinda weird.instead of “created_at”: “1671463988665” i get “created_at”: {\"$date\": 1671463988665}\nis there a setting to change this behavior? please advise on how to proceed.",
"username": "ton_test"
},
{
"code": "\"output.json.formatter\"=\"com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson\"\n",
"text": "You might want to try modifying the “output.json.formatter” property something like",
"username": "Robert_Walters"
}
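A hedged example of where that property sits in a source connector configuration (the connection string, database and collection names are placeholders; only output.json.formatter is the setting being discussed):

```json
{
  "name": "mongo-source",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb+srv://<user>:<password>@cluster0.example.mongodb.net",
    "database": "mydb",
    "collection": "orders",
    "output.json.formatter": "com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson"
  }
}
```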
]
| Kafka connect weird date formatting issue | 2022-12-19T21:30:30.167Z | Kafka connect weird date formatting issue | 1,089 |
null | [
"react-native"
]
| [
{
"code": "watchEROR [TypeError: Invalid attempt to iterate non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.]\n",
"text": "Hi, I am currently trying to integrate “watch” over a collection changes in React Native app. But after I implemented like in the documentation, the app isn’t notified on the change of any record. I have tried to query collection data and it works fine, so the connection isn’t the issue.\nThe error that I get when running watch is:Upon investigation I came accross this issue in Github collection.watch() fails on React Native SDK · Issue #3494 · realm/realm-js · GitHub, which seems to be identical to the issue I am facing.Is there anyone know a workaround or an alternative to achieve this?\nThanks!",
"username": "Ahmad_Ali_Abdilah"
},
{
"code": "",
"text": "Hello @Ahmad_Ali_Abdilah ,Welcome to Realm Community Forums Unfortunately, this is a known issue and the engineering team is still investigating a resolution for it. The workaround to get data from Atlas is to use Realm Sync.Kind Regards,\nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "Hey were you able to solve this?",
"username": "Arijit_Das3"
},
{
"code": "",
"text": "Is there a solution to this now?",
"username": "Arijit_Das3"
}
]
| Realm collection watch | 2021-12-03T02:31:50.942Z | Realm collection watch | 3,719 |
null | [
"aggregation"
]
| [
{
"code": "{\n $addFields: {\n created_at: {\n $toDate: \"$creationDate\",\n },\n current_date: {\n $toDate: \"$$NOW\",\n },\n threshholdDate:{\n $dateSubtract:{\n startDate:\"$$NOW\",\n unit: \"month\",\n amount: 7,\n \n }\n },\n },\n },\n",
"text": "Unrecognized expression ‘$dateSubtract’ in aggregation.(InvalidPipelineOperator) Invalid $addFields :: caused by :: Unrecognized expression ‘$dateSubtract’",
"username": "Anil_Prasad1"
},
{
"code": "",
"text": "As per documentation $dateSubtract is new in 5.0. Perhaps you are using an older version.",
"username": "steevej"
},
{
"code": "",
"text": "You are right we are using version 4.4. Can you recommend some alternative please.",
"username": "Anil_Prasad1"
},
{
"code": " */\n{\n created_at: {\n $toDate: \"$creationDate\",\n },\n threshholdDate:{\n $subtract: [ \"$$NOW\", 7*30*24*60*60*1000 ]\n \n }\n \n }\n",
"text": "Solution I have came up with is. Any better solution will be appreciated.",
"username": "Anil_Prasad1"
}
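A small variation on the same idea for 4.4: since $dateSubtract is unavailable, the cutoff can also be computed on the client (mongosh / Node.js shown here), which avoids approximating 7 months as 7 × 30 days. The collection and field names follow the pipeline above.

```js
// compute "7 calendar months ago" in the client and pass it in as a literal
const threshold = new Date();
threshold.setMonth(threshold.getMonth() - 7);

db.collection.aggregate([
  {
    $addFields: {
      created_at: { $toDate: "$creationDate" },
      threshholdDate: threshold
    }
  }
]);
```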
]
| (InvalidPipelineOperator) Invalid $addFields :: caused by :: Unrecognized expression '$dateSubtract' | 2022-12-19T17:24:23.415Z | (InvalidPipelineOperator) Invalid $addFields :: caused by :: Unrecognized expression ‘$dateSubtract’ | 2,890 |
null | [
"queries",
"java",
"atlas-cluster",
"kotlin"
]
| [
{
"code": "class Main: JavaPlugin() {\n override fun onEnable() {\n logger.info(\"Hello World!\")\n\n val connectionString = ConnectionString(\"mongodb+srv://<user>:<password>@cluster0.aapbl4p.mongodb.net/?retryWrites=true&w=majority\")\n val mongoClient = MongoClients.create(connectionString)\n val database = mongoClient.getDatabase(\"sample_weatherdata\")\n val col = database.getCollection(\"data\")\n val data = col.find().first()\n logger.info(data!!.getString(\"callLetters\"))\n }\n}\njava.io.UncheckedIOException: java.net.SocketException: Invalid argument\n\tat sun.nio.ch.DatagramSocketAdaptor.disconnect(DatagramSocketAdaptor.java:136) ~[?:?]\n\tat java.net.DatagramSocket.disconnect(DatagramSocket.java:393) ~[?:?]\n\tat com.sun.jndi.dns.DnsClient.doUdpQuery(DnsClient.java:437) ~[jdk.naming.dns:?]\n\tat com.sun.jndi.dns.DnsClient.query(DnsClient.java:214) ~[jdk.naming.dns:?]\n\tat com.sun.jndi.dns.Resolver.query(Resolver.java:81) ~[jdk.naming.dns:?]\n\tat com.sun.jndi.dns.DnsContext.c_getAttributes(DnsContext.java:434) ~[jdk.naming.dns:?]\n\tat com.sun.jndi.toolkit.ctx.ComponentDirContext.p_getAttributes(ComponentDirContext.java:235) ~[?:?]\n\tat com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:141) ~[?:?]\n\tat com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:129) ~[?:?]\n\tat javax.naming.directory.InitialDirContext.getAttributes(InitialDirContext.java:171) ~[?:?]\n\tat com.mongodb.internal.dns.DefaultDnsResolver.resolveAdditionalQueryParametersFromTxtRecords(DefaultDnsResolver.java:114) ~[?:?]\n\tat com.mongodb.ConnectionString.<init>(ConnectionString.java:378) ~[?:?]\n\tat rpg.oracle.EconomySystem.Main.onEnable(Main.kt:30) ~[?:?]\n\tat org.bukkit.plugin.java.JavaPlugin.setEnabled(JavaPlugin.java:263) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat org.bukkit.plugin.java.JavaPluginLoader.enablePlugin(JavaPluginLoader.java:342) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat org.bukkit.plugin.SimplePluginManager.enablePlugin(SimplePluginManager.java:480) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat org.bukkit.craftbukkit.v1_16_R3.CraftServer.enablePlugin(CraftServer.java:492) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat org.bukkit.craftbukkit.v1_16_R3.CraftServer.enablePlugins(CraftServer.java:406) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat org.bukkit.craftbukkit.v1_16_R3.CraftServer.reload(CraftServer.java:879) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat org.bukkit.Bukkit.reload(Bukkit.java:651) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat org.bukkit.command.defaults.ReloadCommand.execute(ReloadCommand.java:27) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat org.bukkit.command.SimpleCommandMap.dispatch(SimpleCommandMap.java:149) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat org.bukkit.craftbukkit.v1_16_R3.CraftServer.dispatchCommand(CraftServer.java:761) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat org.bukkit.craftbukkit.v1_16_R3.CraftServer.dispatchServerCommand(CraftServer.java:746) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat net.minecraft.server.v1_16_R3.DedicatedServer.handleCommandQueue(DedicatedServer.java:426) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat net.minecraft.server.v1_16_R3.DedicatedServer.b(DedicatedServer.java:395) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat net.minecraft.server.v1_16_R3.MinecraftServer.a(MinecraftServer.java:1127) 
~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat net.minecraft.server.v1_16_R3.MinecraftServer.w(MinecraftServer.java:966) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat net.minecraft.server.v1_16_R3.MinecraftServer.lambda$0(MinecraftServer.java:273) ~[spigot-1.16.5.jar:3096a-Spigot-9fb885e-af1a232]\n\tat java.lang.Thread.run(Thread.java:831) [?:?]\nCaused by: java.net.SocketException: Invalid argument\n\tat sun.nio.ch.DatagramChannelImpl.disconnect0(Native Method) ~[?:?]\n\tat sun.nio.ch.DatagramChannelImpl.disconnect(DatagramChannelImpl.java:1294) ~[?:?]\n\tat sun.nio.ch.DatagramSocketAdaptor.disconnect(DatagramSocketAdaptor.java:134) ~[?:?]\n\t... 29 more\n",
"text": "I have connection error. This is my code.\nI’m using mongodb-driver-sync:4.0.5Error logpls Help me",
"username": "RPG_Oracle"
},
{
"code": "",
"text": "When I was using Oracle JDK 15, I had same error.\nI changed to Oracle JDK 19. it’s OK.\nFile > Project Structure",
"username": "muyoungko"
}
]
| Connection error with spigot & Kotlin | 2022-10-31T12:25:12.649Z | Connection error with spigot & Kotlin | 2,213 |
null | [
"app-services-cli"
]
| [
{
"code": "C:\\Users\\janedoe\\myproject>realm-cli push --remote=\"myproject_development-scxyo\"\nDetermining changes\npush failed: resource name can only contain ASCII letters, numbers, and underscores\n",
"text": "I was following the guide https://www.mongodb.com/docs/atlas/app-services/manage-apps/configure/copy-app/ and got stuck on step 7.However, it wasn’t specified which resource is violating the rule “resource name can only contain ASCII letters, numbers, and underscores”, so I’m at a loss on how to get started on fixing the error.The original app is working fine, though, which confuses me more.",
"username": "njt"
},
{
"code": "",
"text": "Did you ever figure this out? I am running into the same issue.",
"username": "Tyler_Collins"
}
]
| Encountered error on step 7 of Making a Copy of an App w/ Realm CLI | 2022-10-04T06:42:59.798Z | Encountered error on step 7 of Making a Copy of an App w/ Realm CLI | 2,174 |
[
"atlas-data-lake"
]
| [
{
"code": "{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": <ARN>\n },\n \"Action\": \"sts:AssumeRole\",\n \"Condition\": {\n \"StringEquals\": {\n \"sts:ExternalId\": <ID>\n }\n }\n }\n ]\n}```",
"text": "Getting this error “atlas cannot assume the specified role” while trying to add aws data store for data lake in atlas. Can someone help me on what is the cause of this error why it can’t assume a role?\nIMG-20221217-WA00011280×1066 360 KB\npolicy json",
"username": "P_Vivek"
},
{
"code": "",
"text": "Hello @P_VivekIt’s a bit hard to say from this error exactly what is causing the issue. But for Atlas Data Federation, this setup flow is giving MongoDB’s Data Federation’s IAM User the permission to “assume” your IAM Role, which grants access to your bucket. For some reason Data Federation is getting an error back from AWS at this step while it is attempting to assume the role.Did you successfully complete the earlier steps to create a new role by running the AWS CLI commands? Did anything change with the role in between creating them with the suggested commands and trying to run this?If you’d like to walk through this live, throw some time on my calendar here: Calendly - Benjamin FlastBest,\nBen",
"username": "Benjamin_Flast"
}
]
| Error in Configuring AWS S3 Data Store in Mongodb Data Lake | 2022-12-17T04:30:12.192Z | Error in Configuring AWS S3 Data Store in Mongodb Data Lake | 1,932 |
|
[
"aggregation",
"node-js",
"mongoose-odm"
]
| [
{
"code": "modifierNameemaillet data = await Request.aggregate([\n { $match: searchFilters },\n { $lookup: {\n from: 'user',\n localField: 'user',\n foreignField: '_id',\n as: 'modifier',\n } },\n]);\n\nlet email = data.email; // [email protected]\nlet modifierName = email.split('@')[0]; // julio\n\ndata = await Request.aggregate([\n { $match: searchFilters },\n { $lookup: {\n from: 'user',\n localField: 'user',\n foreignField: '_id',\n as: 'modifier',\n } },\n { $group: {\n _id: {\n date: groupIdDate,\n },\n modifierName: { $sum: 1 }\n } }\n]);\n",
"text": "Hi all, I would like to define modifierName from email variable and use it as a field in an aggregation query (specifically $group). How can I achieve that with this code?Here is this question in a picture :\n\nproblem-21110×593 56.6 KB\nAny suggestion is appreciated, thank you!",
"username": "marc"
},
{
"code": "",
"text": "This question has more to do with JavaScript than MongoDB.What you want is called computed property names.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks! By that, it would be like this right?\nimage526×538 37.9 KB\n",
"username": "marc"
},
{
"code": "mongosh > modifierName = \"foo\"\n< 'foo'\nmongosh > bar = { [modifierName] : \"bar\" }\n< { foo: 'bar' }\nmongosh > bar\n< { foo: 'bar' }\n",
"text": "By that, it would be like this right?The best way to find out is to try it.Using mongosh",
"username": "steevej"
},
{
"code": "modifier$$modifier...\n{\n '$group': {\n '_id': {\n 'date': {\n '$dateToString': {\n 'format': '%d/%m/%Y', \n 'date': '$updatedTime'\n }\n }\n }, \n '$$modifier': { '$sum': 1 }\n }\n },\n...\n'$$modifier': { '$sum': 1 }...\n[modifier]: { '$sum': 1 }\n// OR\n['$modifier']: { '$sum': 1 }\n...\n",
"text": "Thanks!\nBut what if I would like to use result of previous stage in MongoDB?Let’s say, I have modifier values as a result of $unwind stage which I would like to use it as a field in the next stage ($group), like in the picture below :\nask-11141×552 84.3 KB\nI tried to define it as $$modifier, but it failed :Is it possible to label a field using result in the previous stage?Edit :\nChanging '$$modifier': { '$sum': 1 } to :said : Stage must be a properly formatted document.",
"username": "marc"
},
{
"code": "",
"text": "Maybe this helps you @marc : Update nested object by key - #5 by santimir",
"username": "cris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to use a variable value as a field in aggregation query ($group)? | 2022-12-17T01:47:59.370Z | How to use a variable value as a field in aggregation query ($group)? | 2,885 |
|
null | [
"node-js"
]
| [
{
"code": "const stringExample = \"Are you a veggie?\"\"...: { extraFields\": {\n \"topic1\": {\n \"Are you a veggie?\": {\n \"email\": [\"[email protected]\"]\n },\n \"Do you like dogs?\":{\n \"email\": [\"[email protected]\"]\n }\n }\n },\n}\nconst stringExampledocumentExample.findByIdAndUpdate(\n { _id: exampleID },\n {\n $push: {\n \"extraFields.topic1[$stringExample]\": {\n email: \"example value\",\n },\n },\n }\n )\n",
"text": "Hi guys, I would love your help here, as I spent 3h+ trying to solve this problem.What I want to achieve:Specifically:Can this be done?\nThanks!",
"username": "cris"
},
{
"code": "",
"text": "This is a JS question rather than mongo. It is called computed field names. See",
"username": "steevej"
},
{
"code": "extraFields.topic1",
"text": "Thanks - I actually didnt know how this was called.I managed to get this to work in regular JS in similar use cases, but I really dont know the syntax for MongoDB to make this work & it seems to me more about how MongoDB handles nested object calls (eg. dot notation to access nested fields extraFields.topic1)",
"username": "cris"
},
{
"code": "$pushemail \"extraFields.topic1[stringExample]\"",
"text": "@steevej To use $push I need right now to go directly to the array email, so I need to somehow combine the dot notation of MongoDB with computed field names - something like \"extraFields.topic1[stringExample]\". Do you know how?Edit: Maybe theres a better way altogether to structure this object and I am missing it.",
"username": "cris"
},
{
"code": "const stringExample = \"Are you a veggie?\"\ndb.sales.updateOne(\n { _id: exampleID },\n {\n $push: {\n [`extraFields.topic1.${stringExample}.email`]: \"example value\",\n },\n }\n )\n",
"text": "The idea would be this I think. Feel free to replace updateOne, I just tested without any driver.(The computed property name must point to an array)I think keys with spaces are not a good idea normally, but that is your choice. For example, you may camelcase it using some library.",
"username": "santimir"
},
{
"code": "",
"text": "@santimir It works! Many many thanks! Such a small thing, but wasnt able to figure it out.",
"username": "cris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Update nested object by key | 2022-12-19T13:03:27.771Z | Update nested object by key | 3,499 |
null | [
"aggregation"
]
| [
{
"code": "{\n \"_id\": \"bac62c07d4bc49a0a6e2884483aced1c\",\n \"sorts\": [\n {\n \"id\": \"t1\",\n \"sortNo\": 1\n },\n {\n \"id\": \"t2\",\n \"sortNo\": 3\n }\n ]\n}\n\n\n{\n \"_id\": \"bac62c07d4bc49a0a6e2884483aced13\",\n \"sorts\": [\n {\n \"id\": \"t1\",\n \"sortNo\": 3\n },\n {\n \"id\": \"t2\",\n \"sortNo\": 1\n }\n ]\n}\n\ndb.getCollection(\"test\").aggregate([\n {\n $unwind: \"$sorts\"\n },\n {\n $match: {\n \"sorts.id\": \"t1\"\n }\n },\n {\n $sort: {\n \"sorts.sortNo\": 1\n }\n },\n {\n $project: {\n \"_id\": 1\n }\n },\n {\n $lookup: {\n \"from\": \"test\",\n \"as\": \"list\",\n \"foreignField\": \"_id\",\n \"localField\": \"_id\"\n }\n },\n {\n $unwind: \"$list\"\n },\n {\n \"$replaceRoot\": {\n \"newRoot\": \"$list\"\n }\n }\n])\n\n",
"text": "i have the data need SortNo sorting for different sorts.id:I have a more complex implementation scheme:I want to know whether it is possible to directly use the simple find method??",
"username": "1111027"
},
{
"code": "",
"text": "Hi @1111027 ,Not sure I fully understand the needed output.Can you share how would the output look like?Are you trying to sort the _id by the inner arrays or you want to sort the arrays?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I do not see any differences in the output you supplied.",
"username": "steevej"
},
{
"code": "",
"text": "sorry,i Input error!\nsort: “sorts.sortNo”: 1if match: “sorts.id”: “t1”\nwell get results:\n“_id”: “bac62c07d4bc49a0a6e2884483aced1c”,…\n“_id”: “bac62c07d4bc49a0a6e2884483aced13”,…if match:“sorts.id”: “t2”:\n“_id”: “bac62c07d4bc49a0a6e2884483aced13”,…\n“_id”: “bac62c07d4bc49a0a6e2884483aced1c”,…",
"username": "1111027"
},
{
"code": "{\n \"_id\": \"bac62c07d4bc49a0a6e2884483aced1c\",\n \"sorts\": { t1 : 1 , t2 : 3 }\n}\n{\n \"_id\": \"bac62c07d4bc49a0a6e2884483aced13\",\n \"sorts\": { t1: 3 , t2 : 1 }\n}\ndb.test.find( { \"sorts.t1\" : { \"$exists\" : true } } ).sort( \"sorts.t1\" ).projection( { _id : 1 } )\n",
"text": "I really do not understand this.And why don’t you simply have documents like:and do",
"username": "steevej"
},
{
"code": "",
"text": "Because in actual business, t1 and t2 are not always fixed, and there may be t3, t4… and so on. We want to achieve this through array data expansion rather than through extended fields. In this way, you can have a more fixed and scalable data model",
"username": "1111027"
},
{
"code": "",
"text": "It looks like you are somehow implementing the attribute pattern.One way I could see how you could do that is with the following.1 - a $set stage that uses $filter to extract the sorts needed\n2 - a $match stage that removes document where $filter did not find a sorts to use\n3 - a $set stage that uses the result of the $filter to set a top level sortNo",
"username": "steevej"
}
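A mongosh sketch of the three stages described above, run against the sample documents from the question (sorting ascending on the extracted sortNo; replace "t1" with whichever id is needed):

```js
db.test.aggregate([
  // 1 - extract the sorts entry we care about
  { $set: { _match: { $filter: {
      input: "$sorts", as: "s", cond: { $eq: ["$$s.id", "t1"] } } } } },
  // 2 - drop documents where no entry matched
  { $match: { "_match.0": { $exists: true } } },
  // 3 - promote that entry's sortNo to a top-level field, then sort on it
  { $set: { _sortNo: { $arrayElemAt: ["$_match.sortNo", 0] } } },
  { $sort: { _sortNo: 1 } },
  { $unset: ["_match", "_sortNo"] }
])
```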
]
| How to implement according to the array by aggregating ordered by qualified field | 2022-12-09T05:25:14.903Z | How to implement according to the array by aggregating ordered by qualified field | 1,520 |
null | [
"indexes",
"performance",
"time-series"
]
| [
{
"code": "tstest> db.createCollection(\"test1\", {timeseries: {timeField: \"ts\", metaField: \"md\", granularity: \"seconds\"}})\n{ ok: 1 }\ntest> db.createCollection(\"test2\")\n{ ok: 1 }\ntest> db.test1.createIndex({ts: -1})\nts_-1\ntest> db.test2.createIndex({ts: -1})\nts_-1\ntest> for (var i = 1; i <= 100; i++) {\n... const docs = [...Array(10000)].map(_ => ({\n..... ts: new Date(new Date().getTime() - (Math.random() * 172800000)),\n..... md: \"\",\n..... value: Math.random() * 1000000\n..... }));\n... db.test1.insertMany(docs);\n... db.test2.insertMany(docs);\n... }\ntest> db.test1.countDocuments()\n1000000\ntest> db.test2.countDocuments()\n1000000\ntest> db.test1.find({}).sort({ts: -1}).limit(1)\n[\n {\n ts: ISODate(\"2021-12-20T18:45:43.956Z\"),\n md: '',\n _id: ObjectId(\"61c0cf58435319318616399e\"),\n value: 933482.8127467203\n }\n]\ntest> db.test2.find({}).sort({ts: -1}).limit(1)\n[\n {\n _id: ObjectId(\"61c0cf6143531931861660ae\"),\n ts: ISODate(\"2021-12-20T18:45:43.956Z\"),\n md: '',\n value: 933482.8127467203\n }\n]\n",
"text": "Hello,I created two collections. One is a timeseries, the other is not.\nI added an index to the ts field to both and then wrote 1’000’000 random documents to both.\nThe code to do that is as follows:Querying the timeseries collection (test1) is MUCH slower than querying the regular collection (test2).test1 query explain and stats: test> db.test1.find({}).sort({ts: -1}).limit(1).explain(){ explainVersion: - Pastebin.com\ntest2 query explain and stats: test> db.test2.find({}).sort({ts: -1}).limit(1).explain(){ explainVersion: - Pastebin.comIs there something wrong with my query? Or the way I’m structuring my data?I’ve tried using .hint() to force the timeseries collection to use the index but the performance is the same.Would love any ideas or suggestions.Thank you",
"username": "Sina_Ghaffari"
},
{
"code": "$match",
"text": "We have noticed the exact same issue. Currently on MongoDb 5.2.0.Sorting by timestamp on a timeseries collection does not use the index.\nWe made sure that index order and sort order are the same.Is there any update on this?The only workaround we found so far is using a $match on the timestamp to limit the amount of documets as much as possible - but that is not possible in all cases.A statement from MongoDB on why this is, and how to work around this would be great!",
"username": "Pasukaru"
},
{
"code": "",
"text": "Same here.\nAny update about this?\nIn my use case i want to calculate the ranges of time-contiguous data and i have to sort the data before i do my calculations. Also for me limiting the documents gives better performances but that is not a real solution for me.",
"username": "DAVIDE_SAVARRO"
},
{
"code": "",
"text": "Hi,one thing that I found out that improves performance is an index in DESCENDING order on the timestamp field. At least on MongoDB 6.0.0 this has different behavior. A compound index on meta field + timestamp field does not provide the same performance. It will still load and unpack all the documents that match the query criteria and then sort and limit.\nYou might try to enforce the usage of the index by using the .hint() function. https://www.mongodb.com/docs/manual/reference/method/cursor.hint/Cheers,\nMartin",
"username": "Martin_Prodanov"
},
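In mongosh, the descending-index approach described above looks like the sketch below (collection and field names follow the earlier example; whether it actually avoids unpacking all buckets depends on the server version, as noted):

```js
// secondary index on the time field, descending
db.test1.createIndex({ ts: -1 })

// ask the planner to use it for the "latest document" query
db.test1.find().sort({ ts: -1 }).limit(1).hint({ ts: -1 })
```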
{
"code": "",
"text": "Hi any updates on this issue?I’m receiving also the alarm from mongodb atlas because the retrieved docuemnt overcome the limit of 1000.Right now, I’m using 5.0.12 version on the cloud.",
"username": "Vatemecum"
},
{
"code": "var collection = db.collection(datasource.collectionName); \nvar latest = await collection.findOne({iSensor:0}, { sort: {$natural:-1} });\ndb.collectionName.find({}).sort({$natural:-1}).limit(1)\n",
"text": "After an eternity of research on getting a fast view on latest time series document of my collection\nI found out this was efficient with nodejs driver:Or with mongosh",
"username": "Matthieu_Moret"
}
]
| Timeseries: Poor Performance on Indexed TimeField | 2021-12-20T19:01:27.898Z | Timeseries: Poor Performance on Indexed TimeField | 5,015 |
null | [
"aggregation",
"queries",
"data-modeling",
"indexes"
]
| [
{
"code": "",
"text": "I am facing the decision to either use one collection with embedding or use 2 collections.If I use one collection, then I only need to perform 1 query. If I use 2 collections, then I need to perform two queries. However, performing 2 questions allows me to use 2 indexes.In light of the ability to use 2 indexes, can I expect better read performances by using 2 collections instead of embedding?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "{ orders: [ { order_id: 1, ... }, { order_id: 2, ... } .... { order_id: nnn, .... } ] }",
"text": "@Big_Cat_Public_Safety_Act this will entirely depend on your data model and how much data you’re embedding vs. referencing. Performing a single operation (embedding) will require less disk/cache activity and fewer network roundtrips to retrieve the data.If the data you’re referencing is potentially an unbounded array (ex: { orders: [ { order_id: 1, ... }, { order_id: 2, ... } .... { order_id: nnn, .... } ] }) it may make more sense to store that in a separate collection and only filter for the subset of data you need at any given time.In light of the ability to use 2 indexes, can I expect better read performances by using 2 collections instead of embedding?If you’re retrieving 100% of the data in a document 100% of the time, embedding will be a better choice. If you’re referencing a lot of data per document, 2 collections may be beneficial.The beauty of MongoDB is you have both options at your disposal, however choosing the appropriate design will depend entirely on your application’s usage and access patterns.",
"username": "alexbevi"
}
]
| Can MongoDB use multiple indexes in one query? | 2022-12-19T05:12:32.460Z | Can MongoDB use multiple indexes in one query? | 1,243 |
null | [
"queries",
"node-js",
"crud"
]
| [
{
"code": "{\n \"_id\" : ObjectId(\"63a00087347cec20308de2ec\"),\n \"registration\" : {\n \"email\" : \"[email protected]\",\n \"password\" : \"password\"\n },\n \"profile\" : {\n \"name\" : \"My Name\",\n \"mark\" : \"r4u4tlbOSq\",\n \"category\" : \"general\",\n //other 60 fields\n }\n}\n//req.body\n{ // name field not present\n \"mark\" : \"newmark\",\n \"category\" : \"general\",\n //other 60 fields\n }\nconst query = { _id: ObjectId(req.userid) };\n\n const update = { $set: {\"profile\" :{...req.body}}};\n\n const options = { upsert: true };\n\n await colname.updateOne(query, update, options);\n{\n \"_id\" : ObjectId(\"63a00087347cec20308de2ec\"),\n \"registration\" : {\n \"email\" : \"[email protected]\",\n \"password\" : \"password\"\n },\n \"profile\" : {\n \"mark\" : \"newmark\",\n \"category\" : \"general\",\n //other 60 fields\n }\n}\n{\n \"_id\" : ObjectId(\"63a00087347cec20308de2ec\"),\n \"registration\" : {\n \"email\" : \"[email protected]\",\n \"password\" : \"password\"\n },\n \"profile\" : {\n \"name\" : \"My Name\",\n \"mark\" : \"newmark\",\n \"category\" : \"general\",\n //other 60 fields\n }\n}\n",
"text": "I want to update the sub-document “profile” withThe code I wrote below and it is deleting profile.name, but I want to preserve profile.name. I want to preserve all the fields that are either not present in req.body or is empty string. How can I do that.Actual ResultExpected ResultThank You",
"username": "Freiza_Gen"
},
{
"code": "$setdb.collection.update({}, //your search query\n[\n {\n $addFields: {\n \"profile\": { //overwrite profile field\n \"$mergeObjects\": [\n \"$$ROOT.profile\",// $$ROOT means current object\n {\n \"hello\": \"world\" // your update object here\n }\n ]\n }\n }\n }\n],\n{\n upsert: true\n})\n",
"text": "The problem with standard $set updates is that the update can’t access the internal fields for the document being processed.You can do it using a pipeline-update instead, if I get your idea correctly. See mongoplayground, and sample below:There may be better ways.",
"username": "santimir"
},
{
"code": "{ $set: {\"profile\" : value } }{ \"$set\" : {\n \"profile.mark\" : \"new_mark\" ,\n \"profile.category\" : \"new_category\"\n} }\nObject.keys(req.body).map(...)const update = { $set: {\"profile\" :{...req.body}}};",
"text": "There may be better ways.I cannot decide if it is a better way or not, just that it is different.When you use{ $set: {\"profile\" : value } }you ask to set the field profile to the given value. To set a field inside an object you use the dot notation like:In this case the difficulty is to go from req.body to the appropriate dot notation. In JS, it should be easy to do that with something likeObject.keys(req.body).map(...)But that would be dangerous since you need to trust the req.body you received. But you already do that withconst update = { $set: {\"profile\" :{...req.body}}};",
"username": "steevej"
},
{
"code": "const myQueryObj = { }\nconst entries = Object.entries(req.body)\nfor (const [key,val] of entries){\n myQueryObj[`options.${key}`] = val\n}\n",
"text": "You are correct @steevej, nice explanation. I have forgotten some syntax, maybe this",
"username": "santimir"
},
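Putting the two answers together, a sketch of the dot-notation variant for the original question (req.body is assumed to be trusted/validated, as steevej cautions above; collection and field names mirror the question):

```js
const setObj = {};
for (const [key, val] of Object.entries(req.body)) {
  if (val !== "") {            // skip empty strings so existing values survive
    setObj[`profile.${key}`] = val;
  }
}

await colname.updateOne(
  { _id: ObjectId(req.userid) },
  { $set: setObj },
  { upsert: true }
);
```

Fields that are absent from req.body (such as profile.name) are simply not listed in $set, so they are preserved.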
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Upsert nested object | 2022-12-19T10:47:04.053Z | Upsert nested object | 2,033 |
null | []
| [
{
"code": "[root@ns1 ~]# systemctl start mongod\nJob for mongod.service failed because a fatal signal was delivered to the control process. See \"systemctl status mongod.service\" and \"journalctl -xe\" for details.\n[root@ns1 ~]# systemctl status mongod.service\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\n Active: failed (Result: signal) since Wed 2022-07-06 21:11:45 +03; 31s ago\n Docs: https://docs.mongodb.org/manual\n Process: 572214 ExecStart=/usr/bin/mongod $OPTIONS (code=killed, signal=ILL)\n Process: 572210 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 572207 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 572205 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\n\nJul 06 21:11:45 ns1.localhost.com systemd[1]: Starting MongoDB Database Server...\nJul 06 21:11:45 ns1.localhost.com systemd[1]: mongod.service: control process exited, code=killed status=4\nJul 06 21:11:45 ns1.localhost.com systemd[1]: Failed to start MongoDB Database Server.\nJul 06 21:11:45 ns1.localhost.com systemd[1]: Unit mongod.service entered failed state.\nJul 06 21:11:45 ns1.localhost.com systemd[1]: mongod.service failed.\n-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat\n--\n-- A new session with the ID c15063 has been created for the user root.\n--\n-- The leading process of the session is 561230.\nJul 06 00:31:55 ns1.localhost.com sudo[561230]: pam_unix(sudo:session): session opened for user root by root\nJul 06 00:31:55 ns1.localhost.com polkitd[458]: Registered Authentication Agent for unix-process:561232:1926\nJul 06 00:31:55 ns1.localhost.com systemd[1]: Starting MongoDB Database Server...\n-- Subject: Unit mongod.service has begun start-up\n-- Defined-By: systemd\n-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel\n--\n-- Unit mongod.service has begun starting up.\nJul 06 00:31:55 ns1.localhost.com kernel: traps: mongod[561243] trap invalid opcode ip:55b5947535da sp:7ffd0\nJul 06 00:31:55 ns1.localhost.com systemd[1]: mongod.service: control process exited, code=killed status=4\nJul 06 00:31:55 ns1.localhost.com systemd[1]: Failed to start MongoDB Database Server.\n-- Subject: Unit mongod.service has failed\n-- Defined-By: systemd\n-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel\n--\n-- Unit mongod.service has failed.\n--\n-- The result is failed.\nJul 06 00:31:55 ns1.localhost.com systemd[1]: Unit mongod.service entered failed state.\nJul 06 00:31:55 ns1.localhost.com systemd[1]: mongod.service failed.\nJul 06 00:31:55 ns1.localhost.com polkitd[458]: Unregistered Authentication Agent for unix-process:561232:19\nJul 06 00:31:55 ns1.localhost.com sudo[561230]: pam_unix(sudo:session): session closed for user root\nJul 06 00:31:55 ns1.localhost.com systemd-logind[459]: Removed session c15063.\n-- Subject: Session c15063 has been terminated\n-- Defined-By: systemd\n-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel\n",
"text": "I installe mongodb following this tutorial https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-red-hat/. It gives an error when I try to start the service. can you help me ?The errors I get when I run journalctl -xe.",
"username": "Ibrahim_COBAN"
},
{
"code": "Jul 06 00:31:55 ns1.localhost.com kernel: traps: mongod[561243] trap invalid opcode ip:55b5947535da sp:7ffd0\nmongodx86_64x86_64x86_64mongodmongosmongox86_64",
"text": "Your hardware doesn’t support the version of mongod you are attempting to run.x86_64MongoDB requires the following minimum x86_64 microarchitectures: [3]Starting in MongoDB 5.0, mongod , mongos , and the legacy mongo shell no longer support x86_64 platforms which do not meet this minimum microarchitecture requirement.",
"username": "Jack_Woehr"
},
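On a Linux host you can quickly check whether the CPU (or the virtualized CPU model) exposes AVX, which the minimum microarchitectures quoted above all provide — a sanity check, not an official MongoDB tool:

```sh
# list the AVX-related CPU flags; empty output means the CPU model lacks AVX,
# a common reason for the "trap invalid opcode" seen in the journal above
grep -o 'avx[0-9_]*' /proc/cpuinfo | sort -u
```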
{
"code": "",
"text": "Hello misterHad you resolv the problem with mongodb?Best regards.",
"username": "Jose_Manuel"
}
]
| Job for mongod.service failed because a fatal signal was delivered to the control process | 2022-07-06T21:16:09.203Z | Job for mongod.service failed because a fatal signal was delivered to the control process | 6,676 |
null | []
| [
{
"code": "",
"text": "http://ssscmsapi.samparksmartshala.org:8082/apis/activeusers?date=2020-08-07&daycount=7\nIs it possible to store this data directly into atlas? (apart from doing it manually)\nCan it be automated using App Services?\nIs there a way atlas can pull this data from the link and store it in existing database?\nPlease reach out, It is a little hard to explain.Thank you",
"username": "Mayank_Rana"
},
{
"code": "",
"text": "Please provide closure on your other thread:Is it possible to store this data directly into atlas?On unix based systems, it is trivial.You configure a cron job that does curl of the link you supplied to standard output that is redirected to mongoimport that uses standard input when --file is not specified.",
"username": "steevej"
},
{
"code": "",
"text": "Any sample project where this has been implemented? for reference.\nThanks",
"username": "Mayank_Rana"
},
{
"code": "",
"text": "Sorry, I never had to implement such a thing.",
"username": "steevej"
}
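Since no sample project turned up, a minimal sketch of the crontab approach described earlier (the URL is the one from the question; the connection string, database and collection names are placeholders, and --jsonArray assumes the endpoint returns a JSON array):

```sh
#!/bin/sh
# import-activeusers.sh — pull the JSON and load it into Atlas;
# mongoimport reads standard input because --file is not given
curl -s 'http://ssscmsapi.samparksmartshala.org:8082/apis/activeusers?date=2020-08-07&daycount=7' \
  | mongoimport --uri 'mongodb+srv://<user>:<password>@cluster0.example.mongodb.net/mydb' \
      --collection activeusers --jsonArray

# example crontab entry to run it hourly:
# 0 * * * * /path/to/import-activeusers.sh
```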
]
| Storing data from an API directly into Atlas | 2022-12-15T10:40:05.967Z | Storing data from an API directly into Atlas | 1,339 |
null | [
"node-js",
"mongoose-odm"
]
| [
{
"code": "serverSelectionTimeoutMS150003000015000MongooseServerSelectionError",
"text": "The nodejs driver default serverSelectionTimeoutMS is 30s. But for some scenarios, like inside AWS Lambda, waiting 30s is not ideal.\nAccording to this documentation we should set it to 15s:Lower the serverSelectionTimeoutMS to 15000 from the default of 30000 . MongoDB elections typically take 10 seconds, but can be as fast as 5 seconds on Atlas. Setting this value to 15 seconds ( 15000 milliseconds) covers the upper bound of election plus additional time for latency.(also is strange that this is only mentioned for the C driver … while I would expect to be a setting mostly unrelated for the driver but related to the server instead)Mongoose documentation instead suggest to use 5s for Lambda. But I have noticed some random connection timeout error (MongooseServerSelectionError) when using 5s with an Atlas cluster.Can you please confirm that 15s should be a good default value for Atlas connections?",
"username": "Davide_Icardi"
},
{
"code": "150003000015000MongooseServerSelectionErrorserverSelectionTimeoutserverSelectTryOnce",
"text": "Hi @Davide_Icardi,Welcome to the community According to this documentation we should set it to 15s:\nLower the serverSelectionTimeoutMS to 15000 from the default of 30000 . MongoDB elections typically take 10 seconds, but can be as fast as 5 seconds on Atlas. Setting this value to 15 seconds ( 15000 milliseconds) covers the upper bound of election plus additional time for latency.The 15000ms setting mentioned in the docs is only applicable to some drivers (notably C in single threaded mode) but not others, so please follow the instructions for the applicable drivers if available. More specifically, the 15000ms value provided is to allow for less interruptions between replica set failovers for certain drivers in single thread mode.Mongoose documentation instead suggest to use 5s for Lambda. But I have noticed some random connection timeout error ( MongooseServerSelectionError ) when using 5s with an Atlas cluster.Mongoose’s connection settings (including the serverSelectionTimeout & serverSelectTryOnce ) should follow the recommendation for the node.js driver.In addition to the above, you may find the Best Practices Connecting from AWS Lambda documentation helpful which also notes the below recommendation:Define the client to the MongoDB server outside the AWS Lambda handler function.\nDon’t define a new MongoClient object each time you invoke your function. Doing so causes the driver to create a new connection pool with each function call.Lastly, depending on your use case and environment, you may wish to consider using the Atlas Data API which allows for read / writes using simple REST calls. However, please note that this is not yet GA and is currently in preview.Hope this helpsRegards,\nJason",
"username": "Jason_Tran"
},
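A sketch of those recommendations for the Node.js driver (the 15000 here is purely illustrative — pick whatever timeout matches your own failure handling, as discussed in this thread; the URI and names are placeholders):

```js
const { MongoClient } = require("mongodb");

// created once, outside the handler, so the connection pool is reused
// across warm Lambda invocations
const client = new MongoClient(process.env.MONGODB_URI, {
  serverSelectionTimeoutMS: 15000,
});

exports.handler = async (event) => {
  await client.connect(); // no-op once already connected
  const items = client.db("app").collection("items");
  return items.findOne({ _id: event.id });
};
```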
{
"code": "",
"text": "Thank you @Jason_Tran for the info.But I’m not sure to have understand what value should I use for serverSelectionTimeoutMS, an Atlas cluster and a nodejs driver running indide AWS Lambda.Do you suggest to use 5s as written by Mongoose or leave the default 30s?",
"username": "Davide_Icardi"
},
{
"code": "",
"text": "Hi @Davide_Icardi,Do you suggest to use 5s as written by Mongoose or leave the default 30s?Other than the default value of 30s, there’s not really a “recommended” value for serverSelectionTimeout. There are use cases that allows for a lower number if a client wants to fail fast and throw an error if it cannot see a suitable server within 30s, and vice versa there are also use cases that needs a higher number to ensure a connection is made to the servers and the client is willing to wait.Thus if you find that you want to fail fast and not wait 30s before the driver throws an error, you may be able to set a lower number. This number would be different for different use cases, so you should try different numbers according to your specific circumstances (e.g. network latency, your topology, your tolerance to drivers throwing connection errors, etc.)Regards,\nJason",
"username": "Jason_Tran"
},
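To make the discussion above concrete, here is a minimal Node.js sketch that sets serverSelectionTimeoutMS and keeps the client outside the Lambda handler, as the Best Practices page recommends. The URI, database and collection names are placeholders, and 15000 ms is just one possible value, not an official recommendation.

```js
// Hypothetical handler; MONGODB_URI, "test" and "items" are placeholders.
const { MongoClient } = require("mongodb");

// Construct the client once, outside the handler, so warm invocations reuse the connection pool.
const client = new MongoClient(process.env.MONGODB_URI, {
  serverSelectionTimeoutMS: 15000, // fail faster than the 30000 ms default; tune to your own tolerance
});

exports.handler = async () => {
  // The driver connects lazily on the first operation.
  return client.db("test").collection("items").findOne({});
};
```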
{
"code": "",
"text": "cani set this number to NUMBER.MAX_VALUE, to make sure my client keeps trying to connect to my db server when it gets disconnected?",
"username": "Ramesh_Ravi"
},
{
"code": "",
"text": "Hi @Ramesh_Ravi\nFrom my experience as a mongodb user, I don’t think it is a good idea to use very large timeout, because it can hide the connection problems with other timeouts.\nFor example: most http server/api gateway as some kind of timeout (30, 60, …). If you set the serverSelectionTimeoutMS larger than that timeout you can reach it without really understanding what is going on. I prefer to see a mongodb connection error, and eventually handle some kind of retry, that just receive a generic timeout or worst a very very long operation that “never” end.",
"username": "Davide_Icardi"
}
]
| What is the recommended value for the serverSelectionTimeoutMS setting for Atlas? | 2022-02-28T11:20:46.184Z | What is the recommended value for the serverSelectionTimeoutMS setting for Atlas? | 7,383 |
[
"python"
]
| [
{
"code": "",
"text": "\nScreenshot from 2022-12-18 11-02-001254×591 85 KB\nThe above question specifies to get a recipe for cookies without chocolate right?",
"username": "showrya_D"
},
{
"code": "",
"text": "The above question specifies to get a recipe for cookies without chocolate right?Yes it does and the answer marked as correct is wrong. What is funny is that it is exactly words for words the same 5 steps as the first answer. They correctly mentioned $nin in the description but they used $all in the steps.Good catch!",
"username": "steevej"
},
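For readers landing here, a small PyMongo sketch of the difference between the two operators mentioned above; the collection and field names ("recipes", "ingredients") are made up for illustration.

```python
from pymongo import MongoClient

recipes = MongoClient()["cookbook"]["recipes"]

# $nin: documents whose "ingredients" array contains none of the listed values
no_chocolate = recipes.find({"ingredients": {"$nin": ["chocolate"]}})

# $all: documents whose "ingredients" array contains every listed value (the opposite intent)
must_have_chocolate = recipes.find({"ingredients": {"$all": ["chocolate"]}})
```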
{
"code": "",
"text": "Hey @showrya_D,Thanks for highlighting this. We will forward this to the concerned team and will keep you updated.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Associate Developer Python Exam Topics - Incorrect Answer | 2022-12-18T05:34:00.158Z | MongoDB Associate Developer Python Exam Topics - Incorrect Answer | 1,818 |
|
null | [
"replication",
"connecting",
"sharding",
"containers"
]
| [
{
"code": "",
"text": "Hi,\nFrom time to time one or two of my mongos instances gets into the state where it can’t connect replica set:numYields:0 ok:0 errMsg:“Encountered non-retryable error during query :: caused by :: Couldn’t get a connection within the time limit” errName:NetworkInterfaceExceededTimeLimit errCode:202 reslen:342 protocol:op_msg 20038msAfter restart of the mongos everything is fine again. Do you have idea what may be cause of that ?Here is version of my mongo installation:\n[mongosMain] mongos version v4.2.3\n[mongosMain] db version v4.2.3\n[mongosMain] git version: 6874650b362138df74be53d366bbefc321ea32d4\n[mongosMain] OpenSSL version: OpenSSL 1.0.2j-fips 26 [mongosMain] allocator: tcmalloc\n[mongosMain] modules: none\n[mongosMain] build environment:\n[mongosMain] distmod: suse12\n[mongosMain] distarch: x86_64\n[mongosMain] target_arch: x86_64",
"username": "Piotr_Tajdus"
},
{
"code": "",
"text": "Encountered similar issue for couple of mongos after upgrading to 4.0. I would also like to know what caused this and how to fix .",
"username": "Sudheer_Palempati"
},
{
"code": "ShardingTaskExecutorPoolMinSizeShardingTaskExecutorPoolMaxConnecting",
"text": "I still don’t know what is causing it. It doesn’t happen on router which is on the same machine as primary node so probably something with network. I will try to play with ShardingTaskExecutorPoolMinSize and ShardingTaskExecutorPoolMaxConnecting parameters.Here is fragment of my log with network debug when such problem happens:2020-05-13T13:21:18.631+0200 D3 NETWORK [ReplicaSetMonitor-TaskExecutor] Updating 10.122.129.44:27018 lastWriteDate to 2020-05-13T13:21:16.000+0200\n2020-05-13T13:21:18.631+0200 D3 NETWORK [ReplicaSetMonitor-TaskExecutor] Updating 10.122.129.44:27018 opTime to { ts: Timestamp(1589368876, 1), t: 3 }\n2020-05-13T13:21:18.631+0200 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set crkid took 0ms\n2020-05-13T13:21:18.631+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Returning ready connection to 10.122.129.44:27018\n2020-05-13T13:21:18.631+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Updating controller for 10.122.129.44:27018 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:18.631+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Comparing connection state for 10.122.129.44:27018 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Updating controller for 10.122.129.45:27019 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Comparing connection state for 10.122.129.45:27019 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Updating controller for 10.122.129.44:27018 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Comparing connection state for 10.122.129.44:27018 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Updating controller for 10.122.129.44:27019 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Comparing connection state for 10.122.129.44:27019 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Updating controller for 10.122.129.43:27019 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Comparing connection state for 10.122.129.43:27019 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:18.903+0200 D4 CONNPOOL [TaskExecutorPool-0] Updating controller for 10.122.129.44:27018 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: true }\n2020-05-13T13:21:18.903+0200 D4 CONNPOOL [TaskExecutorPool-0] Comparing connection state for 10.122.129.44:27018 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:19.071+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Updating controller for 10.122.129.43:27018 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:19.071+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Comparing connection state for 10.122.129.43:27018 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:19.107+0200 D4 CONNPOOL [ShardRegistry] Updating controller for 10.122.129.44:27019 with State: { requests: 0, ready: 
1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:19.107+0200 D4 CONNPOOL [ShardRegistry] Comparing connection state for 10.122.129.44:27019 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:19.107+0200 D4 CONNPOOL [ShardRegistry] Updating controller for 10.122.129.45:27019 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:19.107+0200 D4 CONNPOOL [ShardRegistry] Comparing connection state for 10.122.129.45:27019 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:19.107+0200 D4 CONNPOOL [ShardRegistry] Updating controller for 10.122.129.43:27019 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:19.107+0200 D4 CONNPOOL [ShardRegistry] Comparing connection state for 10.122.129.43:27019 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:19.252+0200 D2 ASIO [TaskExecutorPool-0] Failed to get connection from pool for request 19195893: NetworkInterfaceExceededTimeLimit: Couldn’t get a connection within the time\nlimit\n2020-05-13T13:21:19.252+0200 D2 ASIO [TaskExecutorPool-0] Failed to get connection from pool for request 19195894: NetworkInterfaceExceededTimeLimit: Couldn’t get a connection within the time\nlimit\n2020-05-13T13:21:19.252+0200 I NETWORK [conn1969] Marking host 10.122.129.43:27018 as failed :: caused by :: NetworkInterfaceExceededTimeLimit: Couldn’t get a connection within the time limit\n2020-05-13T13:21:19.252+0200 I COMMAND [conn1969] command crkid-prod.crkid_dokument_status command: update { update: “crkid_dokument_status”, ordered: true, txnNumber: 4, $db: “crkid-prod”, $clu\nsterTime: { clusterTime: Timestamp(1589368859, 22), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, lsid: { id: UUID(“f14f5162-2f43-4788-92b8-db2d0b11c46c”)\n} } nShards:1 nMatched:0 nModified:0 numYields:0 reslen:407 protocol:op_msg 19999ms\n2020-05-13T13:21:19.252+0200 I COMMAND [conn3081] command crkid-prod.crkid_dokument_status command: findAndModify { findAndModify: “crkid_dokument_status”, query: { _id: “CRKID#WPL.2019.01.10.00\n4869” }, new: false, update: { $set: { synced: true } }, txnNumber: 18, $db: “crkid-prod”, $clusterTime: { clusterTime: Timestamp(1589368859, 22), signature: { hash: BinData(0, 0000000000000000000\n000000000000000000000), keyId: 0 } }, lsid: { id: UUID(“e791e4a7-afd3-4c7e-9144-fd5248c50047”) } } numYields:0 ok:0 errMsg:“Couldn’t get a connection within the time limit” errName:NetworkInterfac\neExceededTimeLimit errCode:202 reslen:281 protocol:op_msg 19999ms\n2020-05-13T13:21:19.253+0200 D2 ASIO [TaskExecutorPool-0] Failed to get connection from pool for request 19195895: NetworkInterfaceExceededTimeLimit: Couldn’t get a connection within the time\nlimit\n2020-05-13T13:21:19.253+0200 D2 ASIO [TaskExecutorPool-0] Failed to get connection from pool for request 19195896: NetworkInterfaceExceededTimeLimit: Couldn’t get a connection within the time\nlimit",
"username": "Piotr_Tajdus"
},
{
"code": "",
"text": "I am also encountering same error : errName:NetworkInterfaceExceededTimeLimit errCode:202",
"username": "Aayushi_Mangal"
},
{
"code": "",
"text": "I have upgraded mongo to 4.2.6 and changed size of connection pool and it seems it helped:taskExecutorPoolSize: 0\nShardingTaskExecutorPoolMinSize: 10\nShardingTaskExecutorPoolMaxConnecting: 5",
"username": "Piotr_Tajdus"
},
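For reference, these are server parameters, so they can be set in the mongos configuration file (or with --setParameter on the command line). A sketch with the values Piotr reports; they are not a general recommendation and should be tuned per deployment.

```yaml
# mongos configuration file (YAML)
setParameter:
  taskExecutorPoolSize: 0
  ShardingTaskExecutorPoolMinSize: 10
  ShardingTaskExecutorPoolMaxConnecting: 5
```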
{
"code": "",
"text": "hi,\nwe run version 4.4.1 on 90 shards of each 3 “mongod” servers each; we encounter many ( ! ) of these errors when we load even a little bit of data.\n@Piotr : with these pool-settings, how many shards can you run ‘stable’ ?Are there anywhere suggested settings documented for a large cluster setup ?",
"username": "Rob_De_Langhe"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongos Couldn't get a connection within the time limit | 2022-12-14T18:13:18.617Z | Mongos Couldn’t get a connection within the time limit | 1,804 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "{\n \"_id\": {\n \"$oid\": \"6390fd9ddb26129a8832e330\"\n },\n \"name\": \"Sabarirajan\",\n \"username\": \"testuser\",\n \"schoolId\": \"639092e21a1df07c4664eb46\",\n \"grade\": 7,\n \"type\": \"user\",\n \"email\": \"[email protected]\",\n \"email1\": \"[email protected]\",\n \"email2\": \"[email protected]\",\n \"mobile\": {\n \"$numberLong\": \"9840204606\"\n },\n \"mobile1\": {\n \"$numberLong\": \"9840204606\"\n },\n \"section\": \"ab\",\n \"birthdate\": \"2022-01-01\",\n \"uid\": \"5d04e5d6-e533-462d-bf26-4fac7d1982c4\",\n \"createdOn\": \"b7665281-c34a-4b9c-a0db-764d89e600f5\",\n \"projectDetails\": [\n {\n \"projectId\": {\n \"$oid\": \"6370d25ad4baaabdef7df774\"\n },\n \"groupId\": {\n \"$oid\": \"636f2ae52fe088dd1654e3be\"\n }\n },\n {\n \"projectId\": {\n \"$oid\": \"6370d27cd4baaabdef7df775\"\n },\n \"groupId\": {\n \"$oid\": \"636f2bc32fe088dd1654e3c0\"\n }\n }\n ]\n}\n{\n \"_id\": {\n \"$oid\": \"6370d25ad4baaabdef7df774\"\n },\n \"name\": \"Project-1\",\n \"id\": 1,\n \"modifiedBy\": \"b7665281-c34a-4b9c-a0db-764d89e600f5\",\n \"modifiedOn\": {\n \"$date\": {\n \"$numberLong\": \"1670753074882\"\n }\n },\n \"title\": \"Grade -1 (test)\",\n \"displayName\": \"Grade -1 (test)\",\n \"type\": \"okjlkj\",\n \"groups\": [\n {\n \"$oid\": \"636f2ae52fe088dd1654e3be\"\n },\n {\n \"$oid\": \"636f2bc32fe088dd1654e3c1\"\n }\n ]\n}\n[{\n \"_id\": {\n \"$oid\": \"636f2ae52fe088dd1654e3be\"\n },\n \"id\": 1,\n \"name\": \"Group-1\"\n},{\n \"_id\": {\n \"$oid\": \"636f2bc32fe088dd1654e3c0\"\n },\n \"id\": 2,\n \"name\": \"Group-2\"\n}]\n",
"text": "Student CollectionProject CollectionGroup Collectioni have 3 collection student,project,groupnotes:\nOne project have multiple groups.\nOne student have multiple projects but for one project he have only one group.i expecting output as following{\nstudentdetail…,\nprojectDetails:[\nproject:{\nprojectName:“project-1”,\ngroup:{groupName:“group-1”}\n}\nproject:{\nprojectName:“project-2”,\ngroup:{groupName:“group-2”}\n}\n]\n}",
"username": "93905812f2cee01f65f8fd01923d3b8"
},
{
"code": "",
"text": "Thanks for sharing sample documents that we can cut-n-paste.Could you please share what you have tried and explain to us how it fails to provide the expected results. This would save us a lot of time as we won’t spend time investigating in a direction that you already know is wrong. Sometimes we can just point at a small detail that you have wrong.Having field names email, email1, email2, mobile and mobile1 is a bad schema. Arrays exist for a reason. The attribute pattern exists for a reason. Field names like this are reminiscence of old SQL when arrays were invented decades ago but not possible in early SQL.Another schema no-no, is date as strings. Dates as Date uses less space, are faster and provides a rich API.",
"username": "steevej"
},
{
"code": "[\n {\n '$match': {\n '_id': new ObjectId('6390fd9ddb26129a8832e330')\n }\n }, {\n '$lookup': {\n 'from': 'project', \n 'localField': 'projectDetails.projectId', \n 'foreignField': '_id', \n 'as': 'projects'\n }\n }, {\n '$lookup': {\n 'from': 'group', \n 'localField': 'projectDetails.groupId', \n 'foreignField': '_id', \n 'as': 'groups'\n }\n }, {\n '$project': {\n 'projects.groups': 0\n }\n }\n] \n{\n \"_id\": {\n \"$oid\": \"6390fd9ddb26129a8832e330\"\n },\n \"name\": \"Student Name\",\n \"uid\": \"5d04e5d6-e533-462d-bf26-4fac7d1982c4\",\n \"projects\": [\n {\n \"_id\": {\n \"$oid\": \"6370d27cd4baaabdef7df775\"\n },\n \"name\": \"Project-2\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"6370d25ad4baaabdef7df774\"\n },\n \"name\": \"Project-1\",\n }\n ],\n \"groups\": [\n {\n \"_id\": {\n \"$oid\": \"636f2bc32fe088dd1654e3c0\"\n },\n \"id\": 2,\n \"name\": \"Group-2\"\n },\n {\n \"_id\": {\n \"$oid\": \"636f2ae52fe088dd1654e3be\"\n },\n \"id\": 1,\n \"name\": \"Group-1\"\n }\n ]\n}\n\"projectDetails\": [\n {\n \"projectId\": {\n \"$oid\": \"6370d25ad4baaabdef7df774\"\n },\n \"groupId\": {\n \"$oid\": \"636f2ae52fe088dd1654e3be\"\n }\n }\n]\n{\n//student details and projectdetails with projects and its single group details inside project json\n \"projects\": [\n {\n \"projectId\": {\n \"$oid\": \"6370d25ad4baaabdef7df774\"\n },\n \"name\": \"project-1\",\n \"group\": {\n \"_id\": {\n \"$oid\": \"636f2ae52fe088dd1654e3be\"\n },\n \"name\": \"group-1\"\n }\n }\n ]\n}\n",
"text": "@steevej , I am very new to MongoDB and I will change my schema as email and mobile array. and I will save date as Date. thank you very much for your advise that really helps.I tried something like this (i got projects and group array separately) but i need only one group details inside a project json , groupId that exists in “projectdetails” arrayOutput:I have project assignment details inside student collection like below. using i need to get projects and its groupI need output asstudent data…",
"username": "93905812f2cee01f65f8fd01923d3b8"
},
{
"code": "",
"text": "I do not understand.In collection students, the array projectDetails has 2 entries with each a projectId and groupId. In the expected result you only have project with name:project-1.How do you determine which project from the 2 projectDetails you want?",
"username": "steevej"
}
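For anyone with the same question: assuming each projectDetails entry pairs one projectId with the one groupId the student belongs to, a sketch along these lines can embed the matching group inside each project. The collection names ("students", "projects", "groups") are assumptions based on the description.

```js
db.students.aggregate([
  { $unwind: "$projectDetails" },
  { $lookup: { from: "projects", localField: "projectDetails.projectId", foreignField: "_id", as: "projectDoc" } },
  { $unwind: "$projectDoc" },
  { $lookup: { from: "groups", localField: "projectDetails.groupId", foreignField: "_id", as: "groupDoc" } },
  { $unwind: "$groupDoc" },
  { $group: {
      _id: "$_id",
      name: { $first: "$name" },
      projects: { $push: {
        projectId: "$projectDoc._id",
        projectName: "$projectDoc.name",
        group: { _id: "$groupDoc._id", groupName: "$groupDoc.name" }
      } }
  } }
])
```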
]
| Multiple collection join with inner nested join | 2022-12-11T11:54:31.643Z | Multiple collection join with inner nested join | 2,049 |
null | [
"database-tools",
"containers"
]
| [
{
"code": "",
"text": "Hi folks, just wanted to see if I’m not doing something terrible here. From a performance perspective does mongoexport provide any benefits over just connecting to a collection via a driver and iterating over it to save the data?I’m trying to automate the export of some collections (not a full dump) of my database, and I could just easily do so using my own app code to create a job (instead of having to modify my docker image to include mongo export)What is the preferred method in this situation? Is the driver based solution is ok?Thanks folks",
"username": "Vinicius_Carvalho"
},
{
"code": "",
"text": "I like to use language drivers. I’m partial to Python. Whatever works for you.",
"username": "Jack_Woehr"
},
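A minimal sketch of the driver-based approach Jack describes, in Python; the connection string, database/collection names and output file are placeholders. bson.json_util writes Extended JSON, which is the same format mongoexport produces.

```python
from pymongo import MongoClient
from bson.json_util import dumps  # Extended JSON keeps ObjectId/Date type information

client = MongoClient("mongodb://localhost:27017")
collection = client["mydb"]["widgets"]

with open("widgets.json", "w") as out:
    for doc in collection.find({}):
        out.write(dumps(doc) + "\n")  # one Extended JSON document per line
```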
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongo export vs language based export | 2022-12-18T16:09:14.328Z | Mongo export vs language based export | 1,223 |
null | [
"replication"
]
| [
{
"code": ".admin().command({ replSetReconfig: rsConfig, force: force }, {})\n",
"text": "Hey\nI’m trying to automate deployment of mongodb, but i keep running around i cockles regarding replicasets.\nIf i deploy 3 instances using replicaset and and 3 nodes using localhostnames\nI then expose those using traefic to add TLS and route 3 host names into each of the 3 internal host.\nBut no matter what i do it keep forcing my client to try and connect to the INTERAL host names\nIf i try and configure the replicaset using the external host names, it keeps failing with an error similar to “host not in host list” … I assume this id due todoes not know how to connect using TLSIf i add the external domains to --bind_ip … mongodb crashes with an error similar to “refuses to listen to address” … From what i could understand it’s simply resolving the host name and try to listen to the external IP witch ofc will never work …\nI then tried “hacking” it using the guide from https://www.mongodb.com/docs/manual/tutorial/change-hostnames-in-a-replica-set/ by updating db.system.replset but then mongod goes into a weird state when it no longer things it’s part of the replica set, and data is not replicated.\nSo how do i make replicasets stop sending internal host names OR tell the replicaset to use TLS when validating the config ?\nAnd no, assigning public IP’s to each host is not an option.\nAnd only publishing “the master” is not an option, since ( as far as i know ) there is no way for kubernetes to know what host is the primary at any given moment",
"username": "Allan_Zimmermann"
},
{
"code": "*facepalm*",
"text": "So i got a lot of things wrong here.\nand most embarrassingly it turns out the main issue I was having issues in all my test of different things, was the fact i had all my IngressRouteTCP’s pointing to the headless service and not a dedicated service it was intended.\nSo when testing with external hostnames i would constantly get errors about host not being in config or host x1 and x0 being the same and so on.\nSo starting mongod with --replSet rs0 and --tlsMode preferTLS works as long as requests always go to the correct mongodb host *facepalm*\nKey is using --tlsMode preferTLS this tells mongod to connect to other members using tls, but other clients can connect without.",
"username": "Allan_Zimmermann"
}
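For readers who prefer a configuration file over command-line flags, the equivalent settings look roughly like this; the certificate paths are placeholders and the rest of a production TLS setup (internal auth, CA management) is out of scope here.

```yaml
replication:
  replSetName: rs0
net:
  bindIp: 0.0.0.0
  tls:
    mode: preferTLS                # members use TLS between themselves; clients may connect without TLS
    certificateKeyFile: /etc/ssl/mongo.pem
    CAFile: /etc/ssl/ca.pem
```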
]
| How do we use external fqdn's with replicaset | 2022-12-12T20:21:19.013Z | How do we use external fqdn’s with replicaset | 1,428 |
null | [
"node-js",
"replication",
"mongodb-shell"
]
| [
{
"code": "mongodb://mongo-0.demo3.domain.com:27017/dbmongodb://mongo-1.demo3.domain.com:27017/dbmongodb://mongo-0.demo3.domain.com:27017,mongo-1.demo3.domain.com:27017/db?replicaSet=rs0",
"text": "Hey\nHow do we make mongosh and the nodejs driver respect the connection string we give it ?\nexample, if i have a replica set named rs0 and it works locally, and i can connect using mongodb://mongo-0.demo3.domain.com:27017/db and mongodb://mongo-1.demo3.domain.com:27017/db\nthen why does mongodb://mongo-0.demo3.domain.com:27017,mongo-1.demo3.domain.com:27017/db?replicaSet=rs0 not work? How do we make the clients stop using internal hostnames and use the hostnames given in the connection string",
"username": "Allan_Zimmermann"
},
{
"code": "db.hello().hosts",
"text": "When a client connects to a member of the seed list, the client retrieves a list of replica set members it can connect to. Clients often use DNS aliases in their seed lists which means the host may return a server list that differs from the original seed list. If this happens, clients will use the hostnames provided by the replica set rather than the hostnames listed in the seed list to ensure that replica set members can be reached via the hostnames in the resulting replica set config.When creating a replicaset the hostnames should be the FQDN that any client or other replicaset member can resolve and connect to.But as long as the hosts returned by db.hello().hosts are resolvable and connectable it will work as expected.",
"username": "chris"
},
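A quick way to check this from mongosh; if the hostnames below are internal-only names, the replica set configuration itself has to be updated (via rs.reconfig()) to names every client can resolve.

```js
db.hello().hosts                     // the hostnames the server hands back to every client
rs.conf().members.map(m => m.host)   // the hostnames stored in the replica set configuration
```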
{
"code": "",
"text": "This post was made in frustration over no resolution to this question\nBut i finally made it work. Update in original question",
"username": "Allan_Zimmermann"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to make mongosh / nodejs driver respect connection string | 2022-12-17T11:17:40.855Z | How to make mongosh / nodejs driver respect connection string | 1,733 |
null | []
| [
{
"code": "[\n{\n _id: ObjectId(\"62aafb10e9123be010871280\"),\n task_id: ObjectId(\"62aad470a743c5d36362fc2f\"),\n comments: [\n {\n user_id: ObjectId(\"627e4b14c35e9de228efe63b\"),\n comment: \"Tested this feature in Staging and it's working as expected\",\n _id: ObjectId(\"62aafb10a743c5d3636303c5\"),\n created_at: ISODate(\"2022-06-16T09:42:40.619Z\")\n }\n ]\n },\n {\n _id: ObjectId(\"62aafbafe9123be0108712ed\"),\n task_id: ObjectId(\"62aafb5da743c5d3636303d8\"),\n comments: [\n {\n user_id: ObjectId(\"627e4b14c35e9de228efe63b\"),\n comment: \"Tested this feature in staging and it's working as expected\",\n _id: ObjectId(\"62aafbafa743c5d363630413\"),\n created_at: ISODate(\"2022-06-16T09:45:19.732Z\")\n }\n ]\n },\n]\n[\n{\n _id: ObjectId(\"62aafb10e9123be010871280\"),\n project_id: ObjectId(\"6279fbd969d9ec50ca2e274a\"),\n task_id: ObjectId(\"62aad470a743c5d36362fc2f\"),\n __v: 0,\n comments: [\n {\n user_id: ObjectId(\"627e4b14c35e9de228efe63b\"),\n comment: {markup: \"Tested this feature in staging and it's working as expected\", text: \"Tested this feature in staging and it's working as expected\"},\n _id: ObjectId(\"62aafb10a743c5d3636303c5\"),\n created_at: ISODate(\"2022-06-16T09:42:40.619Z\")\n }\n ]\n },\n {\n _id: ObjectId(\"62aafbafe9123be0108712ed\"),\n project_id: ObjectId(\"6279fbd969d9ec50ca2e274a\"),\n task_id: ObjectId(\"62aafb5da743c5d3636303d8\"),\n __v: 0,\n comments: [\n {\n user_id: ObjectId(\"627e4b14c35e9de228efe63b\"),\n comment: {markup: \"Tested this feature in staging and it's working as expected\", text: \"Tested this feature in staging and it's working as expected\"},\n _id: ObjectId(\"62aafbafa743c5d363630413\"),\n created_at: ISODate(\"2022-06-16T09:45:19.732Z\")\n }\n ]\n },\n]\n",
"text": "I have a collection like thisand I want to make it like this",
"username": "sai_reddy"
},
{
"code": "$setdb.collection.update({ }, /* <---- selects all docs */\n{\n \"$set\": {/* think of it as \"add a field\" */\n \"comments.$[].comment\": {\n \"key\": \"value\" /* Unable to use $$ stuff */\n }\n }\n},\n{\n multi: true /* enable multiple updates */\n})\nmap$map$mergeObjectsdb.collection.update({},\n[\n {\n \"$addFields\": {\n \"comments\": { /* will add this field i.e overwrite it */ \n \"$map\": {\n input: \"$comments\",// select the field we map over (so here we have the array)\n as: \"c\",//name for the current item, could be \"thiscomment\" or anything else.\n in: {//trick comes below, we merge 2 objects (current item \"comment\" gets overwritten)\n \"$mergeObjects\": [\n \"$$c\",\n {\n \"comment\": {\n \"markup\": \"$$c.comment\",\n \"text\": \"$$c.comment\"\n }\n }\n ]\n }\n }\n }\n }\n }\n],\n{\n multi: true// modifies all matched docs (all in this case)\n})\n",
"text": "I am not an expert. This may be a start. I am adding first the approach that didn’t work as well, because that is how I like to be explained.After thinking a bit more I realized that we want to map the old array to new array. We can use all this tooling in a pipeline.A pipeline of a single stage isn’t great but this will allow to run a $map, and also using $mergeObjects. These two are powerful tools.",
"username": "santimir"
},
{
"code": "",
"text": "this doesn’t solve my problem, I want the values of the comment which are “Tested this feature in staging and it’s working as expected” (these are dynamic values to different docs), to be in a new object like ({markup\"Tested this feature in staging and it’s working as expected\"})",
"username": "sai_reddy"
},
{
"code": "db.comments.find({comments:{\"$ne\":[]}}).forEach(function(a){ a.comments.forEach(function (b,i){b.comment = b.comment.replace(b.comment,{\"markup\":b.comment, \"text\":b.comment}) })})",
"text": "i came up with this db.comments.find({comments:{\"$ne\":[]}}).forEach(function(a){ a.comments.forEach(function (b,i){b.comment = b.comment.replace(b.comment,{\"markup\":b.comment, \"text\":b.comment}) })}) but no use",
"username": "sai_reddy"
},
{
"code": "",
"text": "Did you click on the second link that I included ? i.e Mongo playground",
"username": "santimir"
},
{
"code": "",
"text": "Sorry, I missed the second link, it worked.Thank You santimir.",
"username": "sai_reddy"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Change string to object with string data | 2022-12-17T09:34:32.510Z | Change string to object with string data | 2,045 |
null | [
"node-js",
"data-modeling",
"polymorphic-pattern"
]
| [
{
"code": "",
"text": "Hey there,I am building a Job portal web application and I need some help regarding to MongoDB schema design.\nCurrently in the process of designing the schema.I have different type of users in my application and they all have different properties, and will have different data and behavior in the application.The questions is how to design the model with different users. And should I have one user model and include different type of users inside.\nAny help, suggestions will be appreciated Thanks in advance,\nAnton",
"username": "Anton_Papazyan"
},
{
"code": "",
"text": "Building with Patterns: The Polymorphic Pattern | MongoDBWhen all documents in a collection are of similar, but not identical, structure, we call this the Polymorphic Patterntry to find examples related to this name.",
"username": "Yilmaz_Durmaz"
}
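A small sketch of what the Polymorphic Pattern can look like for this case: one collection for all users, with a discriminator field, so different user types can carry different fields. The field names are illustrative only.

```js
db.users.insertMany([
  { userType: "candidate", name: "Ann", skills: ["node", "mongodb"], resumeUrl: "https://example.com/cv.pdf" },
  { userType: "employer",  name: "Acme Corp", companySize: 50, openPositions: 3 }
])

// Queries can target one type, or span all of them:
db.users.find({ userType: "candidate", skills: "mongodb" })
```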
]
| Schema Design (Different type of users) | 2022-12-17T12:13:55.780Z | Schema Design (Different type of users) | 1,344 |
null | [
"queries",
"node-js"
]
| [
{
"code": "",
"text": "This is my sample db\n[\n{\n‘field1’:‘val1’,\n‘field2’: ‘val1’\n}\n]This has happened to a large number of my data. How can I compare the values of field1 and field2 of a document?",
"username": "Neev_Shah"
},
{
"code": "",
"text": "You need to use the $expr syntax",
"username": "steevej"
}
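For completeness, a minimal example of the $expr form, using the field names from the sample document:

```js
// documents where the two fields hold the same value
db.collection.find({ $expr: { $eq: ["$field1", "$field2"] } })

// documents where they differ
db.collection.find({ $expr: { $ne: ["$field1", "$field2"] } })
```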
]
| Comparing two fields without where operation | 2022-12-17T04:44:05.445Z | Comparing two fields without where operation | 2,606 |
null | [
"node-js",
"crud"
]
| [
{
"code": "this.db\n.collection('widgets')\n.updateOne(\n { _id: widgetId },\n { $unset: {'gizmos.color': 1 }\n)\n",
"text": "My searching hasn’t returned a satisfactory result yet!If a Widget has many Gizmos, and Gizmos have an optional color, how can I remove all colors from a Widget’s Gizmos?I thought it would be:I can’t find any documentation that specifies how $unset works in the embedded documents array context.I was reading a S.O. post (which I can’t link because I can’t find it again) that seemed to indicate that there are some options that have to be passed to make the unset work on multiple embedded documents.I see that there are some update options - but they don’t seem useful.",
"username": "Michael_Jay2"
},
{
"code": "db.widgets.updateOne( query , [ { \"$set\" : ... } ] )\n",
"text": "Please provide sample documents that we can cut-n-paste directly into our system.If I understand correctly gizmos is an array of objects. Some of the objects have a field named color. You want to remove the field color from all objects.I am not too sure if there is a way to do it directly. One approach that could work is using the $set with aggregation syntax:1 - I would use $map on gizmos to produce a temporary _gizmos by using $objectToArray\n2 - Then a $map on _gizmos that uses $filter on each element that remove k:color\n3 - A final $map that uses $arrayToObject on _gizmos elements to reconstruct an updated gizmos",
"username": "steevej"
}
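A sketch of the pipeline-update approach described above (MongoDB 4.2+), using the collection and field names from the question; the all-positional operator ($[]) together with $unset may also be worth trying, but the pipeline form below follows the steps outlined here.

```js
db.widgets.updateOne(
  { _id: widgetId },
  [
    {
      $set: {
        gizmos: {
          $map: {
            input: "$gizmos",
            as: "g",
            in: {
              $arrayToObject: {
                $filter: {
                  input: { $objectToArray: "$$g" },   // each gizmo becomes [{ k, v }, ...]
                  as: "kv",
                  cond: { $ne: ["$$kv.k", "color"] }  // drop the color entry
                }
              }
            }
          }
        }
      }
    }
  ]
)
```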
]
| Node driver - $unset on a property of embedded documents in an array? | 2022-12-16T02:40:51.486Z | Node driver - $unset on a property of embedded documents in an array? | 1,021 |
null | [
"aggregation"
]
| [
{
"code": "{\n ...,\n notSorted: [\n { code: \"a\", index: 1, <other data> },\n { code: \"c\", index: 1, ...},\n { code: \"a\", index: 2, ...},\n { code: \"c\", index: 2, ...},\n { code: \"b\", index: 2, ...},\n { code: \"b\", index: 1, ...}\n ],\n ...\n}\n{\n ...,\n notSorted: [ <as before> ],\n sorted: [\n {\n codes: \"A\",\n elements: [\n { index: 1, <other data> },\n { index: 2, ... }\n ]\n },\n {\n codes: \"B\",\n elements: [\n { index: 1, ...},\n { index: 2, ... }\n ]\n },\n {\n codes: \"C\",\n elements: [\n { index: 1, ...}, \n { index: 2, ...}\n ]\n },\n ],\n ...\n}\nnotSorted[ aggregate pipeline: $unwind stage ]\n{\n path: \"$notSorted\"\n}\n[ aggregate pipeline: $group stage ]\n{ \n _id: \"$notSorted.code\",\n}\n",
"text": "I have an aggregate pipeline that is transforming documents. I’m at a point where the document contains a field like this…I want to keep that field and add a new, sorted version, field, like so…I think what I should do, within my aggregate pipeline is something like:Does that sound like it would work?if yes, what would you do for step 4?When I do the $unwind, things seem okay, I get 6 documents, one for each element of the notSorted array…Then I can use $group to pull them together by the “code” id:but I then lose all data other than the code. How do I keep the rest? I can use a $mergeObjects step in with the group, but then I can’t see how I can easily keep the existing object structure, and I don’t want to merge the various elements of A, I just want them grouped together, but still as separate objects…Urgh. I feel stupid. What am I missing?",
"username": "Oliver_Browne"
},
{
"code": "",
"text": "Please publish your sample documents in usable form. Little 3 dots and things like <other data> cannot be cut-n-paste directly into our system for experimentation. Values and keys are case sensitive. So when you use code:a make sure you use code:a rather than code:A elsewhere.My approach will involve multiple stages.1 - A $set stage that uses $reduce on notSorted to compute with $addToSet an array of codes, as:_codes.2 - A $set stage that uses $map on _codes with $filter on notSorted to collect the elements of a given code as:_grouped3 - A final $set stage that uses $map on _grouped that uses $sortArray on _grouped.elements to produce as:sorted.",
"username": "steevej"
}
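One way to realize the staged approach above, collapsed into a single $set for brevity; $setUnion yields the distinct codes and $sortArray (MongoDB 5.2+) orders each group. Note the grouped elements still carry their code field; project it away if needed.

```js
db.collection.aggregate([
  {
    $set: {
      sorted: {
        $map: {
          input: { $setUnion: ["$notSorted.code"] },   // distinct codes, e.g. ["a", "b", "c"]
          as: "c",
          in: {
            codes: "$$c",
            elements: {
              $sortArray: {
                input: { $filter: { input: "$notSorted", as: "n", cond: { $eq: ["$$n.code", "$$c"] } } },
                sortBy: { index: 1 }
              }
            }
          }
        }
      }
    }
  }
])
```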
]
| How to use $unwind > $group > $sort > <?> in an aggregate pipeline? | 2022-12-15T13:48:25.240Z | How to use $unwind > $group > $sort > <?> in an aggregate pipeline? | 1,234 |
null | [
"java"
]
| [
{
"code": "",
"text": "Hi can any one help me\nI want get all type of value in bson",
"username": "halim_hiouani"
},
{
"code": "// this \"text\" document\n{ \"BSON\" : [ \"awesome\", 5.05, 1986 ] }\n// serializes to the \"binary\" document\n \\x31\\x00\\x00\\x00 \n // total size 49 bytes, 45 left now (int32)\n \\x04BSON\\x00 \n // 04 is Array, 39 left now\n \\x26\\x00\\x00\\x00\n // size 38 for Array? I don't really know what is this\n \\x02\\x30\\x00\\x08\\x00\\x00\\x00awesome\\x00\n // 02 is utf string (awesome is actually 61 77 65 73 6f 6d 65)\n // 30 is utf 8 for 0 i.e first item in the object / array\n// notice the \"separator\" \\x00 right after\n \\x01\\x31\\x00\\x33\\x33\\x33\\x33\\x33\\x33\\x14\\x40\n // 01 is double,so don't expect me to translate it lol, but can be pasted elsewhere\n // 31 is first item in the array\n // notice the \"separator\" \\x00 right after\n \\x10\\x32\\x00\\xc2\\x07\\x00\\x00\n // 32 is the utf-8 key again, 2 in this case\n // notice the \"separator\" \\x00 right after \n // (7 << 8) + 0xc2 gets you to 1982\n \\x00\n",
"text": "The bson format specification is here.They tell you how the bit pattern that represents each data type allowed in BSON.I add details below that you may find useful or not, and an example with some types.Basically the format cited above explains how the data you type is serialized (characters in the screen to bytes) and deserialized (bytes to data.)BSON was created bc:when JSON is serialized the type information is not specific enough to make reading fast. I don’t know the details of this.It also adds flexibility. JSON has a similar serialization but lacks types for binary data and date.( I am not sure how this is handled with drivers, I presume they send JSON over the network and when the Driver queries a Database deserializes it to json to send it over the network as well.)Example of a bson doc adapted from here",
"username": "santimir"
},
{
"code": "",
"text": "( I am not sure how this is handled with drivers, I presume they send JSON over the network and when the Driver queries a Database deserializes it to json to send it over the network as well.)BSON is used on the network too. The query itself is a BSON document!",
"username": "chris"
},
{
"code": "",
"text": "Makes sense now that you say it. Thank you.",
"username": "santimir"
}
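Coming back to the original question, if the goal is to see the BSON type of every field of a stored document, a sketch with the Java driver looks like this; the connection string, database and collection names are placeholders.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.BsonDocument;
import org.bson.BsonValue;
import java.util.Map;

public class FieldTypes {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            // Read the document back as a raw BsonDocument so the types are preserved.
            MongoCollection<BsonDocument> coll =
                client.getDatabase("test").getCollection("things", BsonDocument.class);
            BsonDocument doc = coll.find().first();
            if (doc != null) {
                for (Map.Entry<String, BsonValue> field : doc.entrySet()) {
                    // getBsonType() reports STRING, INT32, DOUBLE, ARRAY, DATE_TIME, ...
                    System.out.println(field.getKey() + " -> " + field.getValue().getBsonType());
                }
            }
        }
    }
}
```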
]
| Type of field in collection | 2022-12-16T07:16:34.293Z | Type of field in collection | 1,078 |
null | [
"aggregation"
]
| [
{
"code": "workout.nonSupersetted: [\n 0: {\n superset: \"A\",\n exercises: [\n 0: {\n supersetIndex: \"1\",\n exercise: {},\n valuesThatVaryByWeek: []\n }\n ]\n },\n 1: {\n superset: \"A\",\n exercises: [\n 0: { supersetIndex: \"2\", exercise: {}, valuesThatVaryByWeek: [] }\n ]\n },\n 2: {\n superset: \"B\",\n exercises: [ 0: { supersetIndex: \"1\", .... } ]\n },\n 3: {\n superset: \"B\",\n exercises: [ 0: { supersetIndex: \"2\", .... } ]\n },\n 4: {\n superset: \"C\",\n exercises: [ 0: { supersetIndex: \"1\", .... } ]\n },\n 5: {\n superset: \"C\",\n exercises: [ 0: { supersetIndex: \"2\", .... } ]\n },\n]\n\nworkout.nonSupersetted: [\n 0: {\n superset: \"A\",\n exercises: [\n 0: {\n supersetIndex: \"1\",\n exercise: {},\n valuesThatVaryByWeek: []\n },\n 1: { supersetIndex: \"2\", exercise: {}, valuesThatVaryByWeek: [] }\n ]\n },\n 1: {\n superset: \"B\",\n exercises: [\n 0: { supersetIndex: \"1\", ... },\n 1: { supersetIndex: \"2\", ... }\n ]\n },\n 2: {\n superset: \"C\",\n exercises: [\n 0: { supersetIndex: \"1\", ... },\n 1: { supersetIndex: \"2\", ... }\n\n ]\n }\n]\n",
"text": "How do I merge arrays? Given this object…I want to end up with this…Caveats: I want this to be done within an aggregation pipeline, since that’s what I’m using to generate this doc so far.",
"username": "Oliver_Browne"
},
{
"code": "",
"text": "Please update your sample documents and make them real JSON.We cannot cut-n-paste your documents as-is. Arrays do not show their indices. Three little dots is not syntactically correct.",
"username": "steevej"
}
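Assuming real documents shaped like the sample (an array field whose entries each carry a superset key and an exercises array), here is a sketch of the kind of pipeline that merges the entries sharing a superset; field and collection names follow the question.

```js
db.workouts.aggregate([
  {
    $set: {
      nonSupersetted: {
        $map: {
          input: { $setUnion: ["$nonSupersetted.superset"] },   // distinct codes, e.g. ["A", "B", "C"]
          as: "s",
          in: {
            superset: "$$s",
            exercises: {
              $reduce: {
                input: { $filter: { input: "$nonSupersetted", as: "e", cond: { $eq: ["$$e.superset", "$$s"] } } },
                initialValue: [],
                in: { $concatArrays: ["$$value", "$$this.exercises"] }   // merge the per-entry arrays
              }
            }
          }
        }
      }
    }
  }
])
```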
]
| How to merge embedded arrays? | 2022-12-15T21:56:31.292Z | How to merge embedded arrays? | 941 |
null | [
"dot-net"
]
| [
{
"code": "",
"text": "Hi, when I add a new relation for two object, I obtain an error and clients not sync.\nIt’s necessary stop sync and restart! (Realm 10.18.0 on Xamarin)This is stacktrace (I add Professionista filed to Cliente object):2022-12-16 14:46:09.088590+0100 StudioProApp.iOS[59770:5919202] 2022-12-16 13:46:09.087 Error: Connection[2]: Session[2]: Failed to integrate downloaded changesets: Failed to apply received changeset: Update: No such field: ‘Professionista’ in class ‘class_Cliente’ (instruction target: Cliente[“25ae15af-4dc4-4426-af17-36f97721ed1f”].Professionista, version: 721, last_integrated_remote_version: 2, origin_file_ident: 6, timestamp: 251125421648)\nException backtrace:\n0 realm-wrappers 0x0000000109fcf312 ZN5realm4sync17BadChangesetErrorCI1NS_4util22ExceptionWithBacktraceISt13runtime_errorEEIJNSt3__112basic_stringIcNS5_11char_traitsIcEENS5_9allocatorIcEEEEEEEDpOT + 34\n1 realm-wrappers 0x0000000109fc91d2 _ZN5realm4sync12_GLOBAL__N_125throw_bad_transaction_logENSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE + 34\n2 realm-wrappers 0x0000000109fc8fe9 _ZNK5realm4sync18InstructionApplier19bad_transaction_logERKNSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE + 1177\n3 realm-wrappers 0x0000000109fcdaed _ZN5realm4sync18InstructionApplier12PathResolver8on_errorERKNSt3__112basic_stringIcNS3_11char_traitsIcEENS3_9allocatorIcEEEE + 13\n4 realm-wrappers 0x0000000109fcdf3f _ZN5realm4sync18InstructionApplier12PathResolver13resolve_fieldERNS_3ObjENS0_12InternStringE + 335\n5 realm-wrappers 0x0000000109fcabaf _ZN5realm4sync18InstructionApplier12PathResolver7resolveEv + 111\n6 realm-wrappers 0x0000000109fcaa91 _ZN5realm4sync18InstructionApplierclERKNS0_5instr6UpdateE + 81\n7 realm-wrappers 0x0000000109fe8764 _ZN5realm4sync18InstructionApplier5applyIS1_EEvRT_RKNS0_9ChangesetEPNS_4util6LoggerE + 100\n8 realm-wrappers 0x0000000109fe85fb _ZN5realm4util14UniqueFunctionIFbPKNS_4sync9ChangesetEEE12SpecificImplIZNS2_13ClientHistory37transform_and_apply_server_changesetsENS0_4SpanIS3_Lm18446744073709551615EEENSt3__110shared_ptrINS_11TransactionEEERNS0_6LoggerERyE3$0E4callEOS5 + 203\n9 realm-wrappers 0x000000010a026a20 _ZN5realm5_impl15TransformerImpl27transform_remote_changesetsERNS_4sync16TransformHistoryEyyNS_4util4SpanINS2_9ChangesetELm18446744073709551615EEENS5_14UniqueFunctionIFbPKS7_EEEPNS5_6LoggerE + 880\n10 realm-wrappers 0x0000000109fe488a _ZN5realm4sync13ClientHistory37transform_and_apply_server_changesetsENS_4util4SpanINS0_9ChangesetELm18446744073709551615EEENSt3__110shared_ptrINS_11TransactionEEERNS2_6LoggerERy + 426\n11 realm-wrappers 0x0000000109fe3f7e _ZN5realm4sync13ClientHistory27integrate_server_changesetsERKNS0_12SyncProgressEPKyNS_4util4SpanIKNS0_11Transformer15RemoteChangesetELm18446744073709551615EEERNS0_11VersionInfoENS0_18DownloadBatchStateERNS7_6LoggerENS7_14UniqueFunctionIFvRKNSt3__110shared_ptrINS_11TransactionEEEmEEEPNS1_20SyncTransactReporterE + 526\n12 realm-wrappers 0x0000000109ff42bc _ZN5realm4sync10ClientImpl7Session20integrate_changesetsERNS0_17ClientReplicationERKNS0_12SyncProgressEyRKNSt3__16vectorINS0_11Transformer15RemoteChangesetENS8_9allocatorISB_EEEERNS0_11VersionInfoENS0_18DownloadBatchStateE + 156\n13 realm-wrappers 0x0000000109fb85e7 _ZN5realm4sync10ClientImpl7Session29initiate_integrate_changesetsEyNS0_18DownloadBatchStateERKNS0_12SyncProgressERKNSt3__16vectorINS0_11Transformer15RemoteChangesetENS7_9allocatorISA_EEEE + 103\n14 realm-wrappers 0x0000000109ff37c5 
_ZN5realm4sync10ClientImpl7Session24receive_download_messageERKNS0_12SyncProgressEyNS0_18DownloadBatchStateExRKNSt3__16vectorINS0_11Transformer15RemoteChangesetENS7_9allocatorISA_EEEE + 853\n15 realm-wrappers 0x0000000109ff9d8e _ZN5realm5_impl14ClientProtocol22parse_download_messageINS_4sync10ClientImpl10ConnectionEEEvRT_RNS0_16HeaderLineParserE + 1838\n2022-12-16 14:46:09.089621+0100 StudioProApp.iOS[59770:5919202]\n16 realm-wrappers 0x0000000109fef35f _ZN5realm5_impl14ClientProtocol22parse_message_receivedINS_4sync10ClientImpl10ConnectionEEEvRT_NSt3__117basic_string_viewIcNS8_11char_traitsIcEEEE + 783\n17 realm-wrappers 0x0000000109fece64 _ZN5realm4sync10ClientImpl10Connection33websocket_binary_message_receivedEPKcm + 52\n18 realm-wrappers 0x000000010a04b5e1 _ZN12_GLOBAL__N_19WebSocket17frame_reader_loopEv + 2337\n19 realm-wrappers 0x000000010a03fefc ZN5realm4util7network7Service9AsyncOper22do_recycle_and_executeINS0_14UniqueFunctionIFvNSt3__110error_codeEmEEEJRS7_RmEEEvbRT_DpOT0 + 156\n20 realm-wrappers 0x000000010a03fa1e _ZN5realm4util7network7Service14BasicStreamOpsINS1_3ssl6StreamEE16BufferedReadOperINS0_14UniqueFunctionIFvNSt3__110error_codeEmEEEE19recycle_and_executeEv + 190\n21 realm-wrappers 0x000000010a0426f5 _ZN5realm4util7network7Service4Impl3runEv + 405\n22 realm-wrappers 0x0000000109fbcddd _ZN5realm4sync6Client3runEv + 29\n23 realm-wrappers 0x0000000109eff50d ZNSt3__1L14__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN5realm5_impl10SyncClientC1ENS2_INS7_4util6LoggerENS4_ISB_EEEERKNS7_16SyncClientConfigENS_8weak_ptrIKNS7_11SyncManagerEEEEUlvE0_EEEEEPvSN + 45\n24 libsystem_pthread.dylib 0x0000000112f6e259 _pthread_start + 125\n25 libsystem_pthread.dylib 0x0000000112f69c7b thread_start + 15\n2022-12-16 14:46:09.089987+0100 StudioProApp.iOS[59770:5919202]Thanks\nLuigi",
"username": "Luigi_De_Giacomo"
},
{
"code": "",
"text": "How did you change the schema? Did you do it on the server via the UI or did you update your C# model classes? Also, are there any errors on the server around that time?",
"username": "nirinchev"
},
{
"code": "",
"text": "I updated my C# Model in Develop Mode.\nOn server this is error:Failed to apply received changeset: Update: No such field: ‘Professionista’ in class ‘class_Cliente’ (instruction target: Cliente[“25ae15af-4dc4-4426-af17-36f97721ed1f”].Professionista, version: 721, last_integrated_remote_version: 2, origin_file_ident: 6, timestamp: 251125421648) Exception backtrace: 0 realm-wrappers 0x0000000109fcf312 ZN5realm4sync17BadChangesetErrorCI1NS_4util22ExceptionWithBacktraceISt13runtime_errorEEIJNSt3__112basic_stringIcNS5_11char_traitsIcEENS5_9allocatorIcEEEEEEEDpOT + 34 1 realm-wrappers 0x0000000109fc91d2 _ZN5realm4sync12_GLOBAL__N_125throw_bad_transaction_logENSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE + 34 2 realm-wrappers 0x0000000109fc8fe9 _ZNK5realm4sync18InstructionApplier19bad_transaction_logERKNSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE + 1177 3 realm-wrappers 0x0000000109fcdaed _ZN5realm4sync18InstructionApplier12PathResolver8on_errorERKNSt3__112basic_stringIcNS3_11char_traitsIcEENS3_9allocatorIcEEEE + 13 4 realm-wrappers 0x0000000109fcdf3f _ZN5realm4sync18InstructionApplier12PathResolver13resolve_fieldERNS_3ObjENS0_12InternStringE + 335 5 realm-wrappers 0x0000000109fcabaf _ZN5realm4sync18InstructionApplier12PathResolver7resolveEv + 111 6 realm-wrappers 0x0000000109fcaa91 _ZN5realm4sync18InstructionApplierclERKNS0_5instr6UpdateE + 81 7 realm-wrappers 0x0000000109fe8764 _ZN5realm4sync18InstructionApplier5applyIS1_EEvRT_RKNS0_9ChangesetEPNS_4util6LoggerE + 100 8 realm-wrappers 0x0000000109fe85fb|",
"username": "Luigi_De_Giacomo"
},
{
"code": "",
"text": "Okay, this looks like a server-side issue and we’ll need to have the team look into it. Can you file a support ticket so that they can investigate what went wrong there.",
"username": "nirinchev"
},
{
"code": "",
"text": "Ok, thanks!I will open a support ticketLuigi",
"username": "Luigi_De_Giacomo"
}
]
| Add Relation not working with sync | 2022-12-16T13:53:14.730Z | Add Relation not working with sync | 1,401 |
null | [
"sharding"
]
| [
{
"code": "",
"text": "Hi,\nI’m a new user of mongodb …I need to install the database on a RedHat8 …in the download page I see two rpm\nmongodb-org-server-6.0.3-1.el8.x86_64\nmongodb-org-mongos-6.0 .3-1.el8.x86_64what is the difference between the two ?thank in advance",
"username": "PAOLO"
},
{
"code": "",
"text": "mongodb-org-server-6.0.3-1.el8.x86_64This is likely the one you want. This installs the database server mongod used for standalone and replica sets.mongodb-org-mongos-6.0 .3-1.el8.x86_64This is for mongos the query router. This is used when a sharded cluster is installed.",
"username": "chris"
},
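In practice (assuming the MongoDB yum repository is already configured on the RHEL 8 host), the choice looks roughly like this:

```sh
# Only the database server (mongod), enough for a standalone or replica set member:
sudo dnf install -y mongodb-org-server

# Or the mongodb-org meta-package, which also pulls in mongos, the shell and the database tools:
sudo dnf install -y mongodb-org
```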
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Which rpm to choose? | 2022-12-15T13:24:32.846Z | Which rpm to choose? | 1,200 |
[
"aggregation"
]
| [
{
"code": " doc_switch = collection.aggregate([\n {\n $set:{\n rm:{\n $switch:{\n branches:[\n {case:req1,then:1},\n {case:req2,then:2},\n {case:req3,then:3}\n ],\n default:0\n }\n }\n }\n }\n ]) \ntrue$switch// > error:(InvalidPipelineOperator) Invalid $set :: caused by :: Unrecognized expression '$match’\n let match1 = {$match:{\"code\":arg_c,\"name\": arg_n , \"grade\": arg_g}}\n let match2 = {$match:{\"code\":arg_c,\"name\": arg_n }}\n let match3 = {$match:{\"code\":arg_c}}\n \n let and1 = {$and:[{\"code\":arg_c},{\"name\": arg_n},{\"grade\": arg_g}]}\n let and2 = {$and:[{\"code\":arg_c},{\"name\": arg_n}]}\n let and3 = {$and:[{\"code\":arg_c}]}\n \n // > error:(InvalidPipelineOperator) Invalid $set :: caused by :: Unrecognized expression '$match’\n let matchand1 = {$match:{$and:[{\"code\":arg_c},{\"name\": arg_n},{\"grade\": arg_g}]}}\n let matchand2 = {$match:{$and:[{\"code\":arg_c},{\"name\": arg_n}]}}\n let matchand3 = {$match:{\"code\":arg_c}}\n \n let andmatch1 = {$and:[{$match:{\"code\":arg_c,\"name\": arg_n , \"grade\": arg_g}}]}\n let andmatch2 = {$and:[{$match:{\"code\":arg_c,\"name\": arg_n }}]}\n let andmatch3 = {$and:[{$match:{\"code\":arg_c}}]}\n \n let andmatchmatch1 = {$and:[{$match:{\"code\":arg_c}},{$match:{\"name\": arg_n}} , {$match:{\"grade\": arg_g}}]}\n let andmatchmatch2 = {$and:[{$match:{\"code\":arg_c}},{$match:{\"name\": arg_n }}]}\n let andmatchmatch3 = {$and:[{$match:{\"code\":arg_c}}]}\n \n // Erreur d'expression relevée par Realm\n //let eq1 = { {$eq:[\"code\",arg_c]},{$eq:[\"name\", arg_n]},{$eq:[\"grade\", arg_g]}}\n //let eq2 = { {$eq:[\"code\",arg_c]},{$eq:[\"name\", arg_n]}}\n //let eq3 = { {$eq:[\"code\",arg_c]}}\n \n let andeq1 = { $and:[{$eq:[\"code\",arg_c]},{$eq:[\"name\", arg_n]},{$eq:[\"grade\", arg_g]}]}\n let andeq2 = { $and:[{$eq:[\"code\",arg_c]},{$eq:[\"name\", arg_n]}]}\n let andeq3 = { $and:[{$eq:[\"code\",arg_c]}]}\n \n // Chosen case\n req1 = andmatchmatch1\n req2 = andmatchmatch2\n req3 = andmatchmatch3\n\n",
"text": "I have got some issues trying to use $switch and $match or $eq for conditionnal cases. Here is my collection of 4 test documents:\nI would like to query some documents depending on some conditions. But as conditions if/else or switch exist only for aggregation, I use the following aggregation on realm:Where each req is a function of 3 arguments: function(arg_c, arg_n, arg_g) where:I want to set the rm attribute based on the switch condition req1, req2 and req3. According to mongodb documentation of switch:$switch\nEvaluates a series of case expressions. When it finds an expression which evaluates to true , $switch executes a specified expression and breaks out of the control flow.That means, if in my aggregation the case req1 is ok, the $switch won’t test req2 and so req3.I have tried different used cases for my conditions as follow:Depending on the request, I have summariezd the resultas as follow in a tab:\nimage1862×786 143 KB\n\n\nimage1845×504 66.7 KB\n\n\nimage1852×527 111 KB\n\n\nimage1850×564 74.1 KB\nI manually get rid of the _id attributes for readibility.\nI hope it’s still readable enough So basicaly, for all the used cases tested, either there are no match, either the value I have got for the new rm attribute is the wrong and the same for each document.Is someone know a solution, or know why it doesn’t work.I hope my issue is clear enough.Damien",
"username": "Damien"
},
{
"code": "casebooleanbooleandocument",
"text": "Hello @Damien ,Welcome to The MongoDB Community Forums! The reason you are getting below error while using $match in $switch is that, the case field of $switch is expected to be any valid expression that resolves to a boolean . If the result is not a boolean , it is coerced to a boolean value. $match is not an expression (provides document as result and not a boolean value) but an aggregation stage.Invalid $set :: caused by :: Unrecognized expression ‘$match’I would recommend you to go through Expressions in MongoDB to learn more about what are valid expressions and how one can use them to their advantage.Regards,\nTarun",
"username": "Tarun_Gaur"
},
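Building on Tarun's explanation, the conditions can be written as pure expressions; note the $ prefix on the field paths ($code, $name, $grade), which the andeq attempts above were missing. arg_c, arg_n and arg_g are the function arguments described in the question.

```js
{
  $set: {
    rm: {
      $switch: {
        branches: [
          { case: { $and: [ { $eq: ["$code", arg_c] }, { $eq: ["$name", arg_n] }, { $eq: ["$grade", arg_g] } ] }, then: 1 },
          { case: { $and: [ { $eq: ["$code", arg_c] }, { $eq: ["$name", arg_n] } ] }, then: 2 },
          { case: { $eq: ["$code", arg_c] }, then: 3 }
        ],
        default: 0
      }
    }
  }
}
```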
{
"code": "",
"text": "Thank you for your answer @Tarun_Gaur . I didn’t really think about if the $match was doing a boolean response or something else. I think i will try again to do my aggregation by another way with that in mind.",
"username": "Damien"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Impossible to use $switch, $and and $match for conditional queries or aggregations | 2022-12-13T22:09:43.645Z | Impossible to use $switch, $and and $match for conditional queries or aggregations | 2,080 |
|
null | [
"queries",
"mongopush"
]
| [
{
"code": "({\n \"comercio\": {\n \"descripcion\": \"Realizar pruebas\"\n },\n \"contacto\": {\n \"correo\": \"[email protected]\"\n },\n \"cuenta\": {\n \"nombreCompletoTitular\": \"Alexis Francisco \",\n \"numeroCuenta\": \"0000000000\"\n },\n \"diasLaborales\": [\n {\n \"dia\": \"Lunes\",\n \"horarios\": {\n \"horaApertura\": \"07:00:00\",\n \"horaCierre\": \"23:00:00\"\n }\n },\n {\n \"dia\": \"Martes\",\n \"horarios\": {\n \"horaApertura\": \"07:00:00\",\n \"horaCierre\": \"23:00:00\"\n }\n }\n ],\n \"domicilio\": {\n \"calle\": \"combate de leon\",\n \"codigoPostal\": \"09200\",\n \"idColonia\": NumberLong(1),\n \n }\n})\ndb.TablaEjemploDoc.update(\n{\n \"_id\": ObjectId(\"639a5cf73a78e35ff9787f32\")\n},\n{\n \"$push\": {\n \"diasLaborales\":\n {\n \"dia\": \"Miercoles\",\n \"horarios\": {\n \"horaApertura\": \"09:00:00\",\n \"horaCierre\": \"15:00:00\"\n }\n },\n {\n \"dia\": \"Jueves\",\n \"horarios\": {\n \"horaApertura\": \"09:00:00\",\n \"horaCierre\": \"15:00:00\"\n }\n },\n }\n});\n({\n \"comercio\": {\n \"descripcion\": \"Realizar pruebas\"\n },\n \"contacto\": {\n \"correo\": \"[email protected]\"\n },\n \"cuenta\": {\n \"nombreCompletoTitular\": \"Alexis Francisco \",\n \"numeroCuenta\": \"0000000000\"\n },\n \"diasLaborales\": [\n {\n \"dia\": \"Lunes\",\n \"horarios\": {\n \"horaApertura\": \"07:00:00\",\n \"horaCierre\": \"23:00:00\"\n }\n },\n {\n \"dia\": \"Martes\",\n \"horarios\": {\n \"horaApertura\": \"07:00:00\",\n \"horaCierre\": \"23:00:00\"\n }\n },\n {\n \"dia\": \"Miercoles\",\n \"horarios\": {\n \"horaApertura\": \"07:00:00\",\n \"horaCierre\": \"23:00:00\"\n }\n },\n {\n \"dia\": \"Jueves\",\n \"horarios\": {\n \"horaApertura\": \"07:00:00\",\n \"horaCierre\": \"23:00:00\"\n }\n },\n ],\n \"domicilio\": {\n \"calle\": \"combate de leon\",\n \"codigoPostal\": \"09200\",\n \"idColonia\": NumberLong(1),\n \n }\n})\n",
"text": "have to document.Where i need to insert to many values in my array “diasLaborales”, but my querie dont allow.this return a error.I hope that this response that querie return must be :",
"username": "Lourdes_Nataly_Rojas_Hernandez"
},
{
"code": "db.TablaEjemploDoc.updateOne(\n { _id: ObjectId(\"639a5cf73a78e35ff9787f32\") },\n { $push: { diasLaborales: { $each: [ {your}, {docs}, {here} ] } } }\n)\n",
"text": "Hi @Lourdes_Nataly_Rojas_Hernandez,You update query is incorrect indeed.You must use $push with $each.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
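Filled in with the two documents from the question, the update looks like this:

```js
db.TablaEjemploDoc.updateOne(
  { _id: ObjectId("639a5cf73a78e35ff9787f32") },
  {
    $push: {
      diasLaborales: {
        $each: [
          { dia: "Miercoles", horarios: { horaApertura: "09:00:00", horaCierre: "15:00:00" } },
          { dia: "Jueves",    horarios: { horaApertura: "09:00:00", horaCierre: "15:00:00" } }
        ]
      }
    }
  }
)
```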
{
"code": "",
"text": "Thanks so much!.\nYour information is grath and of much help!!",
"username": "Lourdes_Nataly_Rojas_Hernandez"
}
]
| I want to insert to many values into array but i can't because the insertion agregate to [] | 2022-12-14T23:52:56.130Z | I want to insert to many values into array but i can’t because the insertion agregate to [] | 1,285 |
null | []
| [
{
"code": "",
"text": "Hi,I’ve manually set my mac date to 2036, and now when getting back to 2022 I am getting the following error in mongodb:\nNew $clusterTime, 2112867846, is too far from this node’s wall clock time, 1671039831.I tried restarting the server but nothing changes. What can I do to solve this?Kind regards",
"username": "Pablo_Caselas_Pedrei"
},
{
"code": "",
"text": "Hi @Pablo_Caselas_PedreiTo determine the order of operation in a distributed system, MongoDB uses a Lamport clock, with a defined maximum tolerance of forward/backward movement of 1 year (as per MongoDB 6.0.3, see mongo/vector_clock.idl at r6.0.3 · mongodb/mongo · GitHub). Lamport clocks have a property of always-increasing timestamp. Realistically, any part of your cluster should not have a time difference of more than 1 year. If you do, then this error will appear.If you’re faced with this situation, you have two possibilities:I’m guessing that you’re testing failure modes, but since most Linux distros have ntp or similar service by default, your clock should not differ by more than 1 year.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "It was just my local environment. I restored a backup.Thanks for the clarification!",
"username": "Pablo_Caselas_Pedrei"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| New $clusterTime is too far from this node's wall clock time | 2022-12-14T17:52:02.196Z | New $clusterTime is too far from this node’s wall clock time | 1,764 |
null | [
"aggregation",
"queries",
"node-js",
"transactions",
"graphql"
]
| [
{
"code": " \"message\": \"String cannot represent value: [\\\"Debit Mastercard\\\"]\",\n \"locations\": [\n {\n \"line\": 14,\n \"column\": 7\n }\n ],\n{\n _id: new ObjectId(\"632c711adc4f85eba1b74911\"),\n bReconcileError: 'true',\n batchNumber: '1',\n btransfered: 'true',\n countryCode: '788',\n currencyCode: '788',\n merchantID: '458742236657711',\n nSettlementAmount: '159800',\n onlineMessageMACerror: 'false',\n reconciliationAdviceRRN: '000104246913',\n reconciliationApprovalCode: '',\n settledTransactions: [ [Object] ],\n settlementAmount: 'C000159800',\n settlementDate: '220617',\n settlementTime: '114110',\n terminalID: '05000002',\n traceNumber: '13',\n uniqueID: '363bc047-4cff-4013-aaad-e608a59bbd4c'\n },\n[\n {\n \"appName\": \"Debit Mastercard\",\n \"cardInputMethod\": \"INPUT_CHIP_CONTACT\",\n \"cardPAN_PCI\": \"XXXXXXXXXXXX5545\",\n \"networkName\": \"MASTERCARD\",\n \"onlineApprovalCode\": \"846022\",\n \"onlineRetrievalReferenceNumber\": \"000102846022\",\n \"transactionAmount\": \"000000069000\",\n \"transactionDate\": \"220613\",\n \"transactionTime\": \"110016\",\n \"transactionType\": \"TRANSACTION_TYPE_PURCHASE\"\n }\n ],\n \nasync getSettlementsByUser(role: string, name: string) {\n\n let settlement = await this.profileModel.aggregate([\n {\n $match: {\n bindedSuperAdmin: name,\n },\n },\n {\n $lookup: {\n from: 'tpes',\n localField: 'nameUser',\n foreignField: 'merchantName',\n as: 'tpesBySite',\n },\n },\n {\n $lookup: {\n from: 'settlements',\n localField: 'tpesBySite.terminalId',\n foreignField: 'terminalID',\n as: 'settlementsByUser',\n pipeline: [\n {\n $sort: {\n transactionDate: -1,\n },\n },\n ],\n },\n },\n\n { $unwind: '$tpesBySite' },\n\n { $unwind: '$settlementsByUser' },\n { $unwind: '$settlementsByUser.settledTransactions' },\n\n {\n $project: {\n bReconcileError: '$settlementsByUser.bReconcileError',\n batchNumber: '$settlementsByUser.batchNumber',\n btransfered: '$settlementsByUser.btransfered',\n countryCode: '$settlementsByUser.countryCode',\n currencyCode: '$settlementsByUser.currencyCode',\n merchantID: '$settlementsByUser.merchantID',\n nSettlementAmount: '$settlementsByUser.nSettlementAmount',\n onlineMessageMACerror: '$settlementsByUser.onlineMessageMACerror',\n reconciliationAdviceRRN:\n '$settlementsByUser.reconciliationAdviceRRN',\n reconciliationApprovalCode:\n '$settlementsByUser.reconciliationApprovalCode',\n settledTransactions: [\n {\n appName: '$settlementsByUser.settledTransactions.appName',\n cardInputMethod:\n '$settlementsByUser.settledTransactions.cardInputMethod',\n cardPAN_PCI:\n '$settlementsByUser.settledTransactions.cardPAN_PCI',\n networkName:\n '$settlementsByUser.settledTransactions.networkName',\n onlineApprovalCode:\n '$settlementsByUser.settledTransactions.onlineApprovalCode',\n onlineRetrievalReferenceNumber:\n '$settlementsByUser.settledTransactions.onlineRetrievalReferenceNumber',\n transactionAmount:\n '$settlementsByUser.settledTransactions.transactionAmount',\n transactionDate:\n '$settlementsByUser.settledTransactions.transactionDate',\n transactionTime:\n '$settlementsByUser.settledTransactions.transactionTime',\n transactionType:\n '$settlementsByUser.settledTransactions.transactionType',\n },\n ],\n\n settlementAmount: '$settlementsByUser.settlementAmount',\n settlementDate: '$settlementsByUser.settlementDate',\n settlementTime: '$settlementsByUser.settlementTime',\n terminalID: '$settlementsByUser.terminalID',\n traceNumber: '$settlementsByUser.traceNumber',\n uniqueID: '$settlementsByUser.uniqueID',\n },\n 
},\n ]);\n console.log('settlement from service ', settlement);\n\n return settlement;\n \n",
"text": "I’m working on project using nesjts graphQl and mongoDb I made $lookup between three collection. the collection which is I want to return has 226 records every record contains a field of type array of object I want to get the data within this array so I used $unwind the problem here when I use $unwind to read the field the number of records increased to be 524 records and when I remove the $unwind I couldn’t return the data and I got this errorData I want to return :Data inside the field settledTransactionsHere is the function :Anyone could help me in this please ?",
"username": "skander_lassoued"
},
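A quick illustration of why the record count grows here. This is a minimal, hypothetical mongosh sketch (collection and field names are made up, not taken from the post above): $unwind emits one output document per array element, so 226 settlement documents whose settledTransactions arrays hold 524 elements in total will produce 524 documents after that stage.

```javascript
// Hypothetical data: one document whose "items" array has 3 elements.
db.demo.insertOne({ _id: 1, items: ["a", "b", "c"] });

// $unwind emits one output document per array element,
// copying the rest of the document into each one.
db.demo.aggregate([{ $unwind: "$items" }]);
// [ { _id: 1, items: "a" }, { _id: 1, items: "b" }, { _id: 1, items: "c" } ]
```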
{
"code": "",
"text": "Hi @skander_lassoued and welcome to the MongoDB community forum!!To replicate the query in my local environment, it would be very helpful if you could share the following information:Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "{\n _id: new ObjectId(\"62cd33dda9b59d77cffef126\"),\n sn: 'xxx',\n terminalId: 'xxx',\n merchantId: 'xxx',\n merchantName: 'xxx',\n networkType: '',\n binded: true,\n longitude: 'xx',\n latitude: 'xx',\n bindedBank: 'xx',\n bindedClient: 'xx',\n bindedSite: 'xx',\n region: 'xx'\n }\n{\n _id: new ObjectId(\"632c711adc4f85eba1b74911\"),\n nameUser: '**',\n contactName: '**',\n phone: '***',\n email: \"**\",\n password: '**',\n role: '**',\n city: '**',\n postCode: '**',\n address: '**',\n creationDate: **,\n bindedSuperAdmin: '**',\n bindedBanque: '**',\n bindedClient: '**',\n longitude: 0,\n latitude: 0,\n localisation: '**',\n region: '**'\n }\n",
"text": "I guess that other collection doesn’t matter I just want to unwind a field contains array of objects without duplicating the main collection but for your order I’m going to share with you1 - Sample of other collection :2 - mongoose version \": “^9.0.2”",
"username": "skander_lassoued"
},
{
"code": "",
"text": "Hi @skander_lassouedThank you for sharing the above collection information. However, I do not have all the details\nI would need to reproduce the issue in my local environment.\nFor instance, the fields mentioned in the expected response are not available in the above two collection.\nAlso, I think it’s best if you post the actual documents from all collections required for the aggregation to work, so we can provide a more effective helpThanks\nAasawari",
"username": "Aasawari"
},
{
"code": "{\n _id: new ObjectId(\"62cd33dda9b59d77cffef126\"),\n sn: 'xxx',\n terminalId: 'xxx',\n merchantId: 'xxx',\n merchantName: 'xxx',\n networkType: '',\n binded: true,\n longitude: 'xx',\n latitude: 'xx',\n bindedBank: 'xx',\n bindedClient: 'xx',\n bindedSite: 'xx',\n region: 'xx'\n }\n{\n _id: new ObjectId(\"632c711adc4f85eba1b74911\"),\n nameUser: '**',\n contactName: '**',\n phone: '***',\n email: \"**\",\n password: '**',\n role: '**',\n city: '**',\n postCode: '**',\n address: '**',\n creationDate: **,\n bindedSuperAdmin: '**',\n bindedBanque: '**',\n bindedClient: '**',\n longitude: 0,\n latitude: 0,\n localisation: '**',\n region: '**'\n }\n {\n $lookup: {\n from: 'tpes',\n localField: 'nameUser',\n foreignField: 'merchantName',\n as: 'tpesBySite',\n },\n },\n{\n _id: new ObjectId(\"632c711adc4f85eba1b74911\"),\n nameUser: 'TEST MERCHANT',\n contactName: 'user',\n phone: '85747485',\n email: '[email protected]',\n password: '$2b$10$.UfsvqSSoEU1PodF1g4OZ.vSgzRgp4VTTIfzbXlqPRSySNjfoHfj.',\n role: 'SITE',\n city: 'test',\n postCode: '7844',\n address: 'test',\n creationDate: 2022-09-22T00:00:00.000Z,\n bindedSuperAdmin: 'Ms-Techsoft',\n bindedBanque: 'Attijeri',\n bindedClient: 'Monoprix',\n longitude: 0,\n latitude: 0,\n localisation: 'test',\n region: 'Tunis',\n isEmailConfirmed: true,\n tpesBySite: {\n _id: new ObjectId(\"62bd66313cd6fe2410d47770\"),\n sn: 'N300W150726',\n terminalId: '05000002',\n merchantId: '458742236657711',\n merchantName: 'TEST MERCHANT',\n networkType: 'wifi',\n binded: true,\n model: 'N3',\n firmWareVer: 'v1.7.1',\n emvKernelVersion: 'com.nexgo.oaf.apiv3.EmvKernelVersionInfo@28c6fd25',\n sdkVer: '3.02.001',\n vendor: 'nexgo',\n addr: 'TunisTunis city',\n longitude: '10.302438',\n latitude: '36.835339',\n __v: 0,\n inUse: true\n },\n\n{\n _id: new ObjectId(\"632c711adc4f85eba1b74911\"),\n bReconcileError: string,\n batchNumber: string,\n btransfered: string,\n countryCode: string,\n currencyCode:string',\n merchantID: string,\n nSettlementAmount: string,\n onlineMessageMACerror:string,\n reconciliationAdviceRRN:string,\n reconciliationApprovalCode: '',\n settledTransactions: [ {\n \"appName\": string,\n \"cardInputMethod\": string,\n \"cardPAN_PCI\": string,\n \"networkName\":string,\n \"onlineApprovalCode\": string,\n \"onlineRetrievalReferenceNumber\": string,\n \"transactionAmount\": string,\n \"transactionDate\": string,\n \"transactionTime\":string,\n \"transactionType\":string\n } ],\n settlementAmount:string,\n settlementDate: string,\n settlementTime: string,\n terminalID:string',\n traceNumber: string,\n uniqueID:string\n },\n {\n $lookup: {\n from: 'settlements',\n localField: 'tpesBySite.terminalId',\n foreignField: 'terminalID',\n as: 'settlementsByUser',\n pipeline: [\n {\n $sort: {\n transactionDate: -1,\n },\n },\n ],\n },\n },\n settlementsByUser: [\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], 
[Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object], [Object], [Object],\n [Object], [Object], [Object], [Object],\n ... 126 more items\n ]\n\n settlementsByUser: {\n _id: new ObjectId(\"62ac9732c36810454f8f3822\"),\n bReconcileError: 'true',\n batchNumber: '1',\n btransfered: 'true',\n countryCode: '788',\n currencyCode: '788',\n merchantID: '458742236657711',\n nSettlementAmount: '159800',\n onlineMessageMACerror: 'false',\n reconciliationAdviceRRN: '000104246913',\n reconciliationApprovalCode: '',\n settledTransactions: [Array],\n settlementAmount: 'C000159800',\n settlementDate: '220617',\n settlementTime: '114110',\n terminalID: '05000002',\n traceNumber: '13',\n uniqueID: '363bc047-4cff-4013-aaad-e608a59bbd4c',\n __v: 0\n }\n\n",
"text": "@Aasawari Thanks for replying \nBut I provided you all you need but I will arrange all information for you.This is collection device :this is a collection userI joined device with user to get devices affected to user in this stagethen I got a this resultI want to join this result with collection called settlements that looks liketo get settlement related to user depends devices so I did this stageI got result like thisI need to use { $unwind: ‘$settlementsByUser’ }, to see what’s inside every object, so after $unwing i got resultas you can see settled transaction still not appeared so I need to unwind this field settledTransactions which allows me to return all data correctly\nFinally once I do { $unwind: ‘$settlementsByUser.settledTransactions’ }, it duplicates settlementsByUser depends on number of object inside that field \nI wish you appreciate my effort to explain my efforts and help me\nThanks",
"username": "skander_lassoued"
},
{
"code": "$lookup[\n {\n '$lookup': {\n from: 'device',\n localField: 'nameUser',\n foreignField: 'merchantName',\n as: 'tpesBySite'\n }\n },\n {\n '$lookup': {\n from: 'settlements',\n localField: 'tpesBySite.terminalId',\n foreignField: 'terminalID',\n as: 'settlementsByUser'\n }\n },\n { '$unwind': '$settlementsByUser' }\n]\n$unwind\"settlementsByUser.settledTransactions\"$unwind\"settledTransactions\"[Array][\n {\n _id: ObjectId(\"632c711adc4f85eba1b74911\"),\n nameUser: 'xxx',\n contactName: 'xxx',\n phone: 'xxx',\n email: 'xxx',\n password: 'xxx',\n role: 'xxx',\n city: 'xxx',\n postCode: 'xxx',\n address: 'xxx',\n creationDate: 'xxx',\n bindedSuperAdmin: 'xxx',\n bindedBanque: 'xxx',\n bindedClient: 'xxx',\n longitude: 0,\n latitude: 0,\n localisation: 'xxx',\n region: 'xxx',\n tpesBySite: [\n {\n _id: ObjectId(\"62cd33dda9b59d77cffef126\"),\n sn: 'xxx',\n terminalId: 'xxx',\n merchantId: 'xxx',\n merchantName: 'xxx',\n networkType: '',\n binded: true,\n longitude: 'xx',\n latitude: 'xx',\n bindedBank: 'xx',\n bindedClient: 'xx',\n bindedSite: 'xx',\n region: 'xx'\n }\n ],\n settlementsByUser: {\n _id: ObjectId(\"632c711adc4f85eba1b74912\"),\n bReconcileError: 'string',\n batchNumber: 'string',\n btransfered: 'string',\n countryCode: 'string',\n currencyCode: 'string',\n merchantID: 'string',\n nSettlementAmount: 'string',\n onlineMessageMACerror: 'string',\n reconciliationAdviceRRN: 'string',\n reconciliationApprovalCode: '',\n settledTransactions: [\n {\n appName: 'string',\n cardInputMethod: 'string',\n cardPAN_PCI: 'string'\n },\n {\n appName: 'string2',\n cardInputMethod: 'string',\n cardPAN_PCI: 'string'\n },\n {\n appName: 'string3',\n cardInputMethod: 'string',\n cardPAN_PCI: 'string'\n }\n ],\n settlementAmount: 'string',\n settlementDate: 'string',\n settlementTime: 'string',\n terminalID: 'xxx',\n traceNumber: 'string',\n uniqueID: 'string'\n }\n }\n]\n\"settledTransaction\"$unwind$unwind'$settlementsByUser.settledTransactions'{Array]",
"text": "Hi @skander_lassouedThank you for providing the detailed information.I ran the following pipeline on my test environment using the sample documents you provided (with slightly altered values so that the $lookup stages joined the sample documents from each collection):Note: The above does not include the final $unwind on \"settlementsByUser.settledTransactions\"as you can see settled transaction still not appeared so I need to unwind this field settledTransactions which allows me to return all data correctlyAfter this first $unwind on \"settledTransactions\", you write that \"settled transacation still not appeared. Could you clarify what you mean here? (i.e. Do you mean the [Array] value that is shown in the output is not what you are expecting and that you are expecting to see all the objects within) The output from the above pipeline I used is shown below as reference:Note: Some fields in the \"settledTransaction\" array’s elements were redacted for readabilityFinally once I do { $unwind: ‘$settlementsByUser.settledTransactions’ }, it duplicates settlementsByUser depends on number of object inside that field This is expected behaviour of $unwind. Are you wanting to $unwind '$settlementsByUser.settledTransactions' because it is showing as {Array]?Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Thanks for replying @Aasawari ,\nYes Even graphql Does not show data until I $unwind the ‘$settlementsByUser.settledTransactions’",
"username": "skander_lassoued"
},
{
"code": "",
"text": "@Aasawari could you reply me what’s the final result of your test I’m suck ",
"username": "skander_lassoued"
},
{
"code": "$unwind'$settlementsByUser.settledTransactions'{Array][Array]mongosh'inspectDepth'DB> db.depth.find() /// output shows field `a` array elements\n[\n {\n _id: ObjectId(\"6392b2170a04333a461fec1b\"),\n a: [\n 1, 2, 3, 4, 5,\n 6, 7, 8, 9, 10\n ]\n }\n]\n\nDB> config.get('inspectDepth') /// inspectDepth of 100\n100\n\nDB> config.set('inspectDepth',1) /// inspectDepth changed to 1\nSetting \"inspectDepth\" has been changed\n\nDB> db.depth.find()\n[ { _id: ObjectId(\"6392b2170a04333a461fec1b\"), a: [Array] } ] /// field 'a' value now shows as [Array] for same document\n[Array][Array]",
"text": "Hi Skander,Yes Even graphql Does not show data until I $unwind the ‘$settlementsByUser.settledTransactions’I believe this is in reference to Aasawari’s below question but please correct me if I am wrong here:Are you wanting to $unwind '$settlementsByUser.settledTransactions' because it is showing as {Array] ?I believe the data does exist within the [Array] but is just not being expanded (at least for mongosh depending on the 'inspectDepth' value). As an example:I understand you have mentioned this output [Array] is what you are also seeing in GraphQL however I am not too familiar with GraphQL but would you be able to provide some steps to replicate the output you’re achieving via GraphQL? Perhaps any documentation you have followed would be good here.Lastly, my understanding here is that the [Array] contents or data does exist but is just not expanded. Is the use case to expand this just for troubleshooting purposes?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "@Jason_Tran thanks for your time,\nIt’s not for troubleshooting purpose, I want to get the data to show it in my application I really stuck ",
"username": "skander_lassoued"
},
{
"code": "[Object][Array][Objects][Array]$unwind[Object]String cannot represent value$lookupString cannot represent value",
"text": "Let’s take a step back and determine the main issue here.I believe the main facts are:Now the issues are:Am I understanding this correctly so far?If this is correct, then:As a first step toward resolving this, may I suggest you to experiment with your GraphQL query when the result is expected to have nested objects, and work one nesting level at a time, to ensure that the query involved is behaving correctly when nested JSON is to be returned.Regards,\nJason",
"username": "Jason_Tran"
},
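For what it’s worth, the “String cannot represent value” error usually means the GraphQL schema declares a field as String while the resolver returns objects. Below is a minimal, hypothetical NestJS code-first sketch (type and field names are assumptions, not taken from the original project) that declares settledTransactions as a list of an object type, so the nested documents from the aggregation can be returned without coercing them into strings:

```typescript
import { Field, ObjectType } from '@nestjs/graphql';

@ObjectType()
class SettledTransaction {
  @Field({ nullable: true }) appName?: string;
  @Field({ nullable: true }) transactionAmount?: string;
  @Field({ nullable: true }) transactionDate?: string;
}

@ObjectType()
export class SettlementByUser {
  @Field({ nullable: true }) terminalID?: string;
  @Field({ nullable: true }) settlementAmount?: string;

  // Declared as a list of an object type, not String, so GraphQL can
  // serialize the nested array returned by the aggregation as-is.
  @Field(() => [SettledTransaction], { nullable: true })
  settledTransactions?: SettledTransaction[];
}
```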
{
"code": "",
"text": "Thanks for your efforts @Jason_Tran I really appreciate, I’m going to check then come back to give results ",
"username": "skander_lassoued"
},
{
"code": " settledTransactions: {\n $map: {\n input: '$settlementsByUser.settledTransactions',\n as: 'transaction',\n in: {\n appName: '$$transaction.appName',\n cardInputMethod: '$$transaction.cardInputMethod',\n cardPAN_PCI: '$$transaction.cardPAN_PCI',\n networkName: '$$transaction.networkName',\n onlineApprovalCode: '$$transaction.onlineApprovalCode',\n onlineRetrievalReferenceNumber:\n '$$transaction.onlineRetrievalReferenceNumber',\n transactionAmount: '$$transaction.transactionAmount',\n transactionDate: '$$transaction.transactionDate',\n transactionTime: '$$transaction.transactionTime',\n transactionType: '$$transaction.transactionType',\n },\n },\n },\n\n",
"text": "Finally I got it This how it works perfectly Thank you all",
"username": "skander_lassoued"
}
]
| How to unwind a nested array of objects | 2022-11-14T15:43:03.294Z | How to unwind a nested array of objects | 9,732 |
null | [
"indexes"
]
| [
{
"code": "",
"text": "Hi, suppose I have “x” number of documents in my collection. I create an index (regular or complex) for said collection. After certain period of time I add another “y” number of document to said collection. Do I have to reindex the entire collection or the added documents will be indexed automatically? I could not find a clear answer in documentation. Thank you!!",
"username": "ramat"
},
{
"code": "",
"text": "Do I have to reindex the entire collection or the added documents will be indexed automatically?No you do not have to reindex, new or updated documents are added to the index.",
"username": "steevej"
},
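A small mongosh sketch of this behaviour, using a hypothetical collection and field (not from the question above): the index is created once, and documents inserted afterwards are visible through it with no reindexing step.

```javascript
// Create the index once on the existing documents.
db.orders.createIndex({ customerId: 1 });

// Documents added later are indexed automatically as part of the insert.
db.orders.insertOne({ customerId: 42, total: 99 });

// The new document is found via the same index (see the IXSCAN stage).
db.orders.find({ customerId: 42 }).explain("executionStats");
```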
{
"code": "",
"text": "Much appreciate the help!",
"username": "ramat"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Indexing of appended documents in the collection | 2022-12-15T17:52:24.361Z | Indexing of appended documents in the collection | 1,499 |
null | [
"replication",
"sharding"
]
| [
{
"code": "rs.conf()mongo --host <host_name> --port <port_num> --username <user> --authenticationDatabase <auth_db> -p <pwd>rs.conf()",
"text": "Hi everyone,I hope to be in the right category post.\nI am in trouble in a production environment where there’s a 3 replicated node sharded cluster, and I want to know the current replica set settings (for each of the 3 rs) with rs.conf() but when I try to log into a single instance using:\nmongo --host <host_name> --port <port_num> --username <user> --authenticationDatabase <auth_db> -p <pwd>\nI get redirected to the mongos, where the command rs.conf() and any replica set related command dont’t work.How can this happen, what’s the configuration that is preventing me to access the single instance, or more specifically the replica-set?\nIs this default to mongodb in newer versions?I can’t find anything at all.\nThank you in advance",
"username": "Sam555"
},
{
"code": "",
"text": "Your replicas,mongos must be runing on different ports\nAre you giving the correct port_num?\nTo connect to individual replica members give appropriate port and to connect to replicaset as whole you need to give replset name along with members\nCheck documentation for exact syntax",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I don’t have admin capabilities on these systems.\nI can tell port numbers are the same on each system but with different host names",
"username": "Sam555"
},
{
"code": "",
"text": "Are you giving correct hostnames?\nDo you know your replicasetname\nDid you try\nmongo --host replset/host1:port,host2:port, etc\nCheck documentation",
"username": "Ramachandra_Tummala"
}
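To make the distinction concrete, here is a hedged sketch of the two connection styles (hostnames, port, and replica set name are placeholders; replace them with the actual mongod shard-member host and port of your deployment, not the mongos router):

```javascript
// Single member of a shard's replica set (placeholders, adjust to your cluster):
//   mongo --host shard1-node1.example.net --port 27018 --username <user> --authenticationDatabase <auth_db> -p
// Whole replica set (replica set name + seed list):
//   mongo --host "myreplicaset/shard1-node1.example.net:27018,shard1-node2.example.net:27018" --username <user> --authenticationDatabase <auth_db> -p

// Once connected to a member (not a mongos), the replica set helpers work:
rs.conf()
rs.status()
```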
]
| Accessing to single instance in a Sharded Cluster, redirects me to mongos | 2022-12-15T14:31:52.169Z | Accessing to single instance in a Sharded Cluster, redirects me to mongos | 1,571 |
null | [
"database-tools",
"backup"
]
| [
{
"code": "",
"text": "Hi Team,I have 8 to 9 collections , I need to dump only 5 collections in single mongodump call. But as of now I am calling mongodump 5 times to export the collection document.",
"username": "Manjunath_Swamy"
},
{
"code": "",
"text": "Use excludeCollection\nCheck this thread",
"username": "Ramachandra_Tummala"
}
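For reference, a hedged example of that approach (database and collection names below are made up): one mongodump invocation for the whole database, repeating --excludeCollection for each collection you do not want, leaves only the collections you need.

```sh
# Dump database "mydb" once, skipping the unwanted collections.
mongodump --uri="mongodb://localhost:27017" --db=mydb \
  --excludeCollection=audit_logs \
  --excludeCollection=temp_cache \
  --excludeCollection=archived_orders \
  --excludeCollection=sessions \
  --out=./dump
```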
]
| Multiple Collection Mongodump | 2022-12-16T09:45:12.591Z | Multiple Collection Mongodump | 1,767 |
null | []
| [
{
"code": "",
"text": "I understand use of following stages as below. But, could not understand use of Shard_Filter stage. Can you help me?\nIXSCAN scans keys\nFETCH retrieves document based on keys\nSHARD_MERGE should combine results from multiple shards. Then, why does it shows execution time of this stage at every shard?",
"username": "Prof_Monika_Shah"
},
{
"code": "db.collection.explain('executionStats')",
"text": "Hi @Prof_Monika_Shah and welcome to the MongoDB community forum!!The following documentation for the explain results for MongoDB might help you with the explanations for the stages in detail.With regard to your question, from the link above:why does it shows execution time of this stage at every shard?Could you provide the output of db.collection.explain('executionStats') in question?Best Regards\nAasawari",
"username": "Aasawari"
}
]
| Shard_Merge , Shard_Filter, FETCH stages of execution plan | 2022-12-09T05:43:48.738Z | Shard_Merge , Shard_Filter, FETCH stages of execution plan | 1,088 |
null | [
"node-js",
"crud",
"mongoose-odm"
]
| [
{
"code": "const TodoSchema = new mongoose.Schema({\n user_id: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"users\",\n },\n categories: [\n {\n id: {\n type: mongoose.Schema.Types.ObjectId,\n },\n category: {\n type: String,\n },\n tasks: [\n {\n id: {\n type: mongoose.Schema.Types.ObjectId,\n },\n task: {\n type: String,\n },\n done: {\n type: Boolean,\n },\n },\n ],\n },\n ],\n});\nconst deletedTask = await Todos.updateOne(\n { user_id },\n {\n $pull: {\n categories: {\n tasks: {\n _id: taskToDelete,\n },\n },\n },\n }\n );\n",
"text": "I’ve tried several different onces, the closest I’ve gotten is the following:I someone can help me delete a certain task in the array I would highly appreciate it.",
"username": "Kamen_Kanchev"
},
{
"code": "router.delete(\"/:user_id/:category_id/:id\", async (req, res) => {\n // Define the user_id and category_id and prep them for the DB query\n const user_id = mongoose.Types.ObjectId(req.params.user_id);\n const taskToDelete = mongoose.Types.ObjectId(req.params.id);\n\n // Finding the user's todo list object\n const userTodoList = await Todos.findOne({ user_id });\n\n // Finding the index of the category where the task is located\n const categoryIndex = userTodoList.categories.findIndex(\n (curr) => curr._id.valueOf() === req.params.category_id\n );\n\n // Concatenate the key before the query, so there's no errors when creating it\n const keyValue = \"categories.\" + categoryIndex + \".tasks\";\n\n // Query and pull out the task by its id\n const deletedTaskConf = await Todos.updateOne(\n { user_id },\n {\n $pull: {\n [keyValue]: {\n _id: taskToDelete,\n },\n },\n }\n );\n\n res.send(deletedTaskConf );\n});\n",
"text": "I figured the axios query after dealing with it for 4-5 hours. For anyone else going through noSQL queries and updated, I feel you, here’s the solution with a delete request on the server side for the above pointed model:Since for some reason you can not pass the category index as a single digit number through params, I took the category_id and did another query to find its index and used it to make the key in the key: value to find the task array and $pull the task id that needed to get pulled out, hope this helps someone dealing wth the same nested arrays in objects and trying to navigate this. Let me know if this helped anyone, and also if you have a better solution please let me know, so the category index query can be remove if possible.",
"username": "Kamen_Kanchev"
}
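An alternative worth noting (an untested sketch that assumes the same schema and variable names as the posts above): the extra index lookup can usually be avoided by matching the category in the filter and using the positional $ operator, so a single updateOne both finds the category and pulls the task.

```javascript
// Match the user and the category in the filter, then let the positional
// operator point $pull at that category's tasks array.
// Assumes category_id and taskToDelete are ObjectId values.
const deletedTaskConf = await Todos.updateOne(
  { user_id, "categories._id": category_id },
  { $pull: { "categories.$.tasks": { _id: taskToDelete } } }
);
```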
]
| Can someone please help me create a mongoose query to delete a task in a tasks array when nested like the following todolist model | 2022-12-15T12:59:21.549Z | Can someone please help me create a mongoose query to delete a task in a tasks array when nested like the following todolist model | 1,941 |
null | [
"aggregation",
"atlas-search"
]
| [
{
"code": "nullequalsvaluetextquery",
"text": "How should I write a compound condition to match when a property equals null? Considering that equals only accepts boolean/objectId for value attribute and text only accepts strings for query attribute.",
"username": "German_Medaglia"
},
{
"code": "nullnull",
"text": "Hi @German_Medaglia,The null datatype is not currently supported by Atlas Search. There is currently a feedback post request to introduce the feature that allows indexing and querying of the null datatype in Atlas Search which you can vote for.Regards,\nJason",
"username": "Jason_Tran"
}
]
| Condition for checking if a property is null in $search compound | 2022-12-08T23:07:02.179Z | Condition for checking if a property is null in $search compound | 1,137 |
null | [
"aggregation",
"queries",
"python",
"views"
]
| [
{
"code": "{\n \"_id\": {\n \"$oid\": \"639a60\"\n },\n \"status\": \"inactive\",\n \"version\": \"0.1\",\n \"configuration\": {\n \"identifier\": \"backfill-test\",\n \"secondaryIdentifier\": \"platform_type\",\n \"sql\": \"select * from test where lastupd_ts >= '2021-09-10 18:00:00' and lastupd_ts <= '2021-09-10 19:00:00'\",\n \"steps\": [\n {\n \"service\": \"Publish\",\n \"order\": 1,\n \"configuration\": {\n \"topic\": \"platform-type\",\n \"type\": \"PlatformType\",\n \"action\": \"N\",\n \"keyDeserializer\": \"serializers.Kafka\",\n \"valueDeserializer\": \"serializers.Deserializer\"\n }\n }\n ]\n },\n \"name\": \"data-exporter-svc\"\n}\ndatabase.collection_name.aggregate(\n[\n {\"$match\": {\"configuration.identifier\": \"backfill-test\"}},\n {\n \"$project\":\n {\n \"configuration.sql\": {\"$replaceAll\": {\"input\": \"$configuration.sql\", \"find\": \"lastupd_ts >= \\'2021-09-10 18:00:00\\' and lastupd_ts <= \\'2021-09-10 19:00:00\\'\", \"replacement\": \"lastupd_ts >= \\'2024-00-10 18:00:00\\' and lastupd_ts <= \\'2024-00-10 19:00:00\\'\"}}\n }\n },\n {\n \"$merge\": \"collection_name\"\n },\n ])\n{\n \"_id\": {\n \"$oid\": \"639a60\"\n },\n \"status\": \"inactive\",\n \"version\": \"0.1\",\n \"configuration\": {\n \"sql\": \"select * from test where lastupd_ts >= '2024-00-10 18:00:00' and lastupd_ts <= '2024-00-10 19:00:00'\"\n },\n \"name\": \"data-exporter-svc\"\n}\n{\n \"_id\": {\n \"$oid\": \"639a60\"\n },\n \"status\": \"inactive\",\n \"version\": \"0.1\",\n \"configuration\": {\n \"identifier\": \"backfill-test\",\n \"secondaryIdentifier\": \"platform_type\",\n \"sql\": \"select * from test where lastupd_ts >= '2024-00-10 18:00:00' and lastupd_ts <= '2024-00-10 19:00:00'\",\n \"steps\": [\n {\n \"service\": \"Publish\",\n \"order\": 1,\n \"configuration\": {\n \"topic\": \"platform-type\",\n \"type\": \"PlatformType\",\n \"action\": \"N\",\n \"keyDeserializer\": \"serializers.Kafka\",\n \"valueDeserializer\": \"serializers.Deserializer\"\n }\n }\n ]\n },\n \"name\": \"data-exporter-svc\"\n}\n",
"text": "I have mongodb version 5+ and latest version of pymongo from pipI have query which update the $configuration.sql field perfectly fineIssue: While updating the below $configuration.sql all other fields are lost in the resultset due to $merge statement. can someone correct me if I am missing something I am probably new to mongo queries?INPUT DOCUMENT STRUCTUREQUERY USED:QUERY RESULTEXPECTED RESULT",
"username": "Shubham_Dubey1"
},
{
"code": " \"$project\":\n {\n \"configuration.sql\": {\"$replaceAll\": {\"input\": \"$configuration.sql\", \"find\": \"lastupd_ts >= \\'2021-09-10 18:00:00\\' and lastupd_ts <= \\'2021-09-10 19:00:00\\'\", \"replacement\": \"lastupd_ts >= \\'2024-00-10 18:00:00\\' and lastupd_ts <= \\'2024-00-10 19:00:00\\'\"}}\n }\n",
"text": "When you writeyou indicate that you are only interested in the sql field of the configuration object. To make it work I think you would need to $mergeObjects using the old configuration and the updated sql field.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Update, aggregation pipeline in pymongo with using $merge operation to update the collection which is deleting all existing fields in the nested field | 2022-12-15T18:43:55.638Z | Update, aggregation pipeline in pymongo with using $merge operation to update the collection which is deleting all existing fields in the nested field | 1,918 |
null | [
"queries",
"crud"
]
| [
{
"code": "{\n \"CurrentVersion\": 3,\n \"EntryHistory\": [\n {\n \"State\": 0,\n \"ProposalPlan\": [\n {\n \"Description\": \"Test\",\n \"State\": 1,\n \"Proposals\": [\n {\n \"Subject\": \"Test\",\n \"Body\": \"Test\",\n \"Urls\": [\n {\n \"Description\": \"Link text\",\n \"Address\": \"https://examplelink.com\"\n }\n ]\n }\n ]\n }\n ]\n }\n ]\n}\ndb.collectionName.updateMany(\n { \"ProposalPlan.State\": 1 },\n {\n $set: {\n \"ProposalPlan.State\": 3,\n \"ProposalPlan.Proposals.10.Urls.0.Address\": \"https://newlinkexample.com\"\n }\n }\n);\ndb.collectionName.updateMany(\n { \"ProposalPlan.State\": 1, \"ProposalPlan.Proposals.10.Urls.0.Address\": { $ne: null } },\n {\n $set: {\n \"ProposalPlan.State\": 3,\n \"ProposalPlan.Proposals.10.Urls.0.Address\": \"https://newlinkexample.com\"\n }\n }\n);\n",
"text": "I am working on a Mongo updateMany() query.A condensed example of a document in my collection:Please assume that my test data is just showing structure and not the actual size of the collection and the arrays.How can I write my updateMany() query to not error out if it encounters a null field in a document? I just want it to continue with updating documents if one is problematic.Here is the query I wrote:My problem is that when I run this query, some documents that meet the filter criteria are “corrupt” and have null or nonexistent Proposals and/or null or nonexistent Urls, so I am faced with an error such as “MongoServerError: Cannot create field ‘0’ in element {Urls: null}”.I have also tried wrapping the above query in a try catch, as I expected it to continue after a document throws an error, but I see that’s not how it works.I tried to add to the filters so that I am not even trying to update the corrupt documents to begin with:But none of this has worked so far. The above extra filter does not throw an error but nothing is updated, and when I try to use the filters with findOne() it just searches infinitely rather than grabbing one of the many records where ProposalPlan.State is 1 and ProposalPlan.Proposals.10.Urls.0.Address is not null.",
"username": "Parker_Finch"
},
{
"code": "\"ProposalPlan.Proposals.10.Urls.0.Address\": { $exists: true }\n\"ProposalPlan.Proposals.10.Urls.0.Address\": { $ne: null }",
"text": "Try withrather than\"ProposalPlan.Proposals.10.Urls.0.Address\": { $ne: null }",
"username": "steevej"
}
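Putting that together, a hedged sketch of the full command (same field paths as the question above; untested): using $exists in the filter skips documents where the nested path is missing or an intermediate value is null, so updateMany no longer trips over the corrupt documents.

```javascript
db.collectionName.updateMany(
  {
    "ProposalPlan.State": 1,
    // Only documents where the nested path actually exists are updated,
    // so null/absent Proposals or Urls no longer cause the error.
    "ProposalPlan.Proposals.10.Urls.0.Address": { $exists: true }
  },
  {
    $set: {
      "ProposalPlan.State": 3,
      "ProposalPlan.Proposals.10.Urls.0.Address": "https://newlinkexample.com"
    }
  }
);
```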
]
| Mongo Update Array & Skip Null Fields | 2022-12-15T18:14:44.630Z | Mongo Update Array & Skip Null Fields | 2,856 |
null | [
"python"
]
| [
{
"code": "pydantic_idmy_fieldpymongopymongobson.codec_options\ndef documentize(d:dict, nominal:str)->dict:\n d[_id] = d.pop(nominal)\n return d\n\ndef pythonize(d: dict, nominal:str) ->dict:\n d[nominal]= d.pop('_id')\n return d\npydantic",
"text": "I’m trying to take a python class - a pydantic model - and save it almost as is: the class does not have an _id field, so I want to either use a field annotation or some codec / ODM way to signal that a certain field is to be written as _id into mongo, then read as my_field when read from mongo (or from bson to dict, or something like that. So with this in mind:Looks like SONManipulatior is going out of style - deprecated.bson.codec_options seem to be intended for scalar types, whereas my class is not a scaler .A pair of wrappers can certainly do the trick:But the syntax pollution around every entry / exit into a CRUD command seems suboptimal.I have gotten around this using some pydantic strong-arming, but pydantic is neither an ODM nor a storage translation layer, so I’m looking for way to do it “right” using the driver.",
"username": "Nuri_Halperin"
},
{
"code": "",
"text": "Hi @Nuri_Halperin, there is no pymongo feature to accomplish this kind of document level translation. The solutions you’ve mentioned seem reasonable to me: wrap/unwrap around CRUD methods or use pydantic to rewrite the field. You could also consider renaming the field to be _id. Feel free to open a feature request in our issue tracker: https://jira.mongodb.org/projects/PYTHON",
"username": "Shane"
},
{
"code": "_idmy_model.dict()dict",
"text": "Thanks!The whole issue arises from the fact that a pydantic model field _id will be silently ignored. Calling my_model.dict() will not return it in the dict, and any repr or listing of fields will skip it.As for a feature request, seems that SONManipulator is going out of fashion, seems prudent to know why before requesting (partial? similar?) re-introduction. Seems most related to [PYTHON-2733] Allow decoding/deserializing BSON container types to custom Python types - MongoDB JiraI built some tooling for this, but have not yet published the package here: pydanticmongo on Github",
"username": "Nuri_Halperin"
},
{
"code": "",
"text": "There were a number of reason that led us to deprecate SONManipulator. If I recall correctly, one was that it only worked on the Database level when ideally such a ODM like serialization layer should work using only the bson module (via CodecOptions). Another reason was performance problems inherent to its design which caused it to be slow with large nested/documents. When designing a new feature we would keep these design issues in mind.PYTHON-2733 is quite related but I believe it would be good to file this (field renaming) as a separate feature. We would likely group these features together into a ODM-lite project.",
"username": "Shane"
},
{
"code": "",
"text": "Thank you for the helpful insights!",
"username": "Nuri_Halperin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Is there a codec or annotation way to map a python class field to a different field in the mongo document? | 2022-12-12T05:32:27.864Z | Is there a codec or annotation way to map a python class field to a different field in the mongo document? | 2,009 |
null | [
"replication",
"java",
"containers",
"spring-data-odm"
]
| [
{
"code": "",
"text": "My Spring boot app is able to connect to Mongo DB and running fine as standalone app.\nHowever, if the same app is set to run in Docker, the DB connection fails with exception ‘java.net.ConnectException’My Setup :\nSpring boot App is running in docker, Mongo is outside of Docker.\nSpring boot version : 2.7.4\nJava Version\t\t: 11\nMongo DB version \t: v4.2.21 ( Running outside of Docker). Local and Stage environments have standalones installations with replicaSet.(PROD is more comprehensive)mongodb URL from application.properties\nspring.data.mongodb.uri=mongodb://host.docker.internal,localhost,127.0.0.1\nspring.data.mongodb.authentication-database=admin\nspring.data.mongodb.database=xxxxxxx\nspring.data.mongodb.replica-set-name=myreplicasetClusterDescription when running as Standalone {type=REPLICA_SET, connectionMode=SINGLE, serverDescriptions=[ServerDescription{address=127.0.0.1:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=13735215, setName=‘myreplicaset’, canonicalAddress=localhost:27017, hosts=[localhost:27017], passives=, arbiters=, primary=‘localhost:27017’, tagSet=TagSet{}, electionId=7fffffff0000000000000028, setVersion=1, topologyVersion=null, lastWriteDate=Tue Dec 13 16:29:48 EST 2022, lastUpdateTimeNanos=3150768696200}]}ClusterDescription when running inside a Docker Container {type=UNKNOWN, connectionMode=SINGLE, serverDescriptions=[ServerDescription{address=127.0.0.1:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]}pom.xml entriesCan anyone tell me the reason for connection issues?Thanks in advance,\nGana",
"username": "Ganapathi_Vaddadi"
},
{
"code": "",
"text": "Is mongodb running inside the container?\nIf not, perhaps 127.0.0.1 does not refer to the outer host if used from within the container.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "No. Mongo is not running inside the container.",
"username": "Ganapathi_Vaddadi"
},
{
"code": "localhost",
"text": "Well, I could be wrong here, but I don’ t think localhost (127.0.0.1) from within the container refers to the localhost hosting the container.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I could be wrong hereYou are not wrong.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you for the explanation. So, instead of ‘localhost’, should I be using hostname, IP address ?",
"username": "Ganapathi_Vaddadi"
},
{
"code": "",
"text": "Yes, and of course, you’ll need to configure mongod to be listening on the external interface as well as its 127.0.0.1",
"username": "Jack_Woehr"
}
]
| Containerized Spring boot App unable to connect to MongoDB | 2022-12-15T15:46:22.380Z | Containerized Spring boot App unable to connect to MongoDB | 6,770 |