image_url | tags | discussion | title | created_at | fancy_title | views
---|---|---|---|---|---|---|
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "Hi! One of the services I am using for email marketing (Vero) allows you to pull in data via an API. What I want to be able to do is to use a MongoDbRealm function build that api allowing Atlas data to be included in my customers emails.Unforunately the service only supports HTTP Basic Authentication (http://username:[email protected]/data-feed.json). From what I am not sure this is supported by MongoDbRealm. Ideally I would use API keys but it doesn’t seem to be supported. Can custom function authentication be used for this? If so how?",
"username": "Simon_Persson"
},
{
"code": "",
"text": "Hi @Simon_Persson,I would like to help but I don’t think I understand the use case…You need a function to execute a simple auth http request to your email service?The context.http should be able to do it easily:If you need further guidance describe your requirements and link to your realm application.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Let me see if I can explain it better.I basically want to do the opposite. I want the external service to be able to pull data from MongoDb via a MongoDb Realm function. And the external service only supports executing API calls using Http Basic Auth.Some additional details on the use case:\nVero is email automation service. Based on user actions it can trigger sending emails. To personalize these emails it can execute an api call just prior to sending the message and populate the email with data. So I basically want Vero to be able to fetch data from MongoDb Realm before sending the email.Did that make the use case more clear? Is it possible to do this?",
"username": "Simon_Persson"
},
{
"code": "response.setBody(....)",
"text": "Hi @Simon_Persson,Well it is possible if you expose the data via a webhook function.You can use a secret or use application authentication (which is similar to basic auth with a predefined username and password)Now in the webhook function you can query your Atlas service and use response.setBody(....) to return a JSON you need.Specify the webhook url with creds in your email service.Let me know if that helps.Kind regards\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "getStudents accountId:${accountId}const query = { accountId:accountId };\nconst projection = { \"_id\": 1, \"studentId\":\"$_id\", \"firstName\":\"$firstName\",\"lastName\":\"$lastName\", \n \"accountId\":\"$accountId\"};\nresponse.setHeader(\"Content-Type\",\"application/json\");\n\nawait students.find(query, projection).toArray()\n.then(result => {\n if(result) {\n response.setStatusCode(200);\n //response.setBody(`{\"students\":${result}`);\n response.setBody(`{\"students\":${JSON.stringify(result)}}`);\n }\n else {\n console.log(\"students not found:\",JSON.stringify(result));\n response.setStatusCode(404);\n response.setBody(`{message:\"No students not found for given criteria\"}`);\n }\n}).catch(err => {\n console.log(\"error getting students:\",err);\n response.setStatusCode(500);\n response.setBody(`{error:${err}}`);\n})\n",
"text": "I have something similar. Unfortunately the documentation is not very detailed and Realm still appears kind of Beta-ish to me. I am not sure if my solution is something Mongo advises. They seem bent on pushing GraphQL, but I want a “RESTful” API.Instead of a Real, “Function”, here is what I have been doing\nSelect “3rd Partly Services”\nClick the “Add Service” button.\nSelect HTTP and give it a name (RESTService)\nClick the “Add incoming webHook” button\nChose an HTTP Method you wish to expose (i.e. GET)\nClick the Function Editor Tab\nWrite you function to extract the data you needTo execute this webHook, copy webhookUrl and add your parms\nUse the browser, Postman, Fiddler or whatever gives you the most pleasure Note: You can add authentication, but that’s a whole different topicHere is an example function I wrote to extract data:exports = async function getStudents(payload, response) {\nconst {accountId } = payload.query;\nconst db = context.services.get(“mongodb-atlas”).db(“studentsDB”);\nconst students = db.collection(“students”);\nconsole.log(getStudents accountId:${accountId});}“3rd Party Service” is a misnomer in my suggestion because your Mongo DB instance is the 3rd Party.Hope this helpsYou are not actually calling a 3rd Party Service, but instead",
"username": "Herb_Ramos"
},
{
"code": "",
"text": "Hi @Herb_Ramos,Thanks for the example. Realm Webhooks were formally known as Stitch webhooks and are GA for over 2 years now.Since the engine is versatile its hard to document all use case types. We are constantly working on improving and adding templates and snippets.@Simon_Persson if you have issues porting this example for your use let us know.Kind regards\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you! I’ll definitely give the webhooks a try and report back I have been on Realm for years, but I am new to Atlas… Excited about what the cloud functions and webhooks bring to the table ",
"username": "Simon_Persson"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Call MongoDb Realm Function from external service using HTTP Basic Authentication | 2020-07-29T14:28:33.706Z | Call MongoDb Realm Function from external service using HTTP Basic Authentication | 4,201 |
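A minimal sketch of the secret-authenticated webhook approach described in this thread. The webhook is assumed to be configured on a Realm HTTP service with "Secret" authentication (the caller appends ?secret=... to the URL, which suits services that can only put credentials in the URL); the database, collection, and field names are placeholders, not the original poster's schema:

```javascript
// Incoming webhook function on a Realm HTTP service ("Secret" authentication).
// "shop", "customers", and "email" below are placeholder names; adapt them.
exports = async function (payload, response) {
  const { email } = payload.query;

  const customers = context.services
    .get("mongodb-atlas")
    .db("shop")
    .collection("customers");

  const doc = await customers.findOne({ email: email });

  response.setHeader("Content-Type", "application/json");
  if (doc) {
    response.setStatusCode(200);
    response.setBody(JSON.stringify(doc));
  } else {
    response.setStatusCode(404);
    response.setBody(JSON.stringify({ message: "no data found for " + email }));
  }
};
```

The external email service would then call the webhook URL shown in the Realm UI with the secret and any lookup parameters appended as query-string values.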
null | [
"stitch"
] | [
{
"code": "",
"text": "Just followed the instructions to create the client-secret required by Apple Sign In for Stitch. However I get the error saying “clientSecret length should be less than or equal to 300” when saving the stitch configurations. I don’t think there is any restrictions from Apple or from JWT side that it has to be less than 300. Could you lift this restriction because it doesn’t seem to be necessary.",
"username": "Alex_Wu"
},
{
"code": "",
"text": "I’m also running into the same issue.Provider: oauth2-apple: clientSecret length should be less than or equal to 300Could someone help with this step?thanks in advance,\nRam",
"username": "Ram_Sundaram"
},
{
"code": "",
"text": "Hi @Ram_Sundaram and @Alex_Wu can you let us know what the current char count that you are hitting, is?",
"username": "Sumedha_Mehta1"
},
{
"code": "cat ~/mongoDbRealm/appleSignIn/client_secret.txt | wc -c",
"text": "My client secret file showed a count of 308 when I used wc on the command line to checkcat ~/mongoDbRealm/appleSignIn/client_secret.txt | wc -c",
"username": "Ram_Sundaram"
},
{
"code": "",
"text": "@Ram_Sundaram - We’ve extended the limit and the fix should be out by the end of this week.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Update: This fix is in and the new limit is 500.",
"username": "Sumedha_Mehta1"
}
] | Stitch Apple Sign In | 2020-03-27T02:48:17.592Z | Stitch Apple Sign In | 3,415 |
null | [] | [
{
"code": "db version v4.2.8\ngit version: 43d25964249164d76d5e04dd6cf38f6111e21f5f\nOpenSSL version: OpenSSL 1.1.1d 10 Sep 2019\nallocator: tcmalloc\nmodules: none\nbuild environment:\n distmod: debian10\n distarch: x86_64\n target_arch: x86_64\n2020-07-27T19:01:32.628+0000 I COMMAND [conn3146] command smartcob_3_200.importados appName: \"mongodump\" command: getMore { getMore: 1975688055802701079, collection: \"importados\", lsid: { id: UUID(\"e386989c-4ca0-4007-9806-d611e20d3b04\") }, $db: \"smartcob_3_200\" } originatingCommand: { find: \"importados\", filter: {}, lsid: { id: UUID(\"e386989c-4ca0-4007-9806-d611e20d3b04\") }, $db: \"smartcob_3_200\" } planSummary: COLLSCAN cursorid:1975688055802701079 keysExamined:0 docsExamined:11307 numYields:89 nreturned:11307 reslen:16776566 locks:{ ReplicationStateTransition: { acquireCount: { w: 90 } }, Global: { acquireCount: { r: 90 } }, Database: { acquireCount: { r: 90 } }, Collection: { acquireCount: { r: 90 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 16761457, timeReadingMicros: 152636 } } protocol:op_msg 172ms\n2020-07-27T19:01:32.871+0000 I COMMAND [conn3144] command smartcob_3_200.importados appName: \"mongodump\" command: getMore { getMore: 1975688055802701079, collection: \"importados\", lsid: { id: UUID(\"e386989c-4ca0-4007-9806-d611e20d3b04\") }, $db: \"smartcob_3_200\" } originatingCommand: { find: \"importados\", filter: {}, lsid: { id: UUID(\"e386989c-4ca0-4007-9806-d611e20d3b04\") }, $db: \"smartcob_3_200\" } planSummary: COLLSCAN cursorid:1975688055802701079 keysExamined:0 docsExamined:11327 numYields:88 nreturned:11327 reslen:16775970 locks:{ ReplicationStateTransition: { acquireCount: { w: 89 } }, Global: { acquireCount: { r: 89 } }, Database: { acquireCount: { r: 89 } }, Collection: { acquireCount: { r: 89 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 16797002, timeReadingMicros: 136238 } } protocol:op_msg 154ms\n2020-07-27T19:01:33.043+0000 I NETWORK [conn3145] end connection 127.0.0.1:49374 (113 connections now open)\n2020-07-27T19:01:33.043+0000 I NETWORK [conn3146] end connection 127.0.0.1:49376 (112 connections now open)\n2020-07-27T19:01:33.043+0000 I NETWORK [conn3144] end connection 127.0.0.1:49372 (111 connections now open)\n2020-07-27T19:01:33.044+0000 I NETWORK [conn3143] end connection 127.0.0.1:49370 (110 connections now open)\n2020-07-27T19:01:33.044+0000 I NETWORK [conn3142] end connection 127.0.0.1:49368 (109 connections now open)\n2020-07-27T19:01:47.971+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends\n2020-07-27T19:01:48.084+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...\n2020-07-27T19:01:48.084+0000 I NETWORK [listener] removing socket file: /tmp/mongodb-27017.sock\n2020-07-27T19:01:48.087+0000 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.\n2020-07-27T19:01:48.092+0000 I CONTROL [signalProcessingThread] Shutting down free monitoring\n2020-07-27T19:01:48.094+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture\n2020-07-27T19:01:48.113+0000 I STORAGE [signalProcessingThread] Deregistering all the collections\n2020-07-27T19:01:49.777+0000 I STORAGE [signalProcessingThread] Timestamp monitor shutting down\n2020-07-27T19:01:49.780+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down\n2020-07-27T19:01:49.780+0000 I STORAGE [signalProcessingThread] Shutting down session sweeper 
thread\n2020-07-27T19:01:49.780+0000 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread\n2020-07-27T19:01:49.780+0000 I STORAGE [signalProcessingThread] Shutting down journal flusher thread\n2020-07-27T19:01:49.860+0000 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread\n2020-07-27T19:01:49.860+0000 I STORAGE [signalProcessingThread] Shutting down checkpoint thread\n2020-07-27T19:01:49.861+0000 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread\n2020-07-27T19:01:49.889+0000 I STORAGE [signalProcessingThread] Downgrading WiredTiger datafiles.\n2020-07-27T19:01:51.455+0000 I STORAGE [signalProcessingThread] WiredTiger message [1595876511:455212][444:0x7f8ba7e62700], txn-recover: Recovering log 999 through 1000\n2020-07-27T19:01:51.460+0000 I STORAGE [signalProcessingThread] WiredTiger message [1595876511:460761][444:0x7f8ba7e62700], txn-recover: Recovering log 1000 through 1000\n2020-07-27T19:01:51.664+0000 I STORAGE [signalProcessingThread] WiredTiger message [1595876511:664796][444:0x7f8ba7e62700], txn-recover: Main recovery loop: starting at 999/99623296 to 1000/256\n2020-07-27T19:01:51.740+0000 I STORAGE [signalProcessingThread] WiredTiger message [1595876511:740506][444:0x7f8ba7e62700], txn-recover: Recovering log 999 through 1000\n2020-07-27T19:01:51.745+0000 I STORAGE [signalProcessingThread] WiredTiger message [1595876511:745630][444:0x7f8ba7e62700], txn-recover: Recovering log 1000 through 1000\n2020-07-27T19:01:51.813+0000 I STORAGE [signalProcessingThread] WiredTiger message [1595876511:813085][444:0x7f8ba7e62700], txn-recover: Set global recovery timestamp: (0, 0)\n2020-07-27T19:01:51.844+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...\n2020-07-27T19:01:52.945+0000 I CONTROL [signalProcessingThread] now exiting\n2020-07-27T19:01:52.946+0000 I CONTROL [signalProcessingThread] shutting down with code:0\n",
"text": "Today my mongod instance just shut down, no idea why. I’ve been working with Mongodb for several years, but I’ve never seen this.The instance is running in Digital Ocean and the applications that connect to this instance are in the same server.I’m running version 4.2.8 on debian:Mongod:The logs shows this but there seems to be no reason for the shutdown:I’m totally lost. Any help would be appreciated.",
"username": "Nicolas_Riesco"
},
{
"code": "CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends\n",
"text": "Hi @Nicolas_Riesco,The signal 15 means that someone or something issued a graceful kill commandCheck /var/log/messages or other system logs to understand who issued the command.Can it be part of cloud maintenance?Unfortunately the logs won’t reveal much, thus database audit is useful as part of the enterprise version.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "appName: \"mongodump\"",
"text": "Thanks for your response @Pavel_DuchovnyI’m starting to thing that the shutdown was not the problem. I did reboot the server to avoid service interruption, this is actually not a “Mongo shutting down” issue.The lines before the shutdown show appName: \"mongodump\" and I do have a mongodump script running every x hours.So my problem should be that somehow mongodump causes the db to be unresponsive, so I should move to another topic on how to make my backups so they don’t interrupt any services.Thanks a lot!As I dig into the logs the problem might be another one, it might be just before this, when doing a backup of the DB.",
"username": "Nicolas_Riesco"
},
{
"code": "",
"text": "@Nicolas_Riesco,Is that a replica set or a standalone host?Using mongodump for backups might be tricky as it needs to read all data through the memory. Also you need to export the oplog to make it consistent.If that is a replica set I would recommend using filesystem snapshots instead , and consider using our cloud manager backup for long termBest regards\nPavel",
"username": "Pavel_Duchovny"
}
] | Mongod shutting down | 2020-07-27T21:16:35.246Z | Mongod shutting down | 9,535 |
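For reference, a consistent dump of a replica set generally means capturing the oplog alongside the data, as hinted at in the last reply. A hedged sketch with hypothetical replica set name, hosts, and paths (not taken from this deployment):

```sh
# Dump the cluster, preferring a secondary so the primary stays responsive.
# --oplog records oplog entries taken during the dump so the archive is
# consistent to a single point in time.
mongodump --host "rs0/db1.example.net:27017,db2.example.net:27017" \
  --readPreference=secondary --oplog --gzip --out "/backups/$(date +%F)"

# Restoring replays those captured oplog entries to reach that point in time.
mongorestore --host "rs0/db1.example.net:27017" \
  --oplogReplay --gzip /backups/2020-07-28
```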
null | [
"aggregation",
"java"
] | [
{
"code": "$set",
"text": "I need to use the $set operator (aggregation) on Java, please someone can help me",
"username": "Hamilton_Smith_Carva"
},
{
"code": "$set$settestColl{ _id: 1, firstName: \"John\", lastName: \"Doe\" }fullName\"Doe, John\"firstNamelastNameMongoCollectionBsonListMongoCollection<Document> collection = mongoClient.getDatabase(\"test\")\n .getCollection(\"testColl\");\nBson query = new Document(\"_id\", 1);\nList<Bson> update = Arrays.asList(\n Filters.eq(\"$set\", \n Filters.eq(\"fullName\", \n Filters.eq(\"$concat\", \n Arrays.asList(\"$lastName\", \", \", \"$firstName\")\n )\n )\n )\n);\n\nUpdateResult result = collection.updateOne(query, update);\n{ \"_id\" : 1, \"firstName\" : \"John\", \"lastName\" : \"Doe\", \"fullName\" : \"Doe, John\" }",
"text": "Hello @Hamilton_Smith_Carva, welcome to the forum.The Updates with Aggregation Pipeline feature is introduced with MongoDB v4.2. The $set is one of the pipeline stages used with this feature. This feature allows using a pipeline with stages and Aggregation Operators in transforming the data to be updated to the document.I will help you with the usage of $set and with Java. Please do provide what is the collection document and its fields you are trying to update (and also in what way) using the pipeline. Or, is it just you are looking for an example?Here is an example:A document in a collection testColl:{ _id: 1, firstName: \"John\", lastName: \"Doe\" }And, lets update this document with a new field fullName with a value \"Doe, John\". Note the new field is a concatenation of the two existing fields, firstName and lastName.The Java code for this update operation with the Aggregation Pipeline uses the updateOne(Bson filter, List<? extends Bson> update) method of MongoCollection. Note the second parameter is a Bson List which describes the pipeline.The updated document:{ \"_id\" : 1, \"firstName\" : \"John\", \"lastName\" : \"Doe\", \"fullName\" : \"Doe, John\" }How can you build the pipeline, in Java?You can use the MongoDB Compass’s Aggregation Pipeline Builder and Export Pipeline to Specific Language.",
"username": "Prasad_Saya"
},
{
"code": "salesproducts{\n \"_id\" : \"292\",\n \"sales\" : [\n {\n \"products\" : [\n {\n \"cant\" : 28,\n \"costTotal\" : 3693200,\n \"costUnid\" : 131900,\n \"name\" : \"Marcadores Edding Color Happy Box Ch20+1 Rotuladores\"\n },\n {\n \"cant\" : 99,\n \"costTotal\" : 3960000,\n \"costUnid\" : 40000,\n \"name\" : \"Set Plumón Y Plumigrafo Stabilo - Unidad A $3250\"\n },\n {\n \"cant\" : 10,\n \"costTotal\" : 30000,\n \"costUnid\" : 3000,\n \"name\" : \"Borrador Miga De Pan Mp20 X 2 Pelikan\"\n },\n {\n \"cant\" : 91,\n \"costTotal\" : 77987000,\n \"costUnid\" : 857000,\n \"name\" : \"Calculadora Fx9860 Graficadora Gll Casio Memoria Sd\"\n },\n {\n \"cant\" : 15,\n \"costTotal\" : 487500,\n \"costUnid\" : 32500,\n \"name\" : \"Resaltador Colores Pastel X6 Original Boss Stabilo\"\n }, \n ...\n ] \n {\n \"products\" : [\n ...\n ]\n }, \n ...\n ]\n}\nColeccionPrueba$setdb.ColeccionPrueba.aggregate(\n [\n {\n $match: {\n scores: { $exists: true },\n texts: { $exists: true }\n }\n },\n {\n $project: {\n name: true,\n secondName: true,\n lastName: true,\n secondLastName: true,\n scores: true,\n texts: true\n }\n },\n {\n $set: {\n avg: { $avg: '$scores' }\n }\n },\n {\n $match: {\n $and: [\n {\n avg: { $gte: 7 }\n },\n {\n texts: /verde valle rodeado de monta/\n }\n ]\n }\n },\n {\n $project:{\n texts: false\n }\n },\n {\n $sort: {\n _id: 1\n }\n }\n ]\n)\n",
"text": "Part 2And in the sales field, it has the bellow products subfield:I want to query the collection ColeccionPrueba using the $set (aggregation) operator, i have the bellow query:And MongoDB is responding with:{ “_id” : “104”, “lastName” : “Flórez”, “name” : “Adelaide”, “scores” : [ 8 ], “secondLastName” : “Díaz”, “secondName” : “Regina”, “avg” : 8 }\n{ “_id” : “138”, “lastName” : “Arias”, “name” : “Adelaide”, “scores” : [ 7, 9, 8, 5, 4, 9, 9, 5 ], “secondLastName” : “Mejía”, “secondName” : “Eleanor”, “avg” : 7 }\n{ “_id” : “279”, “lastName” : “Moreno”, “name” : “Santiago”, “scores” : [ 9 ], “secondLastName” : “Hernández”, “secondName” : “Matías”, “avg” : 9 }\n{ “_id” : “299”, “lastName” : “Cardona”, “name” : “Mario”, “scores” : [ 7 ], “secondLastName” : “Rodríguez”, “secondName” : “David”, “avg” : 7 }\n{ “_id” : “415”, “lastName” : “Osorio”, “name” : “Alice”, “scores” : [ 9, 5, 9 ], “secondLastName” : “Rojas”, “secondName” : “Valeria”, “avg” : 7.666666666666667 }\n{ “_id” : “426”, “lastName” : “Jiménez”, “name” : “Mario”, “scores” : [ 7 ], “secondLastName” : “González”, “secondName” : “Santiago”, “avg” : 7 }\n{ “_id” : “428”, “lastName” : “Giraldo”, “name” : “Álvaro”, “scores” : [ 8 ], “secondLastName” : “Rojas”, “secondName” : “Diego”, “avg” : 8 }\n{ “_id” : “447”, “lastName” : “Marín”, “name” : “Daniela”, “scores” : [ 7, 5, 9 ], “secondLastName” : “Parra”, “secondName” : “Violet”, “avg” : 7 }\n{ “_id” : “502”, “lastName” : “Cárdenas”, “name” : “Valentín”, “scores” : [ 7 ], “secondLastName” : “Restrepo”, “secondName” : “Marcos”, “avg” : 7 }\n{ “_id” : “516”, “lastName” : “González”, “name” : “Chloe”, “scores” : [ 7 ], “secondLastName” : “Parra”, “secondName” : “Valeria”, “avg” : 7 }\n{ “_id” : “59”, “lastName” : “Parra”, “name” : “Verónica”, “scores” : [ 8 ], “secondLastName” : “Ramírez”, “secondName” : “Amalia”, “avg” : 8 }\n{ “_id” : “633”, “lastName” : “Álvarez”, “name” : “Erick”, “scores” : [ 9, 5, 9, 6 ], “secondLastName” : “Mejía”, “avg” : 7.25 }\n{ “_id” : “664”, “lastName” : “Rincón”, “name” : “Jorge”, “scores” : [ 7 ], “secondLastName” : “Rodríguez”, “secondName” : “Simón”, “avg” : 7 }\n{ “_id” : “690”, “lastName” : “Vargas”, “name” : “Manuel”, “scores” : [ 8 ], “secondLastName” : “Cortes”, “secondName” : “Lucas”, “avg” : 8 }\n{ “_id” : “706”, “lastName” : “Valencia”, “name” : “Amalia”, “scores” : [ 7, 8, 7 ], “secondLastName” : “Ramírez”, “secondName” : “Scarlett”, “avg” : 7.333333333333333 }\n{ “_id” : “766”, “lastName” : “Cardona”, “name” : “Eleanor”, “scores” : [ 8, 9, 8, 9, 8, 7, 4 ], “secondLastName” : “González”, “secondName” : “Evelyn”, “avg” : 7.571428571428571 }\n{ “_id” : “8”, “lastName” : “Montoya”, “name” : “Javier”, “scores” : [ 7, 8, 9, 9, 2, 9, 5, 8 ], “secondLastName” : “Morales”, “secondName” : “Pablo”, “avg” : 7.125 }\n{ “_id” : “806”, “lastName” : “Hernández”, “name” : “Ava”, “scores” : [ 8 ], “secondLastName” : “Martínez”, “secondName” : “Renata”, “avg” : 8 }\n{ “_id” : “808”, “lastName” : “López”, “name” : “Jorge”, “scores” : [ 9 ], “secondLastName” : “González”, “secondName” : “Sergio”, “avg” : 9 }\n{ “_id” : “824”, “lastName” : “Medina”, “name” : “Martín”, “scores” : [ 7, 9, 9 ], “secondLastName” : “Parra”, “secondName” : “Valentín”, “avg” : 8.333333333333334 }\nType “it” for moreBut i can’t get the same results with mongo-java-driver on a Java project.",
"username": "Hamilton_Smith_Carva"
},
{
"code": "{\n \"_id\" : \"292\",\n \"address\" : {\n \"city\" : \"Leticia\",\n \"department\" : \"Amazonas\",\n \"number\" : 96,\n \"postalCode\" : 703868\n },\n \"age\" : 25,\n \"birthdate\" : ISODate(\"1995-04-06T05:00:00Z\"),\n \"courses\" : [\n {\n \"completed\" : false,\n \"progress\" : 56,\n \"title\" : \"flutter-sqlite\",\n \"tutor\" : {\n \"name\" : \"Code.org\",\n \"link\" : \"https://code.org/\"\n }\n },\n {\n \"completed\" : false,\n \"progress\" : 65,\n \"title\" : \"consumir-apis-javascript\",\n \"tutor\" : {\n \"name\" : \"Code.org\",\n \"link\" : \"https://code.org/\"\n }\n },\n {\n \"completed\" : false,\n \"progress\" : 12,\n \"title\" : \"consumir-apis-javascript\",\n \"tutor\" : {\n \"name\" : \"Scratch\",\n \"link\" : \"https://scratch.mit.edu/\"\n }\n }\n ],\n \"email\" : \"[email protected]\",\n \"lastName\" : \"López\",\n \"name\" : \"Julián\",\n \"sales\" : [\n {\n \"company\" : \"24\",\n \"date\" : ISODate(\"2020-05-03T20:14:55.537Z\"),\n \"iva\" : 78460479.86,\n \"paymentMethod\" : \"Débito\",\n \"subTotal\" : 334489414.14,\n \"time\" : \"15:14:55\",\n \"total\" : 412949894,\n \"typeOfSale\" : \"TV-4\"\n },\n {\n \"company\" : \"23\",\n \"date\" : ISODate(\"2019-01-30T17:31:27.210Z\"),\n \"iva\" : 71200655.86,\n \"paymentMethod\" : \"Transferencia bancaria\",\n \"subTotal\" : 303539638.14,\n \"time\" : \"12:31:27\",\n \"total\" : 374740294,\n \"typeOfSale\" : \"TV-5\"\n },\n {\n \"company\" : \"63\",\n \"date\" : ISODate(\"2018-11-29T06:27:30.531Z\"),\n \"iva\" : 38731501.9,\n \"paymentMethod\" : \"Débito\",\n \"subTotal\" : 165118508.1,\n \"time\" : \"01:27:30\",\n \"total\" : 203850010,\n \"typeOfSale\" : \"TV-3\"\n }\n ],\n \"secondLastName\" : \"Gutiérrez\",\n \"secondName\" : \"Jorge\",\n \"sex\" : \"M\",\n \"texts\" : [\n \"Nasrudin vio a un hombre sentado al borde de un camino, con aire de completa desolación. - ¿Qué te preocupa? –quiso saber. - Hermano mío, no existe nada interesante en mi vida. Tengo dinero suficiente como para no tener que trabajar y estaba viajando para ver si encontraba alguna cosa curiosa en el mundo. Sin embargo, todas las personas que encontré no tienen nada nuevo que decirme y sólo consiguen aumentar mi tedio. Al momento Nasrudin agarró la maleta del hombre y salió corriendo por el camino. Como conocía la región, consiguió distanciarse de él, tomando atajos por campos y colinas. Cuando se distanció bastante, colocó de nuevo la maleta en mitad de la ruta por donde el viajero tendría que pasar y se escondió detrás de una roca. Media hora después el hombre apareció, sintiéndose más deprimido que nunca por haberse cruzado con un ladrón. En cuanto vio la maleta corrió hacia ella y la abrió, anhelante. Al ver que el contenido estaba intacto, elevó sus ojos hacia el cielo con alegría y dio gracias al Señor por la vida. “Ciertas personas sólo entienden el sabor de la felicidad cuando consiguen perderla”, pensó Nasrudin, contemplando la escena. \",\n \"A una estación de trenes llega una tarde una señora muy elegante. En la ventanilla le informan que el tren está retrasado y que tardará aproximadamente una hora en llegar a la estación. Un poco fastidiada, la señora va al puesto de diarios y compra una revista, luego pasa al kiosco y compra un paquete de galletitas y una lata de gaseosa. Preparada para la forzosa espera, se sienta en uno de los largos bancos del andén. Mientras hojea la revista, un joven se sienta a su lado y comienza a leer un diario. 
Imprevistamente la señora ve, por el rabillo del ojo, cómo el muchacho, sin decir una palabra, estira la mano, agarra el paquete de galletitas, lo abre y después de sacar una comienza a comérsela despreocupadamente. La mujer está indignada. No está dispuesta a ser grosera, pero tampoco a hacer de cuenta que nada ha pasado; así que, con un gesto ampuloso, toma el paquete y saca una galletita que exhibe frente al joven y se la come mirándolo fijamente. Por toda respuesta, el joven sonríe… y toma otra galletita. La señora gime un poco, toma una nueva galletita y, con ostensibles señales de fastidio, se la come sosteniendo otra vez la mirada en el muchacho. El diálogo de miradas y sonrisas continúa entre galleta y galleta. La señora cada vez más irritada, el muchacho cada vez más divertido. Finalmente, la señora se da cuenta de que en el paquete queda sólo la última galletita. “No podrá ser tan caradura”, piensa, y se queda como congelada mirando alternativamente al joven y a las galletitas. Con calma, el muchacho alarga la mano, toma la última galletita y, con mucha suavidad, la corta exactamente por la mitad. Con su sonrisa más amorosa le ofrece media a la señora. - Gracias - dice la mujer tomando con rudeza la media galletita. - De nada – contesta el joven sonriendo angelical mientras come su mitad. El tren llega. Furiosa, la señora se levanta con sus cosas y sube al tren. Al arrancar desde el vagón ve al muchacho todavía sentado en el banco del andén y piensa: “Insolente”. Siente la boca reseca de ira. Abre la cartera para sacar la lata de gaseosa y se sorprende al encontrar, cerrado, su paquete de galletitas… ¡intacto! \",\n \"“Había una vez un rey el cual amaba los animales, que un día recibió como regalo dos hermosas crías de halcón. El rey los entregó a un maestro cetrero para que los alimentara, cuidara y entrenara. Pasó el tiempo y después de unos meses en los que los halcones crecieron el cetrero pidió una audiencia con el rey para explicarle que si bien uno de los halcones había alzado ya el vuelo con normalidad, el otro había permanecido en la misma rama desde que llegó, no emprendiendo el vuelo en ningún momento. Ello preocupó en gran medida al rey, que mandó llamar a múltiples expertos para solucionar el problema del ave. Sin éxito. Desesperado, decidió ofrecer una recompensa a quien lograra que el ave consiguiera volar. Al día siguiente el rey pudo ver cómo el ave ya no estaba en su rama, sino que volaba libremente por la región. El soberano mandó llamar al autor de tal prodigio, encontrándose con que quien lo había logrado era un joven campesino. Poco antes de entregarle su recompensa, el rey le preguntó cómo lo había logrado. El campesino le contestó que simplemente había partido la rama, no quedándole otra opción al halcón que echar a volar.” Una breve historia que nos sirve para entender que a veces nos creemos incapaces de hacer las cosas por miedo, a pesar de que la experiencia demuestra más que a menudo que en el fondo sí tenemos la capacidad para conseguir realizarlas: el ave no confiaba en sus posibilidades para volar pero una vez se puso a prueba no le quedó más remedio que intentarlo, algo que le condujo al éxito.\",\n \"“Había una vez un zorro que caminaba, sediento, por el bosque. Mientras lo hacía vio en lo alto de la rama de un árbol un racimo de uvas, las cuales deseó al instante al servirle para refrescarse y apagar su sed. El zorro se acercó al árbol e intentó alcanzar las uvas, pero estaban demasiado altas. 
Tras intentarlo una y otra vez sin conseguirlo, el zorro finalmente se rindió y se alejó. Viendo que un pájaro había visto todo el proceso se dijo en voz alta que en realidad no quería las uvas, dado aún no estaban maduras, y que en realidad había cesado el intento de alcanzarlas al comprobarlo.” Otra interesante historia corta en forma de fábula que nos enseña que a menudo nos intentamos convencer a nosotros mismos de no querer algo e incluso llegamos a despreciar dicho algo por el hecho de que encontramos difícil llegar a alcanzarlo.\", \n \n ... \n\n ]\n}\n",
"text": "Hi, thanks for your willingness to help.I have a db with documents like this:",
"username": "Hamilton_Smith_Carva"
},
{
"code": "$set$addFields$set$addFields$set",
"text": "Part 2In an Aggregation Update operation $set is used. In an Aggregation Query, the same operation is performed using the $addFields. From the documentation for $set:The $set stage is an alias for $addFields.What is the difficulty you are facing using the $set aggregation stage in Java?",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks for you help, I’m new using MongoDB. I have a project with Java and I can not find much information about that.",
"username": "Hamilton_Smith_Carva"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $set (aggregation) on Java with mongo-java-driver/4.0/ | 2020-07-20T20:31:45.459Z | $set (aggregation) on Java with mongo-java-driver/4.0/ | 2,498 |
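For readers trying to port the shell pipeline from this thread to Java, a rough sketch is below. It assumes the 4.x driver named in the thread title, a local connection string, and the collection name from the question; Aggregates.addFields builds the $addFields stage, which is what $set aliases in a query pipeline:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.*;
import org.bson.Document;
import org.bson.conversions.Bson;

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class SetStageExample {
    public static void main(String[] args) {
        // Connection string and database name are placeholders.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> coll =
                    client.getDatabase("test").getCollection("ColeccionPrueba");

            List<Bson> pipeline = Arrays.asList(
                    Aggregates.match(Filters.and(
                            Filters.exists("scores"), Filters.exists("texts"))),
                    Aggregates.project(Projections.include(
                            "name", "secondName", "lastName",
                            "secondLastName", "scores", "texts")),
                    // $addFields is the builder equivalent of the $set stage
                    Aggregates.addFields(
                            new Field<>("avg", new Document("$avg", "$scores"))),
                    Aggregates.match(Filters.and(
                            Filters.gte("avg", 7),
                            Filters.regex("texts",
                                    Pattern.compile("verde valle rodeado de monta")))),
                    Aggregates.project(Projections.exclude("texts")),
                    Aggregates.sort(Sorts.ascending("_id")));

            coll.aggregate(pipeline)
                    .forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```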
[] | [
{
"code": "",
"text": "Hi there,I’m trying to bundle a default realm to my React Native app. It is working perfectly on iOS by including it in ‘Copy Bundle Resources’.However, the realm is not being copied to my android emulator when I put it inside /app/src/main/assets.Therefore, I’m getting this error:I have tried following the instructions given here: android emulator - Bundle pre-populated realm with react native app - Stack OverflowAny idea how I can get the realm to be bundled into the android app?Any help greatly appreciated,\nThanks,\nBen",
"username": "Ben_Wright"
},
{
"code": "Realm.copyBundledRealmFiles();\nlet path;\nif (Platform.OS === 'android') {\n path = 'realm.realm';\n} else {\n path = fs.MainBundlePath + '/realm.realm';\n}\n\nexport default new Realm({\n path: path,\n readOnly: true,\n schema: []});\n",
"text": "For anyone that finds this and is still stuck. Heres my solution:Follow instructions here:Ensure that you are calling Realm.copyBundledRealmFiles() before you initilize your realm.I was originally init-ing realm in index.js But once I moved to something like this is worked fine:",
"username": "Ben_Wright"
}
] | Bundle a ReadOnly Realm in React Native (Android) | 2020-07-24T19:04:43.678Z | Bundle a ReadOnly Realm in React Native (Android) | 2,212 |
null | [] | [
{
"code": "",
"text": "Hi,I have joined mongodb course M001 little late which started earlier this week, I have received an email which says the course has weekly deadlines for assignments and each week I’l receive check-in email and so on. Where can I find all this because I have joined late? I have not received an email as well.Regards,\nShravan",
"username": "Shravan_K_Mahankali"
},
{
"code": "",
"text": "Please check our forum\nUpdate from Shubham Ranjan,Curriculum Support EngineerIn order to make MongoDB University curriculum more accessible we have made several major updates to all of our courses:This is applicable for all the future offerings as well as all the offerings which began in March",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I don’t think I have fully understood your clarification Ramachandra, but am glad there are no weekly assignments and I have two months time to finish this course. Eitherways, email instructions seems outdated.",
"username": "Shravan_K_Mahankali"
},
{
"code": "Course OverviewChapters page",
"text": "Hi @Shravan,Thanks for surfacing this. I will check with the team and see if we need to update the instructions in the e-mail.Just for clarification - There are no weekly deadlines for the assignment anymore. You could check the end date on the Course Overview page or on the Chapters page.Screenshot 2020-07-20 at 11.26.19 PM650×1336 65.3 KB Screenshot 2020-07-20 at 11.28.14 PM2790×1432 298 KBLet me know if you have any other questions.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Hi @Shravan,Can you please share the content of the e-mail that you are referring to ?~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Sorry for the late response @Shubham_Ranjan I just logged-in to the forum. Below is the requested info:The course opened on July 14 and course material is open now.[Go to My Course]The course is composed of pre-recorded lecture videos which you may watch at any time during the run of the course. There are weekly deadlines for assignments. Once our course begins, each week will include a check-in email, video lectures, quizzes, and homework assignments or labs.The homework assignments/labs will comprise one half of your grade and the final exam one half of your grade.The discussion forums are an active, fun place to share ideas. Your fellow course participants come from all over the world and it is likely that someone else has asked a question that relates to any problem you might be having. Search the forum discussions to see if help has already been provided. In addition, our teaching assistant will also be helping you in the class.As a reminder, you will receive a course completion confirmation from MongoDB, Inc. provided you achieve 65% on the graded assessments . Details are provided in the first lecture.Copyright © 2020, All rights reserved.Our mailing address is:\n229 West 43rd Street\n5th Floor\nNew York, NY 10036",
"username": "Shravan_K_Mahankali"
},
{
"code": "",
"text": "Hi @Shravan,Thanks for sharing the information. We will update the content of the email.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB course joined late | 2020-07-18T21:42:41.816Z | MongoDB course joined late | 1,462 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Hi,I am quite new in MongoDB and having some problems with reading my data using CSharp driver. I have a class with a Point class field called “APoint”. The document is created and APoint is an object with X and Y values as integers. So far so good. However when I list the documents I get this error. I am sure I need to write custom serialization and deserialization code, put I couldn’t find a decent example.Any help is much appreciated.\nThanks\nMGSystem.FormatException: ‘An error occurred while deserializing the APoint field of class WindowsFormsApp1.ProgramSettings: Value class System.Drawing.Point cannot be deserialized.’",
"username": "mgtheone"
},
{
"code": "intpublic class MyPoint \n{\n public int X {get; set;}\n public int Y {get; set;}\n}\n",
"text": "Hi @mgtheone, and welcome to the forumI have a class with a Point class field called “APoint”. The document is created and APoint is an object with X and Y values as integers.This is likely because the registry doesn’t know how to deserialise System.Drawing.Point. You mentioned that the value of X and Y are integers, depending on your use case try to define the type as int. i.e.If you still encountering this issue, could you provide:Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thanks Wan, I actually found a solution. I wrote a custom serializer called “MyCustomPointSerializer”. I will submit my code now.\nThanks a lot.",
"username": "mgtheone"
},
{
"code": " public void Serialize(BsonSerializationContext context, BsonSerializationArgs args, Point value)\n {\n BsonSerializer.Serialize(context.Writer, value);\n }\n\n Point IBsonSerializer<Point>.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n {\n Point p = BsonSerializer.Deserialize<Point>(context.Reader);\n return new Point(p.X, p.Y);\n }\n\n public object Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n {\n BsonDocument d = BsonSerializer.Deserialize<BsonDocument>(context.Reader);\n\n int x = 0;\n int y = 0;\n\n if (d[\"_t\"] == \"Point\")\n {\n x = Convert.ToInt32(d[\"X\"]);\n y = Convert.ToInt32(d[\"Y\"]);\n }\n \n return new Point(x, y);\n }\n\n public void Serialize(BsonSerializationContext context, BsonSerializationArgs args, object value)\n {\n BsonSerializer.Serialize(context.Writer, value);\n }\n}",
"text": "After some research and thinking I found a solution to this problem. A custom serializer worked fine. Note that this is used with:\n[BsonSerializer(typeof(MyCustomPointSerializer))]\npublic Point APoint;public class MyCustomPointSerializer : IBsonSerializer\n{\npublic Type ValueType => typeof(Point);",
"username": "mgtheone"
}
] | Problem with deserialization | 2020-07-23T13:40:56.744Z | Problem with deserialization | 13,772 |
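A related option, under the assumption that every System.Drawing.Point in the model should use this serializer: register it once at startup instead of decorating each field with the attribute. A sketch:

```csharp
using System.Drawing;
using MongoDB.Bson.Serialization;

// Call once, early in application startup (e.g. at the top of Main), before
// class maps for types containing Point are created. MyCustomPointSerializer
// is the class shown in the previous post.
BsonSerializer.RegisterSerializer(typeof(Point), new MyCustomPointSerializer());
```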
null | [] | [
{
"code": "",
"text": "Hello all,\nI am trying to use the document versioning pattern follow Building with Patterns: The Document Versioning Pattern | MongoDB Blog. I can’t make the revision successfully work. I set the revision in the document but everytime I update the doc the revision doesn’t upgrade and there is no duplicated documents to store the older version. I searched from Internet but can’t find a workable tutorial, can anyone give help? Many thanks!\nJames",
"username": "Zhihong_GUO"
},
{
"code": "",
"text": "Hi @Zhihong_GUO,I can’t make the revision successfully work. I set the revision in the document but everytime I update the doc the revision doesn’t upgrade and there is no duplicated documents to store the older versionHow are you updating the document ? Could you provide:The implementation depends on your requirements and use case. The blog post gave an example of storing into two collections, but depending on the use case you could also store in one collection.Regards,\nWan.",
"username": "wan"
}
] | How to "Document Versioning Pattern" | 2020-07-24T12:07:23.424Z | How to “Document Versioning Pattern” | 1,328 |
null | [] | [
{
"code": "",
"text": "I am using the Java driver.I have successfully set up my KMS key store to store my master key.I am using it to create DEK’s in our MongoDB key store on our cluster.I have successfully encrypted/decrypted FIELDS using the DEKS.I have seen and read in several presentations that encryption can be applied to a COLLECTION and DOCUMENT, but NONE of the documentation or example show how to accomplish this. All of them define how to encrypt individual field defined in the jsonschema.So is there a way to configure Enterprise encryption to encrypt documents written too/read from a collection or entire documents without having to explicitly defined the field names in the jasonschema ?If so i would LOVE to pointed in the direction of samples/examples. I reviewed and implemented the code from the Auto Encryption Settings tour, but it does NOT actually encrypt the data. Just writes the data in plain text.Thanks.",
"username": "Paul_Calhoun"
},
{
"code": "",
"text": "Hi @Paul_Calhoun, and welcome to the forumThere are quite a few things to clarify here. First, would you be able elaborate further what you’re trying to achieve? i.e. background context, or use case.I have seen and read in several presentations that encryption can be applied to a COLLECTION and DOCUMENT, but NONE of the documentation or example show how to accomplish this.Client-Side Field Level Encryption (CSFLE) as the name suggests, it encrypts at the field level. In order to encrypt an entire document, you must encrypt each individual field in the document.Would you be able to point out which presentations mentioned the encryption on the whole collection/document ?I reviewed and implemented the code from the Auto Encryption Settings tour, but it does NOT actually encrypt the data. Just writes the data in plain text.If you have defined a field to be encrypted and in the database you could see the document field containing a plain text data, that is not the expected behaviour from CSFLE. Please see CSFLE Guide for a tutorial. The guide contains an example utilising MongoDB Java driver (sync). See also github.com/mongodb-university/csfle-guides for an example project repository.Regards,\nWan.",
"username": "wan"
}
] | Encrypting a collection with CSFLE | 2020-07-24T20:04:33.895Z | Encrypting a collection with CSFLE | 1,664 |
null | [] | [
{
"code": "mongoimport -d dbname -c collectionName --host hostname --file filename -j 4 --batchSize=200 --jsonArray",
"text": "Hi,I’m trying to import a dataset, which is in a large file (21.1GB json), which has a list of documents. I could import around 75% (15.9GB) of that file and then it gives “error inserting documents: write tcp a.b.c.d:e: write broken pipe”I tried this 3 times and each time this happened. The command I’m using is as below (connecting to mongos instance);mongoimport -d dbname -c collectionName --host hostname --file filename -j 4 --batchSize=200 --jsonArrayHas anybody faced the same issue and any recommendations?Thanks",
"username": "Laksheen_Mendis"
},
{
"code": "",
"text": "There might be a message related to the error in the server logs. It might provide more details.Here is some related information at: https://jira.mongodb.org/browse/TOOLS-379",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hi,Thanks for the reply… However, if the document is bigger than 16MB, MongoDB skips importing that particular document and it’s printed on the command line… But this is something else I guess… Will post if I find out the reason…",
"username": "Laksheen_Mendis"
},
{
"code": "mongoimport --versiondb.collectionName.stats().avgObjSize--batchSize",
"text": "Hi @Laksheen_Mendis,What does mongoimport --version report and what specific version of MongoDB server are you importing into? Also, what type of deployment do you have (standalone, replica set, or sharded cluster)?Finally: how large are your documents on average (you can check imported documents via db.collectionName.stats().avgObjSize)? If you have large documents you may want to try further reducing the --batchSize value.Regards,\nStennie",
"username": "Stennie_X"
}
] | Mongoimport fails with broken pipe when the file is large | 2020-07-22T05:29:54.657Z | Mongoimport fails with broken pipe when the file is large | 5,067 |
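If the failure turns out to be tied to very large documents or batches, one experiment along the lines suggested above is to shrink the batch and worker count so each insert message stays small. Host and file names here are placeholders:

```sh
mongoimport --host mongos.example.net:27017 \
  --db dbname --collection collectionName \
  --file dataset.json --jsonArray \
  --numInsertionWorkers 2 --batchSize 20
```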
null | [] | [
{
"code": "",
"text": "Hi everyone,I’m a new react-native developer and I was following this guide: https://docs.mongodb.com/realm/tutorial/react-native/Noticed that the guide doesn’t mention anything about securing the mongodb realm ID. Won’t that allow others to easily access and manipulate my db?",
"username": "Jerry_Ye"
},
{
"code": "",
"text": "@Jerry_Ye You will need to enable authentication and it will block access",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Then would I need something else to access my realm from my client besides just the ID?",
"username": "Jerry_Ye"
},
{
"code": "",
"text": "Yes you would need to authenticate with a valid logged in user",
"username": "Ian_Ward"
}
] | Security with react-native | 2020-07-27T23:56:30.551Z | Security with react-native | 1,963 |
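In practice that means the client logs a user in before opening a synced realm; the App ID on its own only identifies the application. A sketch assuming the realm-js 10.x SDK, a placeholder App ID, and the email/password provider enabled in the Realm app:

```javascript
import Realm from "realm";

const app = new Realm.App({ id: "my-app-id" }); // placeholder App ID

async function openSyncedRealm(email, password) {
  // Once rules/permissions are configured, anonymous requests are rejected;
  // every sync session runs as this authenticated user.
  const credentials = Realm.Credentials.emailPassword(email, password);
  const user = await app.logIn(credentials);

  return Realm.open({
    schema: [/* your object schemas */],
    sync: { user, partitionValue: user.id },
  });
}
```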
null | [] | [
{
"code": "",
"text": "Hi,I’m trying to pass in a query to mongoexport like this:\nmongoexport --uri=“mongodb+srv://dev:removed@server/feedback” --collection=entries --query=’{ “date”: { “$lt”: { “$date”: “2020-02-28T00:00:00.000Z” } } }’ --out=dispatcher.jsonBut I am getting an error:\n2020-07-28T12:43:39.366+0100 query ‘[123 32 100 97 116 101 58 32 123 32 36 108 116 58 32 123 32 36 100 97 116 101 58 32 50 48 50 48 45 48 50 45 50 56 84 48 48 58 48 48 58 48 48 46 48 48 48 90 32 125 32 125 32 125]’ is not valid JSON: invalid character ‘-’ after object key:value pairI also tried with IsoDate:\nmongoexport --uri=“mongodb+srv://dev:removed@server/feedback” --collection=entries --query=’{ “date”: { “$lt”: IsoDate(“2020-02-28T00:00:00.000Z”) } }’ --out=dispatcher.json\n2020-07-28T13:39:37.818+0100 query ‘[123 32 100 97 116 101 58 32 123 32 36 108 116 58 32 73 115 111 68 97 116 101 40 50 48 50 48 45 48 50 45 50 56 84 48 48 58 48 48 58 48 48 46 48 48 48 90 41 32 125 32 125]’ is not valid JSON: invalid character ‘s’ in literal Infinity or ISO (expecting ‘n’ or ‘S’)What am I doing wrong?-Paul",
"username": "Alexandru_Paul_Csiki"
},
{
"code": "mongodump",
"text": "Hello @Alexandru_Paul_Csiki, welcome to the forum.There was a similar question couple of months back, on this forum. Here is the link to it. In that case, it was the query used with mongodump , but the syntax for the query option is same (I think). Please check the post and tell us if it helped.Mongodump –query not able to filter using timestampAlso, include a sample document showing the field(s) used in the query and the operating system you are working with.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thank you, that worked. Here’s the CLI I’ve used:\nλ mongoexport --host server --ssl --username dev --password omitted --authenticationDatabase admin --db feedback --collection entries --type json --out dispatcher.json --query=’{“date”: {\"$lt\": {\"$date\": “2020-02-28T00:00:00.000Z”}}}’",
"username": "Alexandru_Paul_Csiki"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Query syntax for mongoexport | 2020-07-28T12:50:11.560Z | Query syntax for mongoexport | 9,296 |
null | [] | [
{
"code": "",
"text": "Hi to all,the documentation is not very clear if index should fit in WiredTiger cache or in RAM?Thx",
"username": "Alaskent19"
},
{
"code": "",
"text": "Hello @Alaskent19,Here are some related links from documentation:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hello @Prasad_Saya,but I don’t understand if the documentation refers to the total RAM or only to the WiredTiger RAM.\nI have a machine with 8GB of RAM … how much RAM should I consider for index and working set?Thanks",
"username": "Alaskent19"
},
{
"code": "",
"text": "I think the RAM / memory refers to the total physical RAM available on the server (or specific node).Also, see these articles:From the MongoDB blog Performance Best Practices: MongoDB Data Modeling and Memory Sizing see the section Memory Sizing: Ensure your working set fits in RAMAtlas Sizing and Tier Selection - MemoryDiscussion on this forum: Working set MUST fit in memory?",
"username": "Prasad_Saya"
}
] | Index should fit in RAM or in cache? | 2020-07-28T12:48:38.893Z | Index should fit in RAM or in cache? | 2,307 |
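A few shell checks help make the sizing concrete. By default the WiredTiger cache is roughly 50% of (RAM minus 1 GB), so about 3.5 GB on an 8 GB machine, with the remaining RAM left to the filesystem cache and the OS. The collection name below is a placeholder:

```javascript
// Index size of one collection (bytes)
db.mycoll.stats().totalIndexSize

// Total index size across the current database (bytes)
db.stats().indexSize

// Configured WiredTiger cache and how much of it is currently used (bytes)
const cache = db.serverStatus().wiredTiger.cache;
print(cache["maximum bytes configured"], cache["bytes currently in the cache"]);
```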
null | [] | [
{
"code": "",
"text": "Hi. I have an array of values , i need the aggregation to subtract the next value from the previous one for all the values in the array. I other words subtract value 1 from 2 , then 2 from 3 etc.",
"username": "johan_potgieter"
},
{
"code": "$reducefor-loop{ \"_id\" : 1, \"arr\" : [ 12, 32, 88, 1, 76, 359, 90 ] }db.test.aggregate([ \n { \n $project: {\n result: { \n $reduce: { \n input: { $range: [ 1, { $size: \"$arr\" } ] }, \n initialValue: [ ], \n in: {\n $concatArrays: [ \n \"$$value\", \n [ { $subtract: [ \n { $arrayElemAt: [ \"$arr\", \"$$this\" ] }, \n { $arrayElemAt: [ \"$arr\", { $subtract: [ \"$$this\", 1 ] } ] } \n ] } ]\t\n ]\n }\n }\n }\n }\n }\n])\n{ \"_id\" : 1, \"result\" : [ 20, 56, -87, 75, 283, -269 ] }",
"text": "Hello @johan_potgieter,You can use the $reduce aggregation array operator for this purpose. This is like using a for-loop and accessing the array elements, subtract one from another, and put each of the subtracted results in another array.The document with the array: { \"_id\" : 1, \"arr\" : [ 12, 32, 88, 1, 76, 359, 90 ] }The aggregation:The result: { \"_id\" : 1, \"result\" : [ 20, 56, -87, 75, 283, -269 ] }",
"username": "Prasad_Saya"
},
{
"code": "db.test1.insertMany([\n { _id: 'A', values: [4, 1, 6] },\n { _id: 'B', values: [] },\n { _id: 'C', values: [0, 11, 3, 3, 9] },\n { _id: 'D', values: [5] },\n { _id: 'E', }\n])\n[\n {\n \"_id\" : \"A\",\n \"initialValues\" : [ 4, 6 ],\n \"calculatedValues\" : [ 3, -5 ]\n },\n {\n \"_id\" : \"B\",\n \"initialValues\" : [ ],\n \"calculatedValues\" : [ ]\n },\n {\n \"_id\" : \"C\",\n \"initialValues\" : [ 0, 11, 3, 3, 9 ],\n \"calculatedValues\" : [ -11, 8, 0, -6 ]\n },\n {\n \"_id\" : \"D\",\n \"initialValues\" : [ 5 ],\n \"calculatedValues\" : [ ]\n },\n {\n \"_id\" : \"E\"\n }\n]\n\ndb.test1.aggregate([\n {\n $addFields: {\n result: {\n $reduce: {\n // walk array of $values with $reduce operator\n input: '$values',\n initialValue: {\n prevValue: null,\n calculatedValues: [],\n },\n in: {\n $cond: {\n if: {\n // if we do not know two neighbouring values\n // (first iteration)\n $eq: ['$$value.prevValue', null],\n },\n then: {\n // then we just skip the calculation\n // for current iteration\n prevValue: '$$this',\n calculatedValues: []\n },\n else: {\n // otherwise we know two neighbouring values\n // and it is possible to calculate the diff now\n $let: {\n vars: {\n newValue: {\n // calculate the diff\n $subtract: ['$$value.prevValue', '$$this'],\n }\n },\n in: {\n prevValue: '$$this',\n calculatedValues: {\n // push the calculated value into array of results\n $concatArrays: [\n '$$value.calculatedValues', ['$$newValue']\n ]\n }\n }\n }\n }\n }\n }\n }\n }\n }\n },\n {\n // restructure the output documents\n $project: {\n initialValues: '$values',\n calculatedValues: '$result.calculatedValues',\n }\n }\n]).toArray();\n",
"text": "Hello, @johan_potgieter!Let’s solve this by example.\nAssume, we have this dataset:So, if we want to get those results:We could use the aggregation like this:PS: Also, make sure all your values in ‘values’ prop are of numeric types. Otherwise, consider adding $convert operator to handle different data types.",
"username": "slava"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Subtract previous value from next in array | 2020-07-28T09:32:24.165Z | Subtract previous value from next in array | 6,499 |
null | [
"java"
] | [
{
"code": " public void query11(String product, int limit) {\n printBlock = new Consumer<UserBson>() {\n @Override\n public void accept(final UserBson user) {\n System.out.println(\"Hola\");\n if (cantResults < limit) {\n System.out.println(\"#=[\" + (cantResults + 1) + \"]\\t_id:\" + user.getId() );\n cantResults++;\n }\n }\n };\n\n connect.getCollection().aggregate(\n Arrays.asList(\n Aggregates.match(Filters.exists(\"scores\")),\n Aggregates.unwind(\"$scores\"),\n Aggregates.project(Projections.include(\"name\", \"secondName\", \"lastName\", \"secondLastName\", \"sales.products.name\")),\n Aggregates.sort(Sorts.ascending(\"_id\"))\n )\n ).forEach(printBlock);\n totalTime = (System.currentTimeMillis() - startTimeT);\n }\n",
"text": "I have the bellow query to MongoDB:But Mongo is responding with:Preformatted textException in thread “main” com.mongodb.MongoCommandException:\nCommand failed with error 16819 (Location16819): ‘Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt in.’ on server 127.0.0.1:27017. The full response is {“ok”: 0.0, “errmsg”: “Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt in.”, “code”: 16819, “codeName”: “Location16819”}",
"username": "Hamilton_Smith_Carva"
},
{
"code": "allowDiskUsecollection.aggregatecollection.aggregate(pipeline).allowDiskUse(true)",
"text": "Hello.This is the error you are getting (and the reason is quite clear in it): … ‘Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt in.’ on server….The reason for the error and how to overcome is explained in the documentation: $sort and Memory Restrictions.In your Java code, use the allowDiskUse option on the collection.aggregate method as follows, and this should solve the problem:collection.aggregate(pipeline).allowDiskUse(true)",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Command failed with error 16819 (Location16819) | 2020-07-27T17:33:58.767Z | Command failed with error 16819 (Location16819) | 4,070 |
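Applied to the pipeline from the question (the same connect and printBlock objects are assumed to exist), the fix is a single call on the returned AggregateIterable:

```java
connect.getCollection().aggregate(
        Arrays.asList(
                Aggregates.match(Filters.exists("scores")),
                Aggregates.unwind("$scores"),
                Aggregates.project(Projections.include(
                        "name", "secondName", "lastName",
                        "secondLastName", "sales.products.name")),
                Aggregates.sort(Sorts.ascending("_id"))))
        .allowDiskUse(true)   // let the $sort stage spill to temporary disk files
        .forEach(printBlock);
```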
null | [
"sharding"
] | [
{
"code": "mongos> db.MYCOLLECTION.find({_id : \nObjectId(\"57c5ff81724c2e70c623e733\" });\n",
"text": "Good afternoon,MongoDB server version: 3.6.16 on RHEL/CentOS.I am trying to insert a record into a sharded collection, and I am getting a strange error I have Googled the heck out of. To summarize, the insert says: “shard version not ok: version mismatch detected for MYDATABASE.MYCOLLECTION”. What causes this? I have tried to do a flushRouterConfig with no avail.The following is the log line I that I have changed to protect the innocent:2020-07-22T16:23:45.805+0000 I COMMAND [conn1189005] command MYDATABASE.MYCOLLECTION command: insert { insert: “MYCOLLECTION”, bypassDocumentValidation: false, ordered: false, documents: 50, shardVersion: [ Timestamp(57023, 3), ObjectId(‘57c5ff81724c2e70c623e733’) ], lsid: { id: UUID(“fade92f1-0993-4db7-af03-1b6066628f8c”), uid: BinData(0, 30D9CB0F31D33F7912528ADD7F28D77AA3ADBBDF1B6E9C50BFB8163217CE97C8) }, $clusterTime: { clusterTime: Timestamp(1595435024, 596), signature: { hash: BinData(0, E82B2EB1A3E65A615144ABFD585CC0CB8DB8E2D0), keyId: 6817327097028018329 } }, $client: { driver: { name: “mongo-java-driver”, version: “3.9.1” }, os: { type: “Linux”, name: “Linux”, architecture: “amd64\", version: “3.10.0-1127.el7.x86_64” }, platform: “Java/Oracle Corporation/1.8.0_181-b13\", mongos: { host: “queryrouter:27018”, client: “xxx.xxx.xxx.xxx:49626\", version: “3.6.16” } }, $configServerState: { opTime: { ts: Timestamp(1595435023, 177), t: 4071 } }, $db: “MYDATABASE” } ninserted:0 exception: shard version not ok: version mismatch detected for MYDATABASE.MYCOLLECTION ( ns : MYDATABASE.MYCOLLECTION, received : 57023|3||57c5ff81724c2e70c623e733, wanted : 57024|3||57c5ff81724c2e70c623e733 ) code:StaleConfig numYields:0 reslen:14363 locks:{ Global: { acquireCount: { r: 4, w: 2 } }, Database: { acquireCount: { r: 1, w: 2 } }, Collection: { acquireCount: { r: 1, w: 2 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 1432380 } } } protocol:op_msg 1432msit’s always on the same object ID, and we have over 2000-3000 errors daily. Mongo cannot find the objectID:Also running below and restarting all the query routers did not fix it:db.adminCommand({ flushRouterConfig: “MYDATABASE.MYCOLLECTION”});Please assist.",
"username": "William_Crowell"
},
{
"code": "",
"text": "Hello @William_Crowell, welcome to the community!It would be helpful for others to read and respond to your question if you could apply the proper log formatting to your post. Please add spacing to break up long blocks of text/logs and improve readability.\nYou may want to review the Getting Started guide form @Jamie which has some great additional tips and information.Concerning your actual problem: it seems that your run different versions.version mismatch detected for MYDATABASE.MYCOLLECTION ( ns : MYDATABASE.MYCOLLECTION, received : 57023|3||57c5ff81724c2e70c623e733, wanted : 57024|3||57c5ff81724c2e70c623e733 )You can check the shards mongodb.log and look for something like:requested shard version differs from config shard version for my_db.my_collection, requested version isCheers,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hey Michael,Thanks for your reply. I just added a new line after each line to improve the readability, and I am providing a public gist link to it:https://gist.githubusercontent.com/wcrowell/e29eb4b98d1a1fd5e78ca713876813b6/raw/649f01af9d16d523271147de6b43e219ee5270ec/formatted.txtLet me know if I can do anything else to improve readability.I did not see the message: “requested shard version differs from config shard version for my_db.my_collection, requested version is”.Regards,Bill Crowell",
"username": "William_Crowell"
},
{
"code": "",
"text": "I just checked last one week of the Mongo log. We do not have “requested shard version differs”",
"username": "Fory_Horio"
},
{
"code": "",
"text": "Hi @William_Crowell and @Fory_Horio,The errors indicate that at some point the sharding metadata was stale/out of sync.Usually invalidation the cache on the mongos/shards should solve this:Can you confirm the issue is gone now?Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
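Editor's note: a minimal sketch of the cache-invalidation commands referred to above (mongo shell; the namespace is illustrative — substitute your own database and collection; recent server releases also accept this command on shard members, so check the flushRouterConfig documentation for your exact version):
// Run against each mongos:
db.adminCommand({ flushRouterConfig: "MYDATABASE.MYCOLLECTION" })  // one collection
db.adminCommand({ flushRouterConfig: "MYDATABASE" })               // whole database
db.adminCommand({ flushRouterConfig: 1 })                          // entire routing cache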
{
"code": "",
"text": "Pavel,\nGood morning. Do you mean running: db.adminCommand({ flushRouterConfig: \" PM_AUDIT.AUDIT \" } )Or another method to invalidate the cache like this: https://docs.mongodb.com/manual/reference/command/invalidateUserCache/Thanks for your reply.Regards,Bill Crowell",
"username": "William_Crowell"
},
{
"code": "",
"text": "Hi @William_Crowell,Yes I meant flushRouterConfig.Yes you can start with the collection level and escalateBest regards\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Pavel,Thanks for your reply again. This would need to be run on both the query routers, mongod database instances, and mongod configuration database instances?Regards,William Crowell",
"username": "William_Crowell"
},
{
"code": "",
"text": "We have ran the db.adminCommand({ flushRouterConfig: \" PM_AUDIT.AUDIT \" } ) on all the QRTs and restarted mongos several times, but the error never disappear. A few months ago, we had a similar problem, at that time, flushing and restarting mongos cured the problem. But not this time. We tried several times over few days.",
"username": "Fory_Horio"
},
{
"code": "",
"text": "Hi @Fory_HorioHave you run it on all mongos instances and shards?What is the version of the cluster and the sharding distribution of this collection?Can you consider failover the shards?Best regards\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Yes, I did, several times on all the QRTs on all shards. Didn’t get solved.",
"username": "Fory_Horio"
},
{
"code": "",
"text": "We have two shards with two QRTs for each shard, total four QRTs. I tried one more round of flushing and restarting mongos against all four QRTs. The last error BEFORE the restarts is blow. We will see.2020-07-27T21:24:19.964+0000 I COMMAND [conn1221010] command PM_AUDIT.AUDIT command: insert { insert: “AUDIT”, bypassDocumentValidation: false, ordered: false, documents: 50, shardVersion: [ Timestamp(60823, 3), ObjectId(‘57c5ff81724c2e70c623e733’) ], lsid: { id: UUID(“0cc98ccd-4699-48fd-b418-dac54bf66319”), uid: BinData(0, 30D9CB0F31D33F7912528ADD7F28D77AA3ADBBDF1B6E9C50BFB8163217CE97C8) }, $clusterTime: { clusterTime: Timestamp(1595885058, 260), signature: { hash: BinData(0, 7469BFFD3F6BAB1F6127C9839A57F6ADFC72AB01), keyId: 6817327097028018329 } }, $client: { driver: { name: “mongo-java-driver”, version: “3.9.1” }, os: { type: “Linux”, name: “Linux”, architecture: “amd64”, version: “3.10.0-1127.el7.x86_64” }, platform: “Java/Oracle Corporation/1.8.0_181-b13”, mongos: { host: “monqrt-east-1b:27018”, client: “10.1.2.121:48620”, version: “3.6.16” } }, $configServerState: { opTime: { ts: Timestamp(1595885057, 177), t: 4071 } }, $db: “PM_AUDIT” } ninserted:0 exception: shard version not ok: version mismatch detected for PM_AUDIT.AUDIT ( ns : PM_AUDIT.AUDIT, received : 60823|3||57c5ff81724c2e70c623e733, wanted : 60824|3||57c5ff81724c2e70c623e733 ) code:StaleConfig numYields:0 reslen:14363 locks:{ Global: { acquireCount: { r: 4, w: 2 } }, Database: { acquireCount: { r: 1, w: 2 } }, Collection: { acquireCount: { r: 1, w: 2 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 1214048 } } } protocol:op_msg 1214ms",
"username": "Fory_Horio"
},
{
"code": "",
"text": "After flushing and restarting ALL QRTs, the error came back again.2020-07-27T23:16:53.597+0000 I COMMAND [conn1221486] command PM_AUDIT.AUDIT command: insert { insert: “AUDIT”, bypassDocumentValidation: false, ordered: false, documents: 50, shardVersion: [ Timestamp(60855, 3), ObjectId(‘57c5ff81724c2e70c623e733’) ], lsid: { id: UUID(“694c2e56-c025-48e9-9e60-e71248a44bd3”), uid: BinData(0, 30D9CB0F31D33F7912528ADD7F28D77AA3ADBBDF1B6E9C50BFB8163217CE97C8) }, $clusterTime: { clusterTime: Timestamp(1595891811, 296), signature: { hash: BinData(0, 786285F2A69138520A346EBF33FA7B3CFEA96DD2), keyId: 6817327097028018329 } }, $client: { driver: { name: “mongo-java-driver”, version: “3.9.1” }, os: { type: “Linux”, name: “Linux”, architecture: “amd64”, version: “3.10.0-1127.el7.x86_64” }, platform: “Java/Oracle Corporation/1.8.0_181-b13”, mongos: { host: “monqrt-east-1c:27018”, client: “10.1.3.191:41186”, version: “3.6.16” } }, $configServerState: { opTime: { ts: Timestamp(1595891811, 3), t: 4071 } }, $db: “PM_AUDIT” } ninserted:0 exception: shard version not ok: version mismatch detected for PM_AUDIT.AUDIT ( ns : PM_AUDIT.AUDIT, received : 60855|3||57c5ff81724c2e70c623e733, wanted : 60854|3||57c5ff81724c2e70c623e733 ) code:StaleConfig numYields:0 reslen:14363 locks:{ Global: { acquireCount: { r: 7, w: 3 } }, Database: { acquireCount: { r: 2, w: 3 } }, Collection: { acquireCount: { r: 2, w: 2, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 1549896 } } } protocol:op_msg 1994ms",
"username": "Fory_Horio"
},
{
"code": "",
"text": "Hi @Fory_Horio,It seems like you might have a performance or locking issue, which can be a result of overloaded balancing or sharding resources.Those are best covered by MongoDB support. I suggest you to engage with support.If you wish I can contact you with a sales representative to continue the investigation.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
] | "shard version not ok: version mismatch detected for" | 2020-07-24T21:36:50.751Z | “shard version not ok: version mismatch detected for” | 10,993 |
null | [] | [
{
"code": "",
"text": "HiI am trying out realm-tutorial/rn at main · mongodb-university/realm-tutorial · GitHub to see how realm sdk for react native works. There is no issue to do: npm run android. However if I use vscode to do debug with the app, I got some error:ReferenceError: createSession is not definedThe whole debug output:OS: win32 x64\nAdapter node: v12.8.1 x64\nvscode-chrome-debug-core: 6.8.8\nStarting debugger app worker.\nEstablished a connection with the Proxy (Packager) to the React Native application\nDebugger worker loaded runtime on port 23489\nRequire cycle: node_modules\\realm\\lib\\browser\\util.js → node_modules\\realm\\lib\\browser\\rpc.js → node_modules\\realm\\lib\\browser\\util.jsRequire cycles are allowed, but can result in uninitialized values. Consider refactoring to remove the need for a cycle.\nReferenceError: createSession is not defined\nInvariant Violation: Module AppRegistry is not a registered callable module (calling runApplication)\nInvariant Violation: Module AppRegistry is not a registered callable module (calling runApplication)I searched createSession in the whole project, I found:\nin index.bundle file (under .vscode/.react directory):\nrpc.registerTypeConverter(_constants.objectTypes.SESSION, createSession);\nRealm[_constants.keys.id] = rpc.createSession(refreshAccessTokenCallback, debugHosts[i] + ‘:’ + debugPort);in index.map file (under .vscode/.react directory):\n“createRealm”,“USER”,“createUser”,“SESSION”,“createSession”,So it seems createSession should be defined, but current sdk [email protected] does not have it.Can someone from realm js team provide some insight how to bypass this issue?Thanks,\nJ.H.",
"username": "jerry_he"
},
{
"code": "",
"text": "@jerry_he Can you file an issue here with a reproduction app please and we still take a look -Realm is a mobile database: an alternative to SQLite & key-value stores - GitHub - realm/realm-js: Realm is a mobile database: an alternative to SQLite & key-value stores",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Can you please tell me the exact URL to report issue?",
"username": "jerry_he"
},
{
"code": "",
"text": "My current account has Support: Basic Plan, which does not allow me to create support request.However in my first post, I already outline the steps to reproduce the issue. Here I put additional info as belowgit clone from realm-tutorial/rn at main · mongodb-university/realm-tutorial · GitHub\ndo: npm install under rn/\nopen vscode to setup debug session with reactnative\nand start debug session for android (use device or simulator), the error would showed on the console of debug sessionIf I dont start debug session , but just launch app by: npm run android\nthen there is no error. So debug session opens up something more",
"username": "jerry_he"
},
{
"code": "",
"text": "Its on the github issues tab - the realm client is open source so any user can file an issue. That is where bug reports should go because supports monitors that channel. The forums is designed for general code questions and collaboration on app design for the community",
"username": "Ian_Ward"
}
] | createSession is not defined | 2020-07-27T19:27:13.342Z | createSession is not defined | 2,557 |
null | [] | [
{
"code": "",
"text": "Can someone explain\ncollection: failed to insert change into history during initial sync of collection after copying 10404 documents: error incrementing the version counter for (appID=“5…5e”, fileIdent=12869); context deadline exceededKeeps saying Enabling Sync… copying data\nThen flashes the above and back to Enabling Sync…\nover and overThanks in advance",
"username": "Barry_Fawthrop"
},
{
"code": "",
"text": "@Barry_Fawthrop Can you email me at [email protected] with the URL link to your realm app and we can take a look for you on the backend",
"username": "Ian_Ward"
}
] | Failed to insert change into history | 2020-07-27T15:11:25.079Z | Failed to insert change into history | 2,031 |
null | [] | [
{
"code": "",
"text": "Hi AllI tried expanding the collections we sync to Started with 1 now have 4Now getting Error Domain=io.realm.sync Code=112 \"Bad changeset (DOWNLOAD)\"I wiped my local copy (iOS realm-object-server) and still get this error?Any ideas how to correct this?Thanks",
"username": "Barry_Fawthrop"
},
{
"code": "",
"text": "@Barry_Fawthrop Can you open a ticket by opening a chat in the Atlas Cloud UI ? We will investigate for you on the backend",
"username": "Ian_Ward"
}
] | Code=112 "Bad changeset (DOWNLOAD)" | 2020-07-27T20:04:31.430Z | Code=112 “Bad changeset (DOWNLOAD)” | 2,427 |
null | [] | [
{
"code": "",
"text": "Hello, MongoDB enthusiasts!\nI am looking for learning partners.My plan:\nI. PreparationII. Pair workRequirements to the partner:I need to have few meetings per week - it can be 1 or 3 meetings per week, depending on the time you have.I could do all the learning alone, but with the partner the process is much more interesting and fun. And, much efficient, of course A bit about me: I have 2 years of Node.js and MongoDB development. I already know it well, but I would like to know it even better, in more details If you’re interested, you can reply here, under this post or write me to my email: [email protected].",
"username": "slava"
},
{
"code": "",
"text": "Hello @slavathis is a great idea to learn as a team! I personally I can not participate, but I like to forward this idea. Since this is a great approach in a community sense.Good luck with your certification! In case you come across questions to be solved or statements to be verified please do not hesitate to post them here. We will find an answer.Michael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Love this idea, @slava! Have you been able to find some folks to partner with?",
"username": "Jamie"
},
{
"code": "",
"text": "Hello, @Jamie!\nNope. I am still looking for a MongoDB learner partner.",
"username": "slava"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Looking for MongoDB Learning partner! | 2020-06-15T09:19:21.509Z | Looking for MongoDB Learning partner! | 5,076 |
null | [] | [
{
"code": "",
"text": "for Linux\nEmail, first name, last name, company, and phone no. ask me when I go to download.how can I solve this problem?",
"username": "Parth_Parsaniya"
},
{
"code": "",
"text": "Hi @Parth_Parsaniya,It’s just a simple form. You can enter the details and then you should be able to see the download option.Let me know if it doesn’t resolve your issue.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | How to download mongodb Enterprise for this course | 2020-07-27T03:20:40.366Z | How to download mongodb Enterprise for this course | 1,053 |
null | [] | [
{
"code": "",
"text": "I am using TSM (Spectrum Protect) to backup files on the server, which MongoDB files do need to exclude from the backups?",
"username": "richard_labutis"
},
{
"code": "",
"text": "Hi @richard_labutis,In general, you can backup the entire dbpath as long as your instance is down or blocked for writes during the backup.MongoDB offer tools like Ops Manager and cloud manager which does not require you to block the instance for backups.Having said that, log files and the diagnostic.data directory is not vital for a restore therefore you can consider excluding them. However, its always good to backup those for trouble shooting…Let me know if you have any questions.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
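Editor's note: a minimal sketch of the "blocked for writes" approach mentioned above, assuming the dbPath is copied with an external backup tool while the lock is held (mongo shell):
db.fsyncLock()    // flush pending writes and block new ones
// ... run the file-level copy of the dbPath here ...
db.fsyncUnlock()  // release the lock once the copy finishes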
{
"code": "",
"text": "let me rephrase the question, which files do i EXCLUDE while the database is running?",
"username": "richard_labutis"
},
{
"code": "",
"text": "Hi Richard,You cannot copy files to an external location if the instance is not locked for writes.As mentioned if you stop writes you can exclude logs and diagnostics.data directory.Best regards\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I already known that, but you still have not answered my question. which files do I tell tsm to skip backing up.\ni don’t want the database files, i need to do do a full system backup excluding mongo database.\nmongo will get backed up with other methods.",
"username": "richard_labutis"
},
{
"code": "",
"text": "Hi @richard_labutis,Sorry I misunderstood your intention.Please provide your mongod.conf omitting any sensetive information so I can guide you which paths to omit.In general any log,pid or dbPath location should be blacklist.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
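Editor's note: one hedged way to discover which paths to exclude, without sharing the config file, is to ask the running instance for its parsed options (mongo shell; only options that were actually configured appear in the output):
const opts = db.adminCommand({ getCmdLineOpts: 1 }).parsed
opts.storage            // contains dbPath
opts.systemLog          // contains the log path, if file logging is configured
opts.processManagement  // contains pidFilePath, if set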
{
"code": "",
"text": "Sorry I misunderstood your intention.I also read the question as ‘what files can I exclude when backing up mongoDB’.Only in this last comment do we get to the real question, this could have been asked more explicitly at the outset.",
"username": "chris"
}
] | Files to exclude during backups | 2020-07-23T13:41:20.471Z | Files to exclude during backups | 2,749 |
null | [
"python"
] | [
{
"code": ">>>results=[]\n>>> results.append(deptload.find_one())\n>>>for item in results:\n\tdeptid=item['DeptId']\n\titem2=deptprior.find_one({\"Code\":deptid})\n\tprint(item2['Name'])\n",
"text": "Hi,\nI’m using Python and I’m trying to do a compare in 2 collections to push any new changes or inserts to a 3rd collection.I have a current collection and another collection that will be updated with new data. I want to see if there is any change of the new data when compared to the current collection and if so push it to a 3rd collection in order to push to an external system.I searched and found some possible ideas but figured I’d just start with trying to pull data from the 2 collections based on one value. However, I’m getting stuck with the find_one query part. What I have so far:The issue is with the deptid in ‘({“Code”:deptid})’. It won’t accept the value from deptid and is returning back TypeError: ‘NoneType’ object is not subscriptable.However when I print the deptid and pass it manually with the actual numbers, “12000”, it works just fine and brings back the name.First am I on the right track to compare 2 collections and push the changes to a 3rd. And secondly, what am I doing wrong with the deptid variable?Thanks.",
"username": "Phuong_Hoang"
},
{
"code": "find_one()Noneresults[None]Nonedoc = deptload.find_one()\nif doc is not None:\n results.append(doc)\n",
"text": "I believe this issue is that find_one() returns None when no document matches the query. That means the results list in your example could end up being [None]. See:To fix this, check if the return is None before appending to results:Also, if your application needs to track changes to a collection you may be interested in using change streams:",
"username": "Shane"
},
{
"code": "",
"text": "Thanks. For the ChangeStream it says I need a replica set. How do I install that?",
"username": "Phuong_Hoang"
},
{
"code": "",
"text": "Please enjoy https://docs.mongodb.com/manual/replication/.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Issue with passing a variable to the find_one query | 2020-04-07T20:34:58.809Z | Issue with passing a variable to the find_one query | 6,797 |
null | [] | [
{
"code": "_id: \"5f1d78511158a201f89eaa13\" ,\nname: \"Course\",\nsections: [\n {\n _id: \"5f1d7d723bfe781024f734d9\" ,\n section: \"Section 1\",\n lessons: [ Array ]\n } \n {\n _id: \"5f1d7fd5131e9020d0477b18\" ,\n section: \"Section 1\",\n lessons: [ Array ]\n ]\n}\ndb.collection.findByIdAndUpdate( {'sections._id': req.body.section }, {$push: {lessons: lesson}, updated: Date.now()}, {new: true})\n",
"text": "This the documentI would like to push an object into the ‘lessons’ array that is completly empty, so I should use findByIdAndUpdate()\nlesson: {\ncontent: “Content”\n}I get the course and section id from the request, so I tried this but it doesn’t workI really need help, I haven’t found anything useful in the internet. What I should do?Thank you very much,\nSule",
"username": "Soulaimane_Benmessao"
},
{
"code": "lessons\"sections._id\"mongo\"sections._id\"db.collection.findOneAndUpdate( \n { \"sections._id\": req.body.section }, \n { \n $push: { \"sections.$.lessons\": lesson }, \n $set: { updated: Date.now() } \n },\n { returnNewDocument: true }\n)\n",
"text": "Hello @Soulaimane_Benmessao, welcome to the forum.You can use the positional $ Update Operator to update (i.e., add object to the empty array field lessons, using the update operator $push) the document for the specified \"sections._id\" field value.For example, the following mongo shell update method will update the specific sub-document with a matching \"sections._id\":",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "It works perfectly fine. Now I understand it. Thank you very much!",
"username": "Soulaimane_Benmessao"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Add object to empty array | 2020-07-26T21:58:13.976Z | Add object to empty array | 8,682 |
null | [] | [
{
"code": "",
"text": "How can I downgrade from a shared M5 to M2 cluster?",
"username": "royce_chan"
},
{
"code": "",
"text": "Hi @royce_chan,You can’t use the UI to downgrade clusters lower than M10 (Including).What you can do, considering that your M2 cluster will be compatible with your current M5 to its collection/size limitation, is to obtain a recent backup or use mongodump to get your database copy and restore it to a new M2 cluster.Please let me know if you have any additional questions.Best regards,\nPavel",
"username": "Pavel_Duchovny"
}
] | Downgrade from M5 to M2 cluster | 2020-07-26T21:58:31.922Z | Downgrade from M5 to M2 cluster | 5,483 |
null | [
"golang"
] | [
{
"code": "",
"text": "Hi,Is it possible to control the sort order and limit when usung the go driver’s collection.Distinct method ?Looking at DistinctOptions, and following the links to the implementation, i can’t see how to do this.Thanks,Robin",
"username": "Robin_Bryce"
},
{
"code": "distinct",
"text": "Hi @Robin_Bryce, and welcome to the forumIs it possible to control the sort order and limit when usung the go driver’s collection.Distinct method ?The distinct method here is referring to the MongoDB’s distinct command. Unfortunately the original command does not have options to sort/limit as well.An alternative to this is to utilise aggregation pipeline $group stage, and combined with $sort and $limit. See also Retrieve Distinct Values for more information.Regards,\nWan.",
"username": "wan"
},
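Editor's note: a minimal mongo-shell sketch of the aggregation alternative described above (collection and field names are illustrative; the Go driver's Collection.Aggregate accepts the same pipeline):
db.items.aggregate([
  { $group: { _id: "$category" } },  // one document per distinct value
  { $sort: { _id: 1 } },             // control the sort order
  { $limit: 10 }                     // and the number of results
])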
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Distinct and sort | 2020-07-02T20:31:12.235Z | Distinct and sort | 4,811 |
null | [] | [
{
"code": "",
"text": "I have an app that involves processing tickets in temporal order. There are about 3,000 tickets added each day. To find the next ticket to process, I will do a query on timestamps (i.e. find all tickets with timestamps after the last processed timestamp).So as tickets are processed, they stay in a collection that stores all the tickets. My question is, should I worry about a collection getting too large (it might eventually have like 100,000 documents)? Will this dramatically effect the performance of my queries? Or should I run something that automatically archives processed tickets that don’t need to be queried anymore?",
"username": "Neil_Chowdhury"
},
{
"code": "",
"text": "When I have a situation like that, I like to have 2 collections. The first with unprocessed documents and one for processed documents for historical data queries.",
"username": "steevej"
},
{
"code": "",
"text": "Hello @Neil_Chowdhury, welcome to the forum!If you are having 3000 tickets per day, in a year it will be more than a million documents. To query (and update) documents efficiently based on a field, index the field(s). Working with a million documents is not difficult, but as the time passes, the number of documents increase and it can tax on the system resources (like, index size and memory, as these are shared with other queries).As @steevej has mentioned, maintain two collections. You can periodically (e.g., monthly or quarterly), move some of the already processed data to the history collection using a scheduled batch process.",
"username": "Prasad_Saya"
}
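Editor's note: a hedged mongo-shell sketch of the pattern described above (the collection name, the "processed" flag, the lastProcessedTimestamp variable, and the 90-day cutoff are all illustrative; $merge requires MongoDB 4.2+):
db.tickets.createIndex({ timestamp: 1 })                          // supports the "next tickets" range query
db.tickets.find({ timestamp: { $gt: lastProcessedTimestamp } })   // assumed variable holding the last processed time

// Periodic archival of already-processed documents into a history collection:
const cutoff = new Date(Date.now() - 90 * 24 * 3600 * 1000)
db.tickets.aggregate([
  { $match: { processed: true, timestamp: { $lt: cutoff } } },
  { $merge: { into: "tickets_history" } }
])
db.tickets.deleteMany({ processed: true, timestamp: { $lt: cutoff } })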
] | How large can I make my collection? | 2020-07-25T20:41:56.992Z | How large can I make my collection? | 1,291 |
null | [] | [
{
"code": "",
"text": "Whenever my charts re-rerender, the series numbers are recalculated internally and any customized colors associated with the series change. So on a stacked bar chart, series one which I’ve mapped as green and series 2 mapped as red, swap colors. Or even worse, if a third series shows up, I get yet another color. How can I control series mapping to prevent this. Otherwise its worthless to my customers as a dashboard.",
"username": "Richard_Williams"
},
{
"code": "$sort",
"text": "Hi @Richard_Williams -You are correct that currently Charts picks colours for series using a “first come, first serve” approach. We don’t yet have a way of assigning specific colours to specific series, although that is something we plan to add. You may want to suggest this on feedback.mongodb.com for others to vote on.In the meantime, while not a perfect solution, you improve the predictability of series ordering by adding a $sort stage using your series field in the query bar. That way, the documents will always be returned to the chart with the series in the same order, although if any expected series are not present then the colours could still change.HTH\nTom",
"username": "tomhollander"
},
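Editor's note: for reference, the query-bar entry described above is just a small pipeline; the field name here is a placeholder for whatever field drives your series:
[{ $sort: { mySeriesField: 1 } }]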
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Series changes on rerender | 2020-07-26T12:06:28.714Z | Series changes on rerender | 1,618 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "We are currently working on a .NET Core application and we are using MongoDB for it. I am using .NET Driver to access the data. All the data we are saving in a collection has different types of data structure.For example, it has first document which has Name, Phone and a Payload which has embedded document in which we are saving address:\n{ “Name”: “TestName”, “Phone”: “23846787”, “Payload”: { “Address”: “TestAddress”, “City”: “TestCity” }, “Active”: true }Then in the same collection we have another document which has Name, Phone and a Payload which is completely different from first one:{ “Name”: “TestName2”, “Phone”: “54568765”, “Payload”: { “Weight”: “70”, “Age”: “45”, “Gender”: “Female” } }Now when we use .NET driver to get both of these records, we get an error because it cannot cast the embedded document into an object (as it doesnt know about the object). We need to tell it, which type of object is the embedded document. But we dont want to do it because we have several types of payload we want to save. I tried using discriminator “_t” but it didn’t help.Can someone please suggest how we can read the data when we have different elements in the document and also has embedded documents ??Thank you\nJW",
"username": "Jason_Widener"
},
{
"code": "",
"text": "Hi @Jason_Widener and welcome to the forum,Can someone please suggest how we can read the data when we have different elements in the document and also has embedded documents ??You can utilise BsonDocument, which is the default type used for documents. It should be able to handle dynamic documents.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thank you for your reply. Yes I can utilize BsonDocument but when I get the BsonDocument, I need to cast it as original object which is not same always. For example, an order has a property of type IPayment which can be CreditCard or Giftcard. In one case payment type can be CreditCard and in another case it can GiftCard. So sometimes its has discriminator as _CreditCard and sometime it has _Giftcard. I can use BsonClassMap.RegisterClassMap but its a big class with several subclasss in the property. I dont want to register every single one of them.",
"username": "Jason_Widener"
}
] | Read Documents From A Collection With Different Structure | 2020-07-17T02:10:47.328Z | Read Documents From A Collection With Different Structure | 2,111 |
null | [
"sharding"
] | [
{
"code": "events{\n user_id: <ObjectId>,\n company_id: <ObjectId>,\n event_id: <string>,\n start_date: <datetime>,\n end_date: <datetime>\n}\n{company_id: 1, user_id: 1, event_id: 1}{company_id: 1, user_id: 1, start_date: 1, end_date: 1}{company_id: ...., user_id: ...., start_date: ...., end_date: ....}event_idmongosevent_id{company_id: 1, user_id: 1}",
"text": "Hello, I hope you’re all doing well.NOTE: the mongodb’s version in question is 3.0We have a sharded setup of mongodb across several nodes.\nLet’s say there is a collection events with documents like:The index used as a shard key is {company_id: 1, user_id: 1, event_id: 1}. Also, there is a multikey index on this collection {company_id: 1, user_id: 1, start_date: 1, end_date: 1}Now, the query like {company_id: ...., user_id: ...., start_date: ...., end_date: ....} comes in. (NOTE: the query doesn’t have the event_id field).My questions are:Would mongos be ALWAYS sending this query to all shards because it couldn’t figure out the exact shard without the event_id field?The same situation, but with the shard key by {company_id: 1, user_id: 1}. Will mongos be asking only a single shard for data without unnecessary requests to other shards?Thank you.",
"username": "Anton_Koval"
},
{
"code": "",
"text": "Hello @Anton_Koval, welcome to the forum!In the two questions you have mentioned, the query criteria use the Shard Key Prefix. Hence, both the queries will be Targeted Operations.A targeted operation, uses a shard key (or its prefix, in case of a compound shard key) and accesses a specific shard or set of shards (but, not all the shards). As long as you are specifying any of the possible prefixes of the shard key in the query criteria, it will be a targeted operation.",
"username": "Prasad_Saya"
},
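Editor's note: a hedged illustration of a targeted query (mongo shell; the ObjectId and date values are placeholders). Running explain() is a quick way to confirm the plan hits a single shard rather than scatter-gathering:
db.events.find({
  company_id: ObjectId("0123456789abcdef01234567"),  // shard key prefix field 1 (placeholder value)
  user_id:    ObjectId("0123456789abcdef01234568"),  // shard key prefix field 2 (placeholder value)
  start_date: { $gte: ISODate("2020-07-01") },       // non-shard-key criteria are fine
  end_date:   { $lte: ISODate("2020-07-31") }
}).explain()  // look for a SINGLE_SHARD stage rather than SHARD_MERGE over all shards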
{
"code": "",
"text": "NOTE: the mongodb’s version in question is 3.0Hi Anton,Please note that MongoDB 3.0 reached end of life in February 2018, and no longer receives any security or maintenance updates. It’s definitely time to plan your upgrade to a supported version (currently MongoDB 3.6 or newer). There have been significant stability, performance, security, and feature improvements since the 3.0 release series.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you. It makes sense.",
"username": "Anton_Koval"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB shard key targeting questions | 2020-07-24T19:04:18.812Z | MongoDB shard key targeting questions | 2,163 |
null | [] | [
{
"code": "",
"text": "Hi All,Realm DB wants/prefers a fixed schema which is great.I have a project that has over 100 collections and some collections over 100,000 documents.How can one:automate a process that will go through each collection and apply a forced schema to each document. without DELETING any document or data?What is the correct way using Node.JS to enforce a data schema when creating/modifiying a document?Thanks in advance",
"username": "Barry_Fawthrop"
},
{
"code": "",
"text": "Hi @Barry_Fawthrop,You can sample your collections using the realm rules section to auto generate the schema.Please note that this reqired for configuration mode sync and graphQL , other sections of realm apps does not require this.\nPerhaps you can automate this this through administration API.Best regards\nPavel",
"username": "Pavel_Duchovny"
}
] | Update collection to enforce schema | 2020-07-26T15:32:08.286Z | Update collection to enforce schema | 1,406 |
null | [] | [
{
"code": "",
"text": "Hi,\nI am looking for steps to setup and test a sharded colleciton in MongoDB Atlas but not able to find the correct steps. It seems it needs to be done using a mix of GUI and command options. The steps mentioned on this page are not correct.Can anyone help with the steps to shard a collection. Not using the command line but from Atlas console.\nThanks!",
"username": "Juned_Ahsan"
},
{
"code": "",
"text": "Hi @Juned_Ahsan,The steps presented with GUI are only relevant if you deployed a “Global Sharded Cluster” and not just any sharded cluster.If you have a standard 1 region Sharded Cluster you need to use the mongo shell to shard collections:Best regards\nPavel",
"username": "Pavel_Duchovny"
},
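Editor's note: a minimal mongo-shell sketch of sharding a collection on a (non-global) sharded cluster; the database, collection, and shard key below are illustrative, and the connected user needs the appropriate privileges:
sh.enableSharding("mydb")                                        // enable sharding on the database
db.getSiblingDB("mydb").mycoll.createIndex({ customerId: 1 })    // index backing the shard key
sh.shardCollection("mydb.mycoll", { customerId: 1 })             // or { customerId: "hashed" }
sh.status()                                                      // verify the collection is sharded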
{
"code": "",
"text": "Thanks Pavel. Any plans to bring those features on GUI?",
"username": "Juned_Ahsan"
},
{
"code": "",
"text": "Hi @Juned_Ahsan,Not sure if this will become available anytime soon.Please raise your interest on https://feedback.mongodb.comThanks!",
"username": "Pavel_Duchovny"
}
] | How to shard a collection in MongoDB Atlas GUI | 2020-07-25T08:46:57.524Z | How to shard a collection in MongoDB Atlas GUI | 1,774 |
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "Hi,The documentation is not clear but is it true I cannot use google authentication as an option for authenticating a Webhook call ?Is there an alternative ?",
"username": "ChrisNAU"
},
{
"code": "",
"text": "@ChrisNAU Did you ever figure this out?\nI have similar concerns. I want to create several microservices using ‘3rd Partly Services’ webhooks.\nBefore I go down a rabbit hole, I would like to know how to authenticate users who call my webhooks.\nAs you’ve mentioned documentation is lacking and Realm still appears to be a BETA product. The UI and features are constantly changing. It’s a bit frustratiing, but I am determined to worked through the Realm evolution.",
"username": "Herb_Ramos"
},
{
"code": "",
"text": "@Herb_Ramos - Nope have not found a way. Would really love this to work",
"username": "ChrisNAU"
}
] | Login via Webhook using google authentication | 2020-07-11T05:53:18.788Z | Login via Webhook using google authentication | 1,446 |
null | [] | [
{
"code": "",
"text": "Hey there, I’m running on a Raspberry Pi 4 with Ubuntu 20.04 and MongoDB version 4.2.7. Today I installed Mongodb but by binding the ip of my Workstation I got this Error (48 - \" Failed to set up listener: SocketException: Cannot assign requested address\"). But by pinging this address I get a response.I’ve tried to bind this address over the config and also with “monogd --bind_ip 127.0.0.1:28000,192.168.xxx.xxx”. I’ve also tried to use another distro but no success…",
"username": "J_Hoffm"
},
{
"code": "bind_ipmongod --bind_ip 127.0.0.1,192.168.xxx.xxx\nmonogdmongod--port 28000",
"text": "Hi @J_Hoffm,I’ve tried to bind this address over the config and also with “monogd --bind_ip 127.0.0.1:28000,192.168.xxx.xxx”. I’ve also tried to use another distro but no success…The bind_ip is a comma separated list of IP addresses without port number. Have you triedNote you also had monogd which I assume was just a typo when posting your question.If you want to change the port that mongod listens on you can add --port 28000 to the command above.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hey @Doug_Duncan,\nthank you for your reply.Now I’ve tried to bind these ips without the port but it makes no difference (tried mongo config & command). I’ve also tired to bind the ips without localhost (127.0.0.1).But every time I get the same Error (48): “SocketException: Cannot assign requested address”.",
"username": "J_Hoffm"
},
{
"code": "",
"text": "You may be having another mongod instance already running on that portTry with another port and see if it works",
"username": "Ramachandra_Tummala"
},
{
"code": "ifconfig | grep [i]net\n",
"text": "Please provide the output of",
"username": "steevej"
}
] | Failed to set up listener: Cannot assign requested address when binding ips | 2020-06-05T19:02:44.641Z | Failed to set up listener: Cannot assign requested address when binding ips | 5,686 |
null | [
"java"
] | [
{
"code": "MongoDB.utilMongoDB.bytes",
"text": "While migrating Mongo Java driver from 3.6.3 to 4.0.0, I found that MongoDB.util and MongoDB.bytes are deprecated in the newer version.Is there an alternative that I can use? Since the above packages are deprecated, it results in compilation issues in code",
"username": "390ed733ef3432fd811d"
},
{
"code": "com.mongodb.BytesJSONJSONCallbackJSONSerializerscom.mongodb.utiltoJsonparseBasicDBObject",
"text": "com.mongodb.Bytes class is deprecated with Java driver v3.9. The deprecation note says: “there is no replacement for this class”.Three classes (JSON, JSONCallback and JSONSerializers) in com.mongodb.util package are deprecated in Java Driver v3.5 (not Java driver v3.6).The deprecation note says: “This class has been superseded by to toJson and parse methods on BasicDBObject”, for all the three classes.",
"username": "Prasad_Saya"
}
] | Java driver migration from 3.6 to 4.0 | 2020-07-21T10:23:47.047Z | Java driver migration from 3.6 to 4.0 | 3,146 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "I’m looking at using MongoDB Realm Sync as the backend for an app that will allow users to upload and share reviews of music albums, follow other users, and search for albums and artists.Looking at the docs, I get a bit stuck re: partition keys. Most of the data needs to be public (i.e. album, artist, review data), but if I partition these all into a ‘public’ realm, then the cached database will end up growing to a huge size in local storage. Is there a better way of handling this/am I understanding how partitioning works correctly? Also, is it true that the whole database will be downloaded/cached to the device – I assume this could be problematic with large data sets? Any docs you could point me to as well could be great!The other way I was thinking of going about this was to partition data into realms for each user, and then use a separate API that I write using MongoDB Realm 3rd party services to get review or artist or album data, OR use MongoDB realm functions to basically do the same thing but avoid having to use HTTP requests.I’ve also been looking at the code for RealmSwift, and stumbled upon RealmApp.mongoClient, which I couldn’t find any documentation for. Is this a way to access this ‘public’ data (or any cluster data for that matter) without using Realm Sync? Or is this feature not fully fleshed out yet.Sorry for the long post – I’m just trying to figure out whether MongoDB Realm will suit my use case or if I’ll have to go another route. Thanks so much! ",
"username": "Pierre_Rodgers"
},
{
"code": "",
"text": "Hi @Pierre_Rodgers,I think your second part is the correct approach.Not all data within the realm application have to be synced to the device. You can choose which databases or collections you are going to sync and partition those by a logical device partition key.The other parts which does not need “offline-first” access can be accessed via the Realm sdk directly query or function.Pushing aggregations or text search to the Atlas platform will result in better performance.I think the mongoClient is a last resort option if your queries are not available through standard collection api.Let me know if that covers your questions.Best regards,\nPavel",
"username": "Pavel_Duchovny"
}
] | MongoDB Realm for large public data sets | 2020-07-24T19:04:49.614Z | MongoDB Realm for large public data sets | 2,503 |
null | [] | [
{
"code": "",
"text": "Hi\nI built a mobile app using mongodb stitch and atlas and it was great. I used mongodb authentication email/password, used stitch functions as well as connecting with AWS services (ses in this case)\nHowever, I needed to add a live chat feature and at that point I had to use an third party service to manage that. I was wondering if with realm, there is a possibility to build live type of functionality (mainly be able to listen to the server for any data change)",
"username": "ismael"
},
{
"code": "",
"text": "@ismael You certainly could implement a chat like feature using realm but it really depends on your use case because realm sync uses partitions to synchronize data between users - https://docs.mongodb.com/realm/sync/partitioning/So if the chat rooms are limited and can be statically defined then you can share realms(partitions) between users which would replicate chat messages. For instance, an application where users are grouped into shared branch locations, and at those small branch locations users can chat with each other, individually or in groups.The issue that can arise is if you need more complicated permissions within the branch or within chatrooms - the partitioning model is read or read/write per realm/partition per-user.A chat application is pretty complicated in and of itself to implement which is why there are so many 3rd party providers that just do chat as a service.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks Ian\nIs it not possible to define realm partitions on the fly?",
"username": "ismael"
},
{
"code": "",
"text": "@ismael It is possible to define partitions on the fly, yes",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@ismael I recently wrote a sample chat example app for MongoDB Realm. I have made the code open source in a public GitHub repository here. It is not but a few hundred lines of code.Super Simple Chat app for MongoDB Realm. Contribute to Cosync/SuperSimpleChat development by creating an account on GitHub.Please feel free to download it and play with it.",
"username": "Richard_Krueger"
}
] | Live chat or similar type of application using Realm | 2020-06-17T13:14:59.199Z | Live chat or similar type of application using Realm | 2,977 |
null | [
"swift",
"atlas-device-sync"
] | [
{
"code": "class UserData: Object {\n @objc dynamic var _id = ObjectId.generate()\n @objc dynamic var _partition = \"\"\n @objc dynamic var uid = \"\"\n @objc dynamic var name = \"\"\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n override static func indexedProperties() -> [String] {\n return [\"uid\"]\n }\n \n convenience init(uid: String, partition: String, name: String) {\n self.init()\n self._partition = partition\n self.uid = uid\n self.name = name\n }\n}\n let results = RealmManager.shared.userRealm.objects(UserData.self)\n \n self.notificationToken = results.observe { (changes: RealmCollectionChange) in\n \n switch changes {\n case .initial:\n NSLog(\"initial\")\n if results.count > 0 {\n self.name = results[0].name\n }\n \n case .update(let results, _, _, _):\n NSLog(\"update\")\n if results.count > 0 {\n self.name = results[0].name\n }\n \n case .error(let error):\n // An error occurred while opening the Realm file on the background worker thread\n fatalError(\"\\(error)\")\n }\n }\n\n",
"text": "We have this random bug that is occurring on reading data from a user partition. We are writing a simple Swift app that syncs. Our table is called UserDataThe partition is always set to the uid. Upon signup we create a user data for each user. That works fine. It’s later on when we try to read it, we have problems.This code works most of the time, but for some users we can never retrieve the record. We see the record in Compass, but always get a result count of zero back in code. If we terminate the “Sync” - not pause it - the bug goes away. This seems completely random. I was wondering if we need to make the partition key indexed?Thanks",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "@Richard_Krueger How are you opening the Realm? Are you making sure to use realm.asyncOpen? This will download the data to disk first before returning a valid realm reference for you to observe",
"username": "Ian_Ward"
},
{
"code": "self.userRealm = \n try! Realm(configuration: user.configuration(partitionValue: uid))\n",
"text": "@Ian_Ward This is what I was usingI was not using realm.asyncOpen.",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "@Ian_Ward Ok I found the section describing this issue in the docshttps://docs.mongodb.com/realm/ios/sync-data/Quick question, we are also writing a Node.js app as well. Is there an equivalent to realm.asyncOpen in Node.js?",
"username": "Richard_Krueger"
},
{
"code": "Realm.open",
"text": "Yes. For node.js you can use Realm.open",
"username": "Ian_Ward"
},
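Editor's note: a hedged Node.js counterpart of asyncOpen using Realm.open (the app ID and schema name are placeholders; Realm.open resolves only after the initial data set has been downloaded):
const Realm = require("realm");

async function openSyncedRealm() {
  const app = new Realm.App({ id: "your-realm-app-id" });       // placeholder app ID
  const user = await app.logIn(Realm.Credentials.anonymous());  // or email/password, etc.
  return Realm.open({
    schema: [UserDataSchema],                                   // assumed schema definition
    sync: { user, partitionValue: user.id },
  });
}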
{
"code": "",
"text": "@Ian_Ward We have switched to using realm.asyncOpen and the problem seems to have gone away. Thanks for the tip.I would suggest getting the docs folks to update the Quick Start section on “Open A Realm” to reflect this.",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "Thank you guys !!\nI was beating my head having the same issue asyncOpen did it.",
"username": "Barry_Fawthrop"
},
{
"code": "",
"text": "@Barry_Fawthrop your welcome, I still have a head wound, but my program runs great!",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | User partition not reading data | 2020-07-13T15:47:06.903Z | User partition not reading data | 2,283 |
null | [] | [
{
"code": "",
"text": "Hello All,Our App runs at Azure with Mongo on Atlas and I am wondering how to set a custom domain name (DNS) at azure to our Atlas Cluster DB.Ex: in my APP we will use ( mongo-db-00.linkle.io / mongo-db-01.linkle.io / mongo-db-02.linkle.io ) and the Azure DNS will point to the correct Mongo DB in Atlas.Thank you\nM.",
"username": "MarceloRamos"
},
{
"code": "",
"text": "Hi Marcelo,This concept is generally to be avoided because MongoDB uses client-side load balancing and primary-discovery within the driver tier: Basically the MongoDB driver powering your application will ask the MongoDB Atlas cluster what it believes its hostnames to be as well as which node is currently the Primary for taking writes. If you add a layer of indirection here this introduces risk: the driver will discovery the cluster’s identity and start using the cluster hostnames and you might not realize that you could be in a degraded availability state. Note for completeness that MongoDB Sharded clusters – where you connect through a set of mongos’s – are less susceptible to these issues.Bottom line here is I highly recommend you keep it simple and use the built-in Atlas cluster hostnames.Cheers\n-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Hello Andrew, thank you.I will keep simple.",
"username": "MarceloRamos"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | DNS - How to use Azure custom DNS with MongoDB in Atlas | 2020-07-21T19:23:53.536Z | DNS - How to use Azure custom DNS with MongoDB in Atlas | 3,203 |
null | [
"swift"
] | [
{
"code": "",
"text": "Is there a plan to provide full support for the Swift Package Manager soon?Currently realm-cocoa can be added as an SPM dependency but only local database functionality is included (no Realm Sync) and there are issues like SwiftUI Previews failing to compile and run.Last time I asked (sometime last year I think) I was told the blocker is Realm Sync - because it’s closed source and SPM didn’t have support for binary dependencies yet.In the meantime, SPM gained support for binary dependencies and I heard that Realm Sync has been open sourced recently - two different alternatives that remove the blocker Moving to rely on SPM as the sole dependency manager is a priority for us and the only dependency currently holding us back is realm-cocoa. I’d love to hear news regarding this.",
"username": "nimi"
},
{
"code": "",
"text": "@nimi Yes we are actively working on it and should be released in the next quarter",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Great news! Thanks Ian.",
"username": "nimi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Cocoa Full SPM Support | 2020-07-24T15:58:40.956Z | Realm Cocoa Full SPM Support | 2,704 |
null | [] | [
{
"code": "",
"text": "Hello,Please let me know best practice Ubuntu/Linux OS directory structure for MongoDB.Please let me know if I am incorrect for the following:We need the following LVM as FileSystems in Ubuntu/Linux for MongoDB",
"username": "Jitender_Dudhiyani"
},
{
"code": "",
"text": "Hi @Jitender_Dudhiyani,Please review our guide for filesystem and linux :Unfortunately,we cannot recommend on a proper size for your instances as we do not know the data volume/type or the compression rate you will have. We also cannot predict your future growth.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "dbPath",
"text": "Hi @Jitender_Dudhiyani,Appropriate filesystem sizes will depend on your use case, but I expect the majority of space should be allocated to your dbPath (data and indexes).The MongoDB binaries add up to a few hundred MB at most, and could be installed on the same filesystem as your O/S or other applications.Log files depend on your system activity and retention period, but if you rotate and compress old logs 50GB could represent a very generous time period.As @Pavel_Duchovny suggested, you need to plan according to your use case and expected growth.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "@Pavel_Duchovny, @Stennie_X Thank you for your suggestions. Keeping the size requirements as side, is it a best practice to separate Data & Log? Or, it is okay to place both on the same drive?",
"username": "SatyaKrishna"
},
{
"code": "",
"text": "Hi @SatyaKrishna,Separating data and log is not that impacting. However, we do recommend separating journal directoy and the data.Having said that, you should only do that if the two devices are completely different with parallel write capabilities and your backup method is compatible with this separation.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you for your feedback",
"username": "SatyaKrishna"
}
] | Linux Directory Requiremnts | 2020-07-20T20:32:44.551Z | Linux Directory Requiremnts | 1,566 |
null | [
"atlas-functions"
] | [
{
"code": "const query = { };\nconst projection = { _id_:1, studentId: \"$_id\", };\nawait students.find(query, projection).toArray()\n",
"text": "Hello,I am trying to alias the _id field in a Realm function using the find() method.\nAccording to the documentation, the second parameter in the find() method is a projection.I have a collection called students with the following fields\n_id, firstName, lastName, accountIdI am attempting to alias the return using a projection\nHere is a code snippet:Here are my results[\n{\"_id\":“5e8fc09770f5e3f18c730a21”},{\"_id\":“5e8fc40470f5e3f18c730a22”},{\"_id\":“5e90838738eff556f0fa48d0”},{\"_id\":“5e9086a138eff556f0fa48d1”},{\"_id\":“5e908dba38eff556f0fa48d2”},{\"_id\":“5eea898c1a3f23cd2980e699”}]The alias did not work and I am also missing the other fields.Anyone have an idea? I went through the Realm docs and the samples are very scarce",
"username": "Herb_Ramos"
},
{
"code": "exports = function(arg){\n\nconst mongodb = context.services.get(\"mongodb-atlas\");\nconst itemsCollection = mongodb.db(\"db\").collection(\"Cat\");\n\n const query = { \"name\": \"joey\"};\n const projection = { \"_id\": 1, \"StudentId\": \"$_id\"};\n\nreturn itemsCollection.find(query, projection)\n .toArray()\n .then(items => {\n console.log(`Successfully found ${items.length} documents.`)\n items.forEach(console.log)\n return items\n })\n .catch(err => console.error(`Failed to find documents: ${err}`))\n\n};",
"text": "Hi Herb -Can you try putting quotes around “_id” and “studentId”? Also, to return any other field you will have to add it to your projection.For example, I tried something similar and got this snippet to work:",
"username": "Sumedha_Mehta1"
},
{
"code": " \"_id\": \"5e8fc40470f5e3f18c730a22\",\n \"accountId\": \"5e89f69d1c9d440000929b9a\",\n \"firstName\": \"Pat\",\n \"lastName\": \"Doe\"\n },\n {\n \"_id\": \"5e90838738eff556f0fa48d0\",\n \"accountId\": \"5e89f69d1c9d440000929b9a\",\n \"firstName\": \"Michael\",\n \"lastName\": \"Doe\"\n },\n {\n \"_id\": \"5e9086a138eff556f0fa48d1\",\n \"accountId\": \"5e89f69d1c9d440000929b9a\",\n \"firstName\": \"Sam\",\n \"lastName\": \"Doe\"\n },\n {\n\n \"_id\": \"5e908dba38eff556f0fa48d2\",\n \"accountId\": \"5e89f69d1c9d440000929b9a\",\n \"firstName\": \"Maria\",\n \"lastName\": \"Doe\"\n },\n {\n\n \"_id\": \"5f18c5e9c0764f3b76890023\",\n \"accountId\": \"5e89f69d1c9d440000929b9a\",\n \"firstName\": \"John\",\n \"lastName\": \"Doe\"\n }\n]",
"text": "Hello Sumedha,I tried putting the quotes around the fields and still can’t get the alias to work\nconst projection = { “_id”: 1, “studentId”:\"$_id\", “firstName”:\"$firstName\",“lastName”:\"$lastName\", “accountId”:\"$accountId\"};Unfortunately, the alias is not working. I also tried aliasing other fields (ex: “first”:\"$firstName\")\nOther alias fields don’t appear at all in the outputTo be clear, I am using a Realm ‘3rdPartly’ HTTP Get function, not Compass or the CLI{\n[\n{\n“_id”: “5e8fc09770f5e3f18c730a21”,\n“accountId”: “5e89f69d1c9d440000929b9a”,\n“firstName”: “Nancy”,\n“lastName”: “Doe”\n},\n{",
"username": "Herb_Ramos"
},
{
"code": "",
"text": "How are you testing your function? Is it with the Realm UI Console?Do you mind pasting your entire function snippet here as well.",
"username": "Sumedha_Mehta1"
},
{
"code": "getStudents accountId:${accountId}const query = { accountId:accountId };\n//const projection = { \"studentId\": \"$_id\", };\nconst projection = { \"_id\": 1, \"studentId\":\"$_id\", \"firstName\":\"$firstName\",\"lastName\":\"$lastName\", \"accountId\":\"$accountId\"};\n\nconsole.log ('getting students query:', JSON.stringify(query));\n\nresponse.setHeader(\"Content-Type\",\"application/json\");\n\n//await students.find(query).toArray()\nawait students.find(query, projection).toArray()\n.then(result => {\n if(result) {\n response.setStatusCode(200);\n //response.setBody(`{\"students\":${result}`);\n response.setBody(`{\"students\":${JSON.stringify(result)}}`);\n }\n else {\n console.log(\"students not found:\",JSON.stringify(result));\n response.setStatusCode(404);\n response.setBody(`{message:\"No students not found for given criteria\"}`);\n }\n}).catch(err => {\n console.log(\"error getting students:\",err);\n response.setStatusCode(500);\n response.setBody(`{error:${err}}`);\n})\n",
"text": "Sure. Its a webhook in Realm.\nI’m not actually using a Realm “Function” but a Realm 3rdParty HTTP webhook.Here is the code:/*exports = async function getStudents(payload, response) {\nconst {accountId } = payload.query;\nconst db = context.services.get(“mongodb-atlas”).db(“usersDB”);\nconst students = db.collection(“students”);\nconsole.log(getStudents accountId:${accountId});}",
"username": "Herb_Ramos"
},
{
"code": "",
"text": "Hey Herb,I’m looking into why renaming aliases seems to work sometimes on Realm (e.g. my function), but not in your caseIn the meantime @wan has an alternative solution using a pipeline that should give you the same result.",
"username": "Sumedha_Mehta1"
},
{
"code": "const pipeline = [\n { \"$addFields\": {\"StudentId\": \"$_id\" } }\n ]\n\n return students.aggregate(pipeline).toArray()\n .then(students => {\n console.log(`Successfully grouped purchases for ${students.length} customers.`)\n students.forEach(console.log);\n return students\n })\n .catch(err => console.error(`Failed to find documents: ${err}`));*/\n",
"text": "Hi @Herb_Ramos,The alias did not work and I am also missing the other fields.If you’re looking to project a field under a different name, and also include other fields as well you should try Realm aggregation pipeline: project document fields. An example for your use case would be:Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thanks Wan. I will give this a try. I am however curious if there is a bug based on my original request.",
"username": "Herb_Ramos"
},
{
"code": "find()",
"text": "Hi @Herb_Ramos,I am however curious if there is a bug based on my original request.It is not a bug, and apologies for the confusion.If your Realm application is linked to MongoDB Atlas version 4.4 (currently available in beta in select regions only), you should be able to project field name alias using find() projection syntax. If you are using the current stable MongoDB Atlas version 4.2, you should use the aggregation pipeline to project field name aliases.Since your valid code function did not return the projection output aliases, I’m assuming that your MongoDB Atlas cluster is on version 4.2Regards,\nWan.",
"username": "wan"
},
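Editor's note: for completeness, a hedged sketch of what the 4.4+ form would look like once the cluster is upgraded — find() projections accept aggregation expressions from MongoDB 4.4 onward (accountId is assumed to be in scope):
db.students.find(
  { accountId: accountId },
  { _id: 1, studentId: "$_id", firstName: 1, lastName: 1, accountId: 1 }
)
// On 4.2, use the $addFields / $project aggregation shown above instead.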
{
"code": "",
"text": "Thanks Wan. I will mark this as a solution. I am on 4.2. I have not tried the projection on 4.4 and I am assuming it will work. For now I will used the pipeline aggregate you suggested on 4.2. Thanks again.",
"username": "Herb_Ramos"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Alias field in Realm function | 2020-07-22T20:22:01.993Z | Alias field in Realm function | 12,806 |
null | [
"sharding",
"configuration"
] | [
{
"code": "2020-07-22T11:19:30.519+0200 E NETWORK [ReplicaSetMonitor-TaskExecutor] replset name mismatch: expected \"db_rs006\", but remote node mongop_db0063:27018 has replset name \"db_rs017\", ismaster: { hosts: [ \"mongop_db0171:27018\" ], passives: [ \"mongop_db0172:27018\", \"mongop_db0173:27018\" ], setName: \"db_rs017\", setVersion: 5, ismaster: false, secondary: true, primary: \"mongop_db0171:27018\", passive: true, me: \"mongop_db0172:27018\", lastWrite: { opTime: { ts: Timestamp(1595409557, 109), t: 1 }, lastWriteDate: new Date(1595409557000), majorityOpTime: { ts: Timestamp(1595409557, 109), t: 1 }, majorityWriteDate: new Date(1595409557000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1595409537042), logicalSessionTimeoutMinutes: 30, connectionId: 46, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ \"snappy\", \"zstd\", \"zlib\" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1595409557, 109), $configServerState: { opTime: { ts: Timestamp(1595409549, 36), t: 5 } }, $clusterTime: { clusterTime: Timestamp(1595409572, 11), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1595409557, 109) }$ mongo localhost:27018 -u mongo-admin -p$MyPwd --authenticationDatabase admin --eval \"db.adminCommand({replSetGetStatus:1})\" MongoDB server version: 4.2.8 { \"set\" : \"db_rs006\" ... }",
"text": "hi all,\nwe are deploying a number of replica-sets, each consisting of 3 servers. For one of those servers of a new replica set “db_rs017”, we made an error and assigned the IP addr of another server that is already configured in its own, other replica set “db_rs006”\nThat has been corrected fairly quick, but all other servers is our cluster are since then reporting their confusion:\n2020-07-22T11:19:30.519+0200 E NETWORK [ReplicaSetMonitor-TaskExecutor] replset name mismatch: expected \"db_rs006\", but remote node mongop_db0063:27018 has replset name \"db_rs017\", ismaster: { hosts: [ \"mongop_db0171:27018\" ], passives: [ \"mongop_db0172:27018\", \"mongop_db0173:27018\" ], setName: \"db_rs017\", setVersion: 5, ismaster: false, secondary: true, primary: \"mongop_db0171:27018\", passive: true, me: \"mongop_db0172:27018\", lastWrite: { opTime: { ts: Timestamp(1595409557, 109), t: 1 }, lastWriteDate: new Date(1595409557000), majorityOpTime: { ts: Timestamp(1595409557, 109), t: 1 }, majorityWriteDate: new Date(1595409557000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1595409537042), logicalSessionTimeoutMinutes: 30, connectionId: 46, minWireVersion: 0, maxWireVersion: 8, readOnly: false, compression: [ \"snappy\", \"zstd\", \"zlib\" ], ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1595409557, 109), $configServerState: { opTime: { ts: Timestamp(1595409549, 36), t: 5 } }, $clusterTime: { clusterTime: Timestamp(1595409572, 11), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1595409557, 109) }I tried already a few actions:If I read out the replica-set info from server “mongop_db0063”, it correctly reports that it belongs to “db_rs006”:\n$ mongo localhost:27018 -u mongo-admin -p$MyPwd --authenticationDatabase admin --eval \"db.adminCommand({replSetGetStatus:1})\" MongoDB server version: 4.2.8 { \"set\" : \"db_rs006\" ... }I can not trash this replica set, cos it contains data.=> Where sits the misconfiguration ? Is it in the config servers (how to fix it), or in all the shard servers (how to fix it) ?many thx in advance to anyone who finds the time for a quick answer !",
"username": "Rob_De_Langhe"
},
{
"code": "",
"text": "Super fun. Sounds more like a networking issue than a mongo one.After 9hrs hopefully this is corrected.I would be checking/clearing arp entries on the mongos and nodes in the db_rs-006. Might be required to do this on network switches and routers too.",
"username": "chris"
},
{
"code": "",
"text": "hi Chris,\nthx very much for taking the time to answer.\nOur issue is lasting already for a few weeks, so slightly longer than 9hrs \nARP cache entries have a Time-To-Live of 180 secs, so that won’t be the solution to clear now the ‘history’ of this incorrect replica-set name for server “mongop_db0063”.\nI assume this correlation between server and replica-set name is stored somewhere in a file on the servers (which servers? which file(s) ?)",
"username": "Rob_De_Langhe"
},
{
"code": "db_rs017",
"text": "As you say the node itself is configured correctly. The other nodes are connnecting to a node that is configured for db_rs017.I’m still pretty certain you are experiencing a network issue, not a mongo one. I’m pretty sure that mongo is not caching a name to ip. This is evident as when your network misconfiguration occurred the cluster stared reporting the error.You should try connecting to mongop_db0063:27018 from one of it replicaset peers and run the same repSetGetStatus. I’d be surprised if it returns db_rs006.Have you restarted the host or network stack of mongop_db0063 ?",
"username": "chris"
},
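A quick way to act on that suggestion, reusing the credentials already shown in this thread: from one of the db_rs006 peers, ask the suspect node which set it reports and compare with what a local connection shows.

mongo mongop_db0063:27018 -u mongo-admin -p$MyPwd --authenticationDatabase admin \
  --eval "db.adminCommand({replSetGetStatus: 1}).set"

If a peer sees db_rs017 here while a local connection on the box reports db_rs006, that points at stale name/IP resolution on the network rather than at the mongod configuration itself.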
{
"code": "",
"text": "ok, I have restarted the entire cluster… The issue is gone now. Good to know in case this might happen again (we won’t make IP mistakes anymore, for sure )thx Chris for your replies",
"username": "Rob_De_Langhe"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to force a reload of the replica-set config in shard servers | 2020-07-22T11:47:06.570Z | How to force a reload of the replica-set config in shard servers | 3,180 |
null | [
"keyhole"
] | [
{
"code": "",
"text": "I’m currently working on attempting to automate a process that would use both keyhole and maobi and wondered if this is something that been investigated before?\nThe docs for maobi imply I can pass cluster details but I can’t for the life of me figure out what those params might be. I’ve inspected the docker container and had a look about but cannot see anything that I think I can tap into.Does you happen to know if it’s possible to automate the creation of the HTML doc via a script / jenkins pipeline? I had considered trying to fluff the upload action of the maobi form and then either use wget/curl but again, hit the wall there also.I’m using Atlas with the purpose of this script being to make a daily report simpler to achieve. If I were then able to filter on long running queries that would also be fantastic but just automating the first part would be great.\nThanks",
"username": "Ste_G"
},
{
"code": "",
"text": "You’re are already on Atlas and I suggest you take advantage of Performance Advisor which has a sophisticated algorithm based on usages and other factors to make recommendations. I haven’t retied on what you intend to do. But, it’s doable. It’s an HTML form and you should be able to curl to the endpoint and redirect the output to a file.",
"username": "ken.chen"
},
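A rough sketch of that curl approach. The Maobi port and the form field name here are assumptions, so check the Maobi container/docs before relying on them:

# 1. Collect stats with Keyhole (as earlier in this thread)
keyhole --info "mongodb://user:pwd@host:27017/?authSource=admin"

# 2. Post the resulting file to the Maobi upload form and save the HTML report
curl -s -X POST -F "file=@./keyhole-output.bson.gz" http://localhost:3030/ -o daily-report.html

Both steps can then be dropped into a cron job or a Jenkins pipeline stage to produce the daily report.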
{
"code": "",
"text": "Hey Ken,Thanks for taking the time to reply. I was using the Atlas free tier as this is part of a learning project / experiment. I will further my attempts with curl to reach some sort of automation and share any results if they’re…somewhat credible ",
"username": "Ste_G"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Automating Keyhole / Maobi via a pipeline/script | 2020-07-24T07:01:21.640Z | Automating Keyhole / Maobi via a pipeline/script | 4,427 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "With a lookup aggregation, is the order of the returned array configurable?\nIdeally I would want it to be in order of _id or date of the looked up records.",
"username": "Neil_Albiston1"
},
{
"code": "db.test2.insertOne({ position: 1, linkedTo: 'B' });\ndb.test2.insertOne({ position: 2, linkedTo: 'A' });\ndb.test2.insertOne({ position: 3, linkedTo: 'C' });\ndb.test1.insertMany([ \n { name: 'B' }, \n { name: 'C' }, \n { name: 'A' } \n]);\ndb.test1.aggregate([\n {\n $lookup: {\n from: 'test2',\n localField: 'name',\n foreignField: 'linkedTo',\n as: 'joined',\n }\n },\n {\n // this stage used just to simplify the output\n $project: {\n 'joined.position': 1,\n }\n },\n]).pretty();\n[\n {\n \"_id\" : ObjectId(\"5f11e59833d75e5a740b1790\"),\n \"joined\" : [\n { \"position\": 1 }\n ]\n },\n {\n \"_id\" : ObjectId(\"5f11e59833d75e5a740b1791\"),\n \"joined\" : [\n { \"position\": 3 }\n ]\n },\n {\n \"_id\": ObjectId(\"5f11e59833d75e5a740b1792\"),\n \"joined\": [\n { \"position\": 2 }\n ]\n }\n]\ndb.test1.aggregate([\n {\n $lookup: {\n from: 'test2',\n localField: 'name',\n foreignField: 'linkedTo',\n as: 'joined',\n }\n },\n {\n // this stage used just to simplify the output\n $project: {\n 'joined.position': 1,\n }\n },\n {\n $sort: {\n 'joined.position': 1,\n }\n }\n]).pretty();\n[\n {\n \"_id\" : ObjectId(\"5f11e59833d75e5a740b1790\"),\n \"joined\" : [\n { \"position\": 1 }\n ]\n },\n {\n \"_id\" : ObjectId(\"5f11e59833d75e5a740b1792\"),\n \"joined\" : [\n { \"position\": 2 }\n ]\n },\n {\n \"_id\" : ObjectId(\"5f11e59833d75e5a740b1791\"),\n \"joined\" : [\n { \"position\": 3 }\n ]\n }\n]\n",
"text": "Ok, so let’s solve this by example.Insert those 3 documents separately, so it will be clear, in what order they were inserted (‘position’ property reflects the insertion order):Now, insert linked documents:If we use this aggregation:We will get the documents, in the order they were inserted in the ‘test1’ collection.Now, if we want to sort the documents by some property from joined (looked-up) collections, we can add $sort stage:Here you go! All sorted by joined document’s property:",
"username": "slava"
},
{
"code": "",
"text": "@slava is Correct.In any query language i would say you need to use sorts to gurantee the order of the records/documents otherwise its more a matter of luck …",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "In my use case test 2 has multiple records with linkedTo = B. Therefore the lookup will return more than one test2 item. I want to ensure I can pick the first version, by _id, of the joined test 2 array.In a $group, I would sort the values first to ensure their eventual order, … but in a lookup there does not seem to be a way to sort the results from the ‘from table’.Option: 1. Lookup returns a List . This infers an order. Is the joined list from test2 ordered by _id already? If so, job done. From initial tests this seems to be the case but I would like confirmation.Option 2 : The order is not guaranteed. I’ll need to add extra code to filter the result by the minimum _id, which will hit performance on an already slow and creaking aggregation pipeline.Any guidance gratefully accepted.",
"username": "Neil_Albiston1"
},
{
"code": "db.teams.insertMany([\n { _id: 't2', name: 'B', country: 'US' },\n { _id: 't1', name: 'A', country: 'Canada' },\n]);\n\ndb.players.insertMany([\n { _id: 'p5', fromTeam: 'A', player: 'Bob' },\n { _id: 'p4', fromTeam: 'B', player: 'Bill' },\n { _id: 'p2', fromTeam: 'B', player: 'Luke' },\n { _id: 'p1', fromTeam: 'A', player: 'Drake' },\n { _id: 'p3', fromTeam: 'B', player: 'Oswald' },\n]);\ndb.teams.aggregate([\n {\n $lookup: {\n from: 'players',\n localField: 'name',\n foreignField: 'fromTeam',\n as: 'players',\n }\n },\n]).pretty();\n[\n {\n \"_id\" : \"t2\",\n \"name\" : \"B\",\n \"country\" : \"US\",\n \"players\" : [\n { \"_id\" : \"p4\", \"fromTeam\" : \"B\", \"player\" : \"Bill\" },\n { \"_id\" : \"p2\", \"fromTeam\" : \"B\", \"player\" : \"Luke\" },\n { \"_id\" : \"p3\", \"fromTeam\" : \"B\", \"player\" : \"Oswald\" }\n ]\n },\n {\n \"_id\" : \"t1\",\n \"name\" : \"A\",\n \"country\" : \"Canada\",\n \"players\" : [\n { \"_id\" : \"p5\", \"fromTeam\" : \"A\", \"player\" : \"Bob\" },\n { \"_id\" : \"p1\", \"fromTeam\" : \"A\", \"player\" : \"Drake\" }\n ]\n }\n]\ndb.teams.aggregate([\n {\n $sort: {\n // sort documents in 'teams' collection\n _id: 1\n }\n },\n {\n $lookup: {\n from: 'players',\n let: {\n // this is needed, so we can use it in \n // the $match stage below\n teamName: '$name',\n },\n pipeline: [\n {\n // sort documents in 'players' collection\n $sort: {\n _id: 1\n }\n },\n {\n $match: {\n $expr: {\n $eq: ['$fromTeam', '$$teamName'],\n },\n }\n },\n ],\n as: 'players',\n }\n },\n]).pretty();\n[\n {\n \"_id\" : \"t1\",\n \"name\" : \"A\",\n \"country\" : \"Canada\",\n \"players\" : [\n { \"_id\" : \"p1\", \"fromTeam\" : \"A\", \"player\" : \"Drake\" },\n { \"_id\" : \"p5\", \"fromTeam\" : \"A\", \"player\" : \"Bob\" }\n ]\n },\n {\n \"_id\" : \"t2\",\n \"name\" : \"B\",\n \"country\" : \"US\",\n \"players\" : [\n { \"_id\" : \"p2\", \"fromTeam\" : \"B\", \"player\" : \"Luke\" },\n { \"_id\" : \"p3\", \"fromTeam\" : \"B\", \"player\" : \"Oswald\" },\n { \"_id\" : \"p4\", \"fromTeam\" : \"B\", \"player\" : \"Bill\" }\n ]\n }\n]\n",
"text": "Hello, @Neil_Albiston1, @Pavel_Duchovny!Is the joined list from test2 ordered by _id already?By default, documents are returned in the order, they were written to DB. There is a chance, that the documents may be returned from the collection in the desired order. But, as @Pavel_Duchovny stated above, it is chance, not a guarantee. So, better to use $sort stage and add indexes on sorting fields to improve performance of the aggregation queries, so they do not ‘creak’ In my use case test 2 has multiple records with linkedTo = B. Therefore the lookup will return more than one test2 item. I want to ensure I can pick the first version, by _id, of the joined test 2 array.Let’s make another dataset for this case:If we run this aggregation:We will get this result:Notice, that documents are returned in order, they has been created, not by _id field.Let’s add some sorting:Sorted ouput:I want to ensure I can pick the first version, by _id, of the joined test 2 arrayIf you need to get only 1 (and first, according to your ordering rules, defined by $sort stage) document from joined collection, you need to add $limit stage after your $match stage.",
"username": "slava"
},
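Putting that last suggestion together with the teams/players dataset above, a sketch of a $lookup whose joined array is sorted by _id and trimmed to the first match (this pipeline form of $lookup needs MongoDB 3.6+):

db.teams.aggregate([
  {
    $lookup: {
      from: 'players',
      let: { teamName: '$name' },
      pipeline: [
        { $match: { $expr: { $eq: ['$fromTeam', '$$teamName'] } } },
        { $sort: { _id: 1 } },  // order the joined documents by _id
        { $limit: 1 }           // keep only the first one
      ],
      as: 'firstPlayer'
    }
  }
]).pretty();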
{
"code": "",
"text": "Notice, that documents are returned in order, they has been created, not by _id field.Fortunately my _ids are mongo ObjectIds which contain a timestamp so the _ids are in creation order.\n…but I think I will be safe , as you suggest, and add a filter on maximum date, or a sort & limit. Hope it doesn’t impact performance too much.Thank you for your help.",
"username": "Neil_Albiston1"
}
] | Order of returned array in a Lookup | 2020-07-17T15:47:56.686Z | Order of returned array in a Lookup | 9,009 |
null | [] | [
{
"code": "",
"text": "As a part of working with a POC, we are thinking of a approach like extracting data from lotus notes ( as json files) and load the same into MongoDB staging collection and then to target collections. A question arised like … Is there any way to load the fine records ( proper w.r.t format of JSON and proper as per the mongodb utility expects) into the staging collection and discard the bad records.?Appreciate any sort of help on this…Best Regards\nKesav",
"username": "ramgkliye"
},
{
"code": "mongoimport",
"text": "Hi @ramgkliye,How do you plan to load the records? Using a driver or mongoimport?With a driver you can catch errors and continue. With a mongoimport you can use the --parseGraceRegards\nPavel",
"username": "Pavel_Duchovny"
},
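A hedged example of the mongoimport route. The URI, database and file names are placeholders, and note that --parseGrace mainly governs type-coercion failures for CSV/TSV imports with typed columns, so badly malformed JSON lines may still need a driver-side script that catches and logs insert errors:

mongoimport --uri "mongodb://localhost:27017/staging" \
  --collection formStaging \
  --file lotus-export.json \
  --parseGrace skipRow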
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Data Loading approach | 2020-07-24T07:44:43.683Z | Data Loading approach | 1,784 |
null | [
"keyhole"
] | [
{
"code": "",
"text": "Keyhole is a performance analytics tool, written in GO (Golang), to collect stats from MongoDB instances and to analyze performance of a MongoDB cluster. Golang was chosen to eliminate the needs to install an interpreter or software modules. To generate HTML reports use Maobi, a Keyhole reports generator.Peek at your MongoDB Clusters like a Pro with Keyhole",
"username": "ken.chen"
},
{
"code": "",
"text": "hi Ken,\nI installed keyhole and ran it with “–info” to collect info from our cluster, which creates a (gzip’d) BSON file and not JSON as you described.\nThe BSON is not viewable in a web browser as it is proprietary to MongoDB.\n-> is there any way to force “keyhole” to generate a JSON file ?",
"username": "Rob_De_Langhe"
},
{
"code": "",
"text": "I had to modify keyhole to output in bson format to support all data types. You can use -print to output JSON to a file.",
"username": "ken.chen"
},
{
"code": "keyhole --info \"mongodb://.../\" -print",
"text": "thx Ken, with the command\nkeyhole --info \"mongodb://.../\" -print\nI indeed do now get JSON output. Now trying to get any numbers in the (still blank) Grafana charts…",
"username": "Rob_De_Langhe"
},
{
"code": "",
"text": "I can’t really tell what you did wrong, but check out Grafana document for the error message implication. I can provide a few pointers. Keyhole works as a single json server to feed back into Grafana. You should check out my blog part 2 for details instructions and the Keyhole GitHub wiki page.",
"username": "ken.chen"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Survey Your Mongo Land with Keyhole | 2020-07-21T12:41:04.659Z | Survey Your Mongo Land with Keyhole | 5,223 |
null | [] | [
{
"code": "mongodumpmongorestore",
"text": "Hi All,We have just started using MongoDB Atlas. We are currently in development stage and only have a few alpha/early users in production, so the data isn’t that huge. (~few hundreds of mb).We would like to wipe our staging environment and copy data from production on a scheduled basis to have a consistent testing environment for our QA team.We are considering an approach using mongodump to dump production data and copy to staging using mongorestore with dropping the existing database. To get around the scheduling part, We are considering if we can execute some scripts in our CICD setup or see if we can use, ‘MongoDB Realm Functions’ as triggers in the ‘MongoDB Atlas’.Is this a viable approach? Is there any better way to do this?I would like to hear thoughts of the community who have been the same path and share some wisdom from your experience.",
"username": "Subbu_Lakshmanan"
},
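For reference, a minimal sketch of the dump-and-restore step described above (the cluster URIs and database name are placeholders); --drop makes the restore wipe the matching staging collections before loading:

mongodump   --uri "mongodb+srv://user:pwd@prod-cluster.example.mongodb.net/appdb" --archive=prod.archive --gzip
mongorestore --uri "mongodb+srv://user:pwd@staging-cluster.example.mongodb.net" --archive=prod.archive --gzip \
  --nsInclude "appdb.*" --drop

The same two commands can be wrapped in a CI/CD job or invoked from a scheduled script.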
{
"code": "",
"text": "Hi Subbu, if your cluster is M10+ then another option is to use Atlas Backup/Restore to periodically restore from your Prod env to your Stage env (this can be done through the Atlas API and Terraform provider for example).Yes you can use scheduled Triggers to do this on a cron as well.-Andrew",
"username": "Andrew_Davidson"
}
] | Wiping and re-creating staging environment using mongodump & mongorestore | 2020-07-21T22:45:19.214Z | Wiping and re-creating staging environment using mongodump & mongorestore | 3,157 |
null | [] | [
{
"code": "",
"text": "Hi. Since this morning all my dashboards look different. The colors are not applied according to the color palette , the order of the fields are scrambled and the series does not display on bar charts. Was there an update or is there one in progress?",
"username": "johan_potgieter"
},
{
"code": "",
"text": "There was an update, but it shouldn’t have broken your charts. Can you show an example? If you don’t want to share publicly you can email me at tom.hollander at MongoDB.com.",
"username": "tomhollander"
},
{
"code": "",
"text": "No not a problem I can share it here.Mongo11222×665 49.8 KB mongo21206×540 34.9 KB\nThe numbers are not centered either.",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Ah right. This was a deliberate design change - previously when you mapped aggregation channels with no category channel, it resulted in a single bar with multiple series. Now it shows each value as a category. If you want the old behaviour you could add a calculated field with am empty string value and put that in the category channel. Sorry for breaking your charts… We think the new behaviour is more sensible most of the time, but I see that the change was unexpected/unwanted in your case.Regarding the off centre numbers, we’ll need to look into that.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Sorry im not sure I follow. Does this influence the colors as well? I have about 10 dashboards with 25 charts in each mostly bar. Does this mean I have to change each one? Also when you move the fields the formatting disappears like in this photo and it goes back to showing all the decimals. How do you mean adding a empty string. Like this?\nmongo41430×675 64.9 KB",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Is there anywhere where I can read up on the new changes?",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Release notes should be updated in a few hours",
"username": "tomhollander"
},
{
"code": "",
"text": "Okay thanks. Until then can you just give me an example of adding the empty string please. So I still want my data after aggregation to show just as an average , i then use the filter toe let clients view the records on different dates. Showing it like this with 40-50 documents will become messy.\nmongo51422×565 59.6 KB",
"username": "johan_potgieter"
},
{
"code": "$addFields",
"text": "So my idea was to do something like this:\nimage1920×1040 98.9 KBHere I’m using $addFields in the query bar to create the null category. Will that work for you?Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "If I try doing that like this.mongo6693×406 13.6 KBThis happens.\nmongo71421×554 44.6 KB",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Yeah there also seems to be a problem with empty strings. Try making it a space.\nSorry about the issues - we’ll make sure we get these addressed.",
"username": "tomhollander"
},
{
"code": "",
"text": "Okay thanks , yes that works. The only thing that it influences is that on the x axis it doesn’t show the values of the categories but it shows in the series so its not a train smash.mongo81430×702 64.6 KB",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Great that you’ve got something workable!\nFor charts with no category channel, I’m pretty sure we never showed the category labels on the axis. The new behaviour does this - but since it’s a single series chart you can’t show each bar in its own colour, which I presume is what you’re after.\nThanks for taking the time to show your scenarios - we’ll try not to unleash any surprises like this next time!Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Yes thanks back to normal . Yes you are correct there was no label on the axis. Will go through the release notes and adapt the aggregation for future charts.",
"username": "johan_potgieter"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo DB Charts dashboard malfunction | 2020-07-24T06:44:58.310Z | Mongo DB Charts dashboard malfunction | 3,583 |
null | [
"containers",
"kubernetes-operator"
] | [
{
"code": " apiVersion: storage.k8s.io/v1\n kind: StorageClass\n metadata:\n name: mongo-data-xfs\n provisioner: kubernetes.io/aws-ebs\n parameters:\n type: io1\n iopsPerGB: \"10\"\n fstype: xfs\n encrypted: \"true\"\n reclaimPolicy: Retain\n allowVolumeExpansion: true\napiVersion: mongodb.com/v1\nkind: MongoDB\nmetadata:\n name: ct-sharded-cluster\nspec:\n shardCount: 1\n mongodsPerShardCount: 3\n mongosCount: 1\n configServerCount: 1\n version: 4.2.2-ent\n opsManager:\nconfigMapRef:\n name: shard-cluster-config\n # Must match metadata.name in ConfigMap file\n credentials: ops-manager-api\n type: ShardedCluster\n persistent: true\n shardPodSpec:\ncpuRequests: 500m\ncpu: 500m\nmemoryRequests: 1G\nmemory: 1G\npersistence:\n multiple:\n data:\n storage: 5G\n storageClass: mongo-data-xfs\n journal:\n storage: 2G\n storageClass: mongo-beside-data\n logs:\n storage: 2G\n storageClass: mongo-beside-data \nfsType=xfs",
"text": "I have installed MongoDB Enterprise Kubernetes Operator using this stepsWhen tried to add different storageClass for data and journal volumes on shard members. I want to add xfs file system for data volume. so created below storage class:my sharded cluster deployment looks like this:No when i deploy this cluster in my operator, journal and logs pods are coming up because it is is fsType=ext4, data volume is coming up, it stuck in ContainerCreating state always. if i describe particular pod, got this failed volume mount message:Warning FailedMount 30m (x300 over 13h) kubelet, ip-10-0-23-43.ec2.internal Unable to mount volumes for pod “test-cluster-0-0_mongo-operator(385f92e8-07e3-460f-af48-b80c4fe28e45)”: timeout expired waiting for volumes to attach or mount for pod “mongo-operator”/“test-cluster-0-0”. list of unmounted volumes=[data]. list of unattached volumes=[data journal logs mongodb-enterprise-database-pods-token-kdhvp]Warning FailedMount 5m12s (x458 over 13h) kubelet, ip-10-0-23-43.ec2.internal (combined from similar events): Unable to mount volumes for pod “test-cluster-0-0_mongo-operator(385f92e8-07e3-460f-af48-b80c4fe28e45)”: timeout expired waiting for volumes to attach or mount for pod “mongo-operator”/“tet-cluster-0-0”. list of unmounted volumes=[data]. list of unattached volumes=[data journal logs mongodb-enterprise-database-pods-token-kdhvp]mount: /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1a/vol-0143fd796b141be64: wrong fs type, bad option, bad superblock on /dev/xvdch, missing codepage or helper program, or other error.Any one tried fsType=xfs volume of ebs provisoner in k8s? Any help appreciated.Thanks.",
"username": "anand_babu"
},
{
"code": "",
"text": "hi @anand_babu\nYour spec looks ok (though the formatting is a bit shifted) and I think the issue (as you noted) is in the cloud provider, not in the Operator. To simplify the scenario I’d recommend to play with some simple pod and mount the PV with this storage class manually to reproduce the issue.\nI believe some EBS docs/forums can be of help here, not sure we’ve met this problem before",
"username": "Anton_Lisovenko"
}
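A minimal manifest for the isolation test suggested above; names and sizes are placeholders. If this PVC also fails to mount with the same wrong-fs-type error, the problem sits in the storage class / EBS layer rather than in the Operator:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xfs-test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: mongo-data-xfs
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: xfs-test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "mount | grep /data && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: xfs-test-pvc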
] | Unable to mount fsType:xfs to mongo operator mongodb resource | 2020-07-21T11:08:08.266Z | Unable to mount fsType:xfs to mongo operator mongodb resource | 3,240 |
[
"dot-net"
] | [
{
"code": "",
"text": "Hello everybody.\nI encounter a real problem I want to have a link on my MangoDB Database on MongoDB Atlas.I followed a lot of different tutorial and i finaly found how to connect a c# programm to MongoDB :1861×964 139 KBBut the problem is when i want to do it with Godot.1564×820 91.4 KBThe line 161 crash.\nAfter few test, the conclusion is the next :Godot.Collection is in conflict with System.CollectionSo, maybe do anybody know a solution to acces my database without using System.Collection ?It is 2 days that i work on this problem and i don’t found a solution… i hope that a person have a solution ! ",
"username": "Xemnai_Sekai"
},
{
"code": "collectionIMongoCollection<BsonDocument> collection = database.GetCollection<BsonDocument>(\"user\");\n",
"text": "Hi @Xemnai_Sekai, and welcome to the forum,So, maybe do anybody know a solution to acces my database without using System.Collection ?Could you try to explicitly define the type of collection variable is, for example:Also, would you be able to provide the full error stack strace? Trying to figure out whether this issue is related to MongoDB.Driver namespace or just Godot with System namespace issues.Regards,\nWan.",
"username": "wan"
}
] | C# conflict between Godot.Collection and System.Collection | 2020-07-15T12:51:01.978Z | C# conflict between Godot.Collection and System.Collection | 2,392 |
|
null | [] | [
{
"code": "",
"text": "HelloHow to connect hadoop to mongodb with maven project?",
"username": "Abdourahime_Diallo"
},
{
"code": "",
"text": "Please check this link.It may helpHere I will show you step by step Hadoop connection with mongodb using mongoDBConnector.",
"username": "Ramachandra_Tummala"
},
{
"code": "hdfs",
"text": "Hi @Abdourahime_Diallo, and welcome to the forum,How to connect hadoop to mongodb with maven project?Depending on your use case, I’d recommend to try MongoDB Connector for Spark. You can read hdfs from Spark, and the connector you have access to all Spark libraries for use with MongoDB datasets.See also MongoDB Spark Connector Java Guide.Regards,\nWan.",
"username": "wan"
}
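Since the project is Maven-based, the Spark connector can be pulled in with a dependency along these lines (artifact shown for Scala 2.11; the version is only an example, so match it to your Spark/Scala versions):

<dependency>
  <groupId>org.mongodb.spark</groupId>
  <artifactId>mongo-spark-connector_2.11</artifactId>
  <version>2.4.1</version>
</dependency>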
] | Hadoop mongodb connector | 2020-07-21T14:16:30.657Z | Hadoop mongodb connector | 1,247 |
null | [] | [
{
"code": "",
"text": "We have a collection of users with email addresses from a different database. We want to make each one of those users an authentication account for the email/password provider.We were considering using the Realm Admin API, since there appears to be no library within Functions to accomplish this task. All the examples in the Realm Admin API docs are written for curl. We are struggling to write these requests in JavaScript. We are using the Realm Function tool with the context.http methods to perform the HTTP requests.Does anyone have any tips or references to other docs that could help us?",
"username": "Jake_O_Toole"
},
{
"code": "",
"text": "Hi @Jake_O_Toole,If you could provide me with the code/function link I can try to help you.In general, the context.http should be set with a header having the correct access_token as a Berear created from an Atlas API login. The body should include the object describing the user fields…Thanks,\nPavel",
"username": "Pavel_Duchovny"
}
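A rough sketch of that flow inside a Realm function; the group ID, app ID and Atlas API keys are placeholders, and in practice they would come from Realm secrets rather than being hard-coded:

exports = async function(email, password) {
  // 1. Log in to the Realm Admin API with an Atlas programmatic API key
  const loginResp = await context.http.post({
    url: "https://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/login",
    body: { username: "<atlas-public-api-key>", apiKey: "<atlas-private-api-key>" },
    encodeBodyAsJSON: true
  });
  const { access_token } = JSON.parse(loginResp.body.text());

  // 2. Create an email/password user in the target Realm app
  return context.http.post({
    url: "https://realm.mongodb.com/api/admin/v3.0/groups/<groupId>/apps/<appId>/users",
    headers: { "Authorization": [`Bearer ${access_token}`] },
    body: { email: email, password: password },
    encodeBodyAsJSON: true
  });
};

Looping this over the documents from the other database would register each email address as an email/password authentication account.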
] | Realm Admin API | 2020-07-23T21:15:33.490Z | Realm Admin API | 1,502 |
null | [
"mongoid-odm"
] | [
{
"code": "mlaunch --replicaset\nrails c\nProject.with_session do |session|\n session.start_transaction\n Project.create!(name: 'Example')\n session.commit_transaction\nend\nMongoid::Errors::InvalidSessionUse:\nmessage:\n Sessions are not supported by the connected server(s).\nsummary:\n A session was attempted to be used with a MongoDB server version that doesn't support sessions. Sessions are supported in MongoDB server versions 3.6 and higher.\nresolution:\n Verify that all servers in your deployment are at least version 3.6 or don't attempt to use sessions with older server versions.\nfrom /Users/victor/.rvm/gems/ruby-2.6.5@pocket/gems/mongoid-7.0.5/lib/mongoid/clients/sessions.rb:98:in `rescue in with_session'\nCaused by Mongo::Error::InvalidSession: Sessions are not supported by the connected servers.\nfrom /Users/victor/.rvm/gems/ruby-2.6.5@pocket/gems/mongo-2.12.1/lib/mongo/client.rb:865:in `start_session'\n$ mlaunch list\n\nPROCESS PORT STATUS PID\n\nmongod 27017 running -\nmongod 27018 running 33353\nmongod 27019 running 33356\nmongoid.ymldevelopment:\n clients:\n default:\n database: pocket_api_development\n hosts:\n - \"<%= ENV[\"MONGO_HOST\"] %>:27017\"\n options:\n user: 'root'\n password: 'ri8Oogu6'\n auth_source: admin\n replica_set: replset\n options:\ntest:\n clients:\n default:\n database: pocket_api_test\n hosts:\n - \"<%= ENV[\"MONGO_HOST\"] %>:27017\"\n options:\n read:\n mode: :primary\n max_pool_size: 1\n user: 'root'\n password: 'ri8Oogu6'\n auth_source: admin\nmongo --port 27017\n> db.serverStatus().storageEngine\n{\n \"name\" : \"wiredTiger\",\n \"supportsCommittedReads\" : true,\n \"oldestRequiredTimestampForCrashRecovery\" : Timestamp(0, 0),\n \"supportsPendingDrops\" : true,\n \"dropPendingIdents\" : NumberLong(0),\n \"supportsSnapshotReadConcern\" : true,\n \"readOnly\" : false,\n \"persistent\" : true,\n \"backupCursorOpen\" : false\n}\nmongo --port 27018\nreplset:PRIMARY> db.serverStatus().storageEngine\n{\n \"name\" : \"wiredTiger\",\n \"supportsCommittedReads\" : true,\n \"oldestRequiredTimestampForCrashRecovery\" : Timestamp(1595260796, 1),\n \"supportsPendingDrops\" : true,\n \"dropPendingIdents\" : NumberLong(0),\n \"supportsSnapshotReadConcern\" : true,\n \"readOnly\" : false,\n \"persistent\" : true,\n \"backupCursorOpen\" : false\n}\nmongo --port 27019\n> db.serverStatus().storageEngine\n{\n \"name\" : \"wiredTiger\",\n \"supportsCommittedReads\" : true,\n \"oldestRequiredTimestampForCrashRecovery\" : Timestamp(0, 0),\n \"supportsPendingDrops\" : true,\n \"dropPendingIdents\" : NumberLong(0),\n \"supportsSnapshotReadConcern\" : true,\n \"readOnly\" : false,\n \"persistent\" : true,\n \"backupCursorOpen\" : false\n}\n",
"text": "Hello!I want to use the Transactions with Sessions feature on the Mongoid, the officially supported object-document mapper (ODM) for MongoDB in Ruby.To make this possible, I already did setup a Replica Set on my system, following this tutorial:\nhttp://blog.rueckstiess.com/mtools/mlaunch.htmlAbout the topology , I am currently using the default configuration of this tutorial => http://blog.rueckstiess.com/mtools/mlaunch.htmlTherefore, I am starting a replica set with (by default) 3 nodes on ports 27017 , 27018 , 27019 .However, I am having some problems when I try to invoke sessions on the ruby code. The error message says: “A session was attempted to be used with a MongoDB server version that doesn’t support sessions. Sessions are supported in MongoDB server versions 3.6 and higher.”. However, when I call db.version() , I can see it is on version 4.2.3 already Some additional infos…",
"username": "Victor_Costa"
},
{
"code": "",
"text": "@Victor_Costa,What is the mongoid version you are using?Can you run any other commands without explicit sessions?\nHave you tested specifying all 3 hosts in connection conf?I would suggest doing our atlas guide while using an Atlas free cluster rather then mlaunch…Please upload the rs.status() and logs.Best regards\nPavel",
"username": "Pavel_Duchovny"
}
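One concrete way to try the all-hosts suggestion, adapted from the mongoid.yml shown above (the host names are placeholders for whatever mlaunch reported):

development:
  clients:
    default:
      database: pocket_api_development
      hosts:
        - "localhost:27017"
        - "localhost:27018"
        - "localhost:27019"
      options:
        replica_set: replset
        user: 'root'
        password: 'ri8Oogu6'
        auth_source: admin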
] | Help setting up a development replica set with transactions for Mongoid | 2020-07-23T04:04:18.077Z | Help setting up a development replica set with transactions for Mongoid | 4,351 |
null | [
"kafka-connector"
] | [
{
"code": "db.collection.aggregate([{\"$match\": {bid: ObjectId(\"591480926fb3a6a6d5c5174b\")}}])\n",
"text": "Hello, when i use MongoDB Kafka Connector, link https://docs.mongodb.com/kafka-connector/master/kafka-source/I want to use pipeline config.\nHowever, how should i expresess $match and ObjectId in config file.What i need is this:",
"username": "Yanjie_Wang"
},
{
"code": " \"bid\": {\n \"$oid\": \"591480926fb3a6a6d5c5174b\"\n }\n",
"text": "Hi @Yanjie_Wang,Try to use the extended json representation :Let me know if that works for you.Best regards\nPavel",
"username": "Pavel_Duchovny"
}
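Putting that into the source connector configuration could look like the line below. Whether the field needs the fullDocument prefix depends on whether the pipeline is applied to the change stream or to copy.existing, so treat this as a sketch to adapt:

pipeline=[{"$match": {"fullDocument.bid": {"$oid": "591480926fb3a6a6d5c5174b"}}}]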
] | [Kafka Connect]How to use $match to express field equal to Mongo's ObjectId in pipeline | 2020-07-23T14:49:58.022Z | [Kafka Connect]How to use $match to express field equal to Mongo’s ObjectId in pipeline | 1,989 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "Over past 3 years, I have been using Realm as our local datastore in an app. We have currently implemented our sync based on Firestore but it is definitely NOT elegant as now there are 2 data sources to meddle with.I was very excited seeing the Twitch session and other videos in youtube. Especially the session by CBT Nuggets almost covered every usecase I need.So I immediately signed up & created a working prototype within a day.The main app (that has been in works for 3 years now) is just 45 days away from launch. We are in need of just the Bidirectional Sync Solution & authentication but are ready take a plunge if MongoDB Realm is stable. But it is currently in BETA only.Could you guide us? Is MongoDB Realm ready for Production use? Are 7/11 and CBT Nuggets already using it in production?",
"username": "Ram_Sundaram"
},
{
"code": "",
"text": "Hi Ram – Thanks for reaching out! While Sync is relatively newly released and still in Beta we are working towards a GA release. For Sync, our GA release will mostly focus on ensuring scalability of Sync and making sure that we can guarantee Atlas’s Uptime SLA. Since every case is different, we recommend trying out Sync, load testing, and reaching out to the team if you run into any issues. We’re working to support customers in production even while we’re in Beta, but as every case/set of requirements is different is probably best to discuss your specific production requirements. If you’d like, you can shoot me a note at [email protected] .",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Thanks so much for your response. Definitely I have started working with you sales team to take this forward.I will definitely keep in touch with you on our progressThanks\nRam",
"username": "Ram_Sundaram"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is SYNC in MongoDB Realm Ready for Production | 2020-07-23T03:39:54.632Z | Is SYNC in MongoDB Realm Ready for Production | 1,968 |
null | [
"node-js",
"realm-web"
] | [
{
"code": "",
"text": "I am trying to use realm to make a pipeline/function to import data from our database to our app. I can’t find any code examples to build off on how to make these database calls. Do you guys have any suggestions or sources I can build from?",
"username": "Josh_Stout"
},
{
"code": "",
"text": "Hey Josh -Once you’ve been able to authenticate through the Realm SDK, you should be able to access MongoDB data (provided the user can access the data from the defined Rules) in the following ways:GraphQL queries via our GraphQL API (via a GraphQL Client)orCalling Serverless Function from the Web SDK which is defined in Realm Cloud and called from the client (some data access snippets here)Let me know if you have any follow-up questions.",
"username": "Sumedha_Mehta1"
},
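For the second option, a small sketch using the Realm Web SDK; the app ID and function name are placeholders:

import * as Realm from "realm-web";

async function run() {
  const app = new Realm.App({ id: "<your-realm-app-id>" });
  // Any configured auth provider works here; anonymous is used for brevity
  const user = await app.logIn(Realm.Credentials.anonymous());
  // Call a serverless function defined in Realm Cloud that reads from MongoDB
  const docs = await user.functions.getRecentImports(25);
  console.log(docs);
}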
{
"code": "",
"text": "Thanks for the response!I will look at the materials and follow up if needed. However, I think I can take it from here ",
"username": "Josh_Stout"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using Realm to make a pipeline/function to import data from our database to our app | 2020-07-22T21:31:48.087Z | Using Realm to make a pipeline/function to import data from our database to our app | 1,824 |
null | [] | [
{
"code": "",
"text": "Migrating an app to Cosmos DB and running into issues. I’m using pymongo but I don’t think that’s terribly relevant.Where before, I used something like:\nMongoclient(uri)[db_name][collection_name]But cosmos is unhappy with that. It throws the error : “document does not contain a shard key”. It’s worth noting that I can change collection_name to a shard key name and the error changes to: Resource Type Collection is unexpected.How to reference a collection in cosmos?",
"username": "Josh_Restivo"
},
{
"code": "",
"text": "Hello @Josh_Restivo welcome to the MongoDB community,in case you think that your issue has MongoDB related roots, can you please elaborate on this? In that case it would be ideal when you could show what steps you have made until you get the error.\nFor CosmosDB related questions you may want to check the comosdb community support or Azure Professional SupportRegards,\nMichael",
"username": "michael_hoeller"
}
] | Referencing a collection in cosmos | 2020-07-23T19:27:23.568Z | Referencing a collection in cosmos | 1,302 |
[] | [
{
"code": "",
"text": "Currently I am tring to do real time indexing of mongo data in solr.I am referring the below link:Indexing MongoDB Data in Apache SolrI am using below system:Windows 10,\nsolr 8.3.1\nmongo 4.2.8Starting MongoDB Server and create replica set using below command:mongod --replSet rs0\n\nimage733×570 10.7 KB\n\nimage1303×675 35.3 KB\nAfert that In a new tab I try to start and initiate replica set,but I am getting error.\n\nimage1233×580 17.7 KB\nPlease help me on that.\nThanks in advance!",
"username": "shaunak_mandal"
},
{
"code": "",
"text": "Did you try adding the replSetName to your config file. And then you can reference the config file when you start your mongod process. mongod --config You can add the replica set name under replication in your config file\nreplication:\nreplSetName: “rs0”",
"username": "tapiocaPENGUIN"
}
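A minimal sketch of that setup; the config file path is a placeholder:

# mongod.conf
replication:
  replSetName: "rs0"

# start the node against the config file, then initiate the set once
mongod --config /etc/mongod.conf
mongo --eval "rs.initiate()"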
] | Indexing mongodb data in solr | 2020-07-23T15:36:38.888Z | Indexing mongodb data in solr | 4,307 |
|
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 3.6.19 is out and is ready for production deployment. This release contains only fixes since 3.6.18, and is a recommended upgrade for all 3.6 users.Fixed in this release:3.6 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team",
"username": "Jon_Streets"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 3.6.19 is released | 2020-07-23T14:38:38.417Z | MongoDB 3.6.19 is released | 1,309 |
[
"database-tools"
] | [
{
"code": "",
"text": "Is there any way to load the data from single json file to multiple collections…? Example: based on a name like FORM, data belonging to FORM1 shld be loaded into FORM1 collection and FORM2 data should be loaded into FORM2 collection.I mean using the utilities like mongodump or mongoimport or mongorestore… or … do we need to write any script programmatically?Best Regards\nKesavsample-data480×567 11.4 KB",
"username": "ramgkliye"
},
{
"code": "--filter",
"text": "Hi, unfortunately there’s currently no way of doing this with mongoimport or mongorestore.There is an open feature request to add a --filter option to mongorestore which would allow you to do this (TOOLS-2148). But for now you would have to write your own script.",
"username": "Tim_Fogarty"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | JSON files - data loading | 2020-07-23T07:40:23.702Z | JSON files - data loading | 5,555 |
|
[
"installation"
] | [
{
"code": "",
"text": "i follow the official install instructionhttps://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/",
"username": "Danh_Le"
},
{
"code": "",
"text": "Hi @Danh_LeYou may have skipped over this section:MongoDB 4.2 Community Edition supports the following 64-bit Ubuntu LTS (long-term support) releases on x86_64 architecture:See the commenst from @Stennie_X Ubuntu 20.04 support - #2 by Stennie_X\nMongodb error in Ubuntu 20.04 - #3 by Stennie_X",
"username": "chris"
},
{
"code": "mongodFailed to unlink socket file /tmp/mongodb-27017.sock Operation not permitted",
"text": "Thanks for reply @chris i installed version 4.4 and follows some instructions on Stackoverflow which i dont even rememer what i did lol. but i made progress, i could active the mongo sever but could active the mongod command, every time i type mongod i got Failed to unlink socket file /tmp/mongodb-27017.sock Operation not permitted i trying to change the port… any advice i should know?",
"username": "Danh_Le"
},
{
"code": "sudo mkdir -p /data/dbsudo mongod",
"text": "hi @chris just let you know. i got my error fixed. by sudo mkdir -p /data/db and run mongodb with sudo mongod i works greet now, i am so happy its been four days thanks god.",
"username": "Danh_Le"
},
{
"code": "sudo mkdir -p /data/dbsudo mongod",
"text": "i installed version 4.4 and follows some instructions on Stackoverflow4.4 is not general availability. Basically be aware you are not running a currently supported configuration.While the 4.4 release candidates are available, these versions of MongoDB are for testing purposes only and not for production use .For new features in MongoDB 4.4, see Release Notes for MongoDB 4.4 (Release Candidate).hi @chris just let you know. i got my error fixed. by sudo mkdir -p /data/db and run mongodb with sudo mongod i works greet now, i am so happy its been four days thanks god.The official packages come with integrations for the appropriate init system(systemd, system v init). Negating the need to execute mongod manually.You are effectively running this mongod as root, not a good practice, and you may run into file/directory permissions later if you transition to using it the ‘regular’ way.Although you have the installation guide you have not diligently applied it’s guidance and wisdom, luck be with you.",
"username": "chris"
}
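For reference, with the official packages the service is normally managed through systemd instead of invoking mongod by hand:

sudo systemctl start mongod
sudo systemctl enable mongod   # start on boot
sudo systemctl status mongod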
] | Errors when installing MongoDB on Ubuntu 20.04 | 2020-07-22T11:45:16.461Z | Errors when installing MongoDB on Ubuntu 20.04 | 13,961 |
|
null | [] | [
{
"code": "",
"text": "Hello,I would like to store strings larger than 16 MB into MongoDB. From the documentation I learned that there is GridFS that I can use to store files into the database.Here’s how my software works:So the file (Point 3) that the user can select will be converted to a string and then transfered to the server.In the documentation I couldn’t find a class method to store strings to GridFS. Is there a way I can store them into GridFS? Or is there a way I can store it normally in a collection? For example, can I set the size limit for data in MongoDB? Is that possible?",
"username": "Cryptnote"
},
{
"code": "",
"text": "Hello, @Cryptnote!In the documentation I couldn’t find a class method to store strings to GridFS.You can check MongoDB driver’s tutorials for GridFS usage examples",
"username": "slava"
},
{
"code": "",
"text": "Well yes, I know the documentation. There’s no method for GridFS which you can use to store a large string.",
"username": "Cryptnote"
},
{
"code": "uploadFromStream($filename, $source, array $options = [])$filenamefilenamefs.files$source$streamToUploadFromInputStream streamToUploadFrom = new ByteArrayInputStream(inputString.getBytes(\"UTF-8\"));$bucket = (new MongoDB\\Client)->test->selectGridFSBucket();\n$bucket->uploadFromStream('my-file', $streamToUploadFrom);\n_idfs.chunksfs.filesmongoMongoDB\\GridFS\\Bucket::downloadToStreamdownloadToStream",
"text": "I am a Java developer (am not familiar with PHP; I think you are using MongoDB PHP driver). I tried your issue of storing a string into GridFS collections. It works both ways, writing a string and retrieving it. Ofcourse, I used Java APIs. I can relate to what I did (code) in PHP, and I think a PHP developer can figure the rest.Writing to GridFS:Start from this topic: Uploading Files with Writable StreamsSee the bucket class function: MongoDB\\GridFS\\Bucket::uploadFromStreamThe function definition:uploadFromStream($filename, $source, array $options = [])The Readable stream is an input stream which is created from the input string; and this needs to be created in PHP code ($streamToUploadFrom). With Java, the stream is created as:\nInputStream streamToUploadFrom = new ByteArrayInputStream(inputString.getBytes(\"UTF-8\"));Write the string to the MongoDB GridFS collections:The function returns: The _id field of the metadata document associated with the newly created GridFS file.After the successful write, MongoDB creates two collections in the specified database: fs.chunks and fs.files. These collections can be queried from the mongo shell or from the driver APIs.Reading from GridFS:To read from the GridFS collection, use MongoDB\\GridFS\\Bucket::downloadToStream function. In this case, an output steam is to be created and the downloadToStream function reads from the stored string to the stream. Then, extract the string from the stream.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Before you answered I already figured this out, thank you anyway for posting this. I’m pretty sure that it will help someone if he searches Google. After solving this problem I wrote a short “tutorial” on how to store and retrieve files from the database, you can find it here.",
"username": "Cryptnote"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Storing strings in MongoDB larger than 16 MB | 2020-07-22T11:46:20.919Z | Storing strings in MongoDB larger than 16 MB | 11,367 |
null | [] | [
{
"code": "",
"text": "Hi Guys,I haven’t seen many blogs about C100DBA, so I decided to share my tips.\nI hope it will be useful for you!I haven’t seen many blogs about C100DBA certification from the NoSQL MongoDB database, so I decided to share my tips.\nReading time: 9 min read\nBR\nArkadiusz",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "Well explained and Well drafted",
"username": "ramgkliye"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | C100DBA exam tips | 2020-07-21T10:51:19.904Z | C100DBA exam tips | 1,922 |
null | [
"swift"
] | [
{
"code": "",
"text": "Hey All - we’ve released a blog post and demo app that details how to use RealmSwift’s Frozen Object implementation along with how to integrate with SwiftUI and Combine. Please have a look here:https://www.mongodb.com/article/realm-cocoa-swiftui-combineWe are eager to have you try it out and welcome your feedbackBest\nIan",
"username": "Ian_Ward"
},
{
"code": "var subjects: Results<Asset> {\n return assets.sorted(byKeyPath: \"subject\").distinct(by: [\"subject\"])\n }\n\n func filteredSubjectsCollection() -> AnyRealmCollection<Asset> {\n return AnyRealmCollection(self.subjects)\n }\n\n // Subjects\n DisclosureGroup(isExpanded: $model.isSubjectsShowing) {\n \n VStack(alignment:.trailing, spacing: 4) {\n \n ForEach(filteredSubjectsCollection().freeze()) { asset in\n CheckBoxSelection(label: asset.subject, isSelected: self.model.selectedSubjects.contains(asset.subject))\n .onTapGesture { self.model.addSubject(subject: asset.subject) }\n }\n }.frame(maxWidth:.infinity)\n .padding(.leading, 20).padding(.trailing, 0)\n \n } label: {\n HStack(alignment:.center) {\n Image(systemName: \"flag\")\n CheckBoxSelection(label: \"Subjects\", isSelected: self.model.selectAllSubjects)\n .font(.system(.title3))\n .onTapGesture { self.addAllSubjects() }\n \n }.padding([.top, .bottom], 8).foregroundColor(.secondary)\n .padding(.trailing, 1)\n }",
"text": "Sees to work a treat for getting a ‘live’ list of unique keywords from database records.",
"username": "Duncan_Groenewald"
}
] | Realm Cocoa 5.0 - Multithreading Support with Integration for SwiftUI & Combine | 2020-07-02T17:25:12.129Z | Realm Cocoa 5.0 - Multithreading Support with Integration for SwiftUI & Combine | 1,741 |
null | [] | [
{
"code": "db.getCollection('events').aggregate([\n{\n $unwind: \"$payload.latest\"\n},\n{\n $unwind: {\n path: \"$payload.authorisation.instrument\"\n }\n},\n{\n $lookup: {\n from: \"request\",\n localField: \"payload.uuid\",\n foreignField: \"uuid\",\n as: \"request\"\n }\n},\n{\n $unwind: {\n path: \"$request\"\n }\n},\n{\n $unwind: {\n path: \"$request.responses\",\n preserveNullAndEmptyArrays: true\n }\n},\n{\n $lookup: {\n from: \"questionSet\",\n localField: \"request.questionSet\",\n foreignField: \"uuid\",\n as: \"set\"\n }\n},\n{\n $unwind: {\n path: \"$set\",\n preserveNullAndEmptyArrays: true\n }\n},\n{\n $unwind: {\n path: \"$set.questions\",\n preserveNullAndEmptyArrays: true\n }\n },\n {\n $unwind: {\n path: \"$set.questions.responses\",\n preserveNullAndEmptyArrays: true\n }\n },\n {\n $match: {\n $expr: {\n $and: [\n { $eq: [ \"$request.responses.questionId\", \"$set.questions.id\" ] },\n { $eq: [ \"$request.responses.responseId\", \"$set.questions.responses.id\" ] }\n ]\n }\n }\n},\n{\n $project: {\n instr: \"$payload.authorisation.instrument\",\n subjects: \"$request.subjects\",\n purchaser: \"$request.createdBy\",\n orderReference: \"$request.submissionReference\",\n businessUnit: \"$createdByBusinessUnit\",\n serviceCode: \"$payload.latest.targeting.serviceCode\",\n bizCaseField1: \"$request.bizCaseField1\",\n bizCaseField2: \"$request.bizCaseField2\",\n bizCaseField3: \"$request.bizCaseField3\",\n questions: {\n questionText: \"$set.questions.text\",\n responseText: \"$set.questions.responses.name\"\n }\n }\n},\n{ $group: {\n _id: { id: \"$_id\", instr: \"$instr\", product: \"$product\" },\n root: { $mergeObjects: '$$ROOT' },\n questions: { $push: \"$questions\" }\n }\n},\n{\n $replaceRoot: {\n newRoot: {\n $mergeObjects: ['$root', '$$ROOT']\n }\n }\n},\n{\n $project: {\n _id: 0,\n root: 0\n }\n},\n{\n $sort: {\n instr: 1,\n subjects: 1\n }\n}]).pretty();\n",
"text": "Hi.I am new to Mongo and have been given the following query to use, but my Mongo DB install is 3.4 and the query uses operators only available in 3.6 and above.I think it is the $expr and $mergeObjects operators that aren’t supported.I’m not sure what to replace these with.Any ideas please?",
"username": "Sean_Barry"
},
{
"code": "db.figures.insertMany([\n { _id: 'A', width: '15cm', height: '15cm' },\n { _id: 'B', width: '15cm', height: '10cm' }\n]);\ndb.figures.aggregate([\n {\n $match: {\n $expr: {\n $eq: ['$width', '$height'],\n }\n }\n }\n]);\ndb.figures.aggregate([\n {\n $addFields: {\n // calculate intermediate prop\n isSquare: {\n $cond: {\n if: {\n // put here all the conditions,\n // that your have inside $expr\n $eq: ['$width', '$height'],\n },\n then: true,\n else: false,\n }\n }\n }\n },\n {\n $match: {\n // match by that intermediate prop\n isSquare: true\n }\n },\n {\n $project: {\n // remove intermediate prop from ouput\n isSquare: false,\n }\n }\n]);\n{ \"_id\" : \"A\", \"width\" : \"15cm\", \"height\" : \"15cm\" }\ndb.learningPlans.insertOne({\n initialPlan: {\n learnJavascript: true,\n learnDesignPatterns: true,\n learnAgile: true,\n },\n currentPlan: {\n learnMongoDB: true,\n learnAgile: false,\n }\n});\ndb.learningPlans.aggregate([\n {\n $project: {\n latestPlan: {\n $mergeObjects: ['$initialPlan', '$currentPlan']\n }\n }\n }\n]);\ndb.learningPlans.aggregate([\n {\n $addFields: {\n // disassemble arrays for further manipulations\n intermediateBase: {\n $objectToArray: '$initialPlan',\n },\n intermediateOverwrite: {\n $objectToArray: '$currentPlan',\n }\n },\n },\n {\n // add another $addFields stages,\n // so props from previous $addFields\n // will be accessible here\n $addFields: {\n intermediateFinal: {\n // order of arguments must be the same, that \n // was used in $mergeObjects operator\n $concatArrays: ['$intermediateBase', '$intermediateOverwrite']\n }\n }\n },\n {\n $project: {\n latestPlan: {\n $arrayToObject: ['$intermediateFinal']\n }\n }\n }\n]).pretty();\ndb.learningPlans.aggregate([\n {\n $project: {\n latestPlan: {\n learnJavascript: {\n $cond: {\n if: {\n $eq: ['$currentPlan.learnJavascript', undefined],\n },\n then: '$initialPlan.learnJavascript',\n else: '$currentPlan.learnJavascript'\n }\n },\n learnDesignPatterns: {\n $cond: {\n if: {\n $eq: ['$currentPlan.learnDesignPatterns', undefined],\n },\n then: '$initialPlan.learnDesignPatterns',\n else: '$currentPlan.learnDesignPatterns'\n }\n },\n learnAgile: {\n $cond: {\n if: {\n $eq: ['$currentPlan.learnAgile', undefined],\n },\n then: '$initialPlan.learnAgile',\n else: '$currentPlan.learnAgile'\n }\n },\n learnMongoDB: {\n $cond: {\n if: {\n $eq: ['$currentPlan.learnMongoDB', undefined],\n },\n then: '$initialPlan.learnMongoDB',\n else: '$currentPlan.learnMongoDB'\n }\n }\n }\n }\n }\n]).pretty();\n[\n {\n \"_id\" : ObjectId(\"5f18afb74d4bfee817d2e4e2\"),\n \"latestPlan\" : {\n \"learnJavascript\" : true,\n \"learnDesignPatterns\" : true,\n \"learnAgile\" : false,\n \"learnMongoDB\" : true\n }\n }\n]\n",
"text": "Hello, @Sean_Barry!Example dataset:Example of aggregation pipeline with the $expr opetator:Example of aggregation pipeline with the intermediate matching field:Output of both aggregations is the same:Example dataset:Example of aggregation pipeline with $mergeObjects operator:Example of aggregation pipeline with objects-array-object manipulations.\nImportant: $objectToArray and $arrayToObject were added in MongoDB v3.4.4.Example of aggregation pipeline with prop list and $cond operator.\nNotice, that you need to list all the props, that appear in both objects, that you need to merge and for each property you need to add logic to handle missing properties.Output of the above 3 aggregations is the same:",
"username": "slava"
}
] | Downgrade Mongo 3.6+ query for 3.4 | 2020-07-22T20:53:38.078Z | Downgrade Mongo 3.6+ query for 3.4 | 1,680 |
[
"server"
] | [
{
"code": "",
"text": "HiI am noticing many mongod running on my server. Could someone explain what could cause this behavior?Below an image of many instances running at my server.image938×326 8.75 KB",
"username": "Ezequias_Rocha"
},
{
"code": "",
"text": "You have probably enabled threads view in top. This is expected in this case.",
"username": "chris"
}
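To double-check this, the H key inside top toggles the threads view on and off, and a plain process listing shows how many mongod processes actually exist:

ps -C mongod -o pid,ppid,cmd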
] | Many mongod process running on server | 2020-07-22T19:56:19.877Z | Many mongod process running on server | 1,606 |
|
null | [
"on-premises"
] | [
{
"code": ">cd mongodb-charts\n>docker swarm init\n>docker pull quay.io/mongodb/charts:19.12.1\n>docker run --rm quay.io/mongodb/charts:19.12.1 charts-cli test-connection 'mongodb://username:password@hostname:27017/DATABASE?replicaSet=rd0&authSource=admin'\n\nMongoDB connection URI successfully verified.\n\n>echo \"mongodb://username:password@hostname:27017/DATABASE?replicaSet=rd0&authSource=admin\" | docker secret create charts-mongodb-uri -\n\n>docker stack deploy -c charts-docker-swarm-19.12.1.yml mongodb-charts\n Creating network mongodb-charts_backend\n Creating service mongodb-charts_charts\n>docker service ls\n\nID NAME MODE REPLICAS IMAGE PORTS\n44nfwa84c9ug mongodb-charts_charts replicated 1/1 quay.io/mongodb/charts:19.12.1 *:80->80/tcp, *:443->443/tcp\n\n>docker exec -it $(docker container ls --filter name=_charts -q) charts-cli add-user --first-name \"Admin\" --last-name \"Admin\" --email \"[email protected]\" --password \"admin1234\" --role \"UserAdmin\"\n",
"text": "I am trying to install MongoDB charts on my local (MacBook)add-user command error: An error occurred authenticating: request to http://localhost:8080/api/admin/v3.0/auth/providers/local-userpass/login failed, reason: connect ECONNREFUSED 127.0.0.1:8080docker service logs 44nfwa84c9ug\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | parsedArgs\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | installDir (‘/mongodb-charts’)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | log\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | salt\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | productNameAndVersion ({ productName: ‘MongoDB Charts Frontend’, version: ‘1.9.1’ })\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | gitHash (‘1a46f17f’)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | supportWidgetAndMetrics (‘off’)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | tileServer (undefined)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | tileAttributionMessage (undefined)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | rawFeatureFlags (undefined)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | encryptionKeyPath\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | featureFlags ({})\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | tokens\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | stitchMigrationsLog ({ completedStitchMigrations: [ ‘stitch-1332’, ‘stitch-1897’, ‘stitch-2041’, ‘migrateStitchProductFlag’, ‘stitch-2041-local’, ‘stitch-2046-local’, ‘stitch-2055’, ‘multiregion’, ‘dropStitchLogLogIndexStarted’ ] })\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | chartsMongoDBUri\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | stitchConfigTemplate\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | lastAppJson ({ stitchAppId: ‘5eab6b1a0ce2af366132fc26’, stitchClientAppId: ‘mongodb-charts-jhyce’, stitchGroupId: ‘5eab6b190ce2af366132fc14’, gitHash: ‘1a46f17f’, tenantId: ‘1f7ac5f8-7f37-4e8d-8796-5b3d70e498e8’, featureFlags: {}, tileAttributionMessage: ‘’, appName: ‘MongoDB Charts Frontend’, appVersion: ‘1.9.1’, deployDate: ‘Fri, 15 May 2020 16:42:36 GMT’, target: ‘on-prem’, telemetry: { enabled: false, stitch: { appId: ‘datawarehouseprod-compass-nqnxw’ }, intercom: { appId: ‘w5bmt65h’, enabled: true, panelEnabled: true } } })\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | existingInstallation (true)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | tenantId (‘1f7ac5f8-7f37-4e8d-8796-5b3d70e498e8’)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | libMongoIsInPath (true)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | mongoDBReachable (true)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | stitchMigrationsExecuted (‘not required’)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | minimumVersionRequirement (true)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | stitchConfig\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | stitchConfigWritten (true)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | stitchChildProcess\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | indexesCreated (true)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | stitchServerRunning (true)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | stitchAdminCreated (false)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | lastKnownVersion (‘1.9.1’)\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | existingClientAppIds 
()\nmongodb-charts_charts.1.lydkoixon6e7@docker-desktop | migrationsExecuted ({})mongodb-charts_charts.1.lydkoixon6e7@docker-desktop | stitchUnconfigured failure: app “mongodb-charts-jhyce” not found. To reconfigure Charts with a fresh database, delete the mongodb-charts_keys volume and deploy again.We are using MongoDB Version 4.0.5 community edition\nHow to fix this? @tomhollander any suggestion?",
"username": "alak_patel"
},
{
"code": "mongodb-charts_keysdocker volume rm mongodb-charts_keys",
"text": "Hi @alak_patel -The last line in the log shows what’s going on. Basically - the mongodb-charts_keys volume contains some files which expect certain data to exist in the database, but it’s not there. I’m not sure how you got into that state, but you can start fresh by deleting that volume using docker volume rm mongodb-charts_keys.Let me know if this works.\nTom",
"username": "tomhollander"
},
{
"code": "➜ mongodb-charts > docker exec -it $(docker container ls --filter name=mongodb-charts_chart -q) charts-cli add-user --first-name \"admin\" --last-name \"admin\" --email \"[email protected]\" --password \"adminadmin\" --role \"UserAdmin\"\n",
"text": "@tomhollander\nI deleted mongodb-charts_keys and also deleted app, auth, metadata and config. Then redeployed.\nNow I getting following error ➜ add-user command error: clientAppId not found. No Charts apps configured to add user to.Service logs➜ mongodb-charts docker service logs yzcg7xb5jqep\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | parsedArgs\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | installDir (’/mongodb-charts’)\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | log\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | salt\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | productNameAndVersion ({ productName: ‘MongoDB Charts Frontend’, version: ‘1.9.1’ })\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | gitHash (‘1a46f17f’)\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | supportWidgetAndMetrics (‘off’)\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | tileServer (undefined)\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | tileAttributionMessage (undefined)\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | rawFeatureFlags (undefined)\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | stitchMigrationsLog ({ completedStitchMigrations: })\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | featureFlags ({})\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | lastAppJson ({})\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | existingInstallation (false)\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | tenantId (‘52ca28a0-841d-4207-b14f-bcf27ae1307f’)\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | chartsMongoDBUri\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | tokens\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | encryptionKeyPath\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | stitchConfigTemplate\nmongodb-charts_charts.1.xsmlaf4b71n8@moby | libMongoIsInPath (true)I am usingMacOS 10.13.6 High Sierra\nDocker Version : 17.09.1-ce\nMongpDB Version: 4.0.5 Community\nMongoDB Charts : 19.12.1@tomhollander How do I fix this error ?",
"username": "alak_patel"
},
{
"code": "add-user",
"text": "@alak_patel - that log file does not look complete. Possibly you ran the add-user command before the services had all started. Please try again after waiting a bit longer, or if all else fails delete the stack and try again.",
"username": "tomhollander"
},
{
"code": "Removing service mongodb-charts_charts\nRemoving network mongodb-charts_backend\nNode left the swarm.\nBefore starting Charts, please create a Docker Secret containing this connection URI using the following command:\necho \"mongodb://username:password@host1:27017,host2:27017,host3:27017/DB_NAME?replicaSet=rd0&readPreference=secondary&authSource=admin\" | docker secret create charts-mongodb-uri -\n59omuwaf2abnwd0oumpeimp18\n",
"text": "After deleting auth, app, config and metadata databases I ran following commands:docker stack rm mongodb-charts\ndocker secret rm charts-mongodb-uridocker volume rm mongodb-charts_keyscharts-mongodb-uridocker swarm leave --forcedocker swarm initSwarm initialized: current node (ves6vggzkjgqg0d3b5ctdbpth) is now a manager.To add a worker to this swarm, run the following command:docker swarm join --token SWMTKN-1-1q6w99q3ydmhvvo5qpbi0vn9lo8nkc0izisobutq5nzw1f4nov-5xlxse8y9eg0t7bvgbbx4rrbz 192.168.65.2:2377To add a manager to this swarm, run ‘docker swarm join-token manager’ and follow the instructions.docker pull Quay19.12.1: Pulling from mongodb/charts\nf7e2b70d04ae: Already exists\n08dd01e3f3ac: Already exists\nd9ef3a1eb792: Already exists\n4581df1af3c1: Already exists\nd8fcb2ad31d7: Already exists\nfd691f0ac82b: Already exists\n24f9651c9353: Already exists\n2e5fadc35a21: Already exists\nf608968bc4d5: Already exists\n06ee49454ba1: Already exists\n41d2ff67bea6: Already exists\nf5e63860bf14: Already exists\n5650d9c5fe37: Already exists\ndd45ce71d88d: Already exists\n892241cbafcd: Already exists\nbb13ccab3410: Already exists\neb7cec6a6d6e: Already exists\n2515f3cd3dc7: Already exists\n36f48ca24f54: Already exists\nDigest: sha256:395c819f9b0faa05f80180eb3725a2539d69cd79a1dbeb8390b7b2e2922f6b54\nStatus: Image is up to date for Quaydocker run --rm Quay charts-cli test-connection “mongodb://username:password@host1:27017,host2:27017,host3:27017/DB_NAME?replicaSet=rd0&readPreference=secondary&authSource=admin”MongoDB connection URI successfully verified.echo \" mongodb://username:password@host1:27017,host2:27017,host3:27017/DB_NAME?replicaSet=rd0&readPreference=secondary&authSource=admin \" | docker secret create charts-mongodb-uri -docker stack deploy -c charts-docker-swarm-19.12.1.yml mongodb-chartsdocker service lsID NAME MODE REPLICAS IMAGE PORTS\nm5fggs1vludg mongodb-charts_charts replicated 1/1 Quay :80->80/tcp,:443->443/tcpdocker service logs m5fggs1vludgmongodb-charts_charts.1.7jv37pezwjv1@moby | parsedArgs\nmongodb-charts_charts.1.7jv37pezwjv1@moby | installDir (‘/mongodb-charts’)\nmongodb-charts_charts.1.7jv37pezwjv1@moby | log\nmongodb-charts_charts.1.7jv37pezwjv1@moby | salt\nmongodb-charts_charts.1.7jv37pezwjv1@moby | productNameAndVersion ({ productName: ‘MongoDB Charts Frontend’, version: ‘1.9.1’ })\nmongodb-charts_charts.1.7jv37pezwjv1@moby | gitHash (‘1a46f17f’)\nmongodb-charts_charts.1.7jv37pezwjv1@moby | supportWidgetAndMetrics (‘off’)\nmongodb-charts_charts.1.7jv37pezwjv1@moby | tileServer (undefined)\nmongodb-charts_charts.1.7jv37pezwjv1@moby | tileAttributionMessage (undefined)\nmongodb-charts_charts.1.7jv37pezwjv1@moby | rawFeatureFlags (undefined)\nmongodb-charts_charts.1.7jv37pezwjv1@moby | encryptionKeyPath\nmongodb-charts_charts.1.7jv37pezwjv1@moby | featureFlags ({})\nmongodb-charts_charts.1.7jv37pezwjv1@moby | lastAppJson ({})\nmongodb-charts_charts.1.7jv37pezwjv1@moby | existingInstallation (false)\nmongodb-charts_charts.1.7jv37pezwjv1@moby | tenantId (‘ce63a26f-eb0c-4d5c-9313-a396a8bf0214’)\nmongodb-charts_charts.1.7jv37pezwjv1@moby | chartsMongoDBUri\nmongodb-charts_charts.1.7jv37pezwjv1@moby | tokens\nmongodb-charts_charts.1.7jv37pezwjv1@moby | stitchMigrationsLog ({ completedStitchMigrations: [ ‘stitch-1332’, ‘stitch-1897’, ‘stitch-2041’, ‘migrateStitchProductFlag’, ‘stitch-2041-local’, ‘stitch-2046-local’, ‘stitch-2055’, ‘multiregion’, ‘dropStitchLogLogIndexStarted’ ] })\nmongodb-charts_charts.1.7jv37pezwjv1@moby | 
stitchConfigTemplate\nmongodb-charts_charts.1.7jv37pezwjv1@moby | libMongoIsInPath (true)docker exec -it $(docker container ls --filter name=mongodb-charts_chart -q) charts-cli add-user --first-name “admin” --last-name “admin” --email “[email protected]” --password “adminadmin” --role “UserAdmin”add-user command error: clientAppId not found. No Charts apps configured to add user to.@tomhollander I see that service is started, but still getting error on add-user.",
"username": "alak_patel"
},
{
"code": "add-user✔ supervisorStarted (true)\n✔ libMongoIsInPath (true)✔ mongoDBReachable (true)/mongodb-charts/logs/charts-cli.log",
"text": "The service is running, but the startup process is not complete which explains why add-user isn’t working. There should be several more lines written to the log, finishing with:The line expected immediately after ✔ libMongoIsInPath (true) is ✔ mongoDBReachable (true) which makes me think it’s having some trouble with the DB connection, despite the fact that it validated. Can you look at the file /mongodb-charts/logs/charts-cli.log within the container and see if that offers any clues? Feel free to email me at tom.hollander at mongodb.com if you don’t want to share your logs with the world.Tom",
"username": "tomhollander"
},
{
"code": "/mongodb-charts/logs/charts-cli.log/mongodb-charts/logs//mongodb-charts/logs/ # This environment variable controls the built-in support widget and\n # metrics collection in MongoDB Charts. To disable both, set the value\n # to \"off\". The default is \"on\".\n CHARTS_SUPPORT_WIDGET_AND_METRICS: \"off\"\n # Directory where you can upload SSL certificates (.pem format) which\n # should be considered trusted self-signed or root certificates when\n # Charts is accessing MongoDB servers with ?ssl=true\n SSL_CERT_DIR: /mongodb-charts/volumes/db-certs\nnetworks:\n - backend\nsecrets:\n - charts-mongodb-uri\n",
"text": "@tomhollander, I don’t see any logs file created. There is no /mongodb-charts/logs/charts-cli.logI even tried creating directory /mongodb-charts/logs/ manually and redeployed.\nBut don’t see any files within /mongodb-charts/logs/This is the yml file I am usingversion: “3.3”services:\ncharts:\nimage: Quay\nhostname: charts\nports:\n# host:container port mapping. If you want MongoDB Charts to be\n# reachable on a different port on the docker host, change this\n# to :80, e.g. 8888:80.\n- 80:80\n- 443:443\nvolumes:\n- keys:/mongodb-charts/volumes/keys\n- logs:/mongodb-charts/volumes/logs\n- db-certs:/mongodb-charts/volumes/db-certs\n- web-certs:/mongodb-charts/volumes/web-certs\nenvironment:\n# The presence of following 2 environment variables will enable HTTPS on Charts server.\n# All HTTP requests will be redirected to HTTPS as well.\n# To enable HTTPS, upload your certificate and key file to the web-certs volume,\n# uncomment the following lines and replace with the names of your certificate and key file.\n# CHARTS_HTTPS_CERTIFICATE_FILE: charts-https.crt\n# CHARTS_HTTPS_CERTIFICATE_KEY_FILE: charts-https.keynetworks:\nbackend:volumes:\nkeys:\nlogs:\ndb-certs:\nweb-certs:secrets:\ncharts-mongodb-uri:\nexternal: trueIs something wrong here?",
"username": "alak_patel"
},
{
"code": "docker exec -it $(docker container ls --filter name=_charts -q) cat /mongodb-charts/logs/charts-cli.log\n",
"text": "Your Swarm file looks fine to me.Just to double check the log file (which should be there): can you try running the following from outside the container, while the container is running?Tom",
"username": "tomhollander"
},
{
"code": "> docker exec -it $(docker container ls --filter name=_charts -q) cat /mongodb-charts/logs/charts-cli.log \n> \n> 2020-07-15T23:18:07.028+00:00 INFO called charts-cli startup with arguments {\"_\":[\"startup\"],\"debug\":false,\"help\":false,\"version\":false,\"with-test-facilities\":false,\"withTestFacilities\":false,\"d\":\"/mongodb-charts\",\"directory\":\"/mongodb-charts\",\"$0\":\"mongodb-charts/bin/charts-cli.js\"} \n> 2020-07-15T23:18:07.033+00:00 INFO parsedArgs task success \n> 2020-07-15T23:18:07.035+00:00 INFO installDir task success ('/mongodb-charts') \n> 2020-07-15T23:18:07.035+00:00 INFO log task success \n> 2020-07-15T23:18:07.036+00:00 INFO salt task success \n> 2020-07-15T23:18:07.043+00:00 INFO productNameAndVersion task success ({ productName: 'MongoDB Charts Frontend', version: '1.9.1' }) \n> 2020-07-15T23:18:07.043+00:00 INFO gitHash task success ('1a46f17f') \n> 2020-07-15T23:18:07.043+00:00 INFO supportWidgetAndMetrics task success ('off') \n> 2020-07-15T23:18:07.043+00:00 INFO tileServer task success (undefined) \n> 2020-07-15T23:18:07.043+00:00 INFO tileAttributionMessage task success (undefined) \n> 2020-07-15T23:18:07.043+00:00 INFO rawFeatureFlags task success (undefined) \n> 2020-07-15T23:18:07.049+00:00 INFO stitchMigrationsLog task success ({ completedStitchMigrations: [] }) \n> 2020-07-15T23:18:07.050+00:00 INFO featureFlags task success ({}) \n> 2020-07-15T23:18:07.059+00:00 INFO lastAppJson task success ({}) \n> 2020-07-15T23:18:07.059+00:00 INFO existingInstallation task success (false) \n> 2020-07-15T23:18:07.060+00:00 INFO tenantId task success ('61e0ad7f-1aed-4f3d-b4a6-a183921a84c8') \n> 2020-07-15T23:18:07.062+00:00 INFO chartsMongoDBUri task success \n> 2020-07-15T23:18:07.064+00:00 INFO tokens task success \n> 2020-07-15T23:18:07.064+00:00 INFO encryptionKeyPath task success \n> 2020-07-15T23:18:07.064+00:00 INFO stitchConfigTemplate task success \n> 2020-07-15T23:18:07.066+00:00 INFO libMongoIsInPath task success (true) \n> 2020-07-15T23:18:07.165+00:00 INFO waiting for MongoDB, attempt #1 to connect to MongoDB at mongodb://username:*********@host1:27017,host2:27017,host3:27017/DB_NAME?replicaSet=rd0&readPreference=secondary&authSource=admin.",
"text": "@tomhollander,After deployment service was up and running. After service was up, the log file was just stuck atwaiting for MongoDB, attempt #1 to connect to MongoDBI thought it might take some time. So I checked every 1 hour. But still same status. Here is what I. see when I run the. command that you sent:",
"username": "alak_patel"
},
{
"code": "isMastertest-connection",
"text": "Thanks - at least we’re narrowing down the source of the problem. Basically it looks like it’s hanging when attempting to connect to the database. I’ve never seen this before so unfortunately I don’t have any great ideas on the cause or the solution. The code that’s hanging is basically just connecting to the DB and running isMaster, after which it should immediately log a success or failure message. If it fails, it will back off and try again a number of times, but there should still be multiple log events written.All I can really suggest is to try a simpler database configuration, e.g. a single node DB instead of a replica set, just to see if you can get it working. Once you have something working, maybe you can gradually add more things and figure out the cause. That said, connecting to a replica set normally works, and the fact that your test-connection script succeeded implies it’s set up fine. But I think you’ve reached a point where more exploratory options are needed.Tom",
"username": "tomhollander"
},
{
"code": "2020-07-22T15:44:26.708+00:00 INFO called charts-cli startup with arguments {\"_\":[\"startup\"],\"debug\":false,\"help\":false,\"version\":false,\"with-test-facilities\":false,\"withTestFacilities\":false,\"d\":\"/mongodb-charts\",\"directory\":\"/mongodb-charts\",\"$0\":\"mongodb-charts/bin/charts-cli.js\"} \n2020-07-22T15:44:26.712+00:00 INFO parsedArgs task success \n2020-07-22T15:44:26.714+00:00 INFO installDir task success ('/mongodb-charts') \n2020-07-22T15:44:26.714+00:00 INFO log task success \n2020-07-22T15:44:26.714+00:00 INFO salt task success \n2020-07-22T15:44:26.715+00:00 INFO productNameAndVersion task success ({ productName: 'MongoDB Charts Frontend', version: '1.9.1' }) \n2020-07-22T15:44:26.715+00:00 INFO gitHash task success ('1a46f17f') \n2020-07-22T15:44:26.715+00:00 INFO supportWidgetAndMetrics task success ('off') \n2020-07-22T15:44:26.715+00:00 INFO tileServer task success (undefined) \n2020-07-22T15:44:26.716+00:00 INFO tileAttributionMessage task success (undefined) \n2020-07-22T15:44:26.716+00:00 INFO rawFeatureFlags task success (undefined) \n2020-07-22T15:44:26.719+00:00 INFO stitchMigrationsLog task success ({ completedStitchMigrations: [] }) \n2020-07-22T15:44:26.719+00:00 INFO featureFlags task success ({}) \n2020-07-22T15:44:26.725+00:00 INFO lastAppJson task success ({}) \n2020-07-22T15:44:26.725+00:00 INFO existingInstallation task success (false) \n2020-07-22T15:44:26.726+00:00 INFO tenantId task success ('a47f2a2f-f3a6-4496-a413-036d9883b861') \n2020-07-22T15:44:26.727+00:00 INFO chartsMongoDBUri task success \n2020-07-22T15:44:26.728+00:00 INFO tokens task success \n2020-07-22T15:44:26.729+00:00 INFO encryptionKeyPath task success \n2020-07-22T15:44:26.729+00:00 INFO stitchConfigTemplate task success \n2020-07-22T15:44:26.730+00:00 INFO libMongoIsInPath task success (true) \n2020-07-22T15:44:26.829+00:00 INFO waiting for MongoDB, attempt #1 to connect to MongoDB at mongodb://user:*******@host:27017/DB?replicaSet=rd0&authSource=admin. \n2020-07-22T15:44:27.117+00:00 INFO waiting for MongoDB, successfully connected to MongoDB at mongodb://user:*******@host:27017/DB?replicaSet=rd0&authSource=admin after 1 attempts. \n2020-07-22T15:44:27.119+00:00 INFO mongoDBReachable task success (true) \n2020-07-22T15:44:27.124+00:00 INFO stitchMigrationsExecuted task success ([ 'stitch-1332', 'stitch-1897', 'stitch-2041', 'migrateStitchProductFlag', 'stitch-2041-local', 'stitch-2046-local', 'stitch-2055', 'multiregion', 'dropStitchLogLogIndexStarted' ]) \n2020-07-22T15:44:27.373+00:00 INFO minimumVersionRequirement task success (true) \n2020-07-22T15:44:27.376+00:00 INFO stitchConfig task success \n2020-07-22T15:44:27.378+00:00 INFO stitchConfigWritten task success (true) \n2020-07-22T15:44:27.385+00:00 INFO stitchChildProcess task success \n2020-07-22T15:44:27.485+00:00 INFO waiting for Stitch to start, attempt #1 to connect to Stitch server at http://localhost:8080. \n2020-07-22T15:44:27.490+00:00 WARN waiting for Stitch to start, attempt #1 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-07-22T15:44:27.590+00:00 INFO waiting for Stitch to start, attempt #2 to connect to Stitch server at http://localhost:8080. \n2020-07-22T15:44:27.591+00:00 WARN waiting for Stitch to start, attempt #2 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-07-22T15:44:27.791+00:00 INFO waiting for Stitch to start, attempt #3 to connect to Stitch server at http://localhost:8080. 
\n2020-07-22T15:44:27.792+00:00 WARN waiting for Stitch to start, attempt #3 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-07-22T15:44:27.976+00:00 INFO indexesCreated task success (true) \n2020-07-22T15:44:28.095+00:00 INFO waiting for Stitch to start, attempt #4 to connect to Stitch server at http://localhost:8080. \n2020-07-22T15:44:28.096+00:00 WARN waiting for Stitch to start, attempt #4 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-07-22T15:44:28.598+00:00 INFO waiting for Stitch to start, attempt #5 to connect to Stitch server at http://localhost:8080. \n2020-07-22T15:44:28.599+00:00 WARN waiting for Stitch to start, attempt #5 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-07-22T15:44:29.399+00:00 INFO waiting for Stitch to start, attempt #6 to connect to Stitch server at http://localhost:8080. \n2020-07-22T15:44:29.400+00:00 WARN waiting for Stitch to start, attempt #6 failed: connect ECONNREFUSED 127.0.0.1:8080 \n2020-07-22T15:44:30.703+00:00 INFO waiting for Stitch to start, attempt #7 to connect to Stitch server at http://localhost:8080. \n2020-07-22T15:44:31.383+00:00 INFO waiting for Stitch to start, successfully connected to Stitch at http://localhost:8080 after 7 attempts. \n2020-07-22T15:44:31.383+00:00 INFO stitchServerRunning task success (true) \n",
"text": "@tomhollander, thanks a lot for your help I deleted app, auth, metadata and config databases. Then tried with the simplest connection URI. After 7 attempts it finally connected to Stitch and its working now.Logs:➜ mongodb-charts docker exec -it $(docker container ls --filter name=_charts -q) cat /mongodb-charts/logs/charts-cli.logI will now try adding more details to connection URL.Thank you for your time. Appreciate your help.",
"username": "alak_patel"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Charts - Add-user command error: An error occurred authenticating | 2020-07-09T19:10:38.634Z | MongoDB Charts - Add-user command error: An error occurred authenticating | 8,437 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hello everyoneI would like to know if it could be possible to make an aggregation of multiple collections in a single aggregation.I do not want to use any javascript function to perform this but only a single aggregation call.Is that possible?Sincerely\nEzequias",
"username": "Ezequias_Rocha"
},
{
"code": "",
"text": "Take a look atYou might also be interested in course M121 fromDiscover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.",
"username": "steevej"
},
{
"code": "",
"text": "I don’t really want to make a lookup but some summarization steps like $count in multiple collections at once.Thank you.",
"username": "Ezequias_Rocha"
},
{
"code": "db.players.insertMany([\n { player: 'Bob', fromTeam: 'A' },\n { player: 'Sam', fromTeam: 'B' },\n { player: 'Steeve', fromTeam: 'B' },\n]);\n\ndb.teams.insertMany([\n { team: 'A', country: 'US' },\n { team: 'B', country: 'Canada' },\n]);\n\ndb.coaches.insertMany([\n { coach: 'Daniel', yearOfExperience: 10 },\n]);\n{ \"totalPlayers\" : 3, \"totalTeams\" : 2, \"totalCoaches\" : 1 }\ndb.players.aggregate([\n // count documents in the current collection\n {\n $count: 'totalPlayers',\n },\n // join other collections, in which you need\n // to count documents\n {\n $lookup: {\n from: 'teams',\n pipeline: [\n {\n // count the documents in this specific\n // collection with the $count stage\n $count: 'result',\n },\n ],\n as: 'totalTeams',\n },\n },\n {\n $lookup: {\n from: 'coaches',\n pipeline: [\n {\n $count: 'result',\n },\n ],\n as: 'totalCoaches',\n },\n },\n // $convert arrays, returned by $lookup pipelines,\n // so we count easily reach the 'result' prop\n {\n $unwind: '$totalTeams',\n },\n {\n $unwind: '$totalCoaches',\n },\n // reset the total-props by reaching the 'result' value\n {\n $addFields: {\n totalTeams: '$totalTeams.result',\n totalCoaches: '$totalCoaches.result',\n },\n },\n]).pretty();\n",
"text": "Hello, @Ezequias_Rocha!I don’t really want to make a lookup but some summarization steps like $count in multiple collections at once.As @steevej already pointed out, it is possible to do with $lookup operator.Let me show you by an example.Assume, we have the following dataset:To count the documents in each collection and get the result like this:We can use the following aggregation, that uses a set of $lookup stages with nested pipeline:",
"username": "slava"
},
{
"code": "db.players.aggregate([\n // count documents in the current collection\n {\n $count: 'totalPlayers',\n },\n // join other collections, in which you need\n // to count documents\n {\n $lookup: {\n from: 'teams',\n pipeline: [\n {\n // count the documents in this specific\n // collection with the $count stage\n $count: 'result',\n },\n ],\n as: 'totalTeams',\n },\n },\n {\n $lookup: {\n from: 'coaches',\n pipeline: [\n {\n $count: 'result',\n },\n ],\n as: 'totalCoaches',\n },\n },\n // $convert arrays, returned by $lookup pipelines,\n // so we count easily reach the 'result' prop\n {\n $unwind: '$totalTeams',\n },\n {\n $unwind: '$totalCoaches',\n },\n // reset the total-props by reaching the 'result' value\n {\n $addFields: {\n totalTeams: '$totalTeams.result',\n totalCoaches: '$totalCoaches.result',\n },\n },\n]).pretty();\n",
"text": "Thank you @slava in my case i have no match (I would like to perform an full outer join) and would like to filter elements by a common field type (a date type) .I could apply the same strategy?BTW: I am in v4.0.6Best regards\nEzequias",
"username": "Ezequias_Rocha"
},
{
"code": "",
"text": "in my case i have no match (I would like to perform an full outer join) and would like to filter elements by a common field type (a date type) .I could apply the same strategy?In the example aggregation above we count all the documents from certain collections.\nBut, you can also selectively filter the documents form each collection. Just add the $match stage before any $count stage, depending what collection you want to filter before count.",
"username": "slava"
},
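Editor's note: a minimal sketch of what the suggestion above could look like, assuming a hypothetical createdAt date field on each collection (the field name and cutoff date are illustrative, not taken from the thread):

```javascript
// Count only the documents that pass a date filter in each collection.
// "createdAt" and the cutoff date are assumptions for illustration.
db.players.aggregate([
  { $match: { createdAt: { $gte: ISODate('2020-01-01') } } },
  { $count: 'totalPlayers' },
  {
    $lookup: {
      from: 'teams',
      pipeline: [
        // filter this collection first, then count what is left
        { $match: { createdAt: { $gte: ISODate('2020-01-01') } } },
        { $count: 'result' },
      ],
      as: 'totalTeams',
    },
  },
  { $unwind: '$totalTeams' },
  { $addFields: { totalTeams: '$totalTeams.result' } },
]);
```

Because the $lookup pipelines here are uncorrelated, each joined collection gets its own independent filter before its $count stage.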
{
"code": "",
"text": "It worked! @slava ",
"username": "Ezequias_Rocha"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is it possible make an aggregation of multiples collections at once? | 2020-07-22T12:57:32.040Z | Is it possible make an aggregation of multiples collections at once? | 2,123 |
null | [
"student-developer-pack"
] | [
{
"code": "",
"text": "I have signed up in MongoDB with my GitHub Student Account. Is it right that I can get free certification if i have signed with student account?",
"username": "Nirmal_Patel"
},
{
"code": "",
"text": "Here is a link to the MongoDB Student Pack.Free certification means, I think, free registration to take the certification exam. Normally there is a fee to take the certification exam.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hi @Prasad_Saya, that’s correct Once you’ve completed one of our Learning Paths, you’ll receive 100% discount to the exam.",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "Hello @Lieke_Boonis this discount bound to the student program or does this count in general?Michael ",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi Michael,This offer is part of our MongoDB Students pack Best,Lieke",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "It’s written\nDuring this COVID-19 time, we are here to help you. Complete one of our learning paths and enrich your resume with our free certification!So it will be available only for some Time",
"username": "Alex_Beckham"
},
{
"code": "",
"text": "Hi @Alex_BeckhamWelcome to the forum!That’s correct. It might change in the future, but at this moment we offer the free certification temporarily now that most schools & universities are closed. We don’t know when the COVID-19 pandemic will end, but we’re expecting that this situation will last for a while longer, and we’re not planning on changing this offer in the close future.",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "So Which Certificates do we get for freeBoth Exams Listed here MongoDB Courses and Trainings | MongoDB UniversityBoth have price tags of 150.00 USD , this price will be forgiven , Just asking",
"username": "Alex_Beckham"
},
{
"code": "",
"text": "Hi @Alex_Beckham I’m pretty sure that the voucher is only for a single exam. You would have to choose between the DBA or the Developer certification.If my understanding on that is incorrect then hopefully @Lieke_Boon will come by soon and correct me.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "That’s correct @Doug_Duncan, thank you!Once you’ve completed one of our Learning Paths , you’ll receive 100% discount to the exam. So it’s either for the DBA or for the Developer certification exam.",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "@Lieke_Boon , Please could you explain how to request a 100% discount? I completed the “Developer Learning Path” using my Github Students pack",
"username": "Osama_Rashwan"
},
{
"code": "",
"text": "Hi @Osama_Rashwan,Welcome to the forum! I’ve responded to your email with the code Good luck!Lieke",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "HiTo receive a code, please follow the instructions mentioned here: MongoDB Student Pack.Thank you!",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "I have completed the DBA Path.\nWhen will I get the voucher?",
"username": "Ajinkya_Bapat"
},
{
"code": "",
"text": "Hi @Ajinkya_BapatPlease go to your dashboard at MongoDB Student Pack and follow the instructions underneath ‘Free Certification’.Thank you ",
"username": "Lieke_Boon"
}
] | Free Certification for Student | 2020-05-03T06:24:57.988Z | Free Certification for Student | 19,440 |
null | [
"student-developer-pack"
] | [
{
"code": "",
"text": "I have signed up for MongoDB University using my GitHub Students Pack.\nI have already completed DBA Certification Path.\nBut I have not yet received any voucher/code for the exam.Could you please let me know about the same.",
"username": "Ajinkya_Bapat"
},
{
"code": "",
"text": "Hi @Ajinkya_Bapat,Welcome to the community!Please follow the instructions on MongoDB Student Pack on how to receive the Free Certification voucher.Thank you!Lieke",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Completed DBA Path, waiting for Free Certification voucher | 2020-07-22T05:33:22.658Z | Completed DBA Path, waiting for Free Certification voucher | 7,606 |
[
"c-driver"
] | [
{
"code": "|#0|0x00007fff68e78e52 in _platform_strlen ()|\n|---|---|\n|#1|0x00000001003cb97f in _mongoc_handshake_build_doc_with_application at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-handshake.c:525|\n|#2|0x00000001003e6186 in _build_ismaster_with_handshake [inlined] at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-topology-scanner.c:125|\n|#3|0x00000001003e613f in _mongoc_topology_scanner_get_ismaster at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-topology-scanner.c:156|\n|#4|0x00000001003e6c7c in _begin_ismaster_cmd at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-topology-scanner.c:184|\n|#5|0x00000001003e6bc2 in mongoc_topology_scanner_node_setup_tcp at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-topology-scanner.c:703|\n|#6|0x00000001003e67b6 in mongoc_topology_scanner_node_setup at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-topology-scanner.c:823|\n|#7|0x00000001003e710c in mongoc_topology_scanner_start at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-topology-scanner.c:947|\n|#8|0x00000001003e2c01 in mongoc_topology_scan_once at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-topology.c:587|\n|#9|0x00000001003e2f15 in _mongoc_topology_do_blocking_scan [inlined] at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-topology.c:621|\n|#10|0x00000001003e2ef0 in mongoc_topology_select_server_id at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-topology.c:854|\n|#11|0x00000001003b6465 in _mongoc_cluster_stream_for_optype at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-cluster.c:2282|\n|#12|0x00000001003bfc1e in _mongoc_cursor_fetch_stream at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-cursor.c:662|\n|#13|0x00000001003c2c1a in _prime at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-cursor-find.c:40|\n|#14|0x00000001003c1038 in _call_transition [inlined] at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-cursor.c:1199|\n|#15|0x00000001003c102b in mongoc_cursor_next at /Users/reeteshranjan/dev/mongo-c-driver-1.16.2/src/libmongoc/src/mongoc/mongoc-cursor.c:1275|\n2020/07/16 01:46:50.0873: [77295]: TRACE: mongoc: ENTRY: mongoc_topology_description_init():75\n2020/07/16 01:46:50.0874: [77295]: TRACE: mongoc: EXIT: mongoc_topology_description_init():94\n2020/07/16 01:46:50.0874: [77295]: TRACE: mongoc: ENTRY: mongoc_server_description_init():115\n2020/07/16 01:46:50.0874: [77295]: TRACE: mongoc: EXIT: mongoc_server_description_init():139\n2020/07/16 01:46:50.0874: [77295]: TRACE: cluster: ENTRY: mongoc_cluster_init():2147\n2020/07/16 01:46:50.0874: [77295]: TRACE: cluster: EXIT: mongoc_cluster_init():2174\n2020/07/16 01:46:50.0874: [77295]: TRACE: database: ENTRY: _mongoc_database_new():66\n2020/07/16 01:46:50.0874: [77295]: TRACE: database: EXIT: _mongoc_database_new():82\n2020/07/16 01:46:50.0874: [77295]: TRACE: collection: ENTRY: _mongoc_collection_new():172\n2020/07/16 01:46:50.0874: [77295]: TRACE: collection: EXIT: _mongoc_collection_new():197\n2020/07/16 01:46:50.0874: [77295]: TRACE: collection: ENTRY: _mongoc_collection_new():172\n2020/07/16 01:46:50.0880: [77295]: TRACE: collection: EXIT: 
_mongoc_collection_new():197\n2020/07/16 01:46:50.0880: [77295]: TRACE: collection: ENTRY: _mongoc_collection_new():172\n2020/07/16 01:46:50.0880: [77295]: TRACE: collection: EXIT: _mongoc_collection_new():197\n2020/07/16 01:46:53.0744: [77295]: TRACE: cursor: ENTRY: _mongoc_cursor_new_with_opts():245\n2020/07/16 01:46:53.0744: [77295]: TRACE: cursor: EXIT: _mongoc_cursor_new_with_opts():388\n2020/07/16 01:46:53.0744: [77295]: TRACE: cursor: ENTRY: mongoc_cursor_error():1139\n2020/07/16 01:46:53.0744: [77295]: TRACE: cursor: EXIT: mongoc_cursor_error():1141\n2020/07/16 01:46:53.0744: [77295]: TRACE: cursor: ENTRY: mongoc_cursor_error_document():1150\n2020/07/16 01:46:53.0744: [77295]: TRACE: cursor: EXIT: mongoc_cursor_error_document():1172\n2020/07/16 01:46:53.0744: [77295]: TRACE: cursor: ENTRY: mongoc_cursor_next():1213\n2020/07/16 01:46:53.0744: [77295]: TRACE: cursor: TRACE: mongoc_cursor_next():1218 cursor_id(0)\n2020/07/16 01:46:53.0744: [77295]: TRACE: cursor: ENTRY: _mongoc_cursor_fetch_stream():651\n2020/07/16 01:46:53.0752: [77295]: TRACE: cluster: ENTRY: _mongoc_cluster_stream_for_optype():2278\n2020/07/16 01:46:53.0752: [77295]: TRACE: topology_scanner: ENTRY: mongoc_topology_scanner_node_setup_tcp():661\n",
"text": "I am using community edition 4.2.8 (installed via brew) and libmongoc 1.16.2 compiled on my OSX Catalina.I have setup a DB, few collections and indexes using the mongo shell (no data added though).Built my app following the tutorials/documentation for libmongoc and libbson.I’m getting a crash while trying to use the cursor obtained for a find operation. The crash is similar to https://groups.google.com/g/mongodb-user/c/TqC185jDfAA/m/oxgko4PiDgAJ. However; I could not use the thread to solve my issue. The user having reported that seems to have some memory allocation issues, which I don’t see to be the case about my implementation.This is the stack trace I see in Xcode.I got this trace log after compiling with tracing on:The attached screenshot shows the bad access reported by Xcode.Has anyone seen this and knows what causes this?\nScreenshot 2020-07-16 at 1.39.31 AM989×671 93 KB\n",
"username": "Reetesh_Ranjan"
},
{
"code": "mongoc_cleanup()mongoc_cleanup()",
"text": "Hi @Reetesh_Ranjan, the stack trace looks similar to the one reported here: https://jira.mongodb.org/browse/CDRIVER-3674In that case, the application was calling mongoc_cleanup() before the application terminated. mongoc_cleanup cleans up global state and can only be called once. After it is called, it is invalid to call other C driver functions. Is it possible mongoc_cleanup() is getting called before your application terminates? Perhaps in a separate thread?",
"username": "Kevin_Albertson"
}
] | Handshake crashes while trying to work with cursor | 2020-07-15T20:30:06.101Z | Handshake crashes while trying to work with cursor | 2,363 |
|
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Let’s say I have an entity (Document) with 20 fields, one of them is an array of objects with 2 fields each. I read that when a single document is queried, mongo stores in memory that document (Not 100% sure of this fact though, feel free to correct me if I am wrong).\nIf I query that document but I want to retrieve just the rest of the fields (Without the array of objects), mongo will map the whole document or just the fields I specify? I am a little worried about this fact since that array con growth to likely 500 or more elements.In one of the basics queries in my Workload that array is essential, but not when I query the document directly (By ID).Any info or suggestions will be welcome.\nThanks in advance!",
"username": "Gabriel_Betancourt"
},
{
"code": "",
"text": "Hello @Gabriel_Betancourt welcome to the community!mongo stores in memory that document (Not 100% sure of this fact though, feel free to correct me if I am wrong).That is true, MongoDB will read the data into your working set (say: RAM) . Basically you need to make sure that your data fits into RAM to get a good performance. For a first step to get familiar I like to point you to the free MongoDB University classes (in this sequence)Concerning your query question: By default, queries in MongoDB return all fields in matching documents. To limit the amount of data that MongoDB sends to applications, you can include a projection document to specify or restrict fields to return. Taken from: Project Fields to Return from Query\nIn this document you can read how to limit the files returned, and further down also how to show only specific fields of an array (just in case).\nStepping forward, I like to mention that infinity growing arrays or very large arrays can lead to performance issues. How to workaround that can vary and depends on your use case. One common path is to move a huge constantly growing array to an extra collection and utilize the MongoDBs features e.g. indexing, covered index searches, …Hope that helps\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hello @michael_hoeller,\nThanks for the fast and helpful reply!I am relative new in the NoSQL world from a SQL background so there is a lot to learn. I am also indeed checking those lesson from MongoDB university at the same time I am building the project with Atlas.I was aware about the projection feature, but my doubt was if even when I specify certain fields to return in the query, the full document still will be managed in memory (RAM as you mentioned). If I understood well what you said, it actually does, so yes, probably that array there can be problematic over time.My use case can be described like this:\nI need to query a collection of users and filter for a few fields, but also, check into that embedded array that the user who made the query doesn’t exist there. The embedded option for this particular query looks useful, as the array is embedded I can add a new condition to the query in an easy way, but now with the fact that mongo ‘‘read’’ the whole document , the growing possibility of the array don’t looks very well.About your approach, resolve the array growing problem but I’ll need then to make join with the other documents to validate its ‘‘non-existen’’ state, and we are talking about a critical and very often operation in the system. So in terms of performance not sure what is better for my case.",
"username": "Gabriel_Betancourt"
},
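Editor's note: as a rough sketch of the embedded-array variant described above, the exclusion check combined with the projection mentioned earlier could look like this. The field path "records.userId" and the requester id are assumptions based on the sample documents later in the thread, not a query taken from it:

```javascript
// requesterId stands for the _id of the user running the query (assumed value
// borrowed from the sample data further down in this thread).
const requesterId = ObjectId('5f0b29c78f491172cfe8b049');

db.users.find(
  // matches users whose embedded records array has no entry for the requester
  { 'records.userId': { $ne: requesterId } },
  // projection: leave the potentially large array out of the result
  { records: 0 }
);
```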
{
"code": "",
"text": "Hello @Gabriel_Betancourt,sorry for the delay, I was abroad. Concerning your use case: I don’t think there is enough clarity to provide a recommendation. You mention that you commonly want to look for fields in an embedded array (so all of the array is being used?), but you are also concerned about RAM. I’d like point you to the following documents to support your decision:When you still feel unsure after visiting the mentioned docs, feel free to provide some sample data and what you want to archive. I am pretty sure that we, as in the community, will find an answer.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "{\n\"degree\" : NumberInt(1), \n\"rating\" : NumberInt(3), \n\"records\" : [\n {\n \"userId\" : ObjectId(\"5f0b29c78f491172cfe8b049\"), \n \"type\" : \"pending\"\n },\n {\n \"userId\" : ObjectId(\"5f0b29c78f491172ct48b077\"), \n \"type\" : \"done\"\n }\n]}\n{ \n\"_id\" : ObjectId(\"5f0fb901b320f5ec21269279\"), \n\"userId\" : ObjectId(\"5f0b29c78f491172cfe8b04a\"), \n\"record\" : [\n {\n \"user\" : ObjectId(\"5f0b29c78f491172cfe8b049\"), \n \"name\" : \"Gabriel\", \n \"type\" : \"progress\"\n }, \n {\n \"user\" : ObjectId(\"5f0b29c78f491172cfe8b04b\"), \n \"name\" : \"Rivaldo\", \n \"type\" : \"sended\"\n }\n]\n",
"text": "Hello @michael_hoeller.\nI was researching and taking a look of all the docs, so allow me to share with you the conclusions for a second approach about it.My use case is the next: I have a Users collection, but I need to keep a record of the interactions of that user with other users, in an array. Something like this:This is in case of the embedded solution. One of the main queries is to filter the Users collection and fetch those users with whom I, (The user who query) did not have any interaction yet, so I need to check in the ‘‘records’’ array that my ID is not there.The thing is that this array can be short in some cases (40, 50) in the minor case, but it can be hundreds or thousands as well, so, the Users collection is getting queried very often (and the previous array check is not always necessary), Taking that in consideration, I thought having that array embedded is not a good idea.So, I think the other solution is to have the Records in another collection, One to One relationship and made the query via $lookup (I already tested in Compass and it’s possible, it works).The records collection will look something like:}Also it allows me to add more fields or modify the Records schema if the specificities of the project changes, (very high probability) without worry about growing, or modifying the users collection too often, since it is the most important collection of the project.What are your thoughts about it? Thanks in advance!Regards, Gabriel",
"username": "Gabriel_Betancourt"
}
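Editor's note: for reference, a hedged sketch of the separate-collection variant described in the last post (collection and field names are inferred from the sample documents above; this is not the exact query tested in Compass):

```javascript
const requesterId = ObjectId('5f0b29c78f491172cfe8b049'); // the querying user (assumed)

db.users.aggregate([
  // one-to-one join from users to the external records collection
  {
    $lookup: {
      from: 'records',
      localField: '_id',
      foreignField: 'userId',
      as: 'rec',
    },
  },
  // keep only users whose record list has no entry for the requester
  { $match: { 'rec.record.user': { $ne: requesterId } } },
  // drop the joined data before returning the user documents
  { $project: { rec: 0 } },
]);
```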
] | Related with querying a single document | 2020-07-13T07:04:27.340Z | Related with querying a single document | 2,708 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi, I am new to MongoDB. Have a basic question of avoiding duplicate inserts when using update() with upsert=true. Based on the concurrency control document (https://docs.mongodb.com/manual/core/write-operations-atomicity/index.html#concurrency-control), it requires a unique index to prevent insertions or updates from creating duplicate data. But due to the nature of our data, it is not possible define a unique index on documents in a collection. Is there any other way to avoid the duplicate insert problem? Thanks.",
"username": "Angelsfly_Because"
},
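Editor's note: the question references the concurrency-control documentation; as background, this is the pattern that documentation assumes, shown with purely illustrative collection, index, and field names:

```javascript
// A unique index on the natural key is what prevents two concurrent upserts
// from both inserting: one insert wins, the other fails (or retries as an update).
db.items.createIndex({ key: 1 }, { unique: true });

db.items.updateOne(
  { key: 'X-123' },                                              // illustrative natural key
  { $set: { value: 42 }, $setOnInsert: { createdAt: new Date() } },
  { upsert: true }
);
```

Without such an index, as the rest of this thread discusses, the duplicate check has to happen in application logic instead.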
{
"code": "",
"text": "Welcome to the community @Angelsfly_Because!due to the nature of our data, it is not possible define a unique index on documents in a collectionCan you provide more detail on your use case and why a unique index would not be an option? An example document or two would be helpful.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie,Thanks for asking. Here is my explanation of our unique situation that really makes a unique index or compound unique index really difficult.The data model is a bit complicated. My explanation could be long. Please bear with me.\nInitially, we have 3 types of incoming documents and they are placed into the same collection.type 1 could have 3 attributes: X, Y, Z\ntype 2 could have 4 attributes: X, Y, M, B\ntype 3 could have 2 attributes: L, M, BIn each type, each attribute value is uniquely defined. For instance, 2 type(3) documents have the\nsame L value will be considered as equivalent and they will be merged into a new document, and old\ndocuments will be removed. It is also true for M and B for type(3). Each attribute is optional but\nas least on of them will be present. So it is possible, one type(3) has L, B and the other type(3)\nonly has a M. So if a new type(3) comes in with L, M and their values are the same. This new document\nwill trigger a merge of all three.It is possible after merge, some attribute could have multiple values, for instance, two type(3)\ndocuments with the same L but different M, then the new document will have both values for M\nattribute. The merge also applies to the different types. For instance, one type(1) with X, Y\nand Z, and one type(2) with X, M, and B. They both have the same X value, then they will be merged\nas well. After the merge, the new document will have X, Y, Z, M, and B attributes. Say a new\ntype(3) document comes in with B and it has the same value as B, then it will be merged into as well.\nIn other words, a type(1) and a type(3) could be merged because of a type(2).Now the problem is we have a type(1) document with X, Y and a type(2) document with Y, B. During the\ndatabase query stage, both find there is no conflict. So each one tries to write into the db. But\nthat is incorrect because they have the same Y value. Therefore, they should be written into db as\none document.We cannot define a unique index. For instance, if we set the X as the unique index, then it is\npossible type(2) and type(3) conflict because of B. If we define a compound index X and B, then it\nis possible we have two type(3) documents with the same L value being written into db and that is\na conflict.",
"username": "Angelsfly_Because"
}
] | How to avoid duplicate upserts without a unique index | 2020-07-15T12:50:41.189Z | How to avoid duplicate upserts without a unique index | 8,560 |
null | [
"aggregation"
] | [
{
"code": "{\n \"id\": \"...\",\n \"section\": \"x\"\n \"items\": [\n {\n \"id\": 1,\n \"count\": 10,\n }, {\n \"id\": 2,\n \"count\": 20,\n }\n ]\n},\n{\n \"id\": \"...\",\n \"section\": \"x\"\n \"items\": [\n {\n \"id\": 1,\n \"count\": 100,\n }, {\n \"id\": 2,\n \"count\": 200,\n }\n ]\n}\ndb.c.aggregate([\n { $unwind: \"$items\"},\n { $group :\n {\n _id : \"$items.id\",\n SumCount: { $sum: \"$items.count\" },\n }\n }\n]);\n1 | 110\n2 | 220\nCOUNT(section) OVER (PARTITION BY section) AS [COUNT_Section]\n1 | 110 | 2 -- two document with section:'x'\n2 | 220 | 2\n",
"text": "How i can do mongodb group by query with like sql over by PARTITION ?Now result:But need add column count by field ‘section’ like in sql :Need result:",
"username": "alexov_inbox"
},
{
"code": "",
"text": "any news , idea, variants ?",
"username": "alexov_inbox"
},
{
"code": "",
"text": "mb question is wrong or something is not clear?",
"username": "alexov_inbox"
},
{
"code": "db.c.aggregate([\n {\n $unwind: '$items',\n },\n {\n $group: {\n _id: '$items.id',\n sumCount: {\n $sum: '$items.count',\n },\n totalDocs: {\n $sum: 1,\n }\n }\n }\n]);\n[\n { \"_id\" : 2, \"sumCount\" : 220, \"totalDocs\" : 2 },\n { \"_id\" : 1, \"sumCount\" : 110, \"totalDocs\" : 2 }\n]\n",
"text": "Hello, @alexov_inbox!You just need to sum up the documents in your $group stage. Like this:The output will be:",
"username": "slava"
},
{
"code": "{\n \"id\": \"...\",\n \"section\": \"x\"\n \"items\": [\n {\n \"id\": 1,\n \"count\": 10,\n },\n {\n \"id\": 1,\n \"count\": 10,\n }, {\n \"id\": 2,\n \"count\": 20,\n }\n ]\n},",
"text": "its ‘live hack’ not real over by partition\nbecause if nested array contain two items with equal keys , will return “totalDocs” : 3 , but documents with section ‘x’ = 2",
"username": "alexov_inbox"
},
{
"code": "{\n \"id\": \"...\",\n \"section\": \"x\"\n \"items\": [\n {\n \"id\": 1,\n \"count\": 10,\n },\n {\n \"id\": 1,\n \"count\": 10,\n }, {\n \"id\": 2,\n \"count\": 20,\n }\n ]\n},\n[\n { \"_id\" : 2, \"sumCount\" : 20, \"totalDocs\" : 1 },\n { \"_id\" : 1, \"sumCount\" : 20, \"totalDocs\" : 2 }\n]\n",
"text": "For this dataset the above aggregation returns the following result:Isn’t this what you expect to achieve?",
"username": "slava"
},
{
"code": "{\n \"id\": \"...\",\n \"section\": \"x\"\n \"items\": [\n {\n \"id\": 1,\n \"count\": 10,\n },\n {\n \"id\": 1,\n \"count\": 10,\n }, {\n \"id\": 2,\n \"count\": 20,\n }\n ]\n},\n\n{\n \"id\": \"...\",\n \"section\": \"x\"\n \"items\": [\n {\n \"id\": 1,\n \"count\": 100,\n }, {\n \"id\": 2,\n \"count\": 200,\n }\n ]\n}\n[\n { \"_id\" : 2, \"sumCount\" : 220, \"totalDocsPerFieldSection\" : 2 },\n { \"_id\" : 1, \"sumCount\" : 120, \"totalDocsPerFieldSection\" : 2 }\n]",
"text": "yes , its wrong, i have only 2 document with section: ‘x’. SQL COUNT(section) OVER (PARTITION BY section) AS [COUNT_Section] return 2 for any result rowsshould return: (and better rename field totalDocs to totalDocsPerFieldSection)",
"username": "alexov_inbox"
},
{
"code": "db.test1.aggregate([\n {\n $unwind: '$items',\n },\n {\n $group: {\n _id: '$items.id',\n sumCount: {\n $sum: '$items.count',\n },\n docsInvolved: {\n $addToSet: '$_id',\n },\n },\n },\n {\n $project: {\n sumCount: true,\n totalSectionsThatContainThisItem: {\n $size: '$docsInvolved',\n },\n },\n },\n]).pretty();\n[\n { \"_id\" : 2, \"sumCount\" : 220, \"totalSectionsThatContainThisItem\" : 2 },\n { \"_id\" : 1, \"sumCount\" : 120, \"totalSectionsThatContainThisItem\" : 2 }\n]\n",
"text": "Try this:It returns:",
"username": "slava"
}
] | MongoDB group by with over by partition as SQL | 2020-07-12T22:10:46.216Z | MongoDB group by with over by partition as SQL | 5,212 |
null | [
"swift",
"app-services-user-auth"
] | [
{
"code": "",
"text": "Just started experimenting with MongoDB Realm using the iOS Swift SDK. If I want to present a different UI based on the user’s role(s), is there a built in mechanism to introspect the user roles using the SDK, or do I need to define custom user data to achieve that?",
"username": "nimi"
},
{
"code": "",
"text": "Using custom user data that is used for defining roles and the UI is probably the way to go here.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Trying to do that but the custom user data on the client remains nil.This is what I did:What am I missing? Also, is there a tutorial on using custom user data in the iOS SDK?",
"username": "nimi"
},
{
"code": "",
"text": "Hi Nimi,Can you paste the document you added? The object_id will need to be in string format.On top of that, it would also help to logout of your application and log back in to the application for that user to see the changes.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "{\"_id\":{\"$oid\":“5f16ec54edf52a664a47709b”},“userID”:{\"$oid\":“5f144bbdf6355fb1cb482543”},“roles”:{\"$numberInt\":“1”}}The collection for the custom data is called User. Is that a valid name or can it cause some kind of collision with the user object?I did not include a _partitionKey in the custom data. Is that required? I’m assuming not as this object supposedly lives outside of any realm, right?Logging out and back in did not resolve the issue.",
"username": "nimi"
},
{
"code": "{\"_id\":{\"$oid\":“5f16ec54edf52a664a47709b”},“userID”: “5f144bbdf6355fb1cb482543”,“roles”:{\"$numberInt\":“1”}}",
"text": "the userID will have to be a string as I mentioned in the previous comment. Can you try:{\"_id\":{\"$oid\":“5f16ec54edf52a664a47709b”},“userID”: “5f144bbdf6355fb1cb482543”,“roles”:{\"$numberInt\":“1”}}",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "It works! Sorry, I rushed through your suggestion to turn it into a string and just looked at the value itself, not the ObjectID wrapper…Thanks so much for your prompt and useful responses!",
"username": "nimi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Introspecting the user's role on the client | 2020-07-20T12:23:26.283Z | Introspecting the user’s role on the client | 1,854 |
null | [
"flutter"
] | [
{
"code": "",
"text": "Dear Sir,\nHello,Could you please create a MongoDB Driver for Dart and Flutter like node-js driver from MongoDB company itself because Dart is very near to JS and node-js has many tutorials and the Dart driver from dart community is not that good. ( mongo_dart | Dart Package ) It missing many features and the documentation is not good and the creator is busy to fix bugs and add the new features.So, please Dart and Flutter is increasing everyday, please we want our driver from the MongoDB company like node-jsBest Regards,",
"username": "Tom_William"
},
{
"code": "",
"text": "Hi @Tom_William,There’s a dedicated MongoDB Feedback site for product feature and improvement suggestions.Please upvote Official Dart driver and watch the issue for updates.A related request is Dart/Flutter Support for MongoDB Realm.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Driver for Dart and Flutter | 2020-07-21T23:51:17.440Z | MongoDB Driver for Dart and Flutter | 7,335 |
null | [] | [
{
"code": "{\n\"authInfo\" : \n {\"authenticatedUsers\" : [{ \"user\" : \"~~\", \"db\" : \"$external\" }],\n \"authenticatedUserRoles\" : [{\"role\" : \"root\",\"db\" : \"admin\"}]\n },\n \"ok\" : 1\n}\nuncaught exception: Error: shutdownServer failed: {\n \"ok\" : 0,**\n \"errmsg\" : \"shutdown must run from localhost when running db without auth\",**\n \"code\" : 13,\n \"codeName\" : \"Unauthorized\"\n} :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.shutdownServer@src/mongo/shell/db.js:426:19\n@(shell):1:1\n",
"text": "I made user - db.getSiblingDB(\"$external\").runCommand(~~)I authenticated - db.getSiblingDB(\"$external\").auth(~~)I checked - db.runCommand({‘connectionStatus’ : 1})lastly I want - use admin - db.shutdownServer()\nbut, this message appears.how to shutdown when x.509",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Hi @Kim_Hakseon,According to the error it seems you have not enabled auth at all.Without auth we require a shutdown from a local connection only.Perhaps your configuration does not take place. Can you share the guide and configuration you have for your auth mechanism.Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "# mongod.conf# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/# where to write logging data.\nsystemLog:\ndestination: file\nlogAppend: true\npath: /mongodb/log/mongodb.log# Where and how to store data.\nstorage:\ndbPath: /mongodb/data\njournal:\nenabled: true\n# engine:\n# wiredTiger:# how the process runs\nprocessManagement:\nfork: true # fork and run in background\npidFilePath: /mongodb/mongod.pid #location of pidfile# network interfaces\nnet:\nport: 27017\nbindIp: 0.0.0.0 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\ntls:\nmode: requireTLS\ncertificateKeyFile: /mongodb/key/server.pem\nCAFile: /mongodb/key/ca.crt# securityThis is my config file.Do I have to do <authorization: “enabled”> even though I set up TLS?",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Hi @Kim_Hakseon,Ssl configuration does not imply authorization and this is just an authentication method.Autherzation, is who is allowed to do what according to the specified roles. Our best practice is to set at least one Autherzation (users/LDAP etc.).Otherwise we allow shutdown only from local host.Kind regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Is it different from this authorization?db.getSiblingDB(\"$external\").runCommand(\n{\ncreateUser: “CN=myName,OU=myOrgUnit,O=myOrg,L=myLocality,ST=myState,C=myCountry”,\nroles: [\n{ role: “root”, db: “admin” }\n]\n}\n)db.getSiblingDB(\"$external\").auth(\n{\nmechanism: “MONGODB-X509”\n}\n)",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "So I tried, modified the config file like this,security:\nkeyFile: /key/mongodb-keyfile\nauthorization: “enabled”and typed this.>db.createUser({user:“admin”,pwd:“admin”,roles:[{role:“root”,db:“admin”}]})but,uncaught exception: Error: couldn’t add user: command createUser requires authentication :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.createUser@src/mongo/shell/db.js:1343:11\n@(shell):1:1I tried, modified the config file like this,#security:\n#keyFile: /key/mongodb-keyfile\n#authorization: “enabled”and typed this.>use admin\n>db.createUser({user:“admin”,pwd:“admin”,roles:[{role:“root”,db:“admin”}]})\nSuccessfully added user: {\n“user” : “admin”,\n“roles” : [\n{\n“role” : “root”,\n“db” : “admin”\n}\n]\n}\n> db.auth(“admin”,“admin”)\n1but, I tried this command… I’m so sad>db.shutdownServer()\nError: shutdownServer failed: {\n“ok” : 0,\n“errmsg” : “shutdown must run from localhost when running db without auth”,\n“code” : 13,\n“codeName” : “Unauthorized”\n} :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.shutdownServer@src/mongo/shell/db.js:426:19\n@(shell):1:1",
"username": "Kim_Hakseon"
},
{
"code": "kill",
"text": "Hi @Kim_Hakseon,First you can ssh to the box and perform a regular kill or windows service stop which will result in a graceful shutdown.If you need a user/pwd auth you need to create the user before you enable auth. Afterwards you have to authenticate with that user.Please note that x509 associated roles are autherzation as well, perhaps you do the authentication wrong.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
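A minimal sketch of the order of operations described above (added for illustration, not part of the original thread; the user name and password are placeholders):

// 1. While authorization is still disabled (or via the localhost exception),
//    create an administrative user:
use admin
db.createUser({ user: "admin", pwd: "strongPassword", roles: [ { role: "root", db: "admin" } ] })

// 2. Enable auth in mongod.conf (security.authorization: enabled) and restart mongod.

// 3. Reconnect, authenticate, and shut down gracefully:
use admin
db.auth("admin", "strongPassword")
db.shutdownServer()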
{
"code": "",
"text": "Please verify you do everything according",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I was inspired by your answer and tried it.I finally succeeded.\nI knew I had to do it all at once.\nThank you… ",
"username": "Kim_Hakseon"
}
] | How to shutdown when x.509 | 2020-07-21T01:56:49.330Z | How to shutdown when x.509 | 4,656 |
null | [] | [
{
"code": "",
"text": "I have next document\nuser = {\n_id: ObjectID\nname: string // it is unique\nmodified: Time,\nfield1:\nfield2:\n…\nfieldN\n}\nAnd I want to add new_user to collection with next logic.It is possible to do this using findOne and then insertOne or updateOne commands. But can be such logic done with single command?And additional question. I have a bunch of users and I want to do this logic for these users. Can I perform it with single command, like updateMany with some params?",
"username": "Roman_Buzuk"
},
{
"code": "db.collection.updateupdateMany",
"text": "It is possible to do this using findOne and then insertOne or updateOne commands. But can be such logic done with single command?The db.collection.update method has features that address the questions you have about insert when not exists, update and conditional update. Please refer these documentation links (all are on one page and related):And additional question. I have a bunch of users and I want to do this logic for these users. Can I perform it with single command, like updateMany with some params?The updateMany can be used to update multiple documents. If you are able to apply your logic with one user data, I think, it can be applied to multiple documents at a time using this method.",
"username": "Prasad_Saya"
}
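For illustration (an editor-added sketch, not from the original posts; the collection and field names are assumed from the question), a single upsert command that inserts the user if it does not exist and updates it otherwise:

// Insert if no document with this name exists, otherwise update it
db.users.updateOne(
  { name: "new_user" },
  {
    $set: { field1: "value1", field2: "value2" },
    $currentDate: { modified: true }   // stamp the modification time on every write
  },
  { upsert: true }
)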
] | Update with conditions | 2020-07-21T14:17:07.204Z | Update with conditions | 3,996 |
null | [] | [
{
"code": "",
"text": "Realm database size increases when updating more than 100 of data via the socket.how to further proceed please help me",
"username": "Saravanan_M"
},
{
"code": "",
"text": "@Saravanan_M Likely you are doing something on the background thread that is ballooning the file size - see here: Overview (Realm 10.10.1)",
"username": "Ian_Ward"
},
{
"code": "socket.on(Socket.EVENT_CONNECT, Emitter.Listener { args: Array < Any ?> -> }).on(\"server\") {\n args: Array ->\n val obj = args[0] as JSONObject\n val person = Person()\n person.id = obj.getInt(\"id\")\n person.name = obj.getString(\"name\")\n Realm.getDefaultInstance().use {\n realm -> realm.executeTransaction {\n realm1 -> realm1.copyToRealmOrUpdate(person)\n }\n }\n}\n",
"text": "i did for example like that",
"username": "Saravanan_M"
}
] | Realm database size increases when updating more than 100 of data via the socket | 2020-07-16T11:08:32.991Z | Realm database size increases when updating more than 100 of data via the socket | 2,536 |
null | [] | [
{
"code": "",
"text": "What’s the best way to assign a unique timestamp whenever the document is created or updated ?Client(s) can use that timestamp to get the updated / new documents from the last timestamp.",
"username": "Shanth_Kumar_Khandre"
},
{
"code": "$currentDate$$NOW$$CLUSTER_TIME",
"text": "Hi @Shanth_Kumar_Khandre,It depends what version of the server you are using.Prior to 4.2 you should use $currentDate operator , but starting 4.2 you can use $$NOW or $$CLUSTER_TIME:Best,\nPavel",
"username": "Pavel_Duchovny"
}
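For illustration (an editor-added sketch; the collection and field names are assumptions), the two variants look roughly like this:

// Pre-4.2: let the server stamp the time of the write
db.items.updateOne(
  { _id: 1 },
  { $set: { status: "updated" }, $currentDate: { lastModified: true } },
  { upsert: true }
)

// 4.2+: aggregation-pipeline update using $$NOW
db.items.updateOne(
  { _id: 1 },
  [ { $set: { status: "updated", lastModified: "$$NOW" } } ],
  { upsert: true }
)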
] | Assigning a unique timestamp on creation or updation of document | 2020-07-21T18:50:49.313Z | Assigning a unique timestamp on creation or updation of document | 1,601 |
null | [
"data-modeling",
"swift",
"atlas-device-sync"
] | [
{
"code": "\"quoteItems\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"objectId\"\n }\n}\n{\n \"quoteItems\": {\n \"foreign_key\": \"_id\",\n \"ref\": \"#/relationship/mongodb-atlas/Libraries/Items\",\n \"is_list\": true\n }\n}\n",
"text": "Hello everybody !I’m working on an iOS app with Realm and I’m having some difficulties to work with the RealmSwift.List type. I’m trying to implement the “To-Many Relationship” described here: https://realm.io/docs/swift/0.102.0/I have 2 different collections “Items” & “QuoteInformation” in 2 different databases. In “QuoteInformation” I want to have a var (called “quoteItems”) which is a list of “Items”.Here is how I defined the quoteItems var in the QuoteInformation schema:And I added the following dependency between quoteItems and the “_id” property of Items:In my iOS app, I append some Items in my quoteItems list but when I try to upload the quoteItems with realm.add() I got the following error:\n\"Attempting to create an object of type ‘Items’ with an existing primary key value ‘5f0a27e2bf392975530711d3’ \"I don’t understand why Realm is thinking that i’m adding a new Items while i’m just trying to save it in a list.Thanks for you help ! ",
"username": "Julien_Chouvet"
},
{
"code": ".append()",
"text": "@Julien_Chouvet What does your Realm Schema look like? I think you will want to use the .append() method - https://realm.io/docs/swift/latest/api/Classes/List/append(objectsIn:).html",
"username": "Ian_Ward"
},
{
"code": "",
"text": "The question is a bit confusing as the question includes two totally separate definitions for quoteItems and we don’t know what quoteItems are - is that a Realm object?As Ian, mentioned we also don’t know what QuoteInformation looks like.Lastly, the question states “quoteItems is a var which is a list of items”, but within quoteItems, there’s an items (list?) inside that?Can you clarify the question and show your actual objects and explain the relationship?",
"username": "Jay"
},
{
"code": "",
"text": "Hi all!Thanks for your help! I find my mistake and now it’s working.\nHowever, I have another question: I’m trying to get a specific data that is in my Quoteinformation collection. To do so, I want to filter the objects by _id which is of type ObjectId but:When I do: self .realm?.objects(QuoteInformation. self ).filter(\"_id == $0\", self ._id)\nI got the following error: Unable to parse the format string \"_id == $0\"Unable to parse the format string “_id == $0”When I do: let result = self .realm?.objects(QuoteInformation. self ).filter(\"_id == “$0\"”, self ._id)\nI got the following error: “Expected object of type object id for property ‘_id’ on object of type ‘QuoteInformation’, but received: $0”Do you know what is the good syntax?Thanks!",
"username": "Julien_Chouvet"
},
{
"code": "\"_id == %@\", some_var",
"text": "$0 is used in swift filters._id == $0I think you want to use a placeholder\"_id == %@\", some_varHave a look at Realm Filtering for some additional reading",
"username": "Jay"
},
{
"code": "{\n \"title\": \"Items\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"_parentId\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"_parentId\": {\n \"bsonType\": \"string\"\n },\n \"name\": {\n \"bsonType\": \"string\"\n }\n }\n}\n{\n \"title\": \"QuoteInformation\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"_parentId\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"_parentId\": {\n \"bsonType\": \"string\"\n },\n \"title\": {\n \"bsonType\": \"string\"\n },\n \"quoteItems\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"objectId\"\n }\n }\n }\n}\n{\n \"quoteItems\": {\n \"ref\": \"#/relationship/mongodb-atlas/Libraries/Items\",\n \"foreign_key\": \"_id\",\n \"is_list\": true\n }\n}\nlet result = self.realm?.objects(QuoteInformation.self).filter(\"_id == %@\", self._id)\nif let quoteInformation = result?.first{\n try! self.quoteInformationRealm?.write{\n quoteInformation.quoteItems.append(item)\n }\n}\nSync: Connection[2]: Session[2]: Received: ERROR(error_code=212, message_size=22, try_again=0)\n",
"text": "I come back to this topic because I still have a problem.\nTo clarify, here are the realm schemas of my 2 collections “Items” and “QuoteInformation”:***** Items ********** QuoteInformation Schema ********** QuoteInformation Relationship *****In my iOS app, I tried to append an instance of Items in the quoteItems, like this:But when I do that I have the following error:Tell me if you need more information.\nThanks for your help!",
"username": "Julien_Chouvet"
},
{
"code": "try! self.quoteInformationRealm?.writequoteInformation.quoteItems.append(item)item",
"text": "Two things.It’s much easier for us to understand what you’re doing when we can see the actual Realm objects as they are defined in code.Secondly, the included section of code is a bit unclear. We don’t know whattry! self.quoteInformationRealm?.writeis and it looks like you trying to append an itemquoteInformation.quoteItems.append(item)but we don’t know what item is as it’s not shown, and it appears quoteItems is an array, not a realm object (?)Can you update your question with the actual Realm Object models as code as well a providing a bit more info about that section of code?",
"username": "Jay"
},
{
"code": "let result = self.realm?.objects(QuoteInformation.self).filter(\"_id == %@\", self._id)\nif let quoteInformation = result?.first{\n try! self.realm?.write{\n quoteInformation.quoteItems.append(item)\n }\n}\nitemItemsconvenience init()Items(_parentId: \"parent_id\", name: \"item_name\")\nclass Items: Object {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n @objc dynamic var _parentId: String = \"\"\n @objc dynamic var name: String? = nil\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n\n convenience init(_id: ObjectId = ObjectId.generate(), _parentId: String = \"\", name: String? = nil) {\n self.init()\n self._id = _id\n self._parentId = _parentId\n self.name = name\n }\n}\nclass QuoteInformation: Object {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n @objc dynamic var _parentId: String = \"\"\n let quoteItems = RealmSwift.List<Items>()\n @objc dynamic var title: String? = nil\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n}",
"text": "Yes sorry there is a mistake in the section of code, I wanted to simplify it so I replaced self.quoteInformationRealm by self.realm. Anyway, here is the good one:The item is of type Items and created with the convenience init() function (see below) like:Here below are the Realm objects as they are defined in code (coming from the SDKs -> Data Models of the Realm app):***** Items ********** QuoteInformation *****",
"username": "Julien_Chouvet"
},
{
"code": "@objc dynamic var _id: ObjectId = ObjectId.generate()convenience init(_id: ObjectId = ObjectId.generate()convenience init(_parentId: String = \"\", name: String? = nil) {\n self.init()\n self._parentId = _parentId\n self.name = name\n}",
"text": "Not sure why you’re defining a class var that will automativally populate but you’re also populating it within the init. So here@objc dynamic var _id: ObjectId = ObjectId.generate()Will generate an object id when the object is initialized.but then thisconvenience init(_id: ObjectId = ObjectId.generate()will auto generate an object id when initialized? I don’t think you want that. This would be more appropriate when a new object is created as self.init will populate the ObjectId",
"username": "Jay"
},
{
"code": "",
"text": "Yes you’re right! However this does not solve my problem with the quoteItems list",
"username": "Julien_Chouvet"
},
{
"code": "quoteInformation.quoteItems.append(item)class Items: Object {\n @objc dynamic var _id = UUID().uuidString\n @objc dynamic var _parentId: String = \"\"\n @objc dynamic var name: String? = nil\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n\n convenience init(_parentId: String = \"\", name: String? = nil) {\n self.init()\n self._parentId = _parentId\n self.name = name\n }\n}\n\nclass QuoteInformation: Object {\n @objc dynamic var _id = UUID().uuidString\n @objc dynamic var _parentId: String = \"\"\n let quoteItems = RealmSwift.List<Items>()\n @objc dynamic var title: String? = nil\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n}\n let item = Items(_parentId: \"1\", name: \"some name\")\n let qi = QuoteInformation()\n qi.quoteItems.append(item)\nprint(qi)QuoteInformation {\n _id = 6EB9A252-0D8B-4AAA-BE58-66A2531A8962;\n _parentId = ;\n quoteItems = List<Items> <0x60000177d780> (\n [0] Items {\n _id = 1AC4E456-0707-41ED-9EB6-735750D83124;\n _parentId = 1;\n name = some name;\n }\n );\n title = (null);\n}\n",
"text": "Your initial description of the issue was thisI try to upload the quoteItems with realm.add() I got the following error:Your current code however, is showing thisquoteInformation.quoteItems.append(item)so we need some clarification; you are using realm.add or no? Why are you using ObjectId here?Also, your code essentially works for me. I modified the two classes to use UUID instead of ObjectIdthen build two objectsthen added the item to the QuoteInformation()then print(qi) showsSo as you can see, it works correctly.",
"username": "Jay"
},
{
"code": "quoteInformation.quoteItems.append(item)ItemsquoteItemstitlelet result = self.realm?.objects(QuoteInformation.self).filter(\"_id == %@\", self._id)\nif let quoteInformation = result?.first{\n try! self.realm?.write{\n //quoteInformation.quoteItems.append(item)\n quoteInformation.title = \"abcd\"\n }\n}\nquoteItems",
"text": "Hello I’m using ObjectId because this is what is used in the SDKs/Data Models of the Realm app website.Regarding quoteInformation.quoteItems.append(item), if I print my quoteInformation instance I also have the same good result but when I look at my QuoteInformation collection online nothing changed.Instead of trying to append an Items in quoteItems, I tried to modify the title like in the following code and it is working:However, I noticed that for the data stored online in my QuoteInformation collection, the quoteItems attribute is not present, but I don’t know why it is not initialized like the others are.\nCapture d’écran 2020-07-18 à 17.11.591176×398 37.7 KB",
"username": "Julien_Chouvet"
},
{
"code": "// Write to the realm. No special syntax required for synced realms.\ntry! realm.write {\n realm.add(Task(partition: partitionValue, name: \"My task\"))\n}\n",
"text": "I’m using ObjectId because this is what is used in the SDKs/Data Models of the Realm app website.Well, actually no. The current Realm Guide suggests using UUID as shown in the Models section on their website under the Auto-Incrementing section.Also note the documentation linked in your original question is pretty outdated so don’t go by that.That being said, the ObjectId is part of the MongoDB Realm BETA SDK. As with any BETA software, you never know! In this case however, it’s probably not causing any issues as it’s function is similar.Now that we know you’re using MongoDB Ream Sync BETA (per the screenshot) there’s more to this.The issue here is there are three versions of the documentation. The current Realm documentation as I linked above, the BETA MongoDB Realm Documentation which is where you got ObjectId from, and then the third is the BETA MongoDB Realm Sync documentation. All three are slightly different depending on what you’re doing; Local Current Realm, Local Beta MongoDB Realm, or Sync Beta MongoDB Realm.Assuming your schema is created in code with your Realm Objects and Atlas is in Development mode, you’ll need to go through the Sync guide because Sync’d objects need to include a partition property key, which needs to match the partition key you set up when configuring your Realm within Atlas. See Partition Atlas Data.Then when writing to a sync’d realm you can do thisNote this comment No special syntax required for synced realms. in the guide is inaccurate as writing to a sync’d realm DOES require special syntax - it must include the partitionValue which is different than when writing to a local Realm with either the current Realm or Beta MongoDB Realm (not sync)",
"username": "Jay"
},
{
"code": "quoteItemsquoteItems",
"text": "Thanks for clarifying!Assuming your schema is created in code with your Realm Objects and Atlas is in Development mode, you’ll need to go through the Sync guide because Sync’d objects need to include a partition property key, which needs to match the partition key you set up when configuring your Realm within Atlas. See Partition Atlas Data.As I mentionned previously (post #6), my shcema is defined in the Realm web app (and I’m not using the development mode).\nIn my Swift code I use the sync (with _parentId as partition key) in others situations and everyting is working well. I just have a problem here with the quoteItems list. Do you think it can come from the fact that the quoteItems attribute is not initialized like the others attributes are in the QuoteInformation collection (cf post #12)Thanks!",
"username": "Julien_Chouvet"
},
{
"code": "class Items: Object {\n @objc dynamic var _id = UUID().uuidString\n @objc dynamic var _parentId: String = \"\"\n @objc dynamic var name: String? = nil\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n\n convenience init(_parentId: String = \"\", name: String? = nil) {\n self.init()\n self._parentId = _parentId\n self.name = name\n }\n}\n_parentIdclass QuoteInformation: Object {\n @objc dynamic var _id = UUID().uuidString\n @objc dynamic var _parentId: String = \"\" //<-- this needs to be populated\n let quoteItems = RealmSwift.List<Items>()\n @objc dynamic var title: String? = nil\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n}",
"text": "The Items object is being initialized correctly with the partition keybut I don’t see your QuoteInformation class being initialized in the same way _parentId is just an empty string, so it won’t be synced.",
"username": "Jay"
},
{
"code": "_parentIdquoteItemstitlelet result = self.realm?.objects(QuoteInformation.self).filter(\"_id == %@\", self._id)\nif let quoteInformation = result?.first{\n try! self.realm?.write{\n //quoteInformation.quoteItems.append(item)\n quoteInformation.title = \"abcd\"\n }\n}\n",
"text": "My QuoteInformation is init (with a not empty _parentId) and saved in Realm in another part of my app, that’s why I first retrieve it before trying to append in quoteItems. Moreover, I know that it is synced because as I said in the post #12 I’m able to edit others attributes like the title:",
"username": "Julien_Chouvet"
},
{
"code": "quoteInformation.quoteItems.append(item)let result = self.realm?.objects(QuoteInformation.self).filter(\"_id == %@\", self._id)\nif let quoteInformation = result?.first{\n print(quoteInformation) //<---------- Add this\n try! self.realm?.write{\n print(item) // <---------- Add this\n quoteInformation.title = \"Hello, World\" // <- see if this is sync'd\n }\n}",
"text": "As I mentioned in an above comment - the code in that comment works correctly for me. At this point the only thing we don’t know is how/what the ‘item’ you’re attempting to append looks likequoteInformation.quoteItems.append(item)I assume it’s an Items object but if it’s not properly initialized, it will not be sync’d. So in this section of code, print the item and let’s see what it looks like",
"username": "Jay"
},
{
"code": "QuoteInformation {\n _id = 5f170ba9f83817673abb764f;\n _parentId = 5f0c6b003a8a4b66dbacf15e;\n quoteItems = List<Items> <0x6000019603c0> (\n\n );\n title = (null);\n}\nItems {\n _id = 5f170bc1f83817673abb76cf;\n _parentId = 5f0a1b534699cbf17d4536b3;\n name = name of the item;\n}\ntitle",
"text": "Here is the result:And the title of the QuoteInformation has been correctly edited to “Hello, World”.",
"username": "Julien_Chouvet"
},
{
"code": "QuoteInformation {\n _parentId = 5f0c6b003a8a4b66dbacf15e;\nItems {\n _parentId = 5f0a1b534699cbf17d4536b3;\nquoteInformation.quoteItems.append(item) // <--- HERE we need to see how this item is init'ed",
"text": "Well, as I mentioned a few times previously, if the partitionValue of _parentId are different it’s not going to work. These objects are not in the same app partition:andThis all comes back to how your item object is being initializedquoteInformation.quoteItems.append(item) // <--- HERE we need to see how this item is init'edcan you please show is the code of that process?",
"username": "Jay"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Difficulties working with RealmSwift.List to implement "To-Many Relationship" | 2020-07-12T01:28:47.952Z | Difficulties working with RealmSwift.List to implement “To-Many Relationship” | 5,907 |
null | [
"dot-net",
"atlas-device-sync",
"graphql",
"realm-web"
] | [
{
"code": "",
"text": "From noon (4pm UTC ) until 1:30pm U.S. Eastern Time (5:30pm UTC). on Thursday, July 16, we’ll be hosting an AMA (Ask Me Anything) with the MongoDB Realm team. Join us for a 90 minute Twitch session hosted by @JoeKarlsson as we answer your questions live at Twitch.We’d love to have some of your questions in advance so we can come prepared with as many details as possible. Please post them below as a comment on this thread. And, after the Twitch stream is done, we’ll continue to answer your questions in real time here until 5PM ET (9PM UTC) on Thursday.Chat with you soon!",
"username": "Jamie"
},
{
"code": "",
"text": "When is EST? Is that an American time zone? When is it UTC?",
"username": "John_Nicolson"
},
{
"code": "",
"text": "Hello @John_Nicolson welcome to the community and thanks for the question!The EST Zone encompasses all the eastern US states and many more other countries on the same north-south axis. New York, where MongoDB’s headquarter is, is also part of the EST zone. EST is -5 to UTC. BUT only some locations are currently on EST because most places in this time zone are currently on summer time / daylight saving time and are observing EDT. EDT is -4 to UTC. So I would assume that the event ends at 21:00 UTC, hope @Jamie can confirm.Cheers,\nMichael\n(CET with daylight saving, so UTC +2)",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi @John_Nicolson,The date & time reference is for US Eastern Time (currently GMT-4 as pointed out by @michael_hoeller).The Twitch stream will be live from noon ET (4pm UTC ) through to 1:30pm ET (5:30pm UTC).After the Twitch stream wraps up, additional Realm questions will be answered in real time on the forum until 5pm ET (9pm UTC).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @John_Nicolson, my apologies for the confusion in the initial post. @michael_hoeller and @Stennie_X are on point with their responses.Cheers,Jamie",
"username": "Jamie"
},
{
"code": "",
"text": "Will we ever get again an option to host our own realm server for free? For example for hobby projects?",
"username": "Nils_Bergmann"
},
{
"code": "",
"text": "when is realm coming for flutter ?",
"username": "boi_boi"
},
{
"code": "",
"text": "No worries Jamie. I appreciate you used the full month July. Non-Americans are often confused by the middle-endian date format used in the US ",
"username": "John_Nicolson"
},
{
"code": "",
"text": "I’d like to see some light thrown on the roadmap for the DotNET SDK. We’re an ISV headquartered in Europe. We have desktop business products, one of which we ported to Realm Cloud as an early adopter a couple of years ago.The DotNET SDK had some rough edges and was not as feature complete as the Swift SDK. That’s not an issue if the DotNET SDK continues to move forward as we are happy for others to blaze the trail.However the DotNET SDK seems to have stagnated. We are told the MongoDB Realm DotNET SDK is coming, coming , coming …We also note with concern that MongoDB Realm docs, not only omit DotNET but conspicuously ignore desktop development.The app we ported to Realm Cloud is offered to our customers as Preview software. Our customers like it. Realm ticks lots of boxes for them so we would like to push on.Is there a reliable timetable for the MongoDB Realm DotNET SDK?Is the DotNET SDK adequately resourced to meet the timetable?What is the MongoDB vision for Realm on the desktop?",
"username": "Nosl_O_Cinnhoj"
},
{
"code": "",
"text": "A few questions:What is the future of the local Realm database, does the mongodb aquisition mean everything will be focused on sync now?A recent survey sent to me from mongodb had the question “how sad would you be if Realm was discontinued” giving me the impression that mongodb may kill the realm project at some point. Would the database remain open source if this happened?There doesn’t currently seem to be an official place to file bug reports, I asked in the support chat and was told to upgrade to a paid support plan in order to file bugs, can we still report these on the respective realm github repos? - Edit: managed to report on github so that works for me.Would you consider making a Kotlin Native / Multiplatform port of Realm? perhaps making realm-java compatible?",
"username": "Theo_Miles"
},
{
"code": "",
"text": "Here’s my question, which I already got a response from the Realm team on, but bear with me!“When following the tutorial, it says for React Native to use Sync but for Web to use GraphQL. So for each one the way permissions are handled is different, and the way data is sent across the wire is also different. So does this mean that we would need to create 2 Realm Apps to support both mobile and web users? (which seems therefore that users would exist in both apps - but not be able to see data created in one app, since the user_ids will not be the same)?”The answer was:\n“GraphQL and Sync are both services that you can use with one Realm App. The way to define permissions for both are slightly different at the moment, but you are not required to create two applications if you want to share the same users across Web and mobile.”This answer is conflicting - it says you can use one App, but permissions don’t work the same ‘at the moment’. It’s not clear at all.In the Realm settings, if you ENABLE Realm Sync, then the permissions you set in the RULES section do not apply. Surely that means GraphQL queries will fail?So does the ‘at the moment’ mean it’s not possible yet? Or is it actually possible, today, to setup users in one Realm App and have the permissions work from React Native client using Sync and a Web client using GraphQL?",
"username": "Richard_McSharry"
},
{
"code": "",
"text": "I recently went through the Realm tutorial “Create a Task Tracker App” specifically focused on building a Web Application. The web instructions have you use TypeScript, React, GraphQL, and Apollo. As a new developer, this was a lot to process.After reading through the docs I realized you can just use Realm and a provided Web SDK. I also noticed that the docs have a heavy emphasis on using GraphQL, which recommends using Apollo, which is heavily focused on React.Is this “tech stack” the recommended way to go with Realm?\nCan other solutions work well like Realm, the Web SDK, and Angular/Vue?\nIt would be nice if there were more tutorials that built up to such a dense application.I hope the AMA goes well!-Jake",
"username": "Jake_O_Toole"
},
{
"code": "",
"text": "Hi Nils – It’s something that we’re looking into, likely first for the development/testing purposes. It doesn’t looks like there’s a request tracking this on our feedback forum but if you’d like to add something then we’ll track/update on our end.",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Hi – We’re actively looking into how we can support Flutter and work around some limitations within Dart. There is a Github Issue tracking this as well as a post on our feedback forum.",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Hi @Nosl_O_Cinnhoj – We’re hoping to release a new version of the .NET SDK that takes advantage of new Realm Database features and incorporates new Sync in the near future. We’re actually looking to scale up our .NET team to help this happen more quickly – so if you know anyone who might be interested feel free to point them to this job.",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Hi @Theo_Miles – Great questions!",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Hi @Richard_McSharry – Good follow-up question! You should be able to use a single application. If you start using Sync then the permissions you set-up in Sync will also apply to other requests. We can definitely look into clarifying this. Separately, we’re working on merging all the permissions together in the future.",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Hi Jake – This is a a great point. While we started with a Web tutorial that combined these technologies you should be able to use the GraphQL service or the Realm Web SDK with different frameworks. We’re looking to expand the number of tutorials/examples we have in the future!",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Thanks for taking the time to answer @Drew_DiPalma !",
"username": "Theo_Miles"
},
{
"code": "",
"text": "Are there any demo repos anywhere that I can look at that uses Realm Sync, GraphQL and both a React Native and Web App using a single Realm App?",
"username": "Richard_McSharry"
}
] | Post your questions for the Realm AMA on July 16th, 2020 (EST) | 2020-07-09T01:30:22.941Z | Post your questions for the Realm AMA on July 16th, 2020 (EST) | 5,095 |
null | [] | [
{
"code": "",
"text": "I would like to know if we can lock the documents in Mongo. We have a .NET service which has 3 different instances of itself. It looks into a collection and gets the document from Mongo.What we want is, if the first instance of the service gets the Document-1 from the collection, then the second instance should not be able to get the same document (Document-1) as it is being processed by another instance. The second instance should get next available document (e.g Document-2). Same with third instance, it should not be able to find the Document-1 or Document-2. It should pick Document-3. But if the first document is not processed correctly, then the lock from that document should be released and the next available service instance should pick it up.Please help. Thank you.",
"username": "Jason_Widener"
},
{
"code": "findAndModifydb.coll.findAndModify({id : \"doc1\", status : \"pending\"},{$set : { status : \"processing\"}});\n",
"text": "Hi @Jason_Widener,In MongoDB we recommend using the findAndModify command for this scenario.This command is atomic and thus lock the document for a status change.Each service instance should do:This way only one service only will see the pending document.If the processing fails you can change the status back to pending and sort by creation datefor another service to pick it up.If there is a more complex logic consider using transactions.Please let me know if you have any questions.Best regards\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hello Pavel,Thank you so much for your reply. It is really helpful. Just one more question, let say we use findAndModify and update the status to “Processing” when a service picks up the document. And after some time, the service gets stuck for some time and does not respond anything (not error and also not successful). So in case if the document is in “processing” status for more then 15 min, we want to change the status to “Available” so that next time service runs, document can be picked again. Is it possible to do in Mongo ?Thank you,\nJason",
"username": "Jason_Widener"
},
{
"code": "",
"text": "Hi @Jason_Widener,We can only offer a TTL partial index on status : “processing” with 15 min time to live however, it will only remove the document.Then you can consider listen to those deletes via a changestream or an Atlas trigger (if this is running on MongoDB Atlas cluster) and recreate the record.However, this might be challenging and not really error proof.I will suggest 2 approaches:Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
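One possible pattern (an editor-added sketch, not necessarily one of the approaches suggested above; the field names are assumptions) is to record when a worker claims a document, so anything stuck in “processing” for more than 15 minutes can be reclaimed:

// Atomically claim the oldest pending document, or one stuck in "processing" for over 15 minutes
var fifteenMinAgo = new Date(Date.now() - 15 * 60 * 1000);
db.coll.findOneAndUpdate(
  { $or: [
      { status: "pending" },
      { status: "processing", claimedAt: { $lt: fifteenMinAgo } }
  ] },
  { $set: { status: "processing", claimedAt: new Date() } },
  { sort: { createdAt: 1 }, returnNewDocument: true }
)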
{
"code": "",
"text": "Awesome. thank you so much for your help.",
"username": "Jason_Widener"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Locking Documents In Mongo | 2020-07-20T03:32:04.230Z | Locking Documents In Mongo | 28,530 |
null | [
"backup"
] | [
{
"code": "",
"text": "Hi, in MongoDB 3.6 Is there any away (system collection, log file etc…) to retrieve the status of (last) backups?How Mongo DBA are monitoring the status of backups ?Best regards",
"username": "mohamed_bouarroudj"
},
{
"code": "",
"text": "Hi @mohamed_bouarroudj,The MongoDB server doesn’t have any metadata to keep track of backups: these are managed using tooling outside of the core server.However, you should be able to determine the timing of recent backups based on your approach (for example, using the timestamps on the backup file or snapshot). If you want to make this more explicit (and queryable), you could add this metadata to your deployment as part of your backup routine.What backup method are you using and what sort of deployment do you have (standalone, replica set, or sharded cluster)?Regards,\nStennie",
"username": "Stennie_X"
},
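As an illustration of the suggestion above (an editor-added sketch; the database, collection, and field names are assumptions), the backup script could record a small status document after each mongodump run, which makes backup history queryable:

// Run by the backup script after each mongodump job completes
db.getSiblingDB("admin").backup_history.insertOne({
  startedAt: ISODate("2020-07-14T02:00:00Z"),
  finishedAt: new Date(),
  method: "mongodump",
  status: "success",      // or "failed", with an error message
  sizeBytes: 123456789
})

A monitoring tool can then alert when the most recent document is too old or has status "failed".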
{
"code": "",
"text": "Hi and thx for the reply, I have standalone and replica and there is a timestamp for each backups, because there is no metadata for db backups (like Oracle or SQL Server) the plan is to parse the output of mongodump, I am hoping the monitoring tool we are using will be able to parse flat files and raise alarms if there are some errorsbest regards",
"username": "mohamed_bouarroudj"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Retrieve the status of last backups | 2020-07-14T21:44:59.364Z | Retrieve the status of last backups | 2,380 |
null | [
"cxx",
"release-candidate"
] | [
{
"code": "",
"text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.6.0-rc0, the first release candidate in the 3.6.x series of the MongoDB C++11 Driver. This release candidate has been published for testing and is not recommended for production.This release provides support for new features in MongoDB 4.4.Please note that this version of mongocxx requires the MongoDB C driver 1.17.0.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.NOTE: The mongocxx 3.6.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions on the MongoDB Community forum in the Drivers, ODMs, and Connectors category tagged with cxx. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.Sincerely,The C++ Driver Team",
"username": "Clyde_Bazile_III"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB C++11 Driver 3.6.0-rc0 Released | 2020-07-21T15:53:16.774Z | MongoDB C++11 Driver 3.6.0-rc0 Released | 2,015 |
null | [
"aggregation"
] | [
{
"code": "{ id1:101,\n id2:1011, \n id3:1012, \ntotals:[ \n {\n date: ISODate('2020-01-01') ,\n amt: 123, \n counts: 12}, {date: ISODate('2020-01-02'), amt: 111, counts: 13}, ........]}\ndb.delta.aggregate([ {$unset:\"_id\"},\n{$merge:{\n into: 'history',\n on: ['id1','id2','id3'],\n whenMatched: 'merge',\n whenNotMatched: 'insert'\n}}])\n",
"text": "In my use case I’m working with below json format documentObjective is to Update the ‘amt’ and ‘counts’ for which there is a match in ‘date’ in the HISTORY collection. Data from another collection named DELTA, which has exactly same document structure and contains delta data i.e. containing updates for the other fields on the ‘date’ which is present in HISTROY and new dates as well.Expectation is while merging the data from DELTA to HISTORY it will update the other fields in ‘totals’ array where there is a match between the dates and add new element in array for the new ‘date’ while keeping the other historical data as it is.\nBelow is my queryThe “whenMatched: ‘merge’” is not working expected. In the DOCS nothing is mentioned about use of arrays for merge .Do you have any suggestions ??",
"username": "sukanya_ghosh"
},
{
"code": "deltahistorywhenMatched$merge{ _id: 1,\n id1:101,\n id2:1011, \n id3:1012, \n totals: [ \n { date: '2020-01-01' , amt: 4, counts: 2 }, \n { date: '2020-01-02', amt: 3, counts: 5 },\n { date: '2020-01-03', amt: 11, counts: 11 }\n ]\n}\n{ _id: 9,\n id1:101,\n id2:1011, \n id3:1012, \n totals: [ \n { date: '2020-01-01' , amt: 2, counts: 4 }, \n { date: '2020-01-02', amt: 3, counts: 10 },\n { date: '2020-01-04', amt: 9, counts: 9 }\n ]\n} \non{ id1: 1, id2: 1, id3: 1 }db.delta.aggregate([ \n { \n $merge: {\n into: \"hist\",\n on: [ \"id1\", \"id2\", \"id3\" ],\n whenMatched: pipe\n }\n }\n])\nhist{ _id: 9,\n ...,\n totals: [\n { date: '2020-01-01' , amt: 6, counts: 6 },\n { date: '2020-01-02', amt: 6, counts: 15 },\n { date: '2020-01-03', amt: 11, counts: 11 },\n { date: '2020-01-04', amt: 9, counts: 9 },\n ]\n}\npipe = [\n { \n $addFields: { \n newDelta: { \n $filter: { \n input: \"$$new.totals\",\n as: \"dtot\",\n cond: {\n $eq: [ \n { $size: { \n $filter: { input: \"$totals\", \n as: \"htot\", \n cond: { $eq: [ \"$$dtot.date\", \"$$htot.date\" ] } \n } \n } }, \n 0 ]\n }\n }\n }\n }\n },\n { \n $addFields: { \n newDelta: \"$$REMOVE\", \n totals: { \n $reduce: { \n input: \"$totals\", \n initialValue: \"$newDelta\",\n in: {\n $let: {\n vars: { \n match: { \n $arrayElemAt: [ \n { $filter: { \n input: \"$$new.totals\", \n as: \"dtot\", \n cond: { $eq: [ \"$$this.date\", \"$$dtot.date\" ] } \n } },\n 0 ]\n }\n },\n in: { \n $cond: [ \n { $eq: [ { $ifNull: [ \"$$match\", \"\" ] } , \"\" ] },\n { $concatArrays: [ \"$$value\", [ \"$$this\" ] ] },\n { $concatArrays: [ \"$$value\",\n [ { date: \"$$this.date\",\n amt: { $add: [ \"$$this.amt\", \"$$match.amt\" ] },\n counts: { $add: [ \"$$this.counts\", \"$$match.counts\" ] }\n } ]\n ] }\n ]\n }\n }\n }\n }\n }\n }\n }\n]",
"text": "Hello @sukanya_ghosh,The way to match array elements of the delta and the history collections is, I think, by comparing elements in the arrays. This needs usage of an Aggregation Pipeline option for the whenMatched field of the $merge. The pipeline iterates over the two arrays and does the matching, updating and adding new elements.I have some sample data, the aggregation merge operation (and it works), and the updated history collection.The input collections:delta:hist:The aggregation with the merge operation:NOTE: Before running the following aggregation, make sure there must be unique indexes created on the on field properties of both the collections. That is create a compound unique index: { id1: 1, id2: 1, id3: 1 }.The updated hist collection:NOTE: The order of the elements is not the same as the input.\nThe pipeline used with the merge operation:",
"username": "Prasad_Saya"
}
] | Observed issues in $merge unable resolve | 2020-07-21T06:44:48.054Z | Observed issues in $merge unable resolve | 1,459 |
null | [] | [
{
"code": "{recordId:1}, {unique:true}\n",
"text": "Let’s say I have a unique index on a field, or combination of fields:However, I also have a field “isDeleted: boolean”. If isDeleted is true, I don’t want this object to be considered by the uniqueness constraint. In other words, there can only be 1 record for a given recordId that has isDeleted:false.Is there a way to express this?One alternative that was suggested was instead of having isDeleted be a boolean, make it a random string. I’m wondering if there is a more elegant way.",
"username": "Nathan_Hazout"
},
{
"code": "",
"text": "I can see 2 or 3 ways.",
"username": "steevej"
}
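One standard way to express this (an editor-added sketch; it may or may not be among the ways the previous poster had in mind, and the collection name is assumed) is a partial unique index, available since MongoDB 3.2, so the uniqueness constraint only applies to documents where isDeleted is false:

db.records.createIndex(
  { recordId: 1 },
  { unique: true, partialFilterExpression: { isDeleted: false } }
)

Note that documents where isDeleted is missing are not covered by the partial filter, so the field should always be set explicitly.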
] | Conditional uniqueness | 2020-07-21T08:08:15.175Z | Conditional uniqueness | 3,111 |
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "Realm gives a nice way of authenticating and accessing data from a serverless SPA.Now - let’s say I have a custom backend as well. Written in c#.\nI would like the SPA to make requests to this backend, and let the backend verify that it is a authenticated user doing the request. And what user as well.Is this possible?",
"username": "Vegar_Vikan"
},
{
"code": "",
"text": "Yes, it’s possible through the Custom Function provider. You can connect to your own service or collection and retrieve the user id. You just need to return a string in this function and that will map your user to the Realm user.",
"username": "Marlon_Guerios"
},
{
"code": "",
"text": "So what you are saying is that The front end spa does not have a token that the backend can verify and use, but the backend can use that token when calling a function? And if that function doesn’t return what you expect, that will be the hint the user is not authenticated or authorized?Sounds a little unusual…I had a look at Auth0’s way of doing it. They have a concept of api tokens. When setting up the auth configuration I can also give them the url for my backend. The front end can then ask Auth0 for a token to be used for my api. This token is a regular jwt with a rs256 signature that the backend can verify without external calls.Is there anything similar in Realm? Or would a better solution be to setup Realm with an external token provider and just use Auth0?",
"username": "Vegar_Vikan"
},
{
"code": "",
"text": "You could also use custom JWT authentication. The front-end can make a request to your backend and get a token from you which provides the two specifications you needed 1. verify that the user can be authenticated to do requests 2. provide user data.Realm will then provide the session token once it uses your backend that you can use for Realm services, having authenticated with your backend first.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "It’s a pity, though, that this kinda turns everything upside-down: instead of allowing the realm token to be used with the back end, the back end token can be used with realm. Even though I very much like Auth0, it would be nice if I got everything out of realm.So consider this my proposal for a new feature ",
"username": "Vegar_Vikan"
}
] | Verify authentication in custom backend | 2020-07-17T23:52:58.514Z | Verify authentication in custom backend | 2,579 |
null | [] | [
{
"code": "",
"text": "i want to run mongodb with pm2 in window operationg system.i am searching on this topic for very long but still i am unable to run mongodb with pm2.so please help",
"username": "Nabeel_Hassan"
},
{
"code": "pm2",
"text": "Welcome to the community @Nabeel_Hassan!I assume you are referring to PM2, a process manager for Node.js applications. Please clarify if that is not the case.Can you provide more information on what you are trying to achieve with PM2? Are you trying to use this as a wrapper for your MongoDB application or for the MongoDB server? If you are trying to set this up for your application, I’d suggest asking on Stack Overflow with the pm2 tag.If you are trying to setup your MongoDB server to run as a background service, that is one of the options provided during installation: Install MongoD as a Service.Regards,\nStennie",
"username": "Stennie_X"
},
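If the goal really is to have pm2 supervise the mongod binary itself (running it as a Windows service, as linked above, is the more common approach), a rough sketch of a pm2 ecosystem file might look like the following. The paths and the exact pm2 options are assumptions and should be checked against the pm2 documentation:

// ecosystem.config.js (hypothetical example)
module.exports = {
  apps: [
    {
      name: "mongod",
      script: "C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongod.exe",
      args: "--config C:\\mongodb\\mongod.cfg",
      interpreter: "none",   // run as a plain binary rather than through Node.js
      autorestart: true
    }
  ]
};

Then "pm2 start ecosystem.config.js" would launch and supervise the process.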
{
"code": "",
"text": "Yes Stennie i am refferring to Pm2,a process manager for node js application.What i am trying to do is to run mongodb server with pm2 as backgroun procss with pm2",
"username": "Nabeel_Hassan"
}
] | Run mongodb with pm2 | 2020-07-20T14:01:00.728Z | Run mongodb with pm2 | 4,074 |
null | [
"swift"
] | [
{
"code": "RealmSwiftUISwiftUI ListRealmSwiftUI ListListListclass Dog: Object {\n @objc dynamic var name = \"\"\n @objc dynamic var age = 0\n @objc dynamic var createdAt = NSDate()\n \n @objc dynamic var userID = UUID().uuidString\n override static func primaryKey() -> String? {\n return \"userID\"\n }\n}\nclass BindableResults<Element>: ObservableObject where Element: RealmSwift.RealmCollectionValue {\n var results: Results<Element>\n \n private var token: NotificationToken!\n \n init(results: Results<Element>) {\n self.results = results\n lateInit()\n }\n func lateInit() {\n token = results.observe { [weak self] _ in\n self?.objectWillChange.send()\n }\n }\n deinit {\n token.invalidate()\n }\n}\n\nstruct DogRow: View {\n var dog = Dog()\n var body: some View {\n HStack {\n Text(dog.name)\n Text(\"\\(dog.age)\")\n }\n }\n}\n\n\nstruct ContentView : View {\n\n @ObservedObject var dogs = BindableResults(results: try! Realm().objects(Dog.self))\n\n var body: some View {\n VStack{\n \n List{\n ForEach(dogs.results, id: \\.name) { dog in\n DogRow(dog: dog)\n }.onDelete(perform: deleteRow )\n }\n \n Button(action: {\n try! realm.write {\n realm.delete(self.dogs.results[0])\n }\n }){\n Text(\"Delete User\")\n }\n }\n }\n \n private func deleteRow(with indexSet: IndexSet){\n indexSet.forEach ({ index in\n try! realm.write {\n realm.delete(self.dogs.results[index])\n }\n })\n }\n}\n23Realm",
"text": "Hi all,Has anyone been able to successfully integrate Realm with SwiftUI, especially deleting records/rows from a SwiftUI List? I have tried a few different things but no matter what I do I get the same error. After reading some related threads I found out that other people have the same issue.The following code successfully presents all of the items from Realm in a SwiftUI List, I can create new ones and they show up in the List as expected, my issues is when I try to delete records from the List by either manually pressing a button or by left-swiping to delete the selected row, I get an Index is out of bounds error.Here is my code:Terminating app due to uncaught exception ‘RLMException’, reason: ‘Index 23 is out of bounds (must be less than 23).’Of course, the 23 changes depending on how many items are in the Realm database, in this case, I had 24 records when I swiped and tapped the delete button.",
"username": "fs.dolphin"
},
{
"code": "",
"text": "Hey, I’m currently facing the same issue unfortunately.Did you ever discover a solution?If so, could you please share it with me/ this thread?Best Regards,",
"username": "Kamron_Hopkins"
},
{
"code": "",
"text": "Same thread posted on stackoverflow\nPlease check.You may get some clues",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Unfortunately no, I had to stop working on that project and I haven’t dig into it more. Check out the thread from Stackoverflow, it may help you. Thanks.",
"username": "fs.dolphin"
},
{
"code": "",
"text": "Following this tutorial: https://www.mongodb.com/article/realm-cocoa-swiftui-combine\nand this demo: realm-swift/examples/ios/swift/ListSwiftUI at master · realm/realm-swift · GitHubshows how to work around this issue using the new frozen objects, at the cost of introducing a new Realm object just to wrap a list. Still hoping to find a solution that would not require this wrapper when I can find the time for it…",
"username": "nimi"
},
{
"code": "",
"text": "@nimi What wrapper are you referring to? How would you prefer it to look?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "I’m referring to the Recipes object.The ListSwiftUI demo defines both Recipe (the actual model) and Recipes which just wraps a list of Recipe. The reason for this to my understanding is just to be able to use the model with SwiftUI because List is an ObservableObject and Results is not.I’d prefer to be able to show a list of Recipe objects in SwiftUI without needing to define and store another Realm object.",
"username": "nimi"
}
] | Error deleting data from SwiftUI List and Realm | 2020-04-14T14:26:16.798Z | Error deleting data from SwiftUI List and Realm | 5,391 |
[
"security"
] | [
{
"code": "",
"text": "Hello everyone!I have a problem and am not sure if I’m doing something wrong.As I understand it, if I grant the privilege like so:\ngrafik828×439 11.5 KB\nThis should let the user see and modify documents in the testcollection collection in the test database.\nHowever, using Compass, I can see that I’m in the test database but no Collection is shown:\ngrafik1415×420 10.1 KBIf I grant readWrite for every Collection by leaving the Collection field empty, I can see and modify all Collections in the database as expected.\nHow can I grant access via Compass to only a certain Collection of a database?Thanks for any help in advance!",
"username": "Magnus_Lubkowitz"
},
{
"code": "",
"text": "Hi Magnus,Collections in MongoDB are essentially a virtual construct until used for something: In other words, you’ve created a user that can now readWrite to collection testcollection: you will need to write a document to test.testcollection (something you should have permission to do) to actually start seeing it–the Compass UI does show the concept of creating a collection I believe which you should be able to do.Andrew",
"username": "Andrew_Davidson"
},
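To illustrate the point about collections being created lazily (an editor-added sketch using the names from the question; note the thread later turns out to involve a Compass display bug, so this is only background), writing the first document makes the collection appear:

use test
db.testcollection.insertOne({ hello: "world" })
show collections   // testcollection is now listed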
{
"code": "",
"text": "Sorry, I should’ve mentioned that in advance:\nThere are several documents in testcollection, added at first through the atlas webinterface but later on also successfully with the user in question through the api. Accordingly the user does have write access to the collection and the privileges work correctly. (Also if I change the privilege to a different Collection for the user and try again through the api, I get an appropriate error from Atlas, so I’m definetely accessing via said user)\nStill, I can’t see any Collection with the user in Compass, also when setting the privilege to a different non-empty Collection. Only when I grant readWrite for the whole database I can see all Collections, even empty ones, in Compass.If it’s important: I simply copied the connection string from Cluster -> Connect -> Connect using MongoDB Compass and filled in the credentials. The end of the connection string also shows /test, so the correct database is addressed. The version of Compass is 1.21.2 and thereby above 1.12.\nAfter adding the string to Compass, it added some things like “…Compass&retryWrites=true&ssl=true” which look to me like simply the default parameters.I also verified multiple times that I don’t have a typo somewhere: The collection I see at Cluster -> Collections in the webinterface is definitely called test.testcollection and the MongoDB role for the user is stated to be “[email protected]”.",
"username": "Magnus_Lubkowitz"
},
{
"code": "",
"text": "I agree with your observationsTried creating a user from Atlas with access to a specific collection\nIt is not working.Cannot see any collections.Just displays the DBname\nWhen you leave collection name field empty it worksI tried even custom role but even that does not workMay be Mongodb staff can help on how to give collection level access to a user from Atlas",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thank you for verifying that it doesn’t work for you too, now it’s less probably that it has something to do with my/our setup in general!\nI’ll contact the support and see if they can help to solve my issue and report back when I found out something new.",
"username": "Magnus_Lubkowitz"
},
{
"code": "",
"text": "I had a chat with a very accomodating support staff member, but sadly he had to inform me that the issue is a bug in the current version of Compass.\nThe 1.22beta2, which I had tried beforehand too, still has the bug as well.\nThe issue is tracked internally though and a fix should be released in the next versions!",
"username": "Magnus_Lubkowitz"
},
{
"code": "",
"text": "Thanks for the update",
"username": "Ramachandra_Tummala"
}
] | MongoDB Compass & Atlas: Restrict user access to only seeing a Collection? | 2020-07-16T20:58:25.863Z | MongoDB Compass & Atlas: Restrict user access to only seeing a Collection? | 5,660 |
|
null | [
"python"
] | [
{
"code": "from pymongo import MongoClient\nclient = MongoClient()\ndb=client.market_data #market_data is my db name\nkws = KiteTicker(\"4515kn****\", \"pOYgD0hvSfofC********\") #stock market data client connection\n\ndb.tick.insertMany(kws.connect()); \ndb.tick.insertMany()",
"text": "I am trying to save time series data (generated using web sockets) in MongoDB using Python.#tick is my mongo collection name kws.connect() generates timeseries data in below format.DEBUG:root:Ticks: [{‘timestamp’: datetime.datetime(2020, 7, 20, 13, 25, 5), ‘last_price’: 1912.5, ‘oi_day_low’: 0, ‘volume’: 9522995, ‘sell_quantity’: 1213264, ‘last_quantity’: 31, ‘change’: 0.04184757022545141, ‘oi’: 0, ‘average_price’: 1913.44, ‘ohlc’: {‘high’: 1929.9, ‘close’: 1911.7, ‘open’: 1917.8, ‘low’: 1899.65}, ‘tradable’: True, ‘depth’: {‘sell’: [{‘price’: 1912.8, ‘orders’: 3, ‘quantity’: 62}, {‘price’: 1912.95, ‘orders’: 1, ‘quantity’: 61}, {‘price’: 1913.15, ‘orders’: 1, ‘quantity’: 6}, {‘price’: 1913.2, ‘orders’: 2, ‘quantity’: 194}, {‘price’: 1913.25, ‘orders’: 1, ‘quantity’: 50}], ‘buy’: [{‘price’: 1912.5, ‘orders’: 16, ‘quantity’: 1409}, {‘price’: 1912.45, ‘orders’: 1, ‘quantity’: 50}, {‘price’: 1912.25, ‘orders’: 1, ‘quantity’: 1}, {‘price’: 1912.2, ‘orders’: 1, ‘quantity’: 1}, {‘price’: 1912.15, ‘orders’: 1, ‘quantity’: 104}]}, ‘mode’: ‘full’, ‘last_trade_time’: datetime.datetime(2020, 7, 20, 13, 25, 5), ‘buy_quantity’: 806075, ‘oi_day_high’: 0, ‘instrument_token’: 738561}]DEBUG:root:Ticks: [{‘timestamp’: datetime.datetime(2020, 7, 20, 13, 25, 6), ‘last_price’: 1912.5, ‘oi_day_low’: 0, ‘volume’: 9523089, ‘sell_quantity’: 1214242, ‘last_quantity’: 1, ‘change’: 0.04184757022545141, ‘oi’: 0, ‘average_price’: 1913.44, ‘ohlc’: {‘high’: 1929.9, ‘close’: 1911.7, ‘open’: 1917.8, ‘low’: 1899.65}, ‘tradable’: True, ‘depth’: {‘sell’: [{‘price’: 1912.8, ‘orders’: 3, ‘quantity’: 62}, {‘price’: 1912.95, ‘orders’: 1, ‘quantity’: 61}, {‘price’: 1913.15, ‘orders’: 1, ‘quantity’: 6}, {‘price’: 1913.2, ‘orders’: 2, ‘quantity’: 194}, {‘price’: 1913.25, ‘orders’: 1, ‘quantity’: 50}], ‘buy’: [{‘price’: 1912.5, ‘orders’: 17, ‘quantity’: 1354}, {‘price’: 1912.45, ‘orders’: 1, ‘quantity’: 50}, {‘price’: 1912.25, ‘orders’: 1, ‘quantity’: 1}, {‘price’: 1912.2, ‘orders’: 1, ‘quantity’: 1}, {‘price’: 1912.15, ‘orders’: 1, ‘quantity’: 104}]}, ‘mode’: ‘full’, ‘last_trade_time’: datetime.datetime(2020, 7, 20, 13, 25, 6), ‘buy_quantity’: 805788, ‘oi_day_high’: 0, ‘instrument_token’: 738561}]db.tick.insertMany() method is not saving this data in my mongo collection.Any help?Thanks.",
"username": "Market_Learner"
},
{
"code": "kws.connect()[ { id: 1, timestamp: 1234, price: 12.34 }, { id: 2, timestamp: 7890, price: 34.90 }, ... ]kws.connect()",
"text": "Hello and welcome to the forum!The insertMany method takes an array of documents as parameter to insert into a collection.So, the kws.connect() returns what kind of data? It needs to be a JSON array like:\n[ { id: 1, timestamp: 1234, price: 12.34 }, { id: 2, timestamp: 7890, price: 34.90 }, ... ] .You can assign the kws.connect() returned value to a variable and use it with the insert method. In case the data is not in the required array field type, it needs some kind of transformation to an array of documents before using it with the insert.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks a lot @Prasad_Saya for your quick and timely input.I figured out that my websocket data is not a strict JSON array [{},{},{}…] as you mentioned that it is needed for the insertMany method to work. I am checking with the publisher of this data to see if JSON array can be supported.I will get back on this thread with updates.Thanks again for your input.",
"username": "Market_Learner"
},
{
"code": "",
"text": "@Prasad_Saya - I could make the db save work. I was not using the right tick data variable which had proper JSON array format.Thank you so much for highlighting the error.",
"username": "Market_Learner"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Saving time series data in MongoDB collection | 2020-07-20T09:19:09.191Z | Saving time series data in MongoDB collection | 3,101 |
[] | [
{
"code": "",
"text": "Hi. I am trying to set up charts using an aggregation. My question is how do i apply a filter on a document if the project stage only outputs the new fields I created. I use certain fields in the pipeline and even if I carry them through to the end it doesn’t seem that the filter applies to within the pipeline. I want to apply the filter to a field that is present in the original document and then want it to change the result of the aggregation. Below i will attach a pic of the aggregation as well as my data setup. So i am trying to see all the stats that each individual player performed which is listed under the value key. Later i then want to be able to filter on the value and that should then exclude certain player numbers. mongo2876×601 32.1 KB",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Hello, @johan_potgieter!Provide a readable javascript object (or array of objects), that you want to have as a result of your aggregation.",
"username": "slava"
},
{
"code": "",
"text": "I want to be able to filter all the documents on a specific field that occurs within the documents but want to apply the filter with the javascript sdk. So at the moment i can filter the documents to a specif team name because only some of the documents contain that name. But I want to be able to create a filter that can be applied to values that occur all the documents after i run an aggregation. It is difficult to explain. So basically i want to output something like area with A,B,C,D and then be able to filter on that area.",
"username": "johan_potgieter"
},
{
"code": "",
"text": "It is difficult to explainSometimes few small visual examples better than 1000 words Better provide:",
"username": "slava"
},
{
"code": "",
"text": "Hi @johan_potgieter -I think you’ve already figured out a lot of this, but you may want to look at this article which explains how Charts builds its aggregation pipelines: https://docs.mongodb.com/charts/saas/aggregation-pipeline-generation/Your challenge is that the your embedded charts can only influence the pipeline at a specific point (#4 in the list). So anything that you want to filter on must be set up in one of the preceding stages, e.g. a chart query or data source pipeline. Since those cannot be parameterised, you may not be able to solve your problem but I’m not sure I fully understand the scenario.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Filter on MongoDB Charts | 2020-06-25T13:44:42.018Z | Filter on MongoDB Charts | 2,168 |
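A hedged sketch of the client-side piece discussed above, using the MongoDB Charts Embedding SDK to filter an embedded chart from JavaScript. The baseUrl, chartId, and the teamStats.data.area field are hypothetical placeholders, and the filtered field must also be allowed in the chart's embedding settings.

import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";

const sdk = new ChartsEmbedSDK({
  baseUrl: "https://charts.mongodb.com/charts-example-xxxxx" // hypothetical project base URL
});

const chart = sdk.createChart({
  chartId: "00000000-0000-0000-0000-000000000000",               // hypothetical chart id
  filter: { "teamStats.data.area": { $in: ["A", "B", "C", "D"] } } // initial filter on a source field
});

await chart.render(document.getElementById("chart"));
// Narrow the embedded chart to a single area later on:
await chart.setFilter({ "teamStats.data.area": "A" });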
|
null | [] | [
{
"code": "",
"text": "Hi,I’m pretty new at MongoDB Atlas, and have been testing various features in past weeks.\nI’m planning to migrate my mongo database from AWS to Atlas. In AWS, I usually manage mongodb behind a private subnet, and applications connect to the db using private ip of the db server. This way, I can make sure that the db is completely isolated from the internet and doesn’t have a publicly reachable IP or DNS.\nI couldn’t find similar option in Atlas. Replicasets and clusters launched in Atlast have publicly reachable DNS. There’s an option to connect to the AWS VPC using private link, but the atlas replica set will still have a publicly reachable DNS.Is there any way to start an Atlas cluster (in AWS) inside a private subnet?\nAny insight on this would be much appreciated.",
"username": "Fastest_Turtle"
},
{
"code": "",
"text": "There is a concept of IP white-list. I am not sure it will fit your requirements but I think it is a good start.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks Steeve. I’m familiar with IP Whitelist. The cluster/replica would be still reachable publicly (only from allowed IP addresses though).\nWhat I’m specifically looking for is a way to deploy the cluster inside a private subnet in Atlas.",
"username": "Fastest_Turtle"
}
] | Atlas in private subnet | 2020-07-20T06:18:43.458Z | Atlas in private subnet | 2,240 |
[
"aggregation"
] | [
{
"code": "{\n id: 2,\n name: 'Factory'\n}\n{\n id:19,\n name:'Account'\n}\nlookup:\n{\n from: 'tags',\n localField: 'tag_ids',\n foreignField: 'id',\n as: 'str_tags_id'\n}\n",
"text": "HiI have a collection with an attribute called tag_ids. It is a array and the data is like:0:2\n2:19And I have another collection called tags with a structure like:After joining this two tables with my aggregation I got another field but instead of two elements in my new array I got 7The stranger is that when I try to filter the array the filter is not applied.“str_tags_id.project_id”:393382image806×235 14.1 KBCould someone tell me why this occurs?",
"username": "Ezequias_Rocha"
},
{
"code": "",
"text": "Hello, @Ezequias_Rocha!Probably, you have documents with duplicate ‘id’ field.\n‘id’ field, unlike ‘_id’ does not have to be necessarily unique.",
"username": "slava"
},
{
"code": "",
"text": "Yes @slava I have duplicate Ids but I have another field I would like to use in the lookup. Is that possible? Is there any alternative if my lookup collection has duplicate Ids?If I could use the project_id field in the same step or in another I would like to use.Regards\nEzequias",
"username": "Ezequias_Rocha"
},
{
"code": "",
"text": "Can someone tell me why this kind of thing occurs?image799×249 15.8 KB",
"username": "Ezequias_Rocha"
},
{
"code": "db.players.aggregate([\n {\n $match: {},\n },\n {\n $lookup: {\n from: 'teams',\n let: {\n team: '$fromTeam'\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq: ['$teamName', '$$team']\n }\n },\n },\n {\n $group: {\n _id: '$teamName',\n country: {\n $first: '$country',\n },\n trainer: {\n $first: '$trainer',\n }\n }\n }\n ],\n as: 'joinedTeam',\n }\n },\n]).pretty();\ndb.players.insertMany([\n { player: 'Oleg', fromTeam: 'A' },\n { player: 'Piotr', fromTeam: 'B' }\n]);\n\ndb.teams.insertMany([\n { teamName: 'A', country: 'Ukraine', trainer: 'Stas' },\n { teamName: 'B', country: 'Poland', trainer: 'Pawel' },\n { teamName: 'B', country: 'Poland', trainer: 'Pawel' }\n]);\n[\n {\n \"player\" : \"Oleg\",\n \"fromTeam\" : \"A\",\n \"joinedTeam\" : [\n {\n \"_id\" : \"A\",\n \"country\" : \"Ukraine\",\n \"trainer\" : \"Stas\"\n }\n ]\n },\n {\n \"player\" : \"Piotr\",\n \"fromTeam\" : \"B\",\n \"joinedTeam\" : [\n {\n \"_id\" : \"B\",\n \"country\" : \"Poland\",\n \"trainer\" : \"Pawel\"\n }\n ]\n }\n]\n",
"text": "I have another field I would like to use in the lookup. Is that possible?Yes. You can use any fields to join collections. See examples.It is possible to filter out the duplicates in the nested pipeline, like this:For this dataset:You will get this result:But, that will probably be not as performant, as you may want \nBetter use a prop, that contain a unique value for document identification, like _id.",
"username": "slava"
},
{
"code": "",
"text": "Thank you @slava I peform the filter and it works perfectly.",
"username": "Ezequias_Rocha"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Lookup returning more than the number of elements in the default array | 2020-07-20T14:18:43.688Z | Lookup returning more than the number of elements in the default array | 5,364 |
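A hedged sketch of the follow-up in this thread: restricting the joined tags by project_id inside a $lookup sub-pipeline so duplicate ids from other projects are excluded. The tags collection and the tag_ids, id, and project_id fields come from the discussion; the outer collection name and the assumption that it also carries a project_id (otherwise substitute a literal such as 393382) are illustrative.

db.items.aggregate([
  {
    $lookup: {
      from: "tags",
      let: { tagIds: "$tag_ids", projectId: "$project_id" },
      pipeline: [
        {
          $match: {
            $expr: {
              $and: [
                { $in: ["$id", "$$tagIds"] },           // tag id referenced by the outer document
                { $eq: ["$project_id", "$$projectId"] } // and belonging to the same project
              ]
            }
          }
        }
      ],
      as: "str_tags_id"
    }
  }
]);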
|
null | [] | [
{
"code": "",
"text": "Hi I have a problem with the way mongo charts calculates the average of documents. After an aggregation I calculate the percentage of a specific field. Which gives the correct value for each individual document. But when it is averaged over all the documents is uses the average of each document to calculate an average. This yields the wrong result as it takes the average of averages. I want the average to be calculated by still using the average of the count of the 2 fields to give an overall average.",
"username": "johan_potgieter"
},
{
"code": "db.players.insertMany([\n { playerName: 'Bob', playerAge: 27, belongsToTeam: 'A' },\n { playerName: 'Jack', playerAge: 33, belongsToTeam: 'A' },\n { playerName: 'Bill', playerAge: 21, belongsToTeam: 'B' },\n { playerName: 'Sam', playerAge: 22, belongsToTeam: 'B' },\n { playerName: 'Sam', playerAge: 23, belongsToTeam: 'B' },\n]);\n{\n \"teams\": [\n {\n \"team\": \"B\",\n // (21+22+23)/3 = 22\n \"averageAgeInTeam\": 22\n },\n {\n \"team\": \"A\",\n // (27+33)/2 = 30\n \"averageAgeInTeam\": 30\n }\n ],\n // (21+22+23+27+33)/5 = 25.2\n \"averageAgeInAllTeams\": 25.2\n}\ndb.players.aggregate([\n {\n $group: {\n _id: '$belongsToTeam',\n averageAgeInTeam: {\n $avg: '$playerAge',\n }\n }\n },\n {\n $group: {\n _id: null,\n teams: {\n $push: {\n team: '$_id',\n averageAgeInTeam: '$averageAgeInTeam',\n }\n },\n averageAgeInAllTeams: {\n $avg: '$averageAgeInTeam',\n }\n }\n }\n]).pretty();\n\ndb.players.aggregate([\n {\n $group: {\n _id: '$belongsToTeam',\n agesOfPlayers: {\n $push: '$playerAge',\n },\n averageAgeInTeam: {\n $avg: '$playerAge',\n }\n }\n },\n {\n $group: {\n _id: null,\n teams: {\n $push: {\n team: '$_id',\n averageAgeInTeam: '$averageAgeInTeam',\n }\n },\n arraysOfAgesOfPlayers: {\n $push: '$agesOfPlayers',\n }\n }\n },\n {\n $addFields: {\n singleArrayOfAgesOfPlayers: {\n $reduce: {\n input: '$arraysOfAgesOfPlayers',\n initialValue: [],\n in: {\n $concatArrays: ['$$value', '$$this'],\n }\n }\n }\n }\n },\n {\n $project: {\n teams: true,\n averageAgeInAllTeams: {\n $avg: '$singleArrayOfAgesOfPlayers',\n }\n }\n }\n]).pretty();\n\n",
"text": "Hello, @johan_potgieter!Let’s create an example dataset to work with:Let’s assume, we want to have the following averages in the output:This can be achieved with this aggregation:",
"username": "slava"
}
] | Average of average MongoDB Charts | 2020-07-13T06:46:01.259Z | Average of average MongoDB Charts | 2,976 |
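A hedged sketch of the "average of averages" pitfall from this thread: the correct overall figure comes from summing the underlying counts first and dividing once, rather than averaging the per-document percentages. The successCount and totalCount field names are hypothetical stand-ins for the two count fields mentioned in the question.

db.stats.aggregate([
  {
    $group: {
      _id: null,
      // add up the raw counts across all documents first...
      totalSuccess: { $sum: "$successCount" },
      totalAttempts: { $sum: "$totalCount" }
    }
  },
  {
    $project: {
      _id: 0,
      // ...then divide once, so large documents weigh more than small ones
      overallPercentage: {
        $multiply: [{ $divide: ["$totalSuccess", "$totalAttempts"] }, 100]
      }
    }
  }
]);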
[
"aggregation"
] | [
{
"code": "db.Slava1.aggregate([\n {\n$project: {\n 'teamStats.data': {\n $map: {\n input: '$teamStats.data',\n in: {\n $let: {\n vars: {\n newArea: {\n $substr: ['$$this.area', 0, 1],\n },\n },\n in: {\n $mergeObjects: [\n '$$this',\n {\n area: '$$newArea',\n },\n ],\n },\n },\n },\n },\n },\n},\n },\n{\n $unwind: '$teamStats.data',\n },\n {\n $group: {\n _id: {\n Game: '$_id',\n Area: '$teamStats.data.area',\n },\n},\n},\n\n{\n $project: {\n _id: 0,\n Game: '$_id.Game',\n Area: '$_id.Area',\n},\n},\n]);\n",
"text": "Hi @slava and the rest. I got the previous problem to work. Now the new struggle I have is i want to group my data on area but still retain all the other values that are contained in each array element. I tried this.and then i get this result.\nSo i want it to be grouped like that with the game name but i still want all the other data also retained in each array object.",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Hello, @johan_potgieter! Sorry of a big delay - work does not wait In order to help you, provide:",
"username": "slava"
}
] | Unwind , Group and still keep original data | 2020-07-09T11:46:28.906Z | Unwind , Group and still keep original data | 2,374 |
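A hedged sketch of the usual answer to this thread's question: group on the area key and push the complete original elements into the group, so nothing is lost. The teamStats.data and area names come from the question; everything else is illustrative.

db.Slava1.aggregate([
  { $unwind: "$teamStats.data" },
  {
    $group: {
      _id: {
        Game: "$_id",
        Area: { $substr: ["$teamStats.data.area", 0, 1] }
      },
      items: { $push: "$teamStats.data" } // keep every original element in the group
    }
  },
  {
    $project: { _id: 0, Game: "$_id.Game", Area: "$_id.Area", items: 1 }
  }
]);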
|
[
"indexes"
] | [
{
"code": "",
"text": "I am trying to create a unique constraint with partialfilterexpression. The partial filter expression only applies the last condition. Is there a better way to handle multiple conditions with creating indexes with partial filter expression. It should creating indexes where type equals ‘ROLE’ or ‘RANK’. But it is only setting to last condition statement.Screen Shot 2020-07-16 at 1.03.55 PM874×653 46 KB",
"username": "Michael_Sor"
},
{
"code": "{ $eq: 'RANK', $eq: 'ROLE' }\n",
"text": "Welcome to the community, @Michael_Sor!Notice, that you use same key for two different fields in the object:A MongoDB query object can not have two keys with the same name (in your case it is ‘$eq’). If you try to declare two keys with same name in an object, the last one (in your case - '$eq: ‘ROLE’) would overwrite all the previous ones.You can not use $or operator as it is not supported by partialFilterExpression.Alternatively, on application-level, you can add field ‘useByPartialIndex’ and set it to ‘true’, if the ‘type’ is either ‘RANK’ or ‘ROLE’. And then include that prop in your partialFIlterExpression.",
"username": "slava"
},
{
"code": "{ \"type\" : [ \"ROLE\" , \"RANK\"] }",
"text": "Hi @Michael_Sor,I would suggest testing the following partial expression:{ \"type\" : [ \"ROLE\" , \"RANK\"] }MongoDB knows how to equal scalar and arrays as a “in” expression.Let us know if that works.Best regards\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "MongoDB knows how to equal scalar and arrays as a “in” expression.Let us know if that works.Thanks for the prompt response. When I tried the array method, it still allowed duplicate entriesScreen Shot 2020-07-17 at 1.34.07 PM942×740 57.1 KB",
"username": "Michael_Sor"
},
{
"code": "",
"text": "Hi @Michael_Sor,Not sure what you mean by duplicate entries. Can you show the documents that have name,type duplicate with one of the types specified?",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hey @Pavel_Duchovny,The name and type should be unique. But it allows me to add duplicate entry of name:‘E-2’ and type: ‘RANK’. I am expected MongoDB to complain that ‘E-2’ of type Rank already exists.MikeScreen Shot 2020-07-17 at 1.36.54 PM486×542 40.3 KB",
"username": "Michael_Sor"
},
{
"code": "",
"text": "Oh I see…The unique constraint is looking on the query shape and not attempting to use the index therefore will enforce only a value of the exact array.Yea, in this case you need a compute field or separate the unique into a different collection",
"username": "Pavel_Duchovny"
},
{
"code": "db.unqTest.createIndex( { \"name\" : 1, \"type\" : 1, \"role_unq\" : 1}, {\"unique\" : true, \"partialFilterExpression\" : { \"type\" : \"ROLE\" } });\ndb.unqTest.createIndex( { \"name\" : 1, \"type\" : 1, \"rank_unq\" : 1}, {\"unique\" : true, \"partialFilterExpression\" : { \"type\" : \"RANK\" } });\ndb.unqTest.createIndex( { \"name\" : 1, \"type\" : 1, \"class_unq\" : 1}, {\"unique\" : true, \"partialFilterExpression\" : { \"type\" : \"CLASSIFICATION\" } });\n> db.unqTest.insert({name : \"E-2\" , \"type\" : \"ROLE\"});\nWriteResult({ \"nInserted\" : 1 })\n> db.unqTest.insert({name : \"E-2\" , \"type\" : \"ROLE\"});\nWriteResult({\n\t\"nInserted\" : 0,\n\t\"writeError\" : {\n\t\t\"code\" : 11000,\n\t\t\"errmsg\" : \"E11000 duplicate key error collection: test.unqTest index: name_1_type_1_role_unq_1 dup key: { name: \\\"E-2\\\", type: \\\"ROLE\\\", role_unq: null }\"\n\t}\n})\n> db.unqTest.insert({name : \"E-2\" , \"type\" : \"CLASSIFICATION\"});\nWriteResult({ \"nInserted\" : 1 })\n> db.unqTest.insert({name : \"E-2\" , \"type\" : \"CLASSIFICATION\"});\nWriteResult({\n\t\"nInserted\" : 0,\n\t\"writeError\" : {\n\t\t\"code\" : 11000,\n\t\t\"errmsg\" : \"E11000 duplicate key error collection: test.unqTest index: name_1_type_1_class_unq_1 dup key: { name: \\\"E-2\\\", type: \\\"CLASSIFICATION\\\", class_unq: null }\"\n\t}\n})\n> db.unqTest.insert({name : \"E-2\" , \"type\" : \"DUMMY\"});\nWriteResult({ \"nInserted\" : 1 })\n> db.unqTest.insert({name : \"E-2\" , \"type\" : \"DUMMY\"});\nWriteResult({ \"nInserted\" : 1 })\n>\n",
"text": "Hi @Michael_Sor,Actually I have a nice workaround for you suggested by a colleague of mine .You can workaround the several type uniqueness by creating a partial index for each type. The trick is to add a dummy field, which will never be written in your documents, to the keys object to allow us creating mulitple indexes on the “name” and “type”:Now we can only insert one name&type document with one of those types:While other types allow that:Please let me know if you have any additional questions.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "The trick is to add a dummy field, which will never be written in your documents, to the keys object to allow us creating mulitple indexes on the “name” and “type”:Thanks Paul. I will look into this, to see if this how we want to proceed.",
"username": "Michael_Sor"
}
] | Creating a unique constraint with partial filter expressions | 2020-07-16T17:17:08.270Z | Creating a unique constraint with partial filter expressions | 6,989 |
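A hedged sketch of the application-level alternative mentioned earlier in this thread: maintaining a computed flag so a single partial unique index covers both types. The useByPartialIndex field name comes from the earlier suggestion; the collection name and helper function are hypothetical.

// One unique partial index instead of one index per type.
db.unqTest.createIndex(
  { name: 1, type: 1 },
  { unique: true, partialFilterExpression: { useByPartialIndex: true } }
);

// The application sets the flag whenever the type is one of the constrained values.
function insertEntry(doc) {
  doc.useByPartialIndex = ["ROLE", "RANK"].includes(doc.type);
  return db.unqTest.insertOne(doc);
}

insertEntry({ name: "E-2", type: "RANK" });  // ok
insertEntry({ name: "E-2", type: "RANK" });  // duplicate key error
insertEntry({ name: "E-2", type: "DUMMY" }); // allowed, the flag stays false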
|
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "I’ve created a function with an aggregate on a collection. The last stage is the $out.When I run the function in the function editor, the function create the collection in the $out stage.When I create a trigger that schedules the function. The function doesn’t seems to work, there is no collection created.All seems configured correct, but off course I’m missing something. Anyone some pointerd for me to look at?thnxNanno",
"username": "Nanno_Scheringa"
},
{
"code": "",
"text": "Hi Nanno,What is the authentication . method defined on the function?Make sure it is set to SYSTEM, $otherwise not sure the $out can work otherwise.Also can you send us the link to the relevant trigger?Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi Pavel,in the settings of the function, the authentication is set to System.@Pavel_Duchovny Can I send you the link to the function in a DM? Or do have to post it here?thnanks,\nNanno",
"username": "Nanno_Scheringa"
},
{
"code": "",
"text": "Hi @Nanno_Scheringa,Sure you can send me a DM…Regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny,don’t see how I can send you a DM in the community.So this is the link to the trigger: Cloud: MongoDB Cloudregards,\nNanno",
"username": "Nanno_Scheringa"
},
{
"code": "exports = async function() { \n...\n...\nvar aggResult = await shipmentsCollection.aggregate(pipeline).toArray();\nreturn aggResult;\n",
"text": "Hi @Nanno_Scheringa,I see that the trigger works fast and not producing any errors. I am afraid its due to the asynchronous logic of a return command used in the function not allowing the $out to complete.Can you edit the function as following:Let me know how that works.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_DuchovnyI changed the function as mentioned. And now I’m getting the results I’m expecting. Great!Thanx for the support and your help.Regards\nNanno",
"username": "Nanno_Scheringa"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Function aggregate with out stage | 2020-07-15T20:30:33.219Z | Function aggregate with out stage | 1,557 |
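A hedged sketch of the pattern that resolved this thread: a scheduled-trigger function that awaits an aggregation whose last stage is $out, so the output collection is written before the function returns. Database, collection, and pipeline contents are placeholders, not the original application's values.

exports = async function () {
  // Function authentication should be set to SYSTEM for $out to be allowed.
  const shipments = context.services
    .get("mongodb-atlas")
    .db("reporting")            // hypothetical database name
    .collection("shipments");   // hypothetical source collection

  const pipeline = [
    { $group: { _id: "$status", total: { $sum: 1 } } },
    { $out: "shipmentSummary" } // hypothetical output collection
  ];

  // Awaiting toArray() drains the cursor, which forces the $out stage to run
  // before the scheduled trigger's function returns.
  const result = await shipments.aggregate(pipeline).toArray();
  return result;
};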