image_url | tags | discussion | title | created_at | fancy_title | views
---|---|---|---|---|---|---
null | [
"atlas-device-sync"
] | [
{
"code": "Error:\n\nintegrating changesets failed: error finding the previous history size for version 80: client is disconnected (ProtocolErrorCode=101)\nSource:\n\nWrite originated from MongoDB\nPartition:\n\n/6008bff965d278e7510d2dd1/feed\nWrite Summary:\n{\n \"_Following\": {\n \"inserted\": [\n \"6024a9eea15b0ffbf42bf39f\"\n ]\n },\n \"_Request\": {\n \"updated\": [\n \"600d08d91caf6170b86666b6\"\n ]\n }\n}\nError:\n\nEnding session with error: integrating changesets failed: error finding the previous history size for version 80: client is disconnected (ProtocolErrorCode=101)\nSource:\n\nEnding sync session to MongoDB\nLogs:\n[\n \"Session was active for: 7h44m11s\"\n]\nPartition:\n\n/6008bff965d278e7510d2dd1/feed\nSession Metrics:\n{\n \"uploads\": 1\n}\n",
"text": "I just recently came across this in my release version (i.e. I can’t trace what was happening when the client device made the request) - the execution is in a cloud functionError Log contents:Following that error in the logs, there is this:The session was active for almost 8 hours, and both the logs say the “client is disconnected”. Can anyone shed any light on what this means? I still haven’t found a table of Protocol Error Codes to aid in diagnosing these errors when they happen.Thanks!",
"username": "Eric_Lightfoot"
},
{
"code": "",
"text": "Hi Eric,It has been a while since you posted this, are you still experiencing the issue?Was this a one-off error or has it happened consistently?Have you made any changes to the app recently? (specifically schema changes)\nIf destructive schema changes were made for example, then you may need to terminate sync and re-enable if possible.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "ProtocolErrorCode",
"text": "Hello,This has happened once so far. There have been no schema changes or app releases for ~20 days (at the time of the original post it had been 12 days since any deployments, which may not have included schema changes). I am familiar with the need to reset sync when making destructive schema changes.Thank you for replying - now, is there a place to find information on the list of different ProtocolErrorCode values?",
"username": "Eric_Lightfoot"
},
{
"code": "ProtocolErrorCodde",
"text": "Hi Eric,Thank you for replying - now, is there a place to find information on the list of different ProtocolErrorCodde values?This is not documented at this time but I have included the list of error codes.The reason for this is that the error codes are internal statuses that are more relevant for the MongoDB engineering team to investigate the server-side component of Realm Sync. If you have any issues with sync, it would be best to open a chat or ticket directly with the MongoDB Support team: Cloud: MongoDB Cloud.Please know that the list below is subject to change and may not be consistent in the future.ErrorCode = 100 // Connection closed (no error)\nErrorCode = 101 // Other connection level error\nErrorCode = 102 // Unknown type of input message\nErrorCode = 103 // Bad syntax in input message head\nErrorCode = 104 // Limits exceeded in input message\nErrorCode = 105 // Wrong protocol version (CLIENT) (obsolete)\nErrorCode = 106 // Bad session identifier in input message\nErrorCode = 107 // Overlapping reuse of session identifier (BIND)\nErrorCode = 108 // Client file bound in other session (IDENT)\nErrorCode = 109 // Bad input message order\nErrorCode = 110 // Error in decompression (UPLOAD)\nErrorCode = 111 // Bad syntax in a changeset header (UPLOAD)\nErrorCode = 112 // Bad size specified in changeset header (UPLOAD)\nErrorCode = 200 // Session closed (no error)\nErrorCode = 201 // Other session level error\nErrorCode = 202 // Access token expired\nErrorCode = 203 // Bad user authentication (BIND, REFRESH)\nErrorCode = 204 // Illegal Realm path (BIND)\nErrorCode = 205 // No such Realm (BIND)\nErrorCode = 206 // Permission denied (STATE_REQUEST, BIND, REFRESH)\nErrorCode = 207 // Bad server file identifier (IDENT) (obsolete!)\nErrorCode = 208 // Bad client file identifier (IDENT)\nErrorCode = 209 // Bad server version (IDENT, UPLOAD, TRANSACT)\nErrorCode = 210 // Bad client version (IDENT, UPLOAD)\nErrorCode = 211 // Diverging histories (IDENT)\nErrorCode = 212 // Bad changeset (UPLOAD)\nErrorCode = 213 // Superseded by new session for same client-side file (deprecated)\nErrorCode = 214 // Partial sync disabled (BIND, STATE_REQUEST)\nErrorCode = 215 // Unsupported session-level feature\nErrorCode = 216 // Bad origin file identifier (UPLOAD)\nErrorCode = 217 // Synchronization no longer possible for client-side file\nErrorCode = 218 // Server file was deleted while session was bound to it\nErrorCode = 219 // Client file has been blacklisted (IDENT)\nErrorCode = 220 // User has been blacklisted (BIND)\nErrorCode = 221 // Serialized transaction before upload completion\nErrorCode = 222 // Client file has expired\nErrorCode = 223 // User mismatch for client file identifier (IDENT)\nErrorCode = 224 // ErrorSessionsLimitExceeded\nErrorCode = 225 // Changeset contained an invalid schema change (UPLOAD)Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "Thank you Good Sir. I check your remarks about the caveats that come with this list. I will not hesitate to open a support ticket if something crops up.Best, Eric",
"username": "Eric_Lightfoot"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Integrating changesets failed: error finding the previous history size for version 80: client is disconnected (ProtocolErrorCode=101) | 2021-02-11T04:39:17.282Z | Integrating changesets failed: error finding the previous history size for version 80: client is disconnected (ProtocolErrorCode=101) | 4,649 |
null | [] | [
{
"code": "",
"text": "Hey everybody! I have a couple of questions about using Mongo Charts on a free tier of Mongo Atlas.It is unclear for me what would happen if I exceed 1 GB limit of a data transfer. Did I get any alert if I am near the limit or it doesn’t inform you at all about it and just charge you? Also as I am on a free tier I haven’t provided any billing information so what actually would happen in a such situation?Is it possible to check how much of a data transfer is used by each chart?It is connected to the question 2. I am a little bit suprised by the amount of data transfer in my dashboard. I have a dashboard with 10 charts. And it is half of February and I used 0.45 GB of the data transfer. I have a couple of numeric charts and bar charts that should not use that much of a transfer. I also have one heatmap and a map of my sample location that I suspect for eating all of my transfer. So my question is, is it normal?Is it possible to fine tune auto refresh of charts? For example to only refresh charts at specific time of a day or only do it during the working hours.",
"username": "Mateusz_Jundzill"
},
{
"code": "",
"text": "I have asked the charts team to respond. They should get back to you tomorrow (they are in Australia and about to knock off for the day).",
"username": "Joe_Drumgoole"
},
{
"code": "maxDataAge",
"text": "Hi @Mateusz_Jundzill -If you exceed the 1GB monthly bandwidth allowance, billing kicks in. You can track your usage of the free tier on the Charts Settings page but there isn’t any alert when you go past the threshold. However if you are a free tier Atlas user we don’t currently take any action if you exceed the 1GB threshold.and 3. The meter measures the amount of data transferred out of the Charts server, so the browser dev tools or similar should give you a good indication. A number chart or a bar chart with few bars should not use much data; it’s normally large tables or scatter charts that are the heaviest. Heatmaps may also be largeish depending on the number of categories. Also the more times your charts are rendered/refreshed, the more the meter will tick over. But keep in mind that the overage is only $1 for each additional GB so the overall cost of the service is usually much lower than alternative options, even if you do go beyond the free tier.Today you can influence the refresh the behaviour on the Dashboard Refresh Settings page, or by using the maxDataAge parameters for embedded charts. We are also planning on building a mechanism that will let you schedule dashboard refreshes at specific times as you suggested.HTH\nTom",
"username": "tomhollander"
},
{
"code": "",
"text": "Thank you @tomhollander for your answer. Everything is clear now!",
"username": "Mateusz_Jundzill"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo Charts Data usage | 2021-02-18T10:31:47.459Z | Mongo Charts Data usage | 2,778 |
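As a hedged illustration of the maxDataAge option Tom mentions for embedded charts: the sketch below assumes the @mongodb-js/charts-embed-dom SDK, and the base URL and chart ID are placeholders, not values from this thread.

import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";

const sdk = new ChartsEmbedSDK({
  baseUrl: "https://charts.mongodb.com/charts-example-abcde" // placeholder tenant URL
});

// maxDataAge (in seconds) controls how long cached results are reused before
// a chart re-queries the cluster - longer values mean less data transfer.
const chart = sdk.createChart({
  chartId: "00000000-0000-0000-0000-000000000000", // placeholder chart ID
  maxDataAge: 3600 // reuse data for up to an hour
});

chart.render(document.getElementById("chart")).catch(console.error);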
null | [
"text-search"
] | [
{
"code": "",
"text": "db.t.insertMany([\n{ “_id” : 1, “t” : “a” },\n{ “_id” : 2, “t” : “b” }\n])\ndb.t.createIndex({t: “text”})\ndb.t.find({$text:{$search: “a”}}) => no match\ndb.t.find({$text:{$search: “b”}}) => match\nwhy ??",
"username": "Heliang_Peng"
},
{
"code": "texttheanaandaior",
"text": "Hello @Heliang_Peng, welcome to the MongoDB Community forum!why ??This is the reason: Text Indexes - Stop Words, and it says -text indexes drop language-specific stop words (e.g. in English, the , an , a , and , etc.) and use simple language-specific suffix stemming.So, if you try to search for a, i, or, etc., the text search will fail. I think the text index is not used for searching stop words.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "thanks for your reply\ni get it",
"username": "Heliang_Peng"
},
{
"code": "none db.collection.createIndex(\n { t : \"text\" },\n { default_language: \"none\" }\n)\n",
"text": "Hi @Heliang_Peng,You can index stop words if you specify a language of none when creating the index:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "thanks @Pavel_Duchovny\nit works with {default_language: “none”}",
"username": "Heliang_Peng"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Cannot search for letter "a" with text indexed search | 2021-02-19T05:46:04.187Z | Cannot search for letter “a” with text indexed search | 2,606 |
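Putting the steps from this thread together, a minimal mongosh repro using the same collection and field as the question:

db.t.insertMany([{ _id: 1, t: "a" }, { _id: 2, t: "b" }])

// Default (English) text index: "a" is a stop word, so it is not indexed
db.t.createIndex({ t: "text" })
db.t.find({ $text: { $search: "a" } })  // no match

// Re-create the index with no language so stop words are indexed too
db.t.dropIndex("t_text")
db.t.createIndex({ t: "text" }, { default_language: "none" })
db.t.find({ $text: { $search: "a" } })  // matches { _id: 1, t: "a" }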
[
"mongodb-shell"
] | [
{
"code": "",
"text": "Every time when I’m trying access my DB for any CRUD operation I’m getting this below erroruncaught exception: TypeError: db.auth.insert is not a functionSearched a lot but didn’t get any answer. Normal commands are working like show dbs, show collections all are working.I’m using MongoDB Community edition 4.4.Thanks in advanceReference Image\nCaptured with Lightshot",
"username": "Sandip_Dhang"
},
{
"code": "",
"text": "Welcome to the community!\nIt is a syntax error\nWhat is pos in your command?\nIf its is dbname no need to mention it as you are already connected to it by use db command\nJust use db.collection.insert(…) or db.collection.find(…) and so on",
"username": "Ramachandra_Tummala"
}
] | Getting TypeError in every Command | 2021-02-18T17:25:51.735Z | Getting TypeError in every Command | 1,785 |
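One detail that may explain the exact message in this thread: in the legacy mongo shell, db.auth is a built-in authentication method, so db.auth.insert resolves to an undefined property of that function rather than to a collection named auth. If the collection really is called auth, it can be addressed explicitly (the database name below is a placeholder):

use mydb
db.getCollection("auth").insertOne({ user: "test" })
db.getCollection("auth").find()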
|
null | [
"java"
] | [
{
"code": "mongodb+srv://...",
"text": "Hi,Currently, the Java driver expects a TXT DNS record when using a seed list mongodb+srv://... connection string.\nFrom the documentation it seems that the TXT record is optional and the mongo shell client implementation doesn’t expect it either.\nI was wondering if the Java driver can be adjusted so that TXT DNS record becomes optional.Deniz",
"username": "Deniz_K"
},
{
"code": "com.mongodb.client.InitialDnsSeedlistDiscoveryTest",
"text": "Hi @Deniz_K,I had a look at this and I think you are mistaken in your analysis of the Java driver code. If there is no TXT record, a NamingException is not thrown, so that catch block is not entered. Rather, the Attribute for “TXT” will be null, the conditional here will be skipped, and the method will just return the empty string.To confirm this, you can run the test com.mongodb.client.InitialDnsSeedlistDiscoveryTest, which implements this test suite, which contains several hosts with no TXT records.Let me know if you have additional questions, or if you’re seeing something different in actual testing.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "NameNotFoundExceptionNOERRORNXDOMAIN@Test\npublic void testNOERROR() {\n ConnectionString connectionString = new ConnectionString(\"mongodb+srv://test1.test.build.10gen.cc/\");\n assertEquals(connectionString.getConnectionString(), \"mongodb+srv://test1.test.build.10gen.cc/\");\n}\n\n@Test\npublic void testNXDOMAIN() {\n // _mongodb._tcp.srv-test.lens.org SRV record exists\n // srv-test.lens.org record does not exist\n new ConnectionString(\"mongodb+srv://srv-test.lens.org/\");\n}\ncom.mongodb.MongoConfigurationException: Unable to look up TXT record for host srv-test.lens.org\n\n\tat com.mongodb.internal.dns.DefaultDnsResolver.resolveAdditionalQueryParametersFromTxtRecords(DefaultDnsResolver.java:131)\n\tat com.mongodb.ConnectionString.<init>(ConnectionString.java:378)\n\tat MongoDbSrvTest.testNXDOMAIN(MongoDbSrvTest.java:34)\n\t...\nCaused by: javax.naming.NameNotFoundException: DNS name not found [response code 3]; remaining name 'srv-test.lens.org'\n\tat jdk.naming.dns/com.sun.jndi.dns.DnsClient.checkResponseCode(DnsClient.java:661)\n\tat jdk.naming.dns/com.sun.jndi.dns.DnsClient.isMatchResponse(DnsClient.java:579)\n\tat jdk.naming.dns/com.sun.jndi.dns.DnsClient.doUdpQuery(DnsClient.java:427)\n\tat jdk.naming.dns/com.sun.jndi.dns.DnsClient.query(DnsClient.java:212)\n\tat jdk.naming.dns/com.sun.jndi.dns.Resolver.query(Resolver.java:81)\n\tat jdk.naming.dns/com.sun.jndi.dns.DnsContext.c_getAttributes(DnsContext.java:434)\n\tat java.naming/com.sun.jndi.toolkit.ctx.ComponentDirContext.p_getAttributes(ComponentDirContext.java:235)\n\tat java.naming/com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:141)\n\tat java.naming/com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:129)\n\tat java.naming/javax.naming.directory.InitialDirContext.getAttributes(InitialDirContext.java:142)\n\tat com.mongodb.internal.dns.DefaultDnsResolver.resolveAdditionalQueryParametersFromTxtRecords(DefaultDnsResolver.java:114)\n\t... 67 more\n> dig -t TXT test1.test.build.10gen.cc\n\n; <<>> DiG 9.10.6 <<>> @8.8.8.8 -t TXT test1.test.build.10gen.cc\n; (1 server found)\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: **NOERROR**, id: 59059\n;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1\n...\n> dig @8.8.8.8 -t TXT srv-test.lens.org\n\n; <<>> DiG 9.10.6 <<>> @8.8.8.8 -t TXT srv-test.lens.org\n; (1 server found)\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: **NXDOMAIN**, id: 38942\n;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1\n...\n",
"text": "Hi Jeff,Thanks for looking into this.\nIt seems a NameNotFoundException is thrown if the DnsClient receives a status 3 (NXDOMAIN) for a non-existent domain, which I believe is a valid scenario.To reproduce, you should be able to use the tests below with two connection strings.\nThe first one succeeds, with a NOERROR response for the TXT DNS query. I assume there is another non-TXT record for this name, or special DNS configuration.\nThe second test fails, due to a NXDOMAIN response (AWS Route53 resolver). There is no DNS entry at all for that domain name.Deniz",
"username": "Deniz_K"
},
{
"code": "",
"text": "Hi @Deniz_K,I opened JAVA-4018 (and DRIVERS-1566) to track this. Please follow those issues to keep abreast of any updates or to clarify anything.In the mean time, are you able to work around the issue?Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Hi @Jeffrey_Yemin,Thanks for ticketing this issue.All good at my end, for now I’m creating an empty TXT record as a work around.Best,\nDeniz",
"username": "Deniz_K"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Accept mongodb+srv connection string without corresponding TXT DNS record | 2021-02-09T10:53:27.553Z | Accept mongodb+srv connection string without corresponding TXT DNS record | 12,009 |
null | [
"student-developer-pack"
] | [
{
"code": "",
"text": "Hello y’all,on 09/23/20 I added my promo code from my Github Education Pack. I didn’t see the point to redeem it just yet until now! When I want to redeem the promo code it says it’s already expired.I’m not too sure if I’m using the correct code…\n( Yes! The expire date is still well ahead, somewhere in 2022. )Cheers!\nCody Lynn",
"username": "codiq"
},
{
"code": "",
"text": "Hi Cody,Thank you four your message and welcome to the forums!\nCan you send me a DM with the email address that you used?Thank you!Lieke",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "Already figured it out by reading some other topics. Thanks!",
"username": "codiq"
},
{
"code": "",
"text": "",
"username": "Lieke_Boon"
}
] | Github Student PROMO code not redeemable? | 2021-02-18T10:32:04.152Z | Github Student PROMO code not redeemable? | 5,764 |
null | [
"node-js"
] | [
{
"code": "",
"text": "Got this deprecation warning on Node v14.15.3 and Mongo v4.4.3. Is it safe to ignore?(node:44612) DeprecationWarning: Listening to events on the Db class has been deprecated and will be removed in the next major version.",
"username": "cpeDilson"
},
{
"code": "",
"text": "@Lauren_Schaefer Is this warning safe to ignore? I too am getting this error with Mongodb version 4.4, Mongoose version ^5.10.12 and Node v12.16.3 Got your reference from the answer Warning: Accessing non-existent property ‘MongoError’ of module exports inside circular dependency",
"username": "Avani_Khabiya"
},
{
"code": "",
"text": "If you are using mongoose version 5.11.16 or a higher version as your ODM. You can solve this issue by downgrading it to version 5.11.15\nnpm uninstall mongoose\nnpm i [email protected] more here solved stackOverflow",
"username": "Charles_Kyalo"
},
{
"code": "",
"text": "Yep, you can ignore this one for now. It’s just warning you that this will be an issue when you upgrade to the next major version.A little more info from one of the driver engineers:Db is no longer the place to listen to events, you should listen to your MongoClient instead like so:const client = new MongoClient(…)\nclient.on(‘Event name I want to listen too’, () => {…})\nawait client.connect()The reason for this style is because by registering your listeners before connecting you can be sure that you’re able to capture every event that is triggered",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | (node:44612) DeprecationWarning: Listening to events on the Db class has been deprecated and will be removed in the next major version | 2021-02-13T05:36:33.255Z | (node:44612) DeprecationWarning: Listening to events on the Db class has been deprecated and will be removed in the next major version | 14,408 |
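A runnable sketch of the pattern quoted in this thread, assuming the Node.js driver; the URI is a placeholder, and serverHeartbeatSucceeded is just one example of a monitoring event the client emits:

const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI

  // Register listeners on the MongoClient *before* connecting so that
  // no events fired while connecting are missed.
  client.on("serverHeartbeatSucceeded", (event) => {
    console.log("heartbeat ok:", event.connectionId);
  });

  await client.connect();
  // ... use client.db(...) as usual ...
  await client.close();
}

main().catch(console.error);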
null | [
"queries",
"node-js"
] | [
{
"code": "",
"text": "While working on the mflix tutorial I have run into an issue defining a projection on db.model.find command.In mongo shell I can run a command:movies.find({countries: { $in: {countries}}, {title: 1})and get either one country or multiple countries returned with {_id, title}.In the moviesDAO.js {getMoviesByCountry} if the find command is formatted as above it fails the projections test. When I log the results to the movies variable it has the entire record not just {_id, title}.If I modify the find command to be:movies.find({countries: { $in: {countries}}, {projection: {title: 1}})then the result is once again just {_id, title} as expected.I cannot find in the MongoDB documentation any reference to using “projection” as a keyword to define a projection on a find.And why does the result differ between mongo shell and javascript implementation?",
"username": "Harold_Breeden"
},
{
"code": "",
"text": "Hello welcome : )node.js driver and mongo-shell are different.\nThey have similar syntax but not excactly the same.When you use the driver see the driver api\nhttps://mongodb.github.io/node-mongodb-native/3.6/api/Collection.html#find\nHere you can see the projection as option.",
"username": "Takis"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Javascript: requires a projection keyword otherwise projection is ignored | 2021-02-18T18:20:41.939Z | Javascript: requires a projection keyword otherwise projection is ignored | 2,166 |
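For reference, a self-contained Node.js driver sketch of the projection option discussed in this thread (the URI and database name are placeholders):

const { MongoClient } = require("mongodb");

async function titlesByCountry(countries) {
  const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI
  await client.connect();
  const movies = client.db("sample_mflix").collection("movies");

  // Unlike the shell, the driver's second find() argument is an options
  // object, so the projection must be nested under the `projection` key.
  const docs = await movies
    .find({ countries: { $in: countries } }, { projection: { title: 1 } })
    .toArray();

  await client.close();
  return docs;
}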
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "Hi,I would like to share collections between 3 microservices and one of my solutions is to use Kafka MongoDB Connector to achieve it. That means, first source the collection into a dedicated topic, then sink this topic to save data into microservices DBs. I don’t know if it is the best solution but how can I make sure to only source fullDocument and sink it including deleted data.Any configuration suggestion?Thank you for your help!",
"username": "OUEDRAOGO_Pegdwinde"
},
{
"code": "",
"text": "I am also looking for this. Did you find any solution ?",
"username": "Ajamal_Khan"
},
{
"code": "",
"text": "There is n example of the Kafka Connector here : https://docs.mongodb.com/kafka-connector/master/kafka-docker-example/basically, your source configuration will look something like:curl -X POST -H “Content-Type: application/json” --data ’\n{“name”: “mongo-source”,\n“config”: {\n“tasks.max”:“1”,\n“connector.class”:“com.mongodb.kafka.connect.MongoSourceConnector”,\n“key.converter”:“org.apache.kafka.connect.json.JsonConverter”,\n“key.converter.schemas.enable”:false,\n“value.converter”:“org.apache.kafka.connect.json.JsonConverter”,\n“value.converter.schemas.enable”:false,\n“publish.full.document.only”: true,\n“connection.uri”:“mongodb://mongo1:27017,mongo2:27017,mongo3:27017”,\n“topic.prefix”:“aprefix”,\n“database”:“MyDB”,\n“collection”:“SomeCollection”\n}}’ http://localhost:8083/connectors -w “\\n”the sink would look something like:\ncurl -X POST -H “Content-Type: application/json” --data ’\n{“name”: “mongo-atlas-sink”,\n“config”: {\n“connector.class”:“com.mongodb.kafka.connect.MongoSinkConnector”,\n“tasks.max”:“1”,\n“topics”:“aprefix.MyDB.SomeCollection”,\n“connection.uri”:“'<>”,\n“database”:“DestinationDBName”,\n“collection”:“DestinationCollection”,\n“key.converter”:“org.apache.kafka.connect.json.JsonConverter”,\n“key.converter.schemas.enable”:false,\n“value.converter”:“org.apache.kafka.connect.json.JsonConverter”,\n“value.converter.schemas.enable”:false\n}}’ http://localhost:8083/connectors -w “\\n”",
"username": "Robert_Walters"
},
{
"code": " {\n _id: { <BSON Object> },\n \"operationType\": \"<operation>\",\n \"fullDocument\": { <document> },\n \"ns\": {\n \"db\": <database>,\n \"coll\": <collection>\n },\n \"to\": {\n \"db\": <database>,\n \"coll\": <collection>\n },\n \"documentKey\": {\n _id: <value>\n },\n \"updateDescription\": {\n \"updatedFields\": { <document> },\n \"removedFields\": [ <field>, ... ]\n },\n \"clusterTime\": <Timestamp>,\n \"txnNumber\": <NumberLong>,\n \"lsid\": {\n \"id\": <UUID>,\n \"uid\": <BinData>\n }\n }\nchange.data.capture.handler",
"text": "@Robert_Walters Thanks for the response but the problem with this approach is that the source connector generator event stream which containsthis format all we want is a full document. The sink connector has a change.data.capture.handler the field which supports CDC using Debezium Connector.But connecting Debezium Connector to MongoDB Atlas is another challenge that we are facing write.Is there a CDC handler for the MongoDB official source connector that we can use in the sink connector?",
"username": "Ajamal_Khan"
},
{
"code": "",
"text": "At this time you’ll have to use the Debezium Connector for MongoDB as the source. That connector supports MongoDB CDC events. The MongoDB Connector for Apache Kafka supports “sinking” to MongoDB, CDC events sourced from the Debezium Connectors for MongoDB, MySQL and Postgres.",
"username": "Robert_Walters"
},
{
"code": "Error while reading the 'shards' collection in the 'config' database: Timed out after 30000 ms while waiting to connect",
"text": "@Robert_Walters thanks for your response again.\nWe tried using Debezium Connector but we are getting Error while reading the 'shards' collection in the 'config' database: Timed out after 30000 ms while waiting to connect this error while the connector is trying to connector to MongoDB Atlas instance.This Debezium connection issue was resolved by enabling SSL on the connector config.",
"username": "Ajamal_Khan"
},
{
"code": "",
"text": "Update, starting in Version 1.4 of the Kafka Connector, we added a CDC handler for MongoDB so you can now apply change stream events to a sink. See details in this blog:Version 1.4 of the MongoDB Connector for Apache Kafka focused on customer requested features that give the MongoDB Connector the flexibility to route MongoDB data within the Kafka ecosystem.",
"username": "Robert_Walters"
}
] | Kafka Connect source collection and sink the topic from source | 2020-05-15T20:53:09.013Z | Kafka Connect source collection and sink the topic from source | 5,323 |
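A sketch of what the 1.4+ sink described in the last reply could look like. The change.data.capture.handler class is the one documented for the connector; the names, topic and connection string are placeholders, and note that for this handler the source must publish complete change stream events (i.e. publish.full.document.only must not be true):

curl -X POST -H "Content-Type: application/json" --data '
{"name": "mongo-cdc-sink",
"config": {
"connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
"tasks.max": "1",
"topics": "aprefix.MyDB.SomeCollection",
"connection.uri": "<destination connection string>",
"database": "DestinationDBName",
"collection": "DestinationCollection",
"change.data.capture.handler": "com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter.schemas.enable": false,
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": false
}}' http://localhost:8083/connectors -w "\n"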
null | [] | [
{
"code": "",
"text": "Hi,I want to set a default “dbPath” value that will be different from the exist default which is “C:/data/db”.Currently I do it with “mongod --dbPath” or “mongod --config” (When “dbPath” is defined there),\nBut how can I change the default “dbPath” so when I type only “mongod” it will run it with my other default value for “dbPath”?Thanks.",
"username": "burekas_burekas"
},
{
"code": "",
"text": "Hi @burekas_burekasThe way you are doing it is the correct and only way of changing it.If you want to change the default compiled into mongod then you would have to pull the source code and build you own binary.",
"username": "chris"
},
{
"code": "",
"text": "Understood, Thanks.I hope you will consider it as a request for the future.",
"username": "burekas_burekas"
},
{
"code": "",
"text": "@Joe_Drumgoole Yep, @burekas_burekas already called out as one of their methods.",
"username": "chris"
},
{
"code": "",
"text": "You are right. I missed that. I have deleted my reply ",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How can I change the default settings of "mongod"? | 2021-02-17T12:08:38.043Z | How can I change the default settings of “mongod”? | 2,303 |
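Since the compiled-in default cannot be changed without rebuilding, the closest practical approach from this thread is a config file plus a shortcut or alias; a sketch with placeholder paths:

# C:\mongodb\mongod.cfg
storage:
  dbPath: D:\mongo\data

# a shortcut or alias can then stand in for a bare `mongod`:
mongod --config C:\mongodb\mongod.cfg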
null | [
"configuration"
] | [
{
"code": "",
"text": "Hello,we have default “snappy” as compression for our existing collection. we are looking it to change for “zstd”. Can we do this change without dump and restore whole collection?\nAlso my collection is sharded",
"username": "Aayushi_Mangal"
},
{
"code": "",
"text": "Hi @Aayushi_MangalIn brief, with your sharded replicasets. Reinitialize each node with the desired compressor configured. Wait until that node is in sync, then move to the next one.I discussed in my reply with Eric migrating to zstd.",
"username": "chris"
}
] | WT compression - zstd | 2021-02-18T11:49:48.070Z | WT compression - zstd | 2,534 |
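For reference, a sketch of the setting each re-initialized node would carry; blockCompressor is the documented WiredTiger option, and a node that performs initial sync with it configured recreates its collections using the new compressor:

storage:
  wiredTiger:
    collectionConfig:
      blockCompressor: zstd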
[
"swift"
] | [
{
"code": "NavigationLink(destination: HuntView(hunt: self.hunt), isActive: $realmReady) {\n Button(action: {\n getHuntRealm()\n}\n }) {\n Text(huntlet.title)\n }\ngetHuntRealm() {\n let realmConfig = user?.configuration(partitionValue: \"hunt=\\(huntlet.code)\")\n do {\n self.huntRealm = try! Realm(configuration: realmConfig!)\n }\n if self.huntRealm != nil {\n let hunts = self.huntRealm.objects(Hunt.self)\n if hunts.count > 0 {\n self.hunt = hunts[0]\n self.realmReady = true\n }\n }\n}\nhunts.count > 0",
"text": "I am attempting to open a read only Realm and then passing in an object from that Realm before entering a view because I remember reading somewhere that this is what I should do.I am using SwiftUI and have a View called HuntCard that is reading values from a Huntlet object that is part of user.hunts in the “user=user._id” partition.When tapping on the card it needs to load the read-only Realm that contains a Hunt object.I have attempted to do it like this:Then, the important code:The code never gets past hunts.count > 0 even though there is a hunt in this partition.Here is the Huntlet object in the User Realm:\nHere is the corresponding Hunt object in the Hunt Realm:\nScreen Shot 2021-02-17 at 7.23.36 AM768×446 79.7 KBI haven’t seen any clear documentation on how this should be done, opening the Realm and loading the object and then passing that object to the view (to HuntView in this case).Is this in the documentation somewhere? The opening/closing of these seem to be done in quite a few different ways amongst examples and I can’t for the life of me decipher how it works and implement it in my own app.Thanks for any guidance here.–Kurt",
"username": "Kurt_Libby1"
},
{
"code": "import SwiftUI\nimport RealmSwift\n\nstruct HuntletView: View {\n @State var showingHunt = false\n let huntlet: Huntlet\n \n var body: some View {\n NavigationView {\n Button(action: { self.showingHunt.toggle() }) {\n Text(huntlet.title)\n }\n NavigationLink(\n destination: HuntView().environment(\\.realmConfiguration, app.currentUser!.configuration(partitionValue: \"hunt=\\(huntlet.code)\")),\n isActive: $showingHunt) { EmptyView() }\n }\n }\n}\n\nstruct HuntView: View {\n @ObservedResults(Hunt.self) var hunts\n \n var body: some View {\n if let hunt = hunts.first {\n Text(hunt.title)\n }\n }\n}",
"text": "Hi Kurt, using Realm-Cocoa 10.6, I’d start with something like this (apologies if I have my Hunts and Huntlets the wrong way around):",
"username": "Andrew_Morgan"
},
{
"code": ".environment(\\.realmConfiguration, app.currentUser!.configuration(partitionValue: \"hunt=\\(huntlet.code)\")),realmConfiguration.environment().environment()",
"text": ".environment(\\.realmConfiguration, app.currentUser!.configuration(partitionValue: \"hunt=\\(huntlet.code)\")),Thanks Andrew!This works. I caught most of this during the webinar yesterday and really appreciate it.Side note for people that might see this and run into this issue, the realmConfiguration that is passed in .environment() is passed down to other sub-views. I accidentally passed it down again. It worked locally, but it breaks sync and stopped reaching Atlas. When I removed the second .environment() and just trusted that it would be passed down, everything works as it should.",
"username": "Kurt_Libby1"
}
] | Opening read-only Realm before entering view | 2021-02-17T13:39:09.581Z | Opening read-only Realm before entering view | 2,068 |
|
null | [] | [
{
"code": "",
"text": "Is there a way to develop Realm apps with a desktop IDE?\nFor example, using Jetrbains IDE?Using the Realm CLI an application can be exported however, it is obvious when exported, there’s no runtime for the exported code, is there a way to execute the exported code on the cloud runtime but through an IDE or plugin?",
"username": "cyberquarks"
},
{
"code": "",
"text": "Welcome to the community.I think that sounds like a cool idea. I’d suggest that you post this as a feature suggestion at Realm: Top (70 ideas) – MongoDB Feedback Engine and then post the link back here so that others have the chance to up-vote it.I take that back, there’s already a request there - up-vote this one instead: Local development tooling – MongoDB Feedback Engine",
"username": "Andrew_Morgan"
}
] | Developing with an IDE | 2021-02-17T12:06:54.593Z | Developing with an IDE | 1,657 |
null | [
"rust"
] | [
{
"code": "",
"text": "I am building a website using actix-web and mongodb crate v1.1.1. At first, I tested the driver for connecting to the database, get the collection handle, create and read form this collection. That test was in the main fn and it works. But, when adding the client and collection handles as an application state extractor the build process is hanging.public_types.rsmain.rsuse actix_web::{get, post, web, App, HttpResponse, HttpServer, Responder};\nuse mongodb::bson::{self, doc, Bson};\nuse mongodb::error::Error;\nuse std::env;\nuse chrono::{DateTime, TimeZone, NaiveDateTime, Utc};\nuse futures::stream::{StreamExt, Next};\nuse std::sync::Arc;\nuse mongodb::Client;\nuse public_types::public_types::AppState;\nmod routing;\nmod public_types;#[actix_web::main]\nasync fn main() -> std::io::Result<()> {// Load the MongoDB connection string from an environment variable:\nlet client_uri = “mongodb+srv://admin-zc:[email protected]/test?retryWrites=true&w=majority”;\n// env::var(“MONGODB_URI”).expect(“You must set the MONGODB_URI environment var!”);\n// A Client is needed to connect to MongoDB:\nlet client = mongodb::Client::with_uri_str(client_uri.as_ref()).await;\nlet client = client.unwrap();\nlet reservation_collection = client.database(“zcdb”).collection(“reserveation”);let data = AppState {\nclient: Arc::new(client),\nreservation_col: Arc::new(reservation_collection),\n};HttpServer::new(move || {\nApp::new()\n.data(data.clone())\n.configure(routing::routing::home_routing)\n.configure(routing::routing::book_routing)\n.configure(routing::routing::query_routing)\n.configure(routing::routing::login_routing)\n})\n.bind(“127.0.0.1:8080”)?\n.run()\n.await\n}",
"username": "Ahmed_Yasen"
},
{
"code": "rustuprustup override set 1.44",
"text": "Hi @Ahmed_YasenThis is a known issue with rustc 1.45-1.47. (I had problems with this myself!) You can fix it by either pinning your Rust version to 1.44, or upgrading to 1.48+.If you need to pin to an older release of rust (rather than just upgrading and using the latest version) and you’re using rustup, you can set it for your project by running rustup override set 1.44 in your project directory.Hope this helps!",
"username": "Mark_Smith"
}
] | Build process is hanging when inserting a document | 2021-02-16T11:13:19.068Z | Build process is hanging when inserting a document | 3,829 |
null | [] | [
{
"code": "",
"text": "Hey guys,\nIm very new to databases so sorry if thats a stupid question, but I would like to use a NoSQL database to store assets for 3d projects, and those files can get big. So is there a way to store files in MongoDB and if so, what file sizes can it handle?\nIf it was answered somewhere else please link me to it, I could not find it via search.\nThanks for your time!",
"username": "Patrick_Glockner"
},
{
"code": "",
"text": "Hi Patrick, welcome to the community!Storing large files in a database is always a contentious topic, but I’ll skip over that.As you’ve likely discovered, there is a 16 MB size limit for MongoDB documents. If you need to store files larger than that then you can use GridFS.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Storing large files in a database is always a contentious topic, but I’ll skip over that.I am interested in different point of views in relation to the above as I want to start experimenting with GridFS. Do you have some links you could share? Thanks",
"username": "steevej"
},
{
"code": "",
"text": "Hopefully others will chip in, but from my perspective.Storing a large number of big files in a database is expensive in terms of local storage and memory. Storing the files in your local file system (and then including a reference to it in your documents) is one option. If you can afford the network latency then offloading the files to cheap block storage such as S3 is a great option.Having said that, there are occasions where storing a modest number of modest-sized files in the database can make sense. As an example, I have an app where the MongoDB data is synced to a Realm database running inside a mobile app. I want some thumbnail images to always be up to date and available in the mobile app - even when offline (the app is designed to be used out at sea). By storing those thumbnails in MongoDBgoDB, MongoDB Realm Sync makes sure that they’re always available locally in the mobile app (where an S3 link would be useless when there’s no cell coverage.)As always, it depends on the characteristics of your data and app.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Thanks that’s exactly the kind of feedback I am interested.Hopefully others will chip inI too hope so.In my case, I have some backup sharing functionalities that I want to migrate out of Dropbox. GridFS with Atlas could be use as an alternative. It would make sense as other parts are already using Atlas. The goal is not to store the files permanently on GridFS but to use it as transient transport mechanism. But more importantly to unify how the database is backed up with the other files.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for the input guys. I wanted to save my files in a database, so I do not have to worry about breaking links to the programs they are connected to when I decide to restructure something. But from you answers it seems it is not the best idea. I will have a look at GridFS, maybe I will store only the smaller files let’s say up to 100mb. Some type of assets like textures will probably never exceed this limit. Again, thanks for your time!",
"username": "Patrick_Glockner"
}
] | Using MongoDB to store files up to 20GB | 2021-02-15T19:47:35.326Z | Using MongoDB to store files up to 20GB | 13,095 |
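A minimal Node.js sketch of the GridFS route mentioned in this thread (the URI, database name and file path are placeholders):

const fs = require("fs");
const { MongoClient, GridFSBucket } = require("mongodb");

async function uploadAsset(filePath) {
  const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI
  await client.connect();
  const bucket = new GridFSBucket(client.db("assets"));

  // GridFS stores the file as 255 kB chunks plus a metadata document,
  // so the 16 MB document limit does not constrain the file size itself.
  await new Promise((resolve, reject) => {
    fs.createReadStream(filePath)
      .pipe(bucket.openUploadStream(filePath))
      .on("finish", resolve)
      .on("error", reject);
  });

  await client.close();
}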
null | [
"aggregation"
] | [
{
"code": " { \"_id\" : ObjectId(\"6024656264c97c56e0705cd6\"), \"company_id\" : ObjectId(\"602454847756575710020545\"), \"email\" : \"[email protected]\" }\n { \"_id\" : ObjectId(\"6024656264c97c56e0705cd7\"), \"company_id\" : ObjectId(\"602454847756575710020545\"), \"email\" : \"[email protected]\" }\n [\n {\n \"$facet\": {\n \"emails\": [\n {\n \"$project\": {\n _id: false,\n \"k\": \"$email\",\n \"v\": \"1\"\n }\n }\n ]\n }\n },\n {\n \"$project\": {\n \"emails\": {\n $arrayToObject: \"$emails\"\n }\n }\n }\n ]\n",
"text": "My M10 cluster is complaining about exicding a memory limit. My collection looks like this:And I am running this aggregation:I understand the 100M state limit, but shouldn’t allowDiskUse:true address that? I am using mongoose and am passing that parameter like so:Collection.aggreagate([…]).allowDiskUse(true).then(result => {…This is a similar post here that seems to suggest that the issue has to do with the 16MB doc limit. Why, then, does is mention the “the limit of 104857600 bytes” which is about 100MB? I don’t see it complaining about 16 MB anywhere.",
"username": "Zarif_Alimov"
},
{
"code": "allowDiskUse",
"text": "Hi Zarif, welcome to the community!In addition to the result size, there is a 100 MB limit on each stage.Have you checked the profiler or diagnostic logs to confirm whether allowDiskUse is being correctly set by Mongoose?",
"username": "Andrew_Morgan"
},
{
"code": "Exceeded memory limit for $group, but didn't allow external sort. Pass allowDiskUse:true to opt in.\nCollection.aggreagate([…]).allowDiskUse(true).then(result => {…\n$push used too much memory and cannot spill to disk. Memory limit: 104857600 bytes\n",
"text": "Hey Andrew,The reason I believe Mongoose properly sets allowDiskUse to true is because I had first attempted to write that same aggregation using a somewhat different pipeline which involved $push inside a $group. Without passing allowDiskUse:true, I got this error:When I set the flag to true like so:and rerun the aggregation, the error turns into:So clearly Atlas sees my passing the flag, but somehow is unable to do anything with it. This makes me think there is some setting in my Atlas panel that I need to flip on.Do you have any insight on this?",
"username": "Zarif_Alimov"
},
{
"code": "",
"text": "Checking with one of our experts on the implementation of the aggregation framework.",
"username": "Andrew_Morgan"
}
] | Document constructed by $facet is 104857822 bytes, which exceeds the limit of 104857600 bytes | 2021-02-11T08:57:50.704Z | Document constructed by $facet is 104857822 bytes, which exceeds the limit of 104857600 bytes | 4,258 |
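Because $facet has to assemble its entire output into one document - which is itself subject to the size cap, regardless of allowDiskUse - one workaround is to stream the documents and build the map client-side. A hedged sketch using the collection shape from the question:

// Build the { email: "1" } map in application memory so no single
// server-side document ever has to hold the whole result.
async function emailMap(collection) {
  const result = {};
  const cursor = collection.find({}, { projection: { email: 1, _id: 0 } });
  for await (const doc of cursor) {
    result[doc.email] = "1";
  }
  return result;
}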
null | [
"swift"
] | [
{
"code": "override func viewDidAppear(_ animated: Bool) {\n super.viewDidAppear(animated)\n self.navigationController?.navigationBar.isHidden = true\n logoAnimationView.logoGifImageView.startAnimatingGif()\n checkingConfiguration()\n}\n\nfileprivate func checkingConfiguration(){\n if let _ = app.currentUser{\n print(\"User is logged in\")\n var configuration = app.currentUser!.configuration(partitionValue: \"user=\\(app.currentUser!.id)\", cancelAsyncOpenOnNonFatalErrors: true)\n configuration.objectTypes = [User.self, Season.self, HandbookEntry.self, Block.self, SubBlock.self, Cache.self, BagUpInput.self, PlotInput.self, CoordinateInput.self, Coordinate.self, ExtraCash.self]\n Realm.asyncOpen(configuration: configuration, callbackQueue: .main, callback: { result in\n switch result {\n case .failure( _):\n print(\"Failed to load aysnc\")\n if Realm.fileExists(for: configuration){\n self.enterapp(configuration: configuration, foundRealm: nil)\n }else{\n print(\"User not logged in; present sign in/sign up view\")\n self.navigationController?.pushViewController(WelcomeViewController(), animated: false)\n }\n case .success(let realm):\n self.enterapp(configuration: configuration, foundRealm: realm)\n }\n })\n }else {\n print(\"User not logged in; present sign in/sign up view\")\n self.navigationController?.pushViewController(WelcomeViewController(), animated: false)\n }\n}\n",
"text": "Problem\nHello I am developing a swift application for IOS and using a MongoDB Realm database. I am having problems with Realm.asyncOpen(configuration: <#T##Realm.Configuration#>, callbackQueue: <#T##DispatchQueue#>, callback: <#T##(Result<Realm, Error>) -> Void#>) as it only works 90% of times (Around that percentage). I have tried the other variations of Realm.async() to see if it was just that function but it is not. I am using this function in my SplashScreen (First View Controller) in order to grab the realm and any changes done to it from my MongoDB Cluster as it using Realm Sync. Problem is that sometimes when it is called there is no callback being given, no result or fail, it’s just in an indefinite loop which lasts from a minute to ten minutes (normally takes couple seconds). This causes my animation screen to run indefinitely as I am waiting for the callback. Once this loop happens the next several runs are fine, but then it comes back again.This is the printed console when the indefinite loop occurs:\nUser is logged in2021-02-16 17:13:51.964779-0500 PlantersHandbook[2968:968859] Sync: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, async open = false, client reset = false2021-02-16 17:13:52.051113-0500 PlantersHandbook[2968:968859] Sync: Connection[1]: Connected to endpoint ‘34.227.4.145:443’ (from ‘192.168.0.22:51480’)Code\nNote: Code does not get into the callback function when in the indefinite loop (1-10minutes)My Questions\nDoes anyone have similar problems? Can I set a timer somewhere to break the loop that is stuck and restart the Realm.asyncOpen somehow?Set Up\nXcode 12.4\nIOS Deployment Target - 14.3\nUpdated Cocoa Pods for Realm and RealmSwift",
"username": "Sebastian_Gadzinski"
},
{
"code": "Realm.asyncOpen(configuration: config) { result incase .success(let realm):\n print(\"Successfully opened realm: \\(realm)\")\n",
"text": "It’s a good idea to include code as text in your question. That way, it we want to test it or use in an answer we don’t have to retype it.I would suggest a couple of things to do to begin troubleshooting - try these suggestions and see if it makes a difference.Call checkingConfiguration outside of the DispatchQueue (e.g. comment out the DispatchQueue code).Also, change your Realm.asyncOpen to thisRealm.asyncOpen(configuration: config) { result inlastly, comment out everything within that closure and leave it as",
"username": "Jay"
},
{
"code": "",
"text": "Thanks for your response Jason, I have changed the picture of the original code to text and tried your solutions but sadly I am still getting the same result.Im going to try to code a way that if the Realm.async lasts over 15 seconds, to check if the realm file exists on the device and open the realm normally, not communicating with the remote database first. If there is no file on the device then the user will just be sent to the welcome screen.",
"username": "Sebastian_Gadzinski"
},
{
"code": "",
"text": "I copy and pasted your code and it is working correctly.That would make me think it’s something else - maybe a firewall or network issue or some other code causing the delays.",
"username": "Jay"
},
{
"code": "",
"text": "@Jay did you run it a bunch of times and it worked 100%? Did you have a logo animation running or just the normal start up? If so it could be because the logo animation is occupying something that blocks the callback, or it could be what you said about some firewall or network issue.If it is the firewall or network issue, I think the best bet would still be to have some sort of timer that just pulls the local realm after a decent time of trying to grab from the remote database and use it instead.",
"username": "Sebastian_Gadzinski"
},
{
"code": "logoAnimationView.logoGifImageView.startAnimatingGif()cancelAsyncOpenOnNonFatalErrors",
"text": "logoAnimationView.logoGifImageView.startAnimatingGif()I do not have that in my project because that code wasn’t specified in the question. But yeah - if you’re doing something in the main thread it could be gumming up the works.I ran it about 15 times over the course of two hours with no issues. Once the code was whittled down it’s kinda standard connection code that closely matches the code in the getting started guide so I would expect it to work.Comment out that animation call and see what happens. You could also add a breakpoint and see specifically what line it causing the delay.I tried with with and without cancelAsyncOpenOnNonFatalErrors as well.",
"username": "Jay"
},
{
"code": "",
"text": "@Jay I took out the logo animation and the loop is still happening. It only happens when I am connected to internet (wifi or cellular), so it must be an issue with the connection to my realm remote database.Thanks for all the help so far.",
"username": "Sebastian_Gadzinski"
}
] | Realm.asyncOpen Works 9/10 times | 2021-02-17T00:20:48.558Z | Realm.asyncOpen Works 9/10 times | 3,208 |
[
"monitoring"
] | [
{
"code": "",
"text": "Hey there,\nrelated to this topic → Atlas Cluster connections and memory allocation that can not be explained\nWe are facing similar issues. I’ve checked the following metrics:\n\nMetric556×670 23.8 KB\n\nThe first chart represents the number of connections, the second is the “Memory”-metirc and the last is “System-Memory”-metric. How should I interpret this? I’m right if I assume, that the virtual memory is just simulated and the real used memory is just about 500MB? Because system memory is about 1.5GB and we are having 3 shards. I’m asking because the creator of the topic linked is thinking about upgrading from M10 to M30. We are also having M10 but if I’m right with the virtual memory, there is no need for an upgrade, right?Greetings.",
"username": "Andy_Hermann"
},
{
"code": "",
"text": "Hi @Andy_Hermann,\nWelcome to the MongoDB Developer Community Forum!The first chart represents the number of connections, the second is the “Memory”-metirc and the last is “System-Memory”-metric. How should I interpret this?The “Memory” metric is associated with the memory for mongod and wiredTiger processes running on the system. The “System-Memory” metric is associated with the memory for all processes running on the system.You can find a more in-depth explanation of the metrics by hovering your mouse over the metric name in the charts view and selecting the info “i” button as shown below:Screen Shot 2021-02-18 at 11.01.55 am695×477 44.5 KBWe are also having M10 but if I’m right with the virtual memory, there is no need for an upgrade, right?Purely based off your screenshot, it does not appear an upgrade to M30 is required (from Memory + System Memory) alone. However, this question may be best suited for Atlas support as they would have a greater view of the metrics outside of Memory and System Memory. There is free Atlas chat support when on the Atlas UI in the bottom right hand corner, it will be a green bubble icon which will open a chat box when selected.Best Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to interpret virtual Memory Metric | 2021-01-29T19:26:25.270Z | How to interpret virtual Memory Metric | 3,878 |
|
null | [
"react-native"
] | [
{
"code": "useRealmimport Realm from \"realm\";\nimport { getRealmApp } from \"../functions/realmConfig\";\nimport { ItemSchema } from \"./itemSchema\";\n\nexport const app = getRealmApp();\nexport const useRealmApp = getRealmApp();\n\nexport const user = useRealmApp.currentUser;\nexport const partitionValue = useRealmApp.currentUser.id;\n\nexport const config = {\n schema: [ItemSchema], //other schema will be added in the future\n sync: {\n user: user,\n partitionValue: partitionValue, //app.currentUser.id,\n },\n};\n\nexport const useRealm = new Realm(config);\nindex.ts?77fd:9 Uncaught TypeError: Cannot read property 'id' of null\n at eval (index.ts?77fd:9)\n at Object../src/realm/index.ts (renderer.js:5394)\n at __webpack_require__ (renderer.js:791)\n at fn (renderer.js:102)\n at eval (testIndex.tsx?956d:10)\n at Object../testIndex.tsx (renderer.js:5438)\n at __webpack_require__ (renderer.js:791)\n at fn (renderer.js:102)\n at eval (App.tsx?d35d:4)\n",
"text": "I’m getting an error when I want to export a default realm configuration useRealm so that I will be able to use it other filesexpect results is, if the user is not logged in it should show a login screen. But it throws this error:",
"username": "Tony_Ngomana"
},
{
"code": "",
"text": "Hi Tony,It has been a while since you posted this, are you still experiencing this error?if the user is not logged in it should show a login screenCould you please share the part of your code that handles this logic?At which point does this error occur? Does it happen before or after the logic you described?Regards\nManny",
"username": "Mansoor_Omar"
}
] | Cannot read property 'id' of null | 2021-02-08T08:44:47.539Z | Cannot read property ‘id’ of null | 5,035 |
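The stack trace in this thread shows app.currentUser being dereferenced at module load time while no user is logged in; a hedged sketch of a null-safe variant, keeping the module's own names (openUserRealm is a hypothetical helper):

import Realm from "realm";
import { getRealmApp } from "../functions/realmConfig";
import { ItemSchema } from "./itemSchema";

export const app = getRealmApp();

// Build the synced Realm lazily, only once a user exists, instead of
// reading app.currentUser.id unconditionally at import time.
export function openUserRealm() {
  const user = app.currentUser;
  if (!user) {
    return null; // caller renders the login screen instead
  }
  const config = {
    schema: [ItemSchema],
    sync: { user, partitionValue: user.id },
  };
  return new Realm(config);
}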
null | [] | [
{
"code": "",
"text": "Hi,Potentially super noob question, but could not find anything here or at stackoverflow.In my application (currently working on the backend with express), I want users to store data based on their own JSON schema definitions. So they should be able to store a JSON schema and then later on upload data that should be validated using this schema. The data is similar (in a way that I want to put it in the same collection), but still different (in the way that it needs their own validation).What is the best way to do this?\nI was planning on having just one collection, make users submit the schemaID they want to use to validate their data, get this schema from the database and use it for validation. For this, my question would be how to properly create a collection of JSON schemas.However, MongoDB uses these schemas for collections, anyway, so should I rather create individual (user defined) collections for each submitted schema so I can take advantage of the built-in validation? If yes, I then have to ask somewhere else how to dynamically create the respective routes in my applicationn:)Thanks in advance.",
"username": "Nicholas_Jones"
},
{
"code": "{\n user: \"Nicholas\",\n schema: { ... }, \n payload: { ... }\n}\n",
"text": "Hi Nicholas, welcome to the community!If you want to use a single collection then you could use a 3rd party schema validation library. e.g., if you’re using Node then there’s AJV which uses JSON Schema. Your document could then looks something like this:Then when your app has data to store, it can fetch the document from MongoDB, validate the data against the schema, and if it passes store it in the payload attribute.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Thanks, I have done it almost exactly like this now.",
"username": "Nicholas_Jones"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | User defined Schemas for Validation | 2021-02-16T12:20:01.668Z | User defined Schemas for Validation | 1,940 |
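A minimal sketch of that flow with AJV, following the { user, schema, payload } shape from the answer (the collection handles are assumed to be passed in):

const Ajv = require("ajv");
const ajv = new Ajv();

// `schemas` stores { user, schema } documents; `data` stores accepted payloads.
async function storePayload(schemas, data, user, payload) {
  const doc = await schemas.findOne({ user });
  const validate = ajv.compile(doc.schema);
  if (!validate(payload)) {
    throw new Error("Validation failed: " + ajv.errorsText(validate.errors));
  }
  await data.insertOne({ user, payload });
}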
null | [] | [
{
"code": "",
"text": "Hello!\nI followed the advice from hereI changed the branch to 4.4.4 instead of the 4.4.0 listed there. The build was successful, though it took a few hours to complete on my 8GB Pi 4.The hiccup I’m running into now is that the process won’t stay running. I can run mongod, and it’ll spit out a bunch of messages, then terminate. If I run as root, everything is fine, but I get a warning about running the process as root user. I also get a warning stating the rlimits are too low, current value 1024.I don’t mind running as root for the time being, but will the rlimits warning become a problem later?",
"username": "Darren_Swan"
},
{
"code": "",
"text": "Hi -Can you provide the text of the rlimit warning? And what does it say right before it terminates?Thanks,\nAndrew",
"username": "Andrew_Morrow"
},
{
"code": "2021-02-17T11:08:58.159-06:00: Soft rlimits too low\n 2021-02-17T11:08:58.159-06:00: currentValue: 1024\n 2021-02-17T11:08:58.159-06:00: recommendedMinimum: 64000\n {\"t\":{\"$date\":\"2021-02-17T13:09:29.594-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}{\"t\":{\"$date\":\"2021-02-17T13:09:29.593-06:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db\"}}",
"text": "Here’s the last line before it terminates\n {\"t\":{\"$date\":\"2021-02-17T13:09:29.594-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}Earlier on in the output, I see this :\n{\"t\":{\"$date\":\"2021-02-17T13:09:29.593-06:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db\"}}So I think it might be a permissions error on that folder. I’ll try changing that, and see what happens. Thanks!",
"username": "Darren_Swan"
}
] | ARM64 Build from Source on Raspberry Pi 4 | 2021-02-17T18:39:21.697Z | ARM64 Build from Source on Raspberry Pi 4 | 3,622 |
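For anyone following along, a hedged sketch of the two fixes implied in this thread - data-directory ownership and the open-files rlimit (the user name and paths are placeholders; raising the limit past the hard limit may also require an entry in /etc/security/limits.conf):

# give the non-root user ownership of the data directory
sudo chown -R pi:pi /data/db

# raise the open-files limit for this shell, then start mongod
ulimit -n 64000
mongod --dbpath /data/db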
null | [] | [
{
"code": "",
"text": "Hello\nI have a collection and in it there is a field with number datatype. I am using Spring Mongo repository class and need to get all the records where the number field contains a specific number. For example, give me all the records where 2 is present in the number field. This will give me all the records with field values - 1234,2345,3421 etc… but not 1345.\nI used @Query and $where clause but getting an error “Query failed with error code 2 and error message ‘$where cannot be applied to a field’”.",
"username": "ragarwal1_N_A"
},
{
"code": "",
"text": "Hi @ragarwal1_N_A,Welcome to MongoDB community.I would say that using $where is not recommended and should be avoided.The optimal way to attack this search issue is by:You should be using Atlas as your hosting (very recommended) and use Atlas search with the regex clause on this field whithin $search stage.\nhttps://docs.atlas.mongodb.com/reference/atlas-search/regex/If you cannot use atlas, you should change your data model to host a field which has all digits in an array : [1, 2,3 …] . Index that field and search it using standard filter in a find query.If none of the above is possible I would suggest using a $regex in a standard filter\nThanks\nPavel",
"username": "Pavel_Duchovny"
},
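A minimal mongosh sketch of the digits-array model suggested above (the collection and field names are assumptions for illustration):

// hypothetical shape: store each digit of the number in an indexed array
db.employee.insertOne({ empId: 1234, empName: "Tom", digits: [1, 2, 3, 4] })
db.employee.createIndex({ digits: 1 })
// all employees whose id contains the digit 2 (this can use the index)
db.employee.find({ digits: 2 })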
{
"code": "the above works fine for empName but not for empId. For example if I want to find all employees which has a 1 in it.\nAs an alternative I tried\n\n> public interface EmployeeRepository extends MongoRepository<Employee,String> {\n> @Query(\" {$or :[{'empName': {'$regex':$0}}, {'empId':{'$eq':$0}}]}\")\n> List<Employee> findAllSearch(String pattern);\n> }\n\nor\n\n> public interface EmployeeRepository extends MongoRepository<Employee,String> {\n> @Query(\" {$or :[{'empName': {'$regex':$0}}, {'empId':$0}]}\")\n> List<Employee> findAllSearch(String pattern);\n> }\nbut none of it works. I have a GUI application where the user can specify pattern search on any of the display column (taking the above example it can be name or id) and hence I am wondering if writing a generic method is possible since the datatype for empName (String) and empId (number) is different.",
"text": "Hello\nThanks for your answer. However the regex approach was tried first and it did not work. I am giving more details{“empId”:1, “empName”:“Tom”} {“empId”:10, “empName”:“Peter”}\nthe empId field is of type number.public interface EmployeeRepository extends MongoRepository<Employee,String> {\n@Query(\" {$or :[{‘empName’: {’$regex’:$0}}, {‘empId’:{’$regex’:$0}}]}\")\nList findAllSearch(String pattern);\n}",
"username": "ragarwal1_N_A"
},
{
"code": " db.employee.aggregate([{ $addFields : {\n empIdStr: {\n $toString: '$empId'\n }\n},{ $match {\n empIdStr : /2/\n}\n}, { $project : {empIdStr : 0} }\n]\ndb.employee.updateMany({},[{$addFields: {\n empIdStr: { $toString : \"$empId\"}\n}}, {$addFields: {\n idArr: {$map : {\n input : {$range : [ 1, {$add : [{ $strLenBytes : \"$empIdStr\" },1]}]}, \n in : {$toDouble : { $substr : [\"$empIdStr\", {$subtract : [\"$$this\",1]}, 1] }}\n }\n }\n}}]);\nidArr : 1db.employee.find({idArr : 2 });\n",
"text": "HI @ragarwal1_N_A,You are correct the $regex is operating on strings.You can do a non-optimal workaround using aggregation $addFields and adding a string variant:This will allow you to search the needed documents but it will do a collection scan and connot use an index.What you might do is to add the number as idArr from now on. If you wish to morph the documents to have it you can use the following Aggregation pipeline in an update:Now you can index idArr : 1 and search easily:Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
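For reference, the index mentioned above can be created in mongosh like this (the collection name is assumed from the thread):

db.employee.createIndex({ idArr: 1 })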
{
"code": "",
"text": "Thanks for the details. I have removed the @Query and used my custom class implementation with the repository class. I used Criteria class along with Query (and orCondition on the fields) for this task.",
"username": "ragarwal1_N_A"
}
] | Querying a number field with pattern search | 2021-02-15T21:39:17.383Z | Querying a number field with pattern search | 16,997 |
null | [
"queries",
"dot-net"
] | [
{
"code": "var query = collection.AsQueryable()\n.Where(p => p.Age > 21)\n.Select(p => new { p.Name, p.Age });\n[\n { $match: { Age: { $gt: 21 } } },\n { $project: { Name: 1, Age: 1, _id: 0 } }\n]\n",
"text": "Hey!So I have read the documentation and learned about the Linq methods that maps to aggragate methods of mondoDB but…I just could not put things together and haven’t seen any examples of that.For example, here, LINQ, we see how to have a IQueryable instance and there’s this:So how do we go one step further and actually query our collection using this query variable?after running the code above I just receive something like:As I said I want to learn how to fetch the documents that satisfy the conditions above.Thanks!",
"username": "Atalay_Han"
},
{
"code": " var query = collection.AsQueryable()\n .Where(p => p.Age > 21)\n .Select(p => p.Name);\nqueryListvar peopleList = query.ToList<string>();\npeopleList.ForEach(s => Console.WriteLine(s));\n",
"text": "Hello @Atalay_Han, welcome to the MongoDB Community forum!Th variable query is of type IMongoQueryable interface. You can apply any of its methods to get the output of your choice. For example, the following will get a List and print its contents (the names of the persons).",
"username": "Prasad_Saya"
},
{
"code": "var peopleList = query.ToList<string>();\n> var peopleList = query.ToList<string>(CancellationToken.None);peopleListvar peopleList = query.ToList<personDTO>(CancellationToken.None);",
"text": "Hey Thank you so much! That made things very clear for me!I tried to apply this but when I codeI get the error:Cannot resolve method ‘ToList()’, candidates are: System.Collections.Generic.List ToList(this MongoDB.Driver.IAsyncCursorSource, System.Threading.CancellationToken) (in class IAsyncCursorSourceExtensions) System.Collections.Generic.List ToList(this System.Collections.Generic.IEnumerable) (in class Enumerable)so instead I coded this:> var peopleList = query.ToList<string>(CancellationToken.None);however this time I got this error:Cannot convert instance argument type ‘{MongoDB.Driver.Linq.IMongoQueryable,System.Collections.Generic.IEnumerable,System.Linq.IQueryable}’ to ‘MongoDB.Driver.IAsyncCursorSource’So do you know how I can solve this? Frankly I would like to find out how to map the peopleList to something likevar peopleList = query.ToList<personDTO>(CancellationToken.None);How can this be done?Thanks!",
"username": "Atalay_Han"
},
{
"code": "var peopleList = query.ToList<string>();var peopleList = query.ToList()\npeopleList.ForEach(p => Console.WriteLine(p));\npeopleList.ForEach(p => Console.WriteLine(p.toJson()));\npeopleList.ForEach(p => Console.WriteLine(p.Name));\npPersonDTOList<PersonDTO> dtoList = new List<PersonDTO>();\nforeach (var p in peopleList)\n{\n dtoList.Add(new PersonDTO() { Name = p.Name, Age = p.Age });\n}\ndtoListPersonDTO",
"text": "var peopleList = query.ToList<string>();Try this, it works fine:The type of p is, I believe, is anonymous. So, you can’t map it to something like PersonDTO.But, you can do this:Now, the dtoList has the PersonDTO objects.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Oh thank you so much for this! I should have thought of a way like this to do it! I need to be more creative I think.",
"username": "Atalay_Han"
}
] | How To Query Collections Using Linq Methods? | 2021-02-15T12:37:38.275Z | How To Query Collections Using Linq Methods? | 25,853 |
null | [
"dot-net",
"atlas-device-sync",
"xamarin"
] | [
{
"code": "",
"text": "Hi, I have wrote a simple code like old Realm Legacy.\nI write a .Net Web App that connect to Sync MongoDbRealm. Collection appear on server, and when I create a new object from client, I view it on server.\nBut if I update or delete any document from server nothing appened on client\nPermission is true/true for read & writeIt’s a bug of 10.1.0?",
"username": "Luigi_De_Giacomo"
},
{
"code": "",
"text": "What do you mean when you say nothing is happening on the client? Do you have some code you can share that demonstrates what you’re trying to do?",
"username": "nirinchev"
},
{
"code": " [MapTo(\"_partition\")]\n public string _partition { get; set; }\n\n [MapTo(\"nome\")]\n public string Nome { get; set; }\n\n [MapTo(\"cognome\")]\n public string Cognome { get; set; }\n\n [MapTo(\"isDone\")]\n public bool IsDeleted { get; set; }\n\n [MapTo(\"timestamp\")]\n public DateTimeOffset Timestamp { get; set; } = DateTime.Now;\n}\nApp app = App.Create(new AppConfiguration(Settings.AppId)\n{\n MetadataPersistenceMode = MetadataPersistenceMode.NotEncrypted,\n LocalAppName = \"TestNewRealm\",\n LocalAppVersion = \"1.0.0\"\n});\n\nUser user = await app.LogInAsync(Credentials.Anonymous());\n\nvar config = new SyncConfiguration(\"test\", user);\nRealm realm = await Realm.GetInstanceAsync(config);\n\nreturn realm;\nvar vRealmDb = await GetRealm();\n\nTransaction trans = vRealmDb.BeginWrite();\n\nCliente cliente = new Cliente\n{\n Nome = nome,\n Cognome = cognome\n};\n\nvRealmDb.Add(cliente); \n\ntrans.Commit();\n\nreturn Redirect(nameOf(Index));\nvar list = vRealmDb.All<Cliente>();\n\nreturn View(list);\n",
"text": "Hi! this is my code (in Legacy Realm works fine)With MongoDbRealm, when I run NewCliente(…), a new Record is added in MongoCollection and it’s visible from view Index.\nIf I add record on MondoDB collection, Index() not show new recordnamespace StudioProTest.Models\n{\npublic class Cliente : RealmObject\n{\n[PrimaryKey]\n[MapTo(\"_id\")]\npublic string ID { get; set; } = Guid.NewGuid().ToString();}Controller.csasync public static Task GetRealm()\n{}public async Task NewCliente(string nome, string cognome)\n{}public async Task Index()\n{\nvar vRealmDb = await GetRealm();}Index.cstml:@model IEnumerable<StudioProTest.Models.Cliente>",
"username": "Luigi_De_Giacomo"
},
{
"code": "",
"text": "Is there anything in the server logs that may look like an error?",
"username": "nirinchev"
},
{
"code": "",
"text": "No, this is log console:\nConnection[1]: Session[1]: client_reset_config = false, Realm exists = true, async open = false, client reset = false\nConnection[1]: Connected to endpoint ‘34.241.208.56:443’ (from ‘192.168.1.167:63069’)When I update or delete a document in the collection Cliente on server, client not receive any change!",
"username": "Luigi_De_Giacomo"
},
{
"code": "",
"text": "I observe that client update his data after that it changes any record or after a restart of web application.",
"username": "Luigi_De_Giacomo"
},
{
"code": "realm.Refresh()",
"text": "Can you try calling realm.Refresh() before querying the Realm?",
"username": "nirinchev"
},
{
"code": "",
"text": "It’s works!!!\nBut Legacy Realm not required to call Refresh.\nIt’s also necessary for Xamarin App? The method is light for CPU/Processor?",
"username": "Luigi_De_Giacomo"
},
{
"code": "GetInstanceGetInstanceAsync",
"text": "When working with Realm instances on background threads (such as in console app or a web server), you’ll need to manually call Refresh to force the Realm to update itself. Opening a Realm instance for every request should be fine and it should automatically get refreshed, but you’re hitting a combination of issues:The combination of these means that you may end up with a stale Realm that needs to be manually refreshed. While we will fix the bug described in 2., I would strongly recommend that you dispose your Realm instances when you’re no longer using them. This will free native resources predictably and, more importantly, will prevent explosive file size growth.Regarding Xamarin - this should not happen there because Xamarin apps have a main thread which allows Realm to automatically refresh itself in the background. If you open Realm instances on background threads in Xamarin apps, the recommendation is, again, to dispose of them as soon as you’re done using them.",
"username": "nirinchev"
},
{
"code": "",
"text": "Ok perfect!Thanks\nLuigi",
"username": "Luigi_De_Giacomo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm 10.1.0 not sync from server to client | 2021-02-17T02:42:27.638Z | Realm 10.1.0 not sync from server to client | 5,647 |
null | [
"aggregation"
] | [
{
"code": "[\n{\n \"t\": {\n \"$date\": \"2021-01-01T00:00:01.000Z\"\n },\n \"id\": \"1234\",\n \"a\": {\n \"y\": \"y1\",\n \"x\": \"x1\"\n },\n \"b\": {\n \"c\": \"c1\",\n \"d\": {\n \"dd\":\"dd1\" \n }\n }\n},\n{\n \"t\": {\n \"$date\": \"2021-01-01T00:00:02.000Z\"\n },\n \"id\": \"1234\",\n \"a\": {\n \"y\": \"y2\"\n },\n \"b\": {\n \"c\": \"c1\",\n \"d\": {\n \"ee\":\"ee1\" \n }\n }\n},\n{\n \"t\": {\n \"$date\": \"2021-01-01T00:00:03.000Z\"\n },\n \"id\": \"1234\",\n \"b\": {\n \"d\": {\n \"ee\":\"ee2\" \n }\n }\n]\n[{\n \"id\": \"1234\",\n \"a\": {\n \"x\": \"x1\",\n \"y\": \"y2\"\n },\n \"b\": {\n \"c\": \"c1\",\n \"d\": {\n \"dd\":\"dd1\" ,\n \"ee\":\"ee2\" \n }\n }\n}]\ndb.collection.aggregate([\n {\n $sort: {\n \"t\": 1\n }\n },\n {\n $match: {\n \"id\": \"1234\"\n }\n },\n {\n $group: {\n _id: null,\n result: {\n $mergeObjects: \"$$ROOT\"\n }\n }\n }\n])\n[\n {\n \"_id\": null,\n \"result\": {\n \"_id\": ObjectId(\"5a934e000102030405000002\"),\n \"a\": {\n \"y\": \"y2\"\n },\n \"b\": {\n \"d\": {\n \"ee\": \"ee2\"\n }\n },\n \"id\": \"1234\",\n \"t\": ISODate(\"2021-01-01T00:00:03Z\")\n }\n }\n]\n",
"text": "I am trying to use aggregate to get latest and complete document , but my subDocument is not incomplete.this is my document list example.And I expected result isSo I try to use aggregate like thisBut actual result is this , because aggregate just get first level keys.So How to meet my expectations?\nI have no idea.\nThanks!!",
"username": "_Steven"
},
{
"code": "[{$match: {\n id: '1234'\n}}, {$sort: {\n t: 1\n}}, \n{$group: {\n _id: null,\n a: {$mergeObjects : \"$a\"},\n b : { $mergeObjects : \"$b\" },\n d : { $mergeObjects : \"$b.d\" },\n top : { $mergeObjects : \"$$ROOT\"}\n}},\n {$project: {\n _id : \"$top._id\",\n t : \"$top.t\",\n id : \"$top.id\",\n a : 1,\n b : {\n c :1,\n d : \"$d\"\n }\n}}] \n",
"text": "Hi @_Steven,Welcome to MongpDB Community.This behaviour is expected as mergeObjects can only merge the top level fields and cannot understand how to merge low level nested ones.If you need to merge them you need to do each one separately:Thanks,\nPavel",
"username": "Pavel_Duchovny"
}
] | How to get latest and complete document? | 2021-02-17T12:09:34.603Z | How to get latest and complete document? | 1,756 |
null | [] | [
{
"code": "{\n $match: { name: { $in: [ /tom/i, /harry/i ] }}\n}\n{\n $match: { name: { $in: [ re.compile('/tom/i'), re.compile('/harry/i') ] }}\n}\n",
"text": "Hi Team,I faced issue while using regex pattern in $in Operator using aggregation.Above example works well with mongo compass but the same using in python wont work shows compile error near / .I’ve tried using re package for this using pymongo in python base, but the aggregation query formed like below,Please suggest what is that issue or I’m missing something here.Regards,\nJitendra.",
"username": "Jitendra_Patwa"
},
{
"code": "[\n {\n $match: { name: { $in: [ /tom/i, /harry/i ] }}\n }\n]\n[\n {\n '$match': {\n 'name': {\n '$in': [\n re.compile(r\"tom(?i)\"), re.compile(r\"harry(?i)\")\n ]\n }\n }\n }\n]\n",
"text": "You may always use Compass and export in the language of choice.When I exportto python3 it givesI have not tested the above, so I hope it works.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Use regex in $in operator using aggregation | 2021-02-17T12:08:19.320Z | Use regex in $in operator using aggregation | 2,050 |
null | [
"text-search"
] | [
{
"code": "root(object)\n items(object)\n item(array)\n item[index](object)\n itemname(string)\n itenmprice(string)\n",
"text": "Hi,I have the following document structure:I’m looking for a way to search inside of the item object for names and prices based on partial matches and return the item object back, text search isn’t supported for the needed locale so i am left with regex i think.i’ve created a collation index for itemname and tried using regex and string values for find but i can’t seem to make it work with the input i provide.also what’s the difference between locale and locale@collation=search?",
"username": "John_Smith"
},
{
"code": "",
"text": "Anyone has any ideas?",
"username": "John_Smith"
},
{
"code": "",
"text": "Please post sample input documents and sample of the expected results.Coming up with an answer for a question like that implies that we have to have documents in our local mongod instance. Entering documents with the correct schema takes too much time for most of us that are here on our free time.",
"username": "steevej"
}
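For what it's worth, a hedged mongosh sketch of the kind of case-insensitive partial match the question describes, assuming the nested path from the posted structure (the collection name is a placeholder):

// partial, case-insensitive match on the nested item name
// note: the $regex implementation is not collation-aware
db.collection.find({ "items.item.itemname": { $regex: "partialText", $options: "i" } })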
] | Best way to work with partial text search | 2021-02-09T19:20:12.000Z | Best way to work with partial text search | 2,788 |
null | [] | [
{
"code": "",
"text": "In this section, Julia went straight to explaining in browser IDE, and shell.\nYes she mentioned briefly that it’s a space to practice what we learn and get graded.\nI looked and cannot find where to access it, so it got me a bit confused continuing the video lecture.\nIs it a common term IDE, does it stand for something?\nWhat is shell and how does it work with mongoDB, is it a linux thing I think I read about it before.\nWould be great for a bit clarity on these two questions (IDE and shell) from anyone, thanks.\nI think I’ll just continue with the next lecture to finish it quickly.",
"username": "yogiHalim"
},
{
"code": "",
"text": "Is it a common term IDE, does it stand for something?Yes. Integrated Development EnvironmentI looked and cannot find where to access itAs explained in the lesson where the IDE is presented, it is in the web browser as part of the course material. In my version of the course, the following is indicated in this lesson:In this course you will find IDE labs in the following chapters:",
"username": "steevej"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Lecture Introducing IDE in-browser | 2021-02-17T10:33:59.151Z | Lecture Introducing IDE in-browser | 3,107 |
null | [] | [
{
"code": "",
"text": "Hi, we have an issue about on-premise mongodb at linux os. the PROD DB has been deleted and we want to get it back. we have no back-up. could you please help us, if there is a anyway to undo delete operation.p.s: the deletion process is done using noSQL manager app and after deletion process, we did not do any operation.thank you",
"username": "Sema_ELBAY"
},
{
"code": "",
"text": "Hi Sema,Sorry to hear about your issue. Unfortunately, without some form of backup, there will be no way to restore this data.I would check with your storage team to see if the data volume itself has been backed-up anywhere. If so, then this could be restored.Thanks,\nMark",
"username": "Mark_Baker-Munton"
}
] | Recover deleted database | 2021-02-15T18:47:29.442Z | Recover deleted database | 2,967 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.4.4 is out and is ready for production deployment. This release contains only fixes since 4.4.3, and is a recommended upgrade for all 4.4 users.\nFixed in this release:",
"username": "Jon_Streets"
},
{
"code": "",
"text": "Thanks, @Jon_Streets for sharing the release note for the new MongoDB update!",
"username": "Soumyadeep_Mandal"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.4.4 is released | 2021-02-16T17:24:51.877Z | MongoDB 4.4.4 is released | 2,433 |
null | [
"production",
"rust"
] | [
{
"code": "mongodb",
"text": "The MongoDB Rust driver team is pleased to announce version 1.2.0 of the official Rust driver for MongoDB. You can read more about the release on Github, and the release is published on https://crates.io under the package name mongodb . If you run into any issues, please file an issue on JIRA.Thank you, and we hope you enjoy using the driver!",
"username": "isabelatkinson"
},
{
"code": "",
"text": "Thanks, @isabelatkinson for sharing the new release note of MongoDB Rust Driver!",
"username": "Soumyadeep_Mandal"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Rust Driver 1.2.0 Released | 2021-02-16T22:45:25.227Z | MongoDB Rust Driver 1.2.0 Released | 3,559 |
null | [
"monitoring"
] | [
{
"code": "",
"text": "We have recently moved to Mongo and deployed all of our microservices in production. We have about 7 .NET microservices and 1 Node.js services which uses Mongo. We are using Mongo for logging and we write about 100,000 logs everyday in the database. After moving to production, I am seeing that RAM usage on the server is increasing every day. On first day it was at 10% and just in 3 days, it is using 35% of the RAM.I am wondering if we are doing anything wrong. Do we need to close the connection with Mongo every time we access it from out microservices ? Can it be happening because of some types of locks ?When I looked in detail, it is says “RAM Used” about 15GB and “RAM Cache + Buffer” is also about 15 GB.Please guide as this is prod, I am worried that applications might get very slow if the RAM usage keeps increasing like this.Thanks\nJW",
"username": "Jason_Widener1"
},
{
"code": "",
"text": "Hi Jason,If you are looking at the RAM utilisation on the server that MongoDB is hosted then you can expect the RAM allocated to MongoDB to grow up until the wiredTigerCacheSizeGb threshold. If this hasn’t been set manually, then this threshold will be 50% of system memory - 1GB.Thanks,\nMark",
"username": "Mark_Baker-Munton"
},
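If it helps, a quick mongosh sketch for checking how much of that threshold WiredTiger is actually using (the fields shown are part of the serverStatus output):

// configured vs. currently used cache, in bytes
const cache = db.serverStatus().wiredTiger.cache
printjson({
  configured: cache["maximum bytes configured"],
  inUse: cache["bytes currently in the cache"]
})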
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB RAM Usage Increasing Too Much | 2021-02-17T04:54:40.195Z | MongoDB RAM Usage Increasing Too Much | 11,016 |
[
"queries",
"monitoring"
] | [
{
"code": "\"queryPlanner\"\"executionStats\"\"allPlansExecution\"",
"text": "The document says: The default verbosity for explain is allPlansExecution like below.The possible modes are:But in the DBA Practice exam, it says the correct answer is queryPlanner. Can someone explain to me this confusion?\nimage985×488 32.7 KB\n",
"username": "Yunus_UYANIK"
},
{
"code": "explain()verbosity",
"text": "Hello @Yunus_UYANIK, welcome to the MongoDB Community forum!The link you had provided (and referred above) is for the explain database command.The exam is referring to the db.collection.explain method. The documentation says the same thing as that of the exam problem’s answer. The explain() method’s verbosity parameter description:verbosity\tstring\t\nOptional. Specifies the verbosity mode for the explain output. The mode affects the behavior of explain() and determines the amount of information to return. The possible modes are:“queryPlanner” (Default)\n“executionStats”\n“allPlansExecution”",
"username": "Prasad_Saya"
},
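A short mongosh sketch contrasting the two defaults (the collection and filter are hypothetical):

// helper method: defaults to "queryPlanner"
db.movies.explain().find({ year: 2000 })
// database command: defaults to "allPlansExecution"
db.runCommand({ explain: { find: "movies", filter: { year: 2000 } } })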
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What is the explain's default option? | 2021-02-17T00:20:41.998Z | What is the explain’s default option? | 1,813 |
|
null | [
"aggregation",
"java"
] | [
{
"code": "$addField: \n{\n transientloadIndex: {\n $cond: [ \n { $gte: [ \"$batchWindowSessionCount\", 3 ] },\n 1, \n 0 \n ]\n }\n}\n Aggregates.addFields(new Field(\"transientloadIndex\", ConditionalOperators.when(Criteria.where(\"$batchWindowSessionCount\").gte(batchSize)).then(1).otherwise(0))),\n",
"text": "Need help with java driver aggregation pipeline stage to add a field based on a field derived in earlier stages..\n.\n..\n.\n.Getting following error with above:com.mongodb.MongoCommandException: Command failed with error 40180 (Location40180): ‘Invalid $addFields :: caused by :: an empty object is not a valid value. Found empty object at path transientloadIndex’ on server localhost:27017. The full response is {“ok”: 0.0, “errmsg”: “Invalid $addFields :: caused by :: an empty object is not a valid value. Found empty object at path transientloadIndex”, “code”: 40180, “codeName”: “Location40180”}Can someone please help with what I am missing here? Thanks!",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "Aggregates.addFields(new Field(“transientloadIndex”, ConditionalOperators.when(Criteria.where(\"$batchWindowSessionCount\").gte(batchSize)).then(1).otherwise(0))),Aggregates.addFieldsConditionalOperators.when(Criteria.where...",
"text": "Aggregates.addFields(new Field(“transientloadIndex”, ConditionalOperators.when(Criteria.where(\"$batchWindowSessionCount\").gte(batchSize)).then(1).otherwise(0))),Hello @Abhishek_Kumar_Singh, the method Aggregates.addFields is from the Java Driver APIs, but you are also using the Spring Data MongoDB API ConditionalOperators.when(Criteria.where... within it. That will not work. You need to use only one of them for writing the query.Some references about using the aggregation queries with Java driver:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to use aggregate addFields using Java driver? | 2021-02-17T04:12:41.302Z | How to use aggregate addFields using Java driver? | 10,047 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "Sync was working yesterday. Today I’m getting the error shown below. I’m not sure where the 4CC… value is coming from. I have the expected partition value in my realm configurationuser.configuration(partitionValue: “0C946EBD-D806-4408-86FE-9EB9A2611D6E”)and in the uuidFamily of all of my local Realm objects.uuidFamily = 0C946EBD-D806-4408-86FE-9EB9A2611D6E;Any suggestions?I’ve tried querying for “4CCE48C9-02D0-4DB7-A2D3-ECE55F268454” on all of my local collections uuidFamily fields and found nothing. Also found nothing searching for it on MongoDB Compass. Any suggestions?Using iOS and Swift.\nRealm 10.5.2\nRealmDatabase 10.4.0In the cloud, my clusters uses Version 4.4.3.Log Error:failed to validate upload changesets: SET instruction had incorrect partition value for key “uuidFamily” { expectedPartition: {0C946EBD-D806-4408-86FE-9EB9A2611D6E}, foundPartition: 4CCE48C9-02D0-4DB7-A2D3-ECE55F268454 } (ProtocolErrorCode=212)\nPartition:0C946EBD-D806-4408-86FE-9EB9A2611D6E\nWrite Summary:\n{\n“JournalsDB”: {\n“inserted”: [\n“DE6D218A-A496-4F8F-8F03-FF868DFB8BAA”,\n“25B7388D-FE41-412D-8DD0-B9B55E347107”,\n“831811E3-8938-4EAA-8895-8F86C7314DE4”,\n“B48E8CD9-51D7-45D7-B1FB-1CFACDD8A505”, …",
"username": "Adam_Ek"
},
{
"code": "",
"text": "Might be worth using Realm Studio to check the contents of the realm(s) in your frontend app. If you’re developing on iOS then you can find the instructions here on how to do that.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "4CCE48C9By any chance, is the 4CC a users uid?",
"username": "Jay"
},
{
"code": "Realm.Configuration.defaultConfiguration.fileURL",
"text": "4CCE48C9-02D0-4DB7-A2D3-ECE55F268454 looks like the id of an application/device in xcode simulator. Example:/Users/user.name/Library/Developer/CoreSimulator/Devices/5241DED7-3B5E-42F0-8797-CD91789330FA/data/Containers/Data/Application/F254B6DD-DB91-4D34-8B66-673E27441067/Documents/You could try printing the line below at the beginning of your app and see if the id’s in the output match the 4CC number:Realm.Configuration.defaultConfiguration.fileURL",
"username": "Mansoor_Omar"
}
] | BadChangeSet: failed to validate upload changesets: SET instruction had incorrect partition value for key "uuidFamily" | 2021-02-13T22:24:35.480Z | BadChangeSet: failed to validate upload changesets: SET instruction had incorrect partition value for key “uuidFamily” | 2,716 |
null | [
"mongoose-odm"
] | [
{
"code": "",
"text": "I’m trying to populate object that also contains reference to another but i don’t need that.\nWhile doing so getting error \" MissingSchemaError: Schema hasn’t been registered for model \"\nPlease help me resolving this problem.",
"username": "Vinod_S"
},
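Given the mongoose-odm tag, this error usually means a referenced model was never registered before populate was called; a minimal sketch with hypothetical Author/Book models:

const mongoose = require('mongoose');

// the referenced model must be registered, or populate('author')
// throws MissingSchemaError
const authorSchema = new mongoose.Schema({ name: String });
mongoose.model('Author', authorSchema);

const Book = mongoose.model('Book', new mongoose.Schema({
  title: String,
  author: { type: mongoose.Schema.Types.ObjectId, ref: 'Author' },
}));

// select only the fields you need from the referenced document
// (this returns a query; await it or call .exec())
Book.find().populate('author', 'name');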
{
"code": "",
"text": "Hi Vinod,Thanks for posting your first question and welcome to the community!We will need some more information around this issue to help you with this.\nBased on the error, it sounds like you have a data model defined on your client but the Schema on your Realm app has not been defined yet or is inconsistent with your coded data model.Can you please advise if you have created a schema yet?Please see the two approaches linked here for configuring your data model.\nHave you gone through these steps yet? If so, which approach did you take?What were you trying to do when this error appeared?Regards\nManny",
"username": "Mansoor_Omar"
}
] | MissingSchemaError | 2021-02-16T07:44:01.061Z | MissingSchemaError | 2,161 |
[
"atlas-functions"
] | [
{
"code": "exports = function(changeEvent) {\n\nconsole.log(\"changeEvent.fullDocument \", JSON.stringify(changeEvent.fullDocument)); \n\nconst mongodb = context.services.get(\"mongodb-atlas\");\nconst masterHunts = mongodb.db(\"HuntMobDB\").collection(\"Hunt\");\n\nvar newHunt = { \"_id\": changeEvent.fullDocument._id + \"_shared\",\n \"_partition\": changeEvent.fullDocument.code,\n \"title\": changeEvent.fullDocument.title,\n \"text\": changeEvent.fullDocument.text,\n \"scheduled\": changeEvent.fullDocument.scheduled,\n \"date\": changeEvent.fullDocument.date,\n \"photo\": changeEvent.fullDocument.photo,\n \"code\": changeEvent.fullDocument.code,\n \"maxUsers\": changeEvent.fullDocument.maxUsers,\n \"tasks\": changeEvent.fullDocument.tasks,\n \"rules\": changeEvent.fullDocument.rules\n};\n\nmasterHunts.updateOne({_id: changeEvent.fullDocument._id + \"_shared\"}, newHunt, {upsert: false});\n};\n",
"text": "Is there an obvious method I’m missing?I have these Read only realms for a scavenger hunt object with embedded objects of Tasks, Rules and Hunters.I have a hunt type object in the user realm that creates the Read only Hunt and the function will update the Tasks and Rules when those are added.However, unlike creating a trigger for when it is updated in the User realm, copying that change to the shared Read only realm, I need to create and add an embedded object to the hunt when someone joins it.The Hunter embedded object need to be added (.append in Swift) to Hunt.hunters when someone joins.Am I missing something obvious? How do I create a function that will add an embedded object to a list in an existing Realm object?This seems like it should be a straight forward, typical use of Realm and functions, but nothing in the documentation or examples seem to show this functionality.Or am I wrong and this isn’t/shouldn’t be possible?Here is a visualization of my data. Blue is the main object, yellow are embedded objects.Also, a trigger observes a database change in the private Hunt object in the user partition and triggers this function (which is working for the Tasks and Rules lists):Thanks.–Kurt",
"username": "Kurt_Libby1"
},
{
"code": "if (user.conversations && user.conversations.length > 0) {\n for (i = 0; i < user.conversations.length; i++) {\n let membersToAdd = [];\n if (user.conversations[i].members.length > 0) {\n for (j = 0; j < user.conversations[i].members.length; j++) {\n if (user.conversations[i].members[j].membershipStatus == \"User added, but invite pending\") {\n membersToAdd.push(user.conversations[i].members[j].userName);\n user.conversations[i].members[j].membershipStatus = \"Membership active\";\n conversationsChanged = true;\n }\n }\n } \n if (membersToAdd.length > 0) {\n userCollection.updateMany({userName: {$in: membersToAdd}}, {$push: {conversations: user.conversations[i]}})\n .then (result => {\n console.log(`Updated ${result.modifiedCount} other User documents`);\n }, error => {\n console.log(`Failed to copy new conversation to other users: ${error}`);\n });\n }\n }\n}\n",
"text": "I think I’m doing something similar in my chat app.My User object/document contains a list of all the conversations that the user has been invited too.When a user creates a new conversation, the frontend app adds the conversation details (including the list of members) to just that user’s User object. When that change is synced, a Realm trigger runs a function which adds the conversation to the User document for all of the other users that are part of that new conversation.Details on the data/partitioning model can be found here and other details on the app here.",
"username": "Andrew_Morgan"
},
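For the specific ask in this thread, appending one embedded Hunter to Hunt.hunters from a function, a minimal sketch using $push (collection names are taken from the thread; the hunter argument shape is an assumption):

exports = async function(huntCode, hunter) {
  const hunts = context.services.get("mongodb-atlas").db("HuntMobDB").collection("Hunt");
  // append the embedded object to the hunters list of the matching hunt
  return hunts.updateOne({ code: huntCode }, { $push: { hunters: hunter } });
};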
{
"code": "membersToAddhunterToAdd.push(hunt.hunters)",
"text": "Yes, thanks.I think what you do here with membersToAdd is what I’m looking for.I can, I think, do it like this:User joins a hunt.\nadd huntlet object and add it to user.hunts in “user=user.id” partitionWhen that sync is changed , a Realm trigger runs a function which\ncreates a Hunter object in the “hunt=hunt.code” partition\nand then adds that withhunterToAdd.push(hunt.hunters)There is a lot going on in that example with the nested for loops.Maybe there is some room here to add some more simple examples of Realm functions? I spoke with Drew about this last week, but the lack of straight forward examples (do this in the SDK, mimic the same behavior in a realm function) has made it pretty difficult for a novice dev like myself to get going with Realm, which I know is the opposite of the desired effect.Maybe even just more commenting in the examples would be a good place to start.–Kurt",
"username": "Kurt_Libby1"
}
] | Adding realm objects to lists in Realm Functions | 2021-02-14T18:17:41.058Z | Adding realm objects to lists in Realm Functions | 3,115 |
|
null | [] | [
{
"code": "",
"text": "Hi,I’m looking to download the MongoDB 3.6 and 4.4 ARM64 builds packaged in a tgz.But such packages is not available for download at Download MongoDB Community Server | MongoDBDo you have any plans to provide them?Moreover, I’m evaluating having MongoDB 3.6 and 4.4 ARM64 as an RPM dependency for my application.Again, looking at your yum repo at MongoDB Repositories, you only provide RPMs for x64 architectures.What is the status of Centos7/8 ARM64 support, and do you plan to provide RPMs for the ARM64 architecture?Thanks you very much!",
"username": "Evan_L"
},
{
"code": "",
"text": "hi Evan,\nThanks for the questions!We have Ubuntu ARM64 builds on the Current releases & packages page and we are planning to provide releases for RHEL 8 ARM64 builds in MongoDB 4.4.I’m not aware of plans to provide RHEL 7 ARM64 support in MongoDB 4.4, or for RHEL 7/8 ARM64 support in MongoDB 3.6.I hope this answers your questions.Cheers,\nJon",
"username": "Jon_Streets"
},
{
"code": "",
"text": "hi Evan,\njust to follow up. We’ve released MongoDB 4.4.4 today, and this has support for both Centos 8 ARM 64 and Amazon Linux 2 ARM 64 architectures. I hope this will help with your evaluation.\nCheers\nJon",
"username": "Jon_Streets"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 3.6 and 4.4 ARM64 builds | 2021-01-20T13:11:05.745Z | MongoDB 3.6 and 4.4 ARM64 builds | 7,701 |
null | [] | [
{
"code": "",
"text": "Lucene has 2 analyzers for Japanese full text searchAre these supported by Mongo Atlas?",
"username": "Supriya_Bansal"
},
{
"code": "lucene.cjk",
"text": "Hi @Supriya_Bansal,MongoDB Atlas search analysers support lucene.cjk which is good for Japanese:Please let me know if you have any additional questions.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
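For illustration, a minimal Atlas Search index definition using that analyzer (the field name is a placeholder):

{
  "mappings": {
    "dynamic": false,
    "fields": {
      "title": { "type": "string", "analyzer": "lucene.cjk" }
    }
  }
}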
{
"code": "",
"text": "Thank you @Pavel_Duchovny\nJapanese text has a mix of different phonetics such as Kanji, Kana, Kuromoji. Would CJK suffice for all these?",
"username": "Supriya_Bansal"
}
] | Analyzers for Japanese text | 2021-02-02T19:53:41.388Z | Analyzers for Japanese text | 2,317 |
null | [
"atlas-device-sync",
"indexes"
] | [
{
"code": "",
"text": "In my Realm app my collection is partitioned by user id.Currently, I only have the default index on the _id field, but no index on the _partition field.My question:Shouldn’t I add another index on the _partition field to make it perform well as my user count starts to increase?Thanks,\nThomas",
"username": "Thomas_Hansen"
},
{
"code": "_partition",
"text": "Hi Thomas, welcome to the forum!Adding an index to the _partition attribute would speed up your initial sync (when your app opens a realm for the first time).",
"username": "Andrew_Morgan"
},
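In mongosh that index looks like this (the collection name is assumed):

db.MyCollection.createIndex({ _partition: 1 })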
{
"code": "",
"text": "Hi AndrewAh, I see. Thank you!",
"username": "Thomas_Hansen"
}
] | Index on _partition field? | 2021-02-15T19:47:18.601Z | Index on _partition field? | 1,445 |
null | [
"spark-connector"
] | [
{
"code": "",
"text": "First of all, i really sorry if my question it is simply for this forum. I am new in machine learning community and I am heaving to figure solutions during my Phd’s project flows.I am trying to integrate Spark with my MongoDb Cluster in Atlas and unfortunately, I was unable to do it until now.I’ve cloned the github repository and run sbt check, but until now I did not realize where is the jars files or what I supposed to do.I have already the jars files in .ivy2 directory from the haddop-mongoDb connector but it does not working too. Even I am trying to config during launching SparkSession.If someone could help me i really thanks",
"username": "Marcos_Alberto_Perei"
},
{
"code": "config(\"spark.jars.packages\", \"org.mongodb.spark:mongo-spark-connector_2.12:3.0.0\")from pyspark.sql import SparkSession\n\nspark = SparkSession.\\\n\nbuilder.\\\n\nappName(\"pyspark-notebook2\").\\\n\nmaster(\"spark://spark-master:7077\").\\\n\nconfig(\"spark.executor.memory\", \"1g\").\\\n\nconfig(\"spark.mongodb.input.uri\",\"mongodb://mongo1:27017,mongo2:27018,mongo3:27019/Stocks.Source?replicaSet=rs0\").\\\n\nconfig(\"spark.mongodb.output.uri\",\"mongodb://mongo1:27017,mongo2:27018,mongo3:27019/Stocks.Source?replicaSet=rs0\").\\\n\nconfig(\"spark.jars.packages\", \"org.mongodb.spark:mongo-spark-connector_2.12:3.0.0\").\\\n\ngetOrCreate()\n",
"text": "It might be easier to just use the compiled Spark Connector that is already available in Maven.config(\"spark.jars.packages\", \"org.mongodb.spark:mongo-spark-connector_2.12:3.0.0\")Something like this works in Python -Here is an example of using the Spark Connector with MongoDBLearn how to leverage MongoDB data in your Jupyter notebooks via the MongoDB Spark Connector and PySpark. We will load financial security data from MongoDB, calculate a moving average, and then update the data in MongoDB with the new data.",
"username": "Robert_Walters"
}
] | Help using mongo-spark-2.2.x with Atlas | 2021-02-12T19:22:08.637Z | Help using mongo-spark-2.2.x with Atlas | 2,887 |
null | [
"data-modeling",
"atlas-device-sync"
] | [
{
"code": "",
"text": "Hi there.I’m trying to figure out the best way to apply write permission to a collection.\nSay i have a collection ‘CarAd’:\nI use the partition ‘car=public’, and everyone has read-rights because i want everyone to be able to se my CarAd.\nI don’t want anyone but the creator of the CarAd to be able to edit the data…\nI’ve tried toying around triggers and functions, but have not been succesfull.So my question is:\nIs there anyway to determine if a user has write-rights based on another field value - say ‘creator_id’ or such?I realize that i could create two collection types with different names, and use a trigger to copy inserted object into the public realm, but i would like to avoid the redundancy and it just seems wrong somehow.Hope this makes sense",
"username": "Rasmus_B"
},
{
"code": "",
"text": "I think i found the answer here (or at least a solution that will do):I’ll accept redundancy, and copy object to public realm with a new id. When updating the object, i’ll use a trigger to update the object in the public realm.",
"username": "Rasmus_B"
},
{
"code": "{\n \"%%true\": {\n \"%function\": {\n \"arguments\": [\n \"%%partition\"\n ],\n \"name\": \"canWritePartition\"\n }\n }\n}\ncanWritePartitioncontext.userconst splitPartition = partition.split(\"=\");\nif (splitPartition.length == 2) {\n carID = splitPartition[1];\n} else {\n console.log(`Couldn't extract the partition key/value from ${partition}`);\n return false;\n}\n",
"text": "For your sync write permissions, you can specify a function like this:and then in canWritePartition you can access the requesting user from context.user.If you set the partition value to “car=123456789” then the function can get to the car’s ID like this:You can then fetch the document for the car and check whether it’s owned by this user.",
"username": "Andrew_Morgan"
}
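A sketch of what that canWritePartition function could look like, assuming the CarAd collection and creator_id field from the question (the database name is a placeholder):

exports = async function(partition) {
  const splitPartition = partition.split("=");
  if (splitPartition.length !== 2) return false;
  const carID = splitPartition[1];
  const carAds = context.services.get("mongodb-atlas").db("mydb").collection("CarAd");
  const carAd = await carAds.findOne({ _id: carID });
  // only the creator of the ad may write to its partition
  return carAd != null && carAd.creator_id === context.user.id;
};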
] | WritePermission based on field value | 2021-02-12T17:48:28.041Z | WritePermission based on field value | 2,082 |
null | [
"atlas-triggers"
] | [
{
"code": "exports = async function(changeEvent) {\n console.log(changeEvent.operationType);\n \n const db = context.services.get(\"project\").db(\"test\");\n const worktimeCollection = db.collection(\"Worktime\");\n const teamCollection = context.services.get(\"project\").db(\"test\").collection(\"Team\");\n \n var worktime = JSON.parse(JSON.stringify(changeEvent.fullDocument));\n const docId = changeEvent.documentKey._id;\n \n if (changeEvent.operationType == \"insert\") {\n const team = await teamCollection.findOne({ _id: new BSON.ObjectId(worktime.teamId) });\n team.membersAdmin.forEach(userId => {\n if (userId !== worktime.createdBy) {\n worktime._id = new BSON.ObjectId();\n worktime._partition = `user=${userId}`;\n worktime._wid = docId;\n worktimeCollection.insertOne(worktime).then(() => {\n console.log('Document added');\n });\n }\n });\n }\n \n if (changeEvent.operationType == \"update\") {\n worktimeCollection.updateMany({ _wid: docId }, worktime, { upsert: true }).then(() => {\n console.log('Document updated');\n });\n }\n \n if (changeEvent.operationType == \"delete\") {\n worktimeCollection.deleteMany({ _wid: docId }).then(() => {\n console.log('Document deleted');\n });\n }\n};\n",
"text": "hi,I stuck in a infinte loop with my trigger. does anyone know how to prevent from getting in the infinite loop?I have a worktime document in a user realm (partition: user=userId). A admin-member should be allowed to edit the worktime from other user. For that I want to copy or update the worktime into their realms. The trigger runs on the Worktime-Collection so obviously the trigger ends in a loop…Here my trigger function:thank you",
"username": "rouuuge"
},
{
"code": "",
"text": "I’ve hit a similar problem in the past.What I did was to include some extra information in the document so that I could tell that the write had been made by the trigger.",
"username": "Andrew_Morgan"
},
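One way to sketch that guard, assuming a hypothetical marker field named _copiedByTrigger:

exports = async function(changeEvent) {
  const doc = changeEvent.fullDocument;
  // ignore events produced by this trigger's own writes
  if (doc && doc._copiedByTrigger) return;

  const copy = Object.assign({}, doc, { _id: new BSON.ObjectId(), _copiedByTrigger: true });
  const coll = context.services.get("mongodb-atlas").db("test").collection("Worktime");
  await coll.insertOne(copy);
};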
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Prevent trigger from infinite loop | 2021-02-13T09:49:48.601Z | Prevent trigger from infinite loop | 3,836 |
null | [
"dot-net"
] | [
{
"code": "parent objectchild objectschild objectsparent object var query = _testResourcesDb.Collection.AsQueryable()\n .Where(x => x.TestId == request.TestId)\n .Select(x => x.Logs);\n\n var orderedQuery = pagingParams.Sort switch {\n SortOrdering.Ascending => query.Select(x => x.OrderBy(tl => tl.DateTime)),\n SortOrdering.Descending => query.Select(x => x.OrderByDescending(tl => tl.DateTime)),\n _ => throw new ArgumentOutOfRangeException()\n };\n\n var result = orderedQuery.Select(x => x.Skip(skip).Take(take))\n .FirstOrDefault();\n",
"text": "I am using MONGODB .NET DRIVER to write a query using LINQ and I am wondering how to make sure that the query was correctly translated and performed on the database side instead of downloading unnecessary data to memory and applying a filter afterward?The best option would be to see the translated query. Is there any way to see it?This is an example of the query that I’m building. I have a list of objects (let’s call it parent object) where each parent object contains a very large list of o child objects. My goal is to order child objects inside each parent object and take e.g. first 100 child objects. The sorting order can vary. My current solution:Will it be correctly translated and filtered on the database side?",
"username": "Martin_Horak"
},
{
"code": "var query = collection.AsQueryable()\n .Where<Book>(b => b.author == \"Mark Twain\")\n .Select(b => b.title);\nqueryConsole.WriteLine(query);Console.WriteLine(query.GetExecutionModel());aggregate([{ \"$match\" : { \"author\" : \"Mark Twain\" } }, { \"$project\" : { \"title\" : \"$title\", \"_id\" : 0 } }])",
"text": "Hello @Martin_Horak, welcome to the MongoDB Community forum!Will it be correctly translated and filtered on the database side?In general, any code that is accessing the database needs to be tested before using it in production. The same applies here also.If you want to know the actual query that is executed on the server for the LINQ query, you can view it by printing the query.For example, consider the following LINQ query:I just printed the query variable to view the actual query using Console.WriteLine(query); or Console.WriteLine(query.GetExecutionModel());. The output I got was this (the actual query, an aggregate with match and project stages which is what the LINQ’s Where and Select correspond to respectively):aggregate([{ \"$match\" : { \"author\" : \"Mark Twain\" } }, { \"$project\" : { \"title\" : \"$title\", \"_id\" : 0 } }])",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Here a post about Linq queries, which explains how the output of the query can be printed or viewed:",
"username": "Prasad_Saya"
}
] | How to make sure that LINQ query is performed on database side? | 2021-02-15T18:47:42.238Z | How to make sure that LINQ query is performed on database side? | 6,785 |
null | [
"change-streams"
] | [
{
"code": "Itemswokrspace_idworkspace_idfullDocumentdelete",
"text": "I have a collection named Items with wokrspace_id (ObjectId) field on it. So whenever an item inserted/updated or deleted i want to use change stream watch method to broadcast it to users that belong to this workspace.\nI have no difficulties dealing with insert/update operationTypes, i am able to access workspace_id within fullDocument prop. But it turns out to be not that straightforward to access it on delete operation.i’ve encountered this SO issue, where @Asya_Kamsky mentioned in comments that it can be done by composing the documentKey, but unfortunately i could not accomplish this (would be grateful if there are any gists on that)So, the question is how this can be done in most elegant way?",
"username": "Ivan_Lysov"
},
{
"code": "",
"text": "Hi @Ivan_Lysov,Welcome to MongoDB community!So there is a nice workaround I presented for Atlas triggers. Since they are based on change stream events eventually same one can be applied to your scenario.The main Idea in the linked post is:To me this is the most elegant way I can think of.Let me know if that helpsThanks\nPavel",
"username": "Pavel_Duchovny"
},
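A sketch of that idea in Node.js terms: embed the field in _id when inserting, then read it back from documentKey on delete (broadcastToWorkspace is a placeholder for your own code, and the usual mongodb driver imports such as ObjectId are assumed):

// insert with a compound _id that carries the workspace
await items.insertOne({ _id: { itemId: new ObjectId(), workspaceId }, name: "example" });

items.watch().on("change", (event) => {
  if (event.operationType === "delete") {
    // documentKey survives deletes, fullDocument does not
    const workspaceId = event.documentKey._id.workspaceId;
    broadcastToWorkspace(workspaceId, event);
  }
});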
{
"code": "",
"text": "wow, thanks a lot @Pavel_Duchovny! this is an elegant solution indeed ",
"username": "Ivan_Lysov"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Change stream fullDocument on delete | 2021-02-15T18:48:22.351Z | Change stream fullDocument on delete | 10,160 |
null | [
"atlas-device-sync",
"app-services-user-auth"
] | [
{
"code": "const appId = 'myAppId';\n const appConfig = {\n id: appId,\n timeout: 100000,\n };\n const app = new Realm.App(appConfig);\nconsole.log(\"credentials \",Realm.Credentials.anonymous())\napp is crashing while using Realm.Credentials.anonymous()\n",
"text": "",
"username": "Ayushi_Mishra"
},
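For comparison, the documented login flow in the Realm JS SDK looks like this (it assumes anonymous authentication is enabled for the app in the Realm UI):

const credentials = Realm.Credentials.anonymous();
const user = await app.logIn(credentials);
console.log(`Logged in as ${user.id}`);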
{
"code": "",
"text": "Hi Ayushi,Could you please share the full crash logs?\nWhich SDK language and SDK version are you using?\nPlease also link the documentation you referred to when putting this code together.Regards\nManny",
"username": "Mansoor_Omar"
}
] | Realm.Credentials.anonymous() app is crashing | 2021-02-15T07:12:26.322Z | Realm.Credentials.anonymous() app is crashing | 1,607 |
null | [
"aggregation"
] | [
{
"code": "... // just the important part of the document that will be used in the query\ngroups: {\n group1: ['userId1', 'userId2'], //array of user ID's\n group2: ['userId3', 'userId4'],\n group3: ['userId5', 'userId6'],\n}\nUsers[\n group1: [{_id: userId1, profile: {}}, {_id: userId2, profile: {}}],\n group2: [{_id: userId3, profile: {}}, {_id: userId4, profile: {}}],\n group3: [{_id: userId5, profile: {}}, {_id: userId6, profile: {}}],\n]\n[\n [{_id: userId1, profile: {}}, {_id: userId2, profile: {}} /*more users in this group*/ ],\n [{_id: userId3, profile: {}}, {_id: userId4, profile: {}}],\n [{_id: userId5, profile: {}}, {_id: userId6, profile: {}}],\n]\n",
"text": "Hi everyone, I have a question that I need some help with.I have some data stored in a Collection like so:Is it possible to return the data from the Users collection formatted like this:ORlike this:Any help or advice would be greatly appreciated.Thanks.",
"username": "Dev_Ops"
},
{
"code": "{\n \"group\": { // 1 document per group, and later on you can use $group aggregation to group by these groups.\n \"users\": [\"userId1\", \"userId2\"], \n \"groupId\": \"1\"\n }\n}\n{\n \"group\": [\n {\n \"groupId\": \"group1\",\n \"user\": \"user1\"\n },\n {\n \"groupId\": \"group1\",\n \"user\": \"user1\"\n }\n ]\n}\n",
"text": "Hi @Dev_Ops,This can be accomplished by using $lookup (for a single group). The problem is not to join Users (full doc) by userId, but joining in each group inside groups. $lookup uses 1 key to lookup from other collection. If you want this, you would have to remodel your data (groups collections) in something like this -ORPlease note - Having dynamic “keys” in your data model is a very bad practise and you should try to avoid such cases. These ways, you know exactly which keys to $lookup on and later on you can use $group/$project to modify your query result.",
"username": "shrey_batra"
},
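With that first remodel (one document per group), the join becomes a single $lookup; a mongosh sketch with assumed collection names:

db.groups.aggregate([
  { $lookup: {
      from: "users",          // full user documents live here
      localField: "users",    // array of user ids on the group doc
      foreignField: "_id",
      as: "members"
  } },
  { $project: { _id: 0, groupId: 1, members: 1 } }
])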
{
"code": "",
"text": "Thanks @shrey_batra, I will give this a shot.",
"username": "Dev_Ops"
},
{
"code": "{\n \"aggregate\": \"testcoll\",\n \"pipeline\": [\n {\n \"$project\": {\n \"_id\": 0,\n \"groupsArray\": {\n \"$let\": {\n \"vars\": {\n \"groups\": {\n \"$map\": {\n \"input\": {\n \"$map\": {\n \"input\": {\n \"$objectToArray\": \"$groups\"\n },\n \"as\": \"m\",\n \"in\": [\n \"$$m.k\",\n \"$$m.v\"\n ]\n }\n },\n \"as\": \"group\",\n \"in\": {\n \"$map\": {\n \"input\": {\n \"$arrayElemAt\": [\n \"$$group\",\n 1\n ]\n },\n \"as\": \"userid\",\n \"in\": {\n \"_id\": \"$$userid\",\n \"profile\": {}\n }\n }\n }\n }\n }\n },\n \"in\": \"$$groups\"\n }\n }\n }\n }\n ],\n \"cursor\": {},\n \"maxTimeMS\": 1200000\n}\n{\"aggregate\" \"testcoll\",\n \"pipeline\"\n [{\"$project\"\n {\"_id\" 0,\n \"groupsArray\"\n {\"$let\"\n {\"vars\"\n {\"groups\"\n {\"$map\"\n {\"input\"\n {\"$map\"\n {\"input\" {\"$objectToArray\" \"$groups\"},\n \"as\" \"m\",\n \"in\" [\"$$m.k\" \"$$m.v\"]}},\n \"as\" \"group\",\n \"in\"\n {\"$map\"\n {\"input\" {\"$arrayElemAt\" [\"$$group\" 1]},\n \"as\" \"userid\",\n \"in\" {\"_id\" \"$$userid\", \"profile\" {}}}}}}},\n \"in\" \"$$groups\"}}}}],\n \"cursor\" {},\n \"maxTimeMS\" 1200000}\n",
"text": "HelloAs being said ,having unknown keys ,is not good idea in mongo.\nArrays are better for dynamic data.But this query does what you want using $objectToArray and $mapIt produces the second way that you said its ok.Same query , just printed more concise,but its not valid json, : and , are missing",
"username": "Takis"
},
{
"code": "",
"text": "Thanks @Takis, missed your response for some reason.Is there any performance benefit to your option versus @shrey_batra",
"username": "Dev_Ops"
}
] | Return 2D Array | 2021-02-01T23:44:27.654Z | Return 2D Array | 4,545 |
null | [
"atlas-functions"
] | [
{
"code": "exports = function(payload) {\n\n const mongodb = context.services.get(\"mongodb-atlas\");\n\n const mycollection = mongodb.db(\"mydatabase2\").collection(\"mycollection2\");\n\n return mycollection.find({}, {_id:0}).toArray();\n\n};\n[{\"id\":\"214\"},{\"id\":\"301\"},{\"id\":\"2110\"},{\"id\":\"152\"}]\n[{\"id\":\"214\"}]\n",
"text": "HiI’m testing out a Realm app with a database from Atlas.I have a GET webhook exposing the entire database as an array, but need the database to be output in a single field as an animation loop sequence (like a stop watch counter.)Here is the function I’m working on:and the webhook database output as static array, instead of animation loop.Please help, I am still learning) and would love to make use of MongoDBxRealm apps properly!Many thanks, AndiPSSo instead of exposing the entire database as a static array:The webhook will loop through the database infinitely as an animated sequence shown as a single field:Thank you!",
"username": "a_Jn"
},
{
"code": "",
"text": "Hi @a_Jn,The webhook is designed to output a static value and it cannot animate your front end result.What you need is to write a front end code that with a setTimeout will get the next value and update the screen output and as a result does this animation.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
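A minimal front-end sketch of that approach (the webhook URL and element id are placeholders):

const url = "https://webhooks.mongodb-realm.com/api/client/v2.0/app/<app-id>/service/<svc>/incoming_webhook/<name>"; // placeholder
let items = [];
let i = 0;

async function tick() {
  if (items.length === 0) {
    items = await (await fetch(url)).json(); // e.g. [{"id":"214"}, ...]
  }
  document.getElementById("counter").textContent = items[i % items.length].id;
  i += 1;
  setTimeout(tick, 1000); // advance once per second
}
tick();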
{
"code": "",
"text": "Thank you Pavel, this sounds like a good solution!",
"username": "a_Jn"
}
] | Javascript animate json data on loop | 2021-02-12T18:04:07.754Z | Javascript animate json data on loop | 2,132 |
null | [] | [
{
"code": "",
"text": "Hi there,\nwe are having a question regarding finding a good image hosting API for app-user image uploads which can work best with Realm.\ndoes mongo db have any solution for app-user image uploads? or any third-party api you recommend?Kind regards,\nBehzad ",
"username": "Behzad_Pashaie"
},
{
"code": "",
"text": "Hi @Behzad_Pashaie,I use Amazon S3 with Realm API integration (storing the s3 metadata in Realms):The following blog uses stitch but has the same concept as it works with realm:MongoDB Stitch allows for the easy updating and tracking of files which can then be automatically updated to a cloud storage provider like AWS S3. See the code and follow along on the live coding on Twitch.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "When you say ‘image hosting’ and ‘app-user uploads’ can you be more specific about the use case?We’ve been using Firebase Could Storage for a number of years and it’s a pretty solid solution depending on what your needs are. It’s not directly integrated with realm but the API is pretty transparent and would with with a variety of other products.",
"username": "Jay"
},
{
"code": "",
"text": "@Jay,I assume you can use Firebase storage with a rest api call from realm , but I truly believe that aws service is a much easier and cleaner way by using the S3 api.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_DuchovnyYes - REST calls are fully supported, along SDK’s for with Swift, Java, ObjC, C++Not advocating by any means but we really like the simplicity of implementation for the Firebase Storage product. Once the pod file and project is updated, uploading and downloading files are just a few lines of code with no real struggle to add it to an existing (or new) project.The issue we had with S3 and Stitch is the amount of hoops we went through trying to implement it (not REST) - after months of effort your team determined that the Stitch SDK isn’t supported (and doesn’t work) on macOS and they “have no plans to make it work on macOS”There are also a number of issues with iOS and were finally told we should stick with REST (which is not part of our project implementation). So it wasn’t a solution for us.However, for any projects relying on REST, it seems to be a smoother ride.As a side note - it would be pretty awesome to have a connection to AWS S3 and cloud storage directly from MongoDB Realm SDK.Just my 02. - your milage may vary.",
"username": "Jay"
},
{
"code": " return convertImageToBSONBinaryObject(file)\n .then(result => {\n // AWS S3 Request\n const args = {\n ACL: 'public-read',\n Bucket: bucket,\n ContentType: file.type,\n Key: key,\n Body: result\n }\n\n const request = new AwsRequest.Builder()\n .withService('s3')\n .withAction('PutObject')\n .withRegion('us-east-1')\n .withArgs(args)\n .build()\n\n return this.aws.execute(request)\n })\n .then(result => {\n // MongoDB Request\n const picstream = this.mongodb.db('data').collection('picstream')\n return picstream.insertOne({\n owner_id: this.client.auth.user.id,\n url,\n file: {\n name: file.name,\n type: file.type\n },\n ETag: result.ETag,\n ts: new Date()\n })\n })\n .then(result => {\n // Update UI\n this.getEntries()\n })\n",
"text": "@Jay,The blog on stitch i mentionedd work on realm this is js sdk code working directly with S3 service:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_Duchovny That’s great info!We are Swift based and would love to take advantage of that; Is an S3 connection something included in the MongoDB Realm Swift/ObjC SDK? We don’t do any js at all so that would be fantastic if it did.",
"username": "Jay"
},
{
"code": "",
"text": "Hi @Jay,I couldn’t find docs on the realm sdk, but the stitch has it:\nAWSServiceClient Protocol ReferenceTest if similar ways work with realm sdks.If it won’t you can call functions from sdk, this is definitely supported and pass base64 data to them for upload:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_DuchovnyThanks - as mentioned we are macOS based and Stitch does not work on macOS.It looks like the S3 connectivity is not included in any of the current SDK’s according to this thread so we will just have to stick with Firebase Storage for now.",
"username": "Jay"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny. Sorry for a bitt late reply.\nThanks for kind reply. Will check it out .\nWish you a nice day forward ",
"username": "Behzad_Pashaie"
},
{
"code": "",
"text": "Hi @Jay . Sorry for a bitt late reply.\nThanks for kind reply. More specific is, when user add an item so we have form with fields and an upload field for images. What i ment is an api service where we upload these images. And later GET them to be viewed in item page.\nWish you a nice day forward ",
"username": "Behzad_Pashaie"
}
] | Looking for an image hosting api for app-user uploads / maybe with mongodb-realm integration | 2021-02-02T14:13:28.258Z | Looking for an image hosting api for app-user uploads / maybe with mongodb-realm integration | 4,312 |
[
"queries"
] | [
{
"code": "",
"text": "Hello Everyone,Screenshot 2021-02-13 08.28.471366×768 131 KBThe image clearly shows that the Terminal is not providing the desired output. Please assist.",
"username": "Q2_success"
},
{
"code": "",
"text": "Your question is related the M001 course from MongoDB university. You should be asking your course related question on the university’s course specific forum.The command db.compagnies.find(…) is a mongo shell command. You are currently in a bash shell. You have to start the mongo shell first. Instructions on how do that are in the course material.",
"username": "steevej"
}
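A minimal sketch of the distinction (the connection string and query values here are placeholders, not the course's actual cluster details):

# bash shell: start the mongo shell first
mongo "mongodb+srv://<your-cluster>/test" --username <user>

# mongo shell: now database commands work
use sample_training
db.companies.find({ founded_year: 2004 }).limit(1)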
] | Issue in fetching the correct answer from Terminal | 2021-02-15T03:24:34.253Z | Issue in fetching the correct answer from Terminal | 2,238 |
|
null | [
"crud",
"golang"
] | [
{
"code": "{\"_id\":{\"$oid\":\"5ed0cb4cb8d5570916d1ee7e\"},\"studentid\":\"01\",\"studentname\":\"XYZ\",\"details\":[{\"subject\": \"maths\", \"marks\":50} ,{\"subject\": \"science\", \"marks\":60} ,{\"subject\": \"sports\", \"marks\":40}]}\ncollection := client.Database(\"users\").Collection(\"student\")\nfilter := bson.M{}\nupdates := bson.M{\"$pull\": bson.M{\"details\": bson.M{\"$in\": bson.A{bson.M{{\"subject\": \"maths\", \"marks\":50}, {\"subject\": \"science\", \"marks\":60}}}}},}\nvar detailsArr []string\n\t\tfor index := 0; index < len(student.details)-1; index++ { // Except last one\n\n\t\t\tsubInfo := \"\\\"\" + \"subject\" + \"\\\"\" + \":\" + \"\\\"\" + student.details[index].subject + \"\\\"\" + \"\\\"\" + \"marks\" + \"\\\"\" + \":\" + \"\\\"\" + student.details[index].marks + \"\\\"\"\n\n\t\t\tdetailsArr = append(detailsArr, subInfo)\n\t\t}\n\t\t\n\t\tupdates := bson.M{\n\t\t\t\"$pull\": bson.M{\"details\": bson.M{\"$in\": bson.A{detailsArr}}},\n\t\t}\n\nresult, err := collection.UpdateMany(context, filter, updates) --- tried with UpdateOne also.\n",
"text": "Hello Team,I have been referring following thread -and solution suggested by Mr Wan Bachtiar.Solution given by Mr Wan Bachtiar works for a hardcoded single dimension array value. However, I am not able to execute it successully if I want to delete MANY 2-dimensonal array elements using $pull.Sample data for Student collection :Let’s assume I want to remove data for subejcts & marks for maths & science from above student’s document.I tried it like but no luck -Appraoch 1 :Appraoch 2 :I also tried it using array of string -Please help me on this as I stuck here. Generic solution mentioned in approach #2 would be better as I can’t use hard-coded stuff in my code.PS : I am using mongo-go-driver.Thanks a lot !",
"username": "Rajesh_Gupta"
},
{
"code": " detailsArr := bson.A{}\n for index := 0; index < len(student.details)-1; index++ { // Except last one\n subInfo := bson.M{\"subject\": student.details[index].Subject, \"marks\": student.details[index].Marks}\n detailsArr = append(detailsArr, subInfo)\n }\n \n updates := bson.M{\n \"$pull\": bson.M{\"details\": bson.M{\"$in\": detailsArr}},\n \"$set\": bson.M{\"processedtime\": processedTime},\n }\n\t\t \n result, err := collection.C.UpdateOne(context, filter, updates)\n",
"text": "After few more try out, I did added following code -However, here I see another issue. As I have many student documents in my collection data and each student contains 4 array elements (subject/marks) and my aim is to remove all array elements Except last one.\nI see weird behavior (on execution of above code), sometimes 3 array elements get deleted for a student, 2 for another student, 3 for third student and so on. I mean there is no conssitency in behaviour of $pull here ?Please correct me what I am doing wrong here.",
"username": "Rajesh_Gupta"
}
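A hedged note that may explain the inconsistency: $pull with $in compares embedded documents by exact BSON equality, so field order and value types (e.g. numeric 50 vs string "50") must match the stored subdocuments exactly. A minimal shell sketch against the sample student document above:

db.student.updateOne(
  { studentid: "01" },
  { $pull: { details: { $in: [
      { subject: "maths",   marks: 50 },
      { subject: "science", marks: 60 }
  ] } } }
)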
] | Remove many 2-dimensional array elements in MongoDB in Go | 2021-02-15T07:55:11.105Z | Remove many 2-dimensional array elements in MongoDB in Go | 2,574 |
null | [
"java"
] | [
{
"code": "//the code from the link example,set registry etc\n\nMongoDatabase db = mongoClient.getDatabase(\"sample_training\");\nMongoCollection<Grade> grades = db.getCollection(\"grades\", Grade.class);\n\ngrades.find(); //works i get cursor of Grade instances\n\n//THIS DOESNT WORK\nDocument find_command = new Document().append(\"find\",\"grades\");\ndb=db.withCodecRegistry(codecRegistry);\ndb.runCommand(find_command,Grade.class); //doesnt work,i get 1 Grade instance with null fields\ndb.runCommand(find_command); //doesnt work,i get a cursor with Documents not Grade instances\n",
"text": "HelloI am doing the examples from quick startQuestions1)how to fix the runCommand to get a cursor Document where “firstBatch” will be an Arraylist of Grade instances (i need it because i am making a library)2)when i use POJO i have perfomance penalty?\ni mean at insert time it goes Grade → Document → BSON or Grade->BSON?\nat read(decode) it goes BSON->Document->Grade or BSON->Grade?Thank you",
"username": "Takis"
},
{
"code": "db.runCommand",
"text": "Hello @Takis, see this post for db.runCommand example code using Java Driver:",
"username": "Prasad_Saya"
},
{
"code": "{\"cursor\" : {\"firstBatch\" : [Document1,Document2 ... ], \"id\" : 0, \"ns\" \"test.test\"}, \"ok\" : 1.0}\n{\"cursor\" : {\"firstBatch\" : [Grade1,Grade2 ... ], \"id\" : 0, \"ns\" \"test.test\"}, \"ok\" : 1.0}\n",
"text": "Hello thank you for trying to help me.The default return of a cursor command isI want to get backAnd the main question is how to tell to the driver,to use Grade class instead of Document\nwhen returning the runCommand result",
"username": "Takis"
},
{
"code": "collection.findGraderunCommand",
"text": "And the main question is how to tell to the driver,to use Grade class instead of Document\nwhen returning the runCommand resultI don’t know, how it can happen. But, the collection.find method returns Grade objects - I think the result you are looking for, i.e., a list of grade objects.The runCommand:returns a document that contains the cursor information, including the cursor id and the first batch of documents.One way is to get the batch of documents and somehow map them individually to the Grade objects. But, thats a lot of work I think.",
"username": "Prasad_Saya"
}
] | How to get POJO objects from db.runCommand#find results? | 2021-02-14T23:30:12.037Z | How to get POJO objects from db.runCommand#find results? | 3,451 |
null | [
"aggregation"
] | [
{
"code": "{\"$and\": [\n {\"details.vulns\": \"cve-2019-19781\"},\n]}\ndb.hosts.aggregate([\n {\"$sort\": {\"timestamp\": 1}},\n {\"$addFields\": {\n \"condition\": {$cond: [\n {\"$and\": [\n {\"details.vulns\": \"cve-2019-19781\"},\n ]},\n 1,\n 0,\n ]}\n }},\n {\"$project\": {\n \"ip\": \"$ip\",\n \"timestamp\": \"$timestamp\",\n \"condition\": \"$condition\",\n }},\n]);\n$details.vulns",
"text": "I have a MongoDB database where people can enter relatively arbitrary search conditions (in this case they are vetted, but not restricted conceptually). An example of a condition a user could specify may look like:This works well for searching, however, I want to be able to also generate statistics based on older data, for which I use an aggregation that looks something like:The problem with such a query is MongoDB does not seem to allow using a $cond with something like $details.vulns . I understand a standard approach would be to unwind, however, due to the conditions being not known before my code executes, I have no way to know what fields to unwind unless I re-implement a MongoDB parsing engine in my own code.Is there an alternative approach here I am missing, or does MongoDB simply lack the ability to apply an arbitrary search clause during an aggregation? If there is a slow way of doing it, I would be fine with that.",
"username": "Stephen_Shkardoon"
},
{
"code": "",
"text": "Hello : )I can’t say that i understand what you need,but in case you need to save mongoQL code in database you can use $literal,and save that condition in database as a embedded document.Also the use of function call inside of json can provide “dynamic” queries,like generate\nthe query json,from a function.For example dont write by hand the condition,have a function\nthat generated the mongoQL code.Or the combination,save the condition,and generate queries that use that condition.",
"username": "Takis"
},
{
"code": "// Objects\n{ \"_id\" : ObjectId(\"5cfc73657e2438b115888d1b\"), \"ip\" : NumberLong(\"12345\"), \"timestamp\" : ISODate(\"2019-06-09T02:45:45Z\"), \"vulns\" : [ \"cve-2019-19781\" ] },\n{ \"_id\" : ObjectId(\"5d04c5497e2438b115b06659\"), \"ip\" : NumberLong(\"12345\"), \"timestamp\" : ISODate(\"2019-06-15T10:13:33Z\"), \"vulns\" : [ \"\" ] },\n{ \"_id\" : ObjectId(\"5d108c52211d917c6ff48bfd\"), \"ip\" : NumberLong(\"12345\"), \"timestamp\" : ISODate(\"2019-06-24T08:37:31Z\"), \"vulns\" : [ \"cve-2019-19781\", \"other-vuln\" ] },\n\n// Desired output from aggregate\n{\"ip\" : NumberLong(\"12345\"), \"timestamp\" : ISODate(\"2019-06-09T02:45:45Z\"), \"condition\": 1 },\n{\"ip\" : NumberLong(\"12345\"), \"timestamp\" : ISODate(\"2019-06-15T10:13:33Z\"), \"condition\": 0 },\n{\"ip\" : NumberLong(\"12345\"), \"timestamp\" : ISODate(\"2019-06-24T08:37:31Z\"), \"condition\": 1 },\n",
"text": "To demonstrate the issue more clearly, here is some example data. Keep in mind that this is simplified – the real data contains many different fields that I may wish to query against, hence the above statements about unwind not being a satisfactory solution:",
"username": "Stephen_Shkardoon"
},
{
"code": "",
"text": "Hi Stephen,You can do this using $filter for an array.Refer to this documentation for more info.",
"username": "Sudhesh_Gnanasekaran"
}
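A minimal sketch against the simplified sample documents above. Inside $cond an aggregation expression is required, so the aggregation form of $in (testing membership of a value in the "$vulns" array) replaces the query-style condition:

db.hosts.aggregate([
  { $sort: { timestamp: 1 } },
  { $project: {
      _id: 0,
      ip: "$ip",
      timestamp: "$timestamp",
      condition: { $cond: [{ $in: ["cve-2019-19781", "$vulns"] }, 1, 0] }
  } }
])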
] | Is there a way to apply an arbitrary condition to a MongoDB aggregation? | 2020-10-17T19:18:32.837Z | Is there a way to apply an arbitrary condition to a MongoDB aggregation? | 4,463 |
null | [
"queries",
"indexes",
"performance"
] | [
{
"code": "{\n a: 1,\n b: 2,\n $or: [\n { c: 3 },\n { d: 4 },\n { e: 5 },\n { f: 6 }\n ]\n}\n",
"text": "Hello,I have a probleme with a mongo query that uses an $or like :i have every a_b_c, a_b_d, a_b_e and a_b_f indexes but that query takes more than 150 seconds.But, if i comment any 2 of the 4 items in my $or, the request takes 0.5 secondsDoes mongo applie a limit of index used in an $or query ? or something i didn’t get ?Thanks for your answer.",
"username": "Joris_SEBIRE"
},
{
"code": "",
"text": "Hi @Joris_SEBIREWelcome to MongoDB community.The ways your indexes are built the engine tries to do an index intersection loading all indexes to memory and merging them.I think that for this query its best to use either all fields in one index or just index a,b …Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
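A minimal sketch of the suggestion (field names taken from the simplified query above):

// One compound index on the equality-matched fields
db.collection.createIndex({ a: 1, b: 1 })

// Then check which plan the server actually picks
db.collection.find({
  a: 1, b: 2,
  $or: [{ c: 3 }, { d: 4 }, { e: 5 }, { f: 6 }]
}).explain("executionStats")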
{
"code": "",
"text": "Refer to this documentation that might be helpful for your use-case.",
"username": "Sudhesh_Gnanasekaran"
}
] | $or query using more than 2 indexes takes forever | 2021-02-12T18:27:10.453Z | $or query using more than 2 indexes takes forever | 1,925 |
null | [
"node-js",
"indexes"
] | [
{
"code": "Users={\n username: string ,unique,\n email:string ,unique,\n password,string,\n nickname:string,\n}\n",
"text": "hay i am trying to move from mongoose to the mongo driver and i want to make a schema for my user collection and i need to make a unique fields the schema i want to build is",
"username": "TTTurtelClub_N_A"
},
{
"code": "",
"text": "hay i am trying to move from mongoose to the mongo driver…Hello.There is no defining of “schema” using the MongoDB NodeJS Driver, as you do using Mongoose ODM. When you insert a document into a collection it is created with the fields you had provided with the insert method. As such, you can insert documents with different schemas within the same collection - this is due to the nature of Flexible Schema of MongoDB documents. But, you can introduce Schema Validation for insert and update operations for a collection; this is an option.…and i need to make a unique fieldsThis can be achieved by creating a Unique Index on the field(s). From the documentation:A unique index ensures that the indexed fields do not store duplicate values; i.e. enforces uniqueness for the indexed fields. By default, MongoDB creates a unique index on the _id field during the creation of a collection.",
"username": "Prasad_Saya"
},
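A minimal Node.js sketch of creating the unique indexes for the schema in the question (the connection string and database name are placeholders):

const { MongoClient } = require('mongodb');

async function run() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  const users = client.db('app').collection('users');

  // Inserts and updates that duplicate these values are rejected
  await users.createIndex({ username: 1 }, { unique: true });
  await users.createIndex({ email: 1 }, { unique: true });

  await client.close();
}

run().catch(console.error);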
{
"code": "",
"text": "A unique index ensures that the indexed fields do not store duplicate values; i.e. enforces uniqueness for the indexed fields. By default, MongoDB creates a unique index on the _id field during the creation of a collection.does the index need to be specify through the shell or i can do it from my backend (NodeJS)?",
"username": "TTTurtelClub_N_A"
},
{
"code": "mongomongo",
"text": "does the index need to be specify through the shell or …Preferably using the mongo shell - It is a good practice to do all data definition activities (these come under database administration) from the mongo shell. Of course, you can use NodeJS program or even a GUI tool like MongoDB Compass.",
"username": "Prasad_Saya"
}
] | Mongo Schema using NodeJS | 2021-02-12T18:27:16.835Z | Mongo Schema using NodeJS | 8,972 |
null | [
"atlas-device-sync",
"react-native"
] | [
{
"code": "",
"text": "Hello,\nWe are using cluster 4.4.3 version,\nnpm package 10.1.4Its new app, new collection. There is no data available to sync. but as soon as User performs Login operation, we get this error:SYNC ERROR active {“category”: “realm::sync::Client::Error”, “code”: 112, “isFatal”: true, “message”: “Bad changeset (DOWNLOAD)”, “name”: “Error”, “userInfo”: {}}",
"username": "Bapu_Hirave"
},
{
"code": "bad changeset (DOWNLOAD)incorrect partition value for key ____ expectedPartition: ____ foundPartition: ____",
"text": "Hi Bapu,We worked on this in the support case you raised and you mentioned that your instance of this issue/error is now resolved. If you have any further questions regarding your issue, please respond in the support case for continuity.For anyone else that experiences this issue, the bad changeset (DOWNLOAD) error is usually a result of (but not limited to):Schema inconsistencies:Partition issues:Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm SYNC - React Native Error- Bad changeset (DOWNLOAD) | 2021-02-03T21:37:51.395Z | Realm SYNC - React Native Error- Bad changeset (DOWNLOAD) | 3,848 |
null | [] | [
{
"code": "Save and Close{\n $expr:{\n $gte:['$ReportedDate', {$toDate:{$dateToString: { format: \"%Y-%m-%d\", date: new Date() }}}]\n }\n}\n{\n $expr:{\n $gte:['$ReportedDate', ISODate('2021-02-12')]\n }\n}\n",
"text": "I encounter strange behavior on the Save and Close button functionality on Chart dashboard. Button is disabled when I have the following filter in query bar:However, change filter to the following allows me to save my changes:Both filters return the same data needed but the former will not let me save. My goal is to present data which ReportedDate matches current date.",
"username": "chanleong"
},
{
"code": "new Date(){\n$expr:{\n$gte:[’$ReportedDate’, \"$$NOW\"\n}\n}\n",
"text": "Hi @chanleong,Welcome to MongoDB community.Isn’t new Date() will return a date? Why do you need to convert it to string\n?Will that work?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "$expr$expr{ ReportedDate: { $gte: new Date().setHours(0,0,0,0) } }\n",
"text": "There are some known issues around validating queries with $expr in them, which is the reason the button is disabled. We’ll try to get this sorted soon, but you can do this without needing $expr. I presume it’s important that the date is set to the most recent midnight. In this case you could use a query like this:Another (easier) option is to use the Filters tab. Once you drag the field onto the tab, you can select the “Period” option and then “Current Day”.HTH\nTom",
"username": "tomhollander"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | `Save and Close` button is disabled by data filter in Query Bar | 2021-02-12T18:12:27.249Z | `Save and Close` button is disabled by data filter in Query Bar | 2,239 |
null | [
"crud",
"performance"
] | [
{
"code": "",
"text": "i have a single replica set MongoDBi have increased poolSize to 300creating collections concurrently via 100 threads, reduces performance ridiculously\neach creation takes up to 10 seconds.also doing insertMany takes 5 seconds concurrently even outside of transactionwhat am i missing?",
"username": "Masoud_Naghizade"
},
{
"code": "",
"text": "What is the hardware configuration? RAM? CPU? Disks?What is the system architecture? Where is the server, where is the client running the 100 threads?",
"username": "steevej"
},
{
"code": "",
"text": "all of them are in the same machine running locally, with a single replica set and not shardedRAM = 20GB\nCPU = 2400 sandy bridgei monitor the hardware resources, and it seems it is not due to lack of hardware, at least not for creating collection.",
"username": "Masoud_Naghizade"
},
{
"code": "",
"text": "Disks? Local, NAS, SAN, …?Number of cores?Shared machine or not?creating collections concurrently via 100 threadsSo you are creating 100 collections in 100 threads? How do you create the collections? You do insertMany, then how many?it is not due to lack of hardwareHow do you come to the above conclusion? Do you have any measures? CPU usage, IO usage, RAM usage",
"username": "steevej"
},
{
"code": "",
"text": "local ssd disk.basically everything is local and the one replica set is for supporting transactions4 cores 3.4 GHzit is not shared, not a VM, normal Ubuntu desktop IMac computercreate collection is a single line simple function call in Java driver , no strings attatchedalso the insertMany is a simple function callthere are no other instructions in between that might cause the delaycpu is between 50 to 80 percent and Ram is 10GB.and i dont have any extra index that might effect insertions.the weird thing is creating collections.it seems it is a very costly task, am i right?\nwhat is a normal expectation for creating a collection in a system described as above?actually when i run sequentially all create collection and insertions run less the 0.01 sec.so definitely the concurrency is the problem",
"username": "Masoud_Naghizade"
},
{
"code": "",
"text": "Ram is 10GB.Quite normal since you have 20GB, WiredTiger engine reserve nearly 50%. See https://docs.mongodb.com/manual/core/wiredtiger/Any numbers for disk I/O?create collection is a single line simple function call in Java driverPlease show your code.also the insertMany is a simple function callYes, but if your many is 100000, it is important to know.There is at least 2 files per collections.so definitely the concurrency is the problemToo soon to conclude that mongod is the problem. It might be your threading code.what is a normal expectation for creating a collection in a system described as above?You should expect that creating a collection is very fast. Creating too many collection is a design anti-pattern. See https://www.mongodb.com/article/schema-design-anti-pattern-massive-number-collections/We need numbers for disk I/O usage.Having the client and the server on the same machine with 100 threads might caused too much context switching on a 4 cores machine.when i run sequentially all create collection and insertions run less the 0.01 sec.What do you do with the data directory between your 2 tests?",
"username": "steevej"
},
{
"code": "",
"text": "the blog and the youtube video that descrives 6 anti-pattern of schema design was a great help.i think i am misusing Mongo.because based on my design, i am creating a new collection for each Transaction and then immediately i drop it.but i have more than 20000 collections active at the same time due to load on the server.so as i understood from the youtube video, the WiredTiger creates 2 files for each collection, one for data and one for each index and by default the Wiredtiger opens all of the files at the startup as much as it hits it’s Cache which is half of the Ram.and as recommended, i have to decrease the collections to under 10000 in a single replica set.so i have to change my design.i monitored the disk io and it was almost 90 percent all the time used by the mongo process.so i think you have helped me to pin point the problem.let me change my design to include all of them in a single collection and finding them using indexes.i’ll let you know about the results.i’ll try not to step to the other pitfalls explained in the video, like unused indexes or bloated documents or massive arrays.thank you man, you helped me alot.i’ll post the new stats to help others.",
"username": "Masoud_Naghizade"
},
{
"code": "",
"text": "sorry for late response.you were absolutely right.that video helped a lot.we should not have more than 10000 collection per replica set.so the moment i changed my schema design to single collection with a wildcard index, it was like a miracle.everything is super sonic now.i had careful abstractions and anti corruption layers over my infrastructure which make it possible for me to switch from mongo to sth else in an hour for and enterprise application with more than 10 bounded contexts and allowed me to change my design from thousands of collection, into a single or two collection.but if you hardcode your schema to your code, you will have a hard time modifying your schema design.thank you anyway, you helped me alot",
"username": "Masoud_Naghizade"
},
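A minimal sketch of a wildcard index of the kind mentioned (MongoDB 4.2+; the collection and field names are placeholders, not the poster's actual schema):

// Index every field under an embedded "payload" document
db.transactions.createIndex({ "payload.$**": 1 })

// Or index all fields of every document
db.transactions.createIndex({ "$**": 1 })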
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Very slow concurrent operations | 2021-02-07T15:54:41.916Z | Very slow concurrent operations | 4,268 |
null | [
"crud"
] | [
{
"code": "",
"text": "I have enabled field level encryption.\nBut I have some existing unencrypted values in db. To update those I was trying to run an update query to update existing values using the same value. But mongodb is not updating the document if we try to update using the same value.\nIs there any way I can force update mongodb document using same values?",
"username": "Navin_Devassy"
},
{
"code": "db.collection.update({},[{ $set { fieldX : \"$fieldY\" } }],{multi : true});\n",
"text": "Hi @Navin_Devassy,Welcome to MongoDB community.I am not certain what exactly you mean, but if you want to update fieldX with fieldY values you can use a pipeline update:Thanks,\nPavel",
"username": "Pavel_Duchovny"
}
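A hedged variant for rewriting a field with its own stored value (the field name is a placeholder). Note that with client-side field level encryption this only rewrites whatever bytes are already stored server-side; encrypting pre-existing plaintext values generally requires reading and re-writing the documents through an encryption-enabled client, so verify modifiedCount against your expectation:

db.collection.updateMany(
  {},
  [{ $set: { fieldX: "$fieldX" } }]
)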
] | Force update mongodb document using same values | 2021-02-12T04:33:04.222Z | Force update mongodb document using same values | 3,842 |
null | [
"node-js"
] | [
{
"code": "(node:24990) Warning: Accessing non-existent property 'MongoError' of module exports inside circular dependency\n at emitCircularRequireWarning (internal/modules/cjs/loader.js:650:11)\n at Object.get (internal/modules/cjs/loader.js:664:5)\n at Object.<anonymous> (/home/my-user/my-project/node_modules/mongodb/lib/operations/operation.js:4:38)\n at Module._compile (internal/modules/cjs/loader.js:1063:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1092:10)\n at Module.load (internal/modules/cjs/loader.js:928:32)\n at Function.Module._load (internal/modules/cjs/loader.js:769:14)\n at Module.require (internal/modules/cjs/loader.js:952:19)\n at Module.Hook._require.Module.require (/usr/lib/node_modules/pm2/node_modules/require-in-the-middle/index.js:80:39)\n at require (internal/modules/cjs/helpers.js:88:18)\n",
"text": "Hi,I am getting a warning consistently each time our webserver or another process connects to the DB.Some extra information:I googled for this error and encountered similar errors. Those were solved by Mongodb in a newer version of the driver. So it seems to me this is also an issue with the driver itself, and not our code.Thanks in advance.",
"username": "Laurens"
},
{
"code": "",
"text": "Can confirm this error also appears in node mongodb v3.6.4 but not v3.6.3",
"username": "Goran_Spasojevic"
},
{
"code": "node --trace-warnings ...",
"text": "Hi Goran !\nFor me, appears in mongodb v3.6.4(node:32197) Warning: Accessing non-existent property ‘MongoError’ of module exports inside circular dependency\n(Use node --trace-warnings ... to show where the warning was created)Thanks guys",
"username": "Wagner_Bolfe"
},
{
"code": "",
"text": "Same error at 3.6.4 version",
"username": "Cesar_Gonzalez_Groh"
},
{
"code": "node --trace-warnings ...",
"text": "I’m using “connect-mongo”: “^3.2.0”, and “mongoose”: “^5.11.15”, node version v14.15.4 and I get(node:26812) Warning: Accessing non-existent property ‘MongoError’ of module exports inside circular dependency\n(Use node --trace-warnings ... to show where the warning was created)",
"username": "Travis_Ealy"
},
{
"code": "",
"text": "Hi All,Thanks for reporting! I hit the issue myself today while I was working. I checked in with the Node driver team. The warning is safe to ignore and will hopefully be gone in an upcoming release.",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "image1026×114 10.6 KB",
"username": "Anurag_Arwalkar"
},
{
"code": "",
"text": "Thank you for letting us know! I was worried for a bit to be honest and I kept googling to see what I was doing wrong, until I found this.",
"username": "G_L1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Warning: Accessing non-existent property 'MongoError' of module exports inside circular dependency | 2021-02-05T11:06:07.499Z | Warning: Accessing non-existent property ‘MongoError’ of module exports inside circular dependency | 59,673 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "Hi there.I’m struggeling a bit with mongodb realm when i make major changes to the models.\nSo far i’ve had to delete the app and create a new one - sometimes also the cluster, and it’s getting a bit tedious…I don’t understand this, as i’m in development mode, and according to the documentation, this should allow schemas to be updated automatically.I just removed a ‘Required’ tag from a partition field - i know this is a destructive change, so i:\na) Deleted database\nb) Deleted schema (all schemas), by clicking remove configuration. Realm → schema → ‘REMOVE CONFIGURATION’\nc) Deleted ALL installations of the app.However i’m still getting:\nProperty ‘ChatMessage._partition’ has been made optional.\" failed?When i check the schemas again, the previous schemas has been added again, and all changes are ignored…Any idea as of what i’m doing wrong?",
"username": "Rasmus_B"
},
{
"code": "",
"text": "Check out this thread: [Performing destructive changes to synced Realm - #11 by Jay]. It helped me get out of a similar jam. This information really needs to be in the documentation to avoid future gnashing of teeth.",
"username": "Nina_Friend"
},
{
"code": "",
"text": "Thanks for the reply.I’ve already tried deleting the data - even the entire database when i make those changes, but no luck. The old schemas just reappear, like they are cached somewhere.\nBut yes i agree - documentation is lacking a bit in regards to schemachanges and development-mode.",
"username": "Rasmus_B"
}
] | Unable to delete schemas | 2021-02-12T09:35:05.670Z | Unable to delete schemas | 3,419 |
null | [
"security"
] | [
{
"code": "",
"text": "I would like to know why it is necessary to add an ip on the whitelist. And also how can I connect my project to an ip. I’m hosting my project on Heroku.",
"username": "db_test"
},
{
"code": "",
"text": "Hi @db_test,A whitelist limits the originating IP addresses that can connect to your deployment and is part of the general best practice of Principle of least privilege and limiting network exposure to trusted sources.how can I connect my project to an ip. I’m hosting my project on Heroku.If you are using a Heroku dyno, your application may have a large range of originating IP addresses. See: I need to add Heroku dynos to our allowlist - what are IP address ranges in use at Heroku?.In addition to whitelisting trusted originating IP addresses (where possible), you should also enable other security measures such as Role-Based Access Control and Encrypted Communication (TLS/SSL).Please review the MongoDB Security Checklist for available security measures and links to relevant tutorials.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "You have to use an addon that provides your app with a static IP address, like Fixie Socks or Fixie. Unfortunately I tried this and it didn’t work.",
"username": "George_N"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | IP whitelist config | 2020-08-24T00:12:50.433Z | IP whitelist config | 3,735 |
null | [
"installation"
] | [
{
"code": "",
"text": "Hi, I try to install mongodb using brew on Mac OS Big Sur version 11.1 but when I run “mongod” in the terminal I get the error message shutting down exitCode 100.I have tried the following command “sudo chown -R eliasstihl /usr/local/var/mongodb” but it doesn’t seem to work anyway.What am I doing wrong?",
"username": "Elias_Stihl"
},
{
"code": "",
"text": "If you run just mongod without any params it tries to bring up mongod on default port and default dirpath-/data/db\nIn new versions of Macos access to /data/db is restricted\nAlso did you create logpath dir and given necessary permissions?So try this\nmongod --dbpath your_dbpath --logpath your_logpath --forkPlease check mongo doc for different options",
"username": "Ramachandra_Tummala"
}
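A minimal sketch with placeholder paths under the home directory (the paths are assumptions; any writable location works):

mkdir -p ~/mongodb/data ~/mongodb/log
mongod --dbpath ~/mongodb/data --logpath ~/mongodb/log/mongod.log --fork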
] | Installation mac os exitCode 100 | 2021-02-12T23:24:58.702Z | Installation mac os exitCode 100 | 5,871 |
null | [
"node-js",
"atlas"
] | [
{
"code": "",
"text": "Hello,\nI have the task to create a db for development and production.\nThe development part works. I access MongoDB via mongoose. (node+express backend).I use Atlas for setting up MongoDB. I have a connection string user+password.Since I come from the php+mysql world, things are abit confusion for me.\nWhen I add a new db, I think I have to open a new project (because I take the free Atlas version for now)Then _I have to setup a cluster. But cluster is only hardware, isn’t it? How do I add the new db then, within that cluster or within the new project? Thanks a lot!",
"username": "Marco_A"
},
{
"code": "test:PRIMARY> use new_database\nswitched to db new_database\ntest:PRIMARY> db.new_collection.insert({\"new\":\"document\"})\nWriteResult({ \"nInserted\" : 1 })\ntest:PRIMARY> db.new_collection.findOne()\n{ \"_id\" : ObjectId(\"60217d05fefe8bffdab5ee20\"), \"new\" : \"document\" }\ntest:PRIMARY> show dbs \nadmin 0.000GB\nconfig 0.000GB\nlocal 0.000GB\nnew_database 0.000GB\ntest 0.000GB\ntest:PRIMARY> show collections\nnew_collection\n",
"text": "Hi @Marco_A and welcome in the MongoDB Community !First, yes, you are correct, you can only have one M0 cluster per Atlas project. So if you want your dev and prod running on an M0 cluster, you will have to create 2 Atlas projects.That being said, M0 are just shared instances and are not reliable enough for production environment. Please consider using at least an M10 for a production environment which is the first dedicated tier.When you create a cluster in Atlas, which ever tier you create from M0 to M700, by default it’s a replica set of 3 machines running in the background. The cluster you see on Atlas is “just” the hardware + the mongod daemons running (and configured correctly).Databases lives inside the cluster. If you want to create a database, head into the collection tab:image517×511 27.6 KBThen click on “create databse”.image864×573 40.7 KBYou will also have the option their to create a collection which is a set of documents. A collection lives inside a database.That being said, you don’t need to create any database or collection. In MongoDB, they are created automatically when you write the first document in them.Example:I hope this helps.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi Maxime, thanks a lot for the answer.Mmmhh… now I’m really a bit confused. With mysql, I create my database and populate that with tables. I can then define the primary key for a table and so on…I heard that MongoDB only uses JSON. Do these documents consists of JSON formatted data then?",
"username": "Marco_A"
},
{
"code": "",
"text": "Yup, that’s correct.I found this that might help you understand the “equivalent” between the 2.Also, you should really consider our MongoDB Basics course on our University: MongoDB Courses and Trainings | MongoDB University.It’s completely free and it will just take a few hours for you to understand all the basics of MongoDB .Then we also have more advanced courses if you want to learn more: MongoDB Courses and Trainings | MongoDB UniversityCheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "I will check those links, thanks a lot for the information!",
"username": "Marco_A"
},
{
"code": "",
"text": "You are very welcome.If you prefer tutos, we have a bunch of Node.js content as well on the DevHub:https://www.mongodb.com/learn/?text=Node.jsCheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "I do most of the things with mySQL but thanks a lot!",
"username": "Marco_A"
}
] | Db connection with Atlas, dev and prod part (using node+express) | 2021-02-08T17:34:35.128Z | Db connection with Atlas, dev and prod part (using node+express) | 2,687 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "I was under the impression that in development mode schema changes are dictated by the client. However in my app sync didn’t work until I changed the schema server side to match the client, terminated sync and restartedCan someone shed a light on why this happens?",
"username": "One_Log"
},
{
"code": "",
"text": "I’m having the same problem and have opened a support ticket.",
"username": "Nina_Friend"
},
{
"code": "",
"text": "Follow this thread. It solved my problem.",
"username": "Nina_Friend"
},
{
"code": "",
"text": "Thank you Nina! It looks like that’s the solution",
"username": "One_Log"
}
] | Sync not happening until schema changed server side even though development mode is ON | 2021-02-07T16:23:47.911Z | Sync not happening until schema changed server side even though development mode is ON | 2,559 |
null | [
"indexes",
"capacity-planning"
] | [
{
"code": "\"memSizeMB\": 80561mongoduser_dataproduct_datastatic_datamongos{\n \"uIdHash_hashed\" : 2663313408, // 2.7GB\n \"bId\" : 1703297024, // 1.7 G B\n \"_id_\" : 9491111936 // 9.5 GB\n}\n{ \"_id_\" : 865091584, // 0.9 GB\n \"pIdHash_hashed\" : 256389120, 0.3 GB\n \"pIdHash_name\" : 877314048, // 1GB\n \"pid_products\" : 8371068928, // 8.4GB\n \"bId\" : 96018432 // 96MB\n // planning to add index here for 2.8 GB\n},\n{\n \"_id_\" : 36864, // 37 KB\n \"expiry\" : 36864, // 37 KB\n}\n",
"text": "Hi. I had few questions about assessing the performance impact of introducing a new index to one of my collections. My current cluster has machines having \"memSizeMB\": 80561 and each machine hosts 3 mongod processes for corresponding shards. The current value of storage.wiredTiger.engineConfig.cacheSizeGB is 20GB considering some buffer for other mongo processes co-hosted on these machines. We have a single database with three collections, let’s call them user_data , product_data and static_data.What I am looking forward to is establishing a new index based upon a feature requirement for my application. Based on this S0 thread, I have estimated it to be of size ~2.8GB. To decide whether adding this index degrade the performance of my existing cluster or not, I had further inspected the index sizes reported by querying mongos using stats. These already accounted for a total of 24.6GB detailed below per collection:So my questions to follow here are(diving slightly deeper):",
"username": "naman"
},
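A minimal shell sketch for checking the figures discussed in the question (the collection name is one of the question's own):

// Per-index sizes in bytes for a collection
db.user_data.stats().indexSizes

// Current WiredTiger cache usage on a member
db.serverStatus().wiredTiger.cache["bytes currently in the cache"]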
{
"code": "mongod",
"text": "I cannot answer your request, however I want to comment of the following:each machine hosts 3 mongod processes for corresponding shardsHosting multiple mongod instances on a single machine for sharding is a bad system architecture. You would be better off running 1 instance and avoid sharding. Sharding adds overhead. It adds processing overhead as queries have to be routed with mongos. It adds processing overhead as running the config server replica set is not free. It adds storage overhead as the config server has data about chunks location.With 3 instances on the same hardware, you reduce the RAM available for cache available for all of them. The config server also needs RAM for cache. Yes, you potentially distribute the load. But you increase the chance of cache miss, and this is expensive.I do not have any numbers to backup my opinion. Running multiple shards replica sets on the same host sounds wrong. Running multiple mongod instances on the same host sounds simply wrong unless you are developing or testing.",
"username": "steevej"
}
] | Assess the right amount of cacheSize for wiredTiger upon index addition | 2021-02-12T16:08:01.203Z | Assess the right amount of cacheSize for wiredTiger upon index addition | 3,281 |
null | [
"sharding",
"indexes"
] | [
{
"code": "",
"text": "Hi Everybody.I am working with sharding these days, MongoDB 4.4. So far so good!\nI wonder if I can use a partial index for shard-key, considering that in this version\nit could be somehow nullable.Thank you so much for your attention.",
"username": "Francesco_Carucci"
},
{
"code": "",
"text": "Hi @Francesco_Carucci,Welcome to MongoDB community.A shard key entry must be present in any document of the sharded collection otherwise the sharding mechanism will not be able to work and distribute data.A partial index contradicts this assumption as it means some documents will not be present in that index otherwise its not partial…Hope this explains it.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
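A minimal sketch of the constraint (database, collection, and field names are placeholders): the index backing a shard key cannot be a partial index, so a full index on the key is needed before sharding.

// A partial index like this cannot back the shard key
db.orders.createIndex(
  { region: 1 },
  { partialFilterExpression: { region: { $exists: true } } }
)

// Sharding requires a full (non-partial) index on the key
db.orders.createIndex({ region: 1 })
sh.shardCollection("mydb.orders", { region: 1 })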
{
"code": "",
"text": "Thank you so much for your answer.I appreciate.",
"username": "Francesco_Carucci"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using partial indexes in sharded collections v. 4.4 | 2021-02-12T15:09:10.451Z | Using partial indexes in sharded collections v. 4.4 | 1,573 |
null | [
"queries",
"text-search"
] | [
{
"code": "Posts`_id\nusername\ntitle\nbody\ntags\nlikes\nshares\ndate`\n(title, body, tags)",
"text": "HiSo I have a sample collection Posts which I filled with 10 million documents for testing purposes. The schema is as followsThe text index is on (title, body, tags) - a simple search like the following returns in 0msdb.posts.find({$text: {$search: “technology computer phone”}}).limit(50)Of course I realise this is just taking the first 50 documents which match the search stems, then returning them. What I would want is for the more relevant/higher scoring matches to appear first thoughdb.posts.find({$text: {$search: “technology computer phone”}}).sort({score:{$meta:“textScore”}}).limit(50)This query takes much longer, however - about 10 seconds. So I have a few questions:Is there anyway to speed this up in pure MongoDB?For a search like this on 10 million documents, where 1.25 million match the search terms (i.e. 1/8th of the total), for them to be sorted by relevance, what kind of execution time would you expect on Atlas? And what would be the cost of such a configuration? Let’s say if I wanted to reduce that query time down from 10s to 1s",
"username": "Mon_Go"
},
{
"code": "",
"text": "Hi @Mon_Go,Welcome to MongoDB community.To understand the current query performance its best to get explain (“executionStats”) , but I believe it produces an in memory sort before fetching.Have you tried first building the score and then sorting like here:We strongly recommend considering Atlas search as a more robust alternative to text indexing. The metadata sorting and available options is also more efficient there:Normalize or modify the score assigned to a returned document with the boost, constant, embedded, or function operator.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
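A minimal sketch combining both suggestions (collection and search terms taken from the question): project the score, sort on it, and inspect the plan for an in-memory sort.

db.posts.find(
  { $text: { $search: "technology computer phone" } },
  { score: { $meta: "textScore" } }
).sort({ score: { $meta: "textScore" } })
 .limit(50)
 .explain("executionStats")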
] | Sorting by $meta textScore takes 10s - is it possible to return a filtered & sorted text search more quickly? | 2021-02-12T06:27:33.970Z | Sorting by $meta textScore takes 10s - is it possible to return a filtered & sorted text search more quickly? | 3,263 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "I’m working on an app where there are users that create records with a partition key, so that other users aren’t allowed to see it, but admins are supposed to be able to. I tried opening a realm with a null partition key, but I think I misread the documentation and that doesn’t do what I want. I changed the cluster read permission to allow the admin to read all records, but I can’t figure out how to sync it to the app.",
"username": "Weekend_Wings"
},
{
"code": "",
"text": "Hi Weekend_Wings!Right now, there isn’t a good way to sync all realms for an app to a single device. In fact, you generally shouldn’t open more than a few realms at a time for optimal performance. And depending on the amount of data your app puts in each realm, I’m not sure that you’d have enough storage on your admin’s device to store all of that data!If you want to give admins access to all data, you could consider using MongoDB Data Access instead. This lets you view data in your linked MongoDB Atlas cluster (which stores a copy of all of your synced data) using MongoDB queries instead of syncing all of the data to persistent storage on your local device. For your admin use case, you could just query for all of the documents regardless of their partition value. You can take a look in the Realm SDKs section of the docs for guides on how to query with MongoDB Data access in your SDK of choice. Do you think this would fit your use case?",
"username": "Nathan_Contino"
},
{
"code": "const [collection, setCollection] = useState([]);\nuser.mongoClient(\"mongodb-atlas\").db(\"db\").collection(\"Collection\").find().then(res => setCollection(res));\nuseEffect[user, refresh]refresh\"$or\": [\n { \"%%user.custom_data.name\": \"admin\" },\n {\n \"%%true\": {\n \"%function\": {\n \"arguments\": [\n \"%%partition\"\n ],\n \"name\": \"canReadPartition\"\n }\n }\n }\n",
"text": "Thank you, that’s what I ended up doing.For anyone else reading this in the future, I was using react native, so I did:If you’re doing it in a component function, you probably want to wrap it in useEffect with a function as the first argument and [user, refresh] as the second, where refresh is state that can be updated to refresh the data, so you don’t create a request after each render (since updating the state causes a re-render), or you’ll use a lot of data and bandwith and get terrible performance.For an admin to be able to read all records, you need to set the permissions properly. I used the following:",
"username": "Weekend_Wings"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is it possible to sync data through Realm regardless of the partition key? | 2021-02-10T19:01:02.347Z | Is it possible to sync data through Realm regardless of the partition key? | 2,752 |
null | [] | [
{
"code": "",
"text": "Hello! I’m new to MongoDB and I was asking in how to clone a GitHub project… Is there a command you have to do? I haven’t started the setup yet. Is there a way to git clone? Thank you. (link: https://github.com/anautonell/kahoot-clone-nodejs)Thank you.",
"username": "Toni_Delon"
},
{
"code": "",
"text": "Or do I have to only deploy in the menu?",
"username": "Toni_Delon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Git Cloning in the Database | 2021-02-11T18:47:44.590Z | Git Cloning in the Database | 2,304 |
null | [
"java",
"spring-data-odm"
] | [
{
"code": "",
"text": "Hi TeamHow I can execute below query in spring or java driver,db.getCollection(“user”).find( { “userName”: “xyz”} );entire query comes as string and I should execute the query and return the results as Objects.Is there a way to execute the entire query string in spring or Java driver.",
"username": "Prabaharan_Kathiresa"
},
{
"code": "",
"text": "Hi @Prabaharan_Kathiresa and welcome back !Here are a few examples in Java that will probably help you: GitHub - mongodb-developer/java-quick-start: This repository contains code samples for the Java Quick Start blog post series.You can also check my Java blog post series on the DevHub: https://www.mongodb.com/quickstart/java-setup-crud-operations/ (see all the blog post at the bottom of the page).I also have a Spring starter project that you can check here: GitHub - MaBeuLux88/java-spring-boot-mongodb-starter: MongoDB Blog Post: REST APIs with Java, Spring Boot and MongoDBCheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "db.runCommandMongoTemplate",
"text": "Hello @Prabaharan_Kathiresa,I don’t think there is a way to run the whole string as a query using Java or Spring Java APIs. But, you can run native queries using available APIs. For example, using Java Driver you can use the db.runCommand. Here is a post with its usage:There is similar API method with Spring Data MongoDB, using the MongoTemplate class.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thank you @MaBeuLux88 , Blogs are really helpful. In my Scenario complex part is I don’t know the collection name & what Operation(Find or Count) query intends to(and also I should not parse the string to get the collection name and operation).\nie.db.getCollection(“user”).find( { “userName”: “xyz”} );Second time, I may get the query string asdb.getCollection(“employee”).count( { “empId”: “123”} );Is there any libraries that can help me to run shell queries from java/spring?",
"username": "Prabaharan_Kathiresa"
},
{
"code": "",
"text": "@Prasad_Saya, Thank you for the reply. db.runCommand suits me. Following below\ndocumentation, another learning today.Is there any libraries that can help me to run shell queries from Java/Spring?",
"username": "Prabaharan_Kathiresa"
},
{
"code": "db.getCollection(\"books\").find( { author: \"Leo Tolstoy\" } )db.runCommand({ \"find\": \"books\", \"filter\": { \"author\": \"Leo Tolstoy\" } })String strCmd = \"{ 'find': 'books', 'filter': { 'author': 'Leo Tolstoy' } }\";\nDocument bsonCmd = Document.parse(strCmd);\nDocument result = db.runCommand(bsonCmd);\nDocument cursor = (Document) result.get(\"cursor\");\nList<Document> docs = (List<Document>) cursor.get(\"firstBatch\");\ndocs.forEach(System.out::println);\nstrCmd\"books\"\"{ 'author': 'Leo Tolstoy' }\"",
"text": "For the query db.getCollection(\"books\").find( { author: \"Leo Tolstoy\" } ), you can construct a find command query as, for example, db.runCommand({ \"find\": \"books\", \"filter\": { \"author\": \"Leo Tolstoy\" } }).In Java, you can construct it as:In the above code, the variable strCmdis constructed using the collection name and the query filter strings as \"books\" and \"{ 'author': 'Leo Tolstoy' }\" respectively, from the actual shell query.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB native query execution in Spring | 2021-02-11T13:43:02.486Z | MongoDB native query execution in Spring | 21,301 |
null | [
"python"
] | [
{
"code": "",
"text": "Hello, I am saving eye-coordinates into a csv file using Python. Once the application ends, the csv file is saved to my local computer. At the same time I want to export the data (in a csv file) to mongoDB.I am only new to this and I can’t seem to figure it out. Any help would be very appreciated and if you have any questions, ask away.Thanks.",
"username": "Research_Project"
},
{
"code": "import pandas as pd\nfrom pymongo import MongoClient\nimport json\n\ndef mongoimport(csv_path, db_name, coll_name, db_url='localhost', db_port=27000)\n \"\"\" Imports a csv file at path csv_name to a mongo colection\n returns: count of the documants in the new collection\n \"\"\"\n client = MongoClient(db_url, db_port)\n db = client[db_name]",
"text": "or\nBy the way, its importing into the database.",
"username": "DK2021"
},
{
"code": "",
"text": "Thank you very much for your help. I will try this tomorrow and I will let you know how I got on. Thanks again.",
"username": "Research_Project"
}
] | Import CSV file data into MongoDB using Python | 2021-02-11T18:50:28.320Z | Import CSV file data into MongoDB using Python | 24,185 |
null | [
"golang"
] | [
{
"code": "IsTypeZeroer",
"text": "Per the documentation:\nomitempty: If the omitempty struct tag is specified on a field, the field will not be marshalled if it is set to\nthe zero value. By default, a struct field is only considered empty if the field’s type implements the Zeroer\ninterface and the IsZero method returns true. Struct fields of types that do not implement Zeroer are always\nmarshalled as embedded documents. This tag should be used for all slice and map values.Since golang primitives don’t implement Zeroer, passing usual zero values (such as bool(false), int(0), string(\"\")) still get encoded into the bson objects.Is the appropriate path to create a new Registry, registering bsoncodec.DefaultValueEncoders/bsoncodec.DefaultValueDecoders, and then registering my custom Type Encoder for each type I want to implement IsTypeZeroer?",
"username": "Brian_McQueen"
},
{
"code": "Zeroer",
"text": "Hi @Brian_McQueen,Sorry for the late reply. The documentation is implying that fields which are structs will only be considered empty if the type implements Zeroer. Fields with non-struct types like booleans, numbers, maps, slices, and pointers are handled directly by the driver. You can see the code to determine if a field should be considered empty here and an example showing that primitives are automatically handled without any additional code here.",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "I’ve opened a PR to clarify this documentation.",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "/crazypillsI could have sworn omitempty tagged zero-value primitives were still being marshalled, but I’m no longer able to reproduce, so I’m clearly way off.Thanks for the clarification!",
"username": "Brian_McQueen"
}
] | Golang mongo-driver: getting `omitempty` struct tag to evaluate golang primitives | 2021-01-15T01:36:56.478Z | Golang mongo-driver: getting `omitempty` struct tag to evaluate golang primitives | 7,805 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hello Mongo Comunity,Basically I want to lookup a collection inside another collection.The destination of the “from” collection is an array with objects that match foreign/local field.I can place the lookup result in the root of the collection, but when i try put the destination array path it doesnt work.The array shows as empty object (all fields that should be there are gone) only with the “from” collection inside.Am I doing it wrong?Thank you",
"username": "Vasco_Pedro"
},
{
"code": "",
"text": "Hello @Vasco_Pedro, welcome to the MongoDB Community forum Please include an example document and the aggregation query you are working with, so as to get a better idea what you are trying. It is little difficult to visualize what you are trying.",
"username": "Prasad_Saya"
},
{
"code": "db.version()mongo",
"text": "Please include an example document and the aggregation query you are working with, so as to get a better idea what you are trying.Welcome to the MongoDB community forums @Vasco_Pedro!In addition to @Prasad_Saya’s suggestions, it would be helpful to confirm the MongoDB server version you are using (for example, via the output of db.version() in the mongo shell).There have been improvements to aggregation features in successive major releases of MongoDB, so there may be more efficient ways to achieve your desired outcome depending on your MongoDB server version.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you I found a workaround",
"username": "Vasco_Pedro"
}
] | Lookup "as" field doest work with array path | 2021-02-02T19:12:04.155Z | Lookup “as” field doest work with array path | 2,114 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.0.23-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.0.22. The next stable release 4.0.23 will be a recommended upgrade for all 4.0 users.\nFixed in this release:",
"username": "Jon_Streets"
},
{
"code": "",
"text": "Hi @Jon_Streets\nThanks for sharing the latest release note for MongoDB update!",
"username": "Soumyadeep_Mandal"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.0.23-rc0 is released | 2021-02-11T17:14:32.520Z | MongoDB 4.0.23-rc0 is released | 2,439 |
null | [
"monitoring"
] | [
{
"code": "",
"text": "What is command on mongostat?",
"username": "mental_N_A"
},
{
"code": "",
"text": "Hi @mental_N_A\nWelcome to the MongoDB Developer Community Forum!\nPlease have a look at these documentations/guides",
"username": "Soumyadeep_Mandal"
}
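A minimal invocation sketch (host and interval are placeholders):

# Print one row of server statistics every 5 seconds
mongostat --host localhost:27017 5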
] | What is command on mongostat? | 2021-02-11T17:23:01.840Z | What is command on mongostat? | 1,459 |
null | [
"swift"
] | [
{
"code": "RealmResultsListObjectthaw()let unmanagedObject = MyObjectClass(value: someManagedObject)",
"text": "I see 10.5.2 adds the ability to thaw a frozen objectAdd support for “thawing” objects. Realm , Results , List and Object now have thaw() methods which return a live copy of the frozen object. This enables app behvaior where a frozen object can be made live again in order to mutate values. For example, first freezing an object passed into UI view, then thawing the object in the view to update values.Maybe just a little clarification would help (me) on thisreturn a live copyWhat part is a copy? How ‘live’ is it? What thread is it on - the same as the original?If an object is frozen and then thawed, does that create a copy of the object which would be equivalent tolet unmanagedObject = MyObjectClass(value: someManagedObject)If not, what’s going on under the hood.",
"username": "Jay"
},
{
"code": "copy",
"text": "I guess copy is a bit of a misnomer here. What it does is it simplifies the process of looking up an object in a live Realm. It’s a bit more convoluted under the hood, but think of it as using the identifier of the frozen realm/object/collection to look it up in the live Realm. It’s not copying any data and in reality, the live version may very well be different from the frozen one - e.g. if a transaction had been committed in the meantime.",
"username": "nirinchev"
},
{
"code": "try! realm.write {\n realm.add(thawedObject, update: .modified)\n}\n",
"text": "Ok so how does one interact with the thawed object?Suppose - per the doc - that an object is frozen, passed to a different view and then thawed. To which then one of the properties is updated.Is this now an unmanaged object that does not exist in Realm but has all of the same property values (including the primary key).Then to write it outSo then both in memory objects (now managed) point to the same (on disk) object?",
"username": "Jay"
},
{
"code": "",
"text": "A thawed object is just a regular managed Realm object, just resolved on the thread where it’s thawed. All rules for regular objects apply to it as well (i.e. you can’t set properties outside transactions, you can’t move it across threads, it gets automatically updated when the Realm is refreshed, etc.).",
"username": "nirinchev"
},
{
"code": "self.myDog = realm.object(ofType: DogClass. self, forPrimaryKey: \"1234\")self.myDog.freeze\n---> pass to other viewController\nself.myDog.thaw\ntry! realm.write {\n self.myDog.age = 102\n}\n",
"text": "@nirinchevThank you for the additional info. I want to make sure we understand the premise before implementation:So it’s the same object and NOT a copy?Let me use a real world example for clarity.Suppose I have a viewController with a class var of DogClass.self.myDog = realm.object(ofType: DogClass. self, forPrimaryKey: \"1234\")another viewController is instantiated and passed a frozen dogthen I need to modify the dogs age so I thaw it in the other viewControllerWill that write will reflect back to the self.myDog in the first viewController?If not, more clarification will be needed so forgive my continual questions.",
"username": "Jay"
},
{
"code": "thawlet frozenDog = self.myDog.freeze()\n// ---> pass frozenDog to other viewController\n\nlet thawedDog = frozenDog.thaw()\ntry! realm.write {\n thawedDog.age = 102\n}\n\n// If the write succeeded, self.myDog in the first viewController\n// should see its age updated to 102.\n",
"text": "Yes - your example is almost correct, the difference being that thaw returns a live version of the object rather than thawing it in place. So in your second snippet, you’ll need something like:",
"username": "nirinchev"
}
] | Freeze (Frozen Objects) and Thawing | 2021-02-10T21:32:41.431Z | Freeze (Frozen Objects) and Thawing | 6,724 |
[] | [
{
"code": "",
"text": "\nBevy_EventDefaultThumbnail_Chat1081×1081 87.2 KB\nJoin us for a live interview, Thursday, February 25th at 11:00 am CST, by MongoDB Principal Developer Advocate, @Karen_Huaulme with @Asya_Kamsky (MongoDB Principal Engineer), and @Danielle_Monteiro , (TEDx Speaker & MongoDB Champion), to talk about their careers, MongoDB, and much more. The level of expertise these three data technologists bring to the table will make you question why they haven’t published a book series (fingers crossed). This is the official kick-off of our new community, Make It Matter, make sure to join the community to be a part of future events.Ask your questions here—it can be about their career growth, MongoDB, helpful resources they’ve used, databases, etc.It’s going to be engaging, fun, and informative. There will be giveaways that will be tied to a fun activity to kick off the event.See you there!",
"username": "Celina_Zamora"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Ask your questions: For the love of databases: Interview with Asya Kamsky and Dani Monteiro | 2021-02-01T16:22:04.004Z | Ask your questions: For the love of databases: Interview with Asya Kamsky and Dani Monteiro | 5,150 |
|
null | [
"queries",
"dot-net",
"data-modeling"
] | [
{
"code": "",
"text": "Hi there,I am developing a .NET app for iOS and Android. The user will have access to two Realms, one is the user’s personal collection of records and the other is a Realm with a ‘PUBLIC’ partition consisting of records all users have access to. Ideally I would like to package a version of this Realm with the app to avoid a large download (60k+ records) on initial sync.I thought I had this working, I am running an Azure function that downloads a CSV file from a server (not mine), updates the records that have changed by using a Realm logged in with an API key, and then copies the Realm to a location where I can later package it into the app. This means the Realm should be vaguely up to date when the user installs, and only a few hundred records may get updated.The problem I am having is that on app start when I try to sync the Realm with the server, I get the error:Error:User mismatch for client file identifier (IDENT) (ProtocolErrorCode=223)Partition:PUBLICI have not been able to find what this mean, but I assume it is because the Realm file is created with one user (the API key) and is then attempted to be consumed by a different user (the actual user) in the app. This all leads me to the question, is what I am attempting to do at all possible? Any thoughts or suggestions appreciated!Many thanksWill",
"username": "varyamereon"
},
{
"code": "",
"text": "Hey Will,Unfortunately, prepackaging synchronized Realms is not something currently supported. I’ll bring it up with the team and see if there are any major roadblocks to delivering such functionality, but for the time being you’ll need to have each user download the file independently.",
"username": "nirinchev"
},
{
"code": "",
"text": "Righto, thanks @nirinchev!Will",
"username": "varyamereon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Package common database with app | 2021-02-11T13:37:46.114Z | Package common database with app | 2,590 |
null | [] | [
{
"code": "",
"text": "In this meetup, Andrew Morgan, a Staff Engineer at MongoDB, will walk you through the thinking, architecture and design patterns used in building a Mobile Chat App on iOS using MongoDB Realm Sync. The Chat app is used as an example, but the principles can be applied to any mobile app where sync is required. Andrew will focus on the data architecture, both the schema and the partitioning strategy used and after this session, you will come away with the knowledge needed to design an efficient, performant, and robust data architecture for your own mobile app.Register Free now at Realm Sync in use - building and architecting a Mobile Chat App - MongoDB Atlas App Services & Realm - MongoDB Developer Community Forums and join us on February 17th!!Little secret! There might be cool Realm Swag for all attendees!!",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Feb 17th Meetup - Realm Sync in use - building and architecting a Mobile Chat App | 2021-02-11T11:00:55.121Z | Feb 17th Meetup - Realm Sync in use - building and architecting a Mobile Chat App | 1,650 |
null | [
"replication"
] | [
{
"code": "",
"text": "We have a replica set and the secondary and primary are in sync and all is good. We had this running to days without any issues.After 7pm each evening we stop the secondary database instance, backup the server and then start it again. The secondary database instance then eventually catches up. This has worked for the last few months without any issues.Now, when the secondary database instance is restarted we get network timeout messages over and over again and the secondary gets further and further behind.We have no idea what causes these timeouts or how the timeout can be increased.ThanksIan",
"username": "Ian_Hannah"
},
{
"code": "",
"text": "Hi @Ian_Hannah\nAny advice on how to resolve this issue?\nMany thanks,\nJuan",
"username": "juanroy"
}
] | Starting a secondary database in a replica set generates Network Interface Exceeded Time Limit errors | 2020-03-20T13:43:00.637Z | Starting a secondary database in a replica set generates Network Interface Exceeded Time Limit errors | 2,356 |
null | [
"aggregation"
] | [
{
"code": "[ \n {\n $facet: {\n query1: [pipeline_1],\n query2: [pipeline_2],\n query3: [pipeline_3]\n ...\n query_n: [pipeline_n]\n },\n },\n {\n $merge:{ into: some_collection}\n }\n]\n",
"text": "Hi All,I am getting memory size limit error while running multiple sub-pipelines in $facet. Can someone help me on this issue.\nScenario: I have a crown job which runs once a day. I want to execute a pipeline in it against a collection with millions of documents.I tried allowDiskUse=true, But still giving same error.What can be the work around on this. Please help.",
"username": "Pradip_Kumar"
},
{
"code": "allowDiskUse: true",
"text": "Hi @Pradip_Kumar,How did you specified the allowDiskUse: true?AFAIK, this stage should allow it to be used.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "tage should allow it to be usI used below syntax from CLI:db.collection_name.aggregate( [pipeline], {allowDiskUse: true}}and from pymango i am using below syntax :db_client.aggregate(\ndb_name, collection_name, pipeline, allow_disk_use=True)It is not working in both cases.",
"username": "Pradip_Kumar"
},
{
"code": "allowDiskUse",
"text": "@Pradip_Kumar,According to PyMongo the parameter name is still called allowDiskUse:\nhttps://pymongo.readthedocs.io/en/stable/api/pymongo/collection.html#pymongo.collection.Collection.aggregateWhat I suspect is that the error is not rrelated to the 100MB per stage limit, but maybe to the output of $facet object that we get from your agg…This object is not suppose to cross 16MB actually.",
"username": "Pavel_Duchovny"
},
{
"code": "MongoError: document constructed by $facet is 104857822 bytes, which exceeds the limit of 104857600 bytes\nCollection.aggreagate([…]).allowDiskUse(true).then(result => {… [\n {\n \"$facet\": {\n \"emails\": [\n {\n \"$project\": {\n _id: false,\n \"k\": \"$email\",\n \"v\": \"1\"\n }\n }\n ]\n }\n },\n {\n \"$project\": {\n \"emails\": {\n $arrayToObject: \"$emails\"\n }\n }\n }\n ]\n",
"text": "I am running into the this same exact issue. My error is:I understand the 100M state limit, but shouldn’t allowDiskUse:true address that? I am using mongoose and am passing that parameter like so:\nCollection.aggreagate([…]).allowDiskUse(true).then(result => {…You seem to suggest that the issue has to do with the 16MB doc limit. Why, then, does is mention the “the limit of 104857600 bytes” which is about 100MB? I don’t see it complaining about 16 MB anywhere.My aggregation looks like this:",
"username": "Zarif_Alimov"
}
] | MongoError: document constructed by $facet is 104860008 bytes, which exceeds the limit of 104857600 bytes | 2021-01-25T07:03:18.818Z | MongoError: document constructed by $facet is 104860008 bytes, which exceeds the limit of 104857600 bytes | 5,493 |
[
"vscode"
] | [
{
"code": "Unknown or unsupported “disabled” transport to “disabled:” address",
"text": "Unknown or unsupported “disabled” transport to “disabled:” addressimage1296×674 64.2 KBimage745×253 20 KBWhen press enter, show this message:\n",
"username": "Taffarel_Xavier"
},
{
"code": "",
"text": "Hi @Taffarel_Xavier. Recently, someone else reported a similar issue on Github: Unable to connect to local mongodb on WSL2 · Issue #253 · mongodb-js/vscode · GitHub. Are you running VS Code in WSL? If not, can you give us some more details?",
"username": "Massimiliano_Marcon"
}
] | Error Extension VS CODE | 2021-02-11T04:20:32.975Z | Error Extension VS CODE | 3,027 |
|
null | [
"security"
] | [
{
"code": "",
"text": "anyway to know if the account password is change? will this write to table ? or mongodb log when password change?",
"username": "soon_yu"
},
{
"code": "admin.system.userscredentials",
"text": "Hi @soon_yu and welcome in the MongoDB Community !Users in a MongoDB server are stored in the special admin.system.users collection. I tried to open a Change Stream against this collection so I could monitor the changes happening in this collection, but this didn’t work as Change Streams aren’t supported on the special collections.That being said, you could retrieve each user from this collection, calculate a checksum of each of the credentials subdocument and store this in a collection. You could then run this script every X minutes to verify if the checksums are still the same or not.If the checksum is different, then it means the password has been changed.I don’t really have a better idea for now .Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
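A rough legacy mongo shell sketch of this checksum idea (the audit.passwordAudit collection and its fields are made-up names; hex_md5 is the legacy shell's built-in hashing helper):

var users = db.getSiblingDB("admin").system.users.find().toArray();
users.forEach(function (u) {
  var checksum = hex_md5(JSON.stringify(u.credentials));
  var audit = db.getSiblingDB("audit").passwordAudit;
  var prev = audit.findOne({ _id: u._id });
  if (prev && prev.checksum !== checksum) {
    print("credentials changed for user: " + u.user);
  }
  // remember the latest checksum for the next run
  audit.updateOne(
    { _id: u._id },
    { $set: { checksum: checksum, checkedAt: new Date() } },
    { upsert: true }
  );
});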
{
"code": "",
"text": "Hi @soon_yu,To addup to maxs idea, you can consider looking into our enterprise server auditing mechanism for userUpdate events:https://docs.mongodb.com/manual/reference/audit-message/#audit-event-actions-details-and-resultsThis will let you auditing user password changes.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
] | How to know who changed an account password? | 2021-02-10T04:20:32.681Z | How to know who changed an account password? | 2,876 |
null | [
"aggregation",
"queries"
] | [
{
"code": "[\n { _id: 1,obj: { a: 123, b: \"ABC\" } }\n]\n[\n { _id: 1, arr: [{ a: 123, b: \"ABC\" }, { a: 234, b: \"BCD\" }] },\n { _id: 2, arr: [{ a: 123, b: \"BCD\" }, { a: 234, b: \"ABC\" }] }\n]\nobjarr_id: 1$in$expr$matchdb.one.aggregate([\n {\n $lookup: {\n from: \"two\",\n let: { obj: \"$obj\" },\n pipeline: [\n { $match: { $expr: { $in: [\"$$obj\", \"$arr\"] } } }\n ],\n as: \"matchFromTwo\"\n }\n }\n])\nobj[\n { _id: 1,obj: { b: \"ABC\", a: 123 } }\n]\n[\n { _id: 1, arr: [{ a: 123, b: \"ABC\" }, { a: 234, b: \"BCD\" }] },\n { _id: 2, arr: [{ a: 123, b: \"BCD\" }, { a: 234, b: \"ABC\" }] }\n]\nobjarrarr$anddb.one.aggregate([\n {\n $lookup: {\n from: \"two\",\n let: { obj: \"$obj\" },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n { $in: [\"$$obj.a\", \"$arr.a\"] },\n { $in: [\"$$obj.b\", \"$arr.b\"] }\n ]\n }\n }\n }\n ],\n as: \"matchFromTwo\"\n }\n }\n])\n$elemMatch",
"text": "Collection One:Collection Two:Requirements: Select matching documents from collection two where obj of collection one is in arr of collection two, and should match first (_id: 1) document,1) Try (Working):I tried using $in condition in $expr in $match stage, and this is working perfectly,Playground2) Try (Not Working):This will fail when order of fields in object are different, like i have swap both fields (a,b) position in collection one of obj:\nCollection One:Collection Two:PlaygroundThis is not working because of fields position are different in obj and arr, this will also not work when other side arr objects position are different.3) Try (Not Working):I have tried other way also using separate condition of each field in $and,PlaygroundThis will not match and condition of array in particular element like $elemMatch, this returns both elements,Is there any other way to deal with this kind of conditions?",
"username": "turivishal"
},
{
"code": "db.one.aggregate([\n {\n $match: {\n \"_id\": ObjectId(\"5a934e000102030405000000\")\n }\n },\n {\n $lookup: {\n from: \"two\",\n pipeline: [\n {\n $match: {\n arr: {\n $elemMatch: {\n a: 123,\n b: \"BCD\",\n \n },\n \n },\n \n },\n \n }\n ],\n as: \"matchFromTwo\"\n }\n }\n])\n",
"text": "Hi @turivishal,Would the following aggregation work for you:I tried it on your playground looks sufficient.Please let me know if you have any additional questions.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "obj",
"text": "Thanks for your reply,Actually i want to match condition dynamically, that is why i have mentioned how to match internal object is in array or not, there are many documents in collection one and i want to join with collection two on the base of obj field.",
"username": "turivishal"
},
{
"code": "db.one.aggregate([{$lookup: {\n from: 'two',\n 'let': {\n x: '$obj.a',\n y: '$obj.b'\n },\n pipeline: [\n {\n $match: { $expr : {\n $in: [{\n a: '$$x',\n b: '$$y'\n },\"$arr\"]\n }\n }\n }\n ],\n as: 'matchFromTwo'\n}}])\n[ { _id: ObjectId(\"601feb0f7d5dd662646de63f\"),\n obj: { b: 'ABC', a: 123 },\n matchFromTwo: \n [ { _id: ObjectId(\"601feb267d5dd662646de640\"),\n arr: [ { a: 123, b: 'ABC' }, { a: 234, b: 'BCD' } ] } ] } ]\n",
"text": "@turivishal,Not sure I follow based on the provided examples . But if my imagination understand correctly , would that be what you are looking for:It seems to yield one match:Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "arr[\n { _id: 1,obj: { b: \"ABC\", a: 123 } }\n]\n[\n { _id: 1, arr: [{ b: 123, a: \"ABC\" }, { a: 234, b: \"BCD\" }] },\n { _id: 2, arr: [{ a: 123, b: \"BCD\" }, { a: 234, b: \"ABC\" }] }\n]\ndb.one.aggregate([{$lookup: {\n from: 'two',\n 'let': {\n x: '$obj.a',\n y: '$obj.b'\n },\n pipeline: [\n {\n $match: { $expr : {\n $in: [{\n a: '$$x',\n b: '$$y'\n },\"$arr\"] // actual array value is [{ b: 123, a: \"ABC\" }, { a: 234, b: \"BCD\" }]\n }\n }\n }\n ],\n as: 'matchFromTwo'\n}}])\narr{a,b}arr$in",
"text": "This is a way we can secure from object side, but what if arr contains different position, like taking your example:Collection One:Collection Two:Query:PlaygroundThis will not work because position of fields in arr are different. we are searching for {a,b} but what if its in different position in arr [{b,a}], so $in will fail.",
"username": "turivishal"
},
{
"code": "",
"text": "You can add a $or of both combinations right?",
"username": "Pavel_Duchovny"
},
{
"code": "$or$expr: {\n $or: [\n { $in: [{ a: \"$$x\", b: \"$$y\" }, \"$arr\"] },\n { $in: [{ b: \"$$y\", a: \"$$x\" }, \"$arr\"] }\n ]\n}\n",
"text": "Yes thanks that would help, I was never thought of $or condition,Thanks again.",
"username": "turivishal"
},
{
"code": "",
"text": "I would also take a look at how many documents have the a different order. May be it is worth to reorder if only a few exceptions. Why is there exceptions? In principles, you have an API that creates and updates your documents. I would wish they all do the manipulation in a consistent order. Or would work, but it might be more efficient to have consistency and use a single $in.",
"username": "steevej"
},
{
"code": "$inTwo.updateOne({ _id: id }, { $push: { arr: { a: 123, b: \"ABC\" } } });\n",
"text": "Thanks @steevej,We have corrected API but there is no consistency in old documents, we have to just make sure response should 100% accurate, we are planning to reorder old documents and user $in condition only,I want to confirm that if i push object in array from mongoose, node.js,Does MongoDB guaranty it will insert in same order?\nor I have to ask this same question to mongoose support?",
"username": "turivishal"
},
{
"code": "{ a : ..., b : ....}{ b: ... , a : ...}",
"text": "@turivishal,I beilive the order should be preserved on the Server. I am not certain about your debugging or visual tools which mightnot gurantee the order of fields.From server perspective I believe there is a difference between { a : ..., b : ....} and { b: ... , a : ...} as those are considered “different” documents.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "FYI @steevej and @turivishal:How to update documents in MongoDB. How to update a single document in MongoDB. How to update multiple documents in MongoDB. How to update all documents in MongoDB. How to update fields in documents in MongoDB. How to replace documents.",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_Duchovny Thank you for the confirmation.",
"username": "turivishal"
},
{
"code": "Two.updateOne({ _id: id }, { $push: { arr: { a: 123, b: \"ABC\" } } });\nObjectnode> { b: 2, a: 1 }\n{ b: 2, a: 1 }\n\n> { b: 2, a: 1, 123: \"oops\" }\n{ '123': 'oops', b: 2, a: 1 }\n\n> { b: 2, a: 1, \"123\": \"oops\" }\n{ '123': 'oops', b: 2, a: 1 }\n",
"text": "I want to confirm that if i push object in array from mongoose, node.js,Does MongoDB guaranty it will insert in same order?Hi @turivishal,As @Pavel_Duchovny noted, the MongoDB server will preserve the order of fields. However, a caveat to add is “as provided by the driver” because some data structures (like JavaScript Objects) are not guaranteed to be order-preserving in all cases.ES6+ generally preserves the order of fields aside from the special case of object keys that can be represented as ints, which will be sorted in ascending order before any string keys.Example in the node shell:For more background, see discussion on Does JavaScript guarantee object property order?.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is there any way to combine $elemMatch and $expr or how to match internal object is in array or not? | 2021-02-07T09:21:52.028Z | Is there any way to combine $elemMatch and $expr or how to match internal object is in array or not? | 12,309 |
null | [
"aggregation",
"queries",
"performance"
] | [
{
"code": "",
"text": "I am using mongo 4.2 versionI have read wild card index and its restrictions WildCard Index restrictionsBut still need to know few more points about the restrictions in wild card index\nI have created wild card index for the whole document like“db.collection.createIndex({”$**:1\"})\nWhen I tried to check its executionPlan I found the following resultsFor all aggregate queries having $unwind in that queryFrom the above experiment I could observe that for any aggregate queries wild card will not work. Is my understanding correct. If yes can someone explain the above behavior?",
"username": "Sowmya_LR"
},
{
"code": "$match$sort$unwind",
"text": "Hi @Sowmya_LR,Wellcome to MongoDB Community.For aggregations there are specific restrictions. However, if you want the aggregation to use the index you need to perform either a $match or $sort stages in the beginning of the query.If you are starting your aggregation with the $unwind stage the aggregation will not be able to utilise an index and will unfold everything with collscan and memory operations.Can you share the aggregation with us?Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_DuchovnyThank you for the reply. The pipeline stages will be as followsdb.collection.aggregate([\n{$unwind: “$Instance”},\n{$match:{“Instance.field”:“value”}}\n])",
"username": "Sowmya_LR"
},
{
"code": " db.collection.aggregate([{$match:{“Instance.field”:“value”},\n{$unwind: “$Instance”},\n{$match:{“Instance.field”:“value”}}\n])\n",
"text": "Hi @Sowmya_LR,This pipeline cannot use the index as unwind is first stage.Why not do the following:This will allow the query to first filter on index and then do the rest.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
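To verify that the rewritten pipeline can actually use the wildcard index, one way is to inspect the plan in the shell (collection and field names follow the examples in this thread):

db.collection.explain("executionStats").aggregate([
  { $match: { "Instance.field": "value" } },  // index-eligible first stage
  { $unwind: "$Instance" },
  { $match: { "Instance.field": "value" } }   // re-filter each unwound element
])
// An IXSCAN stage on the $** index in the winning plan confirms index use.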
] | Wild card index and aggregation pipeline stages | 2021-02-10T12:23:52.141Z | Wild card index and aggregation pipeline stages | 3,394 |
null | [
"crud"
] | [
{
"code": "const filter = {_id: req.body.id};\nconst update = {player2: req.body.player2};\n\nconst updatedGame = await Game.findOneAndUpdate(filter, update);\n",
"text": "Hi, I am wondering how I update multiple fields at the same time when using findOneAndUpdate. Currently I have:Where Game is a Mongoose schema. What I want to accomplish is to update more than just the player2 field in the Game model in the same call. How do I do this?",
"username": "atk3"
},
{
"code": "",
"text": "Turns out this is very easy, all you gotta do is make the second argument (update) an object that takes all the changes you are trying to make. Sorry about pulling the trigger too early on this one. It is very easy to do.",
"username": "atk3"
},
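A minimal sketch of that multi-field version (Mongoose; the winner field is an illustrative addition, not from the original schema):

const filter = { _id: req.body.id };
const update = { $set: { player2: req.body.player2, winner: req.body.winner } };

// { new: true } makes Mongoose return the document after the update is applied.
const updatedGame = await Game.findOneAndUpdate(filter, update, { new: true });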
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How do I update multiple fields in a document while using findOneAndUpdate? | 2021-02-11T02:43:19.227Z | How do I update multiple fields in a document while using findOneAndUpdate? | 9,938 |
null | [
"kotlin",
"c-driver"
] | [
{
"code": "",
"text": "Redirected here from: GitHub - mongodb/mongo-c-driver: The Official MongoDB driver for C languageRepo at: GitHub - exertionriver/knMongoc: Demonstration of libmongoc (MongoDB C driver) CRUD tests using Kotlin/Native cInteropPlease let me know your thoughts…!Thank you,\nIan",
"username": "Exertion_River"
},
{
"code": "",
"text": "Hi @Exertion_River - looks great! I’m interested in how you’re using it for your projects.Welcome to the MongoDB community!",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "Hi @Sheeri_Cabral, thanks for your reply…! The code linked above is an extract from an Ubuntu console project I am working on that generates local clock-time astronomical aspect data.I think MongoDB could eventually allow preloading of forecast data minutes or hours ahead of current local time in order to show future-trend information.Putting together a working example of Kotlin/Native cInterop also took me a little while and a few web searches to figure out, so I wanted to post something useful for folks to explore and retool as desired.Thank you,\nIan",
"username": "Exertion_River"
},
{
"code": "",
"text": "Wow, what you’re doing is so cool!",
"username": "Sheeri_Cabral"
}
] | knMongoc : Demonstration of libmongoc (MongoDB C driver) CRUD tests using Kotlin/Native cInterop | 2021-02-07T02:41:12.815Z | knMongoc : Demonstration of libmongoc (MongoDB C driver) CRUD tests using Kotlin/Native cInterop | 3,924 |
null | [
"kotlin"
] | [
{
"code": "val syncedUsers : RealmResults<User> = realm.where<User>().sort(\"_id\").findAll()\n val syncedUser : User? = syncedUsers.getOrNull(0) // since there might be no user objects in the results, default to \"null\"\n",
"text": "Hi, I am doing the android SDK tutorial and I have gone through all steps for creating the Task Tracker appWhen I create a user and login successfully, I cannot see anything in the list of projects just an empty screenI pinpointed the problem in the following two lines in the ProjectActivity.kt in private function setUpRecyclerView(realm: Realm)the query results return an empty list, so it cannot find any user in my user realm, therefore the recyclerView cannot be populated. I also downloaded the final branch of the task tracker app in case I did a mistake with the tutorial, as suggested and the same issue occurs. Any ideas?Thank you in advance.",
"username": "Konstantinos_Markaki"
},
{
"code": "tasktrackerrealm-cli",
"text": "Hi Konstantinos!Sorry that I missed this before. Did you follow the full “Set up the Task Tracker Tutorial Backend” instructions before you started the Android Tutorial? There are a number of rules, functions, and triggers that are required for the Android app to work, and the Backend tutorial helps set all of those up. Specifically it sounds like you might be lacking the triggers that instantiate the custom user data for new users, which is used to control which projects they can access.Even if you did follow the backend tutorial already, it might be easiest for you to try re-importing (you can just follow the same instructions as before after deleting your existing Realm app) the tasktracker backend app with realm-cli because it’s possible that something went wrong the first time when you tried importing the backend settings.",
"username": "Nathan_Contino"
}
] | Problem with Tutorial Task Tracker app | 2021-02-01T21:24:16.023Z | Problem with Tutorial Task Tracker app | 2,650 |
null | [
"installation"
] | [
{
"code": "",
"text": "I am new to MongoDB and I am trying to install it in Ubuntu 18.04. I am following the instructions for installation and the command ‘sudo apt-get install -y mongodb-org’ is giving the following result.Reading package lists… Done\nBuilding dependency tree\nReading state information… Done\nE: Unable to locate package mongodb-org",
"username": "Shashank_A.C"
},
{
"code": "repo.mongodb.orgIgn:1 http://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.4 InRelease\nGet:2 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]\nGet:3 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]\nGet:4 http://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.4 Release [5391 B] \nGet:5 http://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.4 Release.gpg [801 B] \nGet:6 http://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.4/multiverse amd64 Packages [7139 B] \nGet:7 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [237 kB]\nGet:8 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB] \nGet:9 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB] \nGet:10 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages [186 kB] \nGet:11 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [15.3 kB]\nGet:12 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [1816 kB]\nGet:13 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages [11.3 MB] \nGet:14 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [1372 kB] \nGet:15 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages [1344 kB] \nGet:16 http://archive.ubuntu.com/ubuntu bionic/restricted amd64 Packages [13.5 kB] \nGet:17 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [53.8 kB] \nGet:18 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [2244 kB] \nGet:19 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [2136 kB] \nGet:20 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [266 kB] \nGet:21 http://archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [11.3 kB] \nGet:22 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [11.4 kB] \nFetched 21.6 MB in 8s (2606 kB/s) \nReading package lists... Done\ncat /etc/apt/sources.list.d/mongodb-org-4.4.listdeb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.4 multiverse",
"text": "After adding the repo did the apt-get update complete successfully ?It should look similar to the below, specifically the lines with repo.mongodb.orgDouble check the list file was created sucessfully:\ncat /etc/apt/sources.list.d/mongodb-org-4.4.listOutput should look like:\ndeb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.4 multiverse",
"username": "chris"
},
{
"code": "E: Unable to locate package mongodb-org",
"text": "@chris hello,i have everything right, but i get this error :\nE: Unable to locate package mongodb-orgmy ubuntu version is :\n# lsb_release -a\nNo LSB modules are available.\nDistributor ID: Ubuntu\nDescription: Ubuntu 18.04.5 LTS\nRelease: 18.04\nCodename: bionicliste file is :\n# cat /etc/apt/sources.list.d/mongodb-org-4.4.list\ndeb [ arch=amd64,arm64 ] MongoDB Repositories bionic/mongodb-org/4.4 multiverse",
"username": "thomas_boulenger"
}
] | E: Unable to locate package mongodb-org | 2020-12-23T19:21:05.066Z | E: Unable to locate package mongodb-org | 11,699 |
null | [] | [
{
"code": "",
"text": "Hi Community,is it possible to display my JSON file directly in the browser using a URL address?\nFor example, if I open this address:\nhttps://jsonplaceholder.typicode.com/todos/1,\nI can see the JSON file directly.So would it be possible to call something like this?:\nmongodb://name:pwd.mydatabase.json",
"username": "BG_G"
},
{
"code": "",
"text": "you can use “API” services.",
"username": "DK2021"
},
{
"code": "exports = function(payload, response) {\n const coll = context.services.get(\"mongodb-atlas\").db(\"test\").collection(\"users\");\n return coll.find().toArray();\n};\ntest.users",
"text": "Hi @BG_G and welcome in the MongoDB Community !If you cluster is in Atlas, you can achieve this by creating a REST API in MongoDB Realm.image835×365 29.9 KBimage1392×1056 96.4 KBimage955×747 43.8 KBimage1042×965 58.9 KBimage1056×337 30.6 KBimage499×629 27.4 KBimage1038×868 60.6 KBimage1240×157 30.4 KBAnd of course, my piece of JS code is super basic but you can make it as complicated as you like, retrieve query parameters, pass them to the find query param, sort the results, etc.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
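A small extension sketch of the function above, reading a query parameter so a URL like ...?limit=5 controls the result size (payload.query holds the GET parameters; same test.users collection as above):

exports = function(payload, response) {
  const limit = parseInt((payload.query && payload.query.limit) || "10", 10);
  const coll = context.services.get("mongodb-atlas").db("test").collection("users");
  return coll.find().limit(limit).toArray();
};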
] | View JSON Data in any Browser, with URL | 2021-02-08T20:42:21.980Z | View JSON Data in any Browser, with URL | 5,945 |
null | [
"dot-net",
"app-services-user-auth"
] | [
{
"code": "",
"text": "I have 2 MongoDB Dot-net Syncd projects already working.I created a new Cluster with a new Realm App.\nSetup Network access as the other apps.\nAdded database user\nSetup authentication for anonymous and user-pass\nCreate the App.taskApp from the AppId (hospicecare-wnzsw)\nAll Models and schemas validate.On running the Application, it hangs at either anonymous or user-pass login stage:return app.CurrentUser ?? await app.LogInAsync(Credentials.Anonymous());\nor\nreturn app.CurrentUser ?? await app.LogInAsync(Credentials.EmailPassword(username, password));Logs show no error:\nOK Feb 07 9:59:36+00:00 378ms Authenticationlogin local-userpass\nHowever if I use an invalid userrname, the Logs shows “invalid password error” but never returns.I’ve spent a day comparing the other apps but I’m jabberflastered.\nWhat could be I be missing?Interestingly, I can connect to the same database and populate it with data from another app in another cluster using the same AppId.Any suggestions would be appreciated, I’m on the verge of scrapping the project and starting over.",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "abberflasteredHi Richard,I’d like to help you, but I need to see more of your code. Could it be possible for you to share the project with me? If it has sensitive information, I can provide you with a link to our secure upload tool.Andrea",
"username": "Andrea_Catalini"
},
{
"code": "",
"text": "Hi Andrea\nIt should be possible, I have created another cluster and RealmApp mirroring the original.\nSame problem - LoginAsync Hangs.\nSo I have the first app which I have ‘gutted’. There’s only one Collection with simple schema.\nI can simplify it and get it to you.First, a rather horrible idea: Two Other Synced apps I have are working fine.\nHowever, these were both created a couple of months ago, probably earlier than Realm.beta.2 and Atlas 4.2.\nThe new projects have both been created using Atlas 4.4 and Realm beta.6. Could the issue be with creating Apps ground up with latest Realm versions?btw I upgraded the naughty apps to realm.10.0.1 and they still hang.I can’t help thinking I’m missing something embarrassingly simple, but the fact that the methods appear to be working and giving the expected logs is suspicious.Please send me the upload link and I’ll prepare the Application.\nThanks",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "Yes, what you describe sounds a bit suspicious. It’s a good idea to take a look at it.\nHere it’s the upload link to securely share your project with us.",
"username": "Andrea_Catalini"
},
{
"code": "",
"text": "That was easy.\nThe VS 2019 project interesting files are:App.cs\nMainWindow.cs (calls RealmConnector.Login method)\nRealmConnector. csI inadvertently left a temporary username and password in but it’s just test stuff for this app.Rich",
"username": "Richard_Fairall"
},
{
"code": "MainWindow.OnLoadedConnectRealm().Resultclass MainWindow : Window\n{\n // ... you vars and other methods\n bool IsBusy = false;\n\n public void OnLoaded(object sender, RoutedEventArgs e)\n {\n MainFrame.Content = \" Opening Realm Database\";\n\n // pseudo code\n yourMessagingSystem.notificationOfInterest += SetContent;\n \n _ = ConnectRealm();\n }\n\n private async Task<bool> ConnectRealm()\n {\n try\n {\n IsBusy = true;\n var user = App.taskApp.CurrentUser;\n if (user == null)\n {\n user = await RealmConnector.GetOrLoginAnonUser(App.taskApp);\n }\n var dlg = new MessageYesNoWindow(\"LoginAsync\", \"Success\");\n\n //pseudo call to messaging system\n yourMessagingSystem.Notify(...);\n }\n catch (Exception ex)\n {\n App.ProcessException(\"MainWindow.ConnectRealm\", ex.InnerException.Message);\n }\n finally\n {\n IsBusy = false;\n }\n }\n\n private void SetContent()\n {\n MainFrame.Content = \"\";\n }\n\n // more of your code\n\n private void Close_Click(object sender, RoutedEventArgs e)\n {\n // more pseudo code\n yourMessagingSystem.notificationOfInterest -= SetContent\n \n // ...\n }\n}\nclass MainWindow : Window\n{\n // ... you vars and other methods\n\n public void OnLoaded(object sender, RoutedEventArgs e)\n {\n MainFrame.Content = \" Opening Realm Database\";\n var connected = Task.Run(async () => await ConnectRealm()).Result;\n if (connected)\n {\n MainFrame.Content = \"\";\n }\n }\n\n // more of your code\n}\n",
"text": "Hi Richard,I took a look at the code you sent me.\nIn the method MainWindow.OnLoaded at line 24 you do ConnectRealm().Result where ConnectRealm is an async method. By using Task.Result to run an async method synchronously, you’re blocking the UI thread. However, due to the way the runtime invokes continuations after an await, the execution context in ConnectRealm will try to return to the main thread. This causes a deadlock where the main thread waits for ConnectRealm to complete and ConnectRealm waits for the main thread to be yielded to continue execution after the first await.You have 2 options:1- (Recommended) Call the async method without awaiting it. And in order to be notified when it’s done you can use any messaging system that you prefer or you could manually set a boolean telling you when it’s done and at specific events check it and take the needed action. I mixed the 2 options below, I hope it’s clear:2- (Not recommended) If you really can’t do this asynchronously, then you can use a trick where you spin up a background task to do the work and .Result that one. It’s generally discouraged because you’ll be blocking the main thread for a non-deterministic amount of time which can result in poor user experience. You should also be careful not to open Realm instances in the background method because those won’t be usable on the main thread.\nThe code would look something like this:Hope this helps,Andrea",
"username": "Andrea_Catalini"
},
{
"code": "",
"text": "I see what you mean but the use of .Result was a late debug attempt to see what’s going on.\nPlease remover the .Result and you’ll have what still fails…These are the calling methods I have used over the past year, without using Result, and they fail now:In MainWindow, it just calls ConnectRealm(); (no if(ConnectRealm …)var user = App.taskApp.CurrentUser;\nif (user == null)\n{\nuser = await RealmConnector.GetOrLoginAnonUser(App.taskApp);\n}and in RealmConnector:public static async Task LoginRealmUser(Realms.Sync.App taskApp)\n{\nreturn taskApp.CurrentUser ?? await taskApp.LogInAsync(Credentials.EmailPassword(username, password));\n}If you try this then it will fail in the code I provided.",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "Could you be more specific in what you mean by “it will fail”?\nDo you mean that you are able to connect to the database but then you aren’t able to perform actions on it? Or something else?",
"username": "Andrea_Catalini"
},
{
"code": "",
"text": "Async calls to LoginAsync never returns, the program stalls, no User is returned.\nRegisterUserAsync also stalls.",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "Hi Andrea\nI spent days staring at the code comparing the various apps, trying not to get egg on my face.\nThanks for your all your efforts, the difference was the ConnectRealm method which for various reasons I converted to Task bool instead of void. Aargh 4 days!\nBut we got there, and at least I can now knock up a cluster and Apps like a pro.\n“If it ain’t broke - then don’t fix it”.\nThanks again\nYours Humbly\nRich",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "Hi Richard,I’m glad to hear that you discovered what the issue was and you’re now unblocked.\nHappy coding! Regards,\nAndrea",
"username": "Andrea_Catalini"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Syncd MongoDB Realm Applications hangs at LoginAsync with no errors | 2021-02-07T10:25:51.911Z | Syncd MongoDB Realm Applications hangs at LoginAsync with no errors | 3,615 |
null | [
"react-native"
] | [
{
"code": "",
"text": "I wanted to know if its possible to use Realm with redux on react-native?",
"username": "Tony_Ngomana"
},
{
"code": "",
"text": "Hi @Tony_Ngomana,I don’t see a reason why not.Have you checked our Realm react-native SDK:Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "useRealmuseRealmredux-thunkuserpartitionValueimport Realm from \"realm\";\nimport { getRealmApp } from \"../functions/realmConfig\";\nimport { ItemSchema } from \"./itemSchema\";\n\nconst app = getRealmApp();\n\nconst user: any = Realm.User;\n\nconst useConfig = {\n schema: [ItemSchema], //other schema will be added in the future\n sync: {\n user: user.apiKeys,\n partitionValue: user.name,\n },\n};\n\nexport const useRealm = Realm.open({\n schema: [ItemSchema],\n sync: {\n user: user.prototype,\n partitionValue: app.currentUser?.id!,\n },\n});\n",
"text": "Yes i have the problem is i’m building a massive app and react context isn’t the right tool for the app. I opted to use redux. But when using it i stumbled upon an error when i tried to export useRealm. I need to use useRealm in a redux-thunk function but however my app won’t compile because of an error caused by user and partitionValue.\nmy code below:",
"username": "Tony_Ngomana"
},
{
"code": "Realm.UserclassRealm.User.prototypeconst openSyncedRealm = async () => {\n const app = getRealmApp();\n const user = await app.logIn(Realm.Credentials.anonymous());\n const realm = await Realm.open({\n schema: [ItemSchema],\n sync: {\n user,\n partitionValue: user.id,\n error: (e) => {\n console.log(e);\n },\n },\n });\n\n return realm;\n};\n",
"text": "Hello Tony,I would highly recommend against putting realm collections and/or items in a redux store, as Realm objects are self-mutating objects, and therefor wont play well with the principles of redux.That said, there are some issues with the code snippet you supplied:Realm.User is not a user instance, it’s the class defining a realm-user.You seem to never actually authenticate a user for sync.\nPlease see: https://docs.mongodb.com/realm/react-native/quick-start/#authenticate-a-userSo you’ll first need to determine how your users will authenticate? email/password, facebook etc?When you have a logged in user instance, this can be used in the sync config (and not Realm.User.prototype. If this is in our docs or examples somewhere, please let me know).Theoretical example (this uses anonymous login):",
"username": "Steffen_Agger"
},
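A possible usage sketch for the helper above (error handling elided; the Item name follows the schema used earlier in the thread):

openSyncedRealm().then((realm) => {
  const items = realm.objects("Item");
  console.log(`synced items: ${items.length}`);
});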
{
"code": "",
"text": "Thank you, guess i will have to look for alternatives",
"username": "Tony_Ngomana"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using Realm with redux | 2021-02-10T08:15:17.354Z | Using Realm with redux | 7,300 |
[
"aggregation",
"react-native"
] | [
{
"code": "Itemconst items = await (await realm).objects(\"Item\");",
"text": "I’m unable to query my database Item in order to retrieve an array of objects.\nwhen I run this function const items = await (await realm).objects(\"Item\"); I get this in return:\nScreenshot 2021-02-05 at 21.51.57816×989 138 KB",
"username": "Tony_Ngomana"
},
{
"code": "awaitconst items = realm.object(\"Item\")itemsitems[0]",
"text": "First, you don’t need the awaits in order to retrieve your object. const items = realm.object(\"Item\") is enough.The object items is a managed collection i.e., it is not a plain JavaScript array. You can still do items[0] to get the first object, etc.The benefit of a managed collection is that it is live and lazy loaded. You can read more about it in the documentation.",
"username": "Kenneth_Geisshirt"
},
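A small sketch of what "live and lazily loaded" means in practice (assumes an open realm containing an Item schema):

const items = realm.objects("Item"); // no data copied; rows load on access

console.log(items.length);           // evaluated on demand
console.log(items[0]);               // index access works like an array

// The collection is live: listeners fire as objects are inserted, removed or changed.
items.addListener((collection, changes) => {
  console.log("insertions:", changes.insertions.length);
});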
{
"code": "",
"text": "Is it possible to export a single mongo instance and use it other files?",
"username": "Tony_Ngomana"
},
{
"code": "itemsitems",
"text": "Yes, you can use items elsewhere. Actually, items is a live object.",
"username": "Kenneth_Geisshirt"
},
{
"code": "useRealmimport Realm from \"realm\";\nimport { getRealmApp } from \"../functions/realmConfig\";\nimport { ItemSchema } from \"./itemSchema\";\n\nconst app = getRealmApp();\n\nconst user: any = Realm.User;\n\nconst useConfig = {\n schema: [ItemSchema], //other schema will be added in the future\n sync: {\n user: user.apiKeys,\n partitionValue: user.name,\n },\n};\n\nexport const useRealm = Realm.open({\n schema: [ItemSchema],\n sync: {\n user: user.prototype,\n partitionValue: app.currentUser?.id!,\n error: (e) => {\n console.log(e);\n },\n },\n});\n",
"text": "What i mean is i’m trying to export useRealm and use it other files:",
"username": "Tony_Ngomana"
},
{
"code": "",
"text": "@Tony_Ngomana see my response in your other thread.",
"username": "Steffen_Agger"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to retrieve objects from realm local database | 2021-02-05T19:52:18.010Z | Unable to retrieve objects from realm local database | 3,091 |
|
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "Hi,Trying to install mongo shell on Apple Mac M1 - Bigsur 11.2Did anyone see this before “zsh: bad CPU type in executable: mongo”can anyone help me please.Thanks,\nAravind.",
"username": "Aravind_Adla"
},
{
"code": "",
"text": "I do not think that the new processor of the M1 is supported yet. Check on MongoDB Developer Community Forums for verification.",
"username": "steevej"
},
{
"code": "",
"text": "What is your default shell?\nIf it is not zsh try to switch to it and tryIf you are in zsh then it could be version compatibilty issue like [steevej-1495] mentioned.\nIn some cases 32 bit vs 64 bit also cause issues\nCheck this links.May helphttps://discussions.apple.com/thread/250777998",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Did anyone see this before “zsh: bad CPU type in executable: mongo”Hi @Aravind_Adla,This is an older question but I wanted to note that although MongoDB does not run natively on M1 processors at the moment, you can use macOS’ Rosetta Translation Environment to run the Intel binaries. If you don’t have Rosetta installed, I believe you should be prompted to install this the first time you try to run an Intel binary.hi what links should i check?Follow the standard Install MongoDB Community Edition on macOS guide to install MongoDB server & tools via the Homebrew package manager. The only other requirement is that you have installed Rosetta (which is a one-off installation if you want to run any Intel apps).Regards,\nStennie",
"username": "Stennie_X"
},
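For reference, the one-off Rosetta install on Apple Silicon can also be triggered manually from a terminal (standard macOS command):

softwareupdate --install-rosetta --agree-to-license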
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Zsh: bad CPU type in executable: mongo | 2021-02-10T00:56:25.458Z | Zsh: bad CPU type in executable: mongo | 1,712 |
null | [
"spark-connector"
] | [
{
"code": "MongoSpark.write(resultsDataset)\n .option(\"collection\", \"maycollection\")\n .option(\"replaceDocument\", \"false\")\n .mode(SaveMode.Overwrite).save();\nTimed out after 30000 ms while waiting to connect. \nClient view of cluster state is {type=UNKNOWN, servers=[{address=host.docker.internal:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: host.docker.internal}, caused by {java.net.UnknownHostException: host.docker.internal}}]\ncom.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=host.docker.internal:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: host.docker.internal}, caused by {java.net.UnknownHostException: host.docker.internal}}]\nWriteConfig writeConfig = genWriteConfig(jsc, collectionSb);\nMongoSpark.save(resultsRDD, writeConfig);\nMongoSpark.write(resultsDataset)\n .option(\"collection\", \"maycollection\")\n .option(\"replaceDocument\", \"false\")\n .mode(SaveMode.Overwrite).save();\n",
"text": "I am using MongoSpark connector 3.0.0 with PlayFramework 2.8.7 and Apache Spark.\nThe MongoDB Community Edition is in a docker container, and Apache Spark Cluster also runs in a separate bridge network as containers in docker.I call the MongoSpark java APIto save a Dataset/DataFrame in Java with OpenJDK11 to the Mongodb on the docker host, i got an error from Spark connector that host address “host.docker.internal” unknown.Since the Spark Workers/Executor runs in a docker container, to reach the docker host IP where my Mongodb container resides, it is common approach on MacOSX or Windows to call the “host.docker.internal” to resolve to the real IP of docker host.The strange thing is, it only happens wenn i store a Dataset. I did not change anything, but convert the Dataset to an RDD and save a RDD using WriteConfig, MongoSpark.save(RDD<?>, …)The call was executed flawless, the aforementioned unknown host error “host.docker.internal” didn’t happen. I assume here might be some inconsistent behaviour to save the RDD and Dataset. And this is really annoying.As I started the SparkSession with a SparkConfig, the “spark.mongodb.output.uri” containing the mongodb host “host.docker.internal” wasn’t changed. But still the RDD and Dataset save() calls behave differently. Does the save function of SparkConnector doing different sanity check on the mongodb URI and host address internally?My last test was using the IP address of docker host to replace the “host.docker.internal” String in the SparkConfig for “spark.mongodb.output.uri” and call the following code to save a Dataset.The save() function works fine.I have opened an issue for SparkConnector on Jira https://jira.mongodb.org/browse/SPARK-287 , and was advice to raise my questions here in the community support.I am grateful for any hints and help? or do you also experience the same host unknown error?",
"username": "Yingding_Wang"
},
{
"code": "MongoSpark.writeMongoSpark.saveMongoSpark.save(dataset, writeConfig)DataFrameWriter",
"text": "Hi @Yingding_Wang,That’s strange as both MongoSpark.write and MongoSpark.save ultimately should follow the same code path.Does using MongoSpark.save(dataset, writeConfig) work as expected?\nThat bypasses using the DataFrameWriter API.Could you post the stacktrace of the error?Ross",
"username": "Ross_Lawley"
},
{
"code": "Dataset.collect()Dataset.take()spark.mongodb.input.urihost.docker.internalcollect() or take()host.docker.internalcollect() or count() or take()",
"text": "Hi @Ross_Lawley,Thank you so much for you help. I think I may find the root cause of my issue. It is my fault. That the Apache Spark Driver node exists outside the ApacheSpark Docker Cluster.I accidentally performed Dataset.collect(), Dataset.take() actions with spark.mongodb.input.uri set to mongodb host address host.docker.internal. Since the collect() or take() runs on Driver node. The Apache Driver node outside docker can not resolve host.docker.internal host string.Sorry again for your time spent on my silly mistake. I may get this logical thinking error due to the wrong impression of how Spark Connector works. I thought Spark Connector will be executed solely inside spark executor node, and the Dataset/RDD will be loaded only on Spark executor and it is safe for Driver node to call collect() or count() or take()actions. But it seems like that Spark Connector has performed lazy loading and instead carry out an execution plan on the Driver node.Thanks again for your help.",
"username": "Yingding_Wang"
},
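A sketch of the workaround described above, pinning the URIs to an address the driver JVM can also resolve (Java fragment; the IP and namespaces are placeholders for your Docker host):

import org.apache.spark.SparkConf;

SparkConf conf = new SparkConf()
    .setAppName("mongo-spark-test")
    .set("spark.mongodb.input.uri",  "mongodb://192.168.65.2:27017/test.input")
    .set("spark.mongodb.output.uri", "mongodb://192.168.65.2:27017/test.output");
// Actions such as collect()/take()/count() run on the driver, so the driver
// must be able to resolve whatever host name these URIs contain.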
{
"code": "",
"text": "Hi @Yingding_Wang,Glad you were able to find the cause! Spark generally treats data as a lazy collection and in doing so the Spark Driver will send work to the Spark Worker nodes. However, that is outside the control of the MongoDB (or any) Spark Connector.All the best,Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Spark Connector 3.0.0 Java API for Dataset save doesn't recognize host address "host.docker.internal" | 2021-02-09T11:23:10.396Z | Spark Connector 3.0.0 Java API for Dataset save doesn’t recognize host address “host.docker.internal” | 4,259 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Hi everyone,I have thousands of grocery store records in Excel format. I want to update stock and price of all items from excel sheet using C#. What is the best way to bulk update?\nThank you.",
"username": "Tabish_Alam"
},
{
"code": "",
"text": "Hello @Tabish_Alam. Welcome to the MongoDB Community forum.You are trying to update the Excel spreadsheet data using C#. Do you think MongoDB database and its tools have any use in this process or do you want to use MongoDB database and tools?As such to read and write form and to Excel spreadsheets using C#, there are libraries like NPOI and IKVM. You can use them directly without any mention of MongoDB.",
"username": "Prasad_Saya"
}
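If the plan is instead to push the spreadsheet rows into a MongoDB collection, a bulk upsert sketch with the C# driver could look like this (rows parsed beforehand, e.g. with NPOI; the collection, the sku/stock/price field names and the row properties are all assumptions):

var models = rows.Select(r => new UpdateOneModel<BsonDocument>(
    Builders<BsonDocument>.Filter.Eq("sku", r.Sku),
    Builders<BsonDocument>.Update.Set("stock", r.Stock).Set("price", r.Price))
{
    IsUpsert = true // insert the item if it does not exist yet
});

await collection.BulkWriteAsync(models);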
] | How to bulk update an Excel spreadsheet data item's stock and price? | 2021-02-09T19:22:08.544Z | How to bulk update an Excel spreadsheet data item’s stock and price? | 1,524 |
null | [] | [
{
"code": "",
"text": "Hello there, I am using mongodb for a seat booking application.\nWhat is the best way to make sure more than 1 do not book a seat.Can I do a validation with a value on update to mark a seat booked?thanks\nSrikanth",
"username": "SrikantH"
},
{
"code": "db.seats.findAndModify({ seatId: ..., status: \"free\" },{$set : {status : \"booking\"}});\n",
"text": "Hi @SrikantH,Welcome to MongoDB community.I think that the best way is to use a findAndModify operation when a user clicks a seat for booking.Once the seat is booked you can user booking details to user data. And maybe place the userid on the seat document as booked .If the user clicks a seat that was just booked this query will return 0 and you can message that seat was booked and refresh seat map to user.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "To complete @Pavel_Duchovny’s response, you could also use Multi-Document Transactions. It depends how you have done your data model.I wrote a blog post about it. In my example, Alice is trying to buy beers, but I have only 5 to sell… So I’m making sure I don’t have more beers in the shopping carts than I actually have in stock. So I don’t sell 6 beers by “accident”.In this blog post we compare single document transactions with MongoDB 4.0's ACID compliant multi-document transactions and walk through examples of how we can leverage this new feature with Java.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thank you Pavel, I did with update similarly… in the find part, I was checking for seat status to be “not booked” and only then booking…it was behaving as expectedfindAndModify is deprecated, was asked to use findOneAndUpdatethanks\nSrikanth",
"username": "SrikantH"
},
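The same guard with the non-deprecated helper (mongosh; the field names follow Pavel's sketch above, and seatId/userId are placeholders):

const res = db.seats.findOneAndUpdate(
  { seatId: seatId, status: "free" },
  { $set: { status: "booked", bookedBy: userId, bookedAt: new Date() } }
);
if (res === null) {
  // the seat was taken in the meantime; tell the user and refresh the seat map
}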
{
"code": "",
"text": "Thankyou for posting this, learned a lot from this",
"username": "SrikantH"
}
] | Update validation with a value | 2021-01-24T20:05:53.830Z | Update validation with a value | 1,663 |
null | [
"on-premises"
] | [
{
"code": "docker swarm init\ndocker pull [quay.io/mongodb/charts:19.12.2](http://quay.io/mongodb/charts:19.12.2)\ndocker run --rm [quay.io/mongodb/charts:19.12.2](http://quay.io/mongodb/charts:19.12.2) charts-cli test-connection 'mongodb://[172.31.50.8:27017](http://172.31.50.8:27017/)'\nMongoDB connection URI successfully verified.\n\nBefore starting Charts, please create a Docker Secret containing this connection URI using the following command:\necho \"mongodb://[172.31.50.8:27017](http://172.31.50.8:27017/)\" | docker secret create charts-mongodb-uri -\necho \"mongodb://[172.31.50.8:27017](http://172.31.50.8:27017/)\" | docker secret create charts-mongodb-uri -\npyxyr9rwp40eb3j2jj0e6b5ly\ndocker stack deploy -c charts-docker-swarm-19.12.2.yml mongodb-charts\ndocker service ls\ndocker exec -it \\\n$(docker container ls --filter name=_charts -q) \\\ncharts-cli add-user --first-name \"<First>\" --last-name \"<Last>\" \\\n--email \"<[[email protected]](mailto:[email protected])>\" --password \"<Password>\" \\\n--role \"<UserAdmin|User>\"\n\n1. But after installation, web server is not coming up. \n\n2. I see following error\n\n",
"text": "I ran following commandsmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | lastKnownVersion (‘1.9.1’)\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | existingClientAppIds ([ ‘mongodb-charts-vjsuf’ ])\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | migrationsExecuted ({})\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | (node:8) UnhandledPromiseRejectionWarning: Error: Error removing all functions:\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | at _checkApp.then.then.catch.err (/mongodb-charts/bin/charts-cli.js:33546:15)\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | at process._tickCallback (internal/process/next_tick.js:68:7)\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | (node:8) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | events.js:174\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | throw er; // Unhandled ‘error’ event\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | ^\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop |\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | Error: incorrect header check\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | at Zlib.zlibOnError [as onerror] (zlib.js:164:17)\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | Emitted ‘error’ event at:\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | at errorOrDestroy (internal/streams/destroy.js:107:12)\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | at Gunzip.onerror (_stream_readable.js:734:7)\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | at Gunzip.emit (events.js:198:13)\nmongodb-charts_charts.1.kk1p9rm3mbkc@docker-desktop | at Zlib.zlibOnError [as onerror] (zlib.js:167:8)",
"username": "Subhash_Lengare"
},
{
"code": "docker volume rm mongodb-charts_keysapp auth metadata log hosting",
"text": "Hi @Subhash_Lengare -Sorry to hear you’re having problems. It looks like you’re following the right steps here. I’ve not seen that exact error before, but my guess is that you had a previous installation attempt that went wrong somehow, and it didn’t clean up after itself properly.If this is a fresh install, my suggestion would be to clean up the volumes and the databases generated on the failed attempt, i.e:\ndocker volume rm mongodb-charts_keys\nand delete these DBs: app auth metadata log hostingAnd then try everything again. Let me know if that helps.\nTom",
"username": "tomhollander"
}
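A minimal sketch of the full reset Tom describes, for readers hitting the same error. It assumes the stack name, connection URI, and database names used earlier in this thread; verify what actually exists on your deployment before dropping anything.

```
# Hedged sketch: reset a failed on-prem Charts install.
docker stack rm mongodb-charts          # stop the failed stack first
docker volume rm mongodb-charts_keys    # remove the leftover keys volume

# Drop the Charts metadata databases Tom lists: app auth metadata log hosting
mongo "mongodb://172.31.50.8:27017" --eval '
  ["app", "auth", "metadata", "log", "hosting"].forEach(function (name) {
    db.getSiblingDB(name).dropDatabase();
  });'
```

After this, rerunning the deploy steps from the first post starts from a clean slate.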
] | On-premise MongoDB charts not starting | 2021-02-09T19:20:56.620Z | On-premise MongoDB charts not starting | 4,567 |
null | [
"c-driver"
] | [
{
"code": "linux64_gcc5/libmongoc-static-1.0.a(mongoc-scram.c.o): In function `_mongoc_sasl_prep_impl':\nmongoc-scram.c:(.text+0x6eb): undefined reference to `u_strFromUTF8_60'\nmongoc-scram.c:(.text+0x737): undefined reference to `u_strFromUTF8_60'\nmongoc-scram.c:(.text+0x752): undefined reference to `usprep_openByType_60'\nmongoc-scram.c:(.text+0x77d): undefined reference to `usprep_prepare_60'\nmongoc-scram.c:(.text+0x7ca): undefined reference to `usprep_prepare_60'\nmongoc-scram.c:(.text+0x7e8): undefined reference to `usprep_close_60'\nmongoc-scram.c:(.text+0x7ff): undefined reference to `u_strToUTF8_60'\nmongoc-scram.c:(.text+0x83e): undefined reference to `u_strToUTF8_60'\nmongoc-scram.c:(.text+0x8c4): undefined reference to `usprep_close_60'\nmongoc-scram.c:(.text+0x921): undefined reference to `usprep_close_60'",
"text": "When building my project, I am statically linking to libbson-static-1.0 and libmongoc-static-1.0, but am receiving the following errors from my build. I know this is a missing dependency on my end because of the static linking but I was wondering if anyone knew what library I am missing?",
"username": "Thomas_Morten"
},
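A general way to answer "which library provides these symbols?" is to search the exported symbols of the installed shared libraries. Below is a sketch for Ubuntu-style paths; the library directory is an assumption that may differ on other distributions, and the ICU symbols are version-suffixed (e.g. u_strFromUTF8_60), so grep for the unsuffixed prefix.

```
# Hedged sketch: find which installed library defines the missing symbols.
for lib in /usr/lib/x86_64-linux-gnu/lib*.so*; do
  if nm -D --defined-only "$lib" 2>/dev/null | grep -q 'u_strFromUTF8'; then
    echo "$lib"   # on this system, expected to print libicuuc.so.*
  fi
done
```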
{
"code": "",
"text": "@Thomas_Morten That is quite peculiar. Our CI tests statically linking to both libbson and libmongoc, though I am not sure what particular library might be missing here. Can you provide the complete build output leading up to this error?",
"username": "Roberto_Sanchez"
},
{
"code": "",
"text": "I also ran into this error, but I fixed it by reinstalling the mongoc driver. I used a script to reinstall, you can find it here.",
"username": "Evan_Ugarte"
},
{
"code": "gcc hello_mongoc.c -I./include/ -lmongoc-static-1.0 -lbson-static-1.0 -pthread -lrt -lresolv -lcrypto -lssl -lz -licuuc -L. -o hello_mongo",
"text": "I just ran into the same issue. I was compiling the libraries myself from source. CMake project build right away and all tests and example where running fine. However I wasn’t able to use the static library in my own project.@Evan_Ugarte if you read this, could you check the link you’ve posted? I get a 404I was able to fix it with linking libicuuc.aThis build command works for me (simplified pathes!):",
"username": "bugblatterbeast"
},
{
"code": "-licuuc-DENABLE_ICU=OFF",
"text": "@bugblatterbeast are you certain that the libmongoc build was able to find the ICU libraries? I have confirmed that the static build for libmongoc correctly links with -licuuc. Another possibility is that you disabled ICU with -DENABLE_ICU=OFF on the CMake command line.",
"username": "Roberto_Sanchez"
},
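If ICU genuinely is not wanted, the driver can instead be configured without it, at the cost of SASLprep normalization for SCRAM credentials (relevant for non-ASCII usernames and passwords — note the failing symbols above live in _mongoc_sasl_prep_impl). A sketch of such a configure step, assuming the same out-of-tree build layout used elsewhere in this thread:

```
# Hedged sketch: build libmongoc without ICU so no -licuuc link is needed.
# Disabling ICU disables SASLprep, which SCRAM-SHA-256 uses to normalize
# non-ASCII credentials.
cmake -DENABLE_ICU=OFF -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF \
      -DCMAKE_BUILD_TYPE=Release ..
cmake --build .
```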
{
"code": "",
"text": "@Roberto_Sanchez yes I am very certain about that. In the example build command line for hello_mongo as well as in my project, the linker option -licuuc is making the difference. Without it I get the exact errors mentioned by Thomas_Morten.I am using Ubuntu 18.04 and already had the library in the std path. Nevertheless, it seems to me that the preprocessor switch you suggested is an even better solution for this problem. I’m going to try that as well.edit: I am starting to think that I maybe mistunderstood you. Did you mean that the static library build should have found the icu libraries and that it shouldn’t have been necessary to link the application to it? I will look into that too.",
"username": "bugblatterbeast"
},
{
"code": "",
"text": "@bugblatterbeast, that is strange. If you could provide the commands you used to build the C driver and sample code with a build command that triggers the failure, I can look into why our CI builds are not catching the error.edit: I am starting to think that I maybe mistunderstood you. Did you mean that the static library build should have found the icu libraries and that it shouldn’t have been necessary to link the application to it? I will look into that too.That was precisely my meaning. Any additional information you can provide will be helpful in identifying the potential build issue.",
"username": "Roberto_Sanchez"
},
{
"code": " $ cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF -DCMAKE_BUILD_TYPE=Release ..\n-- The C compiler identification is GNU 7.5.0\n-- Detecting C compiler ABI info\n-- Detecting C compiler ABI info - done\n-- Check for working C compiler: /usr/bin/cc - skipped\n-- Detecting C compile features\n-- Detecting C compile features - done\nfile VERSION_CURRENT contained BUILD_VERSION 1.17.3\n-- Build and install static libraries\n -- Using bundled libbson\nlibbson version (from VERSION_CURRENT file): 1.17.3\n-- Check if the system is big endian\n-- Searching 16 bit integer\n-- Looking for sys/types.h\n-- Looking for sys/types.h - found\n-- Looking for stdint.h\n-- Looking for stdint.h - found\n-- Looking for stddef.h\n-- Looking for stddef.h - found\n-- Check size of unsigned short\n-- Check size of unsigned short - done\n-- Searching 16 bit integer - Using unsigned short\n-- Check if the system is big endian - little endian\n-- Looking for snprintf\n-- Looking for snprintf - found\n-- Looking for reallocf\n-- Looking for reallocf - not found\n-- Performing Test BSON_HAVE_TIMESPEC\n-- Performing Test BSON_HAVE_TIMESPEC - Success\n-- struct timespec found\n-- Looking for gmtime_r\n-- Looking for gmtime_r - found\n-- Looking for rand_r\n-- Looking for rand_r - found\n-- Looking for strings.h\nCMake Warning (dev) at /snap/cmake/769/share/cmake-3.19/Modules/CheckIncludeFile.cmake:80 (message):\n Policy CMP0075 is not set: Include file check macros honor\n CMAKE_REQUIRED_LIBRARIES. Run \"cmake --help-policy CMP0075\" for policy\n details. Use the cmake_policy command to set the policy and suppress this\n warning.\n\n CMAKE_REQUIRED_LIBRARIES is set to:\n\n /usr/lib/x86_64-linux-gnu/librt.so\n\n For compatibility with CMake 3.11 and below this check is ignoring it.\nCall Stack (most recent call first):\n src/libbson/CMakeLists.txt:91 (CHECK_INCLUDE_FILE)\nThis warning is for project developers. 
Use -Wno-dev to suppress it.\n\n-- Looking for strings.h - found\n-- Looking for strlcpy\n-- Looking for strlcpy - not found\n-- Looking for clock_gettime\n-- Looking for clock_gettime - found\n-- Looking for strnlen\n-- Looking for strnlen - found\n-- Looking for stdbool.h\n-- Looking for stdbool.h - found\n-- Looking for SYS_gettid\n-- Looking for SYS_gettid - found\n-- Looking for syscall\n-- Looking for syscall - found\n-- Performing Test HAVE_ATOMIC_32_ADD_AND_FETCH\n-- Performing Test HAVE_ATOMIC_32_ADD_AND_FETCH - Success\n-- Performing Test HAVE_ATOMIC_64_ADD_AND_FETCH\n-- Performing Test HAVE_ATOMIC_64_ADD_AND_FETCH - Success\n-- Looking for pthread.h\n-- Looking for pthread.h - found\n-- Performing Test CMAKE_HAVE_LIBC_PTHREAD\n-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed\n-- Check if compiler accepts -pthread\n-- Check if compiler accepts -pthread - yes\n-- Found Threads: TRUE \nAdding -fPIC to compilation of bson_static components\nlibmongoc version (from VERSION_CURRENT file): 1.17.3\n-- Searching for zlib CMake packages\n-- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found version \"1.2.11\")\n-- zlib found version \"1.2.11\"\n-- zlib include path \"/usr/include\"\n-- zlib libraries \"/usr/lib/x86_64-linux-gnu/libz.so\"\n-- Looking for include file unistd.h\n-- Looking for include file unistd.h - found\n-- Looking for include file stdarg.h\n-- Looking for include file stdarg.h - found\n-- Searching for compression library zstd\n-- Found PkgConfig: /usr/bin/pkg-config (found version \"0.29.1\")\n-- Checking for module 'libzstd'\n-- Found libzstd, version 1.3.3\n-- Found zstd version 1.3.3 in\n-- Found OpenSSL: /usr/lib/x86_64-linux-gnu/libcrypto.so (found version \"1.1.1\") \n-- Looking for ASN1_STRING_get0_data in /usr/lib/x86_64-linux-gnu/libcrypto.so\n-- Looking for ASN1_STRING_get0_data in /usr/lib/x86_64-linux-gnu/libcrypto.so - found\n-- Searching for sasl/sasl.h\n-- Found in /usr/include\n-- Searching for libsasl2\n-- Found /usr/lib/x86_64-linux-gnu/libsasl2.so\n-- Check size of socklen_t\n-- Check size of socklen_t - done\n-- Looking for res_nsearch\n-- Looking for res_nsearch - found\n-- Looking for res_ndestroy\n-- Looking for res_ndestroy - not found\n-- Looking for res_nclose\n-- Looking for res_nclose - found\n-- Looking for sched_getcpu\n-- Looking for sched_getcpu - not found\n-- Detected parameters: accept (int, struct sockaddr *, socklen_t *)\n-- Searching for compression library header snappy-c.h\n-- Not found (specify -DCMAKE_INCLUDE_PATH=/path/to/snappy/include for Snappy compression)\nSearching for libmongocrypt\n-- libmongocrypt not found. Configuring without Client-Side Field Level Encryption support.\n-- Performing Test MONGOC_HAVE_SS_FAMILY\n-- Performing Test MONGOC_HAVE_SS_FAMILY - Success\n-- Compiling against OpenSSL\n-- Compiling against Cyrus SASL\nAdding -fPIC to compilation of mongoc_static components\n-- Building with MONGODB-AWS auth support\n-- Build files generated for:\n-- build system: Unix Makefiles\n-- Configuring done\n-- Generating done\n-- Build files have been written to: SRC_PATH/mongo-c-driver-1.17.3/cmake-build\n$ cmake --build .\n$ gcc -o hello_mongoc hello_mongoc.c -I./include -L. -lmongoc-1.0 -lbson-1.0 -Wl,-rpath .\n$ gcc -o hello_mongo hello_mongoc.c -I./include -L. 
-lmongoc-static-1.0 -lbson-static-1.0 -pthread -lrt -lresolv -lcrypto -lssl -lz -lsasl2 -lzstd\n./libmongoc-static-1.0.a(mongoc-scram.c.o): In function `_mongoc_sasl_prep_impl':\nmongoc-scram.c:(.text+0x61b): undefined reference to `u_strFromUTF8_60'\nmongoc-scram.c:(.text+0x667): undefined reference to `u_strFromUTF8_60'\nmongoc-scram.c:(.text+0x682): undefined reference to `usprep_openByType_60'\nmongoc-scram.c:(.text+0x6ad): undefined reference to `usprep_prepare_60'\nmongoc-scram.c:(.text+0x6fa): undefined reference to `usprep_prepare_60'\nmongoc-scram.c:(.text+0x718): undefined reference to `usprep_close_60'\nmongoc-scram.c:(.text+0x72f): undefined reference to `u_strToUTF8_60'\nmongoc-scram.c:(.text+0x76e): undefined reference to `u_strToUTF8_60'\nmongoc-scram.c:(.text+0x80c): undefined reference to `usprep_close_60'\nmongoc-scram.c:(.text+0x871): undefined reference to `usprep_close_60'\ncollect2: error: ld returned 1 exit status\n$ g++ -o hello_mongoc hello_mongoc.c -I./include -L. -Wl,-Bstatic -lmongoc-static-1.0 -lbson-static-1.0\n/usr/bin/ld: cannot find -lgcc_s\n./libmongoc-static-1.0.a(mongoc-client.c.o): In function `mongoc_client_connect_tcp':\nmongoc-client.c:(.text+0x1145): warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking\n/usr/bin/ld: cannot find -lgcc_s\ncollect2: error: ld returned 1 exit status\n$ gcc --version\ngcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0\nCopyright (C) 2017 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n$ ln -s /usr/lib/gcc/x86_64-linux-gnu/7.5.0/libgcc_s.so libgcc_s.so\n$ gcc -o hello_mongoc hello_mongoc.c -I./include -L. -l:./libgcc_s.so -Wl,-Bstatic -lmongoc-static-1.0 -lbson-static-1.0\n/usr/bin/ld: cannot find -lgcc_s\n./libmongoc-static-1.0.a(mongoc-client.c.o): In function `mongoc_client_connect_tcp':\nmongoc-client.c:(.text+0x1145): warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking\n/usr/bin/ld: cannot find -lgcc_s\ncollect2: error: ld returned 1 exit status\n$ gcc -o hello_mongo hello_mongoc.c -I./include -L. -lmongoc-static-1.0 -lbson-static-1.0 -pthread -lrt -lresolv -lcrypto -lssl -lz -lsasl2 -lzstd -licuuc\n",
"text": "@Roberto_Sanchez OS is Ubuntu 18.04 I’ve configured the C driver build with this command:getting this result:I was building with this command:Everything works perfectly fine. So far, I haven’t found a test or example that’s not working.When I try this it also works fine (adjusted paths):This command however gives the errors mentioned above:For a moment I thought I was wrong to assume that gcc will automatically recognize that I try to link against static libraries. So I’ve tried to add the linker option “-Wl,-Bstatic” (I thought this is only required when static and shared libraries are available with the same basename and -l option would be ambiguous) but that always results in another linker error I can not explain.I’m not sure if this is even the right approach. Anyway, I’ve tried this:However, the following command is working (it slightly differs from the call I posted previously because I was doing a clean start to prepare this info and installed some additional packages):Contact me if you need more specific details. I also want to let you know that I’m OK with it. I totally understand if you want to look into this and I’ll try to provide you with any further information you need. Just please don’t do it for my sake.edit: FYI I just build the C driver but I don’t install it",
"username": "bugblatterbeast"
},
{
"code": "add_subdirectory(mongo-c-driver)CMakeLists.txtpkg-configubuntu@ip-10-122-6-162:~/mongo-c-driver/cmake-build$ pkg-config --cflags libmongoc-static-1.0\n-DMONGOC_STATIC -DBSON_STATIC -I/usr/local/include/libmongoc-1.0 -I/usr/local/include/libbson-1.0\nubuntu@ip-10-122-6-162:~/mongo-c-driver/cmake-build$ pkg-config --libs libmongoc-static-1.0\n-L/usr/local/lib -lmongoc-static-1.0 -lsasl2 -lssl -lcrypto -lrt -lresolv -pthread -lz -lzstd -licuuc -lbson-static-1.0 /usr/lib/x86_64-linux-gnu/librt.so /usr/lib/x86_64-linux-gnu/libm.so -pthread\n",
"text": "@bugblatterbeast, so it looks like there are a few things going on here. The C driver is not meant to be built and then used from the build tree without first being installed. You might want to review the instructions for using the C driver in another project. Essentially, you should include it as a CMake module or let pkg-config provide the flags. If you are trying to avoid the need to install the C driver, then you can clone it into your project source tree and include it with something like add_subdirectory(mongo-c-driver) in your CMakeLists.txt file or you can use CMake’s external project feature. This will give you access to the same CMake targets described in the instructions I linked above, all while letting your top-level CMake project manage the build of the components you need. If you stick to the static targets, you should also not end up with any additional components installed from your project.All that said, I ran the build on an Ubuntu 18.04 machine with the same options you gave. Then I installed the C driver. The pkg-config commands provided the following output:So, It looks like you should depend on the CMake or pkg-config targets to make sure you get the proper pre-processor macro definitions. Additionally, the linker options from pkg-config look to match those from your working example. Static linking always requires explicitly linking any additional libraries which were used by the static components, since static libraries cannot record linkage information in the same way as dynamic libraries. We let CMake generate these configurations since it already knows what has been linked and what would be required to link the targets we are generating. It is conceivable that some second-order linkages can be omitted, depending on which components from the static libmongoc and libbson are used and which are not used in your project, but that approach is likely to lead to difficult to diagnose issues which can be avoided by using the generated configurations.Thanks for providing the additional information and let us know if you have any further questions.",
"username": "Roberto_Sanchez"
},
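Letting pkg-config expand the flags, as Roberto suggests, avoids hand-maintaining the static link line entirely. A sketch, assuming the driver has been installed so its .pc files are on pkg-config's search path:

```
# Hedged sketch: compile against the installed static driver, with pkg-config
# supplying both the MONGOC_STATIC/BSON_STATIC defines and the full
# transitive link line shown in Roberto's output above.
gcc -o hello_mongoc hello_mongoc.c \
    $(pkg-config --cflags libmongoc-static-1.0) \
    $(pkg-config --libs libmongoc-static-1.0)
```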
{
"code": "",
"text": "Thank you very much for this valuable information. The option to add the directory to our CMake configuration sounds promising. I’m gonna test it one of the next days and I’ve also forwarded your reply to our integrator.",
"username": "bugblatterbeast"
},
{
"code": "",
"text": "You’re quite welcome.",
"username": "Roberto_Sanchez"
},
{
"code": "",
"text": "Our integrator has now included a reference to the mongo-c-driver project in our CMake configuration as you suggested. I am very happy with that solution.",
"username": "bugblatterbeast"
}
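For readers who want the setup bugblatterbeast's integrator landed on, below is an editorial sketch of the add_subdirectory layout. The clone URL is the driver's public GitHub repository; the project name, file names, and the mongoc_static target spelling are assumptions based on the driver's CMake build, so check the instructions Roberto linked for the exact targets your driver version exports.

```
# Hedged sketch: vendor the C driver and let one top-level CMake build drive it.
git clone --branch 1.17.3 https://github.com/mongodb/mongo-c-driver.git

cat > CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 3.11)
project(hello_mongoc C)

# Builds libbson/libmongoc as part of this project; no install step needed.
add_subdirectory(mongo-c-driver)

add_executable(hello_mongoc hello_mongoc.c)
target_link_libraries(hello_mongoc PRIVATE mongoc_static)
EOF
```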
] | Missing dependency when statically linking mongoc driver ubuntu | 2020-10-02T12:00:15.717Z | Missing dependency when statically linking mongoc driver ubuntu | 7,212 |