image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"queries"
] | [
{
"code": " (\n [_id] => 5e7c65801760570981683461\n [RolesID] => apiUser\n [controller] => user\n [value] => Array\n (\n [0] => Array\n (\n [sub] => login\n [access] => Array\n (\n [0] => run\n )\n\n )\n\n [1] => Array\n (\n [sub] => logout\n [access] => Array\n (\n [0] => run\n )\n\n )\n\n )\n\n)\n",
"text": "HI there,Let said I’ve got a document structure like this:What is the correct query syntax to find the collection for controller:user and sub:login ?Thank you if someone can help me with this. Still new and learning about MongoDB",
"username": "Immanuel_Rusdi"
},
{
"code": "controllersubtest{ _id: 1, controller: \"c-1\", value: [ { sub: \"login\" }, { sub: \"logout\" } ] }\n{ _id: 2, controller: \"c-2\", value: [ { sub: \"login\" }, { sub: \"insert\" }, { sub: \"logout\" } ] }\ndb.test.find( { controller: \"c-1\" } )\ndb.test.find( { controller: \"c-1\", \"value.sub\": \"login\" } )\ndb.test.find( { \"value.sub\": \"insert\" } )\n_id: 1_id: 2mongo",
"text": "What is the correct query syntax to find the collection for controller:user and sub:loginIn MongoDB document structure, from what you have posted, controller is a string field and sub is a string field in a nested (or sub or embedded) document of an array. Arrays and nested documents are compound data types, and the string is scalar.Consider the following two documents in a test collection:And the following queries:The first two queries return the document with _id: 1. The third query returns the document with _id: 2.Here are some links to MongoDB documentation related to querying documents:Still new and learning about MongoDBI suggest you use the mongo shell and the MongoDB Compass tools to create and query documents; these are the most commonly used and understood.",
"username": "Prasad_Saya"
}
] | How to search sub value | 2020-03-30T18:11:27.818Z | How to search sub value | 3,608 |
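For readers following the thread above in Python rather than the mongo shell, the same nested-array query can be expressed with PyMongo. This is a minimal sketch; the connection string, database and collection names are assumptions.

```python
from pymongo import MongoClient

# Connect to a local deployment (adjust the URI for your own cluster).
client = MongoClient("mongodb://localhost:27017")
collection = client["test"]["test"]

# Dot notation reaches into the nested documents of the "value" array,
# mirroring db.test.find({ controller: "c-1", "value.sub": "login" }).
for doc in collection.find({"controller": "c-1", "value.sub": "login"}):
    print(doc)
```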
[
"react-native"
] | [
{
"code": "export function RealmDB_SetLoginUser(Data)\n{\n realm.write(() =>\n {\n let LoginUser = realm.objects('LoginUser');\n\n realm.delete(LoginUser);\n\n realm.create('LoginUser', {\n Name: Data.Name,\n Password: Data.Password,\n IsTest: Data.IsTest,\n });\n });\n}\n",
"text": "I reinstalled the MacOs operating system, my working code has failed.I don’t understand, it was working before reinstalling.My project detail is; “react”: “16.9.0” “react-native”: “0.61.1”, “realm”: “^3.6.5”,I was tried realm 3.2.0 and 3.1.0 versions but not working, the same error persists.Ekran Resmi 2020-03-28 19.49.05378×674 69.9 KBmy code is;Could you please help me?",
"username": "Yusuf_Isik"
},
{
"code": "",
"text": "Welcome to the community @Yusuf_Isik!I see your question was asked (and answered) on Stack Overflow: TypeError: Reflect.construct requires the first argument be a constructor.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm TypeError Reflect.construct requires the first argument be a constructor | 2020-03-28T20:03:24.606Z | Realm TypeError Reflect.construct requires the first argument be a constructor | 2,795 |
null | [
"php"
] | [
{
"code": "",
"text": "Hi there!I try to connect to my MongoDB database using these following information:|MongoDB extension version|1.7.4|\n|MongoDB extension stability|stable|\n|libbson bundled version|1.16.2|\n|libmongoc bundled version|1.16.2|\n|libmongoc SSL|enabled|\n|libmongoc SSL library|OpenSSL|\n|libmongoc crypto|enabled|\n|libmongoc crypto library|libcrypto|\n|libmongoc crypto system profile|disabled|\n|libmongoc SASL|enabled|\n|libmongoc ICU|disabled|\n|libmongoc compression|enabled|\n|libmongoc compression snappy|disabled|\n|libmongoc compression zlib|enabled|\n|libmongocrypt bundled version|1.0.3|\n|libmongocrypt crypto|enabled|\n|libmongocrypt crypto library|libcrypto|My database had 3 replica set membersI try to connect it using but can’t see my documents hosted on my MongoDB account.Is there someone expert on MongoDB with CodeIgniter that can help solve my problem?Also offer an opportunity as a freelancer to help our project. Thanks.",
"username": "APPRO_Mobile_Develop"
},
{
"code": "",
"text": "Welcome to the community @APPRO_Mobile_Develop,I try to connect it using but can’t see my documents hosted on my MongoDB account.To help investigate this issue can you please provide:Is there someone expert on MongoDB with CodeIgniter that can help solve my problem?Are you trying to use the official MongoDB driver directly or via an extension/library for CodeIgniter 3? If you are using a library please confirm which version.Regards,\nStennie",
"username": "Stennie_X"
}
] | Can't connect to MongoDB from CodeIgniter 3.1.11 and PHP 7.2 | 2020-03-25T16:54:36.163Z | Can’t connect to MongoDB from CodeIgniter 3.1.11 and PHP 7.2 | 3,090 |
null | [
"performance"
] | [
{
"code": "",
"text": "Hi everyone,I was wondering is there documentation or a good blog out there that gives some details on the recommended query performance expectations for MongoDB Or Youtube Video?I feel sometimes the answer would be “it depends” such as type of workload… extracting etc, we’d want higher throughput… where response time would be expected to be slower.But in terms of quick transactions or searches in MongoDB what is the expected response time in ms?\nIs there a guideline on general performance/response time expectations? (Assuming we have decent disk performance and ram)For example if we look at a mongod.log file, should we be aiming for queries always under 200ms or 100ms or 50ms for example?Thanks",
"username": "nchan"
},
{
"code": "",
"text": "From a recent class at MongoDB university, I learned that:In a typical web application, a rule of thumb is any end-to-end request must be less than 200 ms. This is for apps using the REST APIs and HTTP requests. Some factors to consider are workload, infrastructure and operations that rely on external services.You also have to look at the service-level agreement (SLA); i.e., your user / customer expectations.Then there are response times specific to:Any database operation greater than 100 ms in regards with MongoDB is considered slow, by default.The database profiler is a handy tool to log and analyze the response times for the database operations; where you can specify the threshold for slow operations.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks Prasad, appreciate the info!",
"username": "nchan"
}
] | How fast should the response time be for MongoDB? | 2020-03-26T20:48:53.313Z | How fast should the response time be for MongoDB? | 6,696 |
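As a rough sketch of the database profiler mentioned in the reply above, here is how the slow-operation threshold can be set and inspected with PyMongo. The connection string and database name are assumptions.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["test"]

# Profiling level 1 records only operations slower than slowms (100 ms here);
# level 2 records everything, level 0 turns profiling off.
db.command("profile", 1, slowms=100)

# Slow operations end up in the capped system.profile collection.
for op in db["system.profile"].find().sort("ts", -1).limit(5):
    print(op.get("op"), op.get("ns"), op.get("millis"), "ms")
```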
null | [
"php"
] | [
{
"code": "",
"text": "help please.\ni have windows 10 php 7 and mysql installed and all working fine.\ni installed mongodb database and its operational, tested using compass.\ni am now trying to use it from within php. i have used it before on an older machine successfully.\ni downloaded mongodb.dll threadsafe64 - i thought the right one.\ni put the extension parameter into ini file - =mongodb.dll\nand then restarted all wamp services.\nmongo driver is not loading and no sign in phpinfo().\nin the php error log it says not found [di]r/mongodb.dll unable to load module.\ni gathered this could be becuae i had installed the wrong version? i changed the name in the ini file to a rubbish name and tried again, same message.\nthe filename is correct and its in the right directory .\ni then downlaoded a 32bit version (same name obviously) to see what would happen. this time it got a different message - about being wrong type. so its definitly going to the right place.any ideas?\nall i can now think of is that i am using the wrong wrong version of the dll, i tried to check the right version of the dll for php v7 but i couldnt trace what that should be.\ndo i need to put the pdb file there also?thanks",
"username": "brian_harding"
},
{
"code": "",
"text": "exact error msg is\nPHP Startup: Unable to load dynamic library ‘C:\\wamp64\\bin\\php\\php7.0.10\\ext\\php_mongodb.dll’ - The specified module could not be found.in Unknown on line 0the mongodb.dll version is 7.4.0",
"username": "brian_harding"
},
{
"code": "libsasl.dllPATH",
"text": "If the file is indeed in the exact path given above, then “the module could not be found” may not refer to the MongoDB extension itself, but one of its dependencies. As outlined in the Windows installation docs, the extension needs libsasl.dll to be found in the operating system’s PATH. Please check if the file is in the path and change the configuration if it isn’t. Thanks!",
"username": "Andreas_Braun"
},
{
"code": "",
"text": "manyh thanks. i started looking as you suggested, then found a different version of the driver. i tried that and it all burst into life!many thanks for responding.it seems so hard to find proper list of dlls per php version per platform.need to try PI next!",
"username": "brian_harding"
},
{
"code": "",
"text": "Pi working as well, but had to install full mongodb and then add the .so module to the ini",
"username": "brian_harding"
}
] | Windows PHP startup error - unable to find module | 2020-03-18T16:14:09.806Z | Windows PHP startup error - unable to find module | 10,467 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.2.5 is out and is ready for production deployment. This release contains only fixes since 4.2.5, and is a recommended upgrade for all 4.2 users.Note: The release of version 4.2.4 was skipped due to an issue encountered during the release. However, the 4.2.5 release includes the fixes made in 4.2.4.Fixed in this release:SERVER-46121 mongos crashes with invariant error after changing taskExecutorPoolSize\nSERVER-45137 Increasing memory allocation in Top::record with high rate of collection creates and drops\nSERVER-44904 Startup recovery should not delete corrupt documents while rebuilding unfinished indexes\nSERVER-44260 Transaction can conflict with previous transaction on the session if the all committed point is held back\nSERVER-35050 Don’t abort collection clone due to negative document count\nSERVER-39112 Primary drain mode can be unnecessarily slow4.2 Release Notes | All Issues | DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Dima_Agranat"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.2.5 is released | 2020-03-30T16:24:06.663Z | MongoDB 4.2.5 is released | 2,031 |
null | [
"node-js"
] | [
{
"code": "",
"text": "Hi all,I’ve got an integration test on my (node.js) app that is checking the correct behaviour in case the connection to the db is not available.It simply tries to connect to the machine where the db is running, but on a invalid port.Before adding the useUnifiedTopology (set to true) on the connection options, the test works as expected, I was able to get the MongoNetworkError and drive the app correctly.With the useUnifiedTopology option, the test failed because I reach the jest timeout.node --version is v13.8.0\nmongodb driver version is 3.5.5Any hints?Thanks",
"username": "aleb"
},
{
"code": "connectconnectserverSelectionTimeoutMSMongoClient",
"text": "Hey @aleb!\nThe legacy topologies have a “fast fail” mode where connect will return immediately is some network error has occurred (invalid uri, ssl options, etc). The unified topology implements connect as a “server selection” which means it will wait up to serverSelectionTimeoutMS before failing to connect, so the short answer to your question is that you want to reduce that value (either through the connection string, or MongoClient options) to fail faster.",
"username": "mbroadst"
},
{
"code": "",
"text": "Ciao @mbroadst,thanks for your answer, it works like a charm. I don’t know if this is the right place, but I would document this in the official driver documentation.For example this option is not mentioned in the MongoClient available options (Class: MongoClient), neither on the reference for the connection settings (Connection Settings).Thanks for your help.AB",
"username": "aleb"
},
{
"code": "",
"text": "Glad to hear it @aleb! The documentation is on its way, the unified topology is only opt-in at the moment but will become the default in the upcoming v4 release of the driver. We will include documentation for these connection string options in the 3.6 release of the driver, with a note that they are only supported by the unified topology.",
"username": "mbroadst"
}
] | useUnifiedTopology test failed | 2020-03-28T23:58:38.817Z | useUnifiedTopology test failed | 2,900 |
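The same idea applies outside the Node driver: PyMongo also performs server selection, so lowering serverSelectionTimeoutMS makes a connection to an unreachable address fail quickly. A minimal sketch, with a deliberately invalid port as an assumption:

```python
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Deliberately unreachable port; serverSelectionTimeoutMS (default 30000 ms)
# bounds how long the driver keeps trying before raising an error.
client = MongoClient("mongodb://localhost:59999", serverSelectionTimeoutMS=2000)

try:
    client.admin.command("ping")  # forces server selection
except ServerSelectionTimeoutError as exc:
    print("Could not reach the server:", exc)
```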
null | [
"aggregation",
"dot-net"
] | [
{
"code": "[\n{event: _id: '{5883E716-947B-4403-BF5D-B9C2BBED3177}', datetime: '2020-01-01T00:00:00', data:'{key:Username, Value:Username}', eventType: 'login'},\n{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532A}', datetime: '2020-01-01T01:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'add'},\n{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532C}', datetime: '2020-01-01T03:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'add'},\n{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532C}', datetime: '2020-01-01T03:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'edit'},\n{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532C}', datetime: '2020-01-01T03:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'add'},\n{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532B}', datetime: '2020-01-01T03:10:00', data:'{key:Username, Value:Username}', eventType: 'logout'},\n{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532D}', datetime: '2020-01-02T00:00:00', data:'{key:Username, Value:Username}', eventType: 'login'},\n{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532A}', datetime: '2020-01-02T01:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'edit'},\n{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532C}', datetime: '2020-01-02T03:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'add'},\n{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532C}', datetime: '2020-01-02T03:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'edit'},\n{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532C}', datetime: '2020-01-02T03:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'add'},\n{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532B}', datetime: '2020-01-02T03:10:00', data:'{key:Username, Value:Username}', eventType: 'logout'}\n]\n[\n{\nStart:2020-01-01T00:00:00,\nEnd:2020-01-01T03:10:00,\nEntries:{\n\t{event: _id: '{5883E716-947B-4403-BF5D-B9C2BBED3177}', datetime: '2020-01-01T00:00:00', data:'{key:Username, Value:Username}', eventType: 'login'},\n\t{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532A}', datetime: '2020-01-01T01:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'add'},\n\t{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532C}', datetime: '2020-01-01T03:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'add'},\n\t{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532C}', datetime: '2020-01-01T03:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'edit'},\n\t{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532C}', datetime: '2020-01-01T03:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'add'},\n\t{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532B}', datetime: '2020-01-01T03:10:00', data:'{key:Username, Value:Username}', eventType: 'logout'}\n\t}\n},{\nStart:2020-01-02T00:00:00,\nEnd:2020-01-02T03:10:00,\nEntries:{\n\t{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532D}', datetime: '2020-01-02T00:00:00', data:'{key:Username, Value:Username}', eventType: 'login'},\n\t{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532A}', datetime: '2020-01-02T01:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'edit'},\n\t{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532C}', datetime: '2020-01-02T03:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'add'},\n\t{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532C}', datetime: '2020-01-02T03:00:00', data:'{key:Name, Value:AnyFirstName}', eventType: 'edit'},\n\t{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532C}', datetime: '2020-01-02T03:00:00', data:'{key:Name, 
Value:AnyFirstName}', eventType: 'add'},\n\t{event: _id: '{4008D4D7-9786-4C5F-8924-F05E9CC1532B}', datetime: '2020-01-02T03:10:00', data:'{key:Username, Value:Username}', eventType: 'logout'}\n\t}\n}\n]\n",
"text": "Hi there,i’m new to NoSql and the nosql-queries.I’m searching for a way to Group Data between 2 Eventtypes like login and logout.\nI want to list all Logins and what happens until the User sends logout. Is there a way to do this Serverside?How can i do this performant with MongoDB.I’m using C# .net core 3.1 for developing.Source:so the Result should be something like this:thank you so much\nThomas",
"username": "Thomas"
},
{
"code": "mongousers[\n { _id: 1, datetime: '2020-01-01T00:00:00', Username:'user-1', eventType: 'login'},\n { _id: 2, datetime: '2020-01-01T01:00:00', Username:'user-1', eventType: 'add'},\n ...\ndb.users.aggregate( [\n { \n $sort: { Username: 1, datetime: 1} \n },\n { \n $group: {\n _id: \"$Username\", \n Start: { $first: \"$datetime\" }, \n End: { $last: \"$datetime\" },\n Entries: { $push: \"$$ROOT\" }\n } \n },\n {\n $project: { _id: 0 }\n }\n] )\n{\n \"Start\" : \"2020-01-01T00:00:00\",\n \"End\" : \"2020-01-01T03:10:00\",\n \"Entries\" : [\n {\n \"_id\" : 1,\n \"datetime\" : \"2020-01-01T00:00:00\",\n \"Username\" : \"user-1\",\n \"eventType\" : \"login\"\n },\n ...\n}, ...",
"text": "This is working from the mongo shell.Assuming the users collection has documents like this:The following aggregationproduces a result like this:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thank you for your Help.I think this Solution will Group by Username.\nIn my case, i don’t have a Username for each Event.\nAlso a User kann have more than 1 Login and i want to the Duration foreach Login until a logout.Is there a way to get Elements between Login and Logout when there are many Login,Logouts.The Login,Logout example is only an Example for a simular Problem.\nI have many Start Stop Entries, after a Start there are always some Entries and then a End, so i need to Group this Events from a Start to his End, then the next Set (Start to End).If it is possible i would like to do this with Mongo and not in the Code.Like:\nEvent Start\nEvent SelectArticle4711\nEvent 1\nEvent 2\nEvent 3\nEvent Stop\nEvent Start\nEvent SelectArticle4712\nEvent 1\nEvent 2\nEvent 3\nEvent 4\nEvent Stop",
"username": "Thomas"
},
{
"code": "key:Name, Value:AnyFirstName",
"text": "I think this Solution will Group by Username.Correct.In my case, i don’t have a Username for each Event.So, what is the criteria by which to relate an event with a user or login? How is this key:Name, Value:AnyFirstName related with a user?Also a User kann have more than 1 Login and i want to the Duration foreach Login until a logout.So, there is no login session identifier for the user’s login (and the events are not identified by it)…Is there a way to get Elements between Login and Logout when there are many Login,LogoutsIt is related to the above previous two points.The Login,Logout example is only an Example for a simular Problem.\nI have many Start Stop Entries, after a Start there are always some Entries and then a End, so i need to Group this Events from a Start to his End, then the next Set (Start to End).If it is possible i would like to do this with Mongo and not in the Code.It is possible, but with appropriate input.",
"username": "Prasad_Saya"
}
] | Grouping Data between events | 2020-03-29T13:37:38.341Z | Grouping Data between events | 2,016 |
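For reference, the shell pipeline shown earlier in this thread translates directly to PyMongo. This sketch assumes a test database with a users collection shaped like the example documents above.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
users = client["test"]["users"]

# Same stages as the shell example: sort, group per Username keeping the
# first/last datetime, and push every event document into Entries.
pipeline = [
    {"$sort": {"Username": 1, "datetime": 1}},
    {"$group": {
        "_id": "$Username",
        "Start": {"$first": "$datetime"},
        "End": {"$last": "$datetime"},
        "Entries": {"$push": "$$ROOT"},
    }},
    {"$project": {"_id": 0}},
]

for session in users.aggregate(pipeline):
    print(session["Start"], "->", session["End"], len(session["Entries"]), "events")
```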
null | [
"node-js",
"replication"
] | [
{
"code": "At the docker service:\n\"error: MongoNetworkError: Connection 46 to {server-name} timed out\nat Socket<anonymous> .../node-modules/mongodb-core/lib/connection/connection-js:258:7).\n...\".\n\nAt the localhost:\n\"Error: read ECONNRESET at tcp. on StreamRead (internal-stream_base_commons.js:167:27)\".\n",
"text": "Hey,I’m using the mongodb node driver (version 3.0.6) in my server and I am having trouble with a specific use case problem that I wish to get the answer from you.I have a replica set of 6 mongos which are sectioned to: 1 master 5 secondaries (which 3 of them are passives).I have a node service with 4 instances of this service that connects to the replica set. Each instance is connected to a part of the replica set and not the whole set (because of a need my app requires to divided some of the DBs), the division is: I have a “core” set which includes the master and the 2 secondaries (which are not passives).The first instance includes only the core set - and for the example shall be called “Core Instance”. The others are connected to the core set and only one of the remaining passives db (every secondary is used by an instance of my node service) - and for the example shall be called “External instance”.I noticed that when one of the “External instances” is shutting down (because the server is shut down) the other “External instances” are having a connection error for several minutes (and after few minutes the service is working good again).The error that is shown in the “External instances” while the shutdown is happening is:I don’t understand why when one the “External instances” is failing the other instances are having trouble in the connection.In my opinion, it is because of the replica set connection in the MongoClient.connect URI (is written as the documentation mentiones:\n“mongodb://mongodb0.example.com:27017,mongodb1.example.com:27017,mongodb2.example.com:27017/?replicaSet=myRepl”), that it is ignoring the host list and connected to the whole replica set (or that it is working correctly but I didn’t understand the host list meaning).My mongo version is 3.6.5 . The mongodb npm package is 3.0.10 .*Maybe worth to mention that we are using the oplog.",
"username": "bt_of"
},
{
"code": "",
"text": "Welcome to the community @bt_of,In my opinion, it is because of the replica set connection in the MongoClient.connect URI (is written as the documentation mentiones:\n“mongodb://mongodb0.example.com:27017,mongodb1.example.com:27017,mongodb2.example.com:27017/?replicaSet=myRepl”), that it is ignoring the host list and connected to the whole replica set (or that it is working correctly but I didn’t understand the host list meaning).When you include multiple hosts in a replica set connection string, the hosts are used as a seed list to discover the replica set members (and canonical host names & ports) via the replica set configuration. The seed list only determines which hosts the driver will connect to for discovering the configuration. The expected behaviour is that after successfully connecting to one of the seed list hosts and fetching the replica set configuration, the driver will attempt to connect to (and monitor) all non-hidden members of the replica set.If you want to target read operations to a subset of your replica set, the supported approach is using read preferences with optional replica set tag sets.If you want to connect to a specific member of your replica set without discovery and monitoring of the other replica set members, you can use a standalone connection string. However, standalone mode is a direct connection and does not support replica set failover behaviour (since you are only connected to a single MongoDB instance).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hey Stennie,First of all thank you very much, you helped me a lot!I wanted to know if there is any option for me to see the list of the eligible members that the nearest read preference creates. (as mentioned here: https://docs.mongodb.com/manual/core/read-preference-mechanics/ in the nearest section).Sincerely,\nTal.",
"username": "bt_of"
},
{
"code": "nearestmaxStalenessSecondsnearest",
"text": "I wanted to know if there is any option for me to see the list of the eligible members that the nearest read preference creates.Hi Tal,The list of eligible members is evaluated for each read operation based on the requested read preference and options (for example, nearest read preference with a maxStalenessSeconds option).I’m not sure if there is a straightforward way to log the eligible members before one is selected. You could look through the Node driver’s functional tests to see if there might be any debug or logging method for this level of detail.If you want to target a more deterministic set of replica set members than nearest, you can add replica set tag sets to associate replica set members with user-defined tag pairs (for example, a specific use case or location). One or more tags can be used to refine targeting replica set members for secondary read preferences. See Read Preference Tag Sets in the MongoDB manual for some examples.Regards,\nStennie",
"username": "Stennie_X"
}
] | MongoDB Node driver replica set connection | 2020-03-29T19:43:02.770Z | MongoDB Node driver replica set connection | 4,453 |
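To make the tag-set idea concrete, here is a PyMongo sketch (rather than the Node driver) that routes reads to secondaries carrying a given tag. The host names and the "dc" tag are made-up examples.

```python
from pymongo import MongoClient
from pymongo.read_preferences import Secondary

# Seed list for the replica set; hosts and the "dc" tag are examples.
client = MongoClient("mongodb://host1:27017,host2:27017/?replicaSet=rs0")

# Prefer secondaries tagged {dc: "east"}; the empty {} set falls back
# to any secondary if no member matches the first tag set.
events = client["test"]["events"].with_options(
    read_preference=Secondary(tag_sets=[{"dc": "east"}, {}])
)
print(events.count_documents({}))
```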
null | [
"database-tools",
"backup"
] | [
{
"code": "",
"text": "mongodump --host=“rs0/localhost:27017,localhost:27018,localhost:27019” --readPreference=secondary -d local -c oplog.rs --query “{“ts”: {”$gte\": {\"$timestamp\": {“t”:$lasttime, “i”: 50}}}}\" -vvv -o /home/anupama/backupec2/inc_backWhen I bsondump the file I get too many irrelevant data saying “msg”:“periodic noop”\nHow can i filter it in addition to the above query command.",
"username": "raushan_sharma"
},
{
"code": "",
"text": "May be by adding additional filter on operation o can help“o.msg”:\n{\n$ne: “periodic noop”}",
"username": "Ramachandra_Tummala"
}
] | How to filter periodic noop with mongodump | 2020-03-30T07:51:02.599Z | How to filter periodic noop with mongodump | 2,496 |
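For reference, the combined filter suggested above has this shape when run against the oplog directly. A PyMongo sketch; the timestamp value is a placeholder for the last backup time.

```python
from pymongo import MongoClient
from bson.timestamp import Timestamp

# The oplog lives in the "local" database of a replica set member.
client = MongoClient("mongodb://localhost:27017")
oplog = client["local"]["oplog.rs"]

# Same shape as the mongodump --query, with the extra $ne filter on o.msg
# so that "periodic noop" entries are excluded.
query = {
    "ts": {"$gte": Timestamp(1585555200, 1)},  # placeholder last-backup time
    "o.msg": {"$ne": "periodic noop"},
}

for entry in oplog.find(query).limit(10):
    print(entry["ts"], entry["op"], entry["ns"])
```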
null | [
"performance"
] | [
{
"code": "",
"text": "Morning everyone,Been recently playing around with MongoDB and its indexes within a sharded environment. Taking this into account I wanted to make some tests within my local DB and see how writes are affected by indexes.I’ve got a collection with 2 indexes, one is used for the reading side of things and the other one was used in the sharded cluster.Having a hashed index which was used for sharding in a single cluster environment infers a penalty when writing or any other kind of issue, should I remove it? It’s not an unused index, that’s for sure!",
"username": "eddy_turbox"
},
{
"code": "_id",
"text": "Welcome to the community @eddy_turbox, Having a hashed index which was used for sharding in a single cluster environment infers a penalty when writing or any other kind of issue, should I remove it? It’s not an unused index, that’s for sure! A hashed index will still be considered by the query planner for equality queries so may not be entirely unused, but if you already have an index on this field (for example, _id ) the additional index will not have any performance benefit.In general you should remove unnecessary indexes to free up system resources (extra storage and RAM consumed). Assuming you are using the default WiredTiger storage engine, indexes only add write overhead when the value of the indexed field is added or modified.Regards,\nStennie",
"username": "Stennie_X"
}
] | Hashed indexes within non-sharded collections infer a performance penalty? | 2020-03-26T09:59:40.930Z | Hashed indexes within non-sharded collections infer a performance penalty? | 1,426 |
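If the hashed index does turn out to be redundant, checking and removing it with PyMongo looks roughly like this. The collection and field names are examples only.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["test"]["events"]

# List the existing indexes and their key patterns.
for name, info in coll.index_information().items():
    print(name, info["key"])

# The default name for a hashed index on "userId" is "userId_hashed".
coll.drop_index("userId_hashed")
```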
null | [
"dot-net"
] | [
{
"code": "",
"text": "List bsonPlanData = database.GetCollection(Startup.mongoDB_ScurvePlanCollection).AsQueryable().Where(w => w.BridgeId.ToUpper() == BridgeId.ToUpper()).ToList();i am not able to put trim after ToUpper();",
"username": "Rajesh_Yadav"
},
{
"code": "Trim()w.BridgeIdContains()var planData = collection.AsQueryable()\n .Where(\n w => w.BridgeId.ToUpper().Contains(BridgeId.ToUpper())\n );\n",
"text": "Hi @Rajesh_Yadav,i am not able to put trim after ToUpper();If you’re referring to calling Trim() for w.BridgeId in comparison, currently that’s not supported. There’s an open ticket: CSHARP-2077, feel free to watch/up-vote to receive notifications on it.\nIdeally the value should be trimmed before insertion to the database. As an alternative, you can use Contains(), for example:Regards,\nWan.",
"username": "wan"
}
] | How to put trim() | 2020-03-29T09:16:22.237Z | How to put trim() | 2,068 |
[
"legacy-realm-server"
] | [
{
"code": "",
"text": "The troubleshooting docs are now behind a login and we (realm customers) can’t access it anymore.\nimage1100×676 43.3 KB\nFor example, trying to access this link is not possible anymore: GitBookHow can I get access to these docs again?Thanks.",
"username": "Mo_Basm"
},
{
"code": "",
"text": "How can I get access to these docs again?Hi Mo,We are no longer onboarding new customers onto the self-hosted Realm Object Server, but if you are an existing Realm customer please open a case on the Support Portal.The “Invalid Credentials” error implied by your link indicates that login failed due to incorrect credentials (incorrect username/password combination, or a user that hasn’t been created yet).Regards,\nStennie",
"username": "Stennie_X"
}
] | Can't access troubleshooting documentation | 2020-03-29T01:51:32.658Z | Can’t access troubleshooting documentation | 3,152 |
null | [
"stitch"
] | [
{
"code": "function not found: 'find' at e._Error (https://s3.amazonaws.com/stitch-sdks/js/bundles/4.3.1/stitch.js:1:121318)https://us-east-1.aws.stitch.mongodb.com/api/client/v2.0/app/stitch-app-id(removed)/functions/callError:\nfunction not found: 'find'\nStack Trace:\nStitchError: function not found: 'find'\n{\n \"arguments\": [\n {\n \"collection\": \"listingsAndReviews\",\n \"database\": \"sample_airbnb\",\n \"query\": {\n \"address.country\": \"United States\",\n \"beds\": {\n \"$eq\": {\n \"$numberInt\": \"1\"\n }\n },\n \"bathrooms\": {\n \"$eq\": {\n \"$numberInt\": \"1\"\n }\n },\n \"property_type\": \"Apartment\"\n },\n \"limit\": {\n \"$numberInt\": \"25\"\n }\n }\n ],\n \"name\": \"find\",\n \"service\": \"weekly-challenge-5\"\n}\n",
"text": "Hi All,RE: https://mdbwchallengeweek5.splashthat.com/I’m going through Eliot’s challenges from last year’s MDBW and I’m stuck on connecting my JS app to Stitch. It’s calling out to Stitch but receiving 404 not found with an error in the call stackfunction not found: 'find' at e._Error (https://s3.amazonaws.com/stitch-sdks/js/bundles/4.3.1/stitch.js:1:121318)It doesn’t mention to create a function in the stitch app but the failure is happening at POST to this endpoint. https://us-east-1.aws.stitch.mongodb.com/api/client/v2.0/app/stitch-app-id(removed)/functions/callThis is from the logs on Stitch.",
"username": "jeremyfiel"
},
{
"code": "",
"text": "finally figured it out. My MDB_SERVICE name was incorrect. When I linked the cluster, I changed the name of it from the default.",
"username": "jeremyfiel"
},
{
"code": "",
"text": "@jeremyfiel Thanks for following up with the fix for the error you encountered. Were you able to complete the first four challenges without any issues?The Weekly Challenges for MDBW19 are almost a year old, so there may be a few UI differences in screenshots. The challenges should still be solvable, but please start a new discussion topic in the forum or comment on the relevant solution blog post if any details need to be updated.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hey StennieI just finished up the Jamstack challenge only because it caught my eye as I was reading other JAM related stuff. Eventually, I will try all of the other challenges. Thanks for your reply",
"username": "jeremyfiel"
}
] | MDBW19 Eliot's Weekly Challenge #5 - JAMStack | 2020-03-26T00:28:25.827Z | MDBW19 Eliot’s Weekly Challenge #5 - JAMStack | 2,075 |
[
"atlas",
"monitoring"
] | [
{
"code": "",
"text": "Hi! I moved here since mLab was bought out and they wanted to move users to Atlas. Everything was going fine until a week or two ago when something strange happened - I get a lot of lag around some DB-related operations. Previously, my login process was fairly instant, but now I’m seeing a lot of lag around an operation (35.5 seconds to complete, and it gets worse as time goes on):\nimage1139×91 11 KB\nNote that when this issue originated, I have made no changes to my codebase - previously it was fine. I figured this was related to a slow DB query so I checked the profiler, but:\nimage803×380 20.4 KB\nIt seems like there are no slow queries, which I’m not so sure about. I’ve went and added a few additional indexes but it hasn’t really helped anything. I’m pretty much out of ideas - is there anything that could have caused this recently on the Atlas side? FWIW, my local DB/codebase operates fine, it’s just prod that’s breaking.I’ve tried upgrading my Atlas cluster, that didn’t seem to change anything. I’ve also tried upgrading my VPS and that didn’t change anything. I’m genuinely at a loss of what I can do to troubleshoot this further… any ideas would help, thanks.",
"username": "Kyle_J_Kemp"
},
{
"code": "",
"text": "I’ve tried upgrading my Atlas cluster, that didn’t seem to change anything. I’ve also tried upgrading my VPS and that didn’t change anything. I’m genuinely at a loss of what I can do to troubleshoot this further… any ideas would help, thanks.Welcome to the community @Kyle_J_Kemp!Have you discussed your issue with Atlas support?It looks like the timing you are measuring is from your application point of view, so likely includes network round trip and other application processing time. To better understand & troubleshoot application performance issues, I suggest separately profiling the time spent processing in database vs application code using an Application Performance Management (APM) tool with Atlas support.If you have an M10+ Atlas cluster, New Relic and Data Dog are APM solutions currently available as third-party monitoring services that can correlate database activity with application metrics. Both have trial periods so you could test to see if they provide any additional insights.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi, and thanks. I don’t currently have a support plan because my project is just a hobby one.Yes, I am measuring it in my application, and it turns out my hunch was semi-correct - the fault resided with Azure. I moved my DB to AWS an hour or two ago and that seems to have resolved the issue (I initially re-created it because I found you can’t downgrade back to free after upgrading).Not really sure what happened in the last two weeks to bog down Azure so much but I guess this issue is resolved.",
"username": "Kyle_J_Kemp"
},
{
"code": "",
"text": "Not really sure what happened in the last two weeks to bog down Azure so much but I guess this issue is resolved.Hi Kyle,There has been a significant increase in cloud services activity given recent world events, and Azure in particular has had some challenges around availability of new instances. I expect those are temporary challenges, but if you are using lower cost tiers for your hobby project there may have been more notable impact.Recent story:Admits ongoing provisioning problems but insists no capacity crunch even as it drops freebiesRegards,\nStennie",
"username": "Stennie_X"
}
] | Lots of lag in last two weeks on Atlas | 2020-03-29T19:43:16.312Z | Lots of lag in last two weeks on Atlas | 2,254 |
null | [
"react-native",
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Hello,It’s more than a year that I have this problem, could anyone please help to solve it? It’s very annoying to have to write something like “Tap the screen if it stuck” in the first view of my app.When my app start I show a spinner until the synchronization of my query-synched… realm completes. \n\n## Goals\n\nWhen the sync ends the spinner disappears because of the promise resolved.\n\n## Expected Results\n\nThe promise is resolved and the code go on.\n\n## Actual Results\n\nThe problem is that since the promises are not resolved until I tap the screen the loading process seems infinite and the spinner doesn't disappear.\n\n## Steps to Reproduce\n\nI have this problem since almost 1 year. You can reproduce it by:\n\n- use create-react-native-app\n- install realm\n- try\n\n## Code Sample\n\n```\n var realm = Database.getRealm()\n\n var class1 = realm.objects(\"Class1\")\n var class2 = realm.objects(\"Class2\")\n var class3 = realm.objects(\"Class3\")\n\n await Promise.all(\n [\n Database.susbscribeAndSyncTo(class1),\n Database.susbscribeAndSyncTo(class2),\n Database.susbscribeAndSyncTo(class3),\n ]\n )\n\n // This console.log is not executed until I don't tap on the screen. \n // More classes I add to this and more times I need to tap the screen\n console.log(\"Synched\")\n\n return true\n\n```\n\n```\n\nstatic susbscribeAndSyncTo = async (object, object_name) => {\n var subscription = object.subscribe()\n\n return new Promise((resolve, reject) => {\n subscription.addListener((subscription, state) => {\n if (this.checkSubscriptionState(state, object_name)) {\n try {\n subscription.removeAllListeners()\n } catch (e) {\n console.log(e.message)\n }\n resolve(true);\n }\n })\n });\n }\n static checkSubscriptionState = (state, object_type) => {\n switch (state) {\n case Realm.Sync.SubscriptionState.Creating:\n // The subscription has not yet been written to the Realm\n break;\n case Realm.Sync.SubscriptionState.Pending:\n // The subscription has been written to the Realm and is waiting\n // to be processed by the server\n break;\n case Realm.Sync.SubscriptionState.Complete:\n // The subscription has been processed by the server and all objects\n // matching the query are in the local Realm\n return true\n\n break;\n case Realm.Sync.SubscriptionState.Invalidated:\n // The subscription has been removed\n break;\n case Realm.Sync.SubscriptionState.Error:\n break;\n\n default:\n break;\n }\n\n return false\n }\n```\n\n## Version of Realm and Tooling\n\n- Realm JS SDK Version: Realm JS from 3.4.2\n- Node or React Native: React Native\n- Client OS & Version: Android / iOS\n- Which debugger for React Native: None",
"username": "Aurelio_Petrone"
},
{
"code": "",
"text": "We have the exact same problem ",
"username": "Mo_Basm"
},
{
"code": "",
"text": "I’ve talked with the team from Realm. They say this kind of issues depends from something that is not easy to change in how realm J’s works, so their advice is to use a setTimeout as workaround.In my example you should write it (in my case with 30ms of delay) afterresolve()",
"username": "Aurelio_Petrone"
},
{
"code": "",
"text": "Oh I see. Thank you for providing a workaround . Will try that out. Cheers.",
"username": "Mo_Basm"
}
] | Promises are not resolved inside change listeners callbacks until tapping the screen | 2020-03-03T18:54:16.551Z | Promises are not resolved inside change listeners callbacks until tapping the screen | 2,562 |
null | [
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Hi there, I have two questions:Cheers.",
"username": "Mo_Basm"
},
{
"code": "",
"text": "What are the available regions for hosting a realm cloud instance?Hi Mo,You can choose to create your Realm Cloud Standard database instances within AWS in either the US (US West) or Europe (Frankfurt) regions.If you are interested in a Dedicated Cloud plan, other options may be available (contact [email protected]).Note: the upcoming MongoDB Realm beta will support deploying MongoDB Realm applications in any region where Stitch is currently supported.For more information on MongoDB Realm, please see the MongoDB Realm public roadmap.How can we relocate our instance to other regions (for example, Australia)?For Realm Cloud you would have to open a support case to discuss.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Ok I see, thank you.",
"username": "Mo_Basm"
}
] | How to relocate realm data / instance to other regions? | 2020-03-27T22:27:56.276Z | How to relocate realm data / instance to other regions? | 2,418 |
null | [
"dot-net"
] | [
{
"code": "public class RootObject1\n {\n \n public Dictionary<string, string> nodeHierarchy { get; set; }\n public Dictionary<string, string> Parentnode { get; set; }\n public string wsId { get; set; }\n public DateTime dateFrom { get; set; }\n public DateTime dateTo { get; set; }\n public bool IsLeafNode { get; set; }\n public decimal weightage { get; set; }\n}\n/** \n* this is what i have in mongodb\n* Paste one or more documents here\n*/\n{\n \"Parentnode\": {\n \"LEVEL1\": \"Civil and Structural\",\n \"LEVEL2\": \"Cluster 1\",\n \"LEVEL3\": \"Workshop & Maintenance Building\"\n },\n \"nodeHierarchy\": {\n \"LEVEL1\": \"Civil and Structural\",\n \"LEVEL2\": \"Cluster 1\",\n \"LEVEL3\": \"Workshop & Maintenance Building\",\n \"LEVEL4\": \"Excavation\"\n },\n \"weightage\": \"20\",\n \"dateFrom\": \"2019-01-01\",\n \"dateTo\": \"2019-12-31\",\n \"IsLeafNode\": true,\n \"wsId\": \"PROJECT2\"\n}\n",
"text": "List<Models.RootObject1> bsonPlanData2 = collection.AsQueryable().Where(e1 => e1.wsId == WSId).Select(e => new { e.nodeHierarchy, e.Parentnode,e.wsId,e.dateFrom,e.dateTo,e.IsLeafNode,e.weightage }).ToList();List<Models.RootObject1> bsonPlanData2 = collection.AsQueryable().Where(e1 => e1.wsId == WSId).Select(e => new { e.nodeHierarchy, e.Parentnode,e.wsId,e.dateFrom,e.dateTo,e.IsLeafNode,e.weightage }).ToList();List<Models.RootObject1> bsonPlanData2 = collection.AsQueryable().Where(e1 => e1.wsId == WSId).Select(e => { e.nodeHierarchy, e.Parentnode,e.wsId,e.dateFrom,e.dateTo,e.IsLeafNode,e.weightage }).ToList();List<Models.RootObject1> bsonPlanData2 = collection.AsQueryable().Where(e1 => e1.wsId == WSId).Select(e => new RootObject1() { e.nodeHierarchy, e.Parentnode,e.wsId,e.dateFrom,e.dateTo,e.IsLeafNode,e.weightage }).ToList();above 4 ways are there, where i tried to get the data from mongo and put it in my c# object.\nthough there is one newtonsoft deserializer is there , but somebody told me that my 4 call also deserializes, so i do not want to use extra conversion by usng newtornsoft desrilizer.and above code shows me that u have to implemnt ienumerator in my class. so pls tel me is there any othe way i could do this with out implimenting ienumertoe in my class.",
"username": "Rajesh_Yadav"
},
{
"code": "",
"text": "As you can read here:\nhttps://mongodb-documentation.readthedocs.io/en/latest/ecosystem/tutorial/use-linq-queries-with-csharp-driver.htmlSelect must be the last operation in the Linq chain. You must first get the result of the select, then you can get a list from it using ToList from the result.",
"username": "Leonardo_Daga"
}
] | How get the ToList() of objects from MongoDB to C# | 2020-03-09T11:23:59.263Z | How get the ToList() of objects from MongoDB to C# | 12,041 |
null | [
"compass"
] | [
{
"code": "",
"text": "when ever i enter any date it adds time as 18:30 as shown below why it is show and why\ntime does not cange unless we change it.2019-06-17T18:30:00.000+00:00",
"username": "Rajesh_Yadav"
},
{
"code": "",
"text": "How are you inserting the date?",
"username": "DavidSol"
},
{
"code": "",
"text": "using campass by hand.",
"username": "Rajesh_Yadav"
},
{
"code": "",
"text": "I am afraid I wouldn’t know",
"username": "DavidSol"
}
] | When i enter 2019-01-17 in date col of mongo then why it addes only 18:30 | 2020-03-27T12:21:08.947Z | When i enter 2019-01-17 in date col of mongo then why it addes only 18:30 | 1,740 |
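A likely explanation for the 18:30 offset is that the machine entering the date is in a UTC+05:30 timezone (for example IST): date values are stored and displayed in UTC, so local midnight becomes 18:30 of the previous day. The PyMongo sketch below illustrates the conversion; the timezone is an assumption.

```python
from datetime import datetime, timedelta, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["test"]["dates"]

# Midnight on 2019-01-17 in a UTC+05:30 timezone...
ist = timezone(timedelta(hours=5, minutes=30))
local_midnight = datetime(2019, 1, 17, 0, 0, tzinfo=ist)
coll.insert_one({"when": local_midnight})

# ...is stored as the UTC instant 2019-01-16T18:30:00Z and read back as such.
print(coll.find_one()["when"])
```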
null | [
"dot-net"
] | [
{
"code": "",
"text": "var collection = database.GetCollection(“abc”);\nwhen i execute the above code.\ndoes it get the all records at that time.\nor when i execute following.List bsonPlanData = collection.AsQueryable().Where(w => w.BridgeId == “BkN”).ToList();bsically i want to apply the where condition in db it self.\nyours sincerley",
"username": "Rajesh_Yadav"
},
{
"code": "database.GetCollection()ToList()",
"text": "does it get the all records at that time.\nor when i execute following.List bsonPlanData = collection.AsQueryable().Where(w => w.BridgeId == “BkN”).ToList();Hi Rajesh,The database.GetCollection() method creates a client-side connection object, but the server doesn’t retrieve any documents until you execute a command (such as your LINQ query).i want to apply the where condition in db it self.Your query will execute on the server and return a cursor. The ToList() method will iterate the cursor and retrieve all matching results.For more code examples, check out the C# Driver Quick Tour and the C# Quick Start blog post series.Regards,\nStennie",
"username": "Stennie_X"
}
] | When we get the data from db in c# | 2020-03-27T12:24:48.981Z | When we get the data from db in c# | 1,650 |
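The same lazy behaviour can be observed from Python: getting the collection object and building the cursor do not fetch anything, and the filter runs on the server when the cursor is iterated. A PyMongo sketch with assumed names:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["test"]["abc"]      # no documents are fetched here

# find() only builds a cursor; the filter below is evaluated on the server
# when the cursor is iterated (here, by list()).
cursor = collection.find({"BridgeId": "BkN"})
documents = list(cursor)

print(len(documents), "matching documents retrieved")
```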
null | [] | [
{
"code": "",
"text": "First of all, sorry for my English, I had help from google for this question =DI am looking to migrate my MySQL solution to MongoDB because of a number of advantages that it allows me to. For this, there is a procedure that is done through a procedure that takes records that expired on a certain date and stores them in a secondary backup table, to speed up the search process in the main table.I looked in the MongoDB documentation but I didn’t find anything that looks like a procedure for that. I cannot place this process being done by an external application because I work with clustered solutions so I would have to have the same application doing the same job at least 3 times in the bank (because of the multiple zones in the amazon cloud)is there any way to schedule a task, procedure, or execution of a particular process within MongoDB itself, so that I can transfer these documents hourly to a backup collection?",
"username": "Leandro_Santiago_Gom"
},
{
"code": "",
"text": "Hola Leandro!I think this is what you are looking for:\nYou set of a TTL for your documents: https://docs.mongodb.com/manual/tutorial/expire-data/\nAnd then you get the deleted documents with a Change Stream: https://docs.mongodb.com/manual/changeStreams/\nAnd when you get them you insert them into the secondary collection.In Atlas, on the other hand, you could use Triggers: https://docs.atlas.mongodb.com/triggers/",
"username": "DavidSol"
},
{
"code": "collection_yyyymmdddelete_id",
"text": "Welcome to the community Leandro,is there any way to schedule a task, procedure, or execution of a particular process within MongoDB itself, so that I can transfer these documents hourly to a backup collection?The MongoDB server does not have a built-in task scheduler for running tasks or archiving documents, so you will have to use an external application and scheduler for a self-managed deployment.MongoDB Atlas (our managed cloud service) does have a Scheduled Triggers feature enabling custom functions to run on a schedule.I cannot place this process being done by an external application because I work with clustered solutions so I would have to have the same application doing the same job at least 3 times in the bank (because of the multiple zones in the amazon cloud)If you are working with clustered MongoDB deployments (for example, a sharded cluster with multiple zones), you should only have to execute your archival task once per deployment. If you are managing multiple deployments, you will have to run separate tasks per deployment.However, before adding the I/O overhead of moving documents to a new collection, I would consider if this is actually the best approach.Some alternative approaches to consider:You set of a TTL for your documents\n…\nthen you get the deleted documents with a Change StreamA TTL index removes matching documents after an expiry date (in seconds).The change stream delete event will be fired after documents are removed, but will only include the document _id:The fullDocument document is omitted as the document no longer exists at the time the change stream cursor sends the delete event to the client.In Atlas, on the other hand, you could use TriggersMongoDB Atlas provides two kinds of triggers:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to schedule tasks in MongoDB | 2020-03-27T21:36:20.915Z | How to schedule tasks in MongoDB | 9,241 |
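Here is a minimal PyMongo sketch of the TTL-index plus change-stream approach discussed above. The field and collection names are assumptions, and the change stream requires a replica set.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["test"]["records"]

# With expireAfterSeconds=0, a document is removed once the moment stored
# in its "expiresAt" field has passed (the TTL monitor runs about every 60 s).
coll.create_index("expiresAt", expireAfterSeconds=0)

# A change stream can then react to the resulting deletes, which only
# carry the _id of the removed document.
with coll.watch([{"$match": {"operationType": "delete"}}]) as stream:
    for change in stream:
        print("expired:", change["documentKey"]["_id"])
```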
null | [
"graphql"
] | [
{
"code": "",
"text": "Hello, I’ve been looking everywhere for docs on how to integrate a Java web client to GraphQL. I’ve seen some using Angular, etc. Please help.",
"username": "Carl_Catral"
},
{
"code": "",
"text": "Welcome @Carl_Catral,For information on the Realm GraphQL API see: How to use the API. Realm’s GraphQL API provides authenticated endpoints that your drivers/applications interact with using HTTP GET/POST requests.If you’re looking for more convenient GraphQL client libraries, the GraphQL site is a good starting point. See: Java/Android clients.Regards,\nStennie",
"username": "Stennie_X"
}
] | GraphQL Java Web Client Realm | 2020-03-26T03:43:46.649Z | GraphQL Java Web Client Realm | 2,493 |
null | [
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Hello,We use the standard Realm Cloud service, which has a storage limit.Can you please explain how we can check how much storage we’re using currently?We were unable to find this operation in Realm Studio or on cloud.realm.io",
"username": "Ivo_Dimitrov"
},
{
"code": "",
"text": "Hi Ivo,Realm Cloud storage usage is summed across all of the instances linked to an account.The total isn’t currently available via the Realm Cloud UI, but you can create a support case to check on current usage.Regards,\nStennie",
"username": "Stennie_X"
}
] | How to check Realm Cloud storage use? | 2020-03-23T19:55:29.344Z | How to check Realm Cloud storage use? | 1,559 |
null | [
"python"
] | [
{
"code": "",
"text": "hi all,\nhow manage a array of object with pymongo ? I try with find-and_modify , updateone but it’s not right\ndo you have a exemple to insert a object into my array ou update itmany thanksI’ve a collectiontest :test\nstate : array\n0 :\ndate : 22-03-2020 xxxx\nuser : test user\nstate : pending1:\ndate : 22-03-2020 xxxx\nuser : test user2\nstate : pending2",
"username": "Couture_Christophe"
},
{
"code": "arraysmongo> db.arrays.findOne()\n{\n \"_id\" : 1,\n \"state\" : [\n {\n \"date\" : ISODate(\"2020-03-25T15:10:22.220Z\"),\n \"user\" : \"user-1\",\n \"state\" : \"pending\"\n },\n {\n \"date\" : ISODate(\"2020-03-26T09:57:17.315Z\"),\n \"user\" : \"user-2\",\n \"state\" : \"active\"\n }\n ]\n}\nstate{\n \"date\" : \"today's date\",\n \"user\" : \"user-3\",\n \"state\" : \"none\"\n}\n### Connect to MongoDB database and query the arrays collection:\n###\n>>> import pymongo\n>>> from pymongo import MongoClient\n>>> client = MongoClient()\n>>> db = client.test\n>>> collection = db.arrays\n>>> import pprint\n>>> pprint.pprint(collection.find_one())\n{'_id': 1.0,\n 'state': [{'date': datetime.datetime(2020, 3, 25, 15, 10, 22, 220000),\n 'state': 'pending',\n 'user': 'user-1'},\n {'date': datetime.datetime(2020, 3, 26, 9, 57, 17, 315000),\n 'state': 'active',\n 'user': 'user-2'}]}\n\n###\n### Add a new object to the 'state' array, using the '$push' array update operator.\n### 'result' is a UpdateResult object.\n###\n>>> import datetime\n>>> result = collection.update_one( { '_id': 1 }, \n { '$push': { 'state': { 'date' : datetime.datetime.utcnow(),\n 'user' : 'user-3', 'state' : 'none'\n } } } )\n>>> result.matched_count\n1\n>>> result.modified_count\n1\n>>> pprint.pprint(collection.find_one())\n{'_id': 1.0,\n 'state': [{'date': datetime.datetime(2020, 3, 25, 15, 10, 22, 220000),\n 'state': 'pending',\n 'user': 'user-1'},\n {'date': datetime.datetime(2020, 3, 26, 9, 57, 17, 315000),\n 'state': 'active',\n 'user': 'user-2'},\n {'date': datetime.datetime(2020, 3, 27, 10, 1, 28, 267000),\n 'state': 'none',\n 'user': 'user-3'}]}\n\n###\n### Update the new object in the 'state' array, using the '$set' update operator.\n###\n>>> result = collection.update_one( { '_id': 1, 'state.state': 'none' }, { '$set': { 'state.$.state' : 'done' } } )\n>>> result.modified_count\n1\n>>> pprint.pprint(collection.find_one())\n{'_id': 1.0,\n 'state': [{'date': datetime.datetime(2020, 3, 25, 15, 10, 22, 220000),\n 'state': 'pending',\n 'user': 'user-1'},\n {'date': datetime.datetime(2020, 3, 26, 9, 57, 17, 315000),\n 'state': 'active',\n 'user': 'user-2'},\n {'date': datetime.datetime(2020, 3, 27, 10, 1, 28, 267000),\n 'state': 'done',\n 'user': 'user-3'}]}\n\n###\n### Remove the new object from the 'state' array, using the '$pull' array update \noperator.\n###\n>>> result = collection.update_one( { '_id': 1 }, { '$pull': { 'state' : { 'state': 'done' } } } )\n>>> result.modified_count\n1\n>>> pprint.pprint(collection.find_one())\n{'_id': 1.0,\n 'state': [{'date': datetime.datetime(2020, 3, 25, 15, 10, 22, 220000),\n 'state': 'pending',\n 'user': 'user-1'},\n {'date': datetime.datetime(2020, 3, 26, 9, 57, 17, 315000),\n 'state': 'active',\n 'user': 'user-2'}]}",
"text": "how manage a array of object with pymongo ?Here is an example, using a collection called as arrays with one document.Initially I inserted a document from the mongo shell, and queried:The array state will be updated to add a new object, then update this new object and finally delete the new object from the array.The new object is:I used the Python shell to run the following code interactively using PyMongo 3.9 and MongoDB 4.2.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks for the quick feedback and examples. I have successfully implemented the code in my project and it works.",
"username": "Couture_Christophe"
}
] | Array of objects with PyMongo | 2020-03-26T16:29:44.412Z | Array of objects with PyMongo | 6,533 |
null | [
"installation"
] | [
{
"code": "",
"text": "about to fork child process, waiting until server is ready for connections.\nforked process: 7891\nERROR: child process failed, exited with error number 1\nTo see additional information in this output, start without the “–fork” option.Help me with this.",
"username": "Chaitanya_Kashyap"
},
{
"code": "",
"text": "Chaitanya_Kashyap\nYou might have already got solution for this in discourse forumPlease run without fork option(interactive mode).It will throw error on your terminal as to why it is failing\nMost likely missing dbpath,logpath or permissions issueor with fork option you would have used logpath\nPlease check mongodb.log.It will give more details",
"username": "Ramachandra_Tummala"
}
] | Can't start MongoDB in Vagrant | 2020-03-19T09:12:43.921Z | Can’t start MongoDB in Vagrant | 1,538 |
null | [] | [
{
"code": "",
"text": "I have been moving my database from Mongodb to Firebase, and I am having trouble with some of the queries. When I query tickets, e.g. TicketModel.find({}). It returns a circular json structure that I cannot use. I have also tried with our InvoiceModel, which also returns a circular structure that I cannot use. Any help would be appreciated.",
"username": "Ezra_Cook"
},
{
"code": "",
"text": "Welcome @Ezra_Cook,To help understand this issue, can you please provide:an example of the JSON output that is problematic for you to usedetails of the driver & version you are writing your query inThanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "JSON is built on two structures\nA collection of name/value pairs. In various languages, this is realized as an object, record, struct, dictionary, hash table, keyed list, or associative array. An ordered list of values.",
"username": "samuel_otomewo"
}
] | Circular JSON structure | 2020-03-27T01:44:08.701Z | Circular JSON structure | 1,815 |
null | [] | [
{
"code": "mongod --port 27000 --bind_ip \"localhost, 192.168.103.100\" --dbpath \"\\data\\db\\\" --auth\n2020-03-27T09:26:35.171+1100 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\n2020-03-27T09:26:35.604+1100 W ASIO [main] No TransportLayer configured during NetworkInterface startup\n2020-03-27T09:26:35.606+1100 I CONTROL [initandlisten] MongoDB starting : pid=20124 port=27000 dbpath=\\data\\db\" --auth 64-bit host=MatinauSurface\n2020-03-27T09:26:35.606+1100 I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2\n2020-03-27T09:26:35.606+1100 I CONTROL [initandlisten] db version v4.2.5\n2020-03-27T09:26:35.606+1100 I CONTROL [initandlisten] git version: 2261279b51ea13df08ae708ff278f0679c59dc32\n2020-03-27T09:26:35.607+1100 I CONTROL [initandlisten] allocator: tcmalloc\n2020-03-27T09:26:35.607+1100 I CONTROL [initandlisten] modules: none\n2020-03-27T09:26:35.607+1100 I CONTROL [initandlisten] build environment:\n2020-03-27T09:26:35.607+1100 I CONTROL [initandlisten] distmod: 2012plus\n2020-03-27T09:26:35.607+1100 I CONTROL [initandlisten] distarch: x86_64\n2020-03-27T09:26:35.608+1100 I CONTROL [initandlisten] target_arch: x86_64\n2020-03-27T09:26:35.608+1100 I CONTROL [initandlisten] options: { net: { bindIp: \"localhost, 192.168.103.100\", port: 27000 }, storage: { dbPath: \"\\data\\db\" --auth\" } }\n2020-03-27T09:26:35.611+1100 E STORAGE [initandlisten] Failed to set up listener: SocketException: The requested address is not valid in its context.\n2020-03-27T09:26:35.612+1100 I CONTROL [initandlisten] now exiting\n2020-03-27T09:26:35.612+1100 I CONTROL [initandlisten] shutting down with code:48\n",
"text": "I’m running through the m103 lab and unable to launch mongod using the following command line:mongod --port 27000 --bind_ip “localhost, 192.168.103.100” --dbpath “\\data\\db” --authOutput:Vagrant is running as instructed “mongod-m103” which is displayed in the VirtualBox Manager.Any ideas what I’m doing wrong? If I remove “192.168.103.100” from the bind_ip argument the mongod instance is created successfully but then I cannot validate from vagrant.",
"username": "Matthew_Howdill"
},
{
"code": "",
"text": "MongoDB university has its own forum to discuss issues with the course labs. For m103, it is https://www.mongodb.com/community/forums/c/M103/9For this particular lab there is an issue with the validation script that makes the validation fails because of the extra space you have between local and 192.168.103.100.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks Steeve and apologizes for posting here. I’ll visit the link, unfortunately removing the space didn’t help.",
"username": "Matthew_Howdill"
},
{
"code": "",
"text": "I have just notice something else. The lab asks for data path to be /data/db and you specified \\data\\db.",
"username": "steevej"
}
] | M103-Lab - Launching Mongod | 2020-03-26T23:18:27.625Z | M103-Lab - Launching Mongod | 4,583 |
null | [
"monitoring"
] | [
{
"code": "",
"text": "From time to time we get an “Plan executor error during find command”-error in de mongod.log (see at the end of my message).\nI know this is related to a sort operation that used more than the maximum 33554432 bytes of RAM. Add an index, or specify a smaller limit will solve the problem.But our MongoDB replicaset hosts about 150 databases… And I don’t find any databasename mentioned in the error… Any suggestions how to find the related database with this error ?Line from mongod.log:Plan executor error during find command: FAILURE, stats: { stage: “PROJECTION”, nReturned: 0, executionTimeMillisEstimate: 70, works: 8821, advanced: 0, needTime: 8820, needYield: 0, saveState: 68, restoreState: 68, isEOF: 0, invalidates: 0, transformBy: \t{ params: 1, type: 1, status: 1, updatedAt: 1, createdAt: 1, id: 1 }, \tinputStage: { stage: “SORT”, nReturned: 0, executionTimeMillisEstimate: 70, works: 8821, advanced: 0, needTime: 8820, needYield: 0, saveState: 68, \trestoreState: 68, isEOF: 0, invalidates: 0, sortPattern: { createdAt: -1 }, memUsage: 33555285, memLimit: 33554432, inputStage: \t\t\t{ stage: “SORT_KEY_GENERATOR”, nReturned: 8492, executionTimeMillisEstimate: 20, \tworks: 8820, advanced: 8492, needTime: 328, needYield: 0, saveState: 68, restoreState: 68, isEOF: 0, nvalidates: 0, inputStage: { stage: “COLLSCAN”, filter: \t{ $and: [{ user.id: { $eq: “93477ac8-6421-4b34-92f9-a26e7085fef7” } }, { status: { $in: [ 0, 1 ] } } ] \t}, nReturned: 8492, executionTimeMillisEstimate: 10, works: 8819, advanced: 8492, needTime: 327, needYield: 0, \tsaveState: 68, restoreState: 68, isEOF: 0, invalidates: 0, direction: “forward”, docsExamined: 8818 }}} }",
"username": "Peter_Mol"
},
{
"code": "{ \"$and\":[ { \"user.id\":{ \"$eq\":“934...ef7” } }, { \"status\":{ \"$in\":[ 0, 1 ] } }{ \"createdAt\":-1 }",
"text": "But our MongoDB replicaset hosts about 150 databases… And I don’t find any databasename mentioned in the error… Any suggestions how to find the related database with this error ?I think you can use mtools’s mlogfilter to query your logs for that specific timestamp and figure the queries running at that period of time. The query’s filter { \"$and\":[ { \"user.id\":{ \"$eq\":“934...ef7” } }, { \"status\":{ \"$in\":[ 0, 1 ] } } and sort on { \"createdAt\":-1 } point to a specific database and collection, typically referred as “namespace”.Also, the mtool’s mloginfo mongod.log --queries lists all the queries and the associated namespace for query.",
"username": "Prasad_Saya"
}
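Once the namespace has been identified with mloginfo, the usual fix for this particular error is an index that matches the logged filter and sort, so the sort no longer runs in memory. A minimal pymongo sketch (field names come from the logged query; the connection string and database/collection name are assumptions, since the log excerpt does not include the namespace):

```python
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
coll = client["mydb"]["mycoll"]                     # hypothetical namespace found via mloginfo

# Equality fields first (user.id, status), then the sort key (createdAt),
# so the query can use the index for both filtering and ordering.
coll.create_index(
    [("user.id", ASCENDING), ("status", ASCENDING), ("createdAt", DESCENDING)]
)
```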
] | Plan executor error during find - RAM limit exceeded | 2020-03-26T16:29:07.704Z | Plan executor error during find - RAM limit exceeded | 6,060 |
null | [] | [
{
"code": "",
"text": "My question is, will the update query be still running in the background if the client timeout or will the operation be reversed?",
"username": "sai_ajay"
},
{
"code": "",
"text": "I think it will proceed, but I am not sure.\nBut, if your client times out waiting for an update… there is a problem with your update… ",
"username": "DavidSol"
}
] | What happens when a client time outs during update operation? | 2020-03-26T05:25:23.941Z | What happens when a client time outs during update operation? | 1,597 |
null | [] | [
{
"code": "",
"text": "About the changes to MongoDB University… Thank you very much!\nSometimes I have “issues” with the time I can dedicate to the courses (work-related), so being able to advance at my own pace will help a lot.\nAnd yes, we will have to be more disciplined now, for not trying to finish everything the last two days!\nThank you for the opportunity for learning.\nPS. What about creating a forum for MongoDB University?",
"username": "DavidSol"
},
{
"code": "",
"text": "https://www.mongodb.com/community/forums/ ",
"username": "Jonny"
},
{
"code": "",
"text": "Thanks… I meant here too.",
"username": "DavidSol"
},
{
"code": "",
"text": "Hi @DavidSol - work is in progress to make exactly that change. Some back end things need to happen first to make it a smooth transition. We’ll keep you posted when we’re closer to go-live there!",
"username": "Jamie"
},
{
"code": "",
"text": "Excellent! Thank you very much",
"username": "DavidSol"
},
{
"code": "",
"text": "Hi @Jamie,can you please convert “you posted” to an public announcement?We’ll keep you posted when we’re closer to go-live there!I guess there are plenty people who like to know about that change Michael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "About the changes to MongoDB UniversityIf anyone is wondering what these recent changes are, please see Updates to MongoDB University: March, 2020.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I will definitely post an announcement when I have more details to share. Thanks @michael_hoeller!",
"username": "Jamie"
},
{
"code": "",
"text": "",
"username": "Jamie"
}
] | MongoDB University feedback | 2020-03-24T18:14:16.107Z | MongoDB University feedback | 4,360 |
null | [
"aggregation",
"indexes"
] | [
{
"code": "> db.system.profile.find({\"op\":\"command\"}).pretty()\n{\n\t\"op\" : \"command\",\n\t\"ns\" : \"test.bar3\",\n\t\"command\" : {\n\t\t\"aggregate\" : \"bar3\",\n\t\t\"pipeline\" : [\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"from\" : \"0x222222\",\n\t\t\t\t\t\"to\" : \"0x333333\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$sort\" : {\n\t\t\t\t\t\"_id\" : -1\n\t\t\t\t}\n\t\t\t}\n\t\t],\n\t\t\"cursor\" : {\n\t\t\t\n\t\t},\n\t\t\"lsid\" : {\n\t\t\t\"id\" : UUID(\"7712d372-c98a-472a-9020-94072f40dd84\")\n\t\t},\n\t\t\"$db\" : \"test\"\n\t},\n\t\"keysExamined\" : 10,\n\t\"docsExamined\" : 10,\n\t\"cursorExhausted\" : true,\n\t\"numYield\" : 0,\n\t\"nreturned\" : 5,\n\t\"queryHash\" : \"DE146C9F\",\n\t\"planCacheKey\" : \"DE146C9F\",\n\t\"locks\" : {\n\t\t\"ReplicationStateTransition\" : {\n\t\t\t\"acquireCount\" : {\n\t\t\t\t\"w\" : NumberLong(1)\n\t\t\t}\n\t\t},\n\t\t\"Global\" : {\n\t\t\t\"acquireCount\" : {\n\t\t\t\t\"r\" : NumberLong(1)\n\t\t\t}\n\t\t},\n\t\t\"Database\" : {\n\t\t\t\"acquireCount\" : {\n\t\t\t\t\"r\" : NumberLong(1)\n\t\t\t}\n\t\t},\n\t\t\"Collection\" : {\n\t\t\t\"acquireCount\" : {\n\t\t\t\t\"r\" : NumberLong(1)\n\t\t\t}\n\t\t},\n\t\t\"Mutex\" : {\n\t\t\t\"acquireCount\" : {\n\t\t\t\t\"r\" : NumberLong(1)\n\t\t\t}\n\t\t}\n\t},\n\t\"flowControl\" : {\n\t\t\n\t},\n\t\"responseLength\" : 1028,\n\t\"protocol\" : \"op_msg\",\n\t\"millis\" : 0,\n\t\"planSummary\" : \"IXSCAN { _id: 1 }\",\n\t\"ts\" : ISODate(\"2020-03-19T08:31:11.480Z\"),\n\t\"client\" : \"127.0.0.1\",\n\t\"allUsers\" : [ ],\n\t\"user\" : \"\"\n}\n$match$sort\"_id\"\"planSummary\"\"IXSCAN { _id: 1 }\"\"keysExamined\" $match$sort$match$sort$sort$match",
"text": "Hello everyone,I have a collection with 10 documents. This is the result of my query using aggregationOn $match stage, there are 5 documents satisfy filter condition. The following $sort stage sorts the documents with \"_id\", so the \"planSummary\" is \"IXSCAN { _id: 1 }\".Why the number of \"keysExamined\" is 10 but not 5?? In my opinion, $match will scan all documents in collections, and then get 5 documents. So $sort stage should sort those 5 documents.I just want to know the stage execution sequence. Is $match before $sort or $sort before $match?Thanks in advance!!!Version: v4.2.3 Mac osx",
"username": "sammy_Ma"
},
{
"code": "$match$sort$sort$match$match$sort\"keysExamined\" $matchbar3$match",
"text": "I just want to know the stage execution sequence. Is $match before $sort or $sort before $match ?The execution sequence is always $match followed by $sort (irrespective of the order of these stages). The optimizer makes sure that the sort stage has documents after they are filtered (that would be less number of documents than the input to the match stage).More details at: $sort + $match Sequence OptimizationWhy the number of \"keysExamined\" is 10 but not 5?? In my opinion, $match will scan all documents in collections, and then get 5 documents.There might be some index on the collection bar3, and this is affecting the $match stage, I think. You can post the details.If possible, please run the explain for the aggregation query with “executionStats” mode.",
"username": "Prasad_Saya"
},
{
"code": "from:1,to:1,_id:1",
"text": "To fully do match and sort from an index you would need an index on from:1,to:1,_id:1What indexes do you have on this collection?",
"username": "Asya_Kamsky"
},
{
"code": "_id_bar3$matchCOLLSCANkeysExamineddocsExamined> db.bar3.getIndexes()\n[\n\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"_id\" : 1\n\t\t},\n\t\t\"name\" : \"_id_\",\n\t\t\"ns\" : \"test.bar3\"\n\t}\n]\n> db.bar3.find().count()\n10\n> \n_id_$sortIXSCANkeysExamined",
"text": "There is only one index _id_ on the collection bar3. So I suppose execution plan of $match is COLLSCAN and its keysExamined should be 0 and docsExamined should be 10.Because of index _id_, I suppose execution plan of $sort is IXSCAN, and its keysExamined should be 5. Is that right?",
"username": "sammy_Ma"
},
{
"code": "> db.bar3.explain(\"executionStats\").aggregate([ { \"$match\" : { \"from\" : \"0x222222\", \"to\" : \"0x333333\" } }, { \"$sort\" : { \"_id\" : -1 } } ])\n{\n\t\"queryPlanner\" : {\n\t\t\"plannerVersion\" : 1,\n\t\t\"namespace\" : \"test.bar3\",\n\t\t\"indexFilterSet\" : false,\n\t\t\"parsedQuery\" : {\n\t\t\t\"$and\" : [\n\t\t\t\t{\n\t\t\t\t\t\"from\" : {\n\t\t\t\t\t\t\"$eq\" : \"0x222222\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"to\" : {\n\t\t\t\t\t\t\"$eq\" : \"0x333333\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"optimizedPipeline\" : true,\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"$and\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"from\" : {\n\t\t\t\t\t\t\t\"$eq\" : \"0x222222\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"to\" : {\n\t\t\t\t\t\t\t\"$eq\" : \"0x333333\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"_id\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"_id_\",\n\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"_id\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : true,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"backward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"_id\" : [\n\t\t\t\t\t\t\"[MaxKey, MinKey]\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"rejectedPlans\" : [ ]\n\t},\n\t\"executionStats\" : {\n\t\t\"executionSuccess\" : true,\n\t\t\"nReturned\" : 5,\n\t\t\"executionTimeMillis\" : 0,\n\t\t\"totalKeysExamined\" : 10,\n\t\t\"totalDocsExamined\" : 10,\n\t\t\"executionStages\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"$and\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"from\" : {\n\t\t\t\t\t\t\t\"$eq\" : \"0x222222\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"to\" : {\n\t\t\t\t\t\t\t\"$eq\" : \"0x333333\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"nReturned\" : 5,\n\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\"works\" : 11,\n\t\t\t\"advanced\" : 5,\n\t\t\t\"needTime\" : 5,\n\t\t\t\"needYield\" : 0,\n\t\t\t\"saveState\" : 0,\n\t\t\t\"restoreState\" : 0,\n\t\t\t\"isEOF\" : 1,\n\t\t\t\"docsExamined\" : 10,\n\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"nReturned\" : 10,\n\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\"works\" : 11,\n\t\t\t\t\"advanced\" : 10,\n\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\"saveState\" : 0,\n\t\t\t\t\"restoreState\" : 0,\n\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"_id\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"_id_\",\n\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"_id\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : true,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"backward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"_id\" : [\n\t\t\t\t\t\t\"[MaxKey, MinKey]\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"keysExamined\" : 10,\n\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\"dupsDropped\" : 0\n\t\t\t}\n\t\t}\n\t},\n\t\"serverInfo\" : {\n\t\t\"host\" : \"maxiaomindeMacBook-Pro.local\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"4.2.3\",\n\t\t\"gitVersion\" : \"6874650b362138df74be53d366bbefc321ea32d4\"\n\t},\n\t\"ok\" : 1\n}\n> \n\n$match",
"text": "OK, the result of explain is following:It seems that $match stage queries from index. Really weird.",
"username": "sammy_Ma"
},
{
"code": "",
"text": "The execution sequence is always “$match” followed by “$sort”",
"username": "samuel_otomewo"
},
{
"code": "_id_id\"FETCH\"",
"text": "This is exactly as expected. There is no index to use to do the $match “fast” but there is an index to do the $sort on _id and the way to do it is to look at every key in the _id index, to keep them in order and then to manually “filter” on the documents (that’s what the \"FETCH\" stage in explain is doing).",
"username": "Asya_Kamsky"
},
{
"code": "{from: 1, to : 1}totalKeysExamined$match{from:1, to:1}{_id: 1}{from: 1, to:1}> db.bar3.getIndexes()\n[\n\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"_id\" : 1\n\t\t},\n\t\t\"name\" : \"_id_\",\n\t\t\"ns\" : \"test.bar3\"\n\t},\n\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"from\" : 1,\n\t\t\t\"to\" : 1\n\t\t},\n\t\t\"name\" : \"from_1_to_1\",\n\t\t\"ns\" : \"test.bar3\"\n\t}\n]\n> db.bar3.find().count()\n10\n> db.bar3.explain(\"executionStats\").aggregate([ { \"$match\" : { \"from\" : \"0x222222\", \"to\" : \"0x333333\" } }, { \"$sort\" : { \"_id\" : -1 } } ])\n{\n\t\"queryPlanner\" : {\n\t\t\"plannerVersion\" : 1,\n\t\t\"namespace\" : \"test.bar3\",\n\t\t\"indexFilterSet\" : false,\n\t\t\"parsedQuery\" : {\n\t\t\t\"$and\" : [\n\t\t\t\t{\n\t\t\t\t\t\"from\" : {\n\t\t\t\t\t\t\"$eq\" : \"0x222222\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"to\" : {\n\t\t\t\t\t\t\"$eq\" : \"0x333333\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"optimizedPipeline\" : true,\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"$and\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"from\" : {\n\t\t\t\t\t\t\t\"$eq\" : \"0x222222\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"to\" : {\n\t\t\t\t\t\t\t\"$eq\" : \"0x333333\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"_id\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"_id_\",\n\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"_id\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : true,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"backward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"_id\" : [\n\t\t\t\t\t\t\"[MaxKey, MinKey]\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"rejectedPlans\" : [ ]\n\t},\n\t\"executionStats\" : {\n\t\t\"executionSuccess\" : true,\n\t\t\"nReturned\" : 5,\n\t\t\"executionTimeMillis\" : 0,\n\t\t\"totalKeysExamined\" : 10,\n\t\t\"totalDocsExamined\" : 10,\n\t\t\"executionStages\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"$and\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"from\" : {\n\t\t\t\t\t\t\t\"$eq\" : \"0x222222\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"to\" : {\n\t\t\t\t\t\t\t\"$eq\" : \"0x333333\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"nReturned\" : 5,\n\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\"works\" : 11,\n\t\t\t\"advanced\" : 5,\n\t\t\t\"needTime\" : 5,\n\t\t\t\"needYield\" : 0,\n\t\t\t\"saveState\" : 0,\n\t\t\t\"restoreState\" : 0,\n\t\t\t\"isEOF\" : 1,\n\t\t\t\"docsExamined\" : 10,\n\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"nReturned\" : 10,\n\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\"works\" : 11,\n\t\t\t\t\"advanced\" : 10,\n\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\"saveState\" : 0,\n\t\t\t\t\"restoreState\" : 0,\n\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"_id\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"_id_\",\n\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"_id\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : true,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"backward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"_id\" : [\n\t\t\t\t\t\t\"[MaxKey, MinKey]\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"keysExamined\" : 
10,\n\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\"dupsDropped\" : 0\n\t\t\t}\n\t\t}\n\t},\n\t\"serverInfo\" : {\n\t\t\"host\" : \"maxiaomindeMacBook-Pro.local\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"4.2.3\",\n\t\t\"gitVersion\" : \"6874650b362138df74be53d366bbefc321ea32d4\"\n\t},\n\t\"ok\" : 1\n}\n> \n$match$sort",
"text": "Thank you for your reply. However, I still feel confused.If I create index {from: 1, to : 1}, the totalKeysExamined is 10 but not 5. For $match, in fact, using index {from:1, to:1} to query is faster than using index {_id: 1}, however, we can see it dose not use index {from: 1, to:1} in explain. When this collection has more and more documents, this query plan will perform worse.The command is following:As official document said, the execution sequence is $match followed by $sort, so, what is the sequence for index usage? How to view the process of selecting indexes by the MongoDB Optimizer in detail? Why use this index instead of other indexes? I think this will be helpful for data modeling and index creation.",
"username": "sammy_Ma"
},
{
"code": "{from:1,to:1,_id:1}$match$sort",
"text": "from:1,to:1,_id:1My initial suggestion was to create an index on all three fields ( {from:1,to:1,_id:1} ) - then the index would be used for both $match and $sort.",
"username": "Asya_Kamsky"
}
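For reference, a quick way to confirm the suggestion above is to create the compound index and re-run the pipeline. A minimal pymongo sketch (the connection string is an assumption; database and collection names are taken from the thread):

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
bar3 = client["test"]["bar3"]

# Compound index covering the equality filter (from, to) and the sort key (_id),
# so the plan needs neither a whole-index scan nor an in-memory sort.
bar3.create_index([("from", ASCENDING), ("to", ASCENDING), ("_id", ASCENDING)])

pipeline = [
    {"$match": {"from": "0x222222", "to": "0x333333"}},
    {"$sort": {"_id": -1}},
]
print(list(bar3.aggregate(pipeline)))
```

Running explain on the aggregation afterwards should show an IXSCAN on the new index, with keysExamined close to the number of matching documents rather than the whole collection.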
] | Optimization of $match+$sort on aggregation pipeline | 2020-03-19T09:04:38.176Z | Optimization of $match+$sort on aggregation pipeline | 7,511 |
null | [] | [
{
"code": "",
"text": "In “Geospatial Data” course, video shows field “skyCoverLayer”. I tried to find it in Compass via {field: “skyCoverLayer”}, but no documents are returning. Have I miswrote the syntax of filter or something?Compass version - 1.20.5",
"username": "_72525"
},
{
"code": "",
"text": "Which connect string you used and in which DB/collection you checked for the field\nIt is available.I can see from Compass",
"username": "Ramachandra_Tummala"
},
{
"code": "100YWeather100YWeatherSmall",
"text": "Hi @_72525,You might be connected to a different cluster. Please connect to the class atlas cluster.\nScreenshot 2020-03-26 at 8.37.37 PM2028×1214 285 KB\nNote that the database names are different. In the video it is 100YWeather and in the cluster it is 100YWeatherSmall.Hope it helps!~ Shubham",
"username": "Shubham_Ranjan"
},
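Separately, note that the filter {field: “skyCoverLayer”} matches documents whose field literally named field equals that string; checking for the presence of a field uses the $exists operator instead. A hedged pymongo equivalent (the connection URI is a placeholder and the collection name is an assumption):

```python
from pymongo import MongoClient

# Placeholder URI for the class Atlas cluster; substitute your own credentials.
client = MongoClient("mongodb+srv://<user>:<password>@<cluster-host>/test")
coll = client["100YWeatherSmall"]["data"]  # collection name assumed

# Count documents that actually contain a skyCoverLayer field.
print(coll.count_documents({"skyCoverLayer": {"$exists": True}}))
```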
{
"code": "",
"text": "",
"username": "system"
}
] | Cannot find specific field | 2020-03-25T15:43:47.443Z | Cannot find specific field | 1,017 |
null | [
"aggregation"
] | [
{
"code": " {\n\t\"_id\":1,\n\t\"CID\":\"1\",\n\t\"a\" : \"Z\",\n\t\"b\" : \"Y\", \n\t\"sign\": [{\n\t\t\"c\": \"c\",\n\t\t\"d\": \"d\",\n\t\t\"e\": \"e\",\n\t\t\"f\": \"insufficientFunds\"\n\t}, {\n\t\t\"g\": false,\n\t\t\"h\": null,\n\t\t\"i\": 0,\n\t\t\"j\": \"accessFunds\"\n\t}],\n\t\"y\": null,\n\t\"z\": true\n\t\n\t}\n/* 1 */\n{\n \"_id\" : 1.0,\n \"CID\" : \"1\",\n \"c\" : \"c\",\n \"d\" : \"d\",\n \"e\" : \"e\",\n \"f\" : \"insufficientFunds\",\n \"g\" : null,\n \"h\" : null,\n \"i\" : null,\n \"j\" : null\n}\n/* 2 */\n{\n \"_id\" : 1.0,\n \"CID\" : \"1\",\n \"c\" : null,\n \"d\" : null,\n \"e\" : null,\n \"f\" : null,\n \"g\" : false,\n \"h\" : null,\n \"i\" : 0.0,\n \"j\" : \"accessFunds\"\n}\ndb.myColl.aggregate( [\n { \n \"$match\" : { \n \"_id\" : 1\n }\n },\n { $unwind: \"$sign\" },\n { $replaceRoot: { newRoot: \"$sign\" }},\n { \n \"$project\" : { \n \"CID\":1,\n \"a\" : 1,\n \"b\" : 1, \n \"y\": 1,\n \"z\": 1,\n \"c\": 1,\n\t\t\"d\": 1,\n\t\t\"e\": 1,\n\t\t\"f\": 1,\n \t\t\"g\": 1,\n\t\t\"h\": 1,\n\t\t\"i\": 1,\n\t\t\"j\": 1\n }\n } \n ])\n/* 1 */\n{\n \"c\" : \"c\",\n \"d\" : \"d\",\n \"e\" : \"e\",\n \"f\" : \"insufficientFunds\"\n}\n\n/* 2 */\n{\n \"g\" : false,\n \"h\" : null,\n \"i\" : 0.0,\n \"j\" : \"accessFunds\"\n}",
"text": "Here is the document:Required Output:Here is the queryOutput from the above query",
"username": "Nabeel_Raza"
},
{
"code": "{ $addFields: { \"CID\": \"$CID\" } }",
"text": "I also used $addFields but it doesn’t work for me.{ $addFields: { \"CID\": \"$CID\" } }",
"username": "Nabeel_Raza"
},
{
"code": "db.myColl.aggregate( [\n { \n \"$match\" : { \n \"_id\" : 1\n }\n },\n { $unwind: \"$sign\" },\n { \n \"$project\" : { \n \"CID\":1,\n \"c\": { $ifNull: [ \"$sign.c\", null ] }, \n \"d\":{ $ifNull: [ \"$sign.d\", null ] }, \n \"e\":{ $ifNull: [ \"$sign.e\", null ] }, \n \"f\":{ $ifNull: [ \"$sign.f\", null ] },\n \"g\": { $ifNull: [ \"$sign.g\", null ] }, \n \"h\":{ $ifNull: [ \"$sign.h\", null ] }, \n \"i\":{ $ifNull: [ \"$sign.i\", null ] }, \n \"j\":{ $ifNull: [ \"$sign.j\", null ] } \n }\n } \n ])",
"text": "No need to use $replaceRoot. this will give required output.",
"username": "Nabeel_Raza"
}
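An alternative that keeps parent fields without listing every array key individually is to merge the parent fields into each unwound element inside $replaceRoot via $mergeObjects. A hedged pymongo sketch (database and collection names are assumptions):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
coll = client["test"]["myColl"]                     # assumed namespace

pipeline = [
    {"$match": {"_id": 1}},
    {"$unwind": "$sign"},
    # Promote each array element to the root while carrying CID along with it.
    {"$replaceRoot": {"newRoot": {"$mergeObjects": [{"CID": "$CID"}, "$sign"]}}},
]
for doc in coll.aggregate(pipeline):
    print(doc)
```

Unlike the $ifNull projection above, keys that are absent from an element stay absent rather than being emitted as null.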
] | How can we add field in the output when we are using $replaceRoot in the query? | 2020-03-26T09:58:34.906Z | How can we add field in the output when we are using $replaceRoot in the query? | 2,517 |
null | [] | [
{
"code": "",
"text": "Hi everyone, I need a bit of help.In production servers, we often have to run migration scripts manually or in an autonomous way to perform some type of schema changes / update keys or anything for the new tag to be deployed. What is the best approach for these migrations, considering revert in tags due to any failure or dependent tag failure (breaking changes)? Even to create backups, assume we have huge amounts of data and backups could be costly.Waiting to hear your reply and what you follow…!Thanks",
"username": "shrey_batra"
},
{
"code": "",
"text": "I am a big fan of using polymorphic design pattern as described in Building with Patterns: The Schema Versioning Pattern | MongoDB Blog to do such a thing.",
"username": "steevej"
}
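In practice the schema versioning pattern is usually paired with an idempotent migration script that only touches documents still on the old version, so a failed deploy can simply stop without needing a rollback. A hypothetical pymongo sketch (the collection, field names, and the transformation are illustrative assumptions, not taken from the thread):

```python
from pymongo import MongoClient, UpdateOne

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
users = client["app"]["users"]                      # hypothetical collection

ops = []
# Idempotent: only documents still on schema_version 1 are rewritten, so the
# script can be interrupted and re-run safely during a rollout.
for doc in users.find({"schema_version": 1}, {"name": 1}):
    first, _, last = doc.get("name", "").partition(" ")
    ops.append(UpdateOne(
        {"_id": doc["_id"], "schema_version": 1},
        {"$set": {"first_name": first, "last_name": last, "schema_version": 2},
         "$unset": {"name": ""}},
    ))
    if len(ops) == 1000:
        users.bulk_write(ops)
        ops = []
if ops:
    users.bulk_write(ops)
```

Readers then handle both versions (or upgrade documents on read), which is what makes the cut-over, and any rollback, cheap compared to an all-at-once rewrite.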
] | Applying DB migrations - Best practises | 2020-03-26T11:17:52.662Z | Applying DB migrations - Best practises | 1,606 |
null | [
"java"
] | [
{
"code": "public class Datatypes{\n public List<DatatypeDefinitionTag> datatypeDefinitions;\n}\npublic class DatatypeDefinitionInteger extends DatatypeDefinitionTag {\n [...]\n}\n\npublic class DatatypeDefinitionString extends DatatypeDefinitionTag {\n [...]\n}\n\n@BsonDiscriminator\npublic abstract class DatatypeDefinitionTag {\n [...]\n}\ndatatypeDefinitions_t:reqIF.reqIF_dataStructure.abstracts.tags.DatatypeDefinitionTag_t:reqIF.reqIF_dataStructure.coreContent.reqIF_content.datatypes.datatypeDefinitions.DatatypeDefinitionString_t:reqIF.reqIF_dataStructure.coreContent.reqIF_content.datatypes.datatypeDefinitions.DatatypeDefinitionInteger[BsonKnownTypes(typeof(DatatypeDefinitionString), typeof(DatatypeDefinitionInteger))]\npublic class DatatypeDefinitionTag \n",
"text": "I am trying to write and read a complex object to a MongoDB using Java Mongo Driver.The object includes among other things Lists of Objects of different but related type that I cover with abstract classes in the data model, for example:…which gets filled with these objects:Now, according to the documentation (POJOs) this should work by annotating the abstract class as follows:…however, if I do that, I still get an error while trying to read data from the MongoDB, and checking the MongoDB directly, I can see that all entries under datatypeDefinitions have _t:reqIF.reqIF_dataStructure.abstracts.tags.DatatypeDefinitionTagI would have expected the discriminator to assign the following values instead, based on the saved object:Any idea why this is happening? In essence, I am just looking for the Java equivalent of how this would work with the .Net Driver:",
"username": "Kira_Resari"
},
{
"code": "",
"text": "I have now created a minimalistic test project that replicates this exact behavior (run the MongoDB_ReaderTest to replicate):A minimalistic MongoDB Test Project. Contribute to Kira-Cesonia/MongoDB_TestProject development by creating an account on GitHub.",
"username": "Kira_Resari"
},
{
"code": "",
"text": "I figured out what caused this.\nI was using a too-old version of the MongoDB driver.\nNow I use org.mongodb:mongodb-driver:3.6.0 , and it works perfectly.",
"username": "Kira_Resari"
}
] | Mongo Java Driver ~ @BsonDiscriminator mapping parent instead of child class | 2020-03-24T18:19:27.363Z | Mongo Java Driver ~ @BsonDiscriminator mapping parent instead of child class | 5,013 |
null | [
"installation"
] | [
{
"code": "",
"text": "I’m trying to download MongoDB Community Server for Ubuntu 18.04 but the download doesn’t start. Is there any problem with the server?",
"username": "Julen_Albizuri"
},
{
"code": "",
"text": "Welcome to the MongoDB community @Julen_Albizuri!Can you provide more information on how you are trying to download including the specific MongoDB server version and download url?From your description it sounds like you are trying to use the MongoDB Download Center, which does not have any known issues at the moment. If you happen to be using ad blocker software, you should try disabling this temporarily in case it is interfering with the download.FYI: the recommended installation approach for Linux is using a package manager as per the Install MongoDB on Ubuntu tutorial.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you for the reply!\nI’m using MongoDB Download Center and trying to download version 4.2.5 for Ubuntu 18.04 Linux x64 the package Server. If I click the download button it doesn’t start, and if I use the URL (https://repo.mongodb.org/apt/ubuntu/dists/bionic/mongodb-org/4.2/multiverse/binary-amd64/mongodb-org-server_4.2.5_amd64.deb) that appears at the bottom of the form, it responses 404 Not Found. But the way, I’ve already installed from tutorial you provided me.Kindest regards,\nJulen",
"username": "Julen_Albizuri"
},
{
"code": "",
"text": "I’m using MongoDB Download Center and trying to download version 4.2.5 for Ubuntu 18.04 Linux x64 the package Server. If I click the download button it doesn’t start, and if I use the URLHi Julen,Apologies for the inconvenience. It looks like the MongoDB 4.2.5 packages are currently in the process of being released and links have appeared in the download centre before packages for all platforms have been published.Our release engineering team is working on correcting this issue.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Julen,Quick update: the download issue should be resolved now.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can't download MongoDB Community Server | 2020-03-25T22:50:11.577Z | Can’t download MongoDB Community Server | 3,825 |
[
"installation"
] | [
{
"code": "",
"text": "I can’t start mongodb. Here are some log and config files that I could trace.\nAny help would be much appriciated. Thanks.\n\nUntitled1869×710 105 KB\n",
"username": "Trung_Le_Dinh"
},
{
"code": "journalctl -xejournalctl -u mongod/var/log/mongo/mongod.log",
"text": "14\tReturned by MongoDB applications which encounter an unrecoverable error, an uncaught exception or uncaught signal. The system exits without performing a clean shutdown.You’ll have more helpful output in journalctl journalctl -xe (as per the output from systemctl start/restart) or journalctl -u mongod possibly even in /var/log/mongo/mongod.log",
"username": "chris"
},
{
"code": "/var/log/mongo/mongod.log",
"text": "even in /var/log/mongo/mongod.logThis is definitely the first place I would check.",
"username": "Doug_Duncan"
}
] | MongoDB won't start | 2020-03-25T07:31:11.869Z | MongoDB won’t start | 6,422 |
|
null | [
"server"
] | [
{
"code": "",
"text": "Hi,Mongodb crashed on our production server with this error :2020-03-23T22:37:14.687+0100 I - [TTLMonitor] Invariant failure: ret resulted in status UnknownError: 5: Input/output error at src/mongo/db/storage/wiredtiger/wiredtiger_index.cpp 1238\n2020-03-23T22:37:14.688+0100 I - [TTLMonitor]***aborting after invariant() failure2020-03-23T22:37:14.746+0100 F - [TTLMonitor] Got signal: 6 (Aborted).0x55ef43768cb1 0x55ef43767ec9 0x55ef437683ad 0x7f7a8dacc5e0 0x7f7a8d72f1f7 0x7f7a8d7308e8 0x55ef429f9199 0x55ef4343f9a8 0x55ef4343df76 0x55ef42e46c6a 0x55ef42e490f3 0x55ef42bebc34 0x55ef42bebfc6 0x55ef42bbfc2f 0x55ef42d624ae 0x55ef42d87843 0x55ef4309307a 0x55ef4309399b 0x55ef43093acd 0x55ef43474c8f 0x55ef43475dfa 0x55ef43476538 0x55ef436d2d31 0x55ef441e7620 0x7f7a8dac4e25 0x7f7a8d7f234d\n----- BEGIN BACKTRACE -----db version v3.4.18\nbuild environment:\ndistmod: rhel70\ndistarch: x86_64\ntarget_arch: x86_64Any Idea for this error ?Thanks,Steve",
"username": "Steve_Heldebaume"
},
{
"code": "",
"text": "3.4 Is no longer supported in general.Your version is Nov 2018 vintage so if you are stuck on 3.4 for ${REASONS} you should at least upgrade to 3.4.24.I don’t see any TTL specific issues in 3.4 Release Notes But you can go and dig through the Closed jiras for each release to be sure.",
"username": "chris"
}
] | Mongodb crash TTLMonitor "Invariant failure" | 2020-03-24T18:19:06.440Z | Mongodb crash TTLMonitor “Invariant failure” | 2,735 |
[] | [
{
"code": "",
"text": "Hello all, im newbie in mongo. I try to send DataFrame to my DB and have this error. What is the problem?\n\nerror1069×404 103 KB\n",
"username": "Veleslav_Negadaev"
},
{
"code": "dataframe.reset_index(inplace=True)\ndictionary = dataframe.to_dict(\"records\")\ncollection.insert_many(dictionary)",
"text": "You can’t directly upload a Pandas Dataframe to Mongo, as Mongo don’t recognize them.\nBut you can upload a Dictionary!",
"username": "DavidSol"
},
{
"code": "",
"text": "In general i dont upload Dataframe directrly, i upload a dictionary with Dataframes inside. Or im still convert all dataframes to dictionaries?",
"username": "Veleslav_Negadaev"
}
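To answer the follow-up in code: yes, every DataFrame nested inside the dictionary still has to be converted to plain dicts/lists before inserting, because the driver only accepts BSON-serializable types. A minimal sketch (the connection string and collection name are assumptions):

```python
import pandas as pd
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
coll = client["mydb"]["frames"]                     # hypothetical namespace

data = {
    "prices": pd.DataFrame({"value": [1.0, 2.0]}),
    "volumes": pd.DataFrame({"value": [10, 20]}),
}

# Convert each nested DataFrame to a list of records before insertion.
doc = {name: df.reset_index().to_dict("records") for name, df in data.items()}
coll.insert_one(doc)
```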
] | Problem with pandas DataFrame | 2020-03-24T12:21:36.499Z | Problem with pandas DataFrame | 2,537 |
|
null | [
"queries"
] | [
{
"code": "",
"text": "I created one collection in which I have one key as an array with around 19000 key\nbut when I run a query find in which I add where condition to filter the same array by it’s value is not working\nplease look my collection datawhere condition\n->where(‘studentIds’,‘all’,[‘N00085858’])\nacepted result should be return the above document\ncurrent result empty data",
"username": "Sonal_Panchal"
},
{
"code": "findOne()mongoall$all",
"text": "Welcome to the MongoDB Community @Sonal_Panchal!To help understand your issue can you please provide:I suspect the all in your query may instead want to be the $all operator.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "findOne()mongo DB::table('popup_master')\n ->where('status','Published')\n ->where('start_date','<=',$curDate)\n ->where('end_date','>=',$curDate)\n ->whereNull('deleted_at')\n ->where(function($q) use($userRole, $userId) {\n $q->where('studentIds','all',[strtoupper($userId)])\n ->orWhere('notif_roles',$userRole);\n })\n ->orderBy('position')->select(['link_url','display_view','internalFlag','encryptParameterFlag','parameterName'])->get();\n> db.popup_master.find().pretty()\n> {\n> \t\"_id\" : ObjectId(\"5e7afa46b5677c71fd425133\"),\n> \t\"name\" : \"test Document\",\n> \t\"start_date\" : ISODate(\"2020-03-25T06:29:02Z\"),\n> \t\"link_url\" : null,\n> \t\"end_date\" : ISODate(\"2020-03-31T06:28:49Z\"),\n> \t\"filter_type\" : \"2\",\n> \t\"display_view\" : {\n> \t\t\"desktop_image\" : \"asset/test.jpeg\"\n> \t},\n> \t\"status\" : \"Published\",\n> \t\"internalFlag\" : \"0\",\n> \t\"encryptParameterFlag\" : \"0\",\n> \t\"parameterName\" : null,\n> \t\"updated_at\" : ISODate(\"2020-03-25T06:29:26Z\"),\n> \t\"created_at\" : ISODate(\"2020-03-25T06:29:26Z\"),\n> \t\"notif_roles\" : null,\n> \t\"popup_population_filters\" : {\n> \t\t\"facultad\" : [ ],\n> \t\t\"year\" : [ ],\n> \t\t\"Period\" : [ ],\n> \t\t\"Campus\" : [ ],\n> \t\t\"level\" : [ ],\n> \t\t\"Modalidad\" : [ ],\n> \t\t\"Programa\" : [ ],\n> \t\t\"program_all\" : null,\n> \t\t\"nrc_all\" : null,\n> \t\t\"Course\" : [ ],\n> \t\t\"ProgramaStr\" : [ ],\n> \t\t\"CourseStr\" : [ ]\n> \t},\n> \t\"selected_year\" : null,\n> \t\"studentIds\" : [\n> \t\t\"N00248748\",\n> \t\t\"N00157568\",\n> \t\t\"N00175665\",\n> \t\t\"N00176430\",\n> \t\t\"N00176448\",\n> \t\t\"N00177176\",\n> \t\t\"N00179354\",\n> \t\t\"N00179477\",\n> \t\t\"N00179942\",\n> \t\t\"N00181371\",\n> \t\t\"N00181429\",\n> \t\t\"N00182040\",\n> \t\t\"N00182228\",\n> \t\t\"N00183028\",\n> \t\t\"N00183372\",\n> \t\t\"N00183826\",\n> \t\t\"N00184232\",\n> \t\t\"N00184363\",\n> \t\t\"N00184561\",\n> \t\t\"N00186485\",\n> \t\t\"N00187468\",\n> \t\t\"N00188085\",\n> \t\t\"N00189250\",\n> \t\t\"N00080147\",\n> \t\t\"N00081952\",\n> \t\t\"N00083010\",\n> \t\t\"N00083496\",\n> \t\t\"N00084994\",\n> \t\t\"N00085858\",\n> \t\t\"N00086044\",\n> \t\t\"N00087382\",\n> \t\t\"N00088295\",\n> \t\t\"N00089202\",\n> \t\t\"N00089254\",\n> \t\t\"N00089278\",\n> \t\t\"N00091533\",\n> \t\t\"N00092863\",\n> \t\t\"N00093083\",\n> \t\t\"N00093731\",\n> \t\t\"N00094043\",\n> \t\t\"N00095024\",\n> \t\t\"N00097777\",\n> \t\t\"N00101858\",\n> \t\t\"N00102420\",\n> \t\t\"N00031621\",\n> \t\t\"N00032062\",\n> \t\t\"N00032849\",\n> \t\t\"N00034267\",\n> \t\t\"N00035306\",\n> \t\t\"N00035736\",\n> \t\t\"N00040903\",\n> \t\t\"N00043942\",\n> \t\t\"N00044626\",\n> \t\t\"N00044996\",\n> \t\t\"N00046861\",\n> \t\t\"N00047971\",\n> \t\t\"N00048622\",\n> \t\t\"N00049530\",\n> \t\t\"N00050165\",\n> \t\t\"N00050292\",\n> \t\t\"N00054701\",\n> \t\t\"N00054840\",\n> \t\t\"N00129756\",\n> \t\t\"N00130312\",\n> \t\t\"N00130689\",\n> \t\t\"N00133876\",\n> \t\t\"N00134188\",\n> \t\t\"N00139655\",\n> \t\t\"N00144574\",\n> \t\t\"N00145797\",\n> \t\t\"N00149348\",\n> \t\t\"N00149580\",\n> \t\t\"N00150598\",\n> \t\t\"N00151333\",\n> \t\t\"N00151420\",\n> \t\t\"N00152764\",\n> \t\t\"N00152992\",\n> \t\t\"N00021808\",\n> \t\t\"N00023997\",\n> \t\t\"N00024152\",\n> \t\t\"N00024561\",\n> \t\t\"N00027204\",\n> \t\t\"N00191021\",\n> \t\t\"N00191458\",\n> \t\t\"N00196571\",\n> \t\t\"N00197525\",\n> \t\t\"N00198474\",\n> \t\t\"N00199706\",\n> \t\t\"N00199732\",\n> \t\t\"N00201281\",\n> \t\t\"N00201516\",\n> 
\t\t\"N00202372\",\n> \t\t\"N00199414\",\n> \t\t\"N00199572\",\n> \t\t\"N00199776\",\n> \t\t\"N00200480\",\n> \t\t\"N00200877\",\n> \t\t\"N00201440\",\n> \t\t\"N00201904\",\n> \t\t\"N00203264\",\n> \t\t\"N00203358\",\n> \t\t\"N00205204\",\n> \t\t\"N00205538\",\n> \t\t\"N00205725\",\n> \t\t\"N00205942\",\n> \t\t\"N00206523\",\n> \t\t\"N00206956\",\n> \t\t\"N00207127\",\n> \t\t\"N00207578\",\n> \t\t\"N00207625\",\n> \t\t\"N00207863\",\n> \t\t\"N00081927\",\n> \t\t\"N00082453\",\n> \t\t\"N00082639\",\n> \t\t\"N00082943\",\n> \t\t\"N00083266\",\n> \t\t\"N00089299\",\n> \t\t\"N00089442\",\n> \t\t\"N00093761\",\n> \t\t\"N00094614\",\n> \t\t\"N00094914\",\n> \t\t\"N00097205\",\n> \t\t\"N00098532\",\n> \t\t\"N00098781\",\n> \t\t\"N00099894\",\n> \t\t\"N00100197\",\n> \t\t\"N00103413\",\n> \t\t\"N00222196\",\n> \t\t\"N00222308\",\n> \t\t\"N00222893\",\n> \t\t\"N00223343\",\n> \t\t\"N00223357\",\n> \t\t\"N00223418\",\n> \t\t\"N00224127\",\n> \t\t\"N00224862\",\n> \t\t\"N00225039\",\n> \t\t\"N00225402\",\n> \t\t\"N00225418\",\n> \t\t\"N00225449\",\n> \t\t\"N00226104\",\n> \t\t\"N00056018\",\n> \t\t\"N00056275\",\n> \t\t\"N00058141\",\n> \t\t\"N00058482\",\n> \t\t\"N00060656\",\n> \t\t\"N00061144\",\n> \t\t\"N00062488\",\n> \t\t\"N00062867\",\n> \t\t\"N00064827\",\n> \t\t\"N00067090\",\n> \t\t\"N00068136\",\n> \t\t\"N00070557\",\n> \t\t\"N00071633\",\n> \t\t\"N00073704\",\n> \t\t\"N00073707\",\n> \t\t\"N00074209\",\n> \t\t\"N00078616\",\n> \t\t\"N00168533\",\n> \t\t\"N00168554\",\n> \t\t\"N00171127\",\n> \t\t\"N00172088\",\n> \t\t\"N00172528\",\n> \t\t\"N00172838\",\n> \t\t\"N00173046\",\n> \t\t\"N00173582\",\n> \t\t\"N00175310\",\n> \t\t\"N00015481\",\n> \t\t\"N00019928\",\n> \t\t\"N00021408\",\n> \t\t\"N00027008\",\n> \t\t\"N00027329\",\n> \t\t\"N00027665\",\n> \t\t\"N00028776\",\n> \t\t\"N00107922\",\n> \t\t\"N00111273\",\n> \t\t\"N00112021\",\n> \t\t\"N00115283\",\n> \t\t\"N00116308\",\n> \t\t\"N00118295\",\n> \t\t\"N00118304\",\n> \t\t\"N00119608\",\n> \t\t\"N00120203\",\n> \t\t\"N00121282\",\n> \t\t\"N00121290\",\n> \t\t\"N00123292\",\n> \t\t\"N00124292\",\n> \t\t\"N00128539\",\n> \t\t\"N00186117\",\n> \t\t\"N00186221\",\n> \t\t\"N00187034\",\n> \t\t\"N00187094\",\n> \t\t\"N00187423\",\n> \t\t\"N00187511\",\n> \t\t\"N00190664\",\n> \t\t\"N00192398\",\n> \t\t\"N00195441\",\n> \t\t\"N00196790\",\n> \t\t\"N00197358\",\n> \t\t\"N00197368\",\n> \t\t\"N00198076\",\n> \t\t\"N00198099\",\n> \t\t\"N00061324\",\n> \t\t\"N00063443\",\n> \t\t\"N00064015\",\n> \t\t\"N00064069\",\n> \t\t\"N00065255\",\n> \t\t\"N00065717\",\n> \t\t\"N00066781\",\n> \t\t\"N00066920\",\n> \t\t\"N00067423\",\n> \t\t\"N00067559\",\n> \t\t\"N00071035\",\n> \t\t\"N00074849\",\n> \t\t\"N00076151\",\n> \t\t\"N00076712\",\n> \t\t\"N00212656\",\n> \t\t\"N00212736\",\n> \t\t\"N00212885\",\n> \t\t\"N00213156\",\n> \t\t\"N00213240\",\n> \t\t\"N00214111\",\n> \t\t\"N00214754\",\n> \t\t\"N00215894\",\n> \t\t\"N00216225\",\n> \t\t\"N00216262\",\n> \t\t\"N00216972\",\n> \t\t\"N00217077\",\n> \t\t\"N00218353\",\n> \t\t\"N00219017\",\n> \t\t\"N00219039\",\n> \t\t\"N00219352\",\n> \t\t\"N00219598\",\n> \t\t\"N00220626\",\n> \t\t\"N00221009\",\n> \t\t\"N00221324\",\n> \t\t\"N00221457\",\n> \t\t\"N00221538\",\n> \t\t\"N00221571\",\n> \t\t\"N00102637\",\n> \t\t\"N00102878\",\n> \t\t\"N00103681\",\n> \t\t\"N00104099\",\n> \t\t\"N00105960\",\n> \t\t\"N00108793\",\n> \t\t\"N00111755\",\n> \t\t\"N00112861\",\n> \t\t\"N00113142\",\n> \t\t\"N00113729\",\n> \t\t\"N00114576\",\n> \t\t\"N00114830\",\n> \t\t\"N00114847\",\n> \t\t\"N00115376\",\n> 
\t\t\"N00115465\",\n> \t\t\"N00120840\",\n> \t\t\"N00121867\",\n> \t\t\"N00122170\",\n> \t\t\"N00123580\",\n> \t\t\"N00124282\",\n> \t\t\"N00125391\",\n> \t\t\"N00126840\",\n> \t\t\"N00151504\",\n> \t\t\"N00151686\",\n> \t\t\"N00152402\",\n> \t\t\"N00152953\",\n> \t\t\"N00153304\",\n> \t\t\"N00153371\",\n> \t\t\"N00154957\",\n> \t\t\"N00155097\",\n> \t\t\"N00156566\",\n> \t\t\"N00156670\",\n> \t\t\"N00158184\",\n> \t\t\"N00158665\",\n> \t\t\"N00158901\",\n> \t\t\"N00159145\",\n> \t\t\"N00159542\",\n> \t\t\"N00163874\",\n> \t\t\"N00166810\",\n> \t\t\"N00169333\",\n> \t\t\"N00169341\",\n> \t\t\"N00130545\",\n> \t\t\"N00105673\",\n> \t\t\"N00106005\",\n> \t\t\"N00108214\",\n> \t\t\"N00111741\",\n> \t\t\"N00113532\",\n> \t\t\"N00113904\",\n> \t\t\"N00114819\",\n> \t\t\"N00116880\",\n> \t\t\"N00116934\",\n> \t\t\"N00117644\",\n> \t\t\"N00119024\",\n> \t\t\"N00119350\",\n> \t\t\"N00119369\",\n> \t\t\"N00119985\",\n> \t\t\"N00120493\",\n> \t\t\"N00121935\",\n> \t\t\"N00125797\",\n> \t\t\"N00126315\",\n> \t\t\"N00126609\",\n> \t\t\"N00127082\",\n> \t\t\"N00057428\",\n> \t\t\"N00057813\",\n> \t\t\"N00058312\",\n> \t\t\"N00058457\",\n> \t\t\"N00059406\",\n> \t\t\"N00061224\",\n> \t\t\"N00061927\",\n> \t\t\"N00065703\",\n> \t\t\"N00066271\",\n> \t\t\"N00067508\",\n> \t\t\"N00067874\",\n> \t\t\"N00068616\",\n> \t\t\"N00068922\",\n> \t\t\"N00069433\",\n> \t\t\"N00070786\",\n> \t\t\"N00076119\",\n> \t\t\"N00076974\",\n> \t\t\"N00077878\",\n> \t\t\"N00030168\",\n> \t\t\"N00031919\",\n> \t\t\"N00032818\",\n> \t\t\"N00036898\",\n> \t\t\"N00040927\",\n> \t\t\"N00041304\",\n> \t\t\"N00042355\",\n> \t\t\"N00044991\",\n> \t\t\"N00046015\",\n> \t\t\"N00046827\",\n> \t\t\"N00047183\",\n> \t\t\"N00050424\",\n> \t\t\"N00053772\",\n> \t\t\"N00156924\",\n> \t\t\"N00158829\",\n> \t\t\"N00160726\",\n> \t\t\"N00162005\",\n> \t\t\"N00163080\",\n> \t\t\"N00163152\",\n> \t\t\"N00163862\",\n> \t\t\"N00164648\",\n> \t\t\"N00165940\",\n> \t\t\"N00166313\",\n> \t\t\"N00169208\",\n> \t\t\"N00169346\",\n> \t\t\"N00170413\",\n> \t\t\"N00170864\",\n> \t\t\"N00171841\",\n> \t\t\"N00018012\",\n> \t\t\"N00018901\",\n> \t\t\"N00019613\",\n> \t\t\"N00020496\",\n> \t\t\"N00024178\",\n> \t\t\"N00024365\",\n> \t\t\"N00025594\",\n> \t\t\"N00202956\",\n> \t\t\"N00203152\",\n> \t\t\"N00204168\",\n> \t\t\"N00205577\",\n> \t\t\"N00206550\",\n> \t\t\"N00206800\",\n> \t\t\"N00207514\",\n> \t\t\"N00207654\",\n> \t\t\"N00208249\",\n> \t\t\"N00208939\",\n> \t\t\"N00209030\",\n> \t\t\"N00209751\",\n> \t\t\"N00210453\",\n> \t\t\"N00210930\",\n> \t\t\"N00211034\",\n> \t\t\"N00211308\",\n> \t\t\"N00211524\",\n> \t\t\"N00212036\",\n> \t\t\"N00212241\",\n> \t\t\"N00212612\",\n> \t\t\"N00213291\",\n> \t\t\"N00213373\",\n> \t\t\"N00128593\",\n> \t\t\"N00129764\",\n> \t\t\"N00132485\",\n> \t\t\"N00132895\",\n> \t\t\"N00133121\",\n> \t\t\"N00137058\",\n> \t\t\"N00137777\",\n> \t\t\"N00138069\",\n> \t\t\"N00142152\",\n> \t\t\"N00142587\",\n> \t\t\"N00144281\",\n> \t\t\"N00144333\",\n> \t\t\"N00144695\",\n> \t\t\"N00145939\",\n> \t\t\"N00146208\",\n> \t\t\"N00146895\",\n> \t\t\"N00150654\",\n> \t\t\"N00150922\",\n> \t\t\"N00152279\",\n> \t\t\"N00208643\",\n> \t\t\"N00209309\",\n> \t\t\"N00209621\",\n> \t\t\"N00210412\",\n> \t\t\"N00210540\",\n> \t\t\"N00211336\",\n> \t\t\"N00212100\",\n> \t\t\"N00212129\",\n> \t\t\"N00213511\",\n> \t\t\"N00214499\",\n> \t\t\"N00214530\",\n> \t\t\"N00217479\",\n> \t\t\"N00218510\",\n> \t\t\"N00219285\",\n> \t\t\"N00219980\",\n> \t\t\"N00220009\",\n> \t\t\"N00232386\",\n> \t\t\"N00233549\",\n> \t\t\"N00233593\",\n> 
\t\t\"N00234108\",\n> \t\t\"N00153370\",\n> \t\t\"N00153592\",\n> \t\t\"N00154827\",\n> \t\t\"N00159110\",\n> \t\t\"N00159368\",\n> \t\t\"N00160638\",\n> \t\t\"N00160779\",\n> \t\t\"N00160807\",\n> \t\t\"N00161037\",\n> \t\t\"N00161822\",\n> \t\t\"N00165366\",\n> \t\t\"N00165612\",\n> \t\t\"N00169072\",\n> \t\t\"N00169110\",\n> \t\t\"N00169167\",\n> \t\t\"N00169745\",\n> \t\t\"N00171797\",\n> \t\t\"N00016420\",\n> \t\t\"N00019122\",\n> \t\t\"N00024053\",\n> \t\t\"N00028206\",\n> \t\t\"N00189154\",\n> \t\t\"N00190652\",\n> \t\t\"N00192652\",\n> \t\t\"N00194204\",\n> \t\t\"N00195248\",\n> \t\t\"N00195901\",\n> \t\t\"N00195991\",\n> \t\t\"N00196289\",\n> \t\t\"N00198214\",\n> \t\t\"N00198912\",\n> \t\t\"N00055693\",\n> \t\t\"N00056314\",\n> \t\t\"N00057377\",\n> \t\t\"N00057450\",\n> \t\t\"N00058237\",\n> \t\t\"N00059145\",\n> \t\t\"N00059321\",\n> \t\t\"N00061491\",\n> \t\t\"N00063577\",\n> \t\t\"N00065040\",\n> \t\t\"N00066267\",\n> \t\t\"N00066343\",\n> \t\t\"N00066465\",\n> \t\t\"N00068583\",\n> \t\t\"N00070152\",\n> \t\t\"N00071286\",\n> \t\t\"N00072242\",\n> \t\t\"N00072265\",\n> \t\t\"N00072974\",\n> \t\t\"N00073691\",\n> \t\t\"N00076693\",\n> \t\t\"N00077194\",\n> \t\t\"N00224798\",\n> \t\t\"N00224802\",\n> \t\t\"N00225027\",\n> \t\t\"N00225201\",\n> \t\t\"N00225358\",\n> \t\t\"N00225873\",\n> \t\t\"N00226362\",\n> \t\t\"N00226998\",\n> \t\t\"N00227002\",\n> \t\t\"N00229407\",\n> \t\t\"N00229568\",\n> \t\t\"N00229648\",\n> \t\t\"N00229990\",\n> \t\t\"N00230220\",\n> \t\t\"N00230437\",\n> \t\t\"N00230518\",\n> \t\t\"N00230935\",\n> \t\t\"N00231198\",\n> \t\t\"N00231347\",\n> \t\t\"N00231367\",\n> \t\t\"N00231392\",\n> \t\t\"N00231636\",\n> \t\t\"N00232503\",\n> \t\t\"N00233282\",\n> \t\t\"N00233382\",\n> \t\t\"N00233490\",\n> \t\t\"N00130217\",\n> \t\t\"N00130333\",\n> \t\t\"N00132811\",\n> \t\t\"N00133751\",\n> \t\t\"N00136095\",\n> \t\t\"N00136852\",\n> \t\t\"N00137346\",\n> \t\t\"N00137511\",\n> \t\t\"N00137975\",\n> \t\t\"N00139185\",\n> \t\t\"N00139607\",\n> \t\t\"N00141634\",\n> \t\t\"N00141882\",\n> \t\t\"N00142044\",\n> \t\t\"N00142220\",\n> \t\t\"N00143465\",\n> \t\t\"N00143479\",\n> \t\t\"N00146517\",\n> \t\t\"N00147560\",\n> \t\t\"N00148349\",\n> \t\t\"N00150841\",\n> \t\t\"N00151067\",\n> \t\t\"N00104262\",\n> \t\t\"N00104992\",\n> \t\t\"N00106891\",\n> \t\t\"N00110459\",\n> \t\t\"N00112159\",\n> \t\t\"N00112368\",\n> \t\t\"N00113593\",\n> \t\t\"N00113638\",\n> \t\t\"N00118561\",\n> \t\t\"N00119969\",\n> \t\t\"N00121152\",\n> \t\t\"N00122989\",\n> \t\t\"N00123746\",\n> \t\t\"N00125859\",\n> \t\t\"N00220895\",\n> \t\t\"N00221013\",\n> \t\t\"N00221497\",\n> \t\t\"N00221556\",\n> \t\t\"N00221637\",\n> \t\t\"N00221815\",\n> \t\t\"N00221860\",\n> \t\t\"N00222049\",\n> \t\t\"N00223118\",\n> \t\t\"N00223192\",\n> \t\t\"N00223545\",\n> \t\t\"N00223884\",\n> \t\t\"N00224060\",\n> \t\t\"N00224359\",\n> \t\t\"N00224868\",\n> \t\t\"N00225439\",\n> \t\t\"N00227048\",\n> \t\t\"N00227131\",\n> \t\t\"N00227314\",\n> \t\t\"N00228613\",\n> \t\t\"N00229187\",\n> \t\t\"N00229503\",\n> \t\t\"N00210768\",\n> \t\t\"N00212375\",\n> \t\t\"N00213433\",\n> \t\t\"N00213812\",\n> \t\t\"N00215281\",\n> \t\t\"N00215472\",\n> \t\t\"N00217304\",\n> \t\t\"N00217307\",\n> \t\t\"N00217571\",\n> \t\t\"N00218057\",\n> \t\t\"N00218145\",\n> \t\t\"N00219212\",\n> \t\t\"N00219553\",\n> \t\t\"N00219831\",\n> \t\t\"N00219841\",\n> \t\t\"N00220349\",\n> \t\t\"N00220416\",\n> \t\t\"N00220915\",\n> \t\t\"N00221160\",\n> \t\t\"N00221424\",\n> \t\t\"N00221699\",\n> \t\t\"N00230108\",\n> \t\t\"N00230573\",\n> 
\t\t\"N00230658\",\n> \t\t\"N00231943\",\n> \t\t\"N00231984\",\n> \t\t\"N00232124\",\n> \t\t\"N00233146\",\n> \t\t\"N00233437\",\n> \t\t\"N00233442\",\n> \t\t\"N00234092\",\n> \t\t\"N00234129\",\n> \t\t\"N00234153\",\n> \t\t\"N00203496\",\n> \t\t\"N00205803\",\n> \t\t\"N00206485\",\n> \t\t\"N00206951\",\n> \t\t\"N00207590\",\n> \t\t\"N00207765\",\n> \t\t\"N00207914\",\n> \t\t\"N00209002\",\n> \t\t\"N00209194\",\n> \t\t\"N00209667\",\n> \t\t\"N00210237\",\n> \t\t\"N00210360\",\n> \t\t\"N00210990\",\n> \t\t\"N00211384\",\n> \t\t\"N00211554\",\n> \t\t\"N00212044\",\n> \t\t\"N00212047\",\n> \t\t\"N00212359\",\n> \t\t\"N00212502\",\n> \t\t\"N00212625\",\n> \t\t\"N00222500\",\n> \t\t\"N00222747\",\n> \t\t\"N00222962\",\n> \t\t\"N00223384\",\n> \t\t\"N00223710\",\n> \t\t\"N00223952\",\n> \t\t\"N00224084\",\n> \t\t\"N00224329\",\n> \t\t\"N00224555\",\n> \t\t\"N00224604\",\n> \t\t\"N00225073\",\n> \t\t\"N00225604\",\n> \t\t\"N00225980\",\n> \t\t\"N00226009\",\n> \t\t\"N00226085\",\n> \t\t\"N00226807\",\n> \t\t\"N00227247\",\n> \t\t\"N00227353\",\n> \t\t\"N00227434\",\n> \t\t\"N00081637\",\n> \t\t\"N00083414\",\n> \t\t\"N00085341\",\n> \t\t\"N00087159\",\n> \t\t\"N00087694\",\n> \t\t\"N00088957\",\n> \t\t\"N00090066\",\n> \t\t\"N00090999\",\n> \t\t\"N00093032\",\n> \t\t\"N00093751\",\n> \t\t\"N00097481\",\n> \t\t\"N00098278\",\n> \t\t\"N00098494\",\n> \t\t\"N00099500\",\n> \t\t\"N00101213\",\n> \t\t\"N00102906\",\n> \t\t\"N00054496\",\n> \t\t\"N00054728\",\n> \t\t\"N00055987\",\n> \t\t\"N00056464\",\n> \t\t\"N00059676\",\n> \t\t\"N00062038\",\n> \t\t\"N00063751\",\n> \t\t\"N00064165\",\n> \t\t\"N00065014\",\n> \t\t\"N00066730\",\n> \t\t\"N00067571\",\n> \t\t\"N00068685\",\n> \t\t\"N00069665\",\n> \t\t\"N00072430\",\n> \t\t\"N00073642\",\n> \t\t\"N00075065\",\n> \t\t\"N00075996\",\n> \t\t\"N00076844\",\n> \t\t\"N00173103\",\n> \t\t\"N00176403\",\n> \t\t\"N00178682\",\n> \t\t\"N00179810\",\n> \t\t\"N00179863\",\n> \t\t\"N00180434\",\n> \t\t\"N00181388\",\n> \t\t\"N00184250\",\n> \t\t\"N00185495\",\n> \t\t\"N00185938\",\n> \t\t\"N00187246\",\n> \t\t\"N00028387\",\n> \t\t\"N00029783\",\n> \t\t\"N00030117\",\n> \t\t\"N00034460\",\n> \t\t\"N00036629\",\n> \t\t\"N00037666\",\n> \t\t\"N00038120\",\n> \t\t\"N00039126\",\n> \t\t\"N00039161\",\n> \t\t\"N00040479\",\n> \t\t\"N00043168\",\n> \t\t\"N00044337\",\n> \t\t\"N00044835\",\n> \t\t\"N00045436\",\n> \t\t\"N00045651\",\n> \t\t\"N00048074\",\n> \t\t\"N00048338\",\n> \t\t\"N00049787\",\n> \t\t\"N00052770\",\n> \t\t\"N00213986\",\n> \t\t\"N00214524\",\n> \t\t\"N00215921\",\n> \t\t\"N00216511\",\n> \t\t\"N00217220\",\n> \t\t\"N00217316\",\n> \t\t\"N00218558\",\n> \t\t\"N00218868\",\n> \t\t\"N00218869\",\n> \t\t\"N00219589\",\n> \t\t\"N00220247\",\n> \t\t\"N00220488\",\n> \t\t\"N00224005\",\n> \t\t\"N00224011\",\n> \t\t\"N00224102\",\n> \t\t\"N00107385\",\n> \t\t\"N00108074\",\n> \t\t\"N00110516\",\n> \t\t\"N00110747\",\n> \t\t\"N00110777\",\n> \t\t\"N00111584\",\n> \t\t\"N00112635\",\n> \t\t\"N00114416\",\n> \t\t\"N00115132\",\n> \t\t\"N00117507\",\n> \t\t\"N00122588\",\n> \t\t\"N00123470\",\n> \t\t\"N00123512\",\n> \t\t\"N00124695\",\n> \t\t\"N00124701\",\n> \t\t\"N00125542\",\n> \t\t\"N00125980\",\n> \t\t\"N00126495\",\n> \t\t\"N00079876\",\n> \t\t\"N00084531\",\n> \t\t\"N00085530\",\n> \t\t\"N00085602\",\n> \t\t\"N00088010\",\n> \t\t\"N00088920\",\n> \t\t\"N00090221\",\n> \t\t\"N00092965\",\n> \t\t\"N00094566\",\n> \t\t\"N00097704\",\n> \t\t\"N00099628\",\n> \t\t\"N00100438\",\n> \t\t\"N00100649\",\n> \t\t\"N00101054\",\n> \t\t\"N00101361\",\n> 
\t\t\"N00102696\",\n> \t\t\"N00187745\",\n> \t\t\"N00188427\",\n> \t\t\"N00189555\",\n> \t\t\"N00190567\",\n> \t\t\"N00191703\",\n> \t\t\"N00194408\",\n> \t\t\"N00196337\",\n> \t\t\"N00198157\",\n> \t\t\"N00198439\",\n> \t\t\"N00198756\",\n> \t\t\"N00199937\",\n> \t\t\"N00201840\",\n> \t\t\"N00202422\",\n> \t\t\"N00202508\",\n> \t\t\"N00202885\",\n> \t\t\"N00203237\",\n> \t\t\"N00203535\",\n> \t\t\"N00204471\",\n> \t\t\"N00207104\",\n> \t\t\"N00207424\",\n> \t\t\"N00208027\",\n> \t\t\"N00208131\",\n> \t\t\"N00208436\",\n> \t\t\"N00209676\",\n> \t\t\"N00226768\",\n> \t\t\"N00227977\",\n> \t\t\"N00228104\",\n> \t\t\"N00228801\",\n> \t\t\"N00229252\",\n> \t\t\"N00229844\",\n> \t\t\"N00230168\",\n> \t\t\"N00230189\",\n> \t\t\"N00230496\",\n> \t\t\"N00230639\",\n> \t\t\"N00231397\",\n> \t\t\"N00231489\",\n> \t\t\"N00231673\",\n> \t\t\"N00231794\",\n> \t\t\"N00232114\",\n> \t\t\"N00232151\",\n> \t\t\"N00232194\",\n> \t\t\"N00232233\",\n> \t\t\"N00232234\",\n> \t\t\"N00131162\",\n> \t\t\"N00133817\",\n> \t\t\"N00134884\",\n> \t\t\"N00136636\",\n> \t\t\"N00142157\",\n> \t\t\"N00143244\",\n> \t\t\"N00143582\",\n> \t\t\"N00144410\",\n> \t\t\"N00144440\",\n> \t\t\"N00144839\",\n> \t\t\"N00145969\",\n> \t\t\"N00146102\",\n> \t\t\"N00146201\",\n> \t\t\"N00147525\",\n> \t\t\"N00147897\",\n> \t\t\"N00149373\",\n> \t\t\"N00150364\",\n> \t\t\"N00185404\",\n> \t\t\"N00186496\",\n> \t\t\"N00186669\",\n> \t\t\"N00187247\",\n> \t\t\"N00191115\",\n> \t\t\"N00191237\",\n> \t\t\"N00191396\",\n> \t\t\"N00192399\",\n> \t\t\"N00193440\",\n> \t\t\"N00194333\",\n> \t\t\"N00197593\",\n> \t\t\"N00171575\",\n> \t\t\"N00171736\",\n> \t\t\"N00172524\",\n> \t\t\"N00172801\",\n> \t\t\"N00174324\",\n> \t\t\"N00174334\",\n> \t\t\"N00176220\",\n> \t\t\"N00176389\",\n> \t\t\"N00177153\",\n> \t\t\"N00177204\",\n> \t\t\"N00178652\",\n> \t\t\"N00179250\",\n> \t\t\"N00181046\",\n> \t\t\"N00181471\",\n> \t\t\"N00181761\",\n> \t\t\"N00181768\",\n> \t\t\"N00182119\",\n> \t\t\"N00185204\",\n> \t\t\"N00185310\",\n> \t\t\"N00185962\",\n> \t\t\"N00107744\",\n> \t\t\"N00109597\",\n> \t\t\"N00110549\",\n> \t\t\"N00110925\",\n> \t\t\"N00113681\",\n> \t\t\"N00117780\",\n> \t\t\"N00118519\",\n> \t\t\"N00118995\",\n> \t\t\"N00120100\",\n> \t\t\"N00120333\",\n> \t\t\"N00121238\",\n> \t\t\"N00121369\",\n> \t\t\"N00122594\",\n> \t\t\"N00124325\",\n> \t\t\"N00124769\",\n> \t\t\"N00125806\",\n> \t\t\"N00125907\",\n> \t\t\"N00126542\",\n> \t\t\"N00131342\",\n> \t\t\"N00016668\",\n> \t\t\"N00018355\",\n> \t\t\"N00019033\",\n> \t\t\"N00019572\",\n> \t\t\"N00025374\",\n> \t\t\"N00026805\",\n> \t\t\"N00027009\",\n> \t\t\"N00031287\",\n> \t\t\"N00032471\",\n> \t\t\"N00037524\",\n> \t\t\"N00037860\",\n> \t\t\"N00045536\",\n> \t\t\"N00046923\",\n> \t\t\"N00047875\",\n> \t\t\"N00048143\",\n> \t\t\"N00048285\",\n> \t\t\"N00051695\",\n> \t\t\"N00055892\",\n> \t\t\"N00056078\",\n> \t\t\"N00057234\",\n> \t\t\"N00057824\",\n> \t\t\"N00058939\",\n> \t\t\"N00060054\",\n> \t\t\"N00060779\",\n> \t\t\"N00061808\",\n> \t\t\"N00061839\",\n> \t\t\"N00064102\",\n> \t\t\"N00065715\",\n> \t\t\"N00066847\",\n> \t\t\"N00068114\",\n> \t\t\"N00068650\",\n> \t\t\"N00072731\",\n> \t\t\"N00073615\",\n> \t\t\"N00077000\",\n> \t\t\"N00077428\",\n> \t\t\"N00077480\",\n> \t\t\"N00080037\",\n> \t\t\"N00080937\",\n> \t\t\"N00081871\",\n> \t\t\"N00082169\",\n> \t\t\"N00085404\",\n> \t\t\"N00085541\",\n> \t\t\"N00086975\",\n> \t\t\"N00088075\",\n> \t\t\"N00088808\",\n> \t\t\"N00089931\",\n> \t\t\"N00091601\",\n> \t\t\"N00097586\",\n> \t\t\"N00097723\",\n> \t\t\"N00097902\",\n> 
\t\t\"N00098826\",\n> \t\t\"N00098977\",\n> \t\t\"N00099330\",\n> \t\t\"N00100199\",\n> \t\t\"N00156952\",\n> \t\t\"N00160032\",\n> \t\t\"N00162432\",\n> \t\t\"N00165701\",\n> \t\t\"N00167208\",\n> \t\t\"N00168496\",\n> \t\t\"N00171162\",\n> \t\t\"N00171356\",\n> \t\t\"N00171602\",\n> \t\t\"N00198040\",\n> \t\t\"N00201400\",\n> \t\t\"N00201830\",\n> \t\t\"N00202665\",\n> \t\t\"N00202942\",\n> \t\t\"N00203715\",\n> \t\t\"N00203727\",\n> \t\t\"N00203737\",\n> \t\t\"N00203757\",\n> \t\t\"N00204548\",\n> \t\t\"N00204715\",\n> \t\t\"N00205865\",\n> \t\t\"N00206002\",\n> \t\t\"N00206726\",\n> \t\t\"N00206871\",\n> \t\t\"N00207491\",\n> \t\t\"N00207809\",\n> \t\t\"N00208375\",\n> \t\t\"N00188282\",\n> \t\t\"N00188376\",\n> \t\t\"N00188422\",\n> \t\t\"N00188688\",\n> \t\t\"N00189709\",\n> \t\t\"N00190311\",\n> \t\t\"N00190860\",\n> \t\t\"N00191942\",\n> \t\t\"N00192114\",\n> \t\t\"N00193133\",\n> \t\t\"N00193683\",\n> \t\t\"N00193749\",\n> \t\t\"N00194254\",\n> \t\t\"N00195344\",\n> \t\t\"N00196302\",\n> \t\t\"N00196409\",\n> \t\t\"N00197337\",\n> \t\t\"N00197968\",\n> \t\t\"N00198184\",\n> \t\t\"N00198351\",\n> \t\t\"N00198375\",\n> \t\t\"N00198570\",\n> \t\t\"N00131830\",\n> \t\t\"N00134166\",\n> \t\t\"N00138573\",\n> \t\t\"N00138722\",\n> \t\t\"N00139823\",\n> \t\t\"N00141406\",\n> \t\t\"N00141508\",\n> \t\t\"N00142727\",\n> \t\t\"N00143409\",\n> \t\t\"N00144591\",\n> \t\t\"N00145375\",\n> \t\t\"N00146223\",\n> \t\t\"N00147454\",\n> \t\t\"N00148011\",\n> \t\t\"N00148664\",\n> \t\t\"N00150257\",\n> \t\t\"N00151340\",\n> \t\t\"N00152224\",\n> \t\t\"N00153686\",\n> \t\t\"N00153796\",\n> \t\t\"N00154434\",\n> \t\t\"N00221087\",\n> \t\t\"N00221130\",\n> \t\t\"N00221300\",\n> \t\t\"N00221338\",\n> \t\t\"N00222007\",\n> \t\t\"N00222445\",\n> \t\t\"N00223041\",\n> \t\t\"N00223935\",\n> \t\t\"N00224074\",\n> \t\t\"N00225634\",\n> \t\t\"N00226174\",\n> \t\t\"N00227106\",\n> \t\t\"N00227400\",\n> \t\t\"N00228315\",\n> \t\t\"N00228410\",\n> \t\t\"N00228705\",\n> \t\t\"N00228901\",\n> \t\t\"N00228925\",\n> \t\t\"N00228980\",\n> \t\t\"N00229185\",\n> \t\t\"N00230101\",\n> \t\t\"N00031683\",\n> \t\t\"N00035219\",\n> \t\t\"N00037022\",\n> \t\t\"N00037155\",\n> \t\t\"N00038963\",\n> \t\t\"N00039509\",\n> \t\t\"N00039708\",\n> \t\t\"N00039896\"\n> \t]\n> }\n",
"text": "db.version()\n4.0.14QueryThe above query works when there were 1000 records in the array key studentIds but where there were 20 thousand records it doesn’t work with the same query.\ncan you please let me know what’s the issue in that\nIf the query works for 1000 records it should work for 20, 40 thousands of records.",
"username": "Sonal_Panchal"
}
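For reference, the raw MongoDB filter the Laravel builder is expected to produce can be reproduced directly in a driver to rule the builder in or out. With a single value, $all is equivalent to a plain equality match against the array, regardless of how many elements the array holds. A hedged pymongo sketch (the connection string and database name are assumptions):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
popup = client["mydb"]["popup_master"]              # database name assumed

user_id = "N00085858"
# These two filters match the same documents: any document whose studentIds
# array contains user_id, whether the array has 10 or 20,000 elements.
print(popup.count_documents({"studentIds": {"$all": [user_id]}}))
print(popup.count_documents({"studentIds": user_id}))
```

If both counts return the expected document, the problem is more likely in how the query builder composes the other conditions than in the array size itself.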
] | Finding documents with matching array elements using "all" | 2020-03-24T18:20:28.962Z | Finding documents with matching array elements using “all” | 4,468 |
[
"transactions"
] | [
{
"code": "",
"text": "I am currently using the MongoDB Java Driver and as part of transaction support, I wanted to see if someone can abort a transaction till a savepoint? At the moment, it appears that the entire transaction aborts/rollbacks.ref :",
"username": "Nachi"
},
{
"code": "",
"text": "As far as I know, there are no save points in MongoDB transactions",
"username": "DavidSol"
},
{
"code": "",
"text": "You are both correct. The MongoDB server (as at 4.2) does not have support for savepoints or nested transactions, so the only execution outcomes are commit or abort of all operations in a transaction.You could raise this as a feature suggestion the MongoDB Feedback site for others to upvote and watch.Regards,\nStennie",
"username": "Stennie_X"
}
] | Does MongoDB provide rollback to a savepoint? | 2020-03-24T20:39:35.867Z | Does MongoDB provide rollback to a savepoint? | 3,313 |
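As an illustration of the commit-or-abort behaviour described in the thread above, here is a minimal mongo shell sketch (database and collection names are assumptions; it requires a replica set and MongoDB 4.0+). There is no partial rollback to a savepoint: either every write in the session commits, or every write is discarded.

```js
const session = db.getMongo().startSession();
const orders = session.getDatabase("shop").orders;
const audit  = session.getDatabase("shop").audit;

session.startTransaction();
try {
  orders.insertOne({ item: "book", qty: 1 });
  audit.insertOne({ event: "order-created" });
  session.commitTransaction();   // both writes become visible together
} catch (e) {
  session.abortTransaction();    // both writes are rolled back; no savepoints exist
} finally {
  session.endSession();
}
```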
|
null | [] | [
{
"code": "",
"text": "Hi there I am not able to connect to the database using mongodb compass. my laptop even passes both the ping and the port test but i am not still able to access the database and I am using the stable version of mongo compass too. please help me with this issueMongo compass error\nping result\nport result:\n",
"username": "Srinivas_05529"
},
{
"code": "mongodb://m001-student:[email protected]:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?authSource=admin&replicaSet=Cluster0-shard-0&readPreference=primary&ssl=true",
"text": "Try this:mongodb://m001-student:[email protected]:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?authSource=admin&replicaSet=Cluster0-shard-0&readPreference=primary&ssl=true",
"username": "007_jb"
},
{
"code": "",
"text": "mongodb://m001-student:[email protected]:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?authSource=admin&replicaSet=Cluster0-shard-0&readPreference=primary&ssl=trueIt Works, Thank you so much for the help!",
"username": "Srinivas_05529"
},
{
"code": "",
"text": "Closing this thread as the issue has been resolved.",
"username": "Shubham_Ranjan"
}
] | Not able to connect to database | 2020-03-24T17:13:01.345Z | Not able to connect to database | 1,904 |
null | [
"dot-net"
] | [
{
"code": "SELECT auto,Part,Entity_ID,Pos_Z \nFROM Nesting \nWHERE Part='033TRAVERSOLATERALE3' or (Part='033TRAVERSOLATERALE1' and Pos_Z=-90)\nORDER BY Pos_Z ASC, Entity_ID DESC\nPart='033TRAVERSOLATERALE3' or (Part='033TRAVERSOLATERALE1' and Pos_Z=-90)\n{\n$or:[{Part: {$eq:'033TRAVERSOLATERALE3'}}, {$and:[{Part: {$eq:'033TRAVERSOLATERALE1'}}, {Pos_Z: {$eq:-90}}]}]\n}\nBsonDocument filter = new BsonDocument();\n filter.Add(\"$or\", new BsonArray()\n .Add(new BsonDocument()\n .Add(\"Part\", \"033TRAVERSOLATERALE3\")\n )\n .Add(new BsonDocument()\n .Add(\"$and\", new BsonArray()\n .Add(new BsonDocument()\n .Add(\"Part\", \"033TRAVERSOLATERALE1\")\n )\n .Add(new BsonDocument()\n .Add(\"Pos_Z\", new BsonInt64(-90L))\n )\n )\n )\n );\n",
"text": "Good morning,\ngiven that I have no extensive knowledge of databases, I was asked to develop a generic MongoDB class derived from an abstract Database class and implement the “Insert”, “Select”, “Update” and “Delete” methods for MongoDB using SQL syntax as input. I am having difficulty converting queries like thisto MongoDB, also with the help of the Aggregation Framework. Can anyone tell me if there is a nuget package or snippet code that already does the query conversion? I found some useful tools but no libraries or sources. Is there no other way than to parsify the SQL WHERE argument string to convert the query?to a MongoDB $match stage with this argument:or toThe natural way would perhaps be to use sql-linq and entities but how to do it without defining an a priori entity and creating a class that allows you to perform CRUD operations on any table (e.g. document)?\nAny help for this “dirty” job will be greatly appreciated. Thank you",
"username": "Pierdomenico_D_Erric"
},
{
"code": "",
"text": "Hi @Pierdomenico_D_Erric, welcome!The natural way would perhaps be to use sql-linq and entities but how to do it without defining an a priori entity and creating a class that allows you to perform CRUD operations on any table (e.g. document)?Without knowing exactly why you’re trying to develop the class, generally, if you would like a database abstraction in C# you would use LINQ. The code then will have some portability layer. For example see MongoDB .NET/C#: LINQ CRUD. I understand that you would have to create entities for it first, but should work for both SQL and MongoDB.Regards,\nWan.",
"username": "wan"
}
] | MongoDB, SQL, Access common base abstract class C# | 2020-03-19T10:52:45.864Z | MongoDB, SQL, Access common base abstract class C# | 2,266 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.0.17 is out and is ready for production deployment. This release contains only fixes since 4.0.16, and is a recommended upgrade for all 4.0 users.Fixed in this release:4.0 Release Notes | All Issues | DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Luke_Chen"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.0.17 is released | 2020-03-25T04:09:34.035Z | MongoDB 4.0.17 is released | 2,406 |
null | [
"change-streams"
] | [
{
"code": "db.collection.watch",
"text": "For databases that are not replica sets and sharded clusters, how to implement a similar function to db.collection.watch, that is, to monitor the changes of the database in real time?db.collection.watch is For replica sets and sharded clusters only .",
"username": "masx200_masx200"
},
{
"code": "",
"text": "I am not sure of your requirements but perhaps the change stream will be helpful.See https://docs.mongodb.com/manual/changeStreams/",
"username": "steevej"
},
{
"code": "",
"text": "I mean how to use Change Streams on a normal database",
"username": "masx200_masx200"
},
{
"code": "",
"text": "@masx200_masx200 Change streams require a replica set or sharded cluster deployment because the mechanism for monitoring data changes is provided by the replication oplog which does not exist in a standalone deployment.However, you can create a single member replica set deployment for testing or development purposes. To do so, follow the tutorial to Convert a Standalone to a Replica Set but do not add any additional members after initialising the replica set.The main downsides of a single member deployment are that you don’t get any of the usual replica set benefits such as data redundancy and fault tolerance. A replica set member also has some expected write overhead as compared to a standalone server because it has to maintain the oplog.Regards,\nStennie",
"username": "Stennie_X"
}
] | For databases that are not replica sets and sharded clusters, how to monitor the changes of the database in real time? | 2020-03-24T11:50:56.332Z | For databases that are not replica sets and sharded clusters, how to monitor the changes of the database in real time? | 1,865 |
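To make the single-member replica set suggestion above concrete, a hedged sketch follows (paths, database and collection names are examples only): start mongod with a replica set name, initiate the set, and change streams become available.

```js
// 1. Start the server with a replica set name, e.g.:
//      mongod --dbpath /data/db --replSet rs0
// 2. In the mongo shell, initiate the single-member replica set:
rs.initiate();

// 3. Change streams now work, even with only one member:
const cursor = db.getSiblingDB("test").mycollection.watch();
while (cursor.hasNext()) {
  printjson(cursor.next());   // prints each insert/update/delete event as it arrives
}
```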
[
"stitch"
] | [
{
"code": "",
"text": "How to access a Stitch API from Postman using API Key? The API works with “Anonymous Auth” but not sure how to use with Postman - Server key or User key?This document https://docs.mongodb.com/stitch/authentication/api-key/#server-api-keys does not specify clearly how to use API Keys.\nstitch-server3220×1232 211 KB\n\nstitch-user2572×1194 190 KB\n\nstitch-api3182×1584 298 KB\nTried this on Postman\nstitch-postman2264×1288 305 KB\n",
"username": "Suren_Konathala"
},
{
"code": "",
"text": "Hello Suren.I spent a whole day with the same problem that you have, in the image I show you how I solved it.\nPostman2264×1288 293 KB\n",
"username": "Federico_Ettlin"
},
{
"code": "",
"text": "Thanks @Federico_Ettlin but still showing the same error?i’m going to Stitch > Users > ReadAPIKey (server) and using the value under ID as the VALUE in POSTMAN.\nScreen_Shot_2020-03-23_at_4_42_25_PM2208×1404 239 KB\n",
"username": "Suren_Konathala"
},
{
"code": "",
"text": "With some help from Mongodb support… here’s the solution.\nScreen_Shot_2020-03-23_at_11_58_28_PM2210×1274 174 KB\n",
"username": "Suren_Konathala"
}
] | How to use Mongodb Stitch from Postman using API Key? | 2020-03-23T18:22:33.423Z | How to use Mongodb Stitch from Postman using API Key? | 3,799 |
|
null | [
"node-js"
] | [
{
"code": "napi-inl.h:4795:32: error: no matching function for call to ‘Napi::AsyncProgressWorker<T>::NonBlockingCall(std::nullptr_t) const’\nnode-addon-apigit+https://github.com/blagoev/node-addon-api.gitgit+https://github.com/blagoev/node-addon-api.git",
"text": "Hello,\ntoday morning Linux builds of Realm 5.0.0 with electron-rebuild went fine, but this afternoon I git a build error:Has something changed? I suspect the node-addon-api, the dependency is specified just as git+https://github.com/blagoev/node-addon-api.git and I have different version on my machine and Linux build machine. So, I guess the repo git+https://github.com/blagoev/node-addon-api.git has added some commits recently? Can you perform electron-rebuild for Realm 5.0.0 on Linux?",
"username": "Ondrej_Medek"
},
{
"code": "git+https://github.com/blagoev/node-addon-api.git#rjs",
"text": "It has been fixed in 5.0.2 by setting the dependency to git+https://github.com/blagoev/node-addon-api.git#rjs. I hope, this branch would not change since the release.",
"username": "Ondrej_Medek"
},
{
"code": "",
"text": "Hi Ondrej,It’s very unfortunate that you’ve had to spend valuable time on this. We should definitely lock the version of the node-addon-api repository to a specific branch / tag to prevent this from happening in the future (until the changes we need gets released upstream). Thank you for bringing this to our attention.Also - please not that Electron is still not an officially supported platform (which is probably why we’ve been less careful ensuring this platform built in the absence of a prebuilt binary). What version of Electron are you using here?",
"username": "kraenhansen"
},
{
"code": "",
"text": "Hi Kræn,\nWe are using Electron 8 now (8.0.1, but will upgrade to 8.1.1 soon, I think). The prebuilt binary would be great, see also my post Electron-rebuild and ssl libs on Windows",
"username": "Ondrej_Medek"
},
{
"code": "README.md",
"text": "@kraenhansen You should also update the README.md, see my post Electron-rebuild and ssl libs on Windows - #2 by Ondrej_Medek and also the doc still mention “Node 8 and 10” two times.",
"username": "Ondrej_Medek"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm 5.0.0 electron-rebuild error on Linux | 2020-03-19T19:49:23.784Z | Realm 5.0.0 electron-rebuild error on Linux | 2,926 |
null | [
"containers",
"installation"
] | [
{
"code": "",
"text": "I’m trying to create a mongodb replica set in docker / docker-compose.In all the tutorials I’ve found online, the replica set is created in a basic fashion in docker-compose and then an administrator manually initializes the replica set in a mongo shell session.I’m trying to do something a little better. On startup, the container downloads the latest back from S3, and then initializes or joins a replica set, automatically finding the other containers in the network, according to pattern of numbered hostnames, eg mongo-1, mongo-2, … mongo-9.I’ve created a starter project here, https://github.com/bboyle1234/MongoSet, and I’m looking for someone who can help complete it. Pay is offered. You will need to be experienced in the following: docker, docker-compose, mongodb, mongodb replication, shell scripting, and s3.Thank you.",
"username": "Benjamin_Boyle"
},
{
"code": "",
"text": "I’ve changed direction. Since I’m not able to delete this post, I’ll update y’all with my decision here.I wanted mongo containers that would not have volumes mapped to the host. So they could appear and disappear anywhere on any server. I envisioned that upon startup, they would restore the latest backup version from S3, and then join an existing replica set to receive the latest updates. I also envisioned that if they could find the other db instances in the replica set, but none of them had the replica set initialised, then it would initialise the replica set.All that became “too hard” for me, in the sense it would take too long to setup starting from my limited knowledge. Why should I spend another week or two of my time setting this up so that once or twice a year, some administrator in my company doesn’t have to type “rs.initiate(config)” into a mongo shell? I could spend a few hours on documentation and not a week on learning something with little value to the company. So I did.We have a backup container dumping backups into S3 every half-hour, and it has the ability to restore any backup version to the mongo cluster. I could not find a container in docker hub that would do backup, restore, AND connect to a replica set, so I rolled my own.In the event that all three mongo db instances are removed at the same time from the “dynamic” container orchestration, we’re going to have to manually initialise the replicaset and run the restore. O well.",
"username": "Benjamin_Boyle"
}
] | Replica set in docker-compose | 2020-03-19T09:04:43.424Z | Replica set in docker-compose | 6,991 |
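For anyone landing on this thread, the manual step mentioned above (an administrator typing rs.initiate(config) into a mongo shell) looks roughly like the sketch below; the hostnames follow the mongo-1/mongo-2 naming pattern from the post, everything else is an assumption.

```js
// Run once, from a mongo shell connected to any one of the containers.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-1:27017" },
    { _id: 1, host: "mongo-2:27017" },
    { _id: 2, host: "mongo-3:27017" }
  ]
});

// Confirm a primary has been elected before restoring any backup:
rs.status().members.forEach(function (m) { print(m.name, m.stateStr); });
```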
null | [
"atlas-functions",
"app-services-user-auth",
"stitch"
] | [
{
"code": "",
"text": "Hello,There are some business logics that require running a function without authentication such as sending a text message during user sign up. Is there a way to call stitch function from client without auth (e.g anonymous auth) now or in the plan? Thanks!",
"username": "Alex_Wu"
},
{
"code": "Stitch.defaultAppClient.auth.loginWithCredential(new AnonymousCredential()).then(user => {\n Stitch.defaultAppClient.callFunction(\"foobar\", [argument]).then(response=>{\n console.log(result);\n }); \n}).catch(console.error);\n",
"text": "Hi @Alex_Wu, welcome!Is there a way to call stitch function from client without auth (e.g anonymous auth) now or in the plan?You should be able to do this now using Anonymous Authentication. For example using the JavaScript SDK:Regards,\nWan",
"username": "wan"
}
] | Stitch Function without Authentication | 2020-03-15T22:46:14.195Z | Stitch Function without Authentication | 1,825 |
[
"server",
"installation"
] | [
{
"code": "mongod --dbpath ~/data/db",
"text": "I am following this thread: macos - MongoDB can't find data directory after upgrading to Mac OS 10.15 (Catalina) - Stack Overflow).When I enter mongod --dbpath ~/data/db Terminal stalls on this:\nScreenshot 2020-03-23 at 17.25.472874×1388 1.21 MB\n",
"username": "Ryan_Frost"
},
{
"code": "mongodmongo",
"text": "Hi @Ryan_Frost the mongod command you ran will start the server and then log all output to the current terminal, unless you state where to write the log data.At the bottom of that log (third line from the bottom) it says that the server is listening on port 27017. Since you’re bound to localhost (that information is towards the middle of the log file) you should be able to start a new terminal window and run the mongo command to connect to the database and open a new MongoDB shell.Are you having problems connecting to the MongoDB server? If so what errors are you getting?",
"username": "Doug_Duncan"
},
{
"code": "mongodmongod",
"text": "I’ve usually ran mongodb by typing mongod. I get this when I type mongod\nScreenshot 2020-03-23 at 19.58.162008×400 270 KB\n",
"username": "Ryan_Frost"
},
{
"code": "",
"text": "\nScreenshot 2020-03-23 at 19.56.381952×916 485 KB\n",
"username": "Ryan_Frost"
},
{
"code": "",
"text": "Sorry can only reply with one picture at a time:\nScreenshot 2020-03-23 at 19.56.282872×1492 1.23 MB\n",
"username": "Ryan_Frost"
},
{
"code": "mongodmongomongo",
"text": "Ok so in the first image you get an error stating that the address is already in use. This happens if you try to run two different mongod commands without changing the port. In this case the second server shuts down as expected.For the second image I see a running mongo session and it’s ready for you to start working. There are warnings (you see these in the original message as well) that you should look into if this will be a production server.The final image is a log file with the last few lines showing that you did indeed connect to the server from a mongo shell.From what you’re showing everything is working as expected.",
"username": "Doug_Duncan"
},
{
"code": "mongodmongod --dbpath ~/data/db",
"text": "Thanks @Doug_Duncan.\nMy metric for success used to be when I typed in mongod it would get everything running.\nMy Mac did an update to Catalina and it changed where the data/db directory was stored.I’ve moved it to home. So my mongod --dbpath ~/data/db was trying to reset where mongodb reads it (what the stack overflow article suggested).",
"username": "Ryan_Frost"
}
] | Specifying path of data directory | 2020-03-23T17:50:57.853Z | Specifying path of data directory | 9,449 |
|
[] | [
{
"code": "",
"text": "Every time I login to community board, I get an e-mail like this. where is the settings to disable it?\nScreenshot_2020-03-15 Gmail - Successful sign-in for kivanca gmail com from new device1322×1562 222 KB\n",
"username": "coderkid"
},
{
"code": "",
"text": "Good question - let me ask the appropriate team.",
"username": "Jamie"
},
{
"code": "",
"text": "Hey @coderkid,Are you still experiencing this issue? This should only occur the first time you log in on a new browser (it’s saved via a cookie in the browser).Cheers,Jamie",
"username": "Jamie"
},
{
"code": "",
"text": "Hello @Jamie,I use Firefox on “Strict” mode… Every tab I open, technically, a new browser with a new cookie jar ",
"username": "coderkid"
},
{
"code": "",
"text": "Thanks for the follow up. I checked with our cloud team and it looks like there’s no way to turn off these notifications for now, but they will investigate for a possible future feature. Apologies for the annoyance in the meantime.",
"username": "Jamie"
},
{
"code": "",
"text": "Ok I understand,I created a filter on Gmail, it deletes them automatically.Thank you anyway.",
"username": "coderkid"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to stop login notification e-mails? | 2020-03-15T15:57:35.435Z | How to stop login notification e-mails? | 4,056 |
|
null | [
"morphia-odm"
] | [
{
"code": "",
"text": "Morphia 2.0.0-BETA2 is in the process of syncing to maven central! This release fixes some packaging problems around dependencies that didn’t show up because all the testing and releases are done one machines with all the necessary artifacts already present. There were a handful of bug fixes and some API tweaks as well. The documentation (https://morphia.dev/) was also updated as well to be a bit more complete especially around the getting started and aggregation guides. Please hit this release with everything you’ve got. My hope is to leave this release out there for a while before cutting the final release and to try to catch any issues/concerns before things lock down for the final release. If you find anything please post here or file an issue and I’ll do my best to get you settled.",
"username": "Justin_Lee"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Morphia 2.0.0-BETA2 | 2020-03-23T18:04:16.013Z | Morphia 2.0.0-BETA2 | 3,388 |
null | [
"atlas",
"charts"
] | [
{
"code": "",
"text": "Hi, Charts fans! We’re excited to announce that the February release of Charts on Atlas is now live, and includes two big new features:You can try these features by activating Charts in any MongoDB Atlas project. If you have any questions, feel free to post here, or use the MongoDB Feedback Engine to submit feature requests.Happy Charting!\nTom Hollander\nMongoDB Charts Product Manager",
"username": "tomhollander"
},
{
"code": "",
"text": "That is great, is there an API, so our users can create chart via our UI?",
"username": "coderkid"
},
{
"code": "",
"text": "Hi Kay, not yet - the SDK is limited to embedding and interacting with charts that were already created with the Charts UI. A full API for creating and modifying charts is something we are considering in the future.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Do you have timeline? I need to know, before investing too much time on Charts instead of other Chart/BI tool. Thank you.",
"username": "coderkid"
},
{
"code": "",
"text": "It’s probably about a year away.",
"username": "tomhollander"
},
{
"code": "",
"text": "Is it available for regular Charts (not Atlas based)? Can’t find dashboard filter option.\nScreenshot 2020-03-22 at 17.33.213360×1678 428 KB\n",
"username": "Dmitriy_Serebryanski"
},
{
"code": "",
"text": "Hi @Dmitriy_Serebryanski. For now these new features are only available in the Atlas version.",
"username": "tomhollander"
},
{
"code": "",
"text": "@tomhollander I never used Atlas version of the Charts, I have been heavily using stand alone, though.What other features the non-atlas one is missing? Is there a comparison table somewhere?",
"username": "coderkid"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB Charts on Atlas: February 2020 update | 2020-02-13T01:44:07.001Z | MongoDB Charts on Atlas: February 2020 update | 3,219 |
null | [] | [
{
"code": "",
"text": "Hello Team,I don’t any option to download MongoDB Compass for 32 bit OS from MongoDB Compass Download | MongoDB. I have Windows 7 32 bit OS and the options under Download section are for 64 bit version. Can anyone help me here please ?Thanks and Regards,\nM.S. Bisht",
"username": "Mahendra_Singh_59111"
},
{
"code": "",
"text": "Unfortunately there’s no support for 32 bit OS.",
"username": "007_jb"
},
{
"code": "",
"text": "Ummm Ok. Thanks JB.",
"username": "Mahendra_Singh_59111"
},
{
"code": "",
"text": "",
"username": "system"
}
] | OS version issue for MongoDB Compass | 2020-03-21T12:58:46.293Z | OS version issue for MongoDB Compass | 1,896 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Hi there,Are there any info about when an update containing the new frozen objects are released to .NET ?I’m currently building a Blazor Server application and since Blazor decides to build UI on a thread of their own discretion, I’m unable to use the realm models directly.This means that I have to map everything to new models going back and forth between Realm and the UI - which is plain silly.So. When do you plan to release and update? According to the roadmap it should be out already. When will .NET get some love?",
"username": "Void"
},
{
"code": "",
"text": "Hey,\nWe normally try to avoid committing to too many hard deadlines as priorities (and the world situation at large) may impact our estimates. But assuming a normal disclaimer, .NET is getting love now and we are wrapping up Core6 support and that should hopefully land in April as GA. The Frozen Object support will come after that, best case late May. This is our current plan, but take it with due uncertainty as the current conditions may affect things…",
"username": "Brian_Munkholm"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Frozen objects for .NET | 2020-03-18T16:14:44.063Z | Frozen objects for .NET | 2,164 |
null | [
"security"
] | [
{
"code": "",
"text": "Hi,\nI’m trying to use mongodb auditLog.\nIn order to do so- I’ve changed the configuration file according to the documentation and it works fine.The problem is that in order for it to work I have to restart mongod service.Is there any option to change set auditLog without restarting mongod?\nIf not- is there at least a way to change the filter without restarting mongod? that’s something I need to change more often and restarting mongod isn’t an optionThanks for your help,\nOfer",
"username": "Ofer_Haim"
},
{
"code": "",
"text": "Is there any option to change set auditLog without restarting mongod?Auditing is intentionally configured via the process configuration file so audit filters cannot be changed or disabled via a compromised MongoDB user account.If not- is there at least a way to change the filter without restarting mongod? that’s something I need to change more often and restarting mongod isn’t an optionCan you explain your use case for changing audit filtering frequently? There is likely an alternative approach to recommend.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie, thanks for your quick response.\nI want to allow users to change the policy, so that every user will be able to see only the logs he needs.The current solution that I found is to change the filter according to the specific user’s required policy, but then I have to restart mongod whenever the user want to change his policy.Another solution I have is to filter the logs after mongodb is writing them, but writing all logs and then deleting unnecessary ones causes overload (unsurprisingly)Ofer",
"username": "Ofer_Haim"
},
{
"code": "",
"text": "Hi Ofer,Auditing is intended to be a server-level configuration option rather than a per-user setting, as the typical use case is for compliance.What audit events are you allowing your users to configure? You might want to look into Change Streams or database profiling as possible runtime alternatives.A more typical multi-user approach for shared logging would be to send logs to a central service (for example, Splunk or Graylog) which provides filtering, dashboards, and role-based access control.Regards,\nStennie",
"username": "Stennie_X"
}
] | Update auditLog configuration without restart | 2020-03-22T07:53:25.628Z | Update auditLog configuration without restart | 2,377 |
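As a sketch of the change stream alternative suggested above, per-user filtering can be applied at read time instead of in the server's audit configuration; the database, collection, and filter values below are assumptions for illustration.

```js
// Watch only the events a given user cares about, without touching mongod's audit settings.
const pipeline = [
  { $match: {
      operationType: { $in: ["insert", "update", "delete"] },
      "ns.coll": "payments"          // per-user filter, changeable at any time
  } }
];
const cursor = db.getSiblingDB("app").watch(pipeline);
while (cursor.hasNext()) {
  printjson(cursor.next());
}
```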
null | [
"dot-net"
] | [
{
"code": "{\n CustomerNr,\n Lastname,\n Order1(\"Toothbrush\", \"Soap\", ...)\n Order2(\"Book\",\"Paper\",\"Milk\", ...)\n}\npublic class CustomerModel\n {\n [BsonId]\n public Guid ID { get; set; }\n public Int64 CustomNr { get; set; }\n public string LastName { get; set; }\n public ??? ORDER ???\n}\n\npublic class OrderModel\n{\n ?????\n}\nvar x = new CustomModel(1,\"Smith\", ????)\n",
"text": "Hi,I am pretty new to NoSQL and to MongoDB I have a beginner question maybe someone could help me. I would like to try out a simple CustomerDB.I am using MongoDB Atlas and C# Driver in Visual Studio.My data should look something like this:And if the Customer orders next time I would like to add a Order3 in the document. Next Order add Order4 etc.How do I have to write my C# Class to accomplish this?and later on create that instanceI hope someone understands what I am asking for Thx for Help",
"username": "Henning"
},
{
"code": "{\n customerNr: 123,\n lastname: \"Smith\",\n orders: [\n { orderNo: 1, items: [ \"Toothbrush\", \"Soap\", ... ] },\n { orderNo: 2, items: [ \"Book\", \"Paper\", \"Milk\", ... ] },\n ...\n ]\n}\npublic class CustomerModel\n{\n [BsonId]\n public Guid ID { get; set; }\n public Int64 CustomNr { get; set; }\n public string LastName { get; set; }\n public ??? ORDER ???\n}\nListList<OrderModel>OrderModelpublic class OrderModel\n{\n public Int64 OrderNo { get; set; }\n public List<string> Items { get; set; }\n}\nvar x = new CustomModel(1, \"Smith\", ????)var order1 = new OrderModel( 1, new List<string> { \"Toothbrush\", \"Soap\" } )\nvar order2 = new OrderModel( 2, new List<string> { \"Book\", \"Paper\", \"Pencils\" } )\nvar x = new CustomModel(1, \"Smith\", new List<OrderModel> { order1, order2 } )\n",
"text": "In the MongoDB your customer collection’s document will store the orders as an array of order objects. In general, MongoDB data structure is that a database has collections and collection has documents (see MongoDB Introduction).What is ??? ORDER ???:This is to be a List collection, like List<OrderModel>. And the OrderModel class might look like:Creating the Customer Instance:What will be ??? in var x = new CustomModel(1, \"Smith\", ????)NOTE: I have never worked with C# before; I know Java programming language and both have similar OO concepts of classes, collections, etc., and I have written based on that.Useful Reference: Data model introduction",
"username": "Prasad_Saya"
}
] | Simple Customer Database how to add several orders to the same Customer | 2020-03-21T19:37:43.036Z | Simple Customer Database how to add several orders to the same Customer | 3,258 |
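On the database side, the "add another order to the same customer" part of the question above maps to a simple array update; a hedged mongo shell sketch follows (field names mirror the post, while the collection name and values are assumptions).

```js
// Append a new order to an existing customer's Orders array.
db.customers.updateOne(
  { CustomNr: NumberLong(1) },
  { $push: { Orders: { OrderNo: 3, Items: ["Bread", "Butter"] } } }
);

// Resulting document shape (abridged):
// { CustomNr: 1, LastName: "Smith",
//   Orders: [ { OrderNo: 1, Items: [...] },
//             { OrderNo: 2, Items: [...] },
//             { OrderNo: 3, Items: ["Bread", "Butter"] } ] }
```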
null | [
"aggregation"
] | [
{
"code": "db.users.aggregate([\n { \n $lookup: \n {\n from: \"prodcuts\", \n localField: \"Products.ProductID\", \n foreignField: \"_id\", \n as:\"resultTest\"\n }\n },\n {\n $match:\n {\n eq: { resultTest: [] }\n }\n }\n]\n",
"text": "Hi all I have a question that I would like to get your help with.I have users documents that have products array. Each product contains ProductID.\nIn addition, I have products documents.I want to find all the users that have products that don’t exist in products document.My query that doesn’t work:",
"username": "Mark_Rachman"
},
{
"code": "",
"text": "This is also being discussed at Redirecting...",
"username": "alexbevi"
},
{
"code": "db.users.aggregate([\n { \n $lookup: \n {\n from: \"products\", \n localField: \"Products.ProductID\",\n foreignField: \"_id\", \n as:\"resultTest\"\n }\n },\n {$addFields:{\n diff:{$setDifference:[ \"$Products.ProductID\",\"$resultTest._id\"]}\n }},\n {\n $match:\n {\n diff: [ ]\n }\n }\n]\n",
"text": "It’s strange that you’re discussing this on FB - not a place I think of for technical discussions. Anyway I happen to disagree with the solution that’s offered there.Try this aggregation:",
"username": "Asya_Kamsky"
}
] | Aggregation query | 2020-03-21T19:37:10.357Z | Aggregation query | 1,617 |
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "Modularity cannot be used in mongodb-shell, such as “import” or “require”",
"username": "masx200_masx200"
},
{
"code": "mongoload()mongorequire",
"text": "@masx200_masx200 Yes, that is expected. The current mongo shell provides a limited JavaScript environment for interaction with a MongoDB deployment, but is not have a complete scripting environment like Node.js. You can use load() to eval JavaScript from an external file into the current mongo session, but there is no equivalent for require or streaming I/O functions.For more advanced scripting, use a supported MongoDB driver such as the Node.js driver.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "thanks for your help",
"username": "masx200_masx200"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Modularity cannot be used in mongodb-shell, such as "import" or "require" | 2020-03-22T01:17:21.695Z | Modularity cannot be used in mongodb-shell, such as “import” or “require” | 1,800 |
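A tiny sketch of the load() workaround mentioned above; the file name and its contents are invented for illustration.

```js
// helpers.js: plain JavaScript, since require/import are not available in the mongo shell
function countByStatus(coll) {
  return coll.aggregate([{ $group: { _id: "$status", n: { $sum: 1 } } }]).toArray();
}

// In an interactive mongo session:
load("helpers.js");                   // evaluates the file into the current shell
printjson(countByStatus(db.orders)); // the function is now defined globally
```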
[] | [
{
"code": "",
"text": "Hi,We want to contribute to MongoDB, there are several members in our team, so we should sign this contributor agreement together? Or can we only sign this one by one?Contributor Agreement",
"username": "renhai_zhao"
},
{
"code": "",
"text": "Hi @renhai_zhao,I checked with our legal team and they confirmed:For each contribution, please have every person who has participated in the contribution sign the Contributor Agreement.The form is very short and allows us to easily verify pull requests against GitHub usernames that have signed the agreement.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | About the contributor agreement | 2020-03-10T08:55:54.394Z | About the contributor agreement | 2,717 |
|
null | [
"swift"
] | [
{
"code": " class Process: Object {\n\n @objc dynamic var processID:Int = 1 \n let steps = List<Step>()\n }\n\n class Step: Object {\n\n @objc private dynamic var stepCode: Int = 0 \n @objc dynamic var stepDateUTC: Date? = nil \n var stepType: ProcessStepType {\n get { return ProcessStepType(rawValue: stepCode) ?? .created }\n set { stepCode = newValue.rawValue }\n }\n\n }\n\n\n enum ProcessStepType: Int { // to review - real value\n case created = 0\n case scheduled = 1\n case processing = 2\n case paused = 4\n case finished = 5\n\n }\nlet processes = realm.objects(Process.self).filter(NSPredicate(format: \"ANY steps.stepCode = 3 AND NOT (ANY steps.stepCode = 5)\")\n\nlet ongoingprocesses = processes.filter(){$0.steps.sorted(byKeyPath: \"stepDateUTC\", ascending: false).first!.stepType == .processing}\nNSPredicate(format: \"steps[LAST].stepCode = \\(TicketStepType.processing.rawValue)\")\n",
"text": "I Have the following modelA process can start, processing , paused , resume (to be in step processing again), pause , resume again, etc. the current step is the one with the latest stepDateUTCI am trying to get all Processes, having for last step ,a step of stepType processing \"processing \", ie. where for the last stepDate, stepCode is 3 . I came with the following predicate… which doesn’t work. Any idea of the right perform to perform such query ?my best trial is the one. Is it possible to get to this result via one realm query .what I hoped would workI understand [LAST] is not supported by realm (as per the cheatsheet). but is there anyway around I could achieve my goal through a realm query?",
"username": "Raphael_sacle"
},
{
"code": "{$0.steps.sorted",
"text": "This is a cross post to Stack Overflow post Cannot get Realm result for objects filtered by the latest (nsdate) value of a property of a collection property swift (the example is clearer)It’s a good idea to keep posts in one place so they have focus. The answer is there is no ‘3’ stepCode - at least as defined in ProcessStepTypeNote that filtering with {$0.steps.sorted may have unintended results as that’s a swift sort, returns an array but ‘disconnects’ those objects from Realm because they are not in a Realm Results object. If you want the objects to remain connected to Realm, use realm sorting to return a Results object.Also, while realm doesn’t support [LAST] in the context you used, if you sort the results by timestamp the last item will be… the last item e.g. the most current.",
"username": "Jay"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm result - Last in collection | 2020-03-21T08:59:41.710Z | Realm result - Last in collection | 3,141 |
null | [] | [
{
"code": "",
"text": "Just my write-up on how to build out reporting analytics stack out of MongoDB for freehttps://www.holistics.io/blog/build-reporting-analytics-mongodb-using-holistics-for-free/Feel free to let me know if you have any question!",
"username": "Anthony_Thong_Do"
},
{
"code": "",
"text": "Thanks for sharing that!",
"username": "Brett_Donovan"
}
] | A Guide to Build Reporting Analytics on MongoDB for free | 2020-03-20T13:49:04.751Z | A Guide to Build Reporting Analytics on MongoDB for free | 1,527 |
null | [
"java"
] | [
{
"code": "$project: {\n \"FieldNameA\" : {\n $filter: {\n input: \"$FieldNameB\", as: \"item\",\n cond: { $in: [ \"{B}\", \"$$item.List\" ] }\n } \n }\n}\n",
"text": "Is the $filter aggregation operator supported by the java driver?I’ve looked in tutorials, stack overflow, and these two files: mongo-java-driver/Projections.java at master · mongodb/mongo-java-driver · GitHub and mongo-java-driver/Aggregates.java at master · mongodb/mongo-java-driver · GitHub , but haven’t found anything on it. I assumed it to be supported based on https://docs.mongodb.com/ecosystem/drivers/java/#mongodb-compatibility and the “version 3.2” note in the $filter docs. Any help is apprecciated.In particular, I’m looking to do",
"username": "Nick_12"
},
{
"code": "{\n \"_id\" : 1,\n \"FieldNameB\" : [\n {\n \"List\" : [ \"one\", \"two\", \"three\" ]\n },\n {\n \"List\" : [ \"five\", \"six\" ]\n }\n ]\n}\ndb.collection.aggregate([\n{\n $project: {\n \"FieldNameA\" : {\n $filter: {\n input: \"$FieldNameB\", as: \"item\",\n cond: { $in: [ \"six\", \"$$item.List\" ] }\n } \n }\n }\n}\n] )\nMongoClient mongoClient = MongoClients.create(\"mongodb://localhost/\");\nMongoDatabase database = mongoClient.getDatabase(\"test\");\nMongoCollection<Document> collection = database.getCollection(\"collection\");\nBson projection = project(\n\t\t\t\t\tcomputed(\n\t\t\t\t\t\t\"FieldNameA\", \n\t\t\t\t\t\t\teq(\"$filter\", \n\t\t\t\t\t\t\t\tand(\n\t\t\t\t\t\t\t\t\teq(\"input\", \"$FieldNameB\"), \n\t\t\t\t\t\t\t\t\teq(\"as\", \"item\"), \n\t\t\t\t\t\t\t\t\tin(\"cond\", Arrays.asList(\"six\", \"$$item.List\"))\n\t\t\t\t\t\t\t\t)\n\t\t\t\t\t\t\t)\n\t\t\t\t\t)\n);\n\nList<Bson> pipeline = Arrays.asList(projection);\t\t\t\t\t\t\t\t\t\nList<Document> results = new ArrayList<>();\ncollection.aggregate(pipeline).into(results); \t\nresults.forEach(doc -> System.out.println(doc.toJson()));\n{\n \"_id\" : 1,\n \"FieldNameA\" : [\n {\n \"List\" : [\n \"five\",\n \"six\"\n ]\n }\n ]\n}\n",
"text": "I am assuming an input document like this:And the aggregation based on your post (may not be exact, but similar) using MongoDB v4.2.3:The Java code using MongoDB Java Driver v3.12.2:The output:NOTE: You can also build the aggregation and export the pipeline to Java programming language; see Aggregation Pipeline Builder.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thank you!I never would have guessed it was possible like that. Out of sheer curiosity, could you explain how that works? Is it taking advantage of the way computed/eq/and are represented by the java driver to use a projection/aggregation not “natively” supported by the higher-level API?",
"username": "Nick_12"
},
{
"code": "",
"text": "Is it taking advantage of the way computed/eq/and are represented by the java driver to use a projection/aggregationYes, using the driver and its API calls.I am using MongoDB Java driver APIs to build the Java code to connect to the MongoDB database server, build and run the aggregation query (thats the only way, as I know, to access the MongoDB database from Java).In general, if you are running the aggregation from the mongo shell, there is a mongo shell’s associated driver. All client programs, like mongo shell or Java or Python programs/scripts, access MongoDB via a driver and are coded using relevant APIs.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "I wasn’t really asking that, though thank you for the answer, I was curious how you seem to be manually creating the bson query object using “eq” and “and” in the java code, specifically. I looked for how to do this a decent amount, and e.g the docs for projections for the java driver don’t even mention the $filter operator: ProjectionsI’d been looking for something like Projections.filter, but apparently since it’s not there it required manually building the bson query. If I understand correctly, the “eq” filter is being used/abused to represent arbitrary key-value pairs, and the “and” filter to represent arbitrary bson objects, allowing you to manually construct the query? Without using the higher level constructs from Aggregates.* or Projections.*Your solution worked perfectly, and if I understand the way you created the bson query manually using “eq” and “and” correctly I should be able to do the same for anything else I find missing in the “dumb” top-level api. Thank you!",
"username": "Nick_12"
},
{
"code": "$filter$project$filter{ \n $project: { \n details: {\n $filter: {\n input: \"$details\",\n as: \"d\",\n cond: { $eq: [ \"$$d.state\", \"Active\" ] }\n }\n } \n } \n}\nproject()\n .and(\n filter(\"details\")\n .as(\"d\")\n .by(Eq.valueOf(\"d.state\").equalToValue(\"Active\" )))\n .as(\"details\")\n",
"text": "I looked for how to do this a decent amount, and e.g the docs for projections for the java driver don’t even mention the $filter operator…There is no specific API for the $filter aggregation array operator; I haven’t seen one yet. If you are looking for such constructs the Spring MongoDB APIs have those; e.g using the ArrayOperators.Filter:The following aggregation stage with $project and $filter,translates to (in Java using Spring APIs):",
"username": "Prasad_Saya"
}
] | Java driver - $filter aggregation operator | 2020-03-19T16:31:43.236Z | Java driver - $filter aggregation operator | 7,139 |
null | [
"aggregation",
"stitch"
] | [
{
"code": "{ \n Date : xxx,\n userId : yyy,\n dataType : zzz\n value : aaa \n}\n{\n _id : zzz, (this will be unique)\n date : xxx, (this is a unique list for each _id)\n total : sum(aaa) (one number for each date)\n}\n result = collection.aggregate([\n { $match: { userId: { \"$in\" : userIds } } },\n { $group: { _id : \"$dataType\", \n date: { \"$first\": \"$Date\" }, \n total: { $sum: \"$value\" } \n } },\n { $sort : { _id : 1 } } \n ])\n{ _id: dataType1, date: date1, total: 10},\n{ _id: dataType1, date: date2, total: 20},\n{ _id: dataType2, date: date1, total: 3},\n{ _id: dataType2, date: date2, total: 77} etc...\n{ _id: dataType1, {date: date1, total: 10}, {date: date2, total: 20}},\n{ _id: dataType2, {date: date1, total: 3}, {date: date2, total: 77}} etc...\n",
"text": "Hello,I have a collection with documents that look like this:I am trying to run an aggregate query to output the following:Here’s my Stitch function’s aggregation code, the function receives the argument ‘userIds’:This currently outputs one date per dataType, but I’d like the results to look like this:or even like this…How can I get this working correctly?As a bonus, I’d also like the results to be sorted by date within each subgroup of dataType.",
"username": "Daniel_Gold"
},
{
"code": "dataTypeDatedb.collection.aggregate( [\n { \n $match: { userId: { \"$in\" : userIds } } \n },\n { \n $group: { \n _id: { \n \"dataType\": \"$dataType\", \n \"Date\": \"$Date\" \n }, \n total: { $sum: \"$value\" } \n } \n },\n { \n $project: { \n dataType: \"$_id.dataType\", \n Date: \"$_id.Date\", \n _id: 0, \n total: 1 \n } \n },\n { \n $sort: { \n dataType: 1, \n Date: 1 \n } \n },\n { \n $group: { \n _id: \"$dataType\", \n DateTotals: { $push: { Date: \"$Date\", total: \"$total\" } } \n } \n }\n] )\n$group",
"text": "The following aggregation does the grouping by dataType and Date fields and the summing.The output is similar to what you are expecting. Using the last $group stage (the one after the sort) is optional; depends upon how you want the output.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "That’s fantastic, thanks so much for your help!",
"username": "Daniel_Gold"
}
] | Aggregation into separate groups | 2020-03-19T23:16:30.161Z | Aggregation into separate groups | 1,678 |
null | [
"security"
] | [
{
"code": "{\n\t\"_id\" : \"admin.manosh\",\n\t\"userId\" : UUID(\"870d8587-17ac-48d4-b739-29cf56681551\"),\n\t\"user\" : \"manosh\",\n\t\"db\" : \"admin\",\n\t\"roles\" : [\n\t\t{\n\t\t\t\"role\" : \"root\",\n\t\t\t\"db\" : \"admin\"\n\t\t}\n\t],\n\t\"mechanisms\" : [\n\t\t\"SCRAM-SHA-1\",\n\t\t\"SCRAM-SHA-256\"\n\t]\n}\n",
"text": "Hi,For some simple testing, I am trying to drop the local database using the root role enabled user. but I am receiving an error, below I have shared logs.command>:\n1. use local\n2. db.dropDatabase()User:",
"username": "Manosh_Malai"
},
{
"code": "",
"text": "That is a system database. I don’t think you can drop it.",
"username": "chris"
}
] | MongoDB 4.2.3 trying to drop local database but failed | 2020-03-20T13:49:09.689Z | MongoDB 4.2.3 trying to drop local database but failed | 2,207 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi,How would one go about manually creating collections and documents from an existing realm? Or. How can I export from my current Realm and import the dataset into Atlas? How should I construct that code?I’m asking because I could get a head start in making a new Blazor server app for an existing realm cloud solution. The plan is to be ready with this app when Mongo is ready with MongoDB Realm at which point I would then be able to connect the Blazor app (with perhaps minor changes) to the realm database already in production.I have a broad idea how this is going to work. But how do I handle backlinks for example ? Are there any best practices, anything I need to be aware of?I should mention that I’m new to MongoDB.",
"username": "Void"
},
{
"code": "",
"text": "@Brian_Munkholm Could you extend fx. Realm studio with a Realm export that could be imported as an Atlas DB? Any tooling would be appreciated so one could begin working with MongoDB using the .NET Driver.",
"username": "Void"
},
{
"code": "",
"text": "Hi Tim,\nThere won’t be an export function in Studio (at least initially). But if you have a local Realm you would be able to sync that directly into MongoDB with the JS SDK.",
"username": "Brian_Munkholm"
},
{
"code": "",
"text": "How? Is there a blog, a gist, some doc explaining the concepts involved, a sample ?",
"username": "Void"
},
{
"code": "",
"text": "I’m afraid you will need to be a bit patient for now. We haven’t released the beta for syncing with MongoDB yet, and therefore also no documentation. Expect to see announcements on this at MongoDB World 2022 | June 7-9, 2022 | MongoDB",
"username": "Brian_Munkholm"
},
{
"code": "",
"text": "Øv Thanks Brian.",
"username": "Void"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Manually mapping Realm Objects to Documents | 2020-03-18T16:14:40.599Z | Manually mapping Realm Objects to Documents | 2,449 |
null | [
"charts",
"on-premises"
] | [
{
"code": "(TRANSPORT_ERROR): the request transport encountered an error communicating with Stitch: Network request failedhttps://<mydomain>/api/client/v2.0/app/<App-id>/location{\"deployment_model\":\"GLOBAL\",\"location\":\"US-VA\",\"hostname\":\"http://<mydomain>\"}",
"text": "Hello,I have installed MongoDB Charts on an ECS container and in front of that,\ni have a cloudfront that terminate the SSL connection an redirect traffic over HTTP on my container.I can access to the login page but as soon as i’m trying to login, I’m getting this error:(TRANSPORT_ERROR): the request transport encountered an error communicating with Stitch: Network request failedBy looking to queries send by browser, i can see that a kind of metadatas pre-fetch query on\nhttps://<mydomain>/api/client/v2.0/app/<App-id>/location is launched, but is gives back an hostname with http protocol over https:\n{\"deployment_model\":\"GLOBAL\",\"location\":\"US-VA\",\"hostname\":\"http://<mydomain>\"}Then the preflight (OPTIONS) query is launched but ober http, so the browser raise an “Mixed Content” errorCan someone help me to figure out why it returns this hostname ? and how to make it works ?PS: I need to terminate SSL connection ahead the container so i can’t configure HTTPS on MongoDB Charts web server with CHARTS_HTTPS_CERTIFICATE* variables.",
"username": "nicolas_miannay"
},
{
"code": "",
"text": "Hi Nicolas,Charts respects any of these headers to determine if it needs to make the hostname HTTPS:X-Forwarded-Proto: https\nFront-End-Https: on\nX-Forwarded-Protocol: https\nX-Forwarded-Ssl’: on\nX-Url-Scheme: httpsYou should be able to set these headers either by changing the Origin Protocol Policy, or in a pinch, via Origin Custom Headers.",
"username": "Nathan_Smyth"
},
{
"code": "",
"text": "Hi Nathan,I added X-Url-Scheme header to my Origin Custom Headers and it works, thank you so much !",
"username": "nicolas_miannay"
},
{
"code": "",
"text": "Hi\nBefore you can configure HTTPS for your MongoDB Charts web server, you must first obtain an SSL key and certificate from an appropriate certification authority. Instructions for obtaining an SSL key and providing a list of trusted certificate authorities",
"username": "samuel_otomewo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | HTTPS termination ahead of MongoDB charts 19.2.1 container | 2020-03-18T16:13:59.688Z | HTTPS termination ahead of MongoDB charts 19.2.1 container | 3,176 |
null | [
"java",
"production",
"scala"
] | [
{
"code": "",
"text": "The 4.0.1 MongoDB Java & JVM Drivers release is a patch to the 4.0.0 release and a recommended upgrade.The documentation hub includes extensive documentation of the 4.0 driver, includingWhat’s New: what’s new in the 4.0 driverUpgrading: upgrading from the 3.12.x driverInstallation MongoDB Driver: how to get the Java driver.Installation MongoDB Reactive Streams Driver: how to get the Reactive Streams driver.Installation MongoDB Scala Driver: how to get the Scala driver.and much more.You can find a full list of bug fixes here.MongoDB Java Driver documentation",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java & JVM Drivers 4.0.1 Released | 2020-03-20T09:43:37.711Z | MongoDB Java & JVM Drivers 4.0.1 Released | 3,501 |
null | [
"graphql"
] | [
{
"code": "",
"text": "Hello,\nit is possible to use mathematical functions in GraphQL-Queries?\nFor example i want to do something like this:where sin is the sinus-function.best regardsVolkhard Vogeler",
"username": "Volkhard_Vogeler"
},
{
"code": "",
"text": "@Volkhard_Vogeler Math functions are not currently part of Realm’s GraphQL API. Supported queries are based on objects & relationships. For more details, see How to use the Realm GraphQL API: Queries.However, your examples look like fields that could be pre-calculated and included in the saved object (which would also be more efficient to query).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hello Stennie,\nthanks for this informations.\ni know - the sample is a liitle bit too simple - it is just for clarifying, what i wanted to do (e.g. Radius-Search)best regardsVolkhard",
"username": "Volkhard_Vogeler"
},
{
"code": "",
"text": "i know - the sample is a liitle bit too simple - it is just for clarifying, what i wanted to do (e.g. Radius-Search)Thanks for the further info. As a workaround you could perhaps use a geocoding system like geohash. There are several implementations of this including libraries for generating covering geohashes for radius searches (for example: ProximityHash).You can also create a feature suggestion with more details on your use case at the MongoDB Feedback site so others can upvote & watch the suggestion for updates.Regards,\nStennie",
"username": "Stennie_X"
}
] | GraphQL: Use of mathematical functions | 2020-03-17T18:02:11.439Z | GraphQL: Use of mathematical functions | 3,672 |
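A sketch of the "pre-calculate at write time" idea suggested above (the collection and field names are invented): store the derived value alongside the raw one so it can be filtered or sorted through the GraphQL API.

```js
const angle = 0.52;                    // example input value
db.measurements.insertOne({
  angle: angle,
  sinAngle: Math.sin(angle)            // pre-computed, so queries read this field instead of computing sin() at read time
});
```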
null | [
"golang",
"transactions",
"change-streams"
] | [
{
"code": "",
"text": "I have a server that requires data integrity that is enforced through the use of multi-document transactions (which requires Primary ReadPreference), but also serves many Changestreams across multiple different collections (SecondaryPreferred would be the preferred ReadConcern here).What would be the best practice to support this? It seems you can’t set a read preference on ChangeStreams, which would have been the easiest way. It seems the only way to set a ReadPreference is client-wide with the official MongoDB golang driver.Would maintaining two clients, one with ReadPref Primary and one with ReadPref SecondaryPreferred, be the best way to handle this? Or does Mongo have a more elegant way of handling this?Thanks for any suggestions",
"username": "Brian_McQueen"
},
{
"code": "Client.WatchDatabase.WatchCollection.WatchDatabaseCollectionsecondary := readpref.Secondary()\ndbOpts := options.Database().SetReadPreference(secondary)\ndb := client.Database(\"dbname\", dbOpts)\n",
"text": "Are you creating the change streams through Client.Watch, Database.Watch, or Collection.Watch? If it’s Database or Collection, you can make a database or collection object configured with secondary read preference:",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "You’re absolutely correct; I facepalmed when I realized it shortly after posting.The biggest benefit I got out of realizing it is how to further hone my repository pattern to pass Collections instead of collection names. This gets me more customizable ReadPreferences, which I need because not every call will be a Transaction, but certainly some will.Thanks for your help!",
"username": "Brian_McQueen"
}
] | Best practice for multiple read concerns? Transactions and multiple changestreams with golang | 2020-03-18T20:40:22.862Z | Best practice for multiple read concerns? Transactions and multiple changestreams with golang | 4,769 |
null | [] | [
{
"code": "",
"text": "I asked this on the ubuntu one forum but have not had an answer yet, so hoping someone can answer asap on here.Hi,I need to setup an apt mirror for mongodb on a DMZ server.I have read lots of posts that discuss keys, but they are all installing mongodb on the server itself, not actually creating a mongdb mirror that other servers can connect to, to do an apt update/apt upgrade.The repo server as I call it has apt mirror for ubuntu 18.04.\nI have also installed the keys for mongodb 18.04 on this repo server and can see two keys for mongodb when I execute apt-key list.Whenever I try to apt update on another remote server pointed to this repo server, when it comes to mongodb, apt say InRelease is not signed.Thanks for any help given.",
"username": "Michael_Love"
},
{
"code": "",
"text": "MongoDB may not allow ‘mirroring’ per se.I’ve used a couple of projects before as a caching apt server. Apt-cacher-ng is one example.",
"username": "chris"
}
] | How do I setup a MongoDB APT mirror - not a MongoDB mirror | 2020-03-19T10:52:34.667Z | How do I setup a MongoDB APT mirror - not a MongoDB mirror | 1,971 |
null | [
"java"
] | [
{
"code": "",
"text": "Our application is still being put together and as we have more pieces added we’ve started seeing the total number of connections to MongoDB spike when certain pods are restarted. We’ve traced it down to how MongoClient handles the connection pool. For our app, to meet SLAs, we need a lot of available connections in the pool for brief periods of time so they are set to a min of 10, a max of 100, and a timeout of 10 min. Our app uses multiple instances of MongoClient, one to access each logical collection of DBs/Collections for security reasons. On app startup all of these instances are created thus each creates its pool which starts with 100 connections. 10 minutes after startup we see the connection go down dramatically when the first prune of the pools are done. With the current state of the app there are 8 components (and it will be growing a lot) using an average of 13 instances of MongoClient so that means at system startup we’re creating over 10,000 connections to MongoDB. We need a ways to better control connection creation at startup, like only create the min in each pool, or timeout the extra connections very quickly. We need a simple solution, like config changes or simple code changes since we don’t have time to do any redesign or reimplementation at this point. Our SEs are insisting we have one MongoDB configuration for the entire system. We’re arguing to tune each service independently regarding min/max/timeout values but I doubt we’ll win. If we truly have to have high max pool values and many minute of timeout are there any ways we can better control the creation of connections at startup time?",
"username": "Rick_Poole"
},
{
"code": "MongoClient",
"text": "Welcome @Rick_Poole,Please confirm the specific driver & version you are using. The MongoClient class name is a standard convention used by several drivers.Also, what sort of deployment do you have: standalone, replica set, or sharded cluster?Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "We are using the Java driver version 3.11 and a replica set.",
"username": "Rick_Poole"
},
{
"code": "",
"text": "See this recent StackOverflow post on Managing Mongodb connections in Java as Object Oriented.There is useful information related to connection pool settings (or configuration), application of these settings on MongoClient within Java code accessing MongoDB server.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "That is exactly what we are already doing…we use a shared MongoClient for each db/collection that requires a unique userid/password for security reasons. Plus our application is spread out over a lot of Pods. We cannot change the security requirements so we have to keep the separate MongoClients for each unique userid/password required. The problem stems from the fact that when a MongoClient creates the underlying connection pool it always creates the maximum number of connections. Multiply this by the number of MongoClients we need across all the Pods that are starting and you get well over 10,000 connections at system startup just with the parts we have running so far. As the rest of the system comes online that number would easily triple or more.",
"username": "Rick_Poole"
}
] | Control number of connections on startup | 2020-03-11T18:48:51.490Z | Control number of connections on startup | 3,037 |
null | [
"installation"
] | [
{
"code": "echo “deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 multiverse” | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list\nsudo apt-get update\nE: Encountered a section with no Package: header\nE: Problem with MergeList /var/lib/apt/lists/repo.mongodb.org_apt_ubuntu_dists_xenial_mongodb-org_4.0_multiverse_binary-amd64_Packages\nE: The package lists or status file could not be parsed or opened.\n",
"text": "Hello,The install packages for ubuntu are broken (at least for Xenial), we cannot deploy it on our servers.The issue in the tracker was closed, but still wasn’t fixed:\nhttps://jira.mongodb.org/browse/SERVER-46938running:produces",
"username": "Boris_Polishuk"
},
{
"code": "",
"text": "I’m getting the same thing on a debian repo at http://repo.mongodb.org/apt/debian/dists/stretch/mongodb-org/4.0/main/binary-amd64/PackagesIt seems to be the last line: “dpkg-scanpackages: info: Wrote 90 entries to output Packages file.”",
"username": "Byron_Murgatroyd"
}
] | Broken package header for MongoDB 4.0 on Ubuntu 16.04 Xenial | 2020-03-19T09:04:22.041Z | Broken package header for MongoDB 4.0 on Ubuntu 16.04 Xenial | 1,409 |
null | [
"ruby"
] | [
{
"code": "\"2020-02-06T04:51:42.112+09:00\" Read retry due to: Mongo::Error::SocketError Errno::ECONNREFUSED: Connection refused - connect(2) for x.x.x.x:27017 (for x.x.x.x:27017 (server-00-01-ismg4.mongodb.net:27017, TLS)) (on server-00-01-ismg4.mongodb.net:27017, modern retry, attempt 1) (on server-00-01-ismg4.mongodb.net:27017, modern retry, attempt 1)\"\n\"2020-02-06T04:51:42.387+09:00\" Mongo::Error::SocketError (Errno::ECONNREFUSED: Connection refused - connect(2) for x.x.x.x:27017 (for x.x.x.x:27017 (server-00-01-ismg4.mongodb.net:27017, TLS)) (on server-00-01-ismg4.mongodb.net:27017, modern retry, attempt 2) (on server-00-01-ismg4.mongodb.net:27017, modern retry, attempt 2)):\"\n",
"text": "We operate 5 replicas on MongoDB Atlas. Set retryWrites = true in Ruby client connection string.\nThe problem is that when I shut down the primary member (for example, maintenance), I expect the read retry to be started (the read has failed once) and the driver wants to wait for the primary election, the driver will retry without waiting.I want the driver to wait for the selected primary, how do I solve it?mongo-ruby-driver: 2.11.2\nmongoid: 6.4.4\nMongoDB Atlas: 3.6",
"username": "Kazuhiro_Shibuya"
},
{
"code": "",
"text": "What is the exception and stack trace produced by the driver?",
"username": "Oleg_Pudeyev"
},
{
"code": "server_selection_timeoutErrno::ECONNREFUSED: Connection refused - connect(2) for x.x.x.x:27017\n from mongo/socket/ssl.rb:49:in `connect'\n from mongo/socket/ssl.rb:49:in `block (2 levels) in connect!'\n from mongo/socket.rb:315:in `handle_errors'\n from mongo/socket/ssl.rb:49:in `block in connect!'\n from timeout.rb:93:in `block in timeout'\n from timeout.rb:103:in `timeout'\n from mongo/socket/ssl.rb:48:in `connect!'\n from mongo/socket/ssl.rb:81:in `initialize'\n from mongo/address/ipv4.rb:91:in `new'\n from mongo/address/ipv4.rb:91:in `socket'\n from mongo/address.rb:205:in `block in create_resolver'\n from mongo/address.rb:202:in `each'\n from mongo/address.rb:202:in `create_resolver'\n from mongo/address.rb:162:in `socket'\n from mongo/server/connection.rb:190:in `do_connect'\n from mongo/server/connection.rb:177:in `connect!'\n from mongo/server/connection_pool.rb:731:in `connect_connection'\n from mongo/server/connection_pool.rb:357:in `check_out'\n from mongo/server/connection_pool.rb:556:in `with_connection'\n from mongo/server.rb:417:in `with_connection'\n from mongo/operation/shared/executable.rb:55:in `dispatch_message'\n from mongo/operation/shared/executable.rb:50:in `get_result'\n from mongo/operation/shared/executable.rb:29:in `block (3 levels) in do_execute'\n from mongo/operation/shared/response_handling.rb:87:in `add_server_diagnostics'\n from mongo/operation/shared/executable.rb:28:in `block (2 levels) in do_execute'\n from mongo/operation/shared/response_handling.rb:43:in `add_error_labels'\n from mongo/operation/shared/executable.rb:27:in `block in do_execute'\n from mongo/operation/shared/response_handling.rb:73:in `unpin_maybe'\n from mongo/operation/shared/executable.rb:26:in `do_execute'\n from mongo/operation/shared/executable.rb:38:in `execute'\n from mongo/operation/shared/op_msg_or_find_command.rb:28:in `execute'\n from mongo/collection/view/iterable.rb:89:in `send_initial_query'\n from mongo/collection/view/iterable.rb:46:in `block in each'\n from mongo/retryable.rb:61:in `block in read_with_retry_cursor'\n from mongo/retryable.rb:316:in `modern_read_with_retry'\n from mongo/retryable.rb:117:in `read_with_retry'\n from mongo/retryable.rb:60:in `read_with_retry_cursor'\n from mongo/collection/view/iterable.rb:45:in `each'\n from mongoid/query_cache.rb:222:in `each'\n from mongoid/contextual/mongo.rb:258:in `first'\n from mongoid/contextual/mongo.rb:258:in `block in first'\n from mongoid/contextual/mongo.rb:531:in `try_cache'\n from mongoid/contextual/mongo.rb:256:in `first'\n from mongoid/contextual.rb:20:in `first'\n from mongoid/relations/builders/referenced/one.rb:20:in `build'\n from mongoid/relations/accessors.rb:42:in `create_relation'\n from mongoid/relations/accessors.rb:25:in `__build__'\n from mongoid/relations/accessors.rb:103:in `block (2 levels) in get_relation'\n from mongoid/threaded/lifecycle.rb:130:in `_loading'\n from mongoid/relations/accessors.rb:99:in `block in get_relation'\n from mongoid/threaded/lifecycle.rb:89:in `_building'\n from mongoid/relations/accessors.rb:98:in `get_relation'\n from mongoid/relations/accessors.rb:186:in `block in getter'\n from app/models/store.rb:1208:in `ship_setting_statuses'\n from mongoid/relations/proxy.rb:121:in `method_missing'\n from app/decorators/store_decorator.rb:334:in `meta_data'\n from app/views/layouts/store.html.erb:58:in `_app_views_layouts_store_html_erb___1904580886405372244_152195780'\n from action_view/template.rb:159:in `block in render'\n from 
active_support/notifications.rb:170:in `instrument'\n from action_view/template.rb:354:in `instrument_render_template'\n from action_view/template.rb:157:in `render'\n from action_view/renderer/template_renderer.rb:66:in `render_with_layout'\n from action_view/renderer/template_renderer.rb:52:in `render_template'\n from action_view/renderer/template_renderer.rb:16:in `render'\n from action_view/renderer/renderer.rb:44:in `render_template'\n from action_view/renderer/renderer.rb:25:in `render'\n from action_view/rendering.rb:103:in `_render_template'\n from action_controller/metal/streaming.rb:219:in `_render_template'\n from action_view/rendering.rb:84:in `render_to_body'\n from action_controller/metal/rendering.rb:52:in `render_to_body'\n from action_controller/metal/renderers.rb:142:in `render_to_body'\n from abstract_controller/rendering.rb:25:in `render'\n from action_controller/metal/rendering.rb:36:in `render'\n from action_controller/metal/instrumentation.rb:46:in `block (2 levels) in render'\n from active_support/core_ext/benchmark.rb:14:in `block in ms'\n from benchmark.rb:308:in `realtime'\n from active_support/core_ext/benchmark.rb:14:in `ms'\n from action_controller/metal/instrumentation.rb:46:in `block in render'\n from action_controller/metal/instrumentation.rb:87:in `cleanup_view_runtime'\n from active_record/railties/controller_runtime.rb:36:in `cleanup_view_runtime'\n from mongoid/railties/controller_runtime.rb:25:in `cleanup_view_runtime'\n from action_controller/metal/instrumentation.rb:45:in `render'\n from wicked_pdf/pdf_helper.rb:46:in `call'\n from wicked_pdf/pdf_helper.rb:46:in `render_with_wicked_pdf'\n from wicked_pdf/pdf_helper.rb:30:in `render'\n from app/controllers/application_controller.rb:288:in `store_render'\n from app/controllers/news_controller.rb:32:in `index_page'\n from action_controller/metal/basic_implicit_render.rb:6:in `send_action'\n from abstract_controller/base.rb:194:in `process_action'\n from ddtrace/contrib/action_pack/action_controller/instrumentation.rb:114:in `process_action'\n from action_controller/metal/rendering.rb:30:in `process_action'\n from abstract_controller/callbacks.rb:42:in `block in process_action'\n from active_support/callbacks.rb:109:in `block in run_callbacks'\n from raven/integrations/rails/controller_transaction.rb:7:in `block in included'\n from active_support/callbacks.rb:118:in `instance_exec'\n from active_support/callbacks.rb:118:in `block in run_callbacks'\n from active_support/callbacks.rb:136:in `run_callbacks'\n from abstract_controller/callbacks.rb:41:in `process_action'\n from action_controller/metal/rescue.rb:22:in `process_action'\n from action_controller/metal/instrumentation.rb:34:in `block in process_action'\n from active_support/notifications.rb:168:in `block in instrument'\n from active_support/notifications/instrumenter.rb:23:in `instrument'\n from active_support/notifications.rb:168:in `instrument'\n from action_controller/metal/instrumentation.rb:32:in `process_action'\n from action_controller/metal/params_wrapper.rb:256:in `process_action'\n from active_record/railties/controller_runtime.rb:24:in `process_action'\n from mongoid/railties/controller_runtime.rb:19:in `process_action'\n from abstract_controller/base.rb:134:in `process'\n from action_view/rendering.rb:32:in `process'\n from action_controller/metal.rb:191:in `dispatch'\n from action_controller/metal.rb:252:in `dispatch'\n from action_dispatch/routing/route_set.rb:52:in `dispatch'\n from action_dispatch/routing/route_set.rb:34:in 
`serve'\n from action_dispatch/routing/mapper.rb:18:in `block in <class:Constraints>'\n from action_dispatch/routing/mapper.rb:48:in `serve'\n from action_dispatch/journey/router.rb:52:in `block in serve'\n from action_dispatch/journey/router.rb:35:in `each'\n from action_dispatch/journey/router.rb:35:in `serve'\n from action_dispatch/routing/route_set.rb:840:in `call'\n from warden/manager.rb:36:in `block in call'\n from warden/manager.rb:35:in `catch'\n from warden/manager.rb:35:in `call'\n from omniauth/strategy.rb:192:in `call!'\n from omniauth/strategy.rb:169:in `call'\n from omniauth/strategy.rb:192:in `call!'\n from omniauth/strategy.rb:169:in `call'\n from omniauth/builder.rb:64:in `call'\n from config/initializers/session_store.rb:37:in `call'\n from rack/attack.rb:182:in `call'\n from rack/tempfile_reaper.rb:15:in `call'\n from rack/etag.rb:25:in `call'\n from rack/conditional_get.rb:25:in `call'\n from rack/head.rb:12:in `call'\n from action_dispatch/http/content_security_policy.rb:18:in `call'\n from rack/session/abstract/id.rb:259:in `context'\n from rack/session/abstract/id.rb:253:in `call'\n from action_dispatch/middleware/cookies.rb:670:in `call'\n from action_dispatch/middleware/callbacks.rb:28:in `block in call'\n from active_support/callbacks.rb:98:in `run_callbacks'\n from action_dispatch/middleware/callbacks.rb:26:in `call'\n from action_dispatch/middleware/debug_exceptions.rb:61:in `call'\n from ddtrace/contrib/rails/middlewares.rb:17:in `call'\n from action_dispatch/middleware/show_exceptions.rb:33:in `call'\n from lograge/rails_ext/rack/logger.rb:15:in `call_app'\n from rails/rack/logger.rb:26:in `block in call'\n from active_support/tagged_logging.rb:71:in `block in tagged'\n from active_support/tagged_logging.rb:28:in `tagged'\n from active_support/tagged_logging.rb:71:in `tagged'\n from rails/rack/logger.rb:26:in `call'\n from action_dispatch/middleware/remote_ip.rb:81:in `call'\n from request_store/middleware.rb:19:in `call'\n from action_dispatch/middleware/request_id.rb:27:in `call'\n from rack/method_override.rb:22:in `call'\n from rack/runtime.rb:22:in `call'\n from action_dispatch/middleware/executor.rb:14:in `call'\n from action_dispatch/middleware/static.rb:127:in `call'\n from rack/sendfile.rb:111:in `call'\n from rack/utf8_sanitizer.rb:22:in `call'\n from raven/integrations/rack.rb:51:in `call'\n from ddtrace/contrib/rack/middlewares.rb:85:in `call'\n from rails/engine.rb:524:in `call'\n from unicorn/http_server.rb:576:in `process_client'\n from unicorn/worker_killer.rb:52:in `process_client'\n from unicorn/http_server.rb:670:in `worker_loop'\n from unicorn/http_server.rb:525:in `spawn_missing_workers'\n from unicorn/http_server.rb:536:in `maintain_worker_count'\n from unicorn/http_server.rb:294:in `join'\n from /var/www/xxx/shared/bundle/ruby/2.5.0/gems/unicorn-4.8.3/bin/unicorn_rails:209:in `<top (required)>'\n from /var/www/xxx/shared/bundle/ruby/2.5.0/bin/unicorn_rails:23:in `load'\n from /var/www/xxx/shared/bundle/ruby/2.5.0/bin/unicorn_rails:23:in `<main>'\nMongo::Error::SocketError: Errno::ECONNREFUSED: Connection refused - connect(2) for x.x.x.x:27017 (for x.x.x.x:27017 (server-shard-00-02-ismg4.mongodb.net:27017, TLS)) (on server-shard-00-02-ismg4.mongodb.net:27017, modern retry, attempt 1)\n from mongo/socket.rb:319:in `rescue in handle_errors'\n from mongo/socket.rb:313:in `handle_errors'\n from mongo/socket/ssl.rb:49:in `block in connect!'\n from timeout.rb:93:in `block in timeout'\n from timeout.rb:103:in `timeout'\n from 
mongo/socket/ssl.rb:48:in `connect!'\n from mongo/socket/ssl.rb:81:in `initialize'\n from mongo/address/ipv4.rb:91:in `new'\n from mongo/address/ipv4.rb:91:in `socket'\n from mongo/address.rb:205:in `block in create_resolver'\n from mongo/address.rb:202:in `each'\n from mongo/address.rb:202:in `create_resolver'\n from mongo/address.rb:162:in `socket'\n from mongo/server/connection.rb:190:in `do_connect'\n from mongo/server/connection.rb:177:in `connect!'\n from mongo/server/connection_pool.rb:731:in `connect_connection'\n from mongo/server/connection_pool.rb:357:in `check_out'\n from mongo/server/connection_pool.rb:556:in `with_connection'\n from mongo/server.rb:417:in `with_connection'\n from mongo/operation/shared/executable.rb:55:in `dispatch_message'\n from mongo/operation/shared/executable.rb:50:in `get_result'\n from mongo/operation/shared/executable.rb:29:in `block (3 levels) in do_execute'\n from mongo/operation/shared/response_handling.rb:87:in `add_server_diagnostics'\n from mongo/operation/shared/executable.rb:28:in `block (2 levels) in do_execute'\n from mongo/operation/shared/response_handling.rb:43:in `add_error_labels'\n from mongo/operation/shared/executable.rb:27:in `block in do_execute'\n from mongo/operation/shared/response_handling.rb:73:in `unpin_maybe'\n from mongo/operation/shared/executable.rb:26:in `do_execute'\n from mongo/operation/shared/executable.rb:38:in `execute'\n from mongo/operation/shared/op_msg_or_find_command.rb:28:in `execute'\n from mongo/collection/view/iterable.rb:89:in `send_initial_query'\n from mongo/collection/view/iterable.rb:46:in `block in each'\n from mongo/retryable.rb:61:in `block in read_with_retry_cursor'\n from mongo/retryable.rb:316:in `modern_read_with_retry'\n from mongo/retryable.rb:117:in `read_with_retry'\n from mongo/retryable.rb:60:in `read_with_retry_cursor'\n from mongo/collection/view/iterable.rb:45:in `each'\n from mongoid/query_cache.rb:222:in `each'\n from mongoid/contextual/mongo.rb:258:in `first'\n from mongoid/contextual/mongo.rb:258:in `block in first'\n from mongoid/contextual/mongo.rb:531:in `try_cache'\n from mongoid/contextual/mongo.rb:256:in `first'\n from mongoid/contextual.rb:20:in `first'\n from mongoid/relations/builders/referenced/one.rb:20:in `build'\n from mongoid/relations/accessors.rb:42:in `create_relation'\n from mongoid/relations/accessors.rb:25:in `__build__'\n from mongoid/relations/accessors.rb:103:in `block (2 levels) in get_relation'\n from mongoid/threaded/lifecycle.rb:130:in `_loading'\n from mongoid/relations/accessors.rb:99:in `block in get_relation'\n from mongoid/threaded/lifecycle.rb:89:in `_building'\n from mongoid/relations/accessors.rb:98:in `get_relation'\n from mongoid/relations/accessors.rb:186:in `block in getter'\n from app/models/store.rb:1208:in `ship_setting_statuses'\n from mongoid/relations/proxy.rb:121:in `method_missing'\n from app/decorators/store_decorator.rb:334:in `meta_data'\n from app/views/layouts/store.html.erb:58:in `_app_views_layouts_store_html_erb___1904580886405372244_152195780'\n from action_view/template.rb:159:in `block in render'\n from active_support/notifications.rb:170:in `instrument'\n from action_view/template.rb:354:in `instrument_render_template'\n from action_view/template.rb:157:in `render'\n from action_view/renderer/template_renderer.rb:66:in `render_with_layout'\n from action_view/renderer/template_renderer.rb:52:in `render_template'\n from action_view/renderer/template_renderer.rb:16:in `render'\n from action_view/renderer/renderer.rb:44:in 
`render_template'\n from action_view/renderer/renderer.rb:25:in `render'\n from action_view/rendering.rb:103:in `_render_template'\n from action_controller/metal/streaming.rb:219:in `_render_template'\n from action_view/rendering.rb:84:in `render_to_body'\n from action_controller/metal/rendering.rb:52:in `render_to_body'\n from action_controller/metal/renderers.rb:142:in `render_to_body'\n from abstract_controller/rendering.rb:25:in `render'\n from action_controller/metal/rendering.rb:36:in `render'\n from action_controller/metal/instrumentation.rb:46:in `block (2 levels) in render'\n from active_support/core_ext/benchmark.rb:14:in `block in ms'\n from benchmark.rb:308:in `realtime'\n from active_support/core_ext/benchmark.rb:14:in `ms'\n from action_controller/metal/instrumentation.rb:46:in `block in render'\n from action_controller/metal/instrumentation.rb:87:in `cleanup_view_runtime'\n from active_record/railties/controller_runtime.rb:36:in `cleanup_view_runtime'\n from mongoid/railties/controller_runtime.rb:25:in `cleanup_view_runtime'\n from action_controller/metal/instrumentation.rb:45:in `render'\n from wicked_pdf/pdf_helper.rb:46:in `call'\n from wicked_pdf/pdf_helper.rb:46:in `render_with_wicked_pdf'\n from wicked_pdf/pdf_helper.rb:30:in `render'\n from app/controllers/application_controller.rb:288:in `store_render'\n from app/controllers/news_controller.rb:32:in `index_page'\n from action_controller/metal/basic_implicit_render.rb:6:in `send_action'\n from abstract_controller/base.rb:194:in `process_action'\n from ddtrace/contrib/action_pack/action_controller/instrumentation.rb:114:in `process_action'\n from action_controller/metal/rendering.rb:30:in `process_action'\n from abstract_controller/callbacks.rb:42:in `block in process_action'\n from active_support/callbacks.rb:109:in `block in run_callbacks'\n from raven/integrations/rails/controller_transaction.rb:7:in `block in included'\n from active_support/callbacks.rb:118:in `instance_exec'\n from active_support/callbacks.rb:118:in `block in run_callbacks'\n from active_support/callbacks.rb:136:in `run_callbacks'\n from abstract_controller/callbacks.rb:41:in `process_action'\n from action_controller/metal/rescue.rb:22:in `process_action'\n from action_controller/metal/instrumentation.rb:34:in `block in process_action'\n from active_support/notifications.rb:168:in `block in instrument'\n from active_support/notifications/instrumenter.rb:23:in `instrument'\n from active_support/notifications.rb:168:in `instrument'\n from action_controller/metal/instrumentation.rb:32:in `process_action'\n from action_controller/metal/params_wrapper.rb:256:in `process_action'\n from active_record/railties/controller_runtime.rb:24:in `process_action'\n from mongoid/railties/controller_runtime.rb:19:in `process_action'\n from abstract_controller/base.rb:134:in `process'\n from action_view/rendering.rb:32:in `process'\n from action_controller/metal.rb:191:in `dispatch'\n from action_controller/metal.rb:252:in `dispatch'\n from action_dispatch/routing/route_set.rb:52:in `dispatch'\n from action_dispatch/routing/route_set.rb:34:in `serve'\n from action_dispatch/routing/mapper.rb:18:in `block in <class:Constraints>'\n from action_dispatch/routing/mapper.rb:48:in `serve'\n from action_dispatch/journey/router.rb:52:in `block in serve'\n from action_dispatch/journey/router.rb:35:in `each'\n from action_dispatch/journey/router.rb:35:in `serve'\n from action_dispatch/routing/route_set.rb:840:in `call'\n from warden/manager.rb:36:in `block in call'\n from 
warden/manager.rb:35:in `catch'\n from warden/manager.rb:35:in `call'\n from omniauth/strategy.rb:192:in `call!'\n from omniauth/strategy.rb:169:in `call'\n from omniauth/strategy.rb:192:in `call!'\n from omniauth/strategy.rb:169:in `call'\n from omniauth/builder.rb:64:in `call'\n from config/initializers/session_store.rb:37:in `call'\n from rack/attack.rb:182:in `call'\n from rack/tempfile_reaper.rb:15:in `call'\n from rack/etag.rb:25:in `call'\n from rack/conditional_get.rb:25:in `call'\n from rack/head.rb:12:in `call'\n from action_dispatch/http/content_security_policy.rb:18:in `call'\n from rack/session/abstract/id.rb:259:in `context'\n from rack/session/abstract/id.rb:253:in `call'\n from action_dispatch/middleware/cookies.rb:670:in `call'\n from action_dispatch/middleware/callbacks.rb:28:in `block in call'\n from active_support/callbacks.rb:98:in `run_callbacks'\n from action_dispatch/middleware/callbacks.rb:26:in `call'\n from action_dispatch/middleware/debug_exceptions.rb:61:in `call'\n from ddtrace/contrib/rails/middlewares.rb:17:in `call'\n from action_dispatch/middleware/show_exceptions.rb:33:in `call'\n from lograge/rails_ext/rack/logger.rb:15:in `call_app'\n from rails/rack/logger.rb:26:in `block in call'\n from active_support/tagged_logging.rb:71:in `block in tagged'\n from active_support/tagged_logging.rb:28:in `tagged'\n from active_support/tagged_logging.rb:71:in `tagged'\n from rails/rack/logger.rb:26:in `call'\n from action_dispatch/middleware/remote_ip.rb:81:in `call'\n from request_store/middleware.rb:19:in `call'\n from action_dispatch/middleware/request_id.rb:27:in `call'\n from rack/method_override.rb:22:in `call'\n from rack/runtime.rb:22:in `call'\n from action_dispatch/middleware/executor.rb:14:in `call'\n from action_dispatch/middleware/static.rb:127:in `call'\n from rack/sendfile.rb:111:in `call'\n from rack/utf8_sanitizer.rb:22:in `call'\n from raven/integrations/rack.rb:51:in `call'\n from ddtrace/contrib/rack/middlewares.rb:85:in `call'\n from rails/engine.rb:524:in `call'\n from unicorn/http_server.rb:576:in `process_client'\n from unicorn/worker_killer.rb:52:in `process_client'\n from unicorn/http_server.rb:670:in `worker_loop'\n from unicorn/http_server.rb:525:in `spawn_missing_workers'\n from unicorn/http_server.rb:536:in `maintain_worker_count'\n from unicorn/http_server.rb:294:in `join'\n from /var/www/xxx/shared/bundle/ruby/2.5.0/gems/unicorn-4.8.3/bin/unicorn_rails:209:in `<top (required)>'\n from /var/www/xxx/shared/bundle/ruby/2.5.0/bin/unicorn_rails:23:in `load'\n from /var/www/xxx/shared/bundle/ruby/2.5.0/bin/unicorn_rails:23:in `<main>'\nMongo::Error::OperationFailure: interrupted at shutdown (11600) (on server-shard-00-01-ismg4.mongodb.net:27017, modern retry, attempt 2)\n from mongo/operation/result.rb:297:in `raise_operation_failure'\n from mongo/operation/result.rb:268:in `validate!'\n from mongo/operation/shared/response_handling.rb:29:in `block (3 levels) in validate_result'\n from mongo/operation/shared/response_handling.rb:87:in `add_server_diagnostics'\n from mongo/operation/shared/response_handling.rb:28:in `block (2 levels) in validate_result'\n from mongo/operation/shared/response_handling.rb:43:in `add_error_labels'\n from mongo/operation/shared/response_handling.rb:27:in `block in validate_result'\n from mongo/operation/shared/response_handling.rb:73:in `unpin_maybe'\n from mongo/operation/shared/response_handling.rb:26:in `validate_result'\n from mongo/operation/shared/executable.rb:39:in `block in execute'\n from 
mongo/operation/shared/executable.rb:38:in `tap'\n from mongo/operation/shared/executable.rb:38:in `execute'\n from mongo/operation/shared/op_msg_or_find_command.rb:28:in `execute'\n from mongo/collection/view/iterable.rb:89:in `send_initial_query'\n from mongo/collection/view/iterable.rb:46:in `block in each'\n from mongo/retryable.rb:61:in `block in read_with_retry_cursor'\n from mongo/retryable.rb:392:in `retry_read'\n from mongo/retryable.rb:323:in `rescue in modern_read_with_retry'\n from mongo/retryable.rb:315:in `modern_read_with_retry'\n from mongo/retryable.rb:117:in `read_with_retry'\n from mongo/retryable.rb:60:in `read_with_retry_cursor'\n from mongo/collection/view/iterable.rb:45:in `each'\n from mongoid/query_cache.rb:222:in `each'\n from mongoid/contextual/mongo.rb:258:in `first'\n from mongoid/contextual/mongo.rb:258:in `block in first'\n from mongoid/contextual/mongo.rb:531:in `try_cache'\n from mongoid/contextual/mongo.rb:256:in `first'\n from mongoid/contextual.rb:20:in `first'\n from mongoid/relations/builders/referenced/one.rb:20:in `build'\n from mongoid/relations/accessors.rb:42:in `create_relation'\n from mongoid/relations/accessors.rb:25:in `__build__'\n from mongoid/relations/accessors.rb:103:in `block (2 levels) in get_relation'\n from mongoid/threaded/lifecycle.rb:130:in `_loading'\n from mongoid/relations/accessors.rb:99:in `block in get_relation'\n from mongoid/threaded/lifecycle.rb:89:in `_building'\n from mongoid/relations/accessors.rb:98:in `get_relation'\n from mongoid/relations/accessors.rb:186:in `block in getter'\n from app/models/store.rb:1208:in `ship_setting_statuses'\n from mongoid/relations/proxy.rb:121:in `method_missing'\n from app/decorators/store_decorator.rb:334:in `meta_data'\n from app/views/layouts/store.html.erb:58:in `_app_views_layouts_store_html_erb___1904580886405372244_152195780'\n from action_view/template.rb:159:in `block in render'\n from active_support/notifications.rb:170:in `instrument'\n from action_view/template.rb:354:in `instrument_render_template'\n from action_view/template.rb:157:in `render'\n from action_view/renderer/template_renderer.rb:66:in `render_with_layout'\n from action_view/renderer/template_renderer.rb:52:in `render_template'\n from action_view/renderer/template_renderer.rb:16:in `render'\n from action_view/renderer/renderer.rb:44:in `render_template'\n from action_view/renderer/renderer.rb:25:in `render'\n from action_view/rendering.rb:103:in `_render_template'\n from action_controller/metal/streaming.rb:219:in `_render_template'\n from action_view/rendering.rb:84:in `render_to_body'\n from action_controller/metal/rendering.rb:52:in `render_to_body'\n from action_controller/metal/renderers.rb:142:in `render_to_body'\n from abstract_controller/rendering.rb:25:in `render'\n from action_controller/metal/rendering.rb:36:in `render'\n from action_controller/metal/instrumentation.rb:46:in `block (2 levels) in render'\n from active_support/core_ext/benchmark.rb:14:in `block in ms'\n from benchmark.rb:308:in `realtime'\n from active_support/core_ext/benchmark.rb:14:in `ms'\n from action_controller/metal/instrumentation.rb:46:in `block in render'\n from action_controller/metal/instrumentation.rb:87:in `cleanup_view_runtime'\n from active_record/railties/controller_runtime.rb:36:in `cleanup_view_runtime'\n from mongoid/railties/controller_runtime.rb:25:in `cleanup_view_runtime'\n from action_controller/metal/instrumentation.rb:45:in `render'\n from wicked_pdf/pdf_helper.rb:46:in `call'\n from 
wicked_pdf/pdf_helper.rb:46:in `render_with_wicked_pdf'\n from wicked_pdf/pdf_helper.rb:30:in `render'\n from app/controllers/application_controller.rb:288:in `store_render'\n from app/controllers/news_controller.rb:32:in `index_page'\n from action_controller/metal/basic_implicit_render.rb:6:in `send_action'\n from abstract_controller/base.rb:194:in `process_action'\n from ddtrace/contrib/action_pack/action_controller/instrumentation.rb:114:in `process_action'\n from action_controller/metal/rendering.rb:30:in `process_action'\n from abstract_controller/callbacks.rb:42:in `block in process_action'\n from active_support/callbacks.rb:109:in `block in run_callbacks'\n from raven/integrations/rails/controller_transaction.rb:7:in `block in included'\n from active_support/callbacks.rb:118:in `instance_exec'\n from active_support/callbacks.rb:118:in `block in run_callbacks'\n from active_support/callbacks.rb:136:in `run_callbacks'\n from abstract_controller/callbacks.rb:41:in `process_action'\n from action_controller/metal/rescue.rb:22:in `process_action'\n from action_controller/metal/instrumentation.rb:34:in `block in process_action'\n from active_support/notifications.rb:168:in `block in instrument'\n from active_support/notifications/instrumenter.rb:23:in `instrument'\n from active_support/notifications.rb:168:in `instrument'\n from action_controller/metal/instrumentation.rb:32:in `process_action'\n from action_controller/metal/params_wrapper.rb:256:in `process_action'\n from active_record/railties/controller_runtime.rb:24:in `process_action'\n from mongoid/railties/controller_runtime.rb:19:in `process_action'\n from abstract_controller/base.rb:134:in `process'\n from action_view/rendering.rb:32:in `process'\n from action_controller/metal.rb:191:in `dispatch'\n from action_controller/metal.rb:252:in `dispatch'\n from action_dispatch/routing/route_set.rb:52:in `dispatch'\n from action_dispatch/routing/route_set.rb:34:in `serve'\n from action_dispatch/routing/mapper.rb:18:in `block in <class:Constraints>'\n from action_dispatch/routing/mapper.rb:48:in `serve'\n from action_dispatch/journey/router.rb:52:in `block in serve'\n from action_dispatch/journey/router.rb:35:in `each'\n from action_dispatch/journey/router.rb:35:in `serve'\n from action_dispatch/routing/route_set.rb:840:in `call'\n from warden/manager.rb:36:in `block in call'\n from warden/manager.rb:35:in `catch'\n from warden/manager.rb:35:in `call'\n from omniauth/strategy.rb:192:in `call!'\n from omniauth/strategy.rb:169:in `call'\n from omniauth/strategy.rb:192:in `call!'\n from omniauth/strategy.rb:169:in `call'\n from omniauth/builder.rb:64:in `call'\n from config/initializers/session_store.rb:37:in `call'\n from rack/attack.rb:182:in `call'\n from rack/tempfile_reaper.rb:15:in `call'\n from rack/etag.rb:25:in `call'\n from rack/conditional_get.rb:25:in `call'\n from rack/head.rb:12:in `call'\n from action_dispatch/http/content_security_policy.rb:18:in `call'\n from rack/session/abstract/id.rb:259:in `context'\n from rack/session/abstract/id.rb:253:in `call'\n from action_dispatch/middleware/cookies.rb:670:in `call'\n from action_dispatch/middleware/callbacks.rb:28:in `block in call'\n from active_support/callbacks.rb:98:in `run_callbacks'\n from action_dispatch/middleware/callbacks.rb:26:in `call'\n from action_dispatch/middleware/debug_exceptions.rb:61:in `call'\n from ddtrace/contrib/rails/middlewares.rb:17:in `call'\n from action_dispatch/middleware/show_exceptions.rb:33:in `call'\n from lograge/rails_ext/rack/logger.rb:15:in 
`call_app'\n from rails/rack/logger.rb:26:in `block in call'\n from active_support/tagged_logging.rb:71:in `block in tagged'\n from active_support/tagged_logging.rb:28:in `tagged'\n from active_support/tagged_logging.rb:71:in `tagged'\n from rails/rack/logger.rb:26:in `call'\n from action_dispatch/middleware/remote_ip.rb:81:in `call'\n from request_store/middleware.rb:19:in `call'\n from action_dispatch/middleware/request_id.rb:27:in `call'\n from rack/method_override.rb:22:in `call'\n from rack/runtime.rb:22:in `call'\n from action_dispatch/middleware/executor.rb:14:in `call'\n from action_dispatch/middleware/static.rb:127:in `call'\n from rack/sendfile.rb:111:in `call'\n from rack/utf8_sanitizer.rb:22:in `call'\n from raven/integrations/rack.rb:51:in `call'\n from ddtrace/contrib/rack/middlewares.rb:85:in `call'\n from rails/engine.rb:524:in `call'\n from unicorn/http_server.rb:576:in `process_client'\n from unicorn/worker_killer.rb:52:in `process_client'\n from unicorn/http_server.rb:670:in `worker_loop'\n from unicorn/http_server.rb:525:in `spawn_missing_workers'\n from unicorn/http_server.rb:536:in `maintain_worker_count'\n from unicorn/http_server.rb:294:in `join'\n from /var/www/xxx/shared/bundle/ruby/2.5.0/gems/unicorn-4.8.3/bin/unicorn_rails:209:in `<top (required)>'\n from /var/www/xxx/shared/bundle/ruby/2.5.0/bin/unicorn_rails:23:in `load'\n from /var/www/xxx/shared/bundle/ruby/2.5.0/bin/unicorn_rails:23:in `<main>'\n",
"text": "Thank you @Oleg_Pudeyev.\nI think it would wait for server_selection_timeout if it failed the first time, but it doesn’t.",
"username": "Kazuhiro_Shibuya"
},
{
"code": "",
"text": "As you’re using Mongoid with Unicorn can you verify that you’ve followed the Usage with Forking Servers guidance?",
"username": "alexbevi"
},
{
"code": "",
"text": "Thank you @alexbevi.What you taught may be correct.\nMaybe the parent connection remains.\nI’ll give it a try.",
"username": "Kazuhiro_Shibuya"
},
{
"code": "",
"text": "I suspect this is a driver issue and am investigating.",
"username": "Oleg_Pudeyev"
},
{
"code": "2020-03-19T05:08:05.194+09:00 [22269ef8-27d5-4e0a-8f2a-115c29d8a0a8] Read retry due to: Mongo::Error::SocketError Errno::ECONNRESET: Connection reset by peer - SSL_connect (for x.x.x.x:27017 (server-shard-00-03-ismg4.mongodb.net:27017, TLS)) (on server-shard-00-03-ismg4.mongodb.net:27017, modern retry, attempt 1) (on server-shard-00-03-ismg4.mongodb.net:27017, modern retry, attempt 1)\n2020-03-19T05:08:05.208+09:00 MONGODB | Populator failed to connect a connection for server-shard-00-03-ismg4.mongodb.net:27017: Mongo::Error::SocketError: Errno::ECONNREFUSED: Connection refused - connect(2) for x.x.x.x:27017 (for x.x.x.x:27017 (server-shard-00-03-ismg4.mongodb.net:27017, TLS)).\n2020-03-19T05:08:05.587+09:00 [22269ef8-27d5-4e0a-8f2a-115c29d8a0a8] Read retry due to: Mongo::Error::SocketError Errno::ECONNREFUSED: Connection refused - connect(2) for x.x.x.x:27017 (for x.x.x.x:27017 (server-shard-00-03-ismg4.mongodb.net:27017, TLS)) (on server-shard-00-03-ismg4.mongodb.net:27017, modern retry, attempt 1) (on server-shard-00-03-ismg4.mongodb.net:27017, modern retry, attempt 1)\n2020-03-19T05:08:05.608+09:00 MONGODB | Populator failed to connect a connection for server-shard-00-03-ismg4.mongodb.net:27017: Mongo::Error::SocketError: Errno::ECONNREFUSED: Connection refused - connect(2) for x.x.x.x:27017 (for x.x.x.x:27017 (server-shard-00-03-ismg4.mongodb.net:27017, TLS)).\n2020-03-19T05:08:05.803+09:00 [22269ef8-27d5-4e0a-8f2a-115c29d8a0a8] Read retry due to: Mongo::Error::SocketError Errno::ECONNREFUSED: Connection refused - connect(2) for x.x.x.x:27017 (for x.x.x.x:27017 (prod-mongodb-shard-00-03-ismg4.mongodb.net:27017, TLS)) (on server-shard-00-03-ismg4.mongodb.net:27017, modern retry, attempt 1) (on server-shard-00-03-ismg4.mongodb.net:27017, modern retry, attempt 1)\n2020-03-19T05:08:05.809+09:00 MONGODB | Populator failed to connect a connection for server-shard-00-03-ismg4.mongodb.net:27017: Mongo::Error::SocketError: Errno::ECONNREFUSED: Connection refused - connect(2) for x.x.x.x:27017 (for x.x.x.x:27017 (server-shard-00-03-ismg4.mongodb.net:27017, TLS)). It will retry.\n2020-03-19T05:08:05.825+09:00 MONGODB | Populator failed to connect a connection for server-shard-00-03-ismg4.mongodb.net:27017: Mongo::Error::SocketError: Errno::ECONNREFUSED: Connection refused - connect(2) for x.x.x.x:27017 (for x.x.x.x:27017 (server-shard-00-03-ismg4.mongodb.net:27017, TLS)). It will retry.\n2020-03-19T05:08:05.971+09:00 [22269ef8-27d5-4e0a-8f2a-115c29d8a0a8] ActionView::Template::Error (Errno::ECONNREFUSED: Connection refused - connect(2) for x.x.x.x:27017 (for x.x.x.x:27017 (server-shard-00-03-ismg4.mongodb.net:27017, TLS)) (on server-shard-00-03-ismg4.mongodb.net:27017, modern retry, attempt 2) (on server-shard-00-03-ismg4.mongodb.net:27017, modern retry, attempt 2)):\n",
"text": "@alexbevi @Oleg_PudeyevI set up and maintained according to the Usage with Forking Servers, but it doesn’t seem to wait for server_selection_timeout period to retry as before.Is there anything else that could be the cause?",
"username": "Kazuhiro_Shibuya"
}
] | Why does the Ruby driver not wait for server selection during failover? | 2020-02-18T16:41:10.256Z | Why does the Ruby driver not wait for server selection during failover? | 6,588 |
null | [
"containers",
"installation"
] | [
{
"code": "E: Encountered a section with no Package: header\nE: Problem with MergeList \nhttp://repo.mongodb.org/apt/debian stretch/mongodb-org/4.0/main amd64Step 14/29 : RUN apt-get update && apt-get install -y gnupg && rm -rf /var/lib/apt/lists/*\n ---> Using cache\n ---> 35aa643018af\nStep 15/29 : RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4\n ---> Using cache\n ---> ef7827fd065f\nStep 16/29 : RUN echo \"deb http://repo.mongodb.org/apt/debian stretch/mongodb-org/4.0 main\" | tee /etc/apt/sources.list.d/mongodb-org-4.0.list\n ---> Using cache\n ---> 993cfc0ff19e\nStep 17/29 : RUN rm -rf /var/lib/apt/lists/* && apt-get clean && apt-get update && apt-get install -y libpcre++-dev libc6 libssl1.1 libssl-dev ca-certificates openssl apt-transport-https make gzip mongodb-org-tools git && rm -rf /var/lib/apt/lists/*\n ---> Running in 0ec47fbdb9e5\nIgn:1 http://deb.debian.org/debian stretch InRelease\nGet:2 http://security.debian.org/debian-security stretch/updates InRelease [94.3 kB]\nIgn:3 http://repo.mongodb.org/apt/debian stretch/mongodb-org/4.0 InRelease\nGet:4 http://deb.debian.org/debian stretch-updates InRelease [91.0 kB]\nGet:5 http://deb.debian.org/debian stretch Release [118 kB]\nGet:6 http://deb.debian.org/debian stretch Release.gpg [2410 B]\nGet:7 http://security.debian.org/debian-security stretch/updates/main amd64 Packages [520 kB]\nGet:8 http://repo.mongodb.org/apt/debian stretch/mongodb-org/4.0 Release [1492 B]\nGet:9 http://repo.mongodb.org/apt/debian stretch/mongodb-org/4.0 Release.gpg [801 B]\nGet:10 http://deb.debian.org/debian stretch-updates/main amd64 Packages [27.9 kB]\nGet:11 http://deb.debian.org/debian stretch/main amd64 Packages [7083 kB]\nGet:12 http://repo.mongodb.org/apt/debian stretch/mongodb-org/4.0/main amd64 Packages [12.4 kB]\nFetched 7951 kB in 3s (2127 kB/s)\nReading package lists...\nE: Encountered a section with no Package: header\nE: Problem with MergeList /var/lib/apt/lists/repo.mongodb.org_apt_debian_dists_stretch_mongodb-org_4.0_main_binary-amd64_Packages.lz4\nE: The package lists or status file could not be parsed or opened.\nThe command '/bin/sh -c rm -rf /var/lib/apt/lists/* && apt-get clean && apt-get update && apt-get install -y libpcre++-dev libc6 libssl1.1 libssl-dev ca-certificates openssl apt-transport-https make gzip mongodb-org-tools git && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100\nmake[1]: *** [test-container] Error 100\nmake: *** [test-in-container] Error 2\n",
"text": "We build docker images using a debian/golang base. Today, quite suddenly, we started seeing an error:Nothing changed on our end, just suddenly stopped working. If I remove mongo from the apt commands (e.g. don’t echo the package header and call update), then the build works again.\nIt looks like something is broken with http://repo.mongodb.org/apt/debian stretch/mongodb-org/4.0/main amd64Relevant docker build commands:",
"username": "TopherGopher"
},
{
"code": "",
"text": "Hi @TopherGopher, it looks like there were changes. The following path might help you find what you’re looking for: Index of binary-amd64. I unfortunately don’t have time to test to make sure it works however. ",
"username": "Doug_Duncan"
},
{
"code": "ADD http://repo.mongodb.org/apt/debian/dists/stretch/mongodb-org/4.0/main/binary-amd64/mongodb-org-tools_4.0.17_amd64.deb /tmp/mongodb-org-tools.deb\nRUN dpkg -i /tmp/mongodb-org-tools.deb\nhttp://repo.mongodb.org/apt/debian/dists/stretch/mongodb-org/4.0/main/binary-amd64/mongodb-org-tools_4.0.17_amd64.deb",
"text": "Cool - so for anyone who’s having issues, I added this to my Dockerfile:You can just curl/wget http://repo.mongodb.org/apt/debian/dists/stretch/mongodb-org/4.0/main/binary-amd64/mongodb-org-tools_4.0.17_amd64.deb (that’s what ADD is)I don’t know what’s up with the actual package cache, but hopefully someone is on it.",
"username": "TopherGopher"
},
{
"code": "",
"text": "Nothing changed on our end, just suddenly stopped working.@TopherGopher There was a packaging problem with the latest 4.0.x release for Debian, but this should be resolved now.Relevant tracking issue: SERVER-46938: mongodb-org debian repo has malformed Packages file.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Seems that they have close the Issue in the tracker\nbut the problem still exists\nrunning:\necho “deb [ arch=amd64,arm64 ] MongoDB Repositories xenial/mongodb-org/4.0 multiverse” | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list\nsudo apt-get update\nproduces\nE: Encountered a section with no Package: header\nE: Problem with MergeList /var/lib/apt/lists/repo.mongodb.org_apt_ubuntu_dists_xenial_mongodb-org_4.0_multiverse_binary-amd64_Packages\nE: The package lists or status file could not be parsed or opened.",
"username": "Boris_Polishuk"
}
] | Debian install issue using apt repo: no Package: header | 2020-03-18T16:45:26.758Z | Debian install issue using apt repo: no Package: header | 3,607 |
null | [] | [
{
"code": "",
"text": "Hello! I was adding a bunch of slack workspaces in my local slack client on a new computer, and I added the “developer.mongodb.com/community/forums” workspace. I now want to remove it since it is read only and no longer active, but I am not able to…I was able to leave a different workspace by clicking:Profile & Account → Account Settings → Deactivate My AccountHowever, when I click the “Account Settings” butotn in Mongo community slack nothing happens…Is there any way for me to remove this workspace from my local slack client? Thanks!",
"username": "Jim_Lynch"
},
{
"code": "",
"text": "Welcome @Jim_Lynch!,You should be able to remove an unused Slack workspace from your desktop Slack client by Deactivating your account for that workspace or right clicking on the workspace icon in Slack and choosing “Sign out”.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie!After quiting Slack and restarting it I was able to remove the old workspace. Thanks! ",
"username": "Jim_Lynch"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can't leave the old Slack workspace | 2020-03-19T01:34:43.289Z | Can’t leave the old Slack workspace | 6,630 |
null | [
"cxx"
] | [
{
"code": " mongocxx::instance inst{};\n mongocxx::client conn{mongocxx::uri{}};\n auto collection = conn[\"simpleemdb\"][\"priceCurves\"];\n auto cursor = collection.find({});\n \n for (auto&& doc : cursor) {\ncurvename[i] = doc.getField(\"curve name\")\n",
"text": "I am trying to extract data from MongoDB using the C++ driver.I am able to successfully run a find query and can print the json string to the terminal, however I cannot figure out how to extract the data from the cursor. Ideally I don’t want to convert the data to a json string, and then parse the string, as that sounds like an unnecessary round trip. (is that assumption true?)Is there a way to directly access the column values from the cursor, without needing to create a json string.My code is below:At this point i would like to extract the document values.\nFor example i have a field in this collection called “curve name”,I would like to do something likeIs that possible, or what options do I have. Any help with this would be greatly appreciated.Please note performance is a large concern, so the most efficient method would be appreciated.",
"username": "arif_saeed"
},
{
"code": "auto cursor = collection.find({});\nfor (auto&& doc : cursor) {\n bsoncxx::document::element curvename = doc[\"curveName\"];\n std::cout << \"curve name: \" << curvename.get_utf8().value << std::endl;\n}\ncurveNamemongocxx::options::find opts;\nopts.projection(make_document(kvp(\"curveName\", 1)));\nauto cursor = collection.find({}, opts);\ncurveName#include <bsoncxx/builder/basic/kvp.hpp>\n#include <bsoncxx/builder/basic/document.hpp>\n\nusing bsoncxx::builder::basic::make_document;\nusing bsoncxx::builder::basic::kvp;\n\nmongocxx::pipeline p{};\np.group(make_document(\n kvp(\"_id\", NULL),\n kvp(\"curveNames\", make_document(kvp(\"$addToSet\", \"$curveName\")))\n));\nauto cursor = collection.aggregate(p, mongocxx::options::aggregate{});\n",
"text": "Hi @arif_saeed,Is that possible, or what options do I have.The cursor iterates though document::view, so you can retrieve a document::element from the view using the operator[]() member of view. For example:Please note that by default MongoDB queries return all fields in matching documents. To limit the amount of data returned to the application you can specify projection to restrict fields to return. See Project Fields to Return from Query for more information. For example in C++ to project just curveName field:I would like to do something likeLooking at the example line that you would like to do, I assumed that you’re trying to retrieve curve names from the collection and store into an array. If so, you can try using aggregation pipeline. For example, to retrieve all curveName field values from the collection into an array with no duplicate:See also $push operator, which adds array field without removing duplicates.Regards,\nWan.",
"username": "wan"
}
] | How to extract data from c++ driver cursor | 2020-03-17T01:47:20.023Z | How to extract data from c++ driver cursor | 6,178 |
[
"server"
] | [
{
"code": "cat /opt/foo/conf/SB/SB11.pid 2> /dev/nullcat /opt/foo/conf/SB/SB31.pid 2> /dev/null",
"text": "Hi all,I made a script to rotate the logs every day, but my log files remain open after the rotation (see attachment).Even if I restart the mongod processes, this problem will appear in the next few days.Is there a problem with my script? Or the \"kill -SIGUSR1 \" command cannot be used when MongoDB is balancing?Any suggestions or root cause guessing?Thanks~05 00 * * * /opt/log_bk.sh#!/bin/sh\n/bin/kill -SIGUSR1 cat /opt/foo/conf/SB/SB11.pid 2> /dev/null 2> /dev/null || true\nmv /opt/foo/logs/SB/SB11.log.* /opt/foo/logsbk/SB\ncompress /opt/foo/logsbk/SB/SB11.log.*/bin/kill -SIGUSR1 cat /opt/foo/conf/SB/SB31.pid 2> /dev/null 2> /dev/null || true\nmv /opt/foo/logs/SB/SB31.log.* /opt/foo/logsbk/SB\ncompress /opt/foo/logsbk/SB/SB31.log.*find /opt/foo/logsbk/SB/ -type f -name “*.Z” -mtime +7 -exec rm -rf {} ;\nlsof deleted 11296×332 263 KB\n \nlsof deleted 21165×623 1.03 MB\n",
"username": "Seth"
},
{
"code": "# /etc/logrotate.d/mongod\n/var/log/mongodb/mongod.log {\n\trotate 36500\n\tcompress\n\tdaily\n\tpostrotate\n\t\tsystemctl kill -s USR1 mongod\n\tendscript\n}\n\n",
"text": "You need to SIGKILL after moving the file. Don’t roll your own just use logrotatehere is my file for logrotate.",
"username": "chris"
},
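One related setting that may be worth checking when pairing mongod with logrotate like this: mongod's own rotation mode. The default systemLog.logRotate value is rename; when an external tool renames the file and SIGUSR1 is only meant to make mongod reopen its log path, the reopen mode (which requires logAppend: true) is the usual companion. A sketch of the relevant mongod.conf excerpt, with the stock packaged paths as an assumption:

```
# /etc/mongod.conf (relevant excerpt; paths assume the default package layout)
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true
  logRotate: reopen
```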
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB server keeps deleted log files open after rotation | 2020-03-18T05:54:21.257Z | MongoDB server keeps deleted log files open after rotation | 4,006 |
|
null | [
"queries",
"performance"
] | [
{
"code": "Models.Pass.countDocuments({ show: showId }).maxTimeMS(17000)\n{\n \"command\": {\n \"aggregate\": \"passes\",\n \"pipeline\": [\n {\n \"$match\": {\n \"show\": {\n \"$oid\": \"5d8021bd1c4eef00086d4fb6\"\n }\n }\n },\n {\n \"$group\": {\n \"_id\": 1,\n \"n\": {\n \"$sum\": 1\n }\n }\n }\n ],\n \"cursor\": {},\n \"lsid\": {\n \"id\": {\n \"$uuid\": \"c54a4dc4-d595-fe6d-16b9-8517594f2da1\"\n }\n },\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1584299634,\n \"i\": 255\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": \"PHh4eHh4eD4=\",\n \"$type\": 0\n },\n \"keyId\": 6778896889304580000\n }\n },\n \"$db\": \"main\"\n },\n \"planSummary\": [\n {\n \"COUNT_SCAN\": {\n \"show\": 1\n }\n }\n ],\n \"numYields\": 1836,\n \"queryHash\": \"97FA1A2E\",\n \"planCacheKey\": \"D86294E6\",\n \"ok\": 0,\n \"errMsg\": \"Error in $cursor stage :: caused by :: operation was interrupted because a client disconnected\",\n \"errName\": \"ClientDisconnect\",\n \"errCode\": 279,\n \"reslen\": 311,\n \"locks\": {\n \"ReplicationStateTransition\": {\n \"acquireCount\": {\n \"w\": 1838\n }\n },\n \"Global\": {\n \"acquireCount\": {\n \"r\": 1838\n }\n },\n \"Database\": {\n \"acquireCount\": {\n \"r\": 1837\n }\n },\n \"Collection\": {\n \"acquireCount\": {\n \"r\": 1837\n }\n },\n \"Mutex\": {\n \"acquireCount\": {\n \"r\": 2\n }\n }\n },\n \"protocol\": \"op_msg\",\n \"millis\": 17594\n}\n",
"text": "Hello! We are having some performance issues with our countDocuments() query as it is taking > 17 seconds (our client-side timeout is 17s) for most queries to complete execution during our load testing. This is only happening during load testing and the query returns in ~300ms when the system is not under load. This query, in particular, is the only one that has performance issues as I can compare it to the others from the Profiler within Atlas. The collection itself has 680k documents and we are using Mongoose within our node.js app.I am wondering if there are any best practices for optimizing the countDocuments() operation?Here is the the log document from the Profiler:",
"username": "Jason_Mattiace"
},
{
"code": "showexplaincountDocuments$match$countfinddb.collection.find( { some_field: some_criteria } ).count()some_filed",
"text": "I see that there is an index on the query filter field show. The query planner does show a COUNT_SCAN stage when the index is used. I ran a similar query (including a filter) with over a million small documents. As such you cannot run explain on the countDocuments method in mongo shell. So, I ran an aggregation with $match and $count stages (and I believe its operation is same as countDocuments)., so that I can see the query plan output. This timed approximately 500ms (running on MongoDB 4.2 on an older PC).When an index is not there, the query plan shows a COLLSCAN and the time is about 1000ms.I think the performance might be something to do with the ‘load’ only. Does this operation has different semantics than other read methods like find (I don’t know)?db.collection.find( { some_field: some_criteria } ).count()Note there was an index on the some_filed.",
"username": "Prasad_Saya"
},
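A sketch of the approach described above, in the mongo shell, using the collection name and filter value taken from the profiler output earlier in the thread. The index likely already exists (the plan shows COUNT_SCAN), so createIndex is included only for completeness; explain() confirms whether the count is served from the { show: 1 } index:

```
db.passes.createIndex( { show: 1 } )

// countDocuments() runs an equivalent $match + $group aggregation under the hood
db.passes.explain("executionStats").aggregate( [
    { $match: { show: ObjectId("5d8021bd1c4eef00086d4fb6") } },
    { $group: { _id: 1, n: { $sum: 1 } } }
] )
```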
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | How to optimize countDocuments() | 2020-03-16T00:28:11.095Z | How to optimize countDocuments() | 12,165 |
null | [
"aggregation",
"data-modeling"
] | [
{
"code": " db.artists.insertMany([\n { \"_id\" : 1, \"last_name\" : \"Bernard\", \"first_name\" : \"Emil\", \"year_born\" : 1868, \"year_died\" : 1941, \"nationality\" : \"France\" },\n { \"_id\" : 2, \"last_name\" : \"Rippl-Ronai\", \"first_name\" : \"Joszef\", \"year_born\" : 1861, \"year_died\" : 1927, \"nationality\" : \"Hungary\" },\n { \"_id\" : 3, \"last_name\" : \"Ostroumova\", \"first_name\" : \"Anna\", \"year_born\" : 1871, \"year_died\" : 1955, \"nationality\" : \"Russia\" },\n { \"_id\" : 4, \"last_name\" : \"Van Gogh\", \"first_name\" : \"Vincent\", \"year_born\" : 1853, \"year_died\" : 1890, \"nationality\" : \"Holland\" },\n { \"_id\" : 5, \"last_name\" : \"Maurer\", \"first_name\" : \"Alfred\", \"year_born\" : 1868, \"year_died\" : 1932, \"nationality\" : \"USA\" },\n { \"_id\" : 6, \"last_name\" : \"Munch\", \"first_name\" : \"Edvard\", \"year_born\" : 1863, \"year_died\" : 1944, \"nationality\" : \"Norway\" },\n { \"_id\" : 7, \"last_name\" : \"Redon\", \"first_name\" : \"Odilon\", \"year_born\" : 1840, \"year_died\" : 1916, \"nationality\" : \"France\" },\n { \"_id\" : 8, \"last_name\" : \"Diriks\", \"first_name\" : \"Edvard\", \"year_born\" : 1855, \"year_died\" : 1930, \"nationality\" : \"Norway\" }\n ])\n db.artists.aggregate( [\n // First Stage\n {\n $bucket: {\n groupBy: \"$year_born\", // Field to group by\n boundaries: [ 1840, 1850, 1860, 1870, 1880 ], // Boundaries for the buckets\n default: \"Other\", // Bucket id for documents which do not fall into a bucket\n output: { // Output for each bucket\n \"count\": { $sum: 1 },\n \"artists\" :\n {\n $push: {\n \"name\": { $concat: [ \"$first_name\", \" \", \"$last_name\"] },\n \"year_born\": \"$year_born\"\n }\n }\n }\n }\n }\n ] )\n /* 1 */\n {\n \"_id\" : 1840.0,\n \"count\" : 1.0,\n \"artists\" : [ \n {\n \"F_name\" : \"Odilon\",\n \"L_name\" : \"Redon\",\n \"year_born\" : 1840.0,\n \"year_died\" : 1916.0,\n \"nationality\" : \"France\"\n }\n ]\n }\n\n /* 2 */\n {\n \"_id\" : 1850.0,\n \"count\" : 2.0,\n \"artists\" : [ \n {\n \"F_name\" : \"Vincent\",\n \"L_name\" : \"Van Gogh\",\n \"year_born\" : 1853.0,\n \"year_died\" : 1890.0,\n \"nationality\" : \"Holland\"\n }, \n {\n \"F_name\" : \"Edvard\",\n \"L_name\" : \"Diriks\",\n \"year_born\" : 1855.0,\n \"year_died\" : 1930.0,\n \"nationality\" : \"Norway\"\n }\n ]\n }\n\n /* 3 */\n {\n \"_id\" : \"other\",\n \"count\" : 5.0,\n \"artists\" : [ \n {\n \"F_name\" : \"Emil\",\n \"L_name\" : \"Bernard\",\n \"year_born\" : 1868.0,\n \"year_died\" : 1941.0,\n \"nationality\" : \"France\"\n }, \n {\n \"F_name\" : \"Joszef\",\n \"L_name\" : \"Rippl-Ronai\",\n \"year_born\" : 1861.0,\n \"year_died\" : 1927.0,\n \"nationality\" : \"Hungary\"\n }, \n {\n \"F_name\" : \"Anna\",\n \"L_name\" : \"Ostroumova\",\n \"year_born\" : 1871.0,\n \"year_died\" : 1955.0,\n \"nationality\" : \"Russia\"\n }, \n {\n \"F_name\" : \"Alfred\",\n \"L_name\" : \"Maurer\",\n \"year_born\" : 1868.0,\n \"year_died\" : 1932.0,\n \"nationality\" : \"USA\"\n }, \n {\n \"F_name\" : \"Edvard\",\n \"L_name\" : \"Munch\",\n \"year_born\" : 1863.0,\n \"year_died\" : 1944.0,\n \"nationality\" : \"Norway\"\n }\n ]\n }\n",
"text": "Hi, i was going through from the mognodb documentation and the topic was bucket.\ni tried the given example which was as followed:and query on that data.and here is the output of the above query:1- My question is that the artists array is grouped by year_born and data isn’t in ascending order or descending order?\n2- Can we set limit of documents to be placed in one bucket.\n3- Can we add number of buckets while creating create statement. is this is possible or not?\nso that when we insert a document it will be placed/moved to the matched condition bucket.",
"username": "Nabeel_Raza"
},
{
"code": "artistsboundariesgroupBy",
"text": "1- My question is that the artists array is grouped by year_born and data isn’t in ascending order or descending order?artists is not an array; it is a collection. “…is grouped by year_born and data isn’t in ascending order or descending order” - correct.2- Can we set limit of documents to be placed in one bucket.Actually you cannot specify the number of documents, or use some kind of limit expression. The boundaries field specifies the limits (upper and lowerbound) for each bucket based on the groupBy expression.3- Can we add number of buckets while creating create statement. is this is possible or not? so that when we insert a document it will be placed/moved to the matched condition bucket.You mean while creating the collection? It is not clear what you mean by “create statement”.",
"username": "Prasad_Saya"
},
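On question 3: buckets cannot be attached to a collection at creation time, but the related $bucketAuto stage does let you request a fixed number of buckets at query time and lets the server pick the boundaries. A minimal sketch against the same artists collection (the bucket count of 4 is arbitrary):

```
db.artists.aggregate( [
    {
        $bucketAuto: {
            groupBy: "$year_born",   // field to group on
            buckets: 4,              // number of buckets, chosen up front
            output: { count: { $sum: 1 } }
        }
    }
] )
```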
{
"code": "",
"text": "Check the attachment. year_born isn’t in any order.\n",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "When we create the collection can we specify the bucket?",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "No.There is no such feature with collection creation.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Okay thanks. It help me alot. But what about the sorting ?\nkindly check the image above.",
"username": "Nabeel_Raza"
},
{
"code": "year_born",
"text": "The year_born values are not sorted.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "So how to view them in sorted ordered.",
"username": "Nabeel_Raza"
},
{
"code": " { $sort : {year_born:1} }",
"text": " { $sort : {year_born:1} }\nI just added this line but it doesn’t work for me.",
"username": "Nabeel_Raza"
},
{
"code": "db.collection.aggregate( [ \n { $unwind: \"$artists\" }, \n { $sort: { \"artists.year_born\": 1 } }, \n { $group: { _id: \"$id\", artists: { $push: \"$artists\" }, count: { $first: \"$count\" } } }\n] )\n",
"text": "You can try this:This will sort the unwound array elements and group the sorted ones back into an array.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Kindly review the Question again. I have collection and i made a Query and i need it to be sorted.",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "You can add the three stages from the aggregation I have provided to your aggregation query (at the end).",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "It doesn’t work for me.",
"username": "Nabeel_Raza"
},
{
"code": "db.artists.aggregate( [\n // First Stage\n {\n $bucket: {\n groupBy: \"$year_born\", // Field to group by\n boundaries: [ 1840, 1850, 1860, 1870, 1880 ], // Boundaries for the buckets\n default: \"NULL VALUES\", // Bucket id for documents which do not fall into a bucket\n output: { // Output for each bucket\n \"count\": { $sum: 1 },\n \"artists\" :\n {\n $push: {\n \"F_name\": \"$first_name\",\n \"L_name\": \"$last_name\",\n \"year_born\": \"$year_born\",\n \"year_died\" : \"$year_died\",\n \"nationality\": \"$nationality\"\n }\n }\n }\n }\n },\n { $unwind: \"$artists\" }, \n { $sort: { \"artists.year_born\": 1 } },\n { $group: { _id: \"$_id\", artists: { $push: \"$artists\" } } }\n\n] )",
"text": "",
"username": "Nabeel_Raza"
}
] | Bucket in mongodb | 2020-03-18T07:02:33.115Z | Bucket in mongodb | 3,362 |
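A follow-up sketch for the sorting question in the thread above: the arrays built by $push inside $bucket keep the order in which documents reach the stage, so sorting the input before bucketing is usually the simplest fix (the post-bucket $unwind/$sort/$group approach from the thread also works once the group key is written as "$_id"). This is only a minimal, hedged example against the sample artists collection from the original post, run on a single unsharded server:

```javascript
// Sort the input first; $push inside $bucket then accumulates values in that order.
db.artists.aggregate([
  { $sort: { year_born: 1 } },
  {
    $bucket: {
      groupBy: "$year_born",
      boundaries: [1840, 1850, 1860, 1870, 1880],
      default: "Other",
      output: {
        count: { $sum: 1 },
        artists: {
          $push: {
            name: { $concat: ["$first_name", " ", "$last_name"] },
            year_born: "$year_born"
          }
        }
      }
    }
  }
])
```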
null | [
"swift"
] | [
{
"code": "Uri.TryCreate(realmPath, UriKind.Relative, out var outPathUri);\nRealmConfigurationBase configuration = new FullSyncConfiguration(outPathUri, _currentUser, _realmFile)\n{\n ObjectClasses = typesInSelectedRealm,\n SchemaVersion = 1\n};\n\n_realmInstance = Realm.GetInstance(configuration);\n\nif(_realmInstance != null) _realmInstance.RealmChanged += LoadDataOnChange;\nif(_realmInstance != null) \n{\n _realmInstance.RealmChanged -= LoadDataOnChange; // figure it's good to clear all external references to Realm object\n _realmInstance.Dispose();\n GC.Collect();\n Thread.Sleep(1000); // both of these are me trying to just make darn sure that everything is cleaned up and has had enough time to do so\n Realm.DeleteRealm(_realmInstance.Config); // error\n}\nThe process cannot access the file '...\\test.realm' because it is being used by another process.",
"text": "Environment - .Net Core C#Loading in my dataTrying to copy .realm file with same format from a USB DriveError thrown: The process cannot access the file '...\\test.realm' because it is being used by another process.I’ve checked for IsClosed on the _realmInstance, and it is for sure working. Help! Why is Realm maintaining the lock on that file? Possible ideas include Visual Studio doing some nastiness, but I wanted to ask here in case someone knew what was going on.If I try and copy without deleting the .realm file I get a E_FAIL com component error. So I figure I have to solve this before I try anything else.",
"username": "Matthew_Farstad"
},
{
"code": "",
"text": "Also reported her: Can't .DeleteRealm() Even After .Dispose() · Issue #1970 · realm/realm-dotnet · GitHub where it’s answered.\nPlease make it clear when cross posting.\nThanks!",
"username": "Brian_Munkholm"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable To DeleteRealm after .Dispose() Called | 2020-03-17T18:03:50.872Z | Unable To DeleteRealm after .Dispose() Called | 2,418 |
null | [
"sharding"
] | [
{
"code": "use led\n\nsh.enableSharding(\"led\")\n\nstations = [\"yll\", \"fep\", \"sas\"]\nsharded_collections = [ \"camera\", \"station\" ]\n\n// Drop the collections (for dev), and recreate later\nsharded_collections.forEach(collection => {\n db[collection].drop()\n})\n\n// Create indexes\nsharded_collections.forEach(collection => {\n db[collection].createIndex({ station: 1, _id: 1})\n})\n\nstations.forEach(stationCode => {\n sh.disableBalancing(\"led.\" + stationCode)\n})\n\nsh.addShardTag(\"shardyll\", \"yll\")\nsh.addShardTag(\"shardfep\", \"fep\")\nsh.addShardTag(\"shardsas\", \"sas\")\n\nsh.addTagRange(\n \"led.station\",\n { \"station\" : \"yll\", \"_id\" : MinKey },\n { \"station\" : \"yll\", \"_id\" : MaxKey },\n \"yll\"\n)\nsh.addTagRange(\n \"led.station\",\n { \"station\" : \"fep\", \"_id\" : MinKey },\n { \"station\" : \"fep\", \"_id\" : MaxKey },\n \"fep\"\n)\nsh.addTagRange(\n \"led.station\",\n { \"station\" : \"sas\", \"_id\" : MinKey },\n { \"station\" : \"sas\", \"_id\" : MaxKey },\n \"sas\"\n)\nsh.enableBalancing(\"led.station\")\n\n// insert a few docs to test\ndb.station.insertOne({ \"station\": \"yll\", message: \"hello1\"})\ndb.station.insertOne({ \"station\": \"fep\", message: \"hello1\"})\ndb.station.insertOne({ \"station\": \"fep\", message: \"hello2\"})\ndb.station.insertOne({ \"station\": \"sas\", message: \"hello1\"})\ndb.station.insertOne({ \"station\": \"sas\", message: \"hello2\"})\ndb.station.insertOne({ \"station\": \"sas\", message: \"hello3\"})\n\ndb.station.getShardDistribution()\nsh.statususe led\n\nsh.enableSharding(\"led\")\n\nstations = [\"yll\", \"fep\", \"sas\"]\nsharded_collections = [ \"camera\", \"station\" ]\n\n// Drop the collections (for dev), and recreate later\nsharded_collections.forEach(collection => {\n db[collection].drop()\n})\n\n// Create indexes\nsharded_collections.forEach(collection => {\n db[collection].createIndex({ station: 1, _id: 1})\n})\n\nstations.forEach(stationCode => {\n sh.disableBalancing(\"led.\" + stationCode)\n})\n\nsh.addShardTag(\"shardyll\", \"yll\")\nsh.addShardTag(\"shardfep\", \"fep\")\nsh.addShardTag(\"shardsas\", \"sas\")\n\nsh.addTagRange(\n \"led.station\",\n { \"station\" : \"yll\", \"_id\" : MinKey },\n { \"station\" : \"yll\", \"_id\" : MaxKey },\n \"yll\"\n)\nsh.addTagRange(\n \"led.station\",\n { \"station\" : \"fep\", \"_id\" : MinKey },\n { \"station\" : \"fep\", \"_id\" : MaxKey },\n \"fep\"\n)\nsh.addTagRange(\n \"led.station\",\n { \"station\" : \"sas\", \"_id\" : MinKey },\n { \"station\" : \"sas\", \"_id\" : MaxKey },\n \"sas\"\n)\nsh.enableBalancing(\"led.station\")\n\n// insert a few docs to test\ndb.station.insertOne({ \"station\": \"yll\", message: \"hello1\"})\ndb.station.insertOne({ \"station\": \"fep\", message: \"hello1\"})\ndb.station.insertOne({ \"station\": \"fep\", message: \"hello2\"})\ndb.station.insertOne({ \"station\": \"sas\", message: \"hello1\"})\ndb.station.insertOne({ \"station\": \"sas\", message: \"hello2\"})\ndb.station.insertOne({ \"station\": \"sas\", message: \"hello3\"})\nmongos> db.station.getShardDistribution()\nCollection led.station is not sharded.\nmongos> sh.status()\n--- Sharding Status --- \n sharding version: {\n \t\"_id\" : 1,\n \t\"minCompatibleVersion\" : 5,\n \t\"currentVersion\" : 6,\n \t\"clusterId\" : ObjectId(\"5e718d5beb2d31dadf510508\")\n }\n shards:\n { \"_id\" : \"shardfep\", \"host\" : \"shardfep/shardfep1:27022\", \"state\" : 1, \"tags\" : [ \"fep\" ] }\n { \"_id\" : \"shardsas\", \"host\" : \"shardsas/shardsas1:27023\", \"state\" : 1, 
\"tags\" : [ \"sas\" ] }\n { \"_id\" : \"shardyll\", \"host\" : \"shardyll/shardyll1:27021\", \"state\" : 1, \"tags\" : [ \"yll\" ] }\n active mongoses:\n \"3.4.24\" : 1\n autosplit:\n Currently enabled: yes\n balancer:\n Currently enabled: yes\n Currently running: no\n Collections with active migrations: \n balancer started at Tue Mar 17 2020 19:54:20 GMT-0700 (PDT)\n Failed balancer rounds in last 5 attempts: 0\n Migration Results for the last 24 hours: \n No recent migrations\n databases:\n { \"_id\" : \"config\", \"primary\" : \"config\", \"partitioned\" : true }\n { \"_id\" : \"led\", \"primary\" : \"shardfep\", \"partitioned\" : true }\n\n",
"text": "I have setup a three shards cluster:My goal is to separate the data into different location based on the “station” field in documents.I have run the below, but seems all the insert at the end results in documents in only one shard, the primary shard. Not sure what went wrong.Output for sh.status:Output for shard distributionOutput for sh.status()Any insight is much appreciated.",
"username": "Raymond929"
},
{
"code": "sh.shardCollection()",
"text": "I think you have missed a step in the sharding procedure. The following are the important ones, and to be followed in that order:(1) Enable sharding on a database\n(2) Create index on the shard key field\n(3) Shard a collectionI see you have missed the step (3), You have to use the command sh.shardCollection() for that.Note that you enable sharding on a database, but shard a collection. A sharded database base can have sharded as well as unsharded collections. More details at Deploy a Sharded Cluster - Procedure.",
"username": "Prasad_Saya"
}
] | Sharding question | 2020-03-18T03:59:47.988Z | Sharding question | 1,625 |
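Following the answer in the thread above, the missing step for this setup would look roughly like the sketch below. It assumes the { station: 1, _id: 1 } index from the original script already exists; zone ranges and balancing only take effect once the collection itself is sharded.

```javascript
// Shard the collection on the same compound key the index was created for.
sh.shardCollection("led.station", { station: 1, _id: 1 })

// Verify: the collection should now report as sharded, with chunks and zones listed.
sh.status()
db.station.getShardDistribution()
```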
null | [
"aggregation"
] | [
{
"code": "db.getCollection('products').aggregate([\n { $unwind: \"$categories\" },\n { $unwind: \"$categories\"},\n {$group: {\"_id\": \"$_id\",\"title\": {$first:\"$title\"},\n \"asin\":{$first:\"$asin\"},\n \"categories\": { $push: \"$categories\" }} },\n { $match: { \"categories\": { $in: ['T-Shirts']}} },\n { \"$project\":{ \"_id\": 0, \"asin\":1, \"title\":1 } } ])\nvar cursor = \ndb.products.explain(\"allPlansExecution\").find(categories:{\"T-Shirts\"},{ categories:1, title:1, asin:1, _id:0,})\nwhile (cursor.hasNext()) {\n print(cursor.next());\n}\n",
"text": "I’m currently struggling to implement an index to my query. Here is the original query:This is my current code for my index:When I run the index code I should get nReturned as 8 currently at 0.Could someone please guide me how to do this? Or can someone tell me what they would add?",
"username": "Jeffery_Sharjah"
},
{
"code": "",
"text": "When I run the index code I should get nReturned as 8 currently at 0.It looks like your question is actually about your aggregation pipeline not returning the expected results rather than an indexing improvement.In order for someone to provide suggestions can you please comment with:Note: your aggregation query as written will not benefit from an index because it is processing all documents in the collection. See Pipeline Operators and Indexes for more information.Regards,\nStennie",
"username": "Stennie_X"
}
] | Aggregation pipeline isn't returning expected results | 2020-03-17T18:04:35.658Z | Aggregation pipeline isn’t returning expected results | 3,979 |
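To illustrate the note above about pipelines and indexes: only a $match (or $sort) at the very start of a pipeline can use an index, so a hypothetical reshuffle of the pipeline from this thread would filter first and skip the $unwind/$group round trip entirely. This sketch assumes categories is a flat array of strings; if it is an array of arrays (which the double $unwind suggests), the predicate needs adjusting.

```javascript
// Multikey index on the array field used for filtering.
db.products.createIndex({ categories: 1 })

// Filtering in the first stage lets the planner use that index.
db.products.aggregate([
  { $match: { categories: "T-Shirts" } },
  { $project: { _id: 0, asin: 1, title: 1 } }
])
```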
null | [
"queries"
] | [
{
"code": "$centerSphere$geoWithin",
"text": "Hi, I’m in the process of determining whether I should use MongoDB or Postgres (or something else) for an app that will be making pretty intensive geospatial queries.More specifically, they will involve determining the paths (a MultiLineString?) a coordinate point would be close to, and based on a matching timestamp.The vice versa would also need to happen. Given a path of coordinates, I would need to query which coordinate points this path got close to.Could MongoDB be the right tool for the job? Is MongoDB able to support this out of the box or with a relatively idiomatic query construction? This post seems to suggest using $centerSphere with $geoWithin would work despite the docs sayingSelects documents with geospatial data that exists entirely within a specified shape.Even if so and works with a multi-point path, can the query still work in vice versa where\nthe points are queried given a path? Thanks!",
"username": "cheng_soul"
},
{
"code": "$centerSphere$geoWithin",
"text": "Hi @cheng_soul, welcome!Could you elaborate more on what you’re trying to do with example inputs, queries (both cases), and the expected output?The use of $centerSphere with $geoWithin only works for Points, LineStrings, MultiLineStrings within the defined circle (spherical). For detecting intersections, there is currently an open ticket for this SERVER-30390 (Please upvote or add yourself as a watcher for notifications).Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thanks! So if I say define the circle as the entire Earth, can I use those LineStrings and MultiLineStings and find where they intersect? Basically I’m looking for a database to track where people have traveled and whether they have crossed paths.",
"username": "cheng_soul"
},
{
"code": "",
"text": "Hi @cheng_soul,Basically I’m looking for a database to track where people have traveled and whether they have crossed paths.For that use case, it’s likely better to use $geoIntersects instead. It’s used to query documents whose geospatial data intersects with a specified GeoJSON object.Please see Tutorial: Find Restaurants with Geospatial QueriesRegards,\nWan.",
"username": "wan"
}
] | Geospatial Queries for Line Intersection | 2020-02-21T05:44:42.531Z | Geospatial Queries for Line Intersection | 6,104 |
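As a concrete illustration of the $geoIntersects suggestion above, here is a minimal sketch with a hypothetical tracks collection: each document stores one person's travelled path as a GeoJSON LineString, and the query asks which stored paths cross a given path. A timestamp filter can be combined with the geo predicate in the same query document.

```javascript
// One stored path per document, indexed for geospatial queries.
db.tracks.createIndex({ path: "2dsphere" })

db.tracks.insertOne({
  user: "alice",
  start: ISODate("2020-02-20T10:00:00Z"),
  path: { type: "LineString", coordinates: [[-73.99, 40.73], [-73.98, 40.76]] }
})

// Which stored paths intersect this path (e.g. another user's route)?
db.tracks.find({
  start: { $gte: ISODate("2020-02-20T00:00:00Z") },
  path: {
    $geoIntersects: {
      $geometry: {
        type: "LineString",
        coordinates: [[-73.995, 40.75], [-73.975, 40.74]]
      }
    }
  }
})
```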
null | [
"replication",
"performance"
] | [
{
"code": "",
"text": "Hi all,\nI would like to share an issue that I have with my MongoDB replica set (MongoDB Community edition, 4.2.3)I have a cluster with 3 nodes, with this hardware (two of them are virtual machines running on Vmware):I have a GridFS collection with around 1,8 TB of documentsTo reclaim space I removed my secondary node, I have recreated it and I have added it again to the replica set. Now MongoDB is copying documents from master to the replica.The problem is that is very slow. After 12 hours it has copied only 36% of my collection. I have tested this operation three times and usually the entire copy ended in 6/7 hours.In log files I see this warnings, but I don’t know what they mean2020-03-08T11:23:29.306+0100 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: config.system.sessions does not existAnyone have ideas?Thanks in advance,\nJack",
"username": "jack"
},
{
"code": "",
"text": "Hi @jack, welcome!The problem is that is very slow. After 12 hours it has copied only 36% of my collection. I have tested this operation three times and usually the entire copy ended in 6/7 hours.You can try creating a new member using seed data from another member. Restart the machine with a copy of a recent data directory from another member in the replica set. This procedure can replace the data more quickly but requires more manual steps. See also Replica Set Resync by Copying.When syncing a member, choose a time when the system has the bandwidth to move a large amount of data. Schedule the synchronization during a time of low usage or during a maintenance window.Alternatively, if you have MongoDB backup files see also Restore a Replica Set from MongoDB Backups.Regards,\nWan.",
"username": "wan"
}
] | Slow Replica Set | 2020-03-08T10:26:08.030Z | Slow Replica Set | 2,902 |
null | [
"text-search"
] | [
{
"code": " MongoError: must have $meta projection for all $meta sort keys\n operationTime: Timestamp { _bsontype: 'Timestamp', low_: 2, high_: 1584383573 },\n ok: 0,\n errmsg: 'must have $meta projection for all $meta sort keys',\n code: 2,\n codeName: 'BadValue',\n '$clusterTime': \n { clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 2, high_: 1584383573 },\n signature: { hash: [Object], keyId: [Object] } },\n name: 'MongoError',\n [Symbol(mongoErrorContextSymbol)]: {} }\n var userData = await UserDB.userModel.find(\n {\n $or: [\n {\n $text: {$search: 'jac'}\n },\n {score : { $meta: \"textScore\" } } ,\n {\n $or: [\n {'username': {$regex: '^jac'}},\n {'name.first': {$regex: '^jac'}},\n {'name.last': {$regex: '^jac'}}\n ]\n }\n ]\n }).sort( { score: { $meta: \"textScore\" } } )\n",
"text": "I am trying to sort by $meta score the results from my $text search but every time I do I get this error message.What I am trying to achieve is to allow users to search words like “ah or jo” and for my search query to find words that contain the strings passed and sort them based on relevance.I got it to work but it wasn’t finding parts of strings BUT only whole strings… For example, If I pass “Jacob” it will find names with that string but if I pass “Jac” it will come back as null.Here is my code",
"username": "Jon_Paricien"
},
{
"code": "{ name: \"mongo database\", description: \"replication with mongo\" }$meta$text { name: \"blueberry field\" }{ name: \"blueberries\" }",
"text": "There are two different functions and they have different purposes:(1) Text search\n(2) Regex search(1) Text Search:With text search you can search for words in different text fields. For example, search for “mongo” in the two text fields in a document:{ name: \"mongo database\", description: \"replication with mongo\" }For this you must create a text index (and this index can cover all text fields in a document). And, you can search for “mongo” or “replication”, but not “mong”, “ongo” or “repl”.The $meta operator is used with the text searches. It can be part of the projection and the sort operation.(2) Regex Search:With regex search, you can search for “mong” or “ongo” or “data”.Text Search and Stemmed Words:For case insensitive and diacritic insensitive text searches, the $text operator matches on the complete stemmed word.For documents:\n { name: \"blueberry field\" }\n{ name: \"blueberries\" }The following first two text searches will not match, but the last two will match:\ndb.collection.find( { $text: { $search: “blue” } } )\n{ $text: { $search: “blueberr” } }\n{ $text: { $search: “blueberry” } }\n{ $text: { $search: “blueberries” } }",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "My question is… Is it possible to sort a regex search by relevance ? I know $text and $regex are two different functions but is there a way to combine both?",
"username": "Jon_Paricien"
}
] | MongoError: must have $meta projection for all $meta sort keys | 2020-03-16T19:00:22.096Z | MongoError: must have $meta projection for all $meta sort keys | 3,375 |
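Tying the thread above together, here is a hedged sketch (collection and index names are illustrative, not taken from the original code) of the two separate queries discussed: a $text search where the textScore is both projected and used for sorting (which is what the quoted error is complaining about), and a prefix $regex search, which has no built-in relevance score and therefore needs an application-chosen sort order.

```javascript
// Requires a text index, for example:
db.users.createIndex({ username: "text", "name.first": "text", "name.last": "text" })

// Whole-word text search, sorted by relevance; the $meta projection must be present.
db.users.find(
  { $text: { $search: "jacob" } },
  { score: { $meta: "textScore" }, username: 1, name: 1 }
).sort({ score: { $meta: "textScore" } })

// Prefix search ("jac..."): a separate regex query with an ordinary sort key.
db.users.find({
  $or: [
    { username: { $regex: "^jac", $options: "i" } },
    { "name.first": { $regex: "^jac", $options: "i" } },
    { "name.last": { $regex: "^jac", $options: "i" } }
  ]
}).sort({ username: 1 })
```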
null | [
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "I’m trying to understand how the Event Handling works with respect to historical changes.If I set up a monitor for changes, will I only get changes that occur after the subscription or all historical changes?If only new changes, then what is the recommended pattern for getting changes that occurred before Event Handler started?",
"username": "Jason_Whetton"
},
{
"code": "",
"text": "Welcome to the forum @Jason_Whetton!If I set up a monitor for changes, will I only get changes that occur after the subscription or all historical changes?Changes are for updates that occur after you have subscribed.If only new changes, then what is the recommended pattern for getting changes that occurred before Event Handler started?Past events will already have been applied to the objects you fetch. Can you provide more detail on your use case?Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks for the response.Sure, I can describe the use case and perhaps you can advise if my solution is reasonable. I’m new to Realm so it could be that I’m approaching from the wrong angle.I’m building an inventory management app that retrieves a product catalog and allows users to scan updates (place orders, price changes etc).The product catalog is a single realm and that works just fine.The changes by the user are created in a private realm for each user (~/commands).\nI was intending to create a handler on the server that would see all changes in any users command realm and process them and update the status. For this to work though, I’d need to also be able to query across these realms in order to fetch any changes that existed before I subscribed (either that or open each realm individually but that feels really awkward.An alternative solution is to simply put all commands in a single realm for all users (/commands) and add the user id as a property. This way I could do a query for all unhandled commands directly after subscribing. There is an example on github that watches changes for settings from all users (~/settings), so I assumed this was a preferred pattern (segregating user data) but the example doesn’t consider changes that occur when the event handler process is offline.This is a classic problem with event handling and usually solved with a delta query placed directly after subscribing that is then merged with any notifications received so far.",
"username": "Jason_Whetton"
}
] | Event Handler historical changes | 2020-03-16T19:00:32.851Z | Event Handler historical changes | 1,775 |
null | [
"app-services-user-auth",
"stitch"
] | [
{
"code": "",
"text": "I have a client app working with the custom JWT authentication. I am setting up a webhook and not able to make valid requests. The webhook will accept GET requests and currently I have it set up returning 200 just to make sure things work. The problem is you are not allowed to add a body with a GET request where it seems Stitch is expecting the credentials.I am using the documentation here: https://docs.mongodb.com/stitch/services/create-a-service-webhook/#configure-user-authentication.Is there a way I can use the credentials (“jwtTokenString”: “<User’s JWT Token>”) in the header of the requests?",
"username": "Charlie_Hauser"
},
{
"code": "",
"text": "Hi Charlie – You should be able to send the credentials in the same format (“jwtTokenString”: “<User’s JWT Token>”) as part of the header. Let me know if you run into any issues!",
"username": "Drew_DiPalma"
},
{
"code": "fetch( “webhookurl”, {method:“GET”, headers: { jwtTokenString: “userJwtString”} } )",
"text": "I should also mention I am using the fetch API. I get a CORS preflight error when I do a GET request. I am able to do it using postman.\nMy fetch request is formatted as shown:\nfetch( “webhookurl”, {method:“GET”, headers: { jwtTokenString: “userJwtString”} } )",
"username": "Charlie_Hauser"
}
] | Webhook JWT application authentication | 2020-03-11T18:49:32.330Z | Webhook JWT application authentication | 2,125 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi, I am new to mongodb and find a quick and efficient way to perform following task. the scenario is as followsi have following 3 tablesone way that I know is the traditional way like get the contacts of 1 person, loop through all the contacts, find the matching and save in 3rd table. it will be long process, slow down the process and may be timeout at some certain stagemy question is how to make the best and fastest way to perform this process.",
"username": "mehhfooz"
},
{
"code": "{\n \"name\": \"Bruce Wayne\",\n \"nickname\": \"Batman\",\n \"contacts\": [\n {\n \"name\": \"Robin\",\n \"title\": \"Partner in crime\",\n \"phones\": [\"999-999-9999\", \"333-333-3333\"]\n },\n {\n \"name\": \"Albert\",\n \"title\": \"Friend\",\n \"phones\": [\"111-111-1111\", \"444-444-4444\"]\n }\n ]\n}\n",
"text": "Hi @mehhfooz!Welcome to the MongoDB Community.There are some design patterns for data modeling on MongoDB. The following link shows a summary of them: Building with Patterns: A Summary | MongoDB BlogOne of the advantages of using a document-store like MongoDB is the flexibility to store data as-is, without having to decompose an object (like your Users).I’m not sure if I fully understand your case, but my best guess based on my understanding would be to have a “users” collection with documents using the following schema:Contacts is an array of people in Bruce’s phonebook.Hope it helps.All the best,Rodrigo\n(a.k.a. Logwriter)",
"username": "logwriter"
}
] | Fastest query help to produce and make a relation between documents | 2020-03-16T19:01:08.439Z | Fastest query help to produce and make a relation between documents | 1,880 |
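To make the embedded-contacts suggestion above concrete, a couple of hedged examples (using a hypothetical users collection shaped like the schema sketch): finding everyone who has a given phone number in their phonebook becomes a single query on the array, and a multikey index keeps it fast.

```javascript
// Who has this number among their contacts?
db.users.find({ "contacts.phones": "999-999-9999" }, { name: 1, nickname: 1 })

// Multikey index over the embedded array field.
db.users.createIndex({ "contacts.phones": 1 })
```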
null | [
"indexes"
] | [
{
"code": "",
"text": "Hello,\nI have a program written in Go. The program writes into mongo a certain document, this type of document have a TTL set from user input (between 1-7 days).Now the catch is here, during my testing I set the document to expire after 35 minutes. I updated my code to make sure it adheres to the time set from user input. However, when I checked the saved document, it was deleted after 35 minutes although I set it (as user input) to be valid for about 1-day.\nIs there something I am missing? Does it mean when a TTL index is created it automatically sets the value of the expiration date to be the value from the first document that was saved? here is a snippet code:type object struct {\nExpireAt time.Time\n}\n…\nobj := object{\nExpireAt: time.Now().Add(duration * 24 * time.Hour),\n}Note the duration variable is the users input. The index gets created successfully but once it is saved into the DB, the index value stays contant for any other document save.The aim is that each document should hold its own TTL as specified by the user input.Can anyone provide better insights on this? Is there something that I am missing?",
"username": "Student_al_St"
},
{
"code": "",
"text": "If the timestamp is to be the expireAt value then set the expireAfterSeconds to 0 on the TTL index.",
"username": "chris"
},
{
"code": "data := mongo.IndexModel{ \t\tKeys: bson.D{{Key: \"ExpireAt\", Value: 1}}, \t\tOptions: options.Index().SetExpireAfterSeconds(0), \t}",
"text": "I have that set already, and this is the only method I’ve seen for the #Go driver:\ndata := mongo.IndexModel{ \t\tKeys: bson.D{{Key: \"ExpireAt\", Value: 1}}, \t\tOptions: options.Index().SetExpireAfterSeconds(0), \t}",
"username": "Student_al_St"
}
] | How does the TTL expiry work? | 2020-03-15T05:46:32.324Z | How does the TTL expiry work? | 5,488 |
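A shell-side sketch of the per-document expiry pattern discussed above (collection name is hypothetical): with expireAfterSeconds set to 0, each document is removed once its own ExpireAt date has passed, checked by the background TTL monitor roughly once a minute. One common gotcha, offered here only as a guess at the behaviour described: if a TTL index on the same field already exists with different options, creating it again does not change the existing options; they have to be changed with collMod or by dropping and recreating the index.

```javascript
// TTL index keyed on the per-document expiry date.
db.objects.createIndex({ ExpireAt: 1 }, { expireAfterSeconds: 0 })

// Each document carries its own deadline.
db.objects.insertOne({ name: "short", ExpireAt: new Date(Date.now() + 35 * 60 * 1000) })       // ~35 minutes
db.objects.insertOne({ name: "long",  ExpireAt: new Date(Date.now() + 24 * 60 * 60 * 1000) })  // ~1 day

// Check which TTL options are actually in force.
db.objects.getIndexes()
```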
null | [] | [
{
"code": "$sample$sample",
"text": "Hi, I’m curious about is there an efficient way to select 2M docs from within 30M+ records in db, I’m well aware of the $sample operator but am not sure about it’s performance when the base is rather large, also seemed $sample contains duplicated data, hence wondering maybe there’s better suggested approach? Thanks.",
"username": "a_b"
},
{
"code": "",
"text": "I do not know about efficiency, but if you have a field with unique index, $sample will return each document once.See at https://docs.mongodb.com/manual/core/read-isolation-consistency-recency/#faq-developers-isolate-cursors",
"username": "coderkid"
},
{
"code": "unique_id",
"text": "Thanks, creating an unique index on _id seemed to do the trick! ",
"username": "a_b"
},
{
"code": "_id",
"text": "@a_b the _id field is already unique and has an index on it. No need to create a new one.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Thanks for the heads-up, do you happen to know the performance of such a large query?",
"username": "a_b"
},
{
"code": "",
"text": "@a_b,It’s really hard to answer such a question. There are so many variables that will influence the performance of your query. It basically depends on:I believe the best way to answer your question would be to run a test yourself and track metrics like execution time, and query plan. If you didn’t like the result of your testing, further investigation will be needed to find out what would be the best solution for your use case.All the best,Rodrigo\n(a.k.a. Logwriter)",
"username": "logwriter"
}
] | Randomly select 2M docs from 30M docs, suggestions? | 2020-03-15T23:15:10.560Z | Randomly select 2M docs from 30M docs, suggestions? | 1,935 |
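Expanding slightly on the $sample discussion above, a minimal hedged sketch (collection names are placeholders): drawing roughly 2M of 30M documents typically pushes $sample onto its random-sort code path, so allowDiskUse is worth enabling, and writing the draw out with $out means it is computed only once.

```javascript
db.records.aggregate(
  [
    { $sample: { size: 2000000 } },
    { $out: "records_sample" }   // persist the sampled set for later queries
  ],
  { allowDiskUse: true }
)
```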
null | [
"sharding",
"performance"
] | [
{
"code": "",
"text": "HiWe face an issue when we do many map reduce operations on a sharded cluster.\nThe used memory increases more and more and when it is on the limit of the host memory, the mongod process crashes with an out of memry exception.Environment:We believe that the memory can be used by mongo up to the max availabe RAM.\nBut the mongod process should never crash.Many thank for a feedback!",
"username": "Hermann_Jaun"
},
{
"code": "",
"text": "@Hermann_Jaun What specific version of MongoDB server are you using?Can you share an excerpt of the log lines with the out of memory exception and any associated stack trace?Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "@Stennie_X\nThe additional inofos:",
"username": "Hermann_Jaun"
},
{
"code": "mongodrepo.mongodb.org",
"text": "@Hermann_Jaun Can you share the log lines with the out of memory exception and any associated stack trace?You originally mentioned the mongod process crashes, so I’m looking for some extra context on the reason for shutdown. Unexpected shutdown usually has some information in the logs followed by a stack trace.Can you also confirm the distro version of Linux used and whether you are running a package installed from repo.mongodb.org or an alternative source?Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_XThe logs are unfortunately already deleted, we have too much transactions to keep them so long. But we will save the next time the respective log file.I would like to mention, that we set in the meantime on 3.3.2020 the parameter storage.wiredTiger.engineConfig.cacheSizeGB to 35 GByte.\nIt looks like since then the memory consumption stays much lower than before (same load). It is since more than 2 weeks quite stable, growing slowly from the 35 GByte to 62 GByte.We are wondering what happens when the memory utilization reache 99%. Will the server crash again?Therefore we would like also to understand how MongoDB manage the memory. Even if we are not running any operation the memory is never full released. I.e it does not go down to the 35 GByte per shard. Is it normal?Thank you,\nHermann",
"username": "Hermann_Jaun"
}
] | Map Reduce - shard crashes with out of memory exception | 2020-03-03T10:53:55.038Z | Map Reduce - shard crashes with out of memory exception | 2,466 |
null | [
"aggregation",
"dot-net",
"indexes"
] | [
{
"code": "{\n \"command\": {\n \"aggregate\": \"contacts\",\n \"pipeline\": [\n {\n \"$match\": {\n \"accountId\": 158,\n \"deleted\": null\n }\n },\n {\n \"$group\": {\n \"_id\": 1,\n \"n\": {\n \"$sum\": 1\n }\n }\n }\n ],\n \"cursor\": {},\n \"$db\": \"Loopify\",\n \"lsid\": {\n \"id\": {\n \"$uuid\": \"a84a49c6-c5f5-d6ce-9f33-b8e38d45e1b3\"\n }\n },\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1583238446,\n \"i\": 60\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": \"PHh4eHh4eD4=\",\n \"$type\": 0\n },\n \"keyId\": 6748499292626878000\n }\n }\n },\n \"planSummary\": [\n {\n \"IXSCAN\": {\n \"accountId\": 1,\n \"deleted\": 1,\n \"tagGroups.name\": 1,\n \"tagGroups.tags\": 1,\n \"firstName\": 1,\n \"lastName\": 1\n }\n }\n ],\n \"keysExamined\": 5677198,\n \"docsExamined\": 139458,\n \"cursorExhausted\": 1,\n \"numYields\": 44674,\n \"nreturned\": 1,\n \"reslen\": 238,\n \"locks\": {\n \"Global\": {\n \"acquireCount\": {\n \"r\": 89352\n }\n },\n \"Database\": {\n \"acquireCount\": {\n \"r\": 44676\n }\n },\n \"Collection\": {\n \"acquireCount\": {\n \"r\": 44676\n }\n }\n },\n \"protocol\": \"op_msg\",\n \"millis\": 32279\n}\n",
"text": "Using the c# driver to get a document count, using CountDocumentsAsync is see the following “very slow” command in Atlas profiler:The collection has 760K documents, so I don’t understand keysExamined\": 5677198.\nThe count takes 32 sec, while a find query on the same fields takes under a sec.What can be done to optimize this?MongodB: 3.6.17\nC# Driver: 2.10.2",
"username": "Erlend_Baerland"
},
{
"code": "mongoexplain(\"executionStats\")executionStats",
"text": "@Erlend_Baerland Welcome to the forum!Can you provide some more details to help investigate this:The executionStats output includes more detail of query processing stages (work performed such as keys & documents examined, ranges for index searches, etc).Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "db.contacts.explain(\"executionStats\").aggregate(\n[\n {\n \"$match\": {\n \"accountId\": 158,\n \"deleted\": null\n }\n },\n {\n \"$group\": {\n \"_id\": 1,\n \"n\": {\n \"$sum\": 1\n }\n }\n }\n ]\n)\n{ \n \"serverInfo\" : {\n \"host\" : \"<removed>\", \n \"port\" : 27017.0, \n \"version\" : \"3.6.17\", \n \"gitVersion\" : \"3d6953c361213c5bfab23e51ab274ce592edafe6\"\n }, \n \"stages\" : [\n {\n \"$cursor\" : {\n \"query\" : {\n \"accountId\" : 158.0, \n \"deleted\" : null\n }, \n \"queryPlanner\" : {\n \"plannerVersion\" : 1.0, \n \"namespace\" : \"Loopify.contacts\", \n \"indexFilterSet\" : false, \n \"parsedQuery\" : {\n \"$and\" : [\n {\n \"accountId\" : {\n \"$eq\" : 158.0\n }\n }, \n {\n \"deleted\" : {\n \"$eq\" : null\n }\n }\n ]\n }, \n \"winningPlan\" : {\n \"stage\" : \"FETCH\", \n \"filter\" : {\n \"deleted\" : {\n \"$eq\" : null\n }\n }, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : {\n \"accountId\" : 1.0, \n \"deleted\" : 1.0, \n \"tagGroups.name\" : 1.0, \n \"tagGroups.tags\" : 1.0, \n \"firstName\" : 1.0, \n \"lastName\" : 1.0\n }, \n \"indexName\" : \"accountId_1_deleted_1_tagGroups.name_1_tagGroups.tags_1_firstName_1_lastName_1\", \n \"isMultiKey\" : true, \n \"multiKeyPaths\" : {\n \"accountId\" : [\n\n ], \n \"deleted\" : [\n\n ], \n \"tagGroups.name\" : [\n \"tagGroups\"\n ], \n \"tagGroups.tags\" : [\n \"tagGroups\", \n \"tagGroups.tags\"\n ], \n \"firstName\" : [\n\n ], \n \"lastName\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"accountId\" : [\n \"[158.0, 158.0]\"\n ], \n \"deleted\" : [\n \"[null, null]\"\n ], \n \"tagGroups.name\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"tagGroups.tags\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"firstName\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"lastName\" : [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }, \n \"rejectedPlans\" : [\n {\n \"stage\" : \"FETCH\", \n \"filter\" : {\n \"deleted\" : {\n \"$eq\" : null\n }\n }, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : {\n \"accountId\" : 1.0, \n \"mobile\" : 1.0\n }, \n \"indexName\" : \"accountId_1_mobile_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : {\n \"accountId\" : [\n\n ], \n \"mobile\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"accountId\" : [\n \"[158.0, 158.0]\"\n ], \n \"mobile\" : [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }, \n {\n \"stage\" : \"FETCH\", \n \"filter\" : {\n \"deleted\" : {\n \"$eq\" : null\n }\n }, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : {\n \"accountId\" : 1.0, \n \"email\" : 1.0\n }, \n \"indexName\" : \"accountId_1_email_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : {\n \"accountId\" : [\n\n ], \n \"email\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"accountId\" : [\n \"[158.0, 158.0]\"\n ], \n \"email\" : [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }, \n {\n \"stage\" : \"FETCH\", \n \"filter\" : {\n \"deleted\" : {\n \"$eq\" : null\n }\n }, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : {\n \"accountId\" : 1.0, \n \"_id\" : 1.0\n }, \n \"indexName\" : \"accountId_1__id_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : {\n \"accountId\" : [\n\n ], \n \"_id\" 
: [\n\n ]\n }, \n \"isUnique\" : true, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"accountId\" : [\n \"[158.0, 158.0]\"\n ], \n \"_id\" : [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }, \n {\n \"stage\" : \"FETCH\", \n \"filter\" : {\n \"deleted\" : {\n \"$eq\" : null\n }\n }, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : {\n \"accountId\" : 1.0\n }, \n \"indexName\" : \"accountId_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : {\n \"accountId\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"accountId\" : [\n \"[158.0, 158.0]\"\n ]\n }\n }\n }, \n {\n \"stage\" : \"FETCH\", \n \"filter\" : {\n \"deleted\" : {\n \"$eq\" : null\n }\n }, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : {\n \"accountId\" : 1.0, \n \"deleted\" : 1.0, \n \"firstName\" : 1.0, \n \"lastName\" : 1.0\n }, \n \"indexName\" : \"accountId_1_deleted_1_firstName_1_lastName_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : {\n \"accountId\" : [\n\n ], \n \"deleted\" : [\n\n ], \n \"firstName\" : [\n\n ], \n \"lastName\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"accountId\" : [\n \"[158.0, 158.0]\"\n ], \n \"deleted\" : [\n \"[null, null]\"\n ], \n \"firstName\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"lastName\" : [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }, \n {\n \"stage\" : \"FETCH\", \n \"filter\" : {\n \"deleted\" : {\n \"$eq\" : null\n }\n }, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : {\n \"accountId\" : 1.0, \n \"firstName\" : 1.0, \n \"lastName\" : 1.0\n }, \n \"indexName\" : \"accountId_1_firstName_1_lastName_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : {\n \"accountId\" : [\n\n ], \n \"firstName\" : [\n\n ], \n \"lastName\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"accountId\" : [\n \"[158.0, 158.0]\"\n ], \n \"firstName\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"lastName\" : [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }, \n {\n \"stage\" : \"FETCH\", \n \"filter\" : {\n \"deleted\" : {\n \"$eq\" : null\n }\n }, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : {\n \"accountId\" : 1.0, \n \"externalId\" : 1.0, \n \"deleted\" : 1.0\n }, \n \"indexName\" : \"accountId_1_externalId_1_deleted_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : {\n \"accountId\" : [\n\n ], \n \"externalId\" : [\n\n ], \n \"deleted\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"accountId\" : [\n \"[158.0, 158.0]\"\n ], \n \"externalId\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"deleted\" : [\n \"[null, null]\"\n ]\n }\n }\n }, \n {\n \"stage\" : \"FETCH\", \n \"filter\" : {\n \"deleted\" : {\n \"$eq\" : null\n }\n }, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : {\n \"accountId\" : 1.0, \n \"deleted\" : 1.0, \n \"tagGroups.name\" : 1.0, \n \"tagGroups.tags\" : 1.0\n }, \n \"indexName\" : \"accountId_1_deleted_1_tagGroups.name_1_tagGroups.tags_1\", \n \"isMultiKey\" : true, \n \"multiKeyPaths\" : {\n \"accountId\" : 
[\n\n ], \n \"deleted\" : [\n\n ], \n \"tagGroups.name\" : [\n \"tagGroups\"\n ], \n \"tagGroups.tags\" : [\n \"tagGroups\", \n \"tagGroups.tags\"\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"accountId\" : [\n \"[158.0, 158.0]\"\n ], \n \"deleted\" : [\n \"[null, null]\"\n ], \n \"tagGroups.name\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"tagGroups.tags\" : [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }, \n {\n \"stage\" : \"FETCH\", \n \"filter\" : {\n \"deleted\" : {\n \"$eq\" : null\n }\n }, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : {\n \"accountId\" : 1.0, \n \"deleted\" : 1.0\n }, \n \"indexName\" : \"accountId_1_deleted_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : {\n \"accountId\" : [\n\n ], \n \"deleted\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"accountId\" : [\n \"[158.0, 158.0]\"\n ], \n \"deleted\" : [\n \"[null, null]\"\n ]\n }\n }\n }, \n {\n \"stage\" : \"FETCH\", \n \"filter\" : {\n \"deleted\" : {\n \"$eq\" : null\n }\n }, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : {\n \"accountId\" : 1.0, \n \"deleted\" : 1.0, \n \"_id\" : 1.0, \n \"firstName\" : 1.0, \n \"lastName\" : 1.0\n }, \n \"indexName\" : \"accountId_1_deleted_1__id_1_firstName_1_lastName_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : {\n \"accountId\" : [\n\n ], \n \"deleted\" : [\n\n ], \n \"_id\" : [\n\n ], \n \"firstName\" : [\n\n ], \n \"lastName\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"accountId\" : [\n \"[158.0, 158.0]\"\n ], \n \"deleted\" : [\n \"[null, null]\"\n ], \n \"_id\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"firstName\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"lastName\" : [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\n ]\n }, \n \"executionStats\" : {\n \"executionSuccess\" : true, \n \"nReturned\" : 153151.0, \n \"executionTimeMillis\" : 48409.0, \n \"totalKeysExamined\" : 6319057.0, \n \"totalDocsExamined\" : 153151.0, \n \"executionStages\" : {\n \"stage\" : \"FETCH\", \n \"filter\" : {\n \"deleted\" : {\n \"$eq\" : null\n }\n }, \n \"nReturned\" : 153151.0, \n \"executionTimeMillisEstimate\" : 46454.0, \n \"works\" : 6319058.0, \n \"advanced\" : 153151.0, \n \"needTime\" : 6165906.0, \n \"needYield\" : 0.0, \n \"saveState\" : 49917.0, \n \"restoreState\" : 49917.0, \n \"isEOF\" : 1.0, \n \"invalidates\" : 0.0, \n \"docsExamined\" : 153151.0, \n \"alreadyHasObj\" : 0.0, \n \"inputStage\" : {\n \"stage\" : \"IXSCAN\", \n \"nReturned\" : 153151.0, \n \"executionTimeMillisEstimate\" : 14458.0, \n \"works\" : 6319058.0, \n \"advanced\" : 153151.0, \n \"needTime\" : 6165906.0, \n \"needYield\" : 0.0, \n \"saveState\" : 49917.0, \n \"restoreState\" : 49917.0, \n \"isEOF\" : 1.0, \n \"invalidates\" : 0.0, \n \"keyPattern\" : {\n \"accountId\" : 1.0, \n \"deleted\" : 1.0, \n \"tagGroups.name\" : 1.0, \n \"tagGroups.tags\" : 1.0, \n \"firstName\" : 1.0, \n \"lastName\" : 1.0\n }, \n \"indexName\" : \"accountId_1_deleted_1_tagGroups.name_1_tagGroups.tags_1_firstName_1_lastName_1\", \n \"isMultiKey\" : true, \n \"multiKeyPaths\" : {\n \"accountId\" : [\n\n ], \n \"deleted\" : [\n\n ], \n \"tagGroups.name\" : [\n \"tagGroups\"\n ], \n \"tagGroups.tags\" : [\n 
\"tagGroups\", \n \"tagGroups.tags\"\n ], \n \"firstName\" : [\n\n ], \n \"lastName\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 2.0, \n \"direction\" : \"forward\", \n \"indexBounds\" : {\n \"accountId\" : [\n \"[158.0, 158.0]\"\n ], \n \"deleted\" : [\n \"[null, null]\"\n ], \n \"tagGroups.name\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"tagGroups.tags\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"firstName\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"lastName\" : [\n \"[MinKey, MaxKey]\"\n ]\n }, \n \"keysExamined\" : 6319057.0, \n \"seeks\" : 1.0, \n \"dupsTested\" : 6319057.0, \n \"dupsDropped\" : 6165906.0, \n \"seenInvalidated\" : 0.0\n }\n }\n }\n }\n }, \n {\n \"$group\" : {\n \"_id\" : {\n \"$const\" : 1.0\n }, \n \"n\" : {\n \"$sum\" : {\n \"$const\" : 1.0\n }\n }\n }\n }\n ], \n \"ok\" : 1.0, \n \"operationTime\" : Timestamp(1583831635, 28), \n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1583831635, 28), \n \"signature\" : {\n \"hash\" : BinData(0, \"higLMSwtyXkmzuObjj6EgAsotPk=\"), \n \"keyId\" : 6748499292626878465\n }\n }\n}\n\n",
"text": "Hi,\nWe’re using M10 (General) - Replica Set - 3 nodesWith this query:The result is:",
"username": "Erlend_Baerland"
},
{
"code": "deleted: {$eq:null}\ndeleted: {$type:10}\n",
"text": "I think I’m running into this: https://jira.mongodb.org/browse/SERVER-18861If I remove the null check from the match, I get a COUNT_SCAN and executionTimeMillis is down to 420ms.I need to filter out documents where the “deleted” field has a value, so how can I rewrite this and still get the COUNT_SCAN?I’ve tried:and:… but it’s still not using a covered query",
"username": "Erlend_Baerland"
},
{
"code": "\"stage\" : \"PROJECTION_COVERED\"{ accountId: 1, deleted: 1 }db.test.aggregate( [\n { \n $match: { accountId: 12 } \n },\n { \n $group: { \n _id: null, \n n: { \n $sum: { \n $cond: [ { $eq: [ \"$deleted\", null ] }, 1, 0 ] \n } \n } \n } \n }\n] )\n{\n \"stages\" : [\n {\n \"$cursor\" : {\n \"query\" : {\n \"accountId\" : 12\n },\n \"fields\" : {\n \"deleted\" : 1,\n \"_id\" : 0\n },\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"text.test\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"accountId\" : {\n \"$eq\" : 12\n }\n },\n \"queryHash\" : \"DED4FE97\",\n \"planCacheKey\" : \"C9C471AD\",\n \"winningPlan\" : {\n \"stage\" : \"PROJECTION_COVERED\",\n \"transformBy\" : {\n \"deleted\" : 1,\n \"_id\" : 0\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"accountId\" : 1,\n \"deleted\" : 1\n },\n \"indexName\" : \"accountId_1_deleted_1\",\n \"isMultiKey\" : false,\n ...\n}",
"text": "With this modified query the explain output showed no FETCH stage and the query plan has a \"stage\" : \"PROJECTION_COVERED\".Note that I created an index : { accountId: 1, deleted: 1 }. I am using MongoDB Enterprise version 4.2.3 on a local PC.The query plan (part of it):",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks!I’m getting a PROJECTION stage (not a PROJECTION_COVERED), but my query now runs in under a sec. That’s a huge improvement (from 30+ secs)For this to work with the c# driver, I guess I need to construct the aggregate pipeline manually instead of using the countDocumentsAsync method.Anyway, thanks a lot for your suggestion.",
"username": "Erlend_Baerland"
}
] | How to optimize a count query | 2020-03-03T13:38:31.842Z | How to optimize a count query | 12,148 |
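One more hedged variation on the fixes discussed in this thread: the slow plan came from a wide multikey index, so a narrow { accountId, deleted } index (created below if it does not already exist) can be hinted directly from countDocuments, keeping the examined keys proportional to the account's documents instead of every tag entry. Field values mirror the thread; adjust to taste.

```javascript
// Narrow, non-multikey index for the count predicate.
db.contacts.createIndex({ accountId: 1, deleted: 1 })

// Count with an explicit hint so the planner cannot pick the wide tagGroups index.
db.contacts.countDocuments(
  { accountId: 158, deleted: null },
  { hint: { accountId: 1, deleted: 1 } }
)

// Sanity-check the plan and the keysExamined/docsExamined numbers.
db.contacts.explain("executionStats").count({ accountId: 158, deleted: null })
```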
null | [
"kubernetes-operator",
"licensing"
] | [
{
"code": "",
"text": "Hi,The Ops Manager is asking to use MongoDB enterprise version(probably 4.2.3-ent) to use the Continous Backup feature to the S3 store. Right now, I am using 4.2.3 version.\nI wanted to know if a license will be required to use the enterprise version in the development server.\nI am trying to deploy MongoDB and Ops Manager in Kubernetes using MongoDB Operator.Thanks\nPiyush",
"username": "Piyush_Kumar"
},
{
"code": "",
"text": "@Piyush_Kumar MongoDB Ops Manager, MongoDB Enterprise Server, and the MongoDB Enterprise Kubernetes Operator are all subject to the same Customer Agreement.You can download trial versions for evaluation and development. Please see the customer agreement for specific terms and conditions.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I didn’t understand.\nBasically I want to use the 4.2.3-ent version in the development server of MongoDB Kubernetes Operator. In Ops Manager, it says to run MongoDB on the development server, I don’t need to purchase a license. But I am not sure about ‘-ent’ version. Do I need to purchase a license for that?Thanks",
"username": "Piyush_Kumar"
},
{
"code": "",
"text": "Do I need to purchase a license for that?@Piyush_Kumar As per my earlier reply, please review the Customer Agreement for applicable licensing terms for MongoDB Enterprise software.Usage in accordance with the terms of the “Free Evaluation and Development” section of the Customer Agreement does not require licenses. Usage for any other purpose, including testing, quality assurance, or production purposes requires purchasing appropriate licenses.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using Enterprise in development server of MongoDB Kubernetes Operator | 2020-03-11T18:49:23.445Z | Using Enterprise in development server of MongoDB Kubernetes Operator | 3,711 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi all,I’m new to MongoDB and I have a question about Replica sets. At the moment, I have 2 sites and 4 mongodb databases in replica set; one primary, one secondary in site A and one secondary and one observer in site B.For reasons, I have to break the comms between the sites and turn the secondary in site B to act as a primary - write data to it. I will have to rejoin the sites and re-establish the replica set later.My question is if I remove a member (secondary in site 2) from the replica set, break the comms between site A and B and then write data to the secondary database in site B, then re-add the member to the replica set to be replicated from the primary database in site A, would there be any data corruption?Thanks in advance",
"username": "Oscar_A"
},
{
"code": "",
"text": "Welcome to the community @Oscar_A!My question is if I remove a member (secondary in site 2) from the replica set, break the comms between site A and B and then write data to the secondary database in site B, then re-add the member to the replica set to be replicated from the primary database in site A, would there be any data corruption?All members of a replica set share a common write history.You can remove a member from a replica set and use that in standalone mode or as the seed for another replica set, but that creates a divergent write history. You cannot re-add that member to the original replica set without re-syncing or rolling back the writes (which would undo your goal of writing data to this member).If you want to merge data back into a single replica set, you will have to backup and restore the relevant data. Merging may be difficult if you have updated the same collections (or documents!) in different deployments.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie,Thanks for your reply.Yes the problem is the primary and one secondary will continue to work in site A, data being written to the primary. Then in site B after the link to site A is broken, data will also be written to the ex-secondary-now-primary database.However, data written to site B primary is of zero importance because I am doing this only for testing purposes for site B.So if I did the above and re-add that primary from site B to the original replica set, would it automatically be synced with the original primary from site A?Thanks",
"username": "Oscar_A"
},
{
"code": "mongodumpmongorestore",
"text": "So if I did the above and re-add that primary from site B to the original replica set, would it automatically be synced with the original primary from site A?No, as per my earlier comment you’ve now created two replica sets that happen to have the same name but have diverged in history. Any independent writes on site B cannot be automatically reconciled with site A. There can only be a single primary (and timeline of write history) for a given replica set.If you re-added members from B to replica set A and they still had an oplog entry in common, the members rejoining as a secondary would attempt to rollback to the common point (before history diverged) to make the members consistent. Documents that are rolled back are exported to BSON for manual reconciliation.If rollback isn’t possible, the members would have to be re-synced.If you want to merge data, you need to identify the changed data and use mongodump and mongorestore to load the relevant data into a single deployment.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks Stennie,\" If you re-added members from B to replica set A and they still had an oplog entry in common, the members rejoining as a secondary would attempt to rollback to the common point (before history diverged) to make the members consistent.\"Does this process occur independently? Or does it need human interaction?“Documents that are rolled back are exported to BSON for manual reconciliation.”I’m confused. which documents? Those in the original PRIMARY node or those in fake PRIMARY node? Are you talking about the documents that are written into the fake PRIMARY node or the ones in original PRIMARY node?\" If rollback isn’t possible, the members would have to be [re-synced]\"Automatically or via human interaction?Thanks",
"username": "Oscar_A"
},
{
"code": "mongodumpmongorestore",
"text": "@Oscar_A To be clear, I would not recommend the approach you are trying to take (splitting and attempting to merge two versions of a replica set). If you want to preserve all writes, manual effort will be required. Replica sets intentionally only allow a single primary and history of changes: breaking comms between members of a replica set does not change that fundamental requirement.Instead of merging members with different replica set histories, I would use mongodump and mongorestore to backup and restore the relevant data into a single replica set (assuming you have some way of identifying which data has been modified).Does this process occur independently? Or does it need human interaction?The rollback process (which returns a replica set member to a consistent state) is attempted automatically, but the recovery of conflicting documents (which get exported as BSON files to a rollback directory) requires human intervention.Rollback is generally not desirable, especially if your application is expecting that data that has been written will not be reverted. For most use cases the goal is to Avoid Replica Set Rollbacks.A simplified example:If the rollback process is successful, you then have to figure out what to do with the BSON files in the rollback data directory You will have the current version of the document in replica set A and the version exported from replicaB to a BSON file, but there may have been conflicting updates in the two different replica sets.Reconciling the rollback files will be a manual process.Are you talking about the documents that are written into the fake PRIMARY node or the ones in original PRIMARY node?I’m referring to the BSON files exported in the rollback directory.\" If rollback isn’t possible, the members would have to be [re-synced]\"Automatically or via human interaction?Re-sync requires human interaction. You need to decide which approach you want to use to Resync a member of a replica set. Since re-sync also involves removing the existing data, you would not want this to happen automatically.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Stennie, again thanks for your reply.I want to take this approach only to test something. Yes I want to re-add the fake Primary node into the original Replica Set and I don’t care about the data being written to it. I don’t have to retain what was written to the fake Primary node.I have also been doing some testing on Google Cloud environment. I have setup 4 VMs, each with 1 mongodb instance running. I have set up a Replica Set with them, using VM1 as the Primary by making it priority 4 and others 0.I isolated VM3 from the network, simulating an unavailable node. My plan was to somehow make it a standalone Primary and write data to it as part of testing. When testing is done, I was going to add it back to the original Replica Set and see if syncs (or manual re-sync it’d doesnt matter) from the original primary node, discarding the data written from the time when it was a Primary. I don’t care what happens to the data written to it when it was out of the Replica Set.Well I was not successful in doing so because I don’t know how to manually make it Primary or writable , assuming there is a way to do this. I’m stuck.",
"username": "Oscar_A"
},
{
"code": "replSetNamedbPath",
"text": "I isolated VM3 from the network, simulating an unavailable node. My plan was to somehow make it a standalone Primary and write data to it as part of testing. When testing is done, I was going to add it back to the original Replica Set and see if syncs (or manual re-sync it’d doesnt matter) from the original primary node, discarding the data written from the time when it was a Primary. I don’t care what happens to the data written to it when it was out of the Replica Set.Are you writing ephemeral data (like a cache)? It is otherwise unusual to want write availability without caring about saving the writes.If you’re fine discarding all writes, the safest path would be to:I would be very cautious messing about with trying to create two versions of the same replica set with different primaries as this may result in unexpected rollbacks or consistency problems.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie,I am very cautious about the unexpected rollbacks and/or consistency issues too which is why I have to do this test in the first place. Due to poor infrastructure planning from the other team in the production environment, this kind of scenario have to happen for a test I am going run. Site 2 needs to be able to continue without the availability of Site 1 but because the mongodb databases from both Site 1 and 2 are comprising of single Replica Set, I need to prove that Site 2 can live on without Site 1 and the only way to do this, at this stage, is to force Site 2 Secondary to Primary.Then of course it needs to be proven that when Site 1 is available again, the Replica Set with both Site 1 and 2 machines will continue on without any adverse effects.",
"username": "Oscar_A"
},
{
"code": "replSetNamedbPath",
"text": "“Are you writing ephemeral data (like a cache)? It is otherwise unusual to want write availability without caring about saving the writes.”No I need to be able to write into a database in the fake Primary. This is to test a feasibility of something…\" Restart the isolated member as a standalone (with the replSetName commented out in the config file).\"In my test environment, I checked in /etc/mongod.conf and there is nothing in #replication section, in Primary and Secondary node config files. I don’t understand because replica set is established and I have been writing data to it.\" * Write your data. Any writes in standalone mode will not be written to the oplog, so this violates consistency with the original replica set and the member should not be directly re-added to the original replica set.I need to be able to write to the existing database in the fake Primary. So if I understand you correctly, I would have to first change the dbPath before running mongod and copy the database into that different dbPath and run the standalone from it?",
"username": "Oscar_A"
},
{
"code": "",
"text": "No I need to be able to write into a database in the fake Primary. This is to test a feasibility of something…Can you elaborate on the scenario you are trying to test? I’m not clear on the goal, since you mention you are OK with discarding any writes to Site 2 (which seems contradictory to maintaining write availability).In a disaster scenario where Site 1 is fully unavailable, you have the manual administrative option of force reconfiguring Site 2 to accept writes.However, this is not intended to allow you to create two versions of the same replica set with different primaries and continuing writing to both.If you want to enable automatic failover from Site 1 to Site 2, your replica set config should have:See: Replica sets distributed across two or more data centres.Then of course it needs to be proven that when Site 1 is available again, the Replica Set with both Site 1 and 2 machines will continue on without any adverse effects.If you allow continued writes to Site 1 and Site 2, you will have a fundamental challenge on how to reconcile writes when both sites are available (with options as described earlier in this thread).Regards,\nStennie",
"username": "Stennie_X"
},
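For reference, a hedged sketch of the manual force-reconfiguration option mentioned above, run from the mongo shell on a reachable Site 2 member. The assumption that the Site 2 members sit at array positions 2 and 3 is a placeholder, and a forced reconfig should only be used when a majority of the original set is genuinely unavailable:

```
// Inspect the current replica set configuration
cfg = rs.conf()

// Keep only the reachable Site 2 members (assumed here to be at positions 2 and 3)
cfg.members = [cfg.members[2], cfg.members[3]]

// Force the new configuration; one of the remaining members can then become primary
rs.reconfig(cfg, { force: true })
rs.status()
```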
{
"code": "",
"text": "Even if you manage to force reconfigure your B site is it worth the mess you then have to sort out when site A comes back?I would write this up as a TL;DR to the powers that be.We shouldn’t do this, it is a really bad idea. We should build this properly and add a third site for automatic HA/Recovery.If we are forced to do this there is a high probability of prolonged downtime and/or data loss.There is enough documentation and white papers to back this up.",
"username": "chris"
},
{
"code": "",
"text": "I have just ran a test in my google cloud environment.4 VMs, running one mongodb instances each. I have a replica set of thes 4 mongodb instances. VM1 is currently PRIMARY node. and VM2, 3 and 4 are SECONDARY. The replica set REPSET01 has one collection called RepCollection which I wrote test documents to it. the documents are replicated across all nodes.I isolated VM3 by adding some iptables rules on it’s own iptables to drop all traffic from VM1,2 and 4. Then I wrote data to the RepCollection via VM1 PRIMARY> node. I checked the RepCollection on VM2 and VM4, the data is instantly replicated.Then on VM3 I restarted mongod without -replSet option. I connected to itself and selected the RepCollection (I believe it became standalone as the cursor became > instead of SECONDARY>). I queried the contents db.RepCollection.find().pretty() and I saw all the documents replicated before it lost comms to the replica set exist, but not the data written to the RepCollection after I isolated it which makes sense.I inserted a document to the RepCollection of isolated VM3, checked it and it is there. Then I stopped mongod service. Then I stopped the iptables service wiping the temporary rules so VM3 has regained comms with the other VMs and thus the replica set. I started mongo with mongod --replSet “REPSET01” -f /etc/mongod.confI connected to it and saw it became SECONDARY> again. Then I selected the RepCollection and queried the contents. As I expected I saw the data written to the RepCollection via VM1 after VM3 was isolated, is now replicated across VM3 node BUT to my surprise, VM3’s RepCollection ALSO retained the document written to it while in isolation state. So as far as I can understand, doing this test definitely creates dependencies. Is there a way to manually force replicate VM3 node to be the same as every other node in the Replica Set?",
"username": "Oscar_A"
},
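For anyone reproducing this test, a rough sketch of the isolation step described above; the peer IP addresses are placeholders for VM1, VM2 and VM4, and the rules are applied on VM3:

```
# Drop all traffic to and from the other replica set members so VM3 loses contact
sudo iptables -A INPUT  -s 10.0.0.1 -j DROP && sudo iptables -A OUTPUT -d 10.0.0.1 -j DROP
sudo iptables -A INPUT  -s 10.0.0.2 -j DROP && sudo iptables -A OUTPUT -d 10.0.0.2 -j DROP
sudo iptables -A INPUT  -s 10.0.0.4 -j DROP && sudo iptables -A OUTPUT -d 10.0.0.4 -j DROP

# Flushing the rules later restores connectivity to the replica set
sudo iptables -F
```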
{
"code": "",
"text": "@Stennie_XTo elaborate, I have site 1 and site 2. Site 1 has 2 VMs running one mongodb instance each. Site 2 has 2 VMs running one mongodb instance each. VM1, VM2 (Site1) and VM3, VM4 (Site 2) make up the Replica Set. Site 1 is running an application suite, lets call it QueryApp1. QueryApp1 is live and it is reading and writing data to the PRIMARY node of the Replica Set which is on VM1. Site 2 also has identical application suite, lets call it QueryApp2 which is NOT live at the moment. Site 2’s QueryApp2 has not undergone a complete test and I need to do it now.For reasons beyond my own imagination, someone has decided to go live with QueryApp1, using the replica set. Now I have to test the QueryApp2 and I cannot write anything to the PRIMARY but QueryApp2 still need a PRIMARY to read and write data to it as part of the testing. What I am trying to do is do a full functionality test on QueryApp2 without touching the actual PRIMARY.So because the read/write activity on QueryApp2 is only for testing, the written data can be discarded when testing is completed which is why I mentioned it does not need to be kept.I understand what I have tried to do, with asking all these stupid questions to you guys, is NOT advisable but I still need to find a way to test the QueryApp2. Is there an alternative wary to achieve this?",
"username": "Oscar_A"
},
{
"code": "",
"text": "@Stennie_X and @Stennie_XI manually resynced the VM3 by following the mongodb manual: stop mongodb instance on VM3, delete everything in the dbPath and start mongodb instance with replSet option.This not only re-added the VM3 as a SECONDARY member into the Replica Set but also synced consistent data across all nodes.This may or may not work in my environment due to the sizes of the databases - it may take a while. Your thoughts?",
"username": "Oscar_A"
},
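A sketch of the manual initial-sync procedure described in the post above; the service name, config path and dbPath are assumptions, and the wipe is destructive, so it should only be run on the member being resynced:

```
# On the member to resync (VM3 in this test):
sudo systemctl stop mongod
sudo rm -rf /var/lib/mongodb/*     # empty the dbPath so the member starts with no data

# Restart with the replica set configuration still in place; an empty dbPath makes the
# member perform a full initial sync from another member and rejoin as SECONDARY.
sudo systemctl start mongod        # or: mongod --replSet "REPSET01" -f /etc/mongod.conf
```

As noted, initial sync copies all data over the network, so the time it takes grows with database size and available bandwidth.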
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Removing and re-adding a replica set member | 2020-03-12T04:14:14.171Z | Removing and re-adding a replica set member | 7,674 |
null | [
"charts",
"on-premises"
] | [
{
"code": "# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\nsystemd-r 550 systemd-resolve 13u IPv4 28466 0t0 TCP 127.0.0.53:53 (LISTEN)\ndockerd 24044 root 22u IPv6 2259007 0t0 TCP *:2377 (LISTEN)\ndockerd 24044 root 27u IPv6 2257392 0t0 TCP *:7946 (LISTEN)\ncupsd 105733 root 6u IPv6 2992175 0t0 TCP [::1]:631 (LISTEN)\ncupsd 105733 root 7u IPv4 2992176 0t0 TCP 127.0.0.1:631 (LISTEN)\nmongod 115722 mongodb 11u IPv4 3150950 0t0 TCP 127.0.0.1:27017 (LISTEN)\nNETWORK ID NAME DRIVER SCOPE\n792044a81858 bridge bridge local\n8799e71f7cb3 docker_gwbridge bridge local\n6dbede781330 host host local\nq2q7irgd4op5 ingress overlay swarm\nef873a4216de none null local\n[\n {\n \"Name\": \"bridge\",\n \"Id\": \"792044a818585bc4d0793324a9f7e28861f1008e26bb8f70e1b18263be3d1d7f\",\n \"Created\": \"2020-03-13T11:31:47.291514454-07:00\",\n \"Scope\": \"local\",\n \"Driver\": \"bridge\",\n \"EnableIPv6\": false,\n \"IPAM\": {\n \"Driver\": \"default\",\n \"Options\": null,\n \"Config\": [\n {\n \"Subnet\": \"172.17.0.0/16\"\n }\n ]\n },\n \"Internal\": false,\n \"Attachable\": false,\n \"Ingress\": false,\n \"ConfigFrom\": {\n \"Network\": \"\"\n },\n \"ConfigOnly\": false,\n \"Containers\": {},\n \"Options\": {\n \"com.docker.network.bridge.default_bridge\": \"true\",\n \"com.docker.network.bridge.enable_icc\": \"true\",\n \"com.docker.network.bridge.enable_ip_masquerade\": \"true\",\n \"com.docker.network.bridge.host_binding_ipv4\": \"0.0.0.0\",\n \"com.docker.network.bridge.name\": \"docker0\",\n \"com.docker.network.driver.mtu\": \"1500\"\n },\n \"Labels\": {}\n }\n]\n[\n {\n \"Name\": \"docker_gwbridge\",\n \"Id\": \"8799e71f7cb3392f704cfaaaf4c84ec1609240df2fe3cd820d9f87c52cff54d7\",\n \"Created\": \"2020-03-13T11:58:56.682955942-07:00\",\n \"Scope\": \"local\",\n \"Driver\": \"bridge\",\n \"EnableIPv6\": false,\n \"IPAM\": {\n \"Driver\": \"default\",\n \"Options\": null,\n \"Config\": [\n {\n \"Subnet\": \"172.18.0.0/16\",\n \"Gateway\": \"172.18.0.1\"\n }\n ]\n },\n \"Internal\": false,\n \"Attachable\": false,\n \"Ingress\": false,\n \"ConfigFrom\": {\n \"Network\": \"\"\n },\n \"ConfigOnly\": false,\n \"Containers\": {\n \"ingress-sbox\": {\n \"Name\": \"gateway_ingress-sbox\",\n \"EndpointID\": \"7776f228c8279348ebbeb331e31f055cc955f2dde72f9cca89ce47ae76028f2e\",\n \"MacAddress\": \"02:42:ac:12:00:02\",\n \"IPv4Address\": \"172.18.0.2/16\",\n \"IPv6Address\": \"\"\n }\n },\n \"Options\": {\n \"com.docker.network.bridge.enable_icc\": \"false\",\n \"com.docker.network.bridge.enable_ip_masquerade\": \"true\",\n \"com.docker.network.bridge.name\": \"docker_gwbridge\"\n },\n \"Labels\": {}\n }\n]\n|2|0.706409096|02:42:ac:11:00:02|Broadcast|ARP|42|Who has 172.17.0.1? Tell 172.17.0.2|\n|---|---|---|---|---|---|---|\n|3|0.706484847|02:42:a3:d8:70:b4|02:42:ac:11:00:02|ARP|42|172.17.0.1 is at 02:42:a3:d8:70:b4|\n|4|0.706492056|172.17.0.2|172.17.0.1|TCP|74|37812 → 27017 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1 TSval=1798269233 TSecr=0 WS=128|\n|5|0.706535508|172.17.0.1|172.17.0.2|TCP|54|27017 → 37812 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0|\n|6|0.708572052|172.17.0.2|172.17.0.1|TCP|74|37814 → 27017 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1 TSval=1798269236 TSecr=0 WS=128|\n|7|0.708603281|172.17.0.1|172.17.0.2|TCP|54|27017 → 37814 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0|\n|8|0.717341783|172.17.0.2|172.17.0.1|ICMP|98|Echo (ping) request id=0x000f, seq=1/256, ttl=64 (reply in 9)|\n|9|0.717389764|172.17.0.1|172.17.0.2|ICMP|98|Echo (ping) reply id=0x000f, seq=1/256, ttl=64 (request in 8)|\n",
"text": "I am new to MongoDB, docker stuff.\nI try to install Mongo Charts, but cannot pass test connections step.\nsudo docker run --rm Quay charts-cli test-connection mongodb://172.17.0.1I get “MongoNetworkError: connect ECONNREFUSED 172.17.0.1”.\nI don’t know what else to check to make this work.I am using Ubuntu and I have MongoDB installed locally. I can connect using RoboT3, compass etc.This is my mongod.confI can see the 27017 is listening modesudo docker network lssudo docker network inspect bridgesudo docker network inspect docker_gwbridgeWireshark capture shows that connection is being reset",
"username": "Piwo_Lexum"
},
{
"code": "#bindIp: 127.0.0.1\nsudo docker run --rm quay.io/mongodb/charts:19.12.1 charts-cli test-connection mongodb://127.0.0.1\n",
"text": "If i am not wrong; you are trying to install the charts on a another server.if it is the case;goto mongod.conf and comment out (put # infront of it) the lineHowever, you are trying to install the docker on the mongodb server, then use localhost ip (127.0.0.1) instead of public ip",
"username": "coderkid"
},
{
"code": "",
"text": "I figured out after I posted.\nI changed the bindIp: 127.0.0.1 to bindIp: 0.0.0.0",
"username": "Piwo_Lexum"
},
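For completeness, a sketch of the change that resolved this, plus the retest command; the image tag is taken from the earlier reply, and 172.17.0.1 is the host's address on the default Docker bridge. Note that bindIp: 0.0.0.0 exposes mongod on all interfaces, so enabling access control and/or firewalling is strongly recommended; binding to 127.0.0.1,172.17.0.1 is a narrower alternative.

```
# /etc/mongod.conf
net:
  port: 27017
  bindIp: 0.0.0.0        # or, more narrowly: 127.0.0.1,172.17.0.1
```

```
sudo systemctl restart mongod
sudo docker run --rm quay.io/mongodb/charts:19.12.1 charts-cli test-connection mongodb://172.17.0.1
```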
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDb Charts Installation | 2020-03-15T19:35:35.521Z | MongoDb Charts Installation | 2,859 |
null | [
"dot-net",
"legacy-realm-cloud"
] | [
{
"code": "Notifier.StartAsync using (var notifier = await Notifier.StartAsync(config))\npublic class Program\n{\n public static void Main(string[] args) => MainAsync().Wait();\n\n public static async Task MainAsync()\n {\n\n // Login the admin user\n var credentials = Credentials.UsernamePassword(Constants.RealmUsername, Constants.RealmPassword, createUser: false);\n var admin = await User.LoginAsync(credentials, new Uri($\"https://{Constants.RealmUrl}\"));\n\n var config = new NotifierConfiguration(admin)\n {\n // Add all handlers that this notifier will invoke\n WorkingDirectory = Path.Combine(Directory.GetCurrentDirectory(), Constants.NotifierDirectory),\n Handlers = { new NotesHandler() }\n };\n config.WorkingDirectory = config.WorkingDirectory + \"/\" + RandomString(10);\n\n // Start the notifier. Your handlers will be invoked for as\n // long as the notifier is not disposed.\n using (var notifier = await Notifier.StartAsync(config))\n {\n do\n {\n Console.WriteLine(\"Type in 'exit' to quit the app.\");\n }\n while (Console.ReadLine() != \"exit\");\n }\n }\n\n public static string RandomString(int length)\n {\n Random random = new Random();\n const string chars = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\";\n return new string(Enumerable.Repeat(chars, length)\n .Select(s => s[random.Next(s.Length)]).ToArray());\n }\n\n class NotesHandler : RegexNotificationHandler\n {\n // The regular expression you provide restricts the observed Realm files\n // to only the subset you are actually interested in. This is done to \n // avoid the cost of computing the fine-grained change set if it's not\n // necessary.\n public NotesHandler() : base($\"^/{Constants.NotesRealm}$\")\n {\n }\n\n // The HandleChangeAsync method is called for every observed Realm file \n // whenever it has changes. It is called with a change event which contains \n // a version of the Realm from before and after the change, as well as\n // collections of all objects which were added, deleted, or modified in this change\n public override async Task HandleChangeAsync(IChangeDetails details)\n {\n if (details.Changes.TryGetValue(\"Note\", out var changeSetDetails) &&\n changeSetDetails.Insertions.Length > 0)\n {\n try\n {\n var notes = changeSetDetails.Insertions\n .Select(i => i.CurrentObject)\n .Select(o => (string)(o.Title + Environment.NewLine + o.Description))\n .ToArray();\n\n if (notes.Length == 0)\n {\n return;\n }\n\n }\n catch (Exception ex)\n {\n Console.WriteLine(ex.Message);\n Console.WriteLine(ex.StackTrace);\n }\n }\n }\n }\n}",
"text": "I’ve written a .NET server event handler. The problem is that when my code hits the Notifier.StartAsync line:the program just exits and stops - it never goes into a state where it’s waiting for changes. What am I doing wrong?This is my code:",
"username": "Phil_Seeman"
},
{
"code": "",
"text": "we have the same problem on Windows .Net Core 3.1",
"username": "Thomas_Kison"
},
{
"code": "",
"text": "Same issue here on Mac with .Net Core 3.1 and the 4.2.0 SDK.",
"username": "Jason_Whetton"
}
] | Can't get server Event Handling to work | 2020-03-07T22:16:42.254Z | Can’t get server Event Handling to work | 2,239 |
null | [
"replication",
"performance"
] | [
{
"code": "",
"text": "Dear Team,\nWe mongodb setup in our production environment. Two mongodbs are running in primary-secondary mode in the LAN with 1gbps connectivity. But there seems to be always some delay in the sync between the Primary and Secondary.We have verified the server load, Memory usage and iostat. Everything seems to be fine. But still the issue persists. Please guide on this issue.",
"username": "Saravanan_N"
},
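Before tuning anything, it may help to measure the actual replication lag and the oplog window; the following mongo shell helpers exist in MongoDB 3.4 (this is only a diagnostic sketch, not a fix):

```
// How far each secondary is behind the primary
rs.printSlaveReplicationInfo()

// Oplog size and time window on the current member
rs.printReplicationInfo()

// Per-member state and last applied optime
rs.status().members.forEach(function (m) {
    printjson({ name: m.name, state: m.stateStr, optimeDate: m.optimeDate });
});
```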
{
"code": "",
"text": "if you are 100% sure, it has nothing to do with network or server; Did you try enable background indexing?MongoDB, like other databases, supports indexes. Building an index for an existing collection might be very resource intensive and…\nReading time: 2 min read\n",
"username": "coderkid"
}
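To illustrate the suggestion above: on MongoDB 3.4 a foreground index build blocks the database while it runs, which can surface as apparent sync delay, and building in the background avoids that. The collection and field names below are placeholders:

```
// Build the index in the background so normal reads/writes are not blocked
db.myCollection.createIndex({ myField: 1 }, { background: true })
```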
] | MongoDB Version 3.4.17 sync is very slow in the LAN connectivity of 1gbps | 2020-03-11T18:49:04.438Z | MongoDB Version 3.4.17 sync is very slow in the LAN connectivity of 1gbps | 1,642 |
null | [] | [
{
"code": "",
"text": "I’m now in Chapter 3 but am posting here because it has to do with Compass connection. My favorites links have been working but today trying to connect to the class’s Atlas cluster to answer a quiz question, I got the error message – An error occurred while loading navigation: ‘not master and slaveOk=false’: It is recommended to change your read preference in the connection dialog to Primary Preferred or Secondary Preferred or provide a replica set name for a full topology connection.I’ll try to go forward using a shell connection instead but really want to get the Compass connection working again. No idea how to do that.",
"username": "Margaret_38982"
},
{
"code": "mongodb://m001-student:[email protected]:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?authSource=admin&replicaSet=Cluster0-shard-0&readPreference=primarymongodb+srv://m001-student:[email protected]/test?authSource=admin&readPreference=primary",
"text": "Use one of these two connection strings in Compass and you’ll be fine going forward:mongodb://m001-student:[email protected]:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?authSource=admin&replicaSet=Cluster0-shard-0&readPreference=primarymongodb+srv://m001-student:[email protected]/test?authSource=admin&readPreference=primary",
"username": "007_jb"
},
{
"code": "",
"text": "Thanks!Oddly, my favorites connection sprang to life and worked again!But if the failure reoccurs I’ll know what to do.Again - many thanks.",
"username": "Margaret_38982"
},
{
"code": "",
"text": "There’s a difference between what you currently have (which can cause the error during a Primary node election) and what I shared above. Suggest changing your connection string or you’ll get the same error message in the future.",
"username": "007_jb"
},
{
"code": "",
"text": "Thanks for the heads up. I may just have to go back to Chapter 1 and start from scratch connecting first to the class’s Atlas database, then to my own, uploaded database in order to get new connection strings.",
"username": "Margaret_38982"
},
{
"code": "jxeqq",
"text": "The connection string above is for the Class (public) cluster.\nYou can re-use the same connection string for your own Sandbox (private) cluster by changing the jxeqq part of the node(s) accordingly. Or get it directly from Atlas, no need to start from scratch.",
"username": "007_jb"
},
{
"code": "",
"text": "Hi @Margaret_38982,In addition to @007_jb,Most likely your are directly connecting to a node in the cluster. Nodes can change their state form Primary to secondary and vice-a-versa from time to time due to various reasons.So, you are able to connect fine when the node is in Primary state but when it switches to secondary state you get an error message.Please make the changes recommended by @007_jb in the above post.~ Shubham",
"username": "Shubham_Ranjan"
},
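As the error message itself suggests, another option is a more tolerant read preference in the connection string, so Compass can keep reading while the node it reaches is not primary. A hedged example based on the class cluster string shared above (only readPreference is changed):

```
mongodb+srv://m001-student:[email protected]/test?authSource=admin&readPreference=primaryPreferred
```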
{
"code": "",
"text": "",
"username": "system"
}
] | Compass Error Message Trying to Connect via Favorites | 2020-03-14T19:58:55.840Z | Compass Error Message Trying to Connect via Favorites | 1,138 |
null | [] | [
{
"code": "",
"text": "No laptop, all of my work is done on mobile. I found an app called MongoLime (MongoLime - manage databases on the App Store) that appears to do something similar, but I don’t want to drop the $12 in the case that it won’t get me through this course. I’m fairly familiar with Atlas, and I wonder if I can forego using Compass altogether? Any help will be appreciated, thank you.",
"username": "Joshua_95915"
},
{
"code": "",
"text": "Unfortunately you can’t forego Compass in favour of Atlas, they’re two different things. Plus, Compass is not the only requirement for this course.This course requires connections to clusters/servers that you created as well as the public cluster created and managed by MongoDB University. Besides that, the free tier Atlas UI lacks some features that Compass Stable (Enterprise) edition has.You’re trying to learn about a database (server and client), you really do need a machine as per the requirements:\n",
"username": "007_jb"
},
{
"code": "",
"text": "Hi @Joshua_95915,Yes, please go through the system requirement mentioned on the about page of the course.You’re trying to learn about a database (server and client), you really do need a machine as per the requirements:\nIn addition to this, you will be needing Compass in order to complete some of the assignments given in this course.Thanks,\nShubham Ranjan\nCurriculum Services Engineer",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Compass alternative for iPad? | 2020-03-14T20:46:36.063Z | Compass alternative for iPad? | 4,163 |