image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [] | [
{
"code": "db.col.find({ a: 1 })",
"text": "We post a lot of JSON on this forum. It would be nice to have that JSON formatted better with pretty colors. It would also be nice to have shell commands (e.g. db.col.find({ a: 1 })) highlighted and colored in a user friendly way.",
"username": "Justin"
},
{
"code": "db.inventory.insertMany([\n { item: \"journal\", qty: 25, status: \"A\", size: { h: 14, w: 21, uom: \"cm\" }, tags: [ \"blank\", \"red\" ] },\n { item: \"notebook\", qty: 50, status: \"A\", size: { h: 8.5, w: 11, uom: \"in\" }, tags: [ \"red\", \"blank\" ] },\n { item: \"paper\", qty: 10, status: \"D\", size: { h: 8.5, w: 11, uom: \"in\" }, tags: [ \"red\", \"blank\", \"plain\" ] },\n { item: \"planner\", qty: 0, status: \"D\", size: { h: 22.85, w: 30, uom: \"cm\" }, tags: [ \"blank\", \"red\" ] },\n { item: \"postcard\", qty: 45, status: \"A\", size: { h: 10, w: 15.25, uom: \"cm\" }, tags: [ \"blue\" ] }\n]);\njsonjavascriptyamljavascriptdb.inventory.insertMany([\n { item: \"journal\", qty: 25, status: \"A\", size: { h: 14, w: 21, uom: \"cm\" }, tags: [ \"blank\", \"red\" ] },\n { item: \"notebook\", qty: 50, status: \"A\", size: { h: 8.5, w: 11, uom: \"in\" }, tags: [ \"red\", \"blank\" ] },\n { item: \"paper\", qty: 10, status: \"D\", size: { h: 8.5, w: 11, uom: \"in\" }, tags: [ \"red\", \"blank\", \"plain\" ] },\n { item: \"planner\", qty: 0, status: \"D\", size: { h: 22.85, w: 30, uom: \"cm\" }, tags: [ \"blank\", \"red\" ] },\n { item: \"postcard\", qty: 45, status: \"A\", size: { h: 10, w: 15.25, uom: \"cm\" }, tags: [ \"blue\" ] }\n]);\nmongojavascript",
"text": "Hi Justin,You can use fenced code blocks similar to Github markdown for improved formatting including syntax highlighting.Place triple backticks (```) before and after a block of text to style it using automatic language detection.Borrowing a code sample from the MongoDB manual:If auto-detection doesn’t work well for a code snippet (or you want to be more declarative), you can provide a hint on a language formatting to use by adding the formatter (json, javascript, yaml, … ) after the opening backticks.For example, javascript formatting adds highlighting for numeric values to this code sample:There currently isn’t a specific formatter for mongo shell code (which isn’t strictly JavaScript in interpretive mode), but the javascript formatter should handle this OK.Most of the formatters listed for Highlight.js should be supported.Since our forum discussion involves many different programming languages and formats, the default is auto-detection rather than JavaScript.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Aha! That makes sense.Thanks for the detailed reply!",
"username": "Justin"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Format and colorize JSON | 2020-04-11T00:09:04.825Z | Format and colorize JSON | 6,356 |
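The pattern Stennie describes, condensed into one self-contained example; the query inside the fence is a generic placeholder rather than anything from this thread:

````markdown
```javascript
db.inventory.find({ status: "A" })
```
````

Omitting the `javascript` hint after the opening backticks falls back to Highlight.js auto-detection, which is the forum default mentioned above.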
null | [
"c-driver"
] | [
{
"code": "",
"text": "Hello,I am (still) beginner with mongodb, and starting out on my first attempt at a trivial C-language experiment with mongoc driver and related libraries. I am working in Ubuntu 18.04 64-bit environment. MongoDB is installed locally (version 4.2.5) and has been working well so far (as a runtime package accessed with mongodb shell and related utilities).My problem now is that I cannot find mongoc.h anywhere even after installing what I think are the required packages using apt:libmongoc-1.0\nlibmongoc-dev\nlibbson-1.0I do not wish (at this point) to use cmake or pkg-config. As my first step, I wish only to use #include <mongoc/mongoc.h> in my source file (which is trivial) and link libmongoc-1.0 on the gcc command line. Searches on stackexchange all tell me to use cmake or pkg-config (I am trying minimize the number of moving parts until I am confident that the required files are even installed.)apt install tells me:\nlibmongoc-1.0-0 is already the newest version (1.9.2+dfsg-1build1).\nlibmongoc-dev is already the newest version (1.9.2+dfsg-1build1).\nlibbson-1.0-0 is already the newest version (1.9.2-1).So far, I cannot complete gcc for a C program which has #include <mongoc/mongoc.h>, as that file is not found. I cannot find a file called mongoc.h anywhere in my local file system. I cannot find a directory called mongoc anywhere (I’d expect it to be in /usr somewhere.)How can I proceed?Thank you very much!Dave",
"username": "Dave"
},
{
"code": "",
"text": "OK sorry burn the bandwidth - I found solution (but couldn’t immediately correct my post because of moderation lag.)The include files were stored in /usr/include; the -I parameters to gcc were pointing to /usr/local/include. Probably caused by following two separate versioned sets of instructions (one to install the development files, one to do the compile/link.)Once I got both to agree, I was able to compile/run the example from mongodb documentation.Sorry again for the bandwidth!Dave",
"username": "Dave"
},
{
"code": "/usr/include/usr/include/libbson-1.0/usr/include/libmongoc-1.0CFLAGS",
"text": "A bit of additional detail for those who might find this. Specific issue is that the headers are located in a subdirectory of /usr/include. Specifically, the subdirectories are /usr/include/libbson-1.0 and /usr/include/libmongoc-1.0. If you choose not use pkg-config or CMake to build against the C Driver, you will need to ensure that your CFLAGS contains “`-I/usr/include/libbson-1.0 -I/usr/include/libmongoc-1.0” so that the headers can be found.",
"username": "Roberto_Sanchez"
}
] | Beginner C question - can't find mongoc.h anywhere | 2020-04-12T20:33:55.543Z | Beginner C question - can’t find mongoc.h anywhere | 3,850 |
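Putting Roberto's advice into a complete compile line; the source and output file names here are hypothetical, while the include paths and library names are the ones from the thread:

```sh
# Headers live under versioned subdirectories of /usr/include on this setup,
# so both -I flags are needed when pkg-config/CMake are not used.
gcc -o hello hello.c \
    -I/usr/include/libbson-1.0 \
    -I/usr/include/libmongoc-1.0 \
    -lmongoc-1.0 -lbson-1.0
```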
null | [
"java",
"beta"
] | [
{
"code": "",
"text": "The 4.1.0-beta1 MongoDB Java & JVM Drivers has been released, with support for the upcoming release of MongoDB 4.4.The documentation hub includes extensive documentation of the 4.1 driver, includingand much more.You can find a full list of bug fixes here.You can find a full list of improvements here.You can find a full list of new features here.https://mongodb.github.io/mongo-java-driver/4.1/apidocs/ ",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 4.1.0-beta1 Released | 2020-04-13T13:07:12.400Z | MongoDB Java Driver 4.1.0-beta1 Released | 3,482 |
null | [] | [
{
"code": "",
"text": "Our 1st virtual meetup just happened. Thanks to everyone that attended. Here are the links to the slides and especially Sunil’s different initiatives:Keep the conversation going. If you have any questions around the topics or on the user group itself just reply to this thread.\nAlso: Please let us know if you have a project or a topic that you’d like to present at an upcoming virtual user group.",
"username": "Sven_Peters"
},
{
"code": "",
"text": "Thanks for posting the slide deck @Sven_Peters! I unfortunately wasn’t able to make today’s chat.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hello @Sven_Peters\nthanks for the presentation, it was a very good display of the do’s and don’ts. As a independent consultant, working mostly remote, it was great to see how others do and also to see that the best practices seem to end all on the same path.\nThe format was very well done! One remark on the audio, it was lacking when the “speaker” moved to the left or right. But this is just in the upper 95% range, and most likely due to the Laptop mic.\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Thanks for the insight into the remote work practices presented in an user friendly way. I was at the meetup, yesterday.I had attended few freelancing meetups locally (I live in Bangalore, S.India) that promoted the idea of working from home and from co-working spaces. Working totally from home for 5+ years is cool. The best aspect is (I think) being close to your spouse and children .This concept was catching up right from the early 2000’s where corporations in USA were starting to have their project managers (software industry) work from home offices; I think in an effort to save resources like office space and commuting. Also the technology provided the means (better communications, conferencing, etc) and with newer management techniques.No matter what, remote working is the way of future!",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "…and here’s the recording of the user group meeting. Enjoy: - YouTube",
"username": "Sven_Peters"
},
{
"code": "",
"text": "Last evening I ran into this article on nytimes; it has some useful tips on the equipment for folks working from home: My Long, Unending Journey to Find Perfect Office Equipment",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | 1st Virtual User Group - Slides & Links | 2020-04-02T16:36:43.289Z | 1st Virtual User Group - Slides & Links | 3,803 |
null | [
"stitch"
] | [
{
"code": "created_atowner_id",
"text": "Is it possible to disable queryAnywhere and make queries from functions and webhooks only? I don’t want to let users make queries as they please.If the answer is no, how do I dictate some field values? Eg: fields like created_at owner_id should be set from the backend only.I’d be glad if anyone has a tutorial for proper CRUD rules with.",
"username": "Mehedi_Nahid"
},
{
"code": "",
"text": "Have you found a solution?",
"username": "Corona"
},
{
"code": "",
"text": "Ended up making database collections read, write, insert, delete protected and handling db queries from private system functions.",
"username": "Mehedi_Nahid"
}
] | Disable front-end query or protect some fields | 2020-04-04T10:56:47.352Z | Disable front-end query or protect some fields | 2,190 |
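A sketch of the pattern Mehedi ended up with: lock the collection rules down and route writes through a server-side function that stamps the protected fields. The service, database, and collection names here are assumptions; context.services and context.user are the standard Stitch function API:

```javascript
// Stitch system function: clients call this instead of inserting directly,
// so created_at and owner_id are always set on the backend.
exports = function(doc) {
  const coll = context.services
    .get("mongodb-atlas")           // linked cluster name (assumed default)
    .db("app")                      // hypothetical database
    .collection("items");           // hypothetical collection

  doc.created_at = new Date();      // server-controlled timestamp
  doc.owner_id = context.user.id;   // taken from the authenticated caller
  return coll.insertOne(doc);
};
```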
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Let’s say each user in my app has a List:In my understanding, each user would have a private realm in order to restrict others from editing their realm.I’m having trouble understanding how the above requirements would be met when the Lists must all be in separate realms for security.",
"username": "crystaln"
},
{
"code": "/globalRealm",
"text": "I think a bit more information is needed to understand the use case.The owner can decide whether or not other users can see the ListHow are you planning to let other users see the List if each List is private (on a separate Realm)The List references Things in a shared public realmWhat is meant by ‘references’?Users can see a collection of other users ListsWhere is this collection stored?You can’t really ‘share’ private data when each user has a separate Realm - see the docs Full Sync Permissions noting thisglobal read-only Realm (i.e. /globalRealm ) for data all users need to access",
"username": "Jay"
},
{
"code": "",
"text": "How are you planning to let other users see the List if each List is private (on a separate Realm)That’s the question! If they are in a shared Realm, then everyone has edit access to everything.What is meant by ‘references’?I mean, each List is a list of ThingsWhere is this collection stored?I don’t know!You can’t really ‘share’ private data when each user has a separate Realm - see the docs Full Sync Permissions noting thisOk, so how do I enable privacy and security, since being in the same Realm gives all users the same privileges?I could duplicate data?",
"username": "crystaln"
},
{
"code": "",
"text": "What is meant by ‘references’?You cannot have a reference across separate realms. Make a unique primary id of Thing (number, string, e.g. GUID is a good choice in such scenarios) and use it in the List. To get list of Things, your app gets List of IDs and then run multiple “objectForPrimaryKey()” queries to the Thing Realm.",
"username": "Ondrej_Medek"
},
{
"code": "ROS\n Jays_Realm\n Cyrstains_Realm\n Leroys_Realm\n Public_Realm\n",
"text": "Some of this can be handled via permissions: Access Levels would enable you to read/write to your own realm but only allow others read access.As new users come along, a user can offer the new users access to their Realm.If you have public data that everyone can share, that would be a public realm with read/write access for all.You’re structure would be:Does that fit the use case?",
"username": "Jay"
}
] | Semi-public user data in separate realms for edit permissions | 2020-04-09T20:05:18.781Z | Semi-public user data in separate realms for edit permissions | 2,312 |
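Ondrej's cross-Realm reference pattern, sketched with the Realm JavaScript SDK. The schema and property names (List, Thing, thingIds) are assumptions made for illustration; objectForPrimaryKey is the SDK call he names:

```javascript
// listRealm is the user's private Realm; thingsRealm is the shared,
// read-only Realm that owns the Thing objects.
const list = listRealm.objectForPrimaryKey("List", listId);

// Resolve each stored id against the shared Realm, since direct
// object links cannot cross Realm boundaries.
const things = list.thingIds.map(id =>
  thingsRealm.objectForPrimaryKey("Thing", id)
);
```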
null | [
"performance"
] | [
{
"code": "rs-shard-34:PRIMARY> db.serverStatus()\n{\n\t\"host\" : \"mongo-s105:27041\",\n\t\"version\" : \"3.6.9\",\n\t\"process\" : \"mongod\",\n\t\"pid\" : NumberLong(3035),\n\t\"uptime\" : 351477,\n\t\"uptimeMillis\" : NumberLong(351477511),\n\t\"uptimeEstimate\" : NumberLong(351477),\n\t\"localTime\" : ISODate(\"2020-04-11T17:20:51.856Z\"),\n\t\"asserts\" : {\n\t\t\"regular\" : 0,\n\t\t\"warning\" : 0,\n\t\t\"msg\" : 0,\n\t\t\"user\" : 26,\n\t\t\"rollovers\" : 0\n\t},\n\t\"backgroundFlushing\" : {\n\t\t\"flushes\" : 5857,\n\t\t\"total_ms\" : 498,\n\t\t\"average_ms\" : 0.08502646406009903,\n\t\t\"last_ms\" : 0,\n\t\t\"last_finished\" : ISODate(\"2020-04-11T17:19:55.014Z\")\n\t},\n\t\"connections\" : {\n\t\t\"current\" : 360,\n\t\t\"available\" : 838500,\n\t\t\"totalCreated\" : 30878\n\t},\n\t\"extra_info\" : {\n\t\t\"note\" : \"fields vary by platform\",\n\t\t\"page_faults\" : 636\n\t},\n\t\"globalLock\" : {\n\t\t\"totalTime\" : NumberLong(\"351477518000\"),\n\t\t\"currentQueue\" : {\n\t\t\t\"total\" : 0,\n\t\t\t\"readers\" : 0,\n\t\t\t\"writers\" : 0\n\t\t},\n\t\t\"activeClients\" : {\n\t\t\t\"total\" : 415,\n\t\t\t\"readers\" : 0,\n\t\t\t\"writers\" : 0\n\t\t}\n\t},\n\t\"locks\" : {\n\t\t\"Global\" : {\n\t\t\t\"acquireCount\" : {\n\t\t\t\t\"r\" : NumberLong(1253313716),\n\t\t\t\t\"w\" : NumberLong(915146120),\n\t\t\t\t\"W\" : NumberLong(136)\n\t\t\t},\n\t\t\t\"acquireWaitCount\" : {\n\t\t\t\t\"r\" : NumberLong(270),\n\t\t\t\t\"W\" : NumberLong(5)\n\t\t\t},\n\t\t\t\"timeAcquiringMicros\" : {\n\t\t\t\t\"r\" : NumberLong(1004235560),\n\t\t\t\t\"W\" : NumberLong(756)\n\t\t\t}\n\t\t},\n\t\t\"MMAPV1Journal\" : {\n\t\t\t\"acquireCount\" : {\n\t\t\t\t\"r\" : NumberLong(174801344),\n\t\t\t\t\"w\" : NumberLong(955655937)\n\t\t\t}\n\t\t},\n\t\t\"Database\" : {\n\t\t\t\"acquireCount\" : {\n\t\t\t\t\"r\" : NumberLong(169533721),\n\t\t\t\t\"w\" : NumberLong(915110949),\n\t\t\t\t\"R\" : NumberLong(16),\n\t\t\t\t\"W\" : NumberLong(64)\n\t\t\t},\n\t\t\t\"acquireWaitCount\" : {\n\t\t\t\t\"r\" : NumberLong(6),\n\t\t\t\t\"W\" : NumberLong(1)\n\t\t\t},\n\t\t\t\"timeAcquiringMicros\" : {\n\t\t\t\t\"r\" : NumberLong(36503468),\n\t\t\t\t\"W\" : NumberLong(139)\n\t\t\t}\n\t\t},\n\t\t\"Collection\" : {\n\t\t\t\"acquireCount\" : {\n\t\t\t\t\"R\" : NumberLong(145732116),\n\t\t\t\t\"W\" : NumberLong(457869148)\n\t\t\t},\n\t\t\t\"acquireWaitCount\" : {\n\t\t\t\t\"R\" : NumberLong(4695648),\n\t\t\t\t\"W\" : NumberLong(29664317)\n\t\t\t},\n\t\t\t\"timeAcquiringMicros\" : {\n\t\t\t\t\"R\" : NumberLong(875948378),\n\t\t\t\t\"W\" : NumberLong(\"5304444496\")\n\t\t\t}\n\t\t},\n\t\t\"Metadata\" : {\n\t\t\t\"acquireCount\" : {\n\t\t\t\t\"W\" : NumberLong(224)\n\t\t\t}\n\t\t},\n\t\t\"Mutex\" : {\n\t\t\t\"acquireCount\" : {\n\t\t\t\t\"r\" : NumberLong(5877),\n\t\t\t\t\"W\" : NumberLong(136)\n\t\t\t}\n\t\t},\n\t\t\"oplog\" : {\n\t\t\t\"acquireCount\" : {\n\t\t\t\t\"R\" : NumberLong(24347518),\n\t\t\t\t\"W\" : NumberLong(457240578)\n\t\t\t},\n\t\t\t\"acquireWaitCount\" : {\n\t\t\t\t\"R\" : NumberLong(573130),\n\t\t\t\t\"W\" : NumberLong(9268834)\n\t\t\t},\n\t\t\t\"timeAcquiringMicros\" : {\n\t\t\t\t\"R\" : NumberLong(62988274),\n\t\t\t\t\"W\" : NumberLong(809457017)\n\t\t\t}\n\t\t}\n\t},\n\t\"logicalSessionRecordCache\" : {\n\t\t\"activeSessionsCount\" : 224,\n\t\t\"sessionsCollectionJobCount\" : 1172,\n\t\t\"lastSessionsCollectionJobDurationMillis\" : 110,\n\t\t\"lastSessionsCollectionJobTimestamp\" : ISODate(\"2020-04-11T17:17:54.398Z\"),\n\t\t\"lastSessionsCollectionJobEntriesRefreshed\" : 
147,\n\t\t\"lastSessionsCollectionJobEntriesEnded\" : 142,\n\t\t\"lastSessionsCollectionJobCursorsClosed\" : 0,\n\t\t\"transactionReaperJobCount\" : 1172,\n\t\t\"lastTransactionReaperJobDurationMillis\" : 0,\n\t\t\"lastTransactionReaperJobTimestamp\" : ISODate(\"2020-04-11T17:17:54.408Z\"),\n\t\t\"lastTransactionReaperJobEntriesCleanedUp\" : 0\n\t},\n\t\"network\" : {\n\t\t\"bytesIn\" : NumberLong(\"326805141325\"),\n\t\t\"bytesOut\" : NumberLong(\"659424467058\"),\n\t\t\"physicalBytesIn\" : NumberLong(\"311046556853\"),\n\t\t\"physicalBytesOut\" : NumberLong(\"421365369268\"),\n\t\t\"numRequests\" : NumberLong(773938805),\n\t\t\"compression\" : {\n\t\t\t\"snappy\" : {\n\t\t\t\t\"compressor\" : {\n\t\t\t\t\t\"bytesIn\" : NumberLong(\"339255270461\"),\n\t\t\t\t\t\"bytesOut\" : NumberLong(\"100679606642\")\n\t\t\t\t},\n\t\t\t\t\"decompressor\" : {\n\t\t\t\t\t\"bytesIn\" : NumberLong(\"22214791799\"),\n\t\t\t\t\t\"bytesOut\" : NumberLong(\"40462765820\")\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"serviceExecutorTaskStats\" : {\n\t\t\t\"executor\" : \"passthrough\",\n\t\t\t\"threadsRunning\" : 360\n\t\t}\n\t},\n\t\"opLatencies\" : {\n\t\t\"reads\" : {\n\t\t\t\"latency\" : NumberLong(\"11321420023\"),\n\t\t\t\"ops\" : NumberLong(165395215)\n\t\t},\n\t\t\"writes\" : {\n\t\t\t\"latency\" : NumberLong(\"72133410059\"),\n\t\t\t\"ops\" : NumberLong(456262434)\n\t\t},\n\t\t\"commands\" : {\n\t\t\t\"latency\" : NumberLong(\"4920879194\"),\n\t\t\t\"ops\" : NumberLong(152281155)\n\t\t}\n\t},\n\t\"opcounters\" : {\n\t\t\"insert\" : 0,\n\t\t\"query\" : 141562315,\n\t\t\"update\" : 358012592,\n\t\t\"delete\" : 99271652,\n\t\t\"getmore\" : 23838703,\n\t\t\"command\" : 152283579\n\t},\n\t\"opcountersRepl\" : {\n\t\t\"insert\" : 118714,\n\t\t\"query\" : 0,\n\t\t\"update\" : 309814,\n\t\t\"delete\" : 117900,\n\t\t\"getmore\" : 0,\n\t\t\"command\" : 0\n\t},\n\t\"repl\" : {\n\t\t\"hosts\" : [\n\t\t\t\"[2606:ae00:3001:8311:172:16:244:e]:27041\",\n\t\t\t\"[2606:ae00:3001:8311:172:16:244:d]:27041\",\n\t\t\t\"[2606:ae00:3001:8311:172:16:244:34]:27041\"\n\t\t],\n\t\t\"setName\" : \"rs-shard-34\",\n\t\t\"setVersion\" : 14,\n\t\t\"ismaster\" : true,\n\t\t\"secondary\" : false,\n\t\t\"primary\" : \"[2606:ae00:3001:8311:172:16:244:34]:27041\",\n\t\t\"me\" : \"[2606:ae00:3001:8311:172:16:244:34]:27041\",\n\t\t\"electionId\" : ObjectId(\"7fffffff0000000000000032\"),\n\t\t\"lastWrite\" : {\n\t\t\t\"opTime\" : {\n\t\t\t\t\"ts\" : Timestamp(1586625651, 1250),\n\t\t\t\t\"t\" : NumberLong(50)\n\t\t\t},\n\t\t\t\"lastWriteDate\" : ISODate(\"2020-04-11T17:20:51Z\"),\n\t\t\t\"majorityOpTime\" : {\n\t\t\t\t\"ts\" : Timestamp(1586625651, 1204),\n\t\t\t\t\"t\" : NumberLong(50)\n\t\t\t},\n\t\t\t\"majorityWriteDate\" : ISODate(\"2020-04-11T17:20:51Z\")\n\t\t},\n\t\t\"rbid\" : 1\n\t},\n\t\"storageEngine\" : {\n\t\t\"name\" : \"mmapv1\",\n\t\t\"supportsCommittedReads\" : false,\n\t\t\"readOnly\" : false,\n\t\t\"persistent\" : true\n\t},\n\t\"tcmalloc\" : {\n\t\t\"generic\" : {\n\t\t\t\"current_allocated_bytes\" : 44042192,\n\t\t\t\"heap_size\" : 360931328\n\t\t},\n\t\t\"tcmalloc\" : {\n\t\t\t\"pageheap_free_bytes\" : 35778560,\n\t\t\t\"pageheap_unmapped_bytes\" : 203984896,\n\t\t\t\"max_total_thread_cache_bytes\" : NumberLong(1073741824),\n\t\t\t\"current_total_thread_cache_bytes\" : 54490096,\n\t\t\t\"total_free_bytes\" : 77121776,\n\t\t\t\"central_cache_free_bytes\" : 17462112,\n\t\t\t\"transfer_cache_free_bytes\" : 5173472,\n\t\t\t\"thread_cache_free_bytes\" : 54481328,\n\t\t\t\"aggressive_memory_decommit\" : 
0,\n\t\t\t\"pageheap_committed_bytes\" : 156946432,\n\t\t\t\"pageheap_scavenge_count\" : 1970981,\n\t\t\t\"pageheap_commit_count\" : 2789214,\n\t\t\t\"pageheap_total_commit_bytes\" : NumberLong(\"2477157507072\"),\n\t\t\t\"pageheap_decommit_count\" : 1970981,\n\t\t\t\"pageheap_total_decommit_bytes\" : NumberLong(\"2477000560640\"),\n\t\t\t\"pageheap_reserve_count\" : 45,\n\t\t\t\"pageheap_total_reserve_bytes\" : 360931328,\n\t\t\t\t\"spinlock_total_delay_ns\" : NumberLong(\"67961279365\"),\n\t\t\t\"formattedString\" : \"------------------------------------------------\\nMALLOC: 44055440 ( 42.0 MiB) Bytes in use by application\\nMALLOC: + 35778560 ( 34.1 MiB) Bytes in page heap freelist\\nMALLOC: + 17462112 ( 16.7 MiB) Bytes in central cache freelist\\nMALLOC: + 5173472 ( 4.9 MiB) Bytes in transfer cache freelist\\nMALLOC: + 54476848 ( 52.0 MiB) Bytes in thread cache freelists\\nMALLOC: + 3989760 ( 3.8 MiB) Bytes in malloc metadata\\nMALLOC: ------------\\nMALLOC: = 160936192 ( 153.5 MiB) Actual memory used (physical + swap)\\nMALLOC: + 203984896 ( 194.5 MiB) Bytes released to OS (aka unmapped)\\nMALLOC: ------------\\nMALLOC: = 364921088 ( 348.0 MiB) Virtual address space used\\nMALLOC:\\nMALLOC: 10252 Spans in use\\nMALLOC: 426 Thread heaps in use\\nMALLOC: 4096 Tcmalloc page size\\n------------------------------------------------\\nCall ReleaseFreeMemory() to release freelist memory to the OS (via madvise()).\\nBytes released to the OS take up virtual address space but no physical memory.\\n\"\n\t\t}\n\t},\n\t\"transactions\" : {\n\t\t\"retriedCommandsCount\" : NumberLong(0),\n\t\t\"retriedStatementsCount\" : NumberLong(0),\n\t\t\"transactionsCollectionWriteCount\" : NumberLong(0)\n\t},\n\t\"transportSecurity\" : {\n\t\t\"1.0\" : NumberLong(0),\n\t\t\"1.1\" : NumberLong(0),\n\t\t\"1.2\" : NumberLong(0),\n\t\t\"1.3\" : NumberLong(0),\n\t\t\"unknown\" : NumberLong(0)\n\t},\n\t\"mem\" : {\n\t\t\"bits\" : 64,\n\t\t\"resident\" : 6057,\n\t\t\"virtual\" : 9739,\n\t\t\"supported\" : true,\n\t\t\"mapped\" : 7773\n\t},\n\t\"metrics\" : {\n\t\t\"commands\" : {\n\t\t\t\"_isSelf\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(1)\n\t\t\t},\n\t\t\t\"buildInfo\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(55141)\n\t\t\t},\n\t\t\t\"collStats\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(49)\n\t\t\t},\n\t\t\t\"count\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(103)\n\t\t\t},\n\t\t\t\"create\" : {\n\t\t\t\t\"failed\" : NumberLong(1),\n\t\t\t\t\"total\" : NumberLong(1)\n\t\t\t},\n\t\t\t\"createIndexes\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(1230)\n\t\t\t},\n\t\t\t\"delete\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(99030933)\n\t\t\t},\n\t\t\t\"endSessions\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(26422)\n\t\t\t},\n\t\t\t\"find\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(141562316)\n\t\t\t},\n\t\t\t\"getLastError\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(5316)\n\t\t\t},\n\t\t\t\"getLog\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(14)\n\t\t\t},\n\t\t\t\"getMore\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(23838703)\n\t\t\t},\n\t\t\t\"getnonce\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(16)\n\t\t\t},\n\t\t\t\"isMaster\" : {\n\t\t\t\t\"failed\" : 
NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(122572786)\n\t\t\t},\n\t\t\t\"listCollections\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(15)\n\t\t\t},\n\t\t\t\"listIndexes\" : {\n\t\t\t\t\"failed\" : NumberLong(1),\n\t\t\t\t\"total\" : NumberLong(1173)\n\t\t\t},\n\t\t\t\"logout\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(17)\n\t\t\t},\n\t\t\t\"ping\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(29)\n\t\t\t},\n\t\t\t\"replSetGetConfig\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(2011)\n\t\t\t},\n\t\t\t\"replSetGetRBID\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(2)\n\t\t\t},\n\t\t\t\"replSetGetStatus\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(156179)\n\t\t\t},\n\t\t\t\"replSetHeartbeat\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(5395247)\n\t\t\t},\n\t\t\t\"replSetUpdatePosition\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(23833422)\n\t\t\t},\n\t\t\t\"saslContinue\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(78600)\n\t\t\t},\n\t\t\t\"saslStart\" : {\n\t\t\t\t\"failed\" : NumberLong(945),\n\t\t\t\t\"total\" : NumberLong(40245)\n\t\t\t},\n\t\t\t\"serverStatus\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(91164)\n\t\t\t},\n\t\t\t\"update\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(357233300)\n\t\t\t},\n\t\t\t\"whatsmyuri\" : {\n\t\t\t\t\"failed\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(24396)\n\t\t\t}\n\t\t},\n\t\t\"cursor\" : {\n\t\t\t\"timedOut\" : NumberLong(0),\n\t\t\t\"open\" : {\n\t\t\t\t\"noTimeout\" : NumberLong(0),\n\t\t\t\t\"pinned\" : NumberLong(0),\n\t\t\t\t\"total\" : NumberLong(2)\n\t\t\t}\n\t\t},\n\t\t\"document\" : {\n\t\t\t\"deleted\" : NumberLong(99223517),\n\t\t\t\"inserted\" : NumberLong(0),\n\t\t\t\"returned\" : NumberLong(1056034713),\n\t\t\t\"updated\" : NumberLong(258788820)\n\t\t},\n\t\t\"getLastError\" : {\n\t\t\t\"wtime\" : {\n\t\t\t\t\"num\" : 4726,\n\t\t\t\t\"totalMillis\" : 232315\n\t\t\t},\n\t\t\t\"wtimeouts\" : NumberLong(0)\n\t\t},\n\t\t\"operation\" : {\n\t\t\t\"scanAndOrder\" : NumberLong(4),\n\t\t\t\"writeConflicts\" : NumberLong(0)\n\t\t},\n\t\t\"queryExecutor\" : {\n\t\t\t\"scanned\" : NumberLong(499566395),\n\t\t\t\"scannedObjects\" : NumberLong(1414047278)\n\t\t},\n\t\t\"record\" : {\n\t\t\t\"moves\" : NumberLong(1)\n\t\t},\n\t\t\"repl\" : {\n\t\t\t\"executor\" : {\n\t\t\t\t\"pool\" : {\n\t\t\t\t\t\"inProgressCount\" : 0\n\t\t\t\t},\n\t\t\t\t\"queues\" : {\n\t\t\t\t\t\"networkInProgress\" : 0,\n\t\t\t\t\t\"sleepers\" : 6\n\t\t\t\t},\n\t\t\t\t\"unsignaledEvents\" : 0,\n\t\t\t\t\"shuttingDown\" : false,\n\t\t\t\t\"networkInterface\" : \"\\nNetworkInterfaceASIO Operations' Diagnostic:\\nOperation: Count: \\nConnecting 0 \\nIn Progress 0 \\nSucceeded 5394020 \\nCanceled 3 \\nFailed 0 \\nTimed Out 0 \\n\\n\"\n\t\t\t},\n\t\t\t\"apply\" : {\n\t\t\t\t\"attemptsToBecomeSecondary\" : NumberLong(1),\n\t\t\t\t\"batches\" : {\n\t\t\t\t\t\"num\" : 127,\n\t\t\t\t\t\"totalMillis\" : 334556\n\t\t\t\t},\n\t\t\t\t\"ops\" : NumberLong(543752)\n\t\t\t},\n\t\t\t\"buffer\" : {\n\t\t\t\t\"count\" : NumberLong(0),\n\t\t\t\t\"maxSizeBytes\" : NumberLong(268435456),\n\t\t\t\t\"sizeBytes\" : NumberLong(0)\n\t\t\t},\n\t\t\t\"initialSync\" : {\n\t\t\t\t\"completed\" : NumberLong(1),\n\t\t\t\t\"failedAttempts\" : NumberLong(0),\n\t\t\t\t\"failures\" 
: NumberLong(0)\n\t\t\t},\n\t\t\t\"network\" : {\n\t\t\t\t\"bytes\" : NumberLong(190522612),\n\t\t\t\t\"getmores\" : {\n\t\t\t\t\t\"num\" : 1156,\n\t\t\t\t\t\"totalMillis\" : 61905\n\t\t\t\t},\n\t\t\t\t\"ops\" : NumberLong(564436),\n\t\t\t\t\"readersCreated\" : NumberLong(4)\n\t\t\t},\n\t\t\t\"preload\" : {\n\t\t\t\t\"docs\" : {\n\t\t\t\t\t\"num\" : 309814,\n\t\t\t\t\t\"totalMillis\" : 211\n\t\t\t\t},\n\t\t\t\t\"indexes\" : {\n\t\t\t\t\t\"num\" : 1155088,\n\t\t\t\t\t\"totalMillis\" : 399\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"storage\" : {\n\t\t\t\"freelist\" : {\n\t\t\t\t\"search\" : {\n\t\t\t\t\t\"bucketExhausted\" : NumberLong(0),\n\t\t\t\t\t\"requests\" : NumberLong(105282021),\n\t\t\t\t\t\"scanned\" : NumberLong(0)\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"ttl\" : {\n\t\t\t\"deletedDocuments\" : NumberLong(6702),\n\t\t\t\"passes\" : NumberLong(5851)\n\t\t}\n\t},\n\t\"ok\" : 1,\n\t\"operationTime\" : Timestamp(1586625651, 1251),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1586625651, 1251),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"Ii1/aSkK5Ad0TW0GyfF2nkebgO4=\"),\n\t\t\t\"keyId\" : NumberLong(\"6806977870817132545\")\n\t\t}\n\t}\n}\nrs-shard-34:PRIMARY>\n",
"text": "I have been facing continuous RAM memory depletion on mongo hosts.\nAfter exploring it was found that mongodb tcmalloc natural behaviour is memory hungry however did not find specifics related to the threshold at which mongod will start releasing RAM.mongo_java_driver version we are using is 3.7.1We conducted a test to run a python script consuming 1gb of ram memory (with lesser nice value e.g. -16 than mongod which is running at -15) and found that the python process got killed by OOM.I have below 2 queries:\nWill mongodb wait for a threshold to release the memory, if yes then what is that threshold? OR there is no such threshold and it keeps hogging RAM and eventually ends up getting killed by OOM?I also observed that tcmallocReleaseRate was introduced however https://jira.mongodb.org/browse/SERVER-42697 mentions that even that is also not turning out to be beneficial so is there safer way apart from dropCache which ensures ram release with minimum side effect?",
"username": "kedar_sirshikar"
},
{
"code": "",
"text": "As you are using MMAPv1:",
"username": "chris"
},
{
"code": "",
"text": "n mongod which is running at -15) and found that the python process got killedThank you Chris for your reply, I had already been to that link that is why experimented with python script as mentioned in my original post however did not receive the expected results instead python process got killed by OOM Killer. It’ll be great if you can point to any particular wiki/document reference which details the anatomy of RAM usage/release.In addition, may I know if this “RAM Memory depletion” is caused in all versions ( < 3.6 & > 3.6) because I see same details in 3.4 and 4.0?",
"username": "kedar_sirshikar"
}
] | Mongodb 3.6.9 ram memory gradually depleting | 2020-04-11T19:55:32.034Z | Mongodb 3.6.9 ram memory gradually depleting | 2,975 |
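For anyone landing on this thread on a newer build: SERVER-42697 (referenced above) exposed tcmallocReleaseRate as a runtime parameter. Whether it actually helps is workload-dependent per that ticket, and the parameter does not exist on every 3.x release, so verify it against your own server version before relying on it:

```javascript
// Ask tcmalloc to return freed pages to the OS more aggressively.
// Higher values release faster; availability depends on server version.
db.adminCommand({ setParameter: 1, tcmallocReleaseRate: 5.0 })
```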
null | [] | [
{
"code": "",
"text": "Hi all, my name is Doug and I’ve been working with MongoDB off an on since the 1.8 days. In my current role I don’t do much database work, but I do act as an internal consultant for teams running MongoDB and Cassandra.Fun facts:I hope to share what knowledge I can and look forward to learning from everyone.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Welcome @Doug_Duncan! Looking forward to catching up and sharing in this community with you!",
"username": "Michael_Grayson"
},
{
"code": "",
"text": "Looking forward to it @Michael_Grayson!",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hey @Doug_Duncan good to see you here! ",
"username": "Adamo_Tonete"
},
{
"code": "",
"text": "It’s great to see you around as well @Adamo_Tonete!",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi @Doug_Duncan! Welcome to the forums. Really looking forward to working together & glad you are here! ",
"username": "Jamie"
},
{
"code": "",
"text": "Welcome to the forums @Doug_Duncan, it sounds like you’ve got the experience our community needs.\nI look forward to seeing you around here.If I ever come to Colorado downhill skiing is probably the first thing I’ll do ",
"username": "Peter"
},
{
"code": "",
"text": "Hello @Doug_Duncan,happy to see you around , quite some time since the Advocate Hub.Michael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi @Doug_Duncan, I remember when you were a teaching assistant on one of the MongoDB introductory courses that I took. Maybe more than one? I’m still a MongoDB user. And I really ought to sign up for additional classes soon – and I need to finish the aggregation course I had started long ago but somehow paused.Thanks so muchBobBob Cochran\nGreenbelt, Maryland",
"username": "Robert_Cochran"
},
{
"code": "",
"text": "Hi @Robert_Cochran great to see you around here. It’s been a long time since the days I was a TA over at MongoDB University. I’m glad to see former students here still using MongoDB after all this time.I know what you mean about getting back over there to take more courses. They’ve introduced a lot of new ones over the past couple of years.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi @Robert_Cochran,\nunfortunately the TA program is stopped. However this does not mean that the quality is lost, they worked a lot one the MDB university and improved it all over the place.\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "hi Doug, glad to see you here, you really were great at what you were doing, learnt alot during those courses \nYou might not remember me, but it put a smile on my face to see you on this forum, back then when i took those courses for the first time, i was from a “place”, where i was verbally called “a stupid woman working on stupid projects with stupid technologies” (a few of my projects were using Mongodb as the database as i had came across a mongodb pdf and felt in love with it…), and never received any consideration for my interest in open technologies, and it had been years i hadn’t received any normal feedback from someone who was supposed to spread knowledge and just teach or simply reply to e-mails the way teachers are supposed to, and the way those courses were given, learning so much in such a short amount of time, TA, you, answering to every question and every e-mail, really helped me in keeping what i had sworn to myself ( that i was going to be an expert in mongoDB, build a career around it and show him how stupid i was…) that day that man told me that (after crying for hours in a park as he had jeopardized a few of my plans)…You really great at spreading your passion for mongodb and just don’t change your style of answering to those e-mails, because i was used to being hindered from learning or not being considered at all and trust me it did feel good to not only be considered but have someone who answers to e-mails by pushing you even further…",
"username": "Lilia_Rigumye"
},
{
"code": "",
"text": " Hi @Lilia_Rigumye! I’m glad to see you here as well. I am glad that I was able to help you out on your path. My email is always open should you have questions or just want to talk.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi Doug\nMy name is Stefano from Italy. I am currently studying for DBA certification. Could you please tell me if an Ops Manager knowledge is requested ?? Thanks",
"username": "Stefano_Cinquegranel"
},
{
"code": "",
"text": "Welcome to the community @Stefano_Cinquegranel! It has been a very long time since I took the MongoDB DBA certification (I was one of the original people to take it when it was still in beta back in 2013). The university site has a study guide and practice exams that you can reference. Those would be the most up to date places to look for whats on the exam.Best of luck in your studies and on passing the exam!",
"username": "Doug_Duncan"
}
] | Hey all, Doug here | 2020-01-25T00:54:07.015Z | Hey all, Doug here | 3,538 |
null | [
"data-modeling"
] | [
{
"code": "{ \"_id\" : { \"major\" : 1 , \"minor\" : 0 } } \n{ \"_id\" : { \"major\" : 1 , \"minor\" : 1 } } \n{ \"_id\" : { \"major\" : 1 , \"minor\" : 2 } } \n{ \"_id\" : { \"major\" : 2 , \"minor\" : 0 } } \n",
"text": "Is there any drawbacks to use an object as the _id?For example,The mail goal is to be able to do queries for specific major/minor combination. While at the same time easily find all the minors of a major. I know I can have 2 normal fields with a compound unique index. The problem I have with the latter is that in a delete change stream I have the _id only. So I cannot know the specific major/minor being deleted. Being the _id, I have it.I found codeigniter - Store _Id as object or string in MongoDB? - Stack Overflow but not convincing enough for me not to do it.All opinions are welcomed.",
"username": "steevej"
},
{
"code": "_idmajorminor_id_id> db.foo.insertOne({ _id: { a: 1, b: 1 } })\n{ \"acknowledged\" : true, \"insertedId\" : { \"a\" : 1, \"b\" : 1 } }\n> db.foo.find({ _id: { a: 1, b: 1 } })\n{ \"_id\" : { \"a\" : 1, \"b\" : 1 } }\n> db.foo.find({ _id: { b: 1, a: 1 } })\n>\n{ a: 1, b: 1 }{ b: 1, a: 1 }{ _id: \"1|0\" }{ _id: \"1|1\" }{ _id: \"1|2\" }{ _id: \"2|1\" }{ _id: /^2|/ }",
"text": "Hi @steevej,You can definitely use an object with _id like you’re showing in your example! That will allow you to take advantage of the required index on _id and provide a unique constraint. However, there are a few things worth noting:First, querying on either major or minor within _id will not use the index on _id. You’d need an additional index to query on just the major or minor version number.Second, please consider the following:All queries must be an exact match for the index to be useful (or even for a result to appear at all). In the example above, swapping { a: 1, b: 1 } with { b: 1, a: 1 } is the difference between returning a document and returning none even though they are logically equivalent.What are you querying by the most often? If it’s an exact match or the major version (to retrieve all minor versions), may I suggest using a concatenated string? Try something like this:\n{ _id: \"1|0\" }\n{ _id: \"1|1\" }\n{ _id: \"1|2\" }\n{ _id: \"2|1\" }This approach has several benefits: (1) you can query by exact match and utilize the index on _id; (2) you can query on major version and list all minor versions and utilize the index on _id, e.g. { _id: /^2|/ } to return all documents with major version 2; and (3) I just like strings.I hope this helps!Thanks,Justin",
"username": "Justin"
},
{
"code": "",
"text": "Very good points.Thanks.I will come back later with a real analysis of all the points you covered.",
"username": "steevej"
},
{
"code": "",
"text": "Is there any drawbacks to use an object as the _id?Your application has to make sure that when inserting a document into the collection the object needs to be unique; that is the application handles the duplicate key insertion. This will be more of an application functionality rather than a drawback.",
"username": "Prasad_Saya"
},
{
"code": "majorminor_id_id{ a: 1, b: 1 }{ b: 1, a: 1 }> db.foo.find({ \"_id.b\" : 1 , \"_id.a\" : 1 } )\n{ \"_id\" : { \"a\" : 1, \"b\" : 1 } }\n> db.foo.find({ \"_id.a\" : 1 , \"_id.b\" : 1 } )\n{ \"_id\" : { \"a\" : 1, \"b\" : 1 } }\n{ _id : { parent : com , name : mongodb } }\n{ _id : { parent : com , name : google } }\n{ _id : { parent : null , name : com } }\n{ _id : { parent : mongodb.com : community } }\nor\n{ _id : { parent : null , name : / } }\n{ _id : { parent : / , name : etc } }\n{ _id : { parent : /etc , name : systemd } }\n{ _id : mongodb.com }\n{ _id : google.com }\n{ _id : com }\n{ _id : developer.mongodb.com/community/forums }\nor\n{ _id : / }\n{ _id : /etc }\n{ _id : /etc/systemd }\n{\n \"conclusion\" : \"string I will go\" ,\n \"thanks\" : \"all\"\n}\n",
"text": "That’s a major drawback.First, querying on either major or minor within _id will not use the index on _id . You’d need an additional index to query on just the major or minor version number.My goal was to avoid creating an index. 8-(swapping { a: 1, b: 1 } with { b: 1, a: 1 } is the difference between returning a document and returning noneThe above was surprising but would not have been a problem except may be when doing manual query while debugging. I am planning to have all object creation and all queries to be done via an well defined API. But I play a little bit and I found that the following works and it is usually how I do manual queries.I like strings too but wanted to avoid sub-string search or regex.Most of my use cases are complete matches on major/minor but one is to use the change stream on major to monitor the creating and deletion of minor.Actually, major/minor was may be over simplifying. It is more of a parent/child thing. Like may be domain names, where major would be like .com, .edu and minor would be subdomain like mongodb or google. File paths are also good example. So _id would be likeI think I could live withWhich use less memory anyway. There is also a minor drawback with Compass as by default object are not expanded so when I am looking for a specific document, I need to do an exact search or click on each document to expand the _id. I probably can tolerate a little bit of duplication by having a parent field for the few documents implicated in the use cases where parent is needed, because most will not.And finallythe object needs to be unique; that is the application handles the duplicate key insertion.yes just like I always do.",
"username": "steevej"
}
] | Any drawbacks in having _id an object | 2020-04-02T22:09:39.073Z | Any drawbacks in having _id an object | 5,242 |
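The two query shapes this thread converges on, shown against the path-style string _id layout from the last post; the collection name is illustrative:

```javascript
// Exact match goes straight through the mandatory _id index.
db.items.find({ _id: "/etc/systemd" })

// A left-anchored, case-sensitive prefix regex can also walk the _id
// index, returning every child of a given parent.
db.items.find({ _id: /^\/etc\// })
```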
null | [
"indexes"
] | [
{
"code": "",
"text": "Hello, for each collection mongodb will create a default unique index based on _id, then I create another unique index on two fields of the collection, i.e. {country_code : 1 , creation_time : -1}, if I search the docs in collections, I would like to have data returned sorted by creation_time in DESC, but from the test result I found it is not always the expected: sometimes it returns in DESC, sometimes no. From the stackoverflow I found something like :\n\"What if an index is used?\nIf an index is used, documents will be returned in the order they are found (which does necessarily match insertion order or I/O order). If more than one index is used then the order depends internally on which index first identified the document during the de-duplication process.\"\nso my question is, how can I make the index of {country_code : 1 , creation_time : -1} firstly be used when I search the data?thanks,James",
"username": "Zhihong_GUO"
},
{
"code": "{ country_code: 1, creation_time: 1 }country_codedb.collection.find({ country_code: \"us\" }).sort({ creation_time: 1 })country_codecreation_time",
"text": "Hi @Zhihong_GUO,What query are you running with your sort? The query is important because it’s a big factor in how the database picks a candidate index.The index { country_code: 1, creation_time: 1 } cannot be used to help with sorting unless the query also includes an equality on country_code like this:\ndb.collection.find({ country_code: \"us\" }).sort({ creation_time: 1 })Any query that doesn’t include country_code would prevent the index from being used to sort creation_time.I hope that helps!Thanks,Justin",
"username": "Justin"
},
{
"code": "",
"text": "Hello Justin,Yes I will search docs by country_code and hope to have docs sorted by creation_time, your answer exactly give me help.Thanks a lot,James",
"username": "Zhihong_GUO"
}
] | Multi index in same collection useful? | 2020-04-10T23:26:42.436Z | Multi index in same collection useful? | 2,365 |
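Justin's equality-plus-sort point as a runnable sketch; the collection name is assumed, and the sort direction must match (or exactly invert) the creation_time direction in the index for the index to cover the sort:

```javascript
db.events.createIndex({ country_code: 1, creation_time: -1 })

// Equality on the index prefix lets the index hand back documents
// already ordered by creation_time, newest first - no in-memory sort.
db.events.find({ country_code: "us" }).sort({ creation_time: -1 })
```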
null | [
"python"
] | [
{
"code": "",
"text": "Hello, everyone!I am creating a MongoDB web application using pymongo, Flask and Jinja2.I connect with the database and it’s three collections the usual way:client = MongoClient(“mongodb://127.0.0.1:27017”)\ndb = client.polymers# get the collections\ninventories = db.inventory\nclasses = db.classes\nvendors = db.vendorsOne of first things I’ve noticed is that I can reference the inventories collection as inventories.find(), etc. With the other two collections, I need to prepend db. and reference them as db.classes.find() and db.vendors.find().While I thought that was strange enough, everything worked well in my application until…I use Flask and Jinja2 to bind variables to the the HTML files. When I pass a variable from a query via the find() method to an HTML file via the render_template() method. However, when passing a variable with a parameter suppliedclass_id = request.values.get(“_id”)\nclass_edit = db.classes.find({“_id”: ObjectId(class_id)})It seems to return a null value. This is the true where I have to prepend db. on the classes and vendors collections.I tried the find_one() method, but received an internal server error in the {{ for class in classes }} Jinja2 template within the HTML file that is rendered.I would appreciate any advice. Thanks!Mike.",
"username": "Michael_Redlich"
},
{
"code": "dbfind_one()find() Cursorfind_onefind()db.coll.find({'_id': 'foo'})[0]db.classesrequest",
"text": "Hi Mike,Regarding the first issue - I am not able to reproduce this issue (needing to prepend db to some namespaces but not others) on my end. Can you provide a small repro script so that we can better assist you? Also, which version of Python and which version of the driver are you using?Regarding the second issue - I am not sure why the find_one() method is not working for you but your instinct to use that method is correct. The find() method returns Cursor and not a document itself whereas the find_one method returns a single document or None. To extract documents from the cursor returned by find(), you must iterate over it or use index notation (e.g. db.coll.find({'_id': 'foo'})[0]). If you still face issues after this, please share some sample documents from the db.classes collection and also examples of what your request objects look like to help in diagnose the problem.",
"username": "Prashant_Mital"
},
{
"code": "",
"text": "Hi Prashant:Thanks for getting back to me so quickly!I am using Python 3.7.3 and pymongo 3.10.1.For my first issue, consider the following:**client = MongoClient(“mongodb://127.0.0.1:27017”) **\ndb = client.polymers # get the database\n# get the collections\ninventories = db.inventory\nclasses = db.classes\nvendors = [email protected](\"/\")\ndef root():\n** # display the list of polymers in the inventory collection i **\n** inventory_list = inventories.find()**\n** a1 = “active”**\n** return render_template(‘index.html’, a1=a1, inventories=inventory_list, t=title, h=heading)**@app.route(\"/classes\")\ndef classes():\n** # display the list of classes from the classes collection**\n** class_list = db.classes.find()\n** a2 = \"active\"\n** return render_template(‘classes.html’, a2=a2, classes=class_list, t=title, h=heading)**Notice that I had to prepend the call to classes.find() with db. I have a similar function for the vendors collection where I had to prepend with db. on the call to vendors.find(). However, I didn’t have to do that with the call to inventories.find(). Could it have something to do with the first connection to a collection in my code is the inventories collection? If I leave out the db. with the calls to classes.find() and vendors.find(), I get the Internal Server Error in the browser and the terminal window error message ends in:File “app.py”, line 37, in classes\nclass_list = classes.find()\nAttributeError: ‘function’ object has no attribute ‘find’ 1For my second issue, consider the following:@app.route(\"/edit\")\ndef edit():\n** polymer_id = request.values.get(\"_id\")**\n** print(\"–> polymer id: “, polymer_id)**\n** inventory_edit = inventories.find({”_id\": ObjectId(polymer_id)})**\n** return render_template(‘edit.html’, inventories=inventory_edit, h=heading, t=title)**@app.route(\"/edit-class\")\ndef edit_class():\n** class_id = request.values.get(\"_id\")**\n** print(\"–> class id: “, class_id)**\n** class_edit = db.classes.find({”_id:\": ObjectId(class_id)})**\n** return render_template(‘edit-class.html’, classes=class_edit, h=heading, t=title)**As you can see, the edit() function uses the find() method containing an ObjectId that renders the edit.html file as expected. I am presented with a form to edit the document corresponding to the ObjectId. Awesome!However, the edit_class() function also uses the find() method containing an ObjectId does not render the edit-class.html file as expected. Nothing crashes, but the form and table within the {% for class in classes %}{% endfor %} is not processed.Note that I added print statements in the two functions. I get an ObjectId back from each HTML. The generated URL, /edit-class?_id= is also correct, so I suspect there is something in the call to the find() method containing the ObjectId. This works in the MongoDB shell, but not in the app.I tried find_one() function, but I totally forgot about it returning a document as opposed to a cursor. I believe I got an Internal Server Error with that, but it makes sense now since I was trying to process a result set instead of a document.I would appreciate any advice! Thanks!Mike.",
"username": "Michael_Redlich"
},
{
"code": "classes = db.classes\n...\[email protected](\"/classes\")\ndef classes():\n ...\n",
"text": "The problem is that the method named “classes” conflicts with the “classes” global variable:You’ll need to rename one of these.",
"username": "Shane"
},
{
"code": "",
"text": "Hi Shane:Thanks for that suggestion! I will indeed change that, but I have the same issue with a similar function called vendors() and edit_vendor().I will make the classes change to see if that will fix everything.Mike.",
"username": "Michael_Redlich"
}
] | Use of multiple collections with Flask/Jinja2 | 2020-04-09T20:25:32.303Z | Use of multiple collections with Flask/Jinja2 | 5,077 |
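Shane's diagnosis in miniature: a module-level collection handle shadowed by a view function of the same name, and the rename that resolves it. The names come from the thread; the surrounding scaffolding is assumed:

```python
from flask import Flask, render_template
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient("mongodb://127.0.0.1:27017").polymers
classes = db.classes            # module-level collection handle

@app.route("/classes")
def list_classes():             # renamed: def classes() would shadow the handle above
    class_list = classes.find() # now resolves to the collection, not a function
    return render_template("classes.html", classes=class_list)
```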
null | [
"python",
"beta"
] | [
{
"code": "python -m pip install https://github.com/mongodb/mongo-python-driver/archive/3.11.0b0.tar.gz \n",
"text": "We are pleased to announce the 3.11.0b0 release of PyMongo - MongoDB’s Python Driver. This beta release adds support for MongoDB 4.4.Note that this release will not be uploaded to PyPI and can be installed directly from the GitHub tag:",
"username": "Shane"
},
{
"code": "",
"text": "",
"username": "system"
}
] | PyMongo 3.11.0b0 Released | 2020-04-10T19:47:15.835Z | PyMongo 3.11.0b0 Released | 3,281 |
[
"dot-net",
"beta"
] | [
{
"code": "$metarandValsearchScoresearchHighlightsgeoNearDistancegeoNearPointrecordIdindexKeysortKeyfindAndModifyallowDiskUseMONGODB-AWStlsDisableCertificateRevocationCheckExceededTimeLimitLockTimeoutClientDisconnecttlsDisableCertificateRevocationCheck=true",
"text": "This is a beta release for the 2.11.0 version of the driver.The main new features in 2.11.0-beta1 support new features in MongoDB 4.4.0. These features include:Other new additions and updates in this beta include:The full list of JIRA issues that are currently scheduled to be resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.11.0%20ORDER%20BY%20key%20ASCThe list may change as we approach the release date.Documentation on the .NET driver can be found at:Because certificate revocation checking is now enabled by default, an\napplication that is unable to contact the OCSP endpoints and/or CRL\ndistribution points specified in a server’s certificate may experience\nconnectivity issues (e.g. if the application is behind a firewall with\nan outbound whitelist). This is because the driver needs to contact\nthe OCSP endpoints and/or CRL distribution points specified in the\nserver’s certificate and if these OCSP endpoints and/or CRL\ndistribution points are not accessible, then the connection to the\nserver may fail. In such a scenario, connectivity may be able to be\nrestored by disabling certificate revocation checking by adding\ntlsDisableCertificateRevocationCheck=true to the application’s connection\nstring.",
"username": "Vincent_Kam"
},
{
"code": "",
"text": "",
"username": "system"
}
] | .NET Driver 2.11.0-beta1 Released | 2020-04-10T19:31:15.968Z | .NET Driver 2.11.0-beta1 Released | 3,279 |
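The escape hatch described in the release note above, as it would sit in a connection string; host and credentials are placeholders:

```
mongodb://user:pass@host:27017/?tlsDisableCertificateRevocationCheck=true
```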
null | [
"golang",
"beta"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to announce the release of 1.4.0-beta1 of the MongoDB Go Driver.This release contains support for some MongoDB server version 4.4 features and improvements to the driver API.You can obtain the driver source from GitHub under the v1.4.0-beta1 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Go Driver 1.4.0-beta1 Released | 2020-04-10T19:11:25.117Z | MongoDB Go Driver 1.4.0-beta1 Released | 2,981 |
null | [
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Hi, there seems to be no easy way to cancel my $30.00/month subscription to Realm Cloud? How do I do this? I would really like to cancel asap. Any help is greatly appreciated. Surprised how difficult it is to find this information…",
"username": "Chase_Klingel1"
},
{
"code": "",
"text": "@Chase_Klingel1 Please email [email protected] with your account information and we will cancel it",
"username": "Ian_Ward"
},
{
"code": "",
"text": "[email protected] Ian. I have contacted them.",
"username": "Chase_Klingel1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How can I cancel Realm Cloud? | 2020-04-09T17:22:09.641Z | How can I cancel Realm Cloud? | 2,735 |
null | [
"indexes"
] | [
{
"code": "_User {\n \"v\": 2,\n \"unique\": true,\n \"key\": {\n \"email\": 1\n },\n \"name\": \"email_1\",\n \"ns\": \"production._User\",\n \"background\": true,\n \"sparse\": true\n }\n {\n \"v\": 2,\n \"key\": {\n \"email\": 1\n },\n \"name\": \"case_insensitive_email\",\n \"ns\": \"production._User\",\n \"background\": true,\n \"sparse\": true\n \"collation\": { \n \"locale\": \"en_US\",\n \"strength\": 2\n }\n }\n",
"text": "We are using Mongo 3.6.12 (wired tiger) in a production (and dev) instance on mLab. The open source server that we use, Parse Server, recently updated to 4.x and the breaking change that was made was the addition of a new collation index value on a collection key value that already had an index. Is this good practice?As an example, the _User table has an email key and the current index was:When the new version of the server started up, it added the following index:We found that this resulted in extremely poor performance when $regex queries were made on email as the collation index was often hit (and per docs, that is highly inefficient).We also found that it was extremely difficult to actually create a new index for a key that already had an index… but by making it a collation index we could do it.It seems like this is NOT recommended. Could anyone shed further light on this situation?Thanks",
"username": "Rob"
},
{
"code": "email_1case_insensitive_email$regex$regex",
"text": "Hi @Rob,Parse Server, recently updated to 4.x and the breaking change that was made was the addition of a new collation index value on a collection key value that already had an index. Is this good practice?Looks like this was a decision made by Parse Server (pull #5634) to add the case insensitive index to allow faster case insensitive signup check. i.e. username Rob vs username rob.You could probably remove the previous index email_1 instead of having two indexes. If you have more questions about the reasoning behind the new case_insensitive_email index itself, I would suggest to reach to the Parse community. See parse-community: SUPPORTWe found that this resulted in extremely poor performance when $regex queries were made on email as the collation index was often hit (and per docs, that is highly inefficient).You didn’t provide much about the $regex query, but please note that case insensitive regular expression queries generally cannot use indexes effectively. The $regex implementation is not collation-aware and is unable to utilise case-insensitive indexes. If you were using $regex query for case-insensitive search, it’s likely more performant to use normal query search with the case-insensitive index. See $regex query index use for more information.We also found that it was extremely difficult to actually create a new index for a key that already had an index… but by making it a collation index we could do it.This is actually documented on db.collection.createIndex(): Collation Option:Unlike other index options, you can create multiple indexes on the same key(s) with different collations. To create indexes with the same key pattern but different collations, you must supply unique index names.Regards,\nWan.",
"username": "wan"
},
{
"code": "email_1case_insensitive_email$regex$regex",
"text": "Evening Wan,Looks like this was a decision made by Parse Server (pull #5634) to add the case insensitive index to allow faster case insensitive signup check. i.e. username Rob vs username rob.Yep, we tracked that down and it definitely makes sense. Thanks for taking the time to look into it yourself, we didn’t expect that. You could probably remove the previous index email_1 instead of having two indexes. If you have more questions about the reasoning behind the new case_insensitive_email index itself, I would suggest to reach to the Parse community. See parse-community: SUPPORTWe hoped that could be a work around, but when Parse starts up it actually recreates that index.Because the above idea of deleting the index didn’t pan out and $regex doesn’t work well with collation indexes (an understatement), we wanted to check with the Mongo community before going back to the Parse community and suggesting alternatives.This brought us to the use of multiple indexes on the same key and wondering if that was good practice or not? The team at mLab support indicated that was not a good idea and they were curious how we ended up with 2 indexes on the same key.This is actually documented on db.collection.createIndex(): Collation Option:Thanks for that call out, we missed it in our review. That is good to know and I will pass that on to the team at mLab as well.So… that does create a follow on question. How does mongo choose the index it uses if it isn’t specified in the query?If we had both indexes and used a $regex query via Compass (avoiding Parse altogether), it seemed like it would always hit the collation index. Is there a priority or order to the indexes?Thanks again for the insight.",
"username": "Rob"
},
{
"code": "$regex",
"text": "Hi @Rob,If we had both indexes and used a $regex query via Compass (avoiding Parse altogether), it seemed like it would always hit the collation index. Is there a priority or order to the indexes?The query optimiser processes queries and chooses the most efficient query plan for a query, given available indexes. The query system then uses this query plan each time the query runs. See Query Plans for more information.So, it depends on the query itself. It’s worth to checkout the Explain Results for the query to learn more.Regards,\nWan.",
"username": "wan"
},
{
"code": "$regex$regex$regex$regex",
"text": "Hi @wan,If the query optimizer is supposed to be choosing the most efficient query plan for our $regex queries, it seems to be broken for the case where both a case-sensitive index and a case-insensitive index exists. As an example, when we run a particular $regex query and only the case-sensitive index exists, the query takes about 2 seconds. After we add a case-insensitive index, the $regex query tries to use that index and takes 8 seconds.How can we ensure that the $regex query will use the case-sensitive index, and NOT attempt to use the case-insensitive index? Using neither index would be more efficient than attempting to use the case-insensitive index.Thanks for your help",
"username": "Robby_Helms"
},
{
"code": "$regex",
"text": "Hi @Robby_Helms,Could you provide the output of the cursor.explain(“allPlansExecution”) when both indexes exist ? This hopefully should give more insights to the issue.How can we ensure that the $regex query will use the case-sensitive index, and NOT attempt to use the case-insensitive index?You can try to use cursor.hint() to override MongoDB’s default index selection.Regards,\nWan.",
"username": "wan"
},
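For illustration, forcing the index from PyMongo looks like this. It is only a sketch: the collection and query values below are assumptions based on the thread, and compiled Python patterns are sent to the server as $regex:

```python
import re
from pymongo import MongoClient

users = MongoClient("mongodb://localhost:27017")["mydb"]["_User"]  # placeholder

# Force the case-sensitive index by name so the planner cannot pick
# the collation index for this regex query.
cursor = users.find(
    {"email": re.compile(r"^Robby@pocketprep\.com$", re.IGNORECASE)}
).hint("email_1")
```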
{
"code": "[email protected]_insensitive_emailemail_1case_insensitive_emaildb.getCollection('_User').find({ email: { $regex: /^Robby@pocketprep\\.com$/i } }).explain(\"allPlansExecution\")\n{\n\t\"queryPlanner\" : {\n\t\t\"plannerVersion\" : 1,\n\t\t\"namespace\" : \"pocketprep._User\",\n\t\t\"indexFilterSet\" : false,\n\t\t\"parsedQuery\" : {\n\t\t\t\"email\" : {\n\t\t\t\t\"$regex\" : \"^Robby@pocketprep\\\\.com$\",\n\t\t\t\t\"$options\" : \"i\"\n\t\t\t}\n\t\t},\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"email\" : {\n\t\t\t\t\t\"$regex\" : \"^Robby@pocketprep\\\\.com$\",\n\t\t\t\t\t\"$options\" : \"i\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"email\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"case_insensitive_email\",\n\t\t\t\t\"collation\" : {\n\t\t\t\t\t\"locale\" : \"en_US\",\n\t\t\t\t\t\"caseLevel\" : false,\n\t\t\t\t\t\"caseFirst\" : \"off\",\n\t\t\t\t\t\"strength\" : 2,\n\t\t\t\t\t\"numericOrdering\" : false,\n\t\t\t\t\t\"alternate\" : \"non-ignorable\",\n\t\t\t\t\t\"maxVariable\" : \"punct\",\n\t\t\t\t\t\"normalization\" : false,\n\t\t\t\t\t\"backwards\" : false,\n\t\t\t\t\t\"version\" : \"57.1\"\n\t\t\t\t},\n\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"email\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\"isSparse\" : true,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"email\" : [\n\t\t\t\t\t\t\"[\\\"\\\", {})\",\n\t\t\t\t\t\t\"[/^Robby@pocketprep\\\\.com$/i, /^Robby@pocketprep\\\\.com$/i]\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"rejectedPlans\" : [\n\t\t\t{\n\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\"email\" : {\n\t\t\t\t\t\t\t\"$regex\" : \"^Robby@pocketprep\\\\.com$\",\n\t\t\t\t\t\t\t\"$options\" : \"i\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\"email\" : 1\n\t\t\t\t\t},\n\t\t\t\t\t\"indexName\" : \"email_1\",\n\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\"email\" : [ ]\n\t\t\t\t\t},\n\t\t\t\t\t\"isUnique\" : true,\n\t\t\t\t\t\"isSparse\" : true,\n\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\"email\" : [\n\t\t\t\t\t\t\t\"[\\\"\\\", {})\",\n\t\t\t\t\t\t\t\"[/^Robby@pocketprep\\\\.com$/i, /^Robby@pocketprep\\\\.com$/i]\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t]\n\t},\n\t\"executionStats\" : {\n\t\t\"executionSuccess\" : true,\n\t\t\"nReturned\" : 1,\n\t\t\"executionTimeMillis\" : 7718,\n\t\t\"totalKeysExamined\" : 1902678,\n\t\t\"totalDocsExamined\" : 1902678,\n\t\t\"executionStages\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"email\" : {\n\t\t\t\t\t\"$regex\" : \"^Robby@pocketprep\\\\.com$\",\n\t\t\t\t\t\"$options\" : \"i\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"nReturned\" : 1,\n\t\t\t\"executionTimeMillisEstimate\" : 6030,\n\t\t\t\"works\" : 1902679,\n\t\t\t\"advanced\" : 1,\n\t\t\t\"needTime\" : 1902677,\n\t\t\t\"needYield\" : 0,\n\t\t\t\"saveState\" : 25181,\n\t\t\t\"restoreState\" : 25181,\n\t\t\t\"isEOF\" : 1,\n\t\t\t\"invalidates\" : 0,\n\t\t\t\"docsExamined\" : 1902678,\n\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"nReturned\" : 
1902678,\n\t\t\t\t\"executionTimeMillisEstimate\" : 1170,\n\t\t\t\t\"works\" : 1902679,\n\t\t\t\t\"advanced\" : 1902678,\n\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\"saveState\" : 25181,\n\t\t\t\t\"restoreState\" : 25181,\n\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"email\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"case_insensitive_email\",\n\t\t\t\t\"collation\" : {\n\t\t\t\t\t\"locale\" : \"en_US\",\n\t\t\t\t\t\"caseLevel\" : false,\n\t\t\t\t\t\"caseFirst\" : \"off\",\n\t\t\t\t\t\"strength\" : 2,\n\t\t\t\t\t\"numericOrdering\" : false,\n\t\t\t\t\t\"alternate\" : \"non-ignorable\",\n\t\t\t\t\t\"maxVariable\" : \"punct\",\n\t\t\t\t\t\"normalization\" : false,\n\t\t\t\t\t\"backwards\" : false,\n\t\t\t\t\t\"version\" : \"57.1\"\n\t\t\t\t},\n\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"email\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\"isSparse\" : true,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"email\" : [\n\t\t\t\t\t\t\"[\\\"\\\", {})\",\n\t\t\t\t\t\t\"[/^Robby@pocketprep\\\\.com$/i, /^Robby@pocketprep\\\\.com$/i]\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"keysExamined\" : 1902678,\n\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\"seenInvalidated\" : 0\n\t\t\t}\n\t\t},\n\t\t\"allPlansExecution\" : [\n\t\t\t{\n\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\"executionTimeMillisEstimate\" : 1340,\n\t\t\t\t\"totalKeysExamined\" : 1320493,\n\t\t\t\t\"totalDocsExamined\" : 0,\n\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\"executionTimeMillisEstimate\" : 1340,\n\t\t\t\t\t\"works\" : 1320493,\n\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\"needTime\" : 1320493,\n\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\"saveState\" : 20632,\n\t\t\t\t\t\"restoreState\" : 20632,\n\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\t\"docsExamined\" : 0,\n\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\"email\" : {\n\t\t\t\t\t\t\t\t\"$regex\" : \"^Robby@pocketprep\\\\.com$\",\n\t\t\t\t\t\t\t\t\"$options\" : \"i\"\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 1290,\n\t\t\t\t\t\t\"works\" : 1320493,\n\t\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\t\"needTime\" : 1320493,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 20632,\n\t\t\t\t\t\t\"restoreState\" : 20632,\n\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"email\" : 1\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"indexName\" : \"email_1\",\n\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\"email\" : [ ]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"isUnique\" : true,\n\t\t\t\t\t\t\"isSparse\" : true,\n\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"email\" : [\n\t\t\t\t\t\t\t\t\"[\\\"\\\", {})\",\n\t\t\t\t\t\t\t\t\"[/^Robby@pocketprep\\\\.com$/i, /^Robby@pocketprep\\\\.com$/i]\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"keysExamined\" : 1320493,\n\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\t\t\"seenInvalidated\" : 
0\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\"executionTimeMillisEstimate\" : 4130,\n\t\t\t\t\"totalKeysExamined\" : 1320493,\n\t\t\t\t\"totalDocsExamined\" : 1320493,\n\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\"email\" : {\n\t\t\t\t\t\t\t\"$regex\" : \"^Robby@pocketprep\\\\.com$\",\n\t\t\t\t\t\t\t\"$options\" : \"i\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\"executionTimeMillisEstimate\" : 4130,\n\t\t\t\t\t\"works\" : 1320493,\n\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\"needTime\" : 1320493,\n\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\"saveState\" : 20632,\n\t\t\t\t\t\"restoreState\" : 20632,\n\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\t\"docsExamined\" : 1320493,\n\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\"nReturned\" : 1320493,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 790,\n\t\t\t\t\t\t\"works\" : 1320493,\n\t\t\t\t\t\t\"advanced\" : 1320493,\n\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 20632,\n\t\t\t\t\t\t\"restoreState\" : 20632,\n\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"email\" : 1\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"indexName\" : \"case_insensitive_email\",\n\t\t\t\t\t\t\"collation\" : {\n\t\t\t\t\t\t\t\"locale\" : \"en_US\",\n\t\t\t\t\t\t\t\"caseLevel\" : false,\n\t\t\t\t\t\t\t\"caseFirst\" : \"off\",\n\t\t\t\t\t\t\t\"strength\" : 2,\n\t\t\t\t\t\t\t\"numericOrdering\" : false,\n\t\t\t\t\t\t\t\"alternate\" : \"non-ignorable\",\n\t\t\t\t\t\t\t\"maxVariable\" : \"punct\",\n\t\t\t\t\t\t\t\"normalization\" : false,\n\t\t\t\t\t\t\t\"backwards\" : false,\n\t\t\t\t\t\t\t\"version\" : \"57.1\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\"email\" : [ ]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\"isSparse\" : true,\n\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"email\" : [\n\t\t\t\t\t\t\t\t\"[\\\"\\\", {})\",\n\t\t\t\t\t\t\t\t\"[/^Robby@pocketprep\\\\.com$/i, /^Robby@pocketprep\\\\.com$/i]\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"keysExamined\" : 1320493,\n\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\t\t\"seenInvalidated\" : 0\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t]\n\t},\n\t\"serverInfo\" : {\n\t\t\"host\" : \"h083332.mongolab.com\",\n\t\t\"port\" : 31949,\n\t\t\"version\" : \"3.6.12\",\n\t\t\"gitVersion\" : \"c2b9acad0248ca06b14ef1640734b5d0595b55f1\"\n\t},\n\t\"ok\" : 1,\n\t\"operationTime\" : Timestamp(1585918338, 1),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1585918338, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"Wrc5pfaJ6T92PVv8AR+NwoZQTWo=\"),\n\t\t\t\"keyId\" : NumberLong(\"6768738634519543813\")\n\t\t}\n\t}\n}\nemail_1db.getCollection('_User').find({ email: { $regex: /^Robby@pocketprep\\.com$/i } }).hint(\"email_1\").explain(\"allPlansExecution\")\n{\n\t\"queryPlanner\" : {\n\t\t\"plannerVersion\" : 1,\n\t\t\"namespace\" : \"pocketprep._User\",\n\t\t\"indexFilterSet\" : false,\n\t\t\"parsedQuery\" : {\n\t\t\t\"email\" : {\n\t\t\t\t\"$regex\" : \"^Robby@pocketprep\\\\.com$\",\n\t\t\t\t\"$options\" : \"i\"\n\t\t\t}\n\t\t},\n\t\t\"winningPlan\" : 
{\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"filter\" : {\n\t\t\t\t\t\"email\" : {\n\t\t\t\t\t\t\"$regex\" : \"^Robby@pocketprep\\\\.com$\",\n\t\t\t\t\t\t\"$options\" : \"i\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"email\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"email_1\",\n\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"email\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : true,\n\t\t\t\t\"isSparse\" : true,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"email\" : [\n\t\t\t\t\t\t\"[\\\"\\\", {})\",\n\t\t\t\t\t\t\"[/^Robby@pocketprep\\\\.com$/i, /^Robby@pocketprep\\\\.com$/i]\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"rejectedPlans\" : [ ]\n\t},\n\t\"executionStats\" : {\n\t\t\"executionSuccess\" : true,\n\t\t\"nReturned\" : 1,\n\t\t\"executionTimeMillis\" : 1405,\n\t\t\"totalKeysExamined\" : 1902678,\n\t\t\"totalDocsExamined\" : 1,\n\t\t\"executionStages\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"nReturned\" : 1,\n\t\t\t\"executionTimeMillisEstimate\" : 1293,\n\t\t\t\"works\" : 1902679,\n\t\t\t\"advanced\" : 1,\n\t\t\t\"needTime\" : 1902677,\n\t\t\t\"needYield\" : 0,\n\t\t\t\"saveState\" : 14864,\n\t\t\t\"restoreState\" : 14864,\n\t\t\t\"isEOF\" : 1,\n\t\t\t\"invalidates\" : 0,\n\t\t\t\"docsExamined\" : 1,\n\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"filter\" : {\n\t\t\t\t\t\"email\" : {\n\t\t\t\t\t\t\"$regex\" : \"^Robby@pocketprep\\\\.com$\",\n\t\t\t\t\t\t\"$options\" : \"i\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"nReturned\" : 1,\n\t\t\t\t\"executionTimeMillisEstimate\" : 1233,\n\t\t\t\t\"works\" : 1902679,\n\t\t\t\t\"advanced\" : 1,\n\t\t\t\t\"needTime\" : 1902677,\n\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\"saveState\" : 14864,\n\t\t\t\t\"restoreState\" : 14864,\n\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"email\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"email_1\",\n\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"email\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : true,\n\t\t\t\t\"isSparse\" : true,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"email\" : [\n\t\t\t\t\t\t\"[\\\"\\\", {})\",\n\t\t\t\t\t\t\"[/^Robby@pocketprep\\\\.com$/i, /^Robby@pocketprep\\\\.com$/i]\"\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"keysExamined\" : 1902678,\n\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\"seenInvalidated\" : 0\n\t\t\t}\n\t\t},\n\t\t\"allPlansExecution\" : [ ]\n\t},\n\t\"serverInfo\" : {\n\t\t\"host\" : \"h083332.mongolab.com\",\n\t\t\"port\" : 31949,\n\t\t\"version\" : \"3.6.12\",\n\t\t\"gitVersion\" : \"c2b9acad0248ca06b14ef1640734b5d0595b55f1\"\n\t},\n\t\"ok\" : 1,\n\t\"operationTime\" : Timestamp(1585918753, 305),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1585918753, 305),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"y1dKXHvMsn/J6ApjvkRg8UHXe1E=\"),\n\t\t\t\"keyId\" : NumberLong(\"6768738634519543813\")\n\t\t}\n\t}\n}\nhint",
"text": "The data I’m testing with is a _User collection with about 4.4 million users. One of the users has an email of [email protected]. I have two indexes on email: a case-insensitive one called case_insensitive_email and a case-sensitive one called email_1.Explain Output Without HintExplain Output With HintSo, yes, we can improve the request time by providing the hint, but I’m very confused as to why the query optimizer would be selecting the index that results in a request that is 5.5x slower in this case.Appreciate your help and input! Thanks.",
"username": "Robby_Helms"
},
{
"code": "allPlansExecutionworksdb._User.getPlanCache().clear()email_1$regexdb._User.find({\"email\":\"[email protected]\"}).collation({\"locale\":\"en_US\", \"strength\":2})\n",
"text": "Hi @Robby_Helms, Thanks for providing the explain output.I’m very confused as to why the query optimizer would be selecting the index that results in a request that is 5.5x slower in this case.Based on the allPlansExecution, both execution plans have really high number of works (1.3M+). This usually implies that there is no index that can efficiently support the query shape using the given predicate values. This is inline with the documentation $regex: Index Use.You could try to flush the plan cache to see whether it’s picking up a different plan i.e. db._User.getPlanCache().clear(). It is possible for a query plan to be chosen that is faster for initial results (or the same query shape with different values), but suboptimal for later results. There are few improvements added in the current version 4.2 to debug this further. (i.e. $planCacheStats operator).Having said the above, the regex pattern is anchored on both sides with insensitive option set to true. In this case it is better to remove the first index (email_1) and do a query with a collation to utilise the case insensitive index instead of $regex i.e.Regards,\nWan.",
"username": "wan"
},
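The PyMongo equivalent of Wan's shell query would look roughly like this (a sketch; connection details are placeholders):

```python
from pymongo import MongoClient
from pymongo.collation import Collation

users = MongoClient("mongodb://localhost:27017")["mydb"]["_User"]  # placeholder

# An exact-match query with a matching collation can use the
# case-insensitive index directly; no $regex needed.
doc = users.find_one(
    {"email": "[email protected]"},
    collation=Collation(locale="en_US", strength=2),
)
```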
{
"code": "allPlansExecutionworksallPlansExecutionemail_1executionTimeMillisEstimate1340case_insensitive_emailexecutionTimeMillisEstimate4130hint()regex",
"text": "Hi @wan,Thanks for the response.Based on the allPlansExecution , both execution plans have really high number of works (1.3M+). This usually implies that there is no index that can efficiently support the query shape using the given predicate values. This is inline with the documentation $regex: Index Use.While it may be true none of the indexes are “efficient” we have certainly observed that one of them is consistently much more efficient than the other. And it seems like the information shown under the allPlansExecution property demonstrates that the execution plan should have known which plan to choose. The plan that used the email_1 index had an executionTimeMillisEstimate of 1340 while the plan that used the case_insensitive_email index had an executionTimeMillisEstimate of 4130. Those estimates line up pretty closely with what we have observed, and so it’s confusing that the query seemingly is saying “this plan looks like it will take longer…and that’s the one we’ll use”.We do have workarounds in place by either refactoring our queries or by utilizing the hint() method, but it still feels like this is a bug in the way that query plans are resolved for regex queries.Thanks again!",
"username": "Robby_Helms"
}
] | Collation index best practice | 2020-03-31T22:29:19.563Z | Collation index best practice | 6,526 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.4.0-rc0, the first release candidate of MongoDB 4.4, is out and is ready for testing. This is the culmination of the 4.3.x development series, and includes many exciting new features. Please review the release notes for more about what’s new, upgrade procedures, and how to report an issue. Here are some of the highlights:MongoDB 4.4 Release Notes | Changelog | Downloads– The MongoDB Team",
"username": "Kelsey_Schubert"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.4.0-rc0 is released | 2020-04-10T17:31:29.995Z | MongoDB 4.4.0-rc0 is released | 3,089 |
null | [
"python"
] | [
{
"code": "",
"text": "Hi there! I’m using pymongo for connecting to my cluster. I’m developing this for an open source project, how do I connect to the client since the pymongo MongoClient requires your credentials to proceed.MongoClient DocsI don’t want to store that secret key in my code since it is to be distributed, so how can I connect to my cluster using any other technique?",
"username": "Aadhav_Vignesh"
},
{
"code": "$ export APP_URI='mongodb://user:[email protected]/?tls=true'\n$ python3\n>>> import os\n>>> client = MongoClient(os.environ['APP_URI'])\n>>> client.admin.command('ping')\n{'ok': 1}\n",
"text": "The most common pattern to avoid storing the password in the application code is to put the entire connection string (or URI) in an environment variable like this:",
"username": "Shane"
},
{
"code": "usernamepasswordMongoClient$ export APP_PASSWORD='mypassword123'\n$ python3\n>>> import os\n>>> client = MongoClient('mongodb://mongodb.net/?tls=true', username='user', password=os.environ['APP_PASSWORD'])\n>>> client.admin.command('ping')\n{'ok': 1}\n",
"text": "I’d like to point out that one other way of doing this would be to only store the secret password in an environment variable and then use the username, password parameters of the MongoClient constructor to connect to the cluster. Note however, that this method is PyMongo-specific as not all drivers would expose similar parameters for specifying usernames/passwords.",
"username": "Prashant_Mital"
},
{
"code": "",
"text": "Hi Prashant! I want my app to be usable by others too. Since the password is stored in the environment variable, d during compilation, if I don’t include the .env, the app will not be able to connect to a cluster.How do I overcome this problem? Thanks in advance!",
"username": "Aadhav_Vignesh"
},
{
"code": "",
"text": "Hi Shane! Thanks for helping!Just one more thing, how do I deploy applications to desktop because I need to access the APP_URI, and for using it, the executable needs to access the environment variable which is not stored client’s side.The environment variable is secret and stored in .env files, and during compilation if I don’t include the .env, the app will not be able to connect to a database.I hope you get my problem. Thanks in advance!",
"username": "Aadhav_Vignesh"
},
{
"code": "",
"text": "I think the solution to your problem depends on how you are deploying your app. Most deployment tools provide some API to define encrypted secrets that can be used at runtime (in this case as an environment variable).For instance, if you are using ansible to deploy the app on the target machine, you can use ansible-vault (Protecting sensitive data with Ansible vault — Ansible Documentation) to encrypt the secrets which can then be used in your ansible playbooks to export the appropriate environment variable.",
"username": "Prashant_Mital"
},
{
"code": "",
"text": "I am trying to deploy my app using PyInstaller. The code is converted to a binary executable, so I don’t think anything can be done with an API. The credentials still need to go through the API.I might be wrong with my interpretation, but can you please help me further?",
"username": "Aadhav_Vignesh"
},
{
"code": "",
"text": "In the case of applications that are distributed as an executable (as opposed to being deployed to a webserver which serves requests issued by the end-user), the closest analogue to secrets is probably a software license. What I mean here by a license is any additional file without which, the executable either cannot run or can only run with restricted functionality (e.g. ‘Trial mode’). In your case, the license can be an encrypted file which the main executable knows how to decrypt and expects to find at a fixed location on the filesystem (or relative to its own location). The ‘license’ file can contain the connection string secret.Please keep in mind that I am only offering ideas based on what I think you are trying to achieve. The correct answer will need to be informed most closely by your intended use-case.",
"username": "Prashant_Mital"
}
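As a rough illustration of the "license file" idea above, here is a minimal sketch using the third-party cryptography package. The file name and key-delivery mechanism are assumptions; in practice the decryption key itself still has to come from somewhere outside the shipped binary (user input, an OS keychain, a license server, etc.):

```python
from cryptography.fernet import Fernet
from pymongo import MongoClient

# --- done once, at packaging time (illustrative) ---
key = Fernet.generate_key()            # delivered to the user out-of-band
uri = "mongodb://user:secret@host/db"  # placeholder connection string
with open("app.secret", "wb") as f:
    f.write(Fernet(key).encrypt(uri.encode("utf-8")))

# --- done at app start-up ---
with open("app.secret", "rb") as f:
    token = f.read()
client = MongoClient(Fernet(key).decrypt(token).decode("utf-8"))
```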
] | Securing passwords using PyMongo during client connection | 2020-04-09T09:11:12.274Z | Securing passwords using PyMongo during client connection | 4,297 |
null | [
"python"
] | [
{
"code": "",
"text": "Hi,I’m looking for help for the following problem:I develop a blockchain web application in Python 3.7, using MongoDB as a backend, so obviously I use the pymongo driver and Flask as well.If I collect the data as elements of a response of a web request and write it to the DB (but not reading from it), the ObjectID becomes the part of the response (in-memory) and get an error 500 as the ObjectID - of course - is not a JSON serializable. I already tried to define a json decoder but it didn’t have any affect. I also tried to use a different variable and call the DB write operation from a completely separated module but the ObjectID still becomes the part of the in-memory response. So all in all, I’m suspecting that pymongo tricks me.Do you have any recommendation, where to go from now? I really appreciate any recommendation!Kind regards,\nLaNps: I do apologize if my english is poor, I’m not a native speaker.",
"username": "LaN"
},
{
"code": "",
"text": "Can you post the code here please. That would help.",
"username": "Brett_Donovan"
},
{
"code": "block = {\n 'index': len(self.chain) + 1,\n 'timestamp': time(),\n 'transactions': self.current_transactions,\n 'proof': proof,\n 'previous_hash': previous_hash or self.hash(self.chain[-1]),\n }\n\n # Reset the current list of transactions\n self.current_transactions = []\n self.chain.append(block)\n data = block\n dbWriter(data)\n print(block)\n # chainColl.insert_one(block)\n return block\n",
"text": "Here is the relevant code snippet:You can see above, I tried to insert the “block” data directly from here (“chainColl” is the collection name in mongo) but then the “return block” contained the ObjectID. Then I wrote my own module (dbWriter) and used a completely different variable “data” but ObjectID also got into “block” after writing into the DB.",
"username": "LaN"
},
{
"code": "_idRELAXED_JSON_OPTIONS>>> from bson.json_util import dumps, RELAXED_JSON_OPTIONS\n>>> from bson.objectid import ObjectId\n>>> dumps({'_id': ObjectId(), 'list': [], 'i': 1}, json_options=RELAXED_JSON_OPTIONS)\n'{\"_id\": {\"$oid\": \"5e8fa5e2f8fceeb62d28b6e3\"}, \"list\": [], \"i\": 1}'\n",
"text": "PyMongo automatically adds the _id field when inserting documents as described here:\nhttps://pymongo.readthedocs.io/en/stable/faq.html#why-does-pymongo-add-an-id-field-to-all-of-my-documentsYou can use PyMongo’s JSON helpers to encode these documents to MongoDB extended JSON. You’ll most likely want to use RELAXED_JSON_OPTIONS:To learn more about MongoDB extended JSON see:",
"username": "Shane"
},
{
"code": "",
"text": "Hi Shane, thanks for having a look on this. If PyMongo adds the “_id” field into the document on the disk, that’s fine, I was aware. My problem is that this field also became the part of the web response, what is an in-memory object and I found it strange.Stay safe!",
"username": "LaN"
},
{
"code": ">>> my_doc = {'x': 1}\n>>> collection.insert_one(my_doc)\n<pymongo.results.InsertOneResult object at 0x7f3fc25bd640>\n>>> my_doc\n{'x': 1, '_id': ObjectId('560db337fba522189f171720')}\n",
"text": "What you are describing is the documented behavior of PyMongo:https://pymongo.readthedocs.io/en/stable/faq.html#why-does-pymongo-add-an-id-field-to-all-of-my-documents",
"username": "Shane"
},
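Given that documented behaviour, a couple of simple ways to keep the in-memory document free of the generated _id (a sketch; the collection name and sample dict are placeholders):

```python
from pymongo import MongoClient

collection = MongoClient("mongodb://localhost:27017")["mydb"]["blocks"]  # placeholder
block = {"index": 1, "proof": 42}  # stand-in for the real block dict

# Option 1: insert a shallow copy, so PyMongo adds _id to the copy only.
collection.insert_one(dict(block))

# Option 2: insert the original, then strip the generated _id afterwards.
collection.insert_one(block)
block.pop("_id", None)
```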
{
"code": "",
"text": "Thanks for the clarification, Shane. Stay safe!",
"username": "LaN"
}
] | Strange behaviour of pymongo | 2020-03-20T16:51:25.223Z | Strange behaviour of pymongo | 3,779 |
null | [] | [
{
"code": "",
"text": "Join us this Thursday, 9 April 2020 at 12pm EDT as we welcome @valerybriz to the mic to teach us about PyMongo queries.Register for this free event here: https://live.mongodb.com/events/details/mongodb-mongodb-global-virtual-community-presents-making-the-most-of-pymongo-queries-a-free-virtual-meetupWe’ll hear from community member, Valery Calderon Briz. Then, we’ll break into discussion groups so you can get to know your fellow members, ask questions, and discuss what you’ve learned.Join the MongoDB Global Virtual Community to hear about new virtual meetup events, hosted by your community team and presented by your fellow community members, as soon as they’re published.",
"username": "Jamie"
},
{
"code": "",
"text": "For everyone that missed the event: Here’s the recording from the virtual user group meeting: - YouTube",
"username": "Sven_Peters"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Community Virtual Meetup: Making the Most of PyMongo Queries with @ValeryBriz | 2020-04-06T23:51:26.672Z | Community Virtual Meetup: Making the Most of PyMongo Queries with @ValeryBriz | 3,079 |
null | [
"security"
] | [
{
"code": "",
"text": "Integrating AWS KMS for multi-tenant apps with 1 DB / tenant right now… I am reading a few articles that say you can generate a master key to encrypt/decrypt multiple mongoDB local keys. How do I do this for multiple master keys if each tenant will have a different master key? How do I also create a DB that shares multiple master keys?Thanks,",
"username": "Mark_Chang"
},
{
"code": "",
"text": "Mark, I’m not super familiar with KMS / Atlas - but I don’t believe you can subdivide keys for a database… That is to say that I believe Atlas enables you to maintain a master key per cluster… The atlas interface doesn’t have the ability, or the granularity to let you have multiple keys per cluster, or database. Although you may be able to enc/de-enc multiple local keys - there’s no way in the interface for you to manage the multi-tenancy. I will see if I can find someone more familiar with KMS internally to shed some light.",
"username": "Michael_Lynn"
},
{
"code": "",
"text": "Mark, I’m not super familiar with KMS / Atlas - but I don’t believe you can subdivide keys for a database… That is to say that I believe Atlas enables you to maintain a master key per cluster… The atlas interface doesn’t have the ability, or the granularity to let you have multiple keys per cluster, or database. Although you may be able to enc/de-enc multiple local keys - there’s no way in the interface for you to manage the multi-tenancy. I will see if I can find someone more familiar with KMS internally to shed some light.Please do follow up because I am reading many posts regarding setting up multitenancy via 1 DB per tenant on MongoDB, yet no one seems to know an answer on how that can be securely achieved.",
"username": "Mark_Chang"
},
{
"code": "",
"text": "Chatted with some folks today. You can’t do what you’re trying to do on Atlas. You cannot have a different key per collection. You may be able to do this with one key per DB - but Kenn White (our security Guru) has pointed out that unless you’re using FLE and are extremely careful - this is a massive security risk.",
"username": "Michael_Lynn"
},
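Since FLE is mentioned above as the careful path, here is a rough sketch of the pattern that does support per-tenant keys: Client-Side Field Level Encryption (MongoDB 4.2+, PyMongo 3.10+ with the pymongocrypt package), where one KMS master key wraps a separate data key per tenant. All names, ARNs, and credentials below are placeholders:

```python
from pymongo import MongoClient
from pymongo.encryption import ClientEncryption
from bson.codec_options import CodecOptions
from bson.binary import STANDARD

kms_providers = {
    "aws": {"accessKeyId": "<AWS_KEY>", "secretAccessKey": "<AWS_SECRET>"}
}
client = MongoClient("mongodb://localhost:27017")  # placeholder

client_encryption = ClientEncryption(
    kms_providers,
    "encryption.__keyVault",  # key vault namespace
    client,
    CodecOptions(uuid_representation=STANDARD),
)

# One data key per tenant; each can be wrapped by a different KMS master key.
tenant_key_id = client_encryption.create_data_key(
    "aws",
    master_key={
        "region": "us-east-1",
        "key": "arn:aws:kms:us-east-1:123456789012:key/<master-key-id>",
    },
    key_alt_names=["tenant-a"],
)
```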
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Integrating AWS KMS for multi-tenant apps with 1 DB / tenant | 2020-04-07T20:35:18.176Z | Integrating AWS KMS for multi-tenant apps with 1 DB / tenant | 3,068 |
null | [
"cxx"
] | [
{
"code": "bsoncxx::types::b_binaryvector<uint8_t>bsoncxx::document::value",
"text": "I’m currently trying to write C++ code with which I can store and retrieve blobs (binary large objects) into my database.So far, I successfully wrote regular arrays, but I have doubts about whether they are stored in a dense form. Or am I mistaken here and I could simply write an array with the data being encoded in some integer type and be done with it?\nI took a look at bsoncxx::types::b_binary, but the documentation is a little bit short.\nSo far, I managed to find no example or tutorial that deals with blobs.In my case in particular, amongst other things, I need to store the data from PNG images in the database. What is the best practice for that?If possible, I’d like some basic example code that goes from having the binary data in some reasonable form (like vector<uint8_t>) and declaring some appropriate builder to a finished bsoncxx::document::value.\nI’d also be interested in the best practice of retrieving the value, especially how to have the least copy operations in order to arrive at the binary data being in the original form as above.\nBut in any case, any sort of help would be appreciated.(Note that I just began using mongocxx and MongoDB in general, and not completely familiar with the full terminology.)",
"username": "Ksortakh_Kraxthar"
},
{
"code": "bsoncxx::types::b_oid write_blob()\n{\n mongocxx::uri uri(\"mongodb://localhost:27017\");\n mongocxx::client client(uri);\n mongocxx::database database = client[\"test_database\"];\n mongocxx::collection collection = database[\"test_collection\"];\n\n std::vector<uint8_t> elements = {7, 8, 9};\n auto doc = bsoncxx::builder::basic::document{};\n doc.append(bsoncxx::builder::basic::kvp(\"data\", [&elements](bsoncxx::builder::basic::sub_array child) {\n\tfor (const auto& element : elements) {\n\t\tchild.append(element);\n\t}\n }));\n\n bsoncxx::document::value value = doc.extract();\n\n bsoncxx::stdx::optional<mongocxx::result::insert_one> result =\n collection.insert_one(value.view());\n\t\t\n std::cout << \"id is: \" << result->inserted_id().get_oid().value.to_string() << std::endl;\n\n return result->inserted_id().get_oid();\n}\nvoid read_blob(bsoncxx::types::b_oid hash)\n{\n mongocxx::uri uri(\"mongodb://localhost:27017\");\n mongocxx::client client(uri);\n mongocxx::database database = client[\"test_database\"];\n mongocxx::collection collection = database[\"test_collection\"];\n\n bsoncxx::stdx::optional<bsoncxx::document::value> value = collection.find_one(bsoncxx::builder::stream::document{} << \"_id\" << hash << bsoncxx::builder::stream::finalize);\n \n for(size_t i=0; i<value->view()[\"data\"].length(); i++)\n {\n if(value->view()[\"data\"][i].type() == bsoncxx::type::k_int32)\n\t{\n\t\tstd::cout << value->view()[\"data\"][i].get_int32() << std::endl;\n\t}\n }\n}\nlength()",
"text": "My current attempt is to work with arrays. The code I use so far is for writing data:and for receiving data:Now this works to a certain degree so far, but I have several issues with this:Ideally, I’d like to see somebody to rewrite my functions.",
"username": "Ksortakh_Kraxthar"
}
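The question is about the C++ driver, but the underlying answer is the same in every driver: store the bytes as a BSON binary value (subtype 0) rather than as an array of int32 elements, which is neither dense nor type-faithful. In mongocxx, the corresponding type is bsoncxx::types::b_binary, constructed from a subtype, a byte count, and a pointer to the bytes. For illustration of the round trip, a PyMongo sketch (the file name and collection are placeholders):

```python
from bson.binary import Binary
from pymongo import MongoClient

collection = MongoClient("mongodb://localhost:27017")["mydb"]["images"]  # placeholder

with open("picture.png", "rb") as f:
    payload = f.read()

# Stored densely as BSON binary, subtype 0 (generic).
inserted_id = collection.insert_one(
    {"name": "picture.png", "data": Binary(payload)}
).inserted_id

doc = collection.find_one({"_id": inserted_id})
assert bytes(doc["data"]) == payload  # Binary is a bytes subclass
```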
] | MongoCxx: how to insert blobs? | 2020-04-06T13:13:29.869Z | MongoCxx: how to insert blobs? | 3,748 |
null | [
"python"
] | [
{
"code": "",
"text": "mongodb query return command cursor .to iterate the cursor and get the data its very slow takes around 90 sec for 250 records .Can someone helpusing pymongo 3.9.0",
"username": "Yogita_Pal"
},
{
"code": "",
"text": "Hi @Yogita_Pal,First of all, welcome to the MongoDB Community.\nCould you share your code?All the best,Rodrigo (a.k.a. Logwriter)",
"username": "logwriter"
},
{
"code": "cursor= db.collection.aggregate(\n [\n {\n '$addFields':\n {\n\n 'yearSubstring': {'$substr': [\"$__json.created_on\", 0, 10]},\n }\n },\n {\n '$match': {\n '$and': [{'yearSubstring': {'$gte': '12/01/2019'}}, {'yearSubstring': {'$lte': '12/02/2019'}}]\n }\n }\n\n ])\n",
"text": "#query take around 4-5 seconds\nwhile cursor and cursor.alive:\nlist_element= cursor.next()\nitems.append(list_element)\n#but this while to read each cursor data take around 100 seconds",
"username": "Yogita_Pal"
},
{
"code": "",
"text": "Just call list(cursor) to create a list of the results.",
"username": "Bernie_Hackett"
},
{
"code": "",
"text": "I had already done that before and tried again it also takes same takes same time .what is standard bench mark for reading 1000 records from mongodb server using pymongo 3.9 .",
"username": "Yogita_Pal"
},
{
"code": "python3 -c 'import bson;print(bson.has_c())'python3 -m cProfile -s time myscript.py",
"text": "90 seconds for 250 records does not seem normal to me.what is standard bench mark for reading 1000 records from mongodb server using pymongo 3.9Assuming these are small documents (<1KB) they can all be returned in a single network roundtrip to the server and the total time should be roughly equal to the network latency.Although in general the answer depends on a number of factors:A final note, you can use cProfile to determine where the CPU time (not I/O time) is being spent:",
"username": "Shane"
}
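A quick, self-contained way to check two of the factors Shane mentions (whether the C extensions are active, and how long the raw fetch takes) could look like this; the connection string and collection are assumptions based on the thread:

```python
import time
import bson
from pymongo import MongoClient

print("bson C extensions loaded:", bson.has_c())

collection = MongoClient("mongodb://localhost:27017")["mydb"]["mycoll"]  # placeholder
start = time.monotonic()
items = list(collection.find().limit(250))
print(f"fetched {len(items)} docs in {time.monotonic() - start:.2f}s")
```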
] | Cursor iteration is very slow in pymongo | 2020-03-17T11:35:00.158Z | Cursor iteration is very slow in pymongo | 9,615 |
null | [
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Hi,I´m currently trying to migrate my realm object server on a self hosted instance to realm cloud. I could not find any online documentation on how to transfer the data to the cloud since do not have access to the cloud instances file system. What are the options there, is there a migration guide? I´m running on the latest ROS server.",
"username": "Christian_Huck"
},
{
"code": "",
"text": "Welcome @Christian_Huck,Migration should be fairly straightforward if you are on the latest version of ROS, but does require some assistance to restore backups to the cloud.If you reach out to your Account Executive or [email protected] they should be able to provide guidance on options.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks @Stennie_X. I wrote an e-mail to sales.",
"username": "Christian_Huck"
},
{
"code": "",
"text": "@Stennie_X Unfortunately nobody got back to me yet in more than a week ",
"username": "Christian_Huck"
},
{
"code": "",
"text": "@Christian_Huck My colleague who monitors this email has been out, I have replied asking for a time to talk. Thank you!",
"username": "Ian_Ward"
}
] | RealmSwift to Realm Platform Migration | 2020-03-31T06:02:44.292Z | RealmSwift to Realm Platform Migration | 2,340 |
null | [
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "I just signed up for what I thought was the free trial on Realm and now I’ve received an invoice for $30.I only signed up to learn how it works to help answer a question on StackOverflow. I don’t want to subscribe to this.Now there seems to be no way to cancel the subscription through the website.How is that legal?!?!?!?Can someone let me know how to cancel my subscription and the payment that has just been invoiced. I really don’t want to have to cancel my card to stop this payment.Thanks",
"username": "Oliver_Foggin"
},
{
"code": "",
"text": "@Oliver_Foggin Please email [email protected] with your account information and we will cancel it",
"username": "Ian_Ward"
}
] | Cancel Realm.io payment | 2020-04-09T17:22:24.035Z | Cancel Realm.io payment | 2,624 |
null | [
"dot-net",
"change-streams"
] | [
{
"code": " var options = new ChangeStreamOptions();\n options.FullDocument = ChangeStreamFullDocumentOption.UpdateLookup;\n\n using (var cursor = this.Database.Watch(options))\n {\n while (cursor.MoveNext())\n {\n if (!cursor.Current.Any())\n {\n break;\n }\n\n using (var enumerator = cursor.Current.GetEnumerator())\n {\n while (enumerator.MoveNext())\n {\n var document = enumerator.Current;\n // evaluating the object\n }\n }\n }\n }\n[21:37:23 INF] CHANGE STREAM DOCUMENT:\n**BackingDocument:** [\"_id={ \\\"_data\\\" : \\\"825E8A902B000000022B022C0100296E5A1004B805E49E7E3F45C781C4AB942B479670465A5F6964005A100408D7D9CFCB7D62096892F2000172E3CF0004\\\" }\", \"operationType=update\", \"clusterTime=6812415900709289986\", **\"fullDocument=BsonNull\"**, \"ns={ \\\"db\\\" : \\\"submission\\\", \\\"coll\\\" : \\\"SubmissionContainers\\\" }\", \"documentKey={ \\\"_id\\\" : CSUUID(\\\"08d7d9cf-cb7d-6209-6892-f2000172e3cf\\\") }\", \"updateDescription={ \\\"updatedFields\\\" : { \\\"lastModified\\\" : \\\"2020-04-06T02:12:58.7825657\\\", \\\"submission\\\" : { \\\"effectiveDate\\\" : \\\"2020-04-08\\\", \\\"workersCompensation\\\" : { \\\"employersLiability\\\" : { \\\"eachAccident\\\" : 100000, \\\"eachEmployee\\\" : 100000, \\\"eachPolicy\\\" : 500000 }, \\\"legalEntities\\\" : [{ \\\"businessType\\\" : \\\"LimitedLiabilityCompany\\\", \\\"states\\\" : [{ \\\"code\\\" : \\\"Colorado\\\", \\\"experienceModification\\\" : { \\\"factor\\\" : NumberDecimal(\\\"1\\\") }, \\\"locations\\\" : [{ \\\"exposure\\\" : [{ \\\"payroll\\\" : 300000, \\\"class\\\" : \\\"9083\\\", \\\"rate\\\" : NumberDecimal(\\\"0\\\"), \\\"hazardGroup\\\" : \\\"\\\\u0000\\\", \\\"overrideRate\\\" : false, \\\"state\\\" : \\\"0\\\" }], \\\"fullTimeEmployeeCount\\\" : 1, \\\"partTimeEmployeeCount\\\" : 1, \\\"address\\\" : { \\\"line1\\\" : \\\"P.O. Box 100\\\", \\\"city\\\" : \\\"Broomfield\\\", \\\"zip\\\" : \\\"80020\\\", \\\"state\\\" : \\\"Colorado\\\" } }] }], \\\"name\\\" : \\\"PPTEST: Testing 001\\\", \\\"taxId\\\" : \\\"121111111\\\" }] }, \\\"contacts\\\" : [{ \\\"firstName\\\" : \\\"ProdIshop\\\", \\\"lastName\\\" : \\\"Agent\\\", \\\"email\\\" : \\\"[email protected]\\\", \\\"type\\\" : 5 }], \\\"namedInsured\\\" : \\\"PPTEST: Testing 001\\\" } }, \\\"removedFields\\\" : [] }\"],\n**FullDocument: null,**\nDocumentKey: [\"_id=UuidStandard:0x08d7d9cfcb7d62096892f2000172e3cf\"],\nResumeToken: [\"_data=825E8A902B000000022B022C0100296E5A1004B805E49E7E3F45C781C4AB942B479670465A5F6964005A100408D7D9CFCB7D62096892F2000172E3CF0004\"],\nUpdateDescription: MongoDB.Driver.ChangeStreamUpdateDescription,\nCollectionNamespace: submission.SubmissionContainers,\nOperationType: Update,\nClusterTime: 6812415900709289986\n",
"text": "I’m getting a null fullDocument on a change stream update event from the C# Mongo Change Streams API when ChangeStreamFullDocumentOption.UpdateLookup was set in the options. What are the ways in which this can happen? Can it happen if someone updates the object and sets it to an empty or null document?Example code:Here is the object that I was able to print out:",
"username": "Jeremy_Buch"
},
{
"code": "",
"text": "Most of the time things are working fine, but it seems like there are certain (usually human) operations that happen from developers in the test environments or when manually cleaning up data that causes these null documents on updates - is this just a bug in the C# APIs?@wan - any ideas or pointers to others who may know?",
"username": "Jeremy_Buch"
},
{
"code": "updateDescription",
"text": "Hi @Jeremy_Buch,What are the ways in which this can happen? Can it happen if someone updates the object and sets it to an empty or null document?This is the behaviour from the MongoDB server, not from the C# APIs.If there are one or more majority-committed operations that modified the updated document after the update operation but before the lookup, the full document returned may differ significantly from the document at the time of the update operation. In this case, it’s null if another operation deletes the document before the lookup operation happens.The deltas information under updateDescription should still be correct however. See Lookup Full Document for Update Operations for more information.Regards,\nWan.",
"username": "wan"
},
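Since the behaviour is server-side, it is driver-agnostic; for illustration, a PyMongo sketch of the same change stream that handles the null fullDocument case (the namespace is taken from the thread, the URI is a placeholder):

```python
from pymongo import MongoClient

collection = MongoClient("mongodb://localhost:27017")["submission"]["SubmissionContainers"]

with collection.watch(full_document="updateLookup") as stream:
    for change in stream:
        if change["operationType"] == "update" and change.get("fullDocument") is None:
            # The document was deleted (or no longer exists) before the lookup
            # ran; fall back to the per-field deltas, which are still reliable.
            deltas = change["updateDescription"]["updatedFields"]
            print("document gone at lookup time; deltas:", deltas)
        else:
            print(change["operationType"], change.get("fullDocument"))
```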
{
"code": "",
"text": "Wan,Thanks for responding. I hadn’t seen this kind of issue other than dev-initiated bulk deletes with robo3t, so it may be changing the object and then deleting it, but either way the scenario is a match for what I’m seeing. I saw the documentation and had assumed that this was not relevant since I had tested this case manually by issuing an insert, a delete, a recreation and an update and the oplog was returning accurate results each step along the way even though I didn’t call it until after all of the operations had been applied. Does this really only show up when the number of deletes is excessive?At any rate, this is the scenario that I’m seeing since I know it was dev deletes - thanks for confirming since my simple tests early on showed that I could see through deletes for simple operations. Because of this, I wasn’t expecting this to still be an outstanding issue.Thanks Wan!\nJeremy Buch",
"username": "Jeremy_Buch"
},
{
"code": "",
"text": "Hi @Jeremy_Buch,Does this really only show up when the number of deletes is excessive?It’s not quite about excessiveness, it’s about operations interleaving for the same document. For example:Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "OK - thanks Wan.I have literally tested exactly that scenario on a small scale and did not see the loss of update information, so I’ll need to repeat that and confirm. I stopped the stream reader, did an update, a delete and a recreate of the same object with different data and then started the stream reader and caught up on records - I was able to see all of the interactions and the accurate state of the object at each step. I saw this note in the documentation before and was concerned by it, so I tested it and didn’t see it exhibit itself.I’ll re-run my tests to understand it better, but either way it shouldn’t be a blocker for us since we don’t do deletes regularly as part of the workflow (other than manual dev operations).Thanks Wan!",
"username": "Jeremy_Buch"
}
] | How can change stream update operations come with null fullDocument (when ChangeStreamFullDocumentOption.UpdateLookup was used)? | 2020-04-08T15:26:38.900Z | How can change stream update operations come with null fullDocument (when ChangeStreamFullDocumentOption.UpdateLookup was used)? | 6,256 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "When I try and install MongoDB.Driver with NuGet in Visual Studio 2019 I get the following error:Error\t\tCould not install package ‘runtime.debian.8-x64.runtime.native.System.Security.Cryptography.OpenSsl 4.3.2’. You are trying to install this package into a project that targets ‘.NETCoreApp,Version=v2.1’, but the package does not contain any assembly references or content files that are compatible with that framework. For more information, contact the package author.",
"username": "Ian_Blakeley"
},
{
"code": "",
"text": "Hi @Ian_Blakeley,Can you check which MongoDB.Driver was installed, I’ve just tested with a new Dotnet Core app with the latest driver [2.10.3] and it’s working fine reading a list of collections from an existing Atlas database for me.The only difference I can see at the moment is my Dotnet Core is 3.1.201 and the error above is referencing 2.1.",
"username": "Will_Blackburn"
},
{
"code": "",
"text": "The error occurs when I am trying to install MongoDB.Driver (ver 2.10.3) ie I am unable to install the driver… my project is .net core 3.1.201",
"username": "Ian_Blakeley"
}
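One detail worth checking here, since the NuGet error mentions `.NETCoreApp,Version=v2.1` while the installed SDK is 3.1.201: NuGet resolves packages against the project's `<TargetFramework>` in the .csproj, not against the SDK version. A project file still containing `<TargetFramework>netcoreapp2.1</TargetFramework>` would produce exactly this mismatch, so updating it to `netcoreapp3.1` (or, if 2.1 is intentional, installing the 2.1 targeting pack) is the usual first thing to try.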
] | Error installing .net driver for .net core project | 2020-04-09T06:59:47.787Z | Error installing .net driver for .net core project | 3,030 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hi everyone,I am just starting with mongo, if you can help me with a query, I have a collection that your foreign field is an array elements, I need to join it with your table to fetch your datathanks :)))",
"username": "Sergio_Cordova"
},
{
"code": "$lookupforeignField$lookup: {\n from: < collection to join with - must be from same database and must not be sharded >,\n localField: < field from the input documents >,\n foreignField: < field from the documents of the 'from' collection >,\n as: < output array field >\n}\n",
"text": "With the $lookup aggregation pipeline stage you can use the array field to do the match.The $lookup has the following syntax; you can specify an array field in the foreignField parameter.",
"username": "Prasad_Saya"
}
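To illustrate the array case: when the foreignField is an array, $lookup matches on array membership. A sketch in PyMongo with hypothetical orders/products collections (all names are assumptions):

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["mydb"]  # placeholder

# Suppose each product stores the order ids that reference it in an array:
#   products: { _id: ..., name: "pen", order_ids: [1, 2] }
#   orders:   { _id: 1, customer: "ada" }
pipeline = [
    {
        "$lookup": {
            "from": "products",
            "localField": "_id",          # scalar on the orders side
            "foreignField": "order_ids",  # array on the products side
            "as": "products",             # matched products land here
        }
    }
]
for order in db.orders.aggregate(pipeline):
    print(order["_id"], [p["name"] for p in order["products"]])
```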
] | Help join collection with foreign is array element | 2020-04-08T15:25:02.436Z | Help join collection with foreign is array element | 2,552 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.0.18-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.0.17. The next stable release 4.0.18 will be a recommended upgrade for all 4.0 users.Fixed in this release:4.0 Release Notes | All Issues | DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Kelsey_Schubert"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.0.18-rc0 is released | 2020-04-09T03:30:05.209Z | MongoDB 4.0.18-rc0 is released | 1,900 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "If an app is mostly client data that is synced, and has some data that should not be synced but needs to be queried, is it reasonable to stick with one database - Realm - for the latter.For instance, let’s say I have a product catalog I don’t want synced on my clients, but I want to be able to query. Does it make sense to use Realm for this, and is there a good way to query the Realm server directly to make an API?",
"username": "crystaln"
},
{
"code": "",
"text": "Realm is live-sychronized database so there’s no way to only store Realm data in the cloud (currently). It will always be sychronized - one copy locally and the other copy in Realm Cloud.Once MongoDB Realm becomes and actual thing, that may open up additional options but that’s a long ways off currently.If you have data that should be available to all clients but not changable by the client, you may want to bundle it with your app as a Realm file, or if it’s a lot of data, look to another source. I would suggest looking at Firebase for online only storage",
"username": "Jay"
},
{
"code": "",
"text": "Hey @crystaln - there is no problem using multiple realm files in a single app, one for sync, and one for local, you simply open two and pass them different configurations - a SyncConfiguration for the former, a regular Configuration for the latter. Be sure to name them different variables that make sense to you - ie. localRealm and syncRealm.I would not say MongoDB Realm is a long way off - we are pretty close actually.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks Ian. I assume by “local” you mean local to a device. I’m referring to data like a product catalog, or the entire list of user generated content, that would be on the server.Let’s say I have:My app will have something similar to both of those.For UGC, my understanding is a client can query those relationally and fetch only matching records, so Realm makes sense for that.For the product catalog, would it be possible to do the same, and just import the data using a pseudo client or server API, then do searches using Realm client API?I don’t think I want all users to have the full database. That seems like a privacy, size, and performance issue, tho realistically it is feasible in the short term, it seems like not a great design.",
"username": "crystaln"
},
{
"code": "",
"text": "@crystaln Ah I see, yes by “local” I meant non-synced. I understand now.You can separate different types of data into different full-sync realms and open each with a different realm URI path - for example, /productCatalog and /~/userContent - productCatalog will be a global realm that is read-only, /~/userContent is a realm that the user has read-write permissions toKey to Realm Platform is the concept of a synchronized Realm. This guide discusses how to get started with a synced Realm.As long as you are not storing images in the productCatalog it should be fine to sync that down to the client - we have users that have millions of objects in a product catalog - no problem.Of course, I can understand for privacy concerns in other use cases you may not want to sync down all that data just to query it locally. To accomplish this you’ll probably need to stand up some web endpoint that runs the query when called by the client. In the future MongoDB Realm product, you will be able to call a function from the client without having to standup a web server.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward I am really confused by what you are suggesting here. Maybe I am misinterpreting what you’re suggesting but saying thisit should be fine to sync that down to the client - we have users that have millions of objects in a product catalog - no problem.leads one to think there is some control over how much data is ‘sync’d down to the client’ which isn’t really how it works. No? Isn’t it all or none?As you know, Query/Partially sync’d realms are not going to be supported and are going away. Any Realm stored in the ‘cloud’ is a/will be a Fully Sync’d realm - that means ALL of the data is stored both locally on the device as well as in Realm Cloud.If you have 100Gb of products in your product catalog, and an iPhone with 32G Capacity, what would happen?Of course, I can understand for privacy concerns in other use cases you may not want to sync down all that data just to query it locally.Is that possible with a fully sync’d realm? How does one ‘not sync down all that data’?To accomplish this you’ll probably need to stand up some web endpoint that runs the query when called by the client.Is that documented somewhere as it would be invaluable when dealing with very large datasets (as we are). Is there something in the Swift SDK that allows a query to be run against a Realm without actually being connected to that realm?Oh andI would not say MongoDB Realm is a long way off - we are pretty close actually.Pretty close to you and pretty close to us are way different things. We have an app that we want to bring to market but because of the unknown timeframe and changes, we have to hold off - and have been holding off for 6 months so far. Another year is 18 months and thats a ‘long way off’.",
"username": "Jay"
},
{
"code": "",
"text": "How would you implement something like Facebook’s visibility settings, where a post can be visible only to certain people or even just to me? What if I as a user decide to unpublish an object?",
"username": "crystaln"
},
{
"code": "",
"text": "I’m also confused by the apparent deprecation of query-synced realms, and this is making me wonder whether this is an appropriate platform for my application as I start planning the schema.Let’s say as you suggest I have a realm for public chats. What if a user gets banned from one public chat? They still have all the data because the realm is fully synced, and it seems like there is no way to control different levels of permission by user, so I guess they probably have full write privileges too. So every chat needs to be in a different realm. Oh and any admin or otherwise privileged data needs to be in yet another realm, what about private chats, and anything else with different permission levels. None of the data can be linked easily, and I need to manage figuring out which realm all the data is in. That’s a bit of a silly example, however the idea that in order to separate access control to different data, I need to separate the data into essentially different database, seems really poor.Without query based sync and object level permissions, I don’t understand how Realm can be used for any complex application, where view and edit privileges are not so simple.Even your team example in the “App Architecture With Full Sync” article seems contrived. Isn’t there some data that everyone on the team shouldn’t see? Can’t some people on the team edit some data and others edit other data?I’m really struggling to understand how Realm with Full Sync doesn’t make my life even harder than API based apps.Am I missing something? Should I just use Firestore, despite its query and search limiitations?",
"username": "crystaln"
},
{
"code": "",
"text": "Yes, we are aware of how valuable query-based sync is, but unfortunately, the performance was incredibly poor and after careful analysis, no amount of iterative improvement could get it to the level we wanted - so we decided to remove it in the new product and then rebuild it from the ground up to be scalable.I would encourage you to take a look at our published roadmap, not only for a timeframe of the launch of MongoDB Realm but also for the re-launch of something akin to query-based syncMongoDB’s developer data platform is flexible, scalable, and ensures that you can deliver reactive application experiences on mobile devices.Building a chat app is complicated with our without Realm, but using full-sync requires a level of backend complexity (copying chat objects around to different realms, keeping metadata on which objects belong to which realms) that most people opt for a 3rd party provider.The original post was on a product catalog or inventory application - which we have done multiple times with full sync. If your product catalog is 100GB, that will not work, and you will need to think of a way to partition that data into smaller realms or find another solution, such as a GET endpoint + query parameter or the command pattern.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Yes, the original post was a simplified example for my application. Realm without query sync seems too limiting to serve as the primary database for any application I am working on (or frankly any non-trivial application I can think of). There are already cases where it’s going to immensely complicate things for me, and I’m certain that I will feel my hands are tied in the future.I’m frankly frustrated that I delved so far into exploring Realm before understanding this and wish you made the fact that the product is going through such a large transition more clear. These limitations seem critical to share up front. Simply saying “query based sync is not recommended” is an extreme understatement as to the impact of this on new developers.I look forward to your relaunch of something like query based sync. Realm + MongoDB seems like a killer combo. Sadly I can’t develop on those promises.Thanks for your answers here.",
"username": "crystaln"
}
] | Realm for non-client data | 2020-04-07T07:12:40.712Z | Realm for non-client data | 3,611 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "I’m new in mongodb world, i need to create a structure where a person has a job so person is the db and job is one of it’s collection, inside this collection i need to store for example “it developer” but this it developer has different fields, like ‘web designer’, ‘game designer’ etc… so how can i use mongo db for create this structure?\nfor now i’ve just created the collection ‘jobs’ but i don’t know how to proceed",
"username": "Lorenzo_Tarquini"
},
{
"code": "",
"text": "I think you should take some courses https://university.mongodb.com/.",
"username": "steevej"
}
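A minimal sketch (PyMongo) of one common way to approach this: a single "jobs" collection where each document carries a discriminator field and only the attributes relevant to that job type. All field names here are illustrative assumptions, not from the original question.

```python
from pymongo import MongoClient

jobs = MongoClient("mongodb://localhost:27017")["person"]["jobs"]

# Each document has a "field" discriminator plus its own attributes.
jobs.insert_many([
    {"title": "it developer", "field": "web designer", "tools": ["HTML", "CSS"]},
    {"title": "it developer", "field": "game designer", "engine": "Unity"},
])

# Documents with different shapes can still be queried together:
for job in jobs.find({"title": "it developer"}):
    print(job)
```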
] | How to design collection structure with relationships | 2020-04-08T19:28:30.530Z | How to design collection structure with relationships | 1,646 |
null | [
"data-modeling",
"security",
"legacy-realm-cloud"
] | [
{
"code": "",
"text": " Hey, I’m researching Realm for a new project and I’m not quite sure I fully understand some architecture / security points of the system.In many situations when storing data in any team or multi-user scenario I wish to also store a related user along with said data - the easy example being a real-time chat application in which you need to see which user sent which message.I don’t see any obvious way using Realm to prevent a malicious user from connecting to the database with their credentials and inserting data impersonating another user. Perhaps I’m missing something or I don’t fully understand Realm’s architecture? If anyone can direct me to the correct solution for keeping user-data and object ownership secure that would be really great.",
"username": "Henry_Sipp"
},
{
"code": "",
"text": "I have the same question so many times over. With query based sync and object level permissions, which word is they are deprecating for full sync, everything seems possible.I don’t know how to architect a system with full sync in any way that makes sense.",
"username": "crystaln"
}
] | User ownership of objects in a shared realm? | 2020-04-03T21:41:23.292Z | User ownership of objects in a shared realm? | 2,433 |
null | [
"python"
] | [
{
"code": "import sys\nimport pymongo.mongo_client\nfrom PyQt5 import QtCore, QtWidgets\n\n\nconnection = pymongo.MongoClient('localhost', 27017)\ndatabase = connection['bremi691_ead']\ncollection = database['usuarios']\n\n\nglobal first_doc\nglobal previous_doc\nglobal next_doc\nglobal last_doc\n\n\nclass Ui_Dialog(object):\n\n def __init__(self):\n self.lineEdit_name = QtWidgets.QLineEdit(Dialog)\n self.lineEdit_email = QtWidgets.QLineEdit(Dialog)\n self.lineEdit_pwd = QtWidgets.QLineEdit(Dialog)\n self.lineEdit_market = QtWidgets.QLineEdit(Dialog)\n\n self.pushButton_first = QtWidgets.QPushButton(Dialog)\n self.pushButton_previous = QtWidgets.QPushButton(Dialog)\n self.pushButton_next = QtWidgets.QPushButton(Dialog)\n self.pushButton_last = QtWidgets.QPushButton(Dialog)\n\n def setupUi(self, Dialog):\n Dialog.setObjectName(\"Dialog\")\n Dialog.resize(448, 300)\n\n self.lineEdit_name.setGeometry(QtCore.QRect(130, 50, 241, 21))\n self.lineEdit_name.setInputMethodHints(QtCore.Qt.ImhUppercaseOnly)\n self.lineEdit_name.setObjectName(\"lineEdit_name\")\n\n self.lineEdit_email.setGeometry(QtCore.QRect(130, 90, 191, 21))\n self.lineEdit_email.setInputMethodHints(QtCore.Qt.ImhEmailCharactersOnly)\n self.lineEdit_email.setObjectName(\"lineEdit_email\")\n\n self.lineEdit_pwd.setGeometry(QtCore.QRect(130, 130, 131, 21))\n self.lineEdit_pwd.setInputMethodHints(QtCore.Qt.ImhSensitiveData | QtCore.Qt.ImhUppercaseOnly)\n self.lineEdit_pwd.setObjectName(\"lineEdit_pwd\")\n\n self.lineEdit_market.setGeometry(QtCore.QRect(130, 170, 131, 21))\n self.lineEdit_market.setInputMethodHints(QtCore.Qt.ImhUppercaseOnly)\n self.lineEdit_market.setObjectName(\"lineEdit_market\")\n\n self.pushButton_first.setGeometry(QtCore.QRect(70, 240, 61, 28))\n self.pushButton_first.setObjectName(\"pushButton_first\")\n self.pushButton_first.clicked.connect(ShowFirst)\n\n self.pushButton_previous.setGeometry(QtCore.QRect(150, 240, 61, 28))\n self.pushButton_previous.setObjectName(\"pushButton_previous\")\n self.pushButton_previous.clicked.connect(ShowPrevious)\n\n self.pushButton_next.setGeometry(QtCore.QRect(230, 240, 61, 28))\n self.pushButton_next.setObjectName(\"pushButton_next\")\n self.pushButton_next.clicked.connect(ShowNext)\n\n self.pushButton_last.setGeometry(QtCore.QRect(310, 240, 61, 28))\n self.pushButton_last.setObjectName(\"pushButton_last\")\n self.pushButton_last.clicked.connect(ShowLast)\n\n self.retranslateUi(Dialog)\n\n def retranslateUi(self, Dialog):\n _translate = QtCore.QCoreApplication.translate\n Dialog.setWindowTitle(_translate(\"Dialog\", \"Dialog\"))\n self.pushButton_first.setText(_translate(\"Dialog\", \"First\"))\n self.pushButton_previous.setText(_translate(\"Dialog\", \"Previous\"))\n self.pushButton_next.setText(_translate(\"Dialog\", \"Next\"))\n self.pushButton_last.setText(_translate(\"Dialog\", \"Last\"))\n\n ShowFirst()\n\n\ndef ShowFirst():\n global first_doc\n coll = collection.find().sort(\"nome\", 1).limit(1)\n for first_doc in coll: ('{0}'.format(first_doc['nome']))\n ui.lineEdit_name.setText(str(first_doc['nome']))\n ui.lineEdit_email.setText(str(first_doc['email'])),\n ui.lineEdit_pwd.setText(str(first_doc['senha'])),\n ui.lineEdit_market.setText(str(first_doc['como_chegou']))\n coll.close()\n\n\ndef ShowPrevious():\n print(\"Insert code to move to previous document\")\n\n\ndef ShowNext():\n print(\"Insert code to move to next document\")\n\n\ndef ShowLast():\n global last_doc\n coll = collection.find().sort(\"nome\", -1).limit(1)\n for last_doc in coll: 
('{0}'.format(last_doc['nome']))\n ui.lineEdit_name.setText(str(last_doc['nome']))\n ui.lineEdit_email.setText(str(last_doc['email'])),\n ui.lineEdit_pwd.setText(str(last_doc['senha'])),\n ui.lineEdit_market.setText(str(last_doc['como_chegou']))\n coll.close()\n\n\nif __name__ == \"__main__\":\n app = QtWidgets.QApplication(sys.argv)\n Dialog = QtWidgets.QDialog()\n ui = Ui_Dialog()\n ui.setupUi(Dialog)\n Dialog.show()\n sys.exit(app.exec_())",
"text": "Hello everyone,Greetings from Brazil! I’m a beginner programmer learning Python 3. I created a form in Qt5 Designer to navigate through my mongoDB database.The insert document form works great. Also, I was able to create 50% of the navigation form funcionality. It has four buttons to display documents: First, Previous, Next and Last. The buttons for First and Last document work, but I’m strugging with Previous and Next.They need to step forward and back in the collection every time the buttons are pressed and I can’t figure out how to do this. Suggestions are welcome! Please refer to my code below. I really appreciate your time and help. Thanks a lot.Best regards.What I have tried:",
"username": "JC_Carmo"
},
{
"code": "",
"text": "Hi @JC_Carmo,They need to step forward and back in the collection every time the buttons are pressed and I can’t figure out how to do this.Essentially this is similar to performing a pagination, although in your case it’s just one document being displayed at a time.You need to record a unique id for the document on display to keep track of where the display is at. For previous or next action, you could retrieve a document from the database with either previous/next id in the collection. This should be similar to an example in Using Range Queries just without the skip and limit.Depending on your use case, you could also retrieve a batch of documents (i.e. 20) to prevent the application running single query to the database for every click action.Regards,\nWan.",
"username": "wan"
},
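A minimal sketch (PyMongo) of the range-query approach described above, using the "nome" field from the question's code to track position. Tie-breaking for duplicate names (e.g. also comparing _id) is left out for brevity and would be needed in practice.

```python
def show_next(collection, current_name):
    # First document whose sort key is greater than the one on display.
    return collection.find_one({"nome": {"$gt": current_name}},
                               sort=[("nome", 1)])

def show_previous(collection, current_name):
    # First document whose sort key is smaller, scanning backwards.
    return collection.find_one({"nome": {"$lt": current_name}},
                               sort=[("nome", -1)])
```

The button handlers would pass in the name currently shown in lineEdit_name and fill the widgets from the returned document, if any.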
{
"code": "",
"text": "Thanks for your feed-back, Wan! I am going to try your suggestion. Best regards and stay well. ",
"username": "JC_Carmo"
}
] | Pymongo and qt5 designer form - Need code for buttons to move to previous and next documents | 2020-04-01T18:45:36.133Z | Pymongo and qt5 designer form - Need code for buttons to move to previous and next documents | 3,753 |
null | [
"c-driver"
] | [
{
"code": "",
"text": "I want to include and link libmongoc in my program, also I do not like this tutorial below\nhttp://mongoc.org/libmongoc/current/tutorial.html\nI just build and install in a self-defined path, then copy inlude directory and lib directory to my project and in CMakeLists.txt I include_directories and link directories separately.\nThere comes a link error:\ncannot find -llibbson-1.0\ncannot find -llibmongoc-1.0\nBut I have libbson-1.0.so in the lib directory, and other 3rd libs are working fine, what is the problem?",
"username": "11115"
},
{
"code": "",
"text": "@11115 please provide the complete command sequence you are using, starting with the build of the C driver. Ensure you include the complete output of the C driver build and the output of your failed program build.",
"username": "Roberto_Sanchez"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Build mongo-c-driver error | 2020-04-08T11:15:41.882Z | Build mongo-c-driver error | 1,898 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Please I need to design a database for classifieds site\nManaging Categories and Subcategories\nManaging Buyers, Sellers, and Users\nAdding and Retrieving Category Attributes in the Property Table\nManaging Posts and Their Attributes",
"username": "cherif_bayazine"
},
{
"code": "",
"text": "Hi @cherif_bayazine,Can I suggest MongoDB University to help you design your schema? They’ve got all sorts of useful free classes on all topics relating to MongoDB. One of my favorites is M340, which focuses exclusively on schema design for your application.A full class catalog is available here: MongoDB Courses and Trainings | MongoDB UniversityI hope that helps!Thanks,Justin",
"username": "Justin"
},
{
"code": "",
"text": "A good source that helped me is Building with Patterns: A Summary | MongoDB Blog.",
"username": "steevej"
},
{
"code": "",
"text": "Please I need to design a database for classifieds siteData modeling with databases, any kind, initially involves finding out the entities, relationships (the 1:1, 1:N and N:N) between the entities, and the entity attributes. With MongoDB document model the main aspects to consider are the elements of a document and whether to use embedding and/or referencing. These are the things that come to mind, first, when modeling for the MongoDB database.Useful references:",
"username": "Prasad_Saya"
},
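A hypothetical sketch contrasting the two options named above, embedding and referencing, for a classifieds post; all field names are invented for illustration.

```python
# Embedding: category attributes live inside the post document, so a
# single read returns everything.
post_embedded = {
    "title": "2015 sedan",
    "category": "vehicles",
    "attributes": {"km": 60000, "body_type": "sedan", "brand": "Toyota"},
}

# Referencing: the post stores only a category identifier; the details
# are looked up in a separate "categories" collection when needed.
post_referenced = {
    "title": "2015 sedan",
    "category_id": "vehicles",  # _id of a document in db.categories
}
```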
{
"code": "",
"text": "thanks brother :قلب:",
"username": "cherif_bayazine"
}
] | Designing classifieds data model | 2020-04-07T21:39:45.553Z | Designing classifieds data model | 2,324 |
[
"dot-net",
"production"
] | [
{
"code": "# .NET Driver Version 2.10.3 Release Notes\n\nThis is a patch release that fixes several bugs reported since 2.10.2 was released.\n\nAn online version of these release notes is available at:\n\nhttps://github.com/mongodb/mongo-csharp-driver/blob/master/Release%20Notes/Release%20Notes%20v2.10.3.md\n\nThe list of JIRA tickets resolved in this release is available at:\n\nhttps://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.10.3%20ORDER%20BY%20key%20ASC\n\nDocumentation on the .NET driver can be found at:\n\nhttps://mongodb.github.io/mongo-csharp-driver/\n\n## Upgrading\n\nThere are no known backwards breaking changes in this release.\n",
"text": "This is a patch release that fixes several bugs reported since 2.10.2 was released.An online version of these release notes is available at:The list of JIRA tickets resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.10.3%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:There are no known backwards breaking changes in this release.",
"username": "Vincent_Kam"
},
{
"code": "",
"text": "",
"username": "system"
}
] | .NET Driver 2.10.3 Released | 2020-04-07T20:11:42.962Z | .NET Driver 2.10.3 Released | 1,652 |
|
null | [
"compass"
] | [
{
"code": "",
"text": "I built an aggregation pipeline using the Compass aggregation pipeline builder. Saved it. But sadly, when I try to load it, it crashes my Compass which gets stuck completely.Where would I find it, that I can salvage my today’s work? Is it saved in the filesystem somewhere? Or in the database?/Christoph",
"username": "Christoph_Lange"
},
{
"code": "",
"text": "Hi Christoph,If you are using versions of Compass older than 1.21 (which is currently in beta), the saved aggregations are stored in IndexedDB format in the local application preferences and are not easily accessible. In Compass 1.21, saved pipelines have moved to the filesystem (see: Where are saved aggregations in Compass? - #3 by Stennie_X).However, Compass should not crash or hang when loading a saved aggregation pipeline.Please file a bug report in the MongoDB Jira issue tracker with more details including your specific version of Compass and any error message or behaviour related to this problematic pipeline: http://jira.mongodb.org/browse/COMPASS.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks, Stennie.Found the pipelines, and removed the ‘$merge’ stage from the end. Now it works again: if it times out, it just properly shows it with red frames etc. But the GUI is still usable.With the $merge in the end, it would load, time out, then block the GUI with up to 200% CPU load.",
"username": "Christoph_Lange"
},
{
"code": "",
"text": "Hi Christoph,Glad you were able to find a quick fix!Thanks for filing a bug report as well: https://jira.mongodb.org/browse/COMPASS-4233.Regards,\nStennie",
"username": "Stennie_X"
}
] | Where are the aggregation pipelines from Compass saved? | 2020-04-07T17:35:52.722Z | Where are the aggregation pipelines from Compass saved? | 5,306 |
null | [
"change-streams"
] | [
{
"code": "",
"text": "Hello team can you help me how we can trigger change stream event from primary node only\nthis is because because I move to production server to achieve high ability of server\nI have deploy same code over 4 server when i perform any CRUD operation in any of one server the change stream trigger from all the 4 server so what i want to be it should be trigger from the same server where operation are perform or it should be trigger form primary node\nCan any one help me to how to go with this issue\nThanks in advanced",
"username": "neeraj_bishst"
},
{
"code": "_id",
"text": "can you help me how we can trigger change stream event from primary node onlyHi Neeraj,Welcome to the MongoDB community!If you have deployed four copies of your code watching the same change stream, it is expected that all will get the same events.Your application logic needs to coordinate how concurrent watchers should interact. For example, you could have only one watcher actively processing at any given point or perhaps push unique change events onto a distributed work queue where multiple workers could process events.Change streams are designed to be resumable, so you can use the resume token (_id of the change stream event document) to resume notifications after a failover or restart of your watcher application.Regards,\nStennie",
"username": "Stennie_X"
},
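A minimal PyMongo sketch of resuming with the resume token mentioned above. The state collection and its name are assumptions for illustration, not driver requirements.

```python
import pymongo

client = pymongo.MongoClient("mongodb://localhost:27017")
state = client["mydb"]["watcher_state"]  # hypothetical state collection

def load_saved_token():
    doc = state.find_one({"_id": "resume"})
    return doc["token"] if doc else None

def save_token(token):
    state.replace_one({"_id": "resume"},
                      {"_id": "resume", "token": token}, upsert=True)

# resume_after=None simply starts a fresh stream on first run.
with client["mydb"].watch(resume_after=load_saved_token()) as stream:
    for change in stream:
        print(change["operationType"])   # application processing goes here
        save_token(change["_id"])        # persist so a restart resumes here
```

With this pattern, only one watcher should actively process at a time; others can stand by and take over from the persisted token on failover.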
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How we can trigger change stream event from primary node only? | 2020-04-07T17:36:02.296Z | How we can trigger change stream event from primary node only? | 2,368 |
null | [
"php"
] | [
{
"code": "$this->check(\"192_168_223_136_old\", \"Application\", ['MachineName'=>'Client1 ']);\n$this->check(\"192_168_223_136_old\", \"Application\", [\"MachineName\" => \"Client1.evilzone.h4niz\"]);\npublic function check($db, $col, $filter)\t{\n\t$collection = (new MongoDB\\Client)->$db->$col;\n\t\n\t$rs = $collection->find($filter);\n\n\tforeach ($rs as $r) {\n\t\t# code...\n\t\tprint_r(var_dump($r) . \"<br>\");\n\t}\n}\n#Get WntEvt\nfunction GetEventLog\n{\n param([String]$path)\n\n\n \"[+] - Getting Windows Eventlog ...\" | Out-File -Append -FilePath $path\\Status.txt\n $log = foreach ($tmp in (Get-EventLog -List)){if ($tmp.Entries.Count -gt 0){$tmp}}\n $i = 1\n $lCount = $log.Count\n while ($i -le $lCount)\n { \n if (![String]::IsNullOrEmpty($log[$i].Log)) {[String] $s = $path + '\\' + $log[$i].Log + '.json'}\n $log[$i].Entries | ConvertTo-Json -Compress | Out-File -FilePath $s -Encoding ascii\n $i++\n }\n\n \"[v] -\"+$log[$i].Log+\".json-wrote completed!\" | Out-File -Append -FilePath $dest\\Status.txt\n}\n\nGetEventLog $path\n {\n \"0\": {\n \"MachineName\": \"Client1 \",\n \"Data\": [{\n \"$numberInt\": \"77\"\n }, {\n \"$numberInt\": \"0\"\n }, {\n \"$numberInt\": \"83\"\n }, {\n \"$numberInt\": \"0\"\n }, {\n \"$numberInt\": \"68\"\n }, {\n \"$numberInt\": \"0\"\n }, {\n \"$numberInt\": \"84\"\n }, {\n \"$numberInt\": \"0\"\n }, {\n \"$numberInt\": \"67\"\n }, {\n \"$numberInt\": \"0\"\n }],\n \"Index\": {\n \"$numberInt\": \"509\"\n },\n \"Category\": \"(1)\",\n \"CategoryNumber\": {\n \"$numberInt\": \"1\"\n },\n \"EventID\": {\n \"$numberInt\": \"4111\"\n },\n \"EntryType\": {\n \"$numberInt\": \"4\"\n },\n \"Message\": \"The description for Event ID '1073745935' in Source 'MSDTC' cannot be found. The local computer may not have the necessary registry information or message DLL files to display the message, or you may not have permission to access them. The following information is part of the event:\",\n \"Source\": \"MSDTC\",\n \"ReplacementStrings\": [],\n \"InstanceId\": {\n \"$numberInt\": \"1073745935\"\n },\n \"TimeGenerated\": \"/Date(1584345755000)/\",\n \"TimeWritten\": \"/Date(1584345755000)/\",\n \"UserName\": null,\n \"Site\": null,\n \"Container\": null\n },\n \"1\": {\n \"MachineName\": \"Client1 \",\n \"Data\": [],\n \"Index\": {\n \"$numberInt\": \"510\"\n },\n \"Category\": \"General\",\n \"CategoryNumber\": {\n \"$numberInt\": \"1\"\n },\n \"EventID\": {\n \"$numberInt\": \"327\"\n },\n \"EntryType\": {\n \"$numberInt\": \"4\"\n },\n \"Message\": \"svchost (1032) The database engine detached a database (1, C:\\\\Windows\\\\system32\\\\LogFiles\\\\Sum\\\\Current.mdb). 
(Time=0 seconds)\\r\\n\\r\\n\\r\\n\\r\\nInternal Timing Sequence: [1] 0.000, [2] 0.000, [3] 0.000, [4] 0.000, [5] 0.000, [6] 0.000, [7] 0.000, [8]\n 0.000, [9] 0.000, [10] 0.000, [11] 0.016, [12] 0.000.\\r\\n\\r\\nRevived Cache: 0 0\",\n \"Source\": \"ESENT\",\n \"ReplacementStrings\": [\"svchost\", \"1032\", \"\", \"1\", \"C:\\\\Windows\\\\system32\\\\LogFiles\\\\Sum\\\\Current.mdb\", \"0\", \"[1] 0.000, [2] 0.000, [3] 0.000, [4] 0.000, [5] 0.000, [6] 0.000, [7] 0.000, [8]\n 0.000, [9] 0.000, [10] 0.000, [11] 0.016, [12] 0.000.\", \"0 0\"],\n \"InstanceId\": {\n \"$numberInt\": \"327\"\n },\n \"TimeGenerated\": \"/Date(1584345756000)/\",\n \"TimeWritten\": \"/Date(1584345756000)/\",\n \"UserName\": null,\n \"Site\": null,\n \"Container\": null\n },\n//truncated\n \"183\": {\n \"MachineName\": \"Client1.evilzone.h4niz\",\n \"Data\": [],\n \"Index\": {\n \"$numberInt\": \"692\"\n },\n \"Category\": \"(0)\",\n \"CategoryNumber\": {\n \"$numberInt\": \"0\"\n },\n \"EventID\": {\n \"$numberInt\": \"1003\"\n },\n \"EntryType\": {\n \"$numberInt\": \"4\"\n },\n \"Message\": \"The Software Protection service has completed licensing status check.\\r\\nApplication Id=55c92734-d682-4d71-983e-d6ec3f16059f\\r\\nLicensing Status=\\n1: 4fc45a88-26b5-4cf9-9eef-769ee3f0a016, 1, 1 [(0 [0x00000000, 1, 0], [(?)( 1 0x00000000)(?)( 2 0x00000000 0 0 msft:rm/algorithm/hwid/4.0 0x00000000 0)(?)( 9 0x00000000 180 259048)( 10 0x00000000 msft:rm/algorithm/flags/1.0)(?)])(1 )(2 )]\\n2: 9d0bb49b-21a1-4354-9981-ec5dd9393961, 1, 0 [(0 [0xC004F014, 0, 0], [(?)(?)(?)(?)(?)(?)(?)(?)])(1 )(2 )]\\n\\n\",\n \"Source\": \"Software Protection Platform Service\",\n \"ReplacementStrings\": [\"55c92734-d682-4d71-983e-d6ec3f16059f\", \"\\n1: 4fc45a88-26b5-4cf9-9eef-769ee3f0a016, 1, 1 [(0 [0x00000000, 1, 0], [(?)( 1 0x00000000)(?)( 2 0x00000000 0 0 msft:rm/algorithm/hwid/4.0 0x00000000 0)(?)( 9 0x00000000 180 259048)( 10 0x00000000 msft:rm/algorithm/flags/1.0)(?)])(1 )(2 )]\\n2: 9d0bb49b-21a1-4354-9981-ec5dd9393961, 1, 0 [(0 [0xC004F014, 0, 0], [(?)(?)(?)(?)(?)(?)(?)(?)])(1 )(2 )]\\n\\n\"],\n \"InstanceId\": {\n \"$numberInt\": \"1073742827\"\n },\n \"TimeGenerated\": \"/Date(1584355522000)/\",\n \"TimeWritten\": \"/Date(1584355522000)/\",\n \"UserName\": null,\n \"Site\": null,\n \"Container\": null\n },\n//truncated\n}\nvar_dump($rs)$rs = $collection->find($filter);check()object(MongoDB\\Driver\\Cursor)#6 (10) { [\"database\"]=> string(19) \"192_168_223_136_old\" [\"collection\"]=> string(11) \"Application\" [\"query\"]=> object(MongoDB\\Driver\\Query)#7 (3) { [\"filter\"]=> object(stdClass)#9 (1) { [\"MachineName\"]=> string(22) \"Client1.evilzone.h4niz\" } [\"options\"]=> object(stdClass)#13 (0) { } [\"readConcern\"]=> NULL } [\"command\"]=> NULL [\"readPreference\"]=> object(MongoDB\\Driver\\ReadPreference)#11 (1) { [\"mode\"]=> string(7) \"primary\" } [\"session\"]=> NULL [\"isDead\"]=> bool(true) [\"currentIndex\"]=> int(0) [\"currentDocument\"]=> NULL [\"server\"]=> object(MongoDB\\Driver\\Server)#8 (10) { [\"host\"]=> string(9) \"127.0.0.1\" [\"port\"]=> int(27017) [\"type\"]=> int(1) [\"is_primary\"]=> bool(false) [\"is_secondary\"]=> bool(false) [\"is_arbiter\"]=> bool(false) [\"is_hidden\"]=> bool(false) [\"is_passive\"]=> bool(false) [\"last_is_master\"]=> array(11) { [\"ismaster\"]=> bool(true) [\"maxBsonObjectSize\"]=> int(16777216) [\"maxMessageSizeBytes\"]=> int(48000000) [\"maxWriteBatchSize\"]=> int(100000) [\"localTime\"]=> object(MongoDB\\BSON\\UTCDateTime)#13 (1) { [\"milliseconds\"]=> string(13) 
\"1585731874914\" } [\"logicalSessionTimeoutMinutes\"]=> int(30) [\"connectionId\"]=> int(1313) [\"minWireVersion\"]=> int(0) [\"maxWireVersion\"]=> int(8) [\"readOnly\"]=> bool(false) [\"ok\"]=> float(1) } [\"round_trip_time\"]=> int(0) } } \n",
"text": "I am now quering data from MongoDB using PHP, but there are some problems I did not how to fix it.Firstly, I list documents that I follow MongoDB PHP Library: Install the MongoDB PHP Library — PHP Library Manual 1.2. And then I follow its tutorial to query data, it works correctly.The problem is when I import Windows Event Logs, which export from Powershell then convert to JSON. But I can not query find() function to find data in MongoDB as the same way I did successful with sample data.\nSo, how can I do it?PHP script:and:Check function:Powershell script to get winevt logs:My flow:Dir tree data in MongoDb look like that:And this is var_dump($rs) with $rs = $collection->find($filter); in check() function which I list above.Thanks for your reading!",
"username": "Quoc_Anh_Nguyen_Le"
},
{
"code": "find()find()MachineName",
"text": "Hi @Quoc_Anh_Nguyen_Le,But I can not query find() function to find data in MongoDB as the same way I did successful with sample data.Could you elaborate the problem that you are having ? i.e. find() not returning any result, or perhaps find() returns all data, etc.Dir tree data in MongoDb look like that:I’m not sure what do you mean by dir tree data here, but perhaps you stored all of this within a single document in MongoDB ? Based on your query, I think you meant to store each of these array document into separate documents in the collection. That way, you should be able to find a particular machine based on the MachineName field easily.Regards,\nWan.",
"username": "wan"
}
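A hypothetical sketch (shown in Python rather than PHP for brevity) of the suggestion above: parse the exported extended JSON and insert each event as its own document, instead of one big document keyed by "0", "1", ... The file name is an assumption; the database and collection names follow the question.

```python
from pymongo import MongoClient
from bson import json_util  # understands {"$numberInt": ...} extended JSON

coll = MongoClient("mongodb://localhost:27017")["192_168_223_136_old"]["Application"]

with open("Application.json") as f:
    events = json_util.loads(f.read())  # {"0": {...}, "1": {...}, ...}

coll.insert_many(list(events.values()))

# Now a plain filter matches individual events:
for doc in coll.find({"MachineName": "Client1.evilzone.h4niz"}):
    print(doc["EventID"])
```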
] | Query data in MongoDB using PHP | 2020-04-01T11:41:51.777Z | Query data in MongoDB using PHP | 9,466 |
[
"java",
"change-streams"
] | [
{
"code": "",
"text": "I want to run a function whenever a new document is added to my collection. But for each is not applicable in the watch method. How to use change stream properly in java android.\n\nas1920×1080 147 KB\nPlease help.",
"username": "Rahul_Sonia"
},
{
"code": "",
"text": "What is a RemoteMongoCollection? Is that a library you wrote or is it from a third party?",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "I am not too sure but I think you block needs to be between () not after.You have db.watch().forEach() { } but what you need is db.watch().forEach( { } ). And your {} must implements Block< ChangeStreamDocument< Document > > and as such you must have a method public void apply( final ChangeStreamDocument change ).",
"username": "steevej"
},
{
"code": "",
"text": "I removed the changed it to MongoCollection and the forEach() was working, but I was not able to initialize the MongoCollection (ie. db).\nb11920×1080 145 KB\n\nThen I changed RemoteMongoClient to MongoClient, but then I was unable to initialize that also.\nAll the screenshots are below as I can’t upload more that one image in the forum.https://drive.google.com/drive/folders/1aLWHo6WwBacMVi6GLb90fFujuHYWncGx?usp=sharing\nPlease help.",
"username": "Rahul_Sonia"
},
{
"code": "",
"text": "I am probably using “Remote” because I am using Stitch of MongoDB",
"username": "Rahul_Sonia"
}
] | How to use change stream in java | 2020-04-08T00:18:49.572Z | How to use change stream in java | 2,450 |
|
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 3.12.3 MongoDB Java Driver release is a patch to the 3.12.2 release and a recommended upgrade.The documentation hub includes extensive documentation of the 3.12 driver, includingand much more.You can find a full list of bug fixes here .http://mongodb.github.io/mongo-java-driver/3.12/javadoc/",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 3.12.3 Released | 2020-04-07T22:15:43.231Z | MongoDB Java Driver 3.12.3 Released | 2,259 |
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 4.0.2 MongoDB Java & JVM Drivers release is a patch to the 4.0.1 release and a recommended upgrade.The documentation hub includes extensive documentation of the 4.0 driver, includingand much more.You can find a full list of bug fixes here .https://mongodb.github.io/mongo-java-driver/4.0/apidocs/ ",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 4.0.2 Released | 2020-04-07T22:10:28.336Z | MongoDB Java Driver 4.0.2 Released | 2,798 |
[
"java",
"change-streams"
] | [
{
"code": "",
"text": "I want to use the change stream to run a function whenever a new document is created in the database.\nBut the compiler cannot resolve the watch() method.\n",
"username": "Rahul_Sonia"
},
{
"code": "",
"text": "Hi @Rahul_Sonia,Welcome to the MongoDB Community forums!Can you provide the version of the MongoDB Java driver you’re using? The latest version is 4.0.1 and (according to the documentation) should have a watch method. Also, is it failing to compile or is it only failing to properly auto-complete in the IDE?Thanks,Justin",
"username": "Justin"
},
{
"code": "",
"text": "Thanks,I actually had the outdated version.",
"username": "Rahul_Sonia"
}
] | Cannot resolve watch() method in java | 2020-04-07T02:34:02.772Z | Cannot resolve watch() method in java | 2,622 |
|
null | [
"aggregation"
] | [
{
"code": "mapreducemongo query{\n $project: {\n data: {\n $map: {\n input: params.timeArray,\n in: {\n \"key\": \"$this\",\n \"value\": { \"$cond\": [{ \"$in\": [\"$this\", \"$data.date\"] }, \"$data\", []] }\n }\n }\n }\n }\n}\nreduce{\n $project: {\n id: 1,\n data: { $reduce: { input: \"$data\", initialValue: [], in: { $concatArrays: [\"$value\", \"$this\"] } } }\n }\n}\nProjectionOperation projectionOperation1 = Aggregation.project()\n .and(VariableOperators.mapItemsOf(\"data\")\n .as(\"input\")\n .andApply(context -> new BasicDBObject(\"value\",\n ConditionalOperators.when(where(\"$this\").in(\"$data.date\"))\n .then(\"$data\")\n .otherwise(new HashSet<>()))))\n .as(\"data\");\n",
"text": "I can not able to do map and reduce operation in java mongodb in ProjectionOperation.I have project operation in native mongo query like this:And I also want to do reduce operation in another projection operation like this:I have tried this:But it didn’t give me desired result.Any help would be appreciated!!",
"username": "Prem_Parmar"
},
{
"code": "",
"text": "Would you explain what your original document looks like, and what result you are trying to get?That can be helpful to understand what your code is trying to do.",
"username": "Asya_Kamsky"
},
{
"code": "\"$$this\"$reduce$this$value$$this$$value",
"text": "“key”: “$this”,Did you mean to use \"$$this\" to reference the variable?I just noticed in $reduce you’re also using $this and $value instead of $$this and $$value.",
"username": "Asya_Kamsky"
},
{
"code": "\"$this\"mapreducemongo query{\n $project: {\n data: {\n $map: {\n input: params.timeArray,\n in: {\n \"key\": \"$this\",\n \"value\": { \"$cond\": [{ \"$in\": [\"$this\", \"$data.date\"] }, \"$data\", []] }\n }\n }\n }\n }\n}\nreduce{\n $project: {\n id: 1,\n data: { $reduce: { input: \"$data\", initialValue: [], in: { $concatArrays: [\"$value\", \"$this\"] } } }\n }\n}\nProjectionOperation projectionOperation1 = Aggregation.project()\n .and(VariableOperators.mapItemsOf(\"data\")\n .as(\"input\")\n .andApply(context -> new BasicDBObject(\"value\",\n ConditionalOperators.when(where(\"$this\").in(\"$data.date\"))\n .then(\"$data\")\n .otherwise(new HashSet<>()))))\n .as(\"data\");\n",
"text": "Did you mean to use \"$this\" to reference the variable?Yesss[quote=“Prem_Parmar, post:1, topic:1856, full:true”]\nI can not able to do map and reduce operation in java mongodb in ProjectionOperation.I have project operation in native mongo query like this:And I also want to do reduce operation in another projection operation like this:I have tried this:But it didn’t give me desired result.Any help would be appreciated!!\n[/quote]Yes, “$$this” is reference of the variable",
"username": "Prem_Parmar"
},
{
"code": "\"$this\"'this'\"$$this\"'this'",
"text": "Can you check that you’re using TWO dollar signs as prefix? Your examples show one dollar sign.\"$this\" says \"value of field named 'this'\"\"$$this\" says \"value of variable named 'this'\"",
"username": "Asya_Kamsky"
}
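A small illustrative pipeline (PyMongo) showing the distinction drawn above: "$$this" (two dollar signs) reads the current element variable inside $map, while "$this" (one) would read a document field literally named "this". The collection and field names here are assumptions for the example, not taken from the thread.

```python
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["test"]["numbers"]
coll.insert_one({"values": [1, 2, 3]})

pipeline = [
    {"$project": {
        "doubled": {"$map": {
            "input": "$values",
            "in": {"$multiply": ["$$this", 2]},  # note the double $$
        }}
    }}
]
print(list(coll.aggregate(pipeline)))  # doubled: [2, 4, 6]
```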
] | Map and Reduce Operation in Mongo-Db Query using JAVA - Springboot | 2020-03-20T10:14:06.217Z | Map and Reduce Operation in Mongo-Db Query using JAVA - Springboot | 4,741 |
null | [
"aggregation",
"views"
] | [
{
"code": "function(empName) { db.gaSMAXAPI_api.aggregate( [ { $match : { companyContacts_api.employeeName_api : empName } } ] ); };db.getCollection(\"gaSMAXAPI_api\").aggregate(\n [\n { \n \"$match\" : { \n \"companyContacts_api.employeeName_api\" : \"$$empName\"\n }\n }\n ], \n { \n \"allowDiskUse\" : false\n }\n);\n",
"text": "My primary goal is to create a view querying a sub-collection field using a variable.I tried, first, to create a view using a variable but that didn’t work. So, next I tried a function and then accessing the function to return a record:function(empName) { db.gaSMAXAPI_api.aggregate( [ { $match : { companyContacts_api.employeeName_api : empName } } ] ); };But when I call the function, no results are returned.I even modified the function to a wild card view (find()) with no params and that won’t execute successfully either. This is original view I created before I started messing about with functions:But no permutations of invoking the view returns any data. I think that’s because the $$ marker is for placeholder, or reserved, variables like $$NOW…So, question is: How do I create a database view to query a collection using a value to be determined at run-time? Second part is: How is this object invoked successfully (returns data)?Thanks!!–mike",
"username": "Micheal_Shallop"
},
{
"code": "",
"text": "Hi Mike,I’d like to paraphrase to ensure I understand your question. You’d like to create a view or “variable query” that takes in a variable from the application. That variable replaces part of the query to return specific results. Am I correct in my understanding of your question?Essentially, it sounds like you’re asking for a stored procedure. You’d like to pass a variable to the server and have it return a cursor. Unfortunately, MongoDB doesn’t currently support stored procedures.Even so, views may be useful here. If the goal is to simplify a complex $match (or query) so the application is abstracted away from the complexity, writing a view will solve this. Have the view contain a single stage ($match) and include the constant part of your query. The application then queries the view with its own match stage containing the variable portion of the query, greatly simplifying the overall application logic.Thanks,Justin",
"username": "Justin"
},
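A minimal sketch (PyMongo; the mongo shell works the same way) of the pattern described above. The view's constant $match predicate here ("active_api": True) is an invented example field, not one from the original data; the run-time variable then goes into the application's own query against the view.

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["test"]

# The view pins the constant part of the query.
db.command("create", "activeContacts",
           viewOn="gaSMAXAPI_api",
           pipeline=[{"$match": {"companyContacts_api.active_api": True}}])

# The variable part is supplied by the application at run-time.
emp_name = "Jane Doe"  # hypothetical value
docs = db["activeContacts"].find(
    {"companyContacts_api.employeeName_api": emp_name})
```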
{
"code": "",
"text": "My interpretation of what you said is:functions do not work in terms of being able to pass in a variable value to be used as a query qualifier for the filterviews won’t work either for the same reason; the variable will not be parsed at run-time.I’ve built the functionality programatically, but was wondering why my attempts (above) wouldn’t work. I was pretty sure I’d figured out why the view wouldn’t work (can’t use $$ to delineate a non-reserved word in a view) but I’ve no idea why the function wouldn’t work since I pulled that off an example in the mongo doc proper unless it’s b/c that kind of function only works in the mongo shell (client-side).Thanks for the reply, Justin!–mike",
"username": "Micheal_Shallop"
}
] | Help with creating a function/view using a variable | 2020-04-02T22:25:16.272Z | Help with creating a function/view using a variable | 4,427 |
null | [
"production",
"golang"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to announce the release of 1.3.2 of the MongoDB Go Driver.This release includes several bugfixes. One of the tickets in this release, https://jira.mongodb.org/browse/GODRIVER-1540, addresses a regression introduced in v1.3.0 which caused a deadlock if a connection encountered a network error during establishment. We recommend any users on v1.3.0 or v1.3.1 update to this version. For more information please see the release notes.You can obtain the driver source from GitHub under the v1.3.2 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Go Driver 1.3.2 Released | 2020-04-07T17:38:29.120Z | MongoDB Go Driver 1.3.2 Released | 1,744 |
null | [
"data-modeling"
] | [
{
"code": "eg: title",
"text": "I’m not pretty sure could I do this. My application(buy and sell) have around five to ten categorieseg: vehicles, real estate, and clothsI thought of creating separate MongoDB collections for each category because each of the categories contains different fieldseg: vehicle[km, body type, brand], cloths[size, colour, gender], This would be very easy to manage and scale.Now can I able to send one query to those multiple collections at onceserver: nodejs",
"username": "chawki"
},
{
"code": "",
"text": "Better to keep all vehicles in one collection and add a vehicle_type field. Then you can query on all vehicles without joins but also select groups of vehicle types.",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "already vehicles is a single collection but I need to send a full text search quries to all my collections which contains real estate, cloths and others",
"username": "chawki"
},
{
"code": "",
"text": "You can put all items (vehicles, real estate, clothes, etc) into a single collection as MongoDB does not force all documents to have the same schema. You would need to make sure your code properly handled the differences when trying to display that data. You would also need to make sure that any indexes you created took into account the differences in schema to make your queries as efficient as possible.",
"username": "Doug_Duncan"
},
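A short sketch (PyMongo) of the single-collection approach suggested above: every item carries a category discriminator, so one query can span all categories or be narrowed to one. A classic text index is used here for illustration; Atlas Search (mentioned in the next reply) is configured separately. All names are assumptions.

```python
from pymongo import MongoClient

items = MongoClient("mongodb://localhost:27017")["market"]["items"]

items.insert_many([
    {"category": "vehicle", "title": "Honda sedan", "km": 42000},
    {"category": "clothes", "title": "Winter jacket", "size": "M"},
])

items.create_index([("title", "text")])  # one index spans all categories

hits = items.find({"$text": {"$search": "jacket"}})                # all categories
vehicles = items.find({"category": "vehicle", "km": {"$lt": 50000}})  # one category
```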
{
"code": "",
"text": "Hi! If you were to put all of these categories in a single collection, you could easily do a search query across categories in 1 query. The index for Search is created per collection and must be used in the first stage of the aggregation ($searchBeta), and it cannot be used in a $lookup pipeline. Here are more details for that: https://docs.atlas.mongodb.com/reference/atlas-search/query-syntax/#behavior",
"username": "Karen_Huaulme"
}
] | How to join multiple collections and do queries? | 2020-04-07T07:12:47.079Z | How to join multiple collections and do queries? | 3,549 |
null | [
"app-services-user-auth",
"atlas",
"stitch"
] | [
{
"code": "",
"text": "I am not able to put the user credentials in the header because the OPTIONS pre-flight request fails. I believe this has to do with a CORS issue. Any guidance would be appreciated.",
"username": "Charlie_Hauser"
},
{
"code": "",
"text": "Hi @Charlie_Hauser,Could you elaborate more what you’re trying to do, perhaps with an example ?\nIncluding which Stitch Authentication Providers you are using.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "I don’t think webhooks are supposed to be called from browser. You have to send username and password for every call or use api key - no temporary session token. The custom provider might work, have no idea about that.",
"username": "Mehedi_Nahid"
},
{
"code": " {\n \"jwtTokenString\": \"<User's JWT Token>\"\n }\n",
"text": "I am hosting a react app using stitch. I have configured the custom JWT authentication using Microsoft Azure AD. I would like to use the webhooks as a rest api. I followed the documentation, here, to create an incoming webhook and authenticate requests with user credentials. The documentation suggests placing the credentials in either the header or body of the request with the syntax:I would like to send the credentials in the header because GET requests do not have body. When I place the JWT in the header as shown in the documentation the browser sends a CORS preflight request because it is not a simple request. This request then fails because the response from stitch doesn’t have any of the CORS headers.Thanks for the response and let me know if you need anymore information.",
"username": "Charlie_Hauser"
},
{
"code": "",
"text": "Hi all,I am having the same problem with the UserPass authentication on a HTTP.GET request webhook. I am having a “400 Bad request” response when I try to send the credentials like normal Http authentication.I use the HttpRequest.Builder() to build the request object. I tried to follow the guidelines at https://docs.mongodb.com/stitch/services/http-actions/http.get/#request-authentication but still getting the 400 Bad Request response.Thank you,\nStéphane",
"username": "Lesauf"
}
] | How to include user credentials in GET Request to webhook? | 2020-03-30T17:29:40.796Z | How to include user credentials in GET Request to webhook? | 2,349 |
null | [
"crud",
"tutorial"
] | [
{
"code": "",
"text": "Hi everyone, i’m newbie here, do you have any links to share about basic to mastering CRUD?thanks a lot",
"username": "Jegs_Megano"
},
{
"code": "",
"text": "Which language would you prefer? We have a mix of written and video tutorials on most languages and our university.mongodb.com courses are free.",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | CRUD video tutorial | 2020-04-05T22:36:35.535Z | CRUD video tutorial | 1,447 |
null | [] | [
{
"code": "",
"text": "Hello,i want to save documents in realm and want to use byte arrays for this. i can´t find informations on the maximum size of such byte arrays in realm in the cloud instance at the documentation.can anyone help me?best regardsVolkhard",
"username": "Volkhard_Vogeler"
},
{
"code": "NSDataNSString",
"text": "For realm, the max size is 16Mb for Data and String properties.NSData and NSString properties cannot hold data exceeding 16MB in size. To store larger amounts of data, either break it up into 16MB chunks or store it directly on the file system, storing paths to these files in the Realm. An exception will be thrown at runtime if your app attempts to store more than 16MB in a single property.The Realm documentation is hereKeep in mind that Realm discourages L A R G E files to be stored in realm - you would be better off storing them in another source, such as Firebase Storage and keeping a link in realm.To avoid size limitations and a performance impact, it is best not to store large blobs (such as image and video files) directly in Realm",
"username": "Jay"
},
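A generic sketch of the chunking idea above (written in Python, though the thread concerns Realm's Swift/Obj-C SDK): split a large blob into pieces under the 16MB property limit before storing, and concatenate the chunks on read. The 2^20-based chunk size is an assumption, pending the clarification discussed below.

```python
CHUNK = 16 * 1024 * 1024  # 16MB, assuming the limit uses 2^20-byte megabytes

def split_blob(data: bytes, chunk_size: int = CHUNK) -> list:
    # Slice the blob into <=16MB pieces, each storable in its own property.
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def join_chunks(chunks: list) -> bytes:
    # Reassemble the original blob from its ordered chunks.
    return b"".join(chunks)
```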
{
"code": "",
"text": "For realm, the max size is 16Mb for Data and String properties.Do you know, if the limit is 16 * 1,000,000B or 16 * 1,048,576B?",
"username": "Ondrej_Medek"
},
{
"code": "",
"text": "I would guess that it would be 16MB (bytes), a multiple of a single byte. So it would be most accuratelty represented by 1,048,576. (1,048,576 being 1 Megabyte in computer-ese or 2^20 bytes)",
"username": "Jay"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Maximum size of byte array in Realm cloud | 2020-02-17T09:54:31.632Z | Maximum size of byte array in Realm cloud | 4,845 |
null | [
"stitch"
] | [
{
"code": "",
"text": "How can I convert my aggregation results to JSON format inside a Stitch function?I want to do this to post them via http.",
"username": "Daniel_Gold"
},
{
"code": " .toArray()\n .then(docs => console.log(\"all documents\", JSON.stringify(docs)));",
"text": "I found the answer, I’ll include here for reference. Need to include this at the end of the aggregation pipeline:",
"username": "Daniel_Gold"
}
] | Convert aggregation results to JSON | 2020-04-06T22:49:54.014Z | Convert aggregation results to JSON | 6,384 |
null | [
"swift"
] | [
{
"code": "ignoredProperties()@objc dynamicimport Foundation\nimport RealmSwift\n\npublic class ModelMatriz:Object {\n dynamic private var min = RealmOptional<Float>()\n dynamic private var max = RealmOptional<Float>()\n @objc dynamic private var plazoS:[String] = []\n}\nreturn getNonIgnoredMirrorChildren(for: object).compactMap { prop in\n guard let label = prop.label else { return nil }\n var rawValue = prop.value\n if let value = rawValue as? RealmEnum {\n rawValue = type(of: value)._rlmToRawValue(value)\n }\n\n guard let value = rawValue as? _ManagedPropertyType else {\n if class_getProperty(cls, label) != nil {\n throwRealmException(\"Property \\(cls).\\(label) is dec...\"\nplazoS[String](lldb) po rawValue \n0 elements\n\n(lldb) print rawValue \n([String]) $R12 = 0 values {}\n",
"text": "Property ModelM.plazoS is declared as Array, which is not a supported managed Object property type. If it is not supposed to be a managed property, either add it to ignoredProperties() or do not declare it as @objc dynamic. See Object Class Reference for more information.And ModelM is declared asI got that error from this piece of code (last line is the one from the quote above)But plazoS should be managed by Realm, and there are other members that are also [String]and they seem to work correctly.so I used lldb and got this:",
"username": "Ty_oc"
},
{
"code": "public class ModelMatriz:Object {\n dynamic private var min = RealmOptional<Float>()\n dynamic private var max = RealmOptional<Float>()\n let plazoS = List<String>()\n}\nclass PlazosClass: Object {\n @objc dynamic var someString = \"\"\n}\n\npublic class ModelMatriz:Object {\n dynamic private var min = RealmOptional<Float>()\n dynamic private var max = RealmOptional<Float>()\n let plazoS = List< PlazosClass >()\n}\n@objc dynamic private var min = RealmOptional<Float>()\n@objc dynamic private var max = RealmOptional<Float>()",
"text": "@objc dynamic private var plazoS:[String] = You can’t store an array of strings in Realm as a managed object. It’s not supported. There are a number of solutionsOption 1: Store the String in a list propertyhowever - and this may be very importantNote that querying List’s containing primitive values is currently not supported.Option 2: A better solution is to create a managed realm object that contains a string propretyOh, if you want min and max to be managed you’ll need to add @objc to those vars",
"username": "Jay"
},
{
"code": "",
"text": "Yes, it was this, I put everywhere List, but the problem with this is that it was breaking with an exception and no print on debugger terminal (even thought I think exceptions have strings that describe them).",
"username": "Ty_oc"
},
{
"code": "",
"text": "You should add a breakpoint in your code and step through it line by line, examining the code flow and your vars along the way. When something doesn’t perform as expected - for example a var being nil when it should have had a string, you can then troubleshoot from there.",
"username": "Jay"
},
{
"code": "",
"text": "Oh yeah, for sure I know how to debug even cryptic things but as I have said, here is what I needed to do to track down where… https://github.com/realm/realm-cocoa/issues/6096#issuecomment-597407518 but as I have said the console didnt print the real source of the error like “put List instead of array” or something more helpful that doesnt require “debug skillz”",
"username": "Ty_oc"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error on a list of 0 elements | 2020-03-24T23:02:00.841Z | Error on a list of 0 elements | 3,088 |
null | [
"sharding",
"upgrading"
] | [
{
"code": "",
"text": "I’m trying to upgrade some of our mongoDB hosts. This will change them from CentOS 6 and mongoDB 3.0 to CentOS 7 and mongoDB 3.6.We have a shard replica set and a 3-member sharded config server. There are three hosts, with two of them having a full shard replica, while the other is just an arbiter.I’m trying to test the upgrade procedure on some other virtual machines. I haven’t had a problem with the other upgrades, but they didn’t have a shard replica set. They were standalone shards that were only later converted into a single member shard replica set.To test this, I shutdown the arbiter config server and copied its files to the three test VMs. Since they are supposed to be identical copies, I assume it doesn’t matter which host I get these files from. This appears to work. I setup mongos with the new config server names and I was able to update the replica set hostnames.For the shard replica set, I shutdown the secondary shard and copied its files to the two non-arbiter test hosts. I started mongod with the replica set name on the arbiter with a blank data directory.I followed this guide (https://docs.mongodb.com/manual/tutorial/change-hostnames-in-a-replica-set/#replica-set-change-hostname-downtime) to change the replica set configuration hostnames and then started mongod with the replica set name on each of the two non-arbiter test hosts.The arbiter node seems okay, and I can run rs.status() from any of the three nodes, but the two nodes with data are in ‘RECOVERING’ state. I don’t know if this is normal, or if they are stuck there. We are far past the last piece of information in the oplog.Is there a step I have missed? Will I need to use more recent shard backup data? How can I recover from ‘RECOVERING’? Can I force one of the members to become primary? The docs suggest that if a member falls far enough behind, it may require manual intervention. But I don’t know what to do about that? It seems to suggest a full resync, but with no member being primary, I don’t know how I would do that.Thanks.",
"username": "John_Ratliff"
},
{
"code": "",
"text": "To test this, I shutdown the arbiter config server and copied its files to the three test VMsAn arbiter does not hold data for the repilcaSet. You need to copy the data from a Secondary or Primary.There is another tutorial for what you are doing:",
"username": "chris"
},
{
"code": "",
"text": "The config servers are not replica sets. They are sharded config servers because mongo 3.0 did not support replica set config servers. So the only data I copied from the arbiter host was for the config server.That document doesn’t really cover my situation. It talks about moving servers and then talking to the primary to make sure data is still in sync. In my scenario, restoring the data from a replica shard set to new hosts, no host or server was ever primary and I don’t know how to make one be so.I was able to resync a newer version of the replica set data, and that worked, but I don’t understand why it worked and the first copy did not. Perhaps it was a fluke?",
"username": "John_Ratliff"
}
] | All mongoDB shard replica set members in recovery state | 2020-04-03T16:47:34.541Z | All mongoDB shard replica set members in recovery state | 2,640 |
null | [
"dot-net"
] | [
{
"code": " public IEnumerable<ChangeStreamDocument<BsonDocument>> InfiniteWatch(ChangeStreamOptions options, int millisecondsToDelayBetweenReads = 50)\n {\n using (var cursor = this.Database.Watch(options))\n {\n while (cursor.MoveNext())\n {\n if (!cursor.Current.Any())\n {\n Thread.Sleep(millisecondsToDelayBetweenReads); // Avoid hitting the databases too hard while polling\n continue; // no data\n }\n\n using (var enumerator = cursor.Current.GetEnumerator())\n {\n while (enumerator.MoveNext())\n {\n var document = enumerator.Current;\n yield return document;\n }\n }\n }\n }\n }\n",
"text": "I am using the C# change stream API to watch for all events on a given database. I would like to capture the raw change stream documents (which are ChangeStreamDocument objects) in order for reproduction during unit tests for the translation logic (I take these ChangeStreamDocument objects and convert them into ChangeStreamMessage objects that we use internally). Having a set of the original messages serialized (outside of the oplog) allows me to avoid using the change stream API during these tests, but still validate the translation logic with regression tests.I’m not sure what the best way is to serialize this data safely - I’ve seen examples for how to do this when reading from a collection (both serializing and deserializing), but not for change events. The examples use the initialized collection object to get a document serializer - see below:https://mongodb.github.io/mongo-csharp-driver/2.10/examples/exporting_json/\nhttps://mongodb.github.io/mongo-csharp-driver/2.10/examples/importing_json/What is the best way to do this for arbitrary change stream documents from the DB? I want to make sure that the internal state of the object isn’t corrupted at all when it is reloaded for testing.What do you recommend?JeremyDETAILSHere is an example of some of the code that is watching the database and handing off an IEnumerable<ChangeStreamDocument> for use by the translator - I’d like to be able to replace this with something that can just load from a file, for example. What is a safe way to reliably do this given the MongoDB.Bson libraries?PS - @wan - any pointers on this change stream question? (I saw you had answered a previous question about change streams and thought this might be right up your alley)",
"username": "Jeremy_Buch"
},
{
"code": "var doc = new ChangeStreamDocument<BsonDocument>(changeStreamEvent.BackingDocument,\n new BsonDocumentSerializer());\n",
"text": "It looks like I got back a reliable result from doing the following (so far):This allows reconstructing the change stream document from a backing document - if I serialize the BackingDocument (BsonDocument), then I should be able to pick it back up and utilize it. I’m planning to use MongoDB.Bson libs to do this serialization to make sure that I don’t introduce incompatibilities through use of Newtonsoft. If anyone has a pointer to the best usage of those libraries, I’d very much appreciate it!",
"username": "Jeremy_Buch"
},
{
"code": " // given: ChangeStreamDocument<BsonDocument> changeStreamDocument; // received from MongoDB Change Streams\n var subject = new ChangeStreamDocumentSerializer<BsonDocument>(BsonDocumentSerializer.Instance);\n\n string json;\n using (var textWriter = new StringWriter())\n using (var writer = new MongoDB.Bson.IO.JsonWriter(textWriter))\n {\n var context = BsonSerializationContext.CreateRoot(writer);\n subject.Serialize(context, changeStreamDocument);\n json = textWriter.ToString();\n }\n // given: string json; // the original json that was recorded from above\n ChangeStreamDocument<BsonDocument> changeStreamDocument; \n var subject = new ChangeStreamDocumentSerializer<BsonDocument>(BsonDocumentSerializer.Instance);\n\n using (var reader = new MongoDB.Bson.IO.JsonReader(json))\n {\n var context = BsonDeserializationContext.CreateRoot(reader);\n changeStreamDocument = subject.Deserialize(context);\n }\n",
"text": "I hunted through the open source code for an example - here is a way that I found to reliably serialize and deserialize the ChangeStreamDocument:If you need to serialize a change stream object, this is the way to do it while using the MongoDB.Bson libraries without creating artifacts in the data (from what I can see so far). Reading them back simply involves:I’ve been able to verify that these reliably work over a variety of change stream documents and objects we used.",
"username": "Jeremy_Buch"
},
{
"code": "",
"text": "Hi @Jeremy_Buch,Glad that you’ve found a solution!Regards,\nWan.",
"username": "wan"
}
] | How can I safely serialize change stream documents with C#? | 2020-04-03T18:05:58.443Z | How can I safely serialize change stream documents with C#? | 5,021 |
null | [] | [
{
"code": "",
"text": "OverflowError: MongoDB can only handle up to 8-byte intsI’m using a find() to get data from a collection then I’m looping and inserting it into an api call and pushing the values in the response to a new collection.I works but then I get those error when it reaches a larger number of records.Any ideas on how to fix?",
"username": "Phuong_Hoang"
},
{
"code": "",
"text": "Please check these links",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thanks. So it looks like it’s due to the letter ‘E’ being parsed and returned as a scientific notation, e+.So a value of 5E62 is coming in as an exponential number, 5e+62. I took the json response and converted it to a string and did a replace(“e”,“E”), which it seemed to work. But then when I converted it back to json it reverted it back to the exponential number. I’m using ast.literal_eval(string) to convert back to json.Is there a way to keep the “E” and not have it convert back when converting back to json?Thanks.\nPhuong",
"username": "Phuong_Hoang"
},
{
"code": "",
"text": "It reverts back when I convert a string to a dictionary so I can import it into the database.",
"username": "Phuong_Hoang"
},
{
"code": "",
"text": "I figured out how to get around it, I pulled the key and put quotes around the values. That seemed to keep the data in tact.",
"username": "Phuong_Hoang"
},
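A small PyMongo sketch of the workaround described above: store values that overflow BSON's 8-byte integers as strings (or as Decimal128), so the driver never has to encode them as ints. The database, collection, and field names are assumptions for illustration.

```python
from bson.decimal128 import Decimal128
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["test"]["responses"]

coll.insert_one({"raw": str(5 * 10**62)})      # quoted, as in the fix above
coll.insert_one({"dec": Decimal128("5E+62")})  # keeps numeric semantics
```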
{
"code": "",
"text": "Hi @Phuong_Hoang, welcome!I’m using a find() to get data from a collection then I’m looping and inserting it into an api call and pushing the values in the response to a new collection.Could you elaborate more on what you’re trying to insert into MongoDB i.e. example of value ?\nAlso, it would be useful to know what your intention of usage of the field’s value. i.e. are you going to run further computation/calculation on the value ?Regards,\nWan.",
"username": "wan"
}
] | I'm receiving an error when trying to insert_many | 2020-04-03T21:41:09.403Z | I’m receiving an error when trying to insert_many | 4,940 |
null | [] | [
{
"code": "",
"text": "I’m stuck at procedure number 5 please help me out.",
"username": "Proffin_72505"
},
{
"code": "",
"text": "What problem do you face?",
"username": "steevej"
},
{
"code": "",
"text": "sudo nano /etc/pathswhat im i supposed to edit and how to add it after running that command",
"username": "Proffin_72505"
},
{
"code": "",
"text": "Could you provide a link to the instructions you are following? I do not remember having to edit /etc/paths.",
"username": "steevej"
},
{
"code": "",
"text": "https://university.mongodb.com/mercury/M001/2020_March_17/chapter/Chapter_0_Setup/lesson/5963b30cc1da5a32116dc5b5/lecture",
"username": "Proffin_72505"
},
{
"code": "",
"text": "im using linux",
"username": "Proffin_72505"
},
{
"code": "",
"text": "The course must have changed because I do not have a chapter 0 in my version of the course. Hopefully somebody that have access to the same version of the course can help you.",
"username": "steevej"
},
{
"code": "",
"text": "You have to update your PATH with mongodb\nPlease check this link which gives some screenshots on how to do it and type of errors you get",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "@Proffin_72505, by reading @Ramachandra_37567, I found out that these instructions are not for Linux but for Mac.",
"username": "steevej"
},
{
"code": "sudo nano /etc/paths",
"text": "Hi all,Yes, we have recently upgraded this course to reflect some of the changes that has taken place since the time of the recording of the video.sudo nano /etc/pathswhat im i supposed to edit and how to add it after running that command@Proffin_72505 sudo nano /etc/paths command will open the paths file in your terminal. You have to add the path of the bin directory (as mentioned in the lecture) in this file so that you can use some of the MongoDB based commands form any directory.If you are still having any issue then please share a screenshot of the work that you have done so far.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Hi Shubham I’ve added the path of the bin directory in /etc/paths, but instead of getting expected output, I’m getting like this\nCommand ‘mongo’ not found, but can be installed with:\nsudo apt install mongodb-clients\nI’m sure something went wrong but I can’t figured it out. So can you please help me in that?",
"username": "Milan_44499"
},
{
"code": "",
"text": "Please provide the output ofAlso try to logout and login. The setup might only be accessible to new shells.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @Milan_44499,In addition to @steevej-1495,Please share the output of this command :echo $PATH~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "@Shubham_Ranjan, here’s the output of echo $PATH\n/home/milan/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/go/binAnd @steevej-1495, I’m sorry but I didn’t get you exactly.",
"username": "Milan_44499"
},
{
"code": "",
"text": "Hi @Milan_44499,/home/milan/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/go/binI don’t see any entry for MongoDB.Please go back to this lecture and follow the instructions mentioned here : Chapter 0 : Installing the mongo Shell (OSX / Linux).~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "@Shubham_Ranjan, after your reply I’ve set the path and found the entry for MongoDB, but not getting expected output. And you can find an attachment of screen shot of echo $PATH ",
"username": "Milan_44499"
},
{
"code": "",
"text": "Output of",
"username": "steevej"
},
{
"code": "",
"text": "\nScreenshot from 2020-04-03 20-32-012390×768 629 KB\n",
"username": "Milan_44499"
},
{
"code": "",
"text": "This is not the output of PathToMongoDbFromYourScreenshot. In your screenshot the path ends with usr/bin",
"username": "steevej"
},
{
"code": "ls -l /home/milan/mongodb-org-server_4.2.5_amd64/data/usr/bin",
"text": "Hi @Milan_44499,What’s the output of ls -l /home/milan/mongodb-org-server_4.2.5_amd64/data/usr/bin ?~ Shubham",
"username": "Shubham_Ranjan"
}
] | Issues with installing the mongo shell on linux | 2020-03-20T11:35:00.351Z | Issues with installing the mongo shell on linux | 3,403 |
null | [
"sharding"
] | [
{
"code": "2020-04-05T10:53:15.910+0300 F - [conn2] Invariant failure maxWireVersion == WireVersion::LATEST_WIRE_VERSION src/mongo/db/s/config/sharding_catalog_manager_shard_operations.cpp 350\n2020-04-05T10:53:15.910+0300 F - [conn2] \n\n***aborting after invariant() failure\n\n\n2020-04-05T10:53:15.949+0300 F - [conn2] Got signal: 6 (Aborted).\n 0x556f9539aea1 0x556f9539a0b9 0x556f9539a59d 0x7feaec7595e0 0x7feaec3bc1f7 0x7feaec3bd8e8 0x556f93976ef8 0x556f93fe8848 0x556f93fec60b 0x556f93c08ca8 0x556f94dff469 0x556f93a2c18a 0x556f93a2da79 0x556f93a2e9c1 0x556f93a1a12a 0x556f93a26d8a 0x556f93a21a37 0x556f93a25251 0x556f94be8472 0x556f93a1fc20 0x556f93a22d65 0x556f93a21177 0x556f93a21abd 0x556f93a25251 0x556f94be89d5 0x556f952f1f44 0x7feaec751e25 0x7feaec47f34d\n----- BEGIN BACKTRACE -----\n{\"backtrace\":[{\"b\":\"556F92F56000\",\"o\":\"2444EA1\",\"s\":\"_ZN5mongo15printStackTraceERSo\"},{\"b\":\"556F92F56000\",\"o\":\"24440B9\"},{\"b\":\"556F92F56000\",\"o\":\"244459D\"},{\"b\":\"7FEAEC74A000\",\"o\":\"F5E0\"},{\"b\":\"7FEAEC387000\",\"o\":\"351F7\",\"s\":\"gsignal\"},{\"b\":\"7FEAEC387000\",\"o\":\"368E8\",\"s\":\"abort\"},{\"b\":\"556F92F56000\",\"o\":\"A20EF8\",\"s\":\"_ZN5mongo22invariantFailedWithMsgEPKcRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES1_j\"},{\"b\":\"556F92F56000\",\"o\":\"1092848\",\"s\":\"_ZN5mongo22ShardingCatalogManager20_validateHostAsShardEPNS_16OperationContextESt10shared_ptrINS_21RemoteCommandTargeterEEPKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_16ConnectionStringE\"},{\"b\":\"556F92F56000\",\"o\":\"109660B\",\"s\":\"_ZN5mongo22ShardingCatalogManager8addShardEPNS_16OperationContextEPKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_16ConnectionStringEx\"},{\"b\":\"556F92F56000\",\"o\":\"CB2CA8\"},{\"b\":\"556F92F56000\",\"o\":\"1EA9469\",\"s\":\"_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_19CommandReplyBuilderE\"},{\"b\":\"556F92F56000\",\"o\":\"AD618A\"},{\"b\":\"556F92F56000\",\"o\":\"AD7A79\"},{\"b\":\"556F92F56000\",\"o\":\"AD89C1\",\"s\":\"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE\"},{\"b\":\"556F92F56000\",\"o\":\"AC412A\",\"s\":\"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE\"},{\"b\":\"556F92F56000\",\"o\":\"AD0D8A\",\"s\":\"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE\"},{\"b\":\"556F92F56000\",\"o\":\"ACBA37\",\"s\":\"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE\"},{\"b\":\"556F92F56000\",\"o\":\"ACF251\"},{\"b\":\"556F92F56000\",\"o\":\"1C92472\",\"s\":\"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE\"},{\"b\":\"556F92F56000\",\"o\":\"AC9C20\",\"s\":\"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE\"},{\"b\":\"556F92F56000\",\"o\":\"ACCD65\",\"s\":\"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE\"},{\"b\":\"556F92F56000\",\"o\":\"ACB177\",\"s\":\"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE\"},{\"b\":\"556F92F56000\",\"o\":\"ACBABD\",\"s\":\"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE\"},{\"b\":\"556F92F56000\",\"o\":\"ACF251\"},{\"b\":\"556F92F56000\",\"o\":\"1C929D5\"},{\"b\":\"556F92F56000\",\"o\":\"239BF44\"},{\"b\":\"7FEAEC74A000\",\"o\":\"7E25\"},{\"b\":\"7FEAEC387000\",\"o\":\"F834D\",\"s
\":\"clone\"}],\"processInfo\":{ \"mongodbVersion\" : \"4.0.12\", \"gitVersion\" : \"5776e3cbf9e7afe86e6b29e22520ffb6766e95d4\", \"compiledModules\" : [], \"uname\" : { \"sysname\" : \"Linux\", \"release\" : \"3.10.0-693.el7.x86_64\", \"version\" : \"#1 SMP Tue Aug 22 21:09:27 UTC 2017\", \"machine\" : \"x86_64\" }, \"somap\" : [ { \"b\" : \"556F92F56000\", \"elfType\" : 3, \"buildId\" : \"EF47E1B8B5FC85C7DF8A0916362ACC8238487CB2\" }, { \"b\" : \"7FFFC5252000\", \"elfType\" : 3, \"buildId\" : \"7FB8E16CEA1B913E2703A6E4159FB468CD1E3507\" }, { \"b\" : \"7FEAEDB77000\", \"path\" : \"/lib64/libcurl.so.4\", \"elfType\" : 3, \"buildId\" : \"CE3116A1A44937EC00E131632BFDE144772F83D7\" }, { \"b\" : \"7FEAED95D000\", \"path\" : \"/lib64/libresolv.so.2\", \"elfType\" : 3, \"buildId\" : \"FF4E72F4E574E143330FB3C66DB51613B0EC65EA\" }, { \"b\" : \"7FEAED4FC000\", \"path\" : \"/lib64/libcrypto.so.10\", \"elfType\" : 3, \"buildId\" : \"BC0AE9CA0705BEC1F0C0375AAD839843BB219CB1\" }, { \"b\" : \"7FEAED28A000\", \"path\" : \"/lib64/libssl.so.10\", \"elfType\" : 3, \"buildId\" : \"ED0AC7DEB91A242C194B3DEF27A215F41CE43116\" }, { \"b\" : \"7FEAED086000\", \"path\" : \"/lib64/libdl.so.2\", \"elfType\" : 3, \"buildId\" : \"8CC796BA7CA23193AD753D8625018B61264724BE\" }, { \"b\" : \"7FEAECE7E000\", \"path\" : \"/lib64/librt.so.1\", \"elfType\" : 3, \"buildId\" : \"5B629F64AC6EA7AAC602BE56ED834BB6398C72AC\" }, { \"b\" : \"7FEAECB7C000\", \"path\" : \"/lib64/libm.so.6\", \"elfType\" : 3, \"buildId\" : \"5FAABA77B1848347CEC4B0CE7B31811D7D00D2FA\" }, { \"b\" : \"7FEAEC966000\", \"path\" : \"/lib64/libgcc_s.so.1\", \"elfType\" : 3, \"buildId\" : \"361D73E3AA2ACE6AF32B03D1B74A22E1FF68AB2D\" }, { \"b\" : \"7FEAEC74A000\", \"path\" : \"/lib64/libpthread.so.0\", \"elfType\" : 3, \"buildId\" : \"B8FBCA68CA56E79556BF7884DACF89504096ADEB\" }, { \"b\" : \"7FEAEC387000\", \"path\" : \"/lib64/libc.so.6\", \"elfType\" : 3, \"buildId\" : \"C3F28802314AF4EE866BF8D2E1B506B7BBF34CF6\" }, { \"b\" : \"7FEAEDDE0000\", \"path\" : \"/lib64/ld-linux-x86-64.so.2\", \"elfType\" : 3, \"buildId\" : \"962B8EEE329A2C57184AF2780756ED2035DEAAC0\" }, { \"b\" : \"7FEAEC154000\", \"path\" : \"/lib64/libidn.so.11\", \"elfType\" : 3, \"buildId\" : \"2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5\" }, { \"b\" : \"7FEAEBF2A000\", \"path\" : \"/lib64/libssh2.so.1\", \"elfType\" : 3, \"buildId\" : \"C4EDF92922B4FE091F9A855C50343288AAD3243D\" }, { \"b\" : \"7FEAEBCDE000\", \"path\" : \"/lib64/libssl3.so\", \"elfType\" : 3, \"buildId\" : \"A5EC8F3A4BED5873F94B4418BF732FC208DD0C55\" }, { \"b\" : \"7FEAEBAB7000\", \"path\" : \"/lib64/libsmime3.so\", \"elfType\" : 3, \"buildId\" : \"007CD03E03B51795E5499A88CD670B85061FA226\" }, { \"b\" : \"7FEAEB78D000\", \"path\" : \"/lib64/libnss3.so\", \"elfType\" : 3, \"buildId\" : \"F5AC0CD5031E4F8A1DDF38CB9EEDCE3D2B12FCA2\" }, { \"b\" : \"7FEAEB560000\", \"path\" : \"/lib64/libnssutil3.so\", \"elfType\" : 3, \"buildId\" : \"BE353742BC2074F126BB55F42B79E8C4FBE51CD5\" }, { \"b\" : \"7FEAEB35C000\", \"path\" : \"/lib64/libplds4.so\", \"elfType\" : 3, \"buildId\" : \"CFFD213A7908702160E6EAA8F3F57BCBF906AF94\" }, { \"b\" : \"7FEAEB157000\", \"path\" : \"/lib64/libplc4.so\", \"elfType\" : 3, \"buildId\" : \"76D4FA4D9CF5FD577C48D3C196AE734A7F9E6CAD\" }, { \"b\" : \"7FEAEAF19000\", \"path\" : \"/lib64/libnspr4.so\", \"elfType\" : 3, \"buildId\" : \"7B54DBF79ECEB3E7BEF71224EC5EFF3D1B425FA1\" }, { \"b\" : \"7FEAEACCC000\", \"path\" : \"/lib64/libgssapi_krb5.so.2\", \"elfType\" : 3, \"buildId\" : \"DA322D74F55A0C4293085371A8D0E94B5962F5E7\" 
}, { \"b\" : \"7FEAEA9E4000\", \"path\" : \"/lib64/libkrb5.so.3\", \"elfType\" : 3, \"buildId\" : \"B69E63024D408E400401EEA6815317BDA38FB7C2\" }, { \"b\" : \"7FEAEA7B1000\", \"path\" : \"/lib64/libk5crypto.so.3\", \"elfType\" : 3, \"buildId\" : \"A48639BF901DB554479BFAD114CB354CF63D7D6E\" }, { \"b\" : \"7FEAEA5AD000\", \"path\" : \"/lib64/libcom_err.so.2\", \"elfType\" : 3, \"buildId\" : \"A3832734347DCA522438308C9F08F45524C65C9B\" }, { \"b\" : \"7FEAEA39E000\", \"path\" : \"/lib64/liblber-2.4.so.2\", \"elfType\" : 3, \"buildId\" : \"38C80306EF7534FF3CD84E946F39806563DE2F3F\" }, { \"b\" : \"7FEAEA14A000\", \"path\" : \"/lib64/libldap-2.4.so.2\", \"elfType\" : 3, \"buildId\" : \"05BEC066BCE8D1487506C628B4DE39DB79743777\" }, { \"b\" : \"7FEAE9F34000\", \"path\" : \"/lib64/libz.so.1\", \"elfType\" : 3, \"buildId\" : \"EA8E45DC8E395CC5E26890470112D97A1F1E0B65\" }, { \"b\" : \"7FEAE9D26000\", \"path\" : \"/lib64/libkrb5support.so.0\", \"elfType\" : 3, \"buildId\" : \"6FDF5B013FD2739D304CFB9D723DCBC149EE03C9\" }, { \"b\" : \"7FEAE9B22000\", \"path\" : \"/lib64/libkeyutils.so.1\", \"elfType\" : 3, \"buildId\" : \"2E01D5AC08C1280D013AAB96B292AC58BC30A263\" }, { \"b\" : \"7FEAE9905000\", \"path\" : \"/lib64/libsasl2.so.3\", \"elfType\" : 3, \"buildId\" : \"2936CB6F2025214EC2687205007D819060CE5620\" }, { \"b\" : \"7FEAE96DE000\", \"path\" : \"/lib64/libselinux.so.1\", \"elfType\" : 3, \"buildId\" : \"A88379F56A51950A33198890D37F5F8AEE71F8B4\" }, { \"b\" : \"7FEAE94A7000\", \"path\" : \"/lib64/libcrypt.so.1\", \"elfType\" : 3, \"buildId\" : \"27C3D04725E31259F87EB3EA8478CF65A0D59568\" }, { \"b\" : \"7FEAE9245000\", \"path\" : \"/lib64/libpcre.so.1\", \"elfType\" : 3, \"buildId\" : \"9CA3D11F018BEEB719CDB34BE800BF1641350D0A\" }, { \"b\" : \"7FEAE9042000\", \"path\" : \"/lib64/libfreebl3.so\", \"elfType\" : 3, \"buildId\" : \"60C388B53B5D33B7E48DFBFB51E5D8429743BEE8\" } ] }}\n mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x556f9539aea1]\n mongod(+0x24440B9) [0x556f9539a0b9]\n mongod(+0x244459D) [0x556f9539a59d]\n libpthread.so.0(+0xF5E0) [0x7feaec7595e0]\n libc.so.6(gsignal+0x37) [0x7feaec3bc1f7]\n libc.so.6(abort+0x148) [0x7feaec3bd8e8]\n mongod(_ZN5mongo22invariantFailedWithMsgEPKcRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES1_j+0x0) [0x556f93976ef8]\n mongod(_ZN5mongo22ShardingCatalogManager20_validateHostAsShardEPNS_16OperationContextESt10shared_ptrINS_21RemoteCommandTargeterEEPKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_16ConnectionStringE+0xAE8) [0x556f93fe8848]\n mongod(_ZN5mongo22ShardingCatalogManager8addShardEPNS_16OperationContextEPKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_16ConnectionStringEx+0x30B) [0x556f93fec60b]\n mongod(+0xCB2CA8) [0x556f93c08ca8]\n mongod(_ZN5mongo12BasicCommand10Invocation3runEPNS_16OperationContextEPNS_19CommandReplyBuilderE+0xD9) [0x556f94dff469]\n mongod(+0xAD618A) [0x556f93a2c18a]\n mongod(+0xAD7A79) [0x556f93a2da79]\n mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x3D1) [0x556f93a2e9c1]\n mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3A) [0x556f93a1a12a]\n mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xBA) [0x556f93a26d8a]\n mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x97) [0x556f93a21a37]\n mongod(+0xACF251) [0x556f93a25251]\n 
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x1A2) [0x556f94be8472]\n mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x150) [0x556f93a1fc20]\n mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0xB55) [0x556f93a22d65]\n mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x357) [0x556f93a21177]\n mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x11D) [0x556f93a21abd]\n mongod(+0xACF251) [0x556f93a25251]\n mongod(+0x1C929D5) [0x556f94be89d5]\n mongod(+0x239BF44) [0x556f952f1f44]\n libpthread.so.0(+0x7E25) [0x7feaec751e25]\n libc.so.6(clone+0x6D) [0x7feaec47f34d]\n----- END BACKTRACE -----\n",
"text": "Hi Team,Am getting below error on primary config server while adding shard .Appreciate your help on this!Log trace",
"username": "Mohammedhusen_khatib"
},
{
"code": "/**\n * Copyright (C) 2018-present MongoDB, Inc.\n *\n * This program is free software: you can redistribute it and/or modify\n * it under the terms of the Server Side Public License, version 1,\n * as published by MongoDB, Inc.\n *\n * This program is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n * Server Side Public License for more details.\n *\n * You should have received a copy of the Server Side Public License\n * along with this program. If not, see\n * <http://www.mongodb.com/licensing/server-side-public-license>.\n *\n * As a special exception, the copyright holders give permission to link the\n * code of portions of this program with the OpenSSL library under certain\n * conditions as described in each individual source file and distribute\n * linked combinations including the program with the OpenSSL library. You\n",
"text": "Please check the extract maxWireVersion section in below linkLooks like some mismatch in binary version of node being added and cluster’s featureCompatibilityVersion",
"username": "Ramachandra_Tummala"
}
] | "Invariant failure maxWireVersion" error while adding shard | 2020-04-05T08:24:21.382Z | “Invariant failure maxWireVersion” error while adding shard | 1,731 |
null | [
"sharding"
] | [
{
"code": "",
"text": "**Could not find host matching read preference { mode: “primary” } for set **I used sh.status() in mongos.\nI checked the following error in the balancerwhat is the error\nPlease tell me how to fix the error",
"username": "Park_49739"
},
{
"code": "",
"text": "Could you please give some more information about your sharded cluster setup. IE number of shards, and number of servers in each shard replica set?Most of the time when I see this error it is when a shard replica set doesn’t have a primary node so the mongos/configs can’t find the primary node. This causes issues because it can’t route writes to the primary of the shard.I would check all of the shard servers directly to verify they are in a healthy state and each shard has a primary node.",
"username": "tapiocaPENGUIN"
}
] | "Could not find host matching read preference { mode: \"primary\" } for set" | 2020-04-02T12:56:46.757Z | “Could not find host matching read preference { mode: \”primary\” } for set” | 9,116 |
null | [] | [
{
"code": "",
"text": "I trying to use the following commandsmongo “mongodb+srv://cluster0-jxeqq.mongodb.net/test” --username m001-student -password m001-mongodb-basicsI thought maybe the above command needs a one ‘-’ more so I modified\nmongo “mongodb+srv://cluster0-jxeqq.mongodb.net/test” --username m001-student --password m001-mongodb-basicsBut the same errors appearsI also tried the following string connection:mongo “mongodb://cluster0-shard-00-00-jxeqq.mongodb.net:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?replicaSet=Cluster0-shard-0” --authenticationDatabase admin --ssl --username m001-student --password m001-mongodb-basicsBecasuse of this advice “Starting in MongoDB version 4.2, the ssl option has been deprecated and the new corresponding tls option has been added”, I modified the string to:mongo “mongodb://cluster0-shard-00-00-jxeqq.mongodb.net:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/test?replicaSet=Cluster0-shard-0” --authenticationDatabase admin --tls --username m001-student --password m001-mongodb-basicsMy versión is 4.2.5.Thanks in advance for your help.",
"username": "rltr"
},
{
"code": "",
"text": "Please the first command you fired in your snapshot\nYou are at mongo prompt when you fired mongo and you are trying to connect to Class cluster\nIt won’t work\nPlease exit and run from your os prompt\nAre you not using Vagrant?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "mongo “mongodb+srv://cluster0-jxeqq.mongodb.net/test” --username m001-student -password m001-mongodb-basicsThank you! It works!",
"username": "rltr"
},
{
"code": "",
"text": "Closing this thread as the issue has been resolved.",
"username": "Shubham_Ranjan"
}
] | Cannot to the class atlas cluster from the mongo shell | 2020-04-04T23:43:09.844Z | Cannot to the class atlas cluster from the mongo shell | 1,156 |
[
"performance"
] | [
{
"code": "",
"text": "Hello,I’m using mongodb for my first project and it was really working fine, until…I promoted my website so the database size increased a lot (30k documents before vs 100k now).Now, every query I make to my database is really slow! (600ms for a simple query vs 20ms before).Even if I only use index in my match params.I don’t really know what to do. Do I just need to upgrade my server ? The CPU/RAM look fine, can it be an issue with Read/Write speed ?I’m currently using a 5$ DigitalOcean droplet to host my website.Here are some screenshots of the graphs:\nLast 6 hours:\n\n6hours883×737 99.9 KB\n\nLast 24 hours:\n\n24hours885×739 121 KB\nThe specs of my server are: Shared CPU, 1 vCPU, 1GB RAM, 25 GB SSD, 1GB Transfer.Even now that the traffic on the website is really slow, the queries still take 600ms to 3000ms:\n\nnow896×796 57.5 KB\nThanks a lot if someone has an idea, I’m pretty much lost with all of that.Best,\nValentin",
"username": "vkaelin"
},
{
"code": "",
"text": "I can’t edit my post because new users can only add 1 image but I wanted to add that I tried in local with a similar size database (currently 92k documents), and the queries were really fast (about 30ms).So it should be an hardware issue ?EDIT: The issue was a missing index. I thought I had created it but no. Sorry for that.",
"username": "vkaelin"
}
] | Performance issues when database grows | 2020-04-04T11:58:04.028Z | Performance issues when database grows | 2,275 |
|
null | [
"configuration"
] | [
{
"code": "",
"text": "● mongod.service - MongoDB Database Server\nLoaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\nActive: failed (Result: exit-code) since Sun 2020-04-05 13:39:12 IST; 1h 5min ago\nDocs: https://docs.mongodb.org/manual\nProcess: 3720 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=2)\nMain PID: 3720 (code=exited, status=2)Apr 05 13:39:12 sumeet-vpceh36en systemd[1]: Started MongoDB Database Server.\nApr 05 13:39:12 sumeet-vpceh36en mongod[3720]: Error parsing YAML config file: yaml-cpp: error at line 33, column 15: illegal map value\nApr 05 13:39:12 sumeet-vpceh36en mongod[3720]: try ‘/usr/bin/mongod --help’ for more information\nApr 05 13:39:12 sumeet-vpceh36en systemd[1]: mongod.service: Main process exited, code=exited, status=2/INVALIDARGUMENT\nApr 05 13:39:12 sumeet-vpceh36en systemd[1]: mongod.service: Failed with result ‘exit-code’",
"username": "Sumeet_Boga"
},
{
"code": "",
"text": "Your config file has an error at line 33error at line 33, column 15: illegal map value",
"username": "steevej"
}
] | My mongodb service is not starting on ubuntu bionic | 2020-04-05T09:37:20.491Z | My mongodb service is not starting on ubuntu bionic | 4,992 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "i am getting two collection in two calls , is it possible t get both in one call.var varrec = database.GetCollection(“abc”).AsQueryable().Where(w => w.BridgeId == “BkN”).ToList();var varrec1 = database.GetCollection(“extabc”).AsQueryable().Where(w => w.BridgeId == “BkN”).ToList();",
"username": "Rajesh_Yadav"
},
{
"code": "",
"text": "Hi Rajesh, It’s not possible to mix two collections in a single call, but I’m curious as to why you would want too?",
"username": "Will_Blackburn"
},
{
"code": "",
"text": "You should check to see if you could use the $lookup aggregation stage.",
"username": "steevej"
},
{
"code": "",
"text": "after getting two collections i write a linq query to fillter few records by joining these two collections on bridgeid and some other clause. for ease i have just given bridgeid here.so if it is possible to write linq query to get those records which are exiting in both tables based on bridgid then also we are good to go.",
"username": "Rajesh_Yadav"
},
{
"code": "",
"text": "i am using c# mongo driver to get to mongo pls give one dummy example of $loopup through c#",
"username": "Rajesh_Yadav"
},
{
"code": "$lookup",
"text": "@Rajesh_Yadav, here’s a link to the MongoDB C# driver documentation for $lookup. They have a couple of examples there that might be of help. I’ve not used the C# driver, but generally the MongoDB documentation is good and a great place to start.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "I also have no experience with c#.You could also use Compass to build your aggregation and then export to c#. I usually build them with Compass, however I seldom export them. I keep them in resource files (java) and I load them at runtime. This way I do not need to recompile and I can use the same file in the shell and in node. One of my idea is to store them in MongoDB directly in a library collection this was they are available to all my processes. But I am not there yet. That is one of the beauty of it. It is a document, an array of document, so it can be stored in a collection.",
"username": "steevej"
},
{
"code": "var query = from p in collection.AsQueryable()\n join o in otherCollection on p.Name equals o.Key into joined\n select new { p.Name, AgeSum: joined.Sum(x => x.Age) };\n",
"text": "thank you, more thingq1) in otherCollection, do i have to put .AsQueryable() like following\notherCollection.AsQueryable() or it sould be only otherCollectionq2) in above query we write collection.AsQueryable() , does it get the data at the very time or it executes the whole query at once, that is , the fillter is applied in the database it self and it gets filltered data from the database or it gets the data of both collection in c# from the database and then applys the fillter ( on p.Name equals o.Key) on the caller side.q3) this query does not have .ToList(), is it required to put it, if i want to execute it at whole query level.",
"username": "Rajesh_Yadav"
}
] | How to get two collections at same time | 2020-04-02T05:39:39.994Z | How to get two collections at same time | 9,318 |
null | [
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "I’m particularly interested in whether the Realm Object Server will ever invalidate a session causing a user to need to re-authenticate.I presume that this will only be possible with a network connection and therefore the user can never be “kicked off” while offline - Is this correct?My use cases include the user wanting the app to be running in the background (it is recording and transmitting) while returning into network service at which point sync-ing would occur.Will my app ever be confronted by an invalid session upon returning to network service? This could drastically affect this important feature of the app.",
"username": "Eric_Lightfoot"
},
{
"code": "",
"text": "Following up. Does anyone have experience with this subject or have they encountered this scenario? It is central to my project’s design considerations but it will not be at a stage where I can test this for quite some time. I’d like to be able to see it coming, instead of crash into it.Can anyone chip in?Thanks",
"username": "Eric_Lightfoot"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Are there best practices / guidance for sync session management? | 2020-03-17T18:03:19.974Z | Are there best practices / guidance for sync session management? | 2,308 |
null | [] | [
{
"code": "",
"text": "hi All, In video i see Schema option, but i dont see that option in MangoDB compass Tool installed in my PC",
"username": "Mahesha_37723"
},
{
"code": "",
"text": "I do not know if the instructions are very different but when I took the course the instructions wereThe last sentence of the above might be your error.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks steevej it helped me",
"username": "Mahesha_37723"
},
{
"code": "",
"text": "Closing this thread as the issue has been resolved.",
"username": "Shubham_Ranjan"
}
] | Not showing Schema option in my MangoDB compass Tool installed in PC | 2020-04-04T20:17:09.974Z | Not showing Schema option in my MangoDB compass Tool installed in PC | 1,826 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "primarly i have decimal(13,4) in sqlserver, so what should i keep in mongodb.\nsecondly if i use double in mongodb what should i keep in c#\nand if i keep decimal in mongodb then what should i keep in c#yours sincerley",
"username": "Rajesh_Yadav"
},
{
"code": "BsonTypeDecimal128DoubleDecimal128",
"text": "Hi Rajesh,For available data types in MongoDB, please refer to BSON Types. The C#/.NET driver provides equivalent BsonType representations.primarly i have decimal(13,4) in sqlserver, so what should i keep in mongodb.The equivalent MongoDB data type is Decimal128.if I use double in mongodb what should i keep in c#Double.if i keep decimal in mongodb then what should i keep in c#Decimal128.If your use case for floating point values requires precision (such as monetary data), you will want to use Decimal128. Doubles are subject to floating point accuracy problems.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "2)and in c# i have used decimal , so i think it will not create problem. because we get data from mongo which is quite small.? please correct me if i am worng.yours sincerley",
"username": "Rajesh_Yadav"
},
{
"code": "",
"text": "What is this data used for? For example, is it monetary data? And, what kind arithmetic is performed on this data? These aspects can affect how the data is to be defined and handled.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "it is length of many bridges like 300.56789345 meters.\nand it is used for sum and division to get the percentage of job done.",
"username": "Rajesh_Yadav"
},
{
"code": "",
"text": "The input to your application has data defined as two data types, and your application uses them as only one data type. And, you are also saying, that changing the data type at the input to same type is not there.I think, it depends upon you application and user needs how you work with your data. Since, you know the input and the resulting output, you have to figure whether it works correctly for your needs or not.For instance, if the calculated final result is not accurate enough for your users with the present data types, what will you consider?",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "as u can see the input of sql server is 13,4 this is what we will enter in double of mango also. so input is quite smaller than double precision. and decimal which is in c# it has much bigger precision than the double of mongo or the entered 13,4. so this is were u have correct me if i am wrong .i am not asking about the calculation we do in c# after getting data from sql server and mongo.q1) so if i enter any thing less than 999999999.9999 then double of mongo will not change it ?\nq2) and what ever is in double of mongo will come as it is in decimal of c#yours sincerley",
"username": "Rajesh_Yadav"
},
{
"code": "decimaldoublemongodecimalNumberDecimal()",
"text": "The decimal BSON type uses the IEEE 754 decimal128 decimal-based floating-point numbering format. Unlike binary-based floating-point formats (i.e., the double BSON type), decimal128 does not approximate decimal values and is able to provide the exact precision required for working with monetary data.From the mongo shell decimal values are assigned and queried using the NumberDecimal() constructor.",
"username": "samuel_otomewo"
},
{
"code": "",
"text": "my question was different. I asked in a given situation , what could happen.",
"username": "Rajesh_Yadav"
},
{
"code": "",
"text": "i am just asking what i have though is correct or not.",
"username": "Rajesh_Yadav"
},
{
"code": "",
"text": "i mean the standards u have written is fine.but the solution i have given u in this situation is correct or not that is the question.\nelse i know that i have to research a lot to get to the roots, but i do not have time and experience. that is why i have asked.",
"username": "Rajesh_Yadav"
}
] | What is decimal of SQL Server in MongoDB | 2020-03-26T05:03:46.988Z | What is decimal of SQL Server in MongoDB | 3,852 |
null | [
"queries",
"text-search"
] | [
{
"code": "user: {\n _id: \"xxx\",\n name: \"Lucas\",\n items: [\n { name: \"shoes\", description: \"nice shoes\" },\n { name: \"pants\", description: \"old pants\" },\n ],\n places: [\n {name: \"my house\", loc: { type: \"Point\", coordinates: [-27, -43] }}\n ]\n}\nawait User.find({ $text: { $search: \"shoes\" } });\n",
"text": "Hello, i use mongoose with node.jsThis is an example of my User document:I need to perform a text-search ($text) that returns only the items. For example:This works! But it also returns pants, since it returns the user and not only the array item. And that is the problem, i need to paginate over the items in my database. So I need to return only the array items that matches the $text search. I know that if items was a collection itself it would work, but in my case I need those inside the user because I combine $text for items and $geoWithin for places.So, how do I return the User keeping only his items that matched my $text search?",
"username": "Fabiano_Jardim"
},
{
"code": "{ name: \"shoes\", description: \"nice shoes\" }find$elemMatchaggregate$project$filter$unwind$match$project$filter",
"text": "I think you are looking to return something like this: { name: \"shoes\", description: \"nice shoes\" }.You use a projection to limit the fields to be in the result document. But, when you have to project only a filtered set of array elements, the find method has a limited use. Using $elemMatch projection operator you can project only the first matching element.To project all the matching items use the aggregate method on the collection. Aggregation has $project and this can be used with the $filter array operator to get all the matching array elements. Another way is, to use a combination of $unwind, $match and $project stages.[ EDIT ADD ]Here is the link to a post with an example aggregate query about using the $filter on an array: Java driver - $filter aggregation operator",
"username": "Prasad_Saya"
}
] | Return only array items that matches $text | 2020-04-03T14:20:16.481Z | Return only array items that matches $text | 4,573 |
null | [
"replication"
] | [
{
"code": "2020-04-03T22:07:28.826+0530 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\n2020-04-03T22:07:28.830+0530 W ASIO [main] No TransportLayer configured during NetworkInterface startup\n2020-04-03T22:07:28.830+0530 I CONTROL [initandlisten] MongoDB starting : pid=4208 port=27017 dbpath=/data/db 64-bit host=midhilesh-X542UQ\n2020-04-03T22:07:28.830+0530 I CONTROL [initandlisten] db version v4.2.5\n2020-04-03T22:07:28.830+0530 I CONTROL [initandlisten] git version: 2261279b51ea13df08ae708ff278f0679c59dc32\n2020-04-03T22:07:28.830+0530 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018\n2020-04-03T22:07:28.830+0530 I CONTROL [initandlisten] allocator: tcmalloc\n2020-04-03T22:07:28.830+0530 I CONTROL [initandlisten] modules: none\n2020-04-03T22:07:28.830+0530 I CONTROL [initandlisten] build environment:\n2020-04-03T22:07:28.830+0530 I CONTROL [initandlisten] distmod: ubuntu1804\n2020-04-03T22:07:28.830+0530 I CONTROL [initandlisten] distarch: x86_64\n2020-04-03T22:07:28.830+0530 I CONTROL [initandlisten] target_arch: x86_64\n2020-04-03T22:07:28.830+0530 I CONTROL [initandlisten] options: { replication: { replSet: \"myDevReplSet\" } }\n2020-04-03T22:07:28.830+0530 I STORAGE [initandlisten] exception in initAndListen: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating\n2020-04-03T22:07:28.830+0530 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2020-04-03T22:07:28.830+0530 I - [initandlisten] Stopping further Flow Control ticket acquisitions.\n2020-04-03T22:07:28.830+0530 I CONTROL [initandlisten] now exiting\n2020-04-03T22:07:28.830+0530 I CONTROL [initandlisten] shutting down with code:100\n",
"text": "I am trying to connect mongodb with neo4j. I tried to follow the instruction given in neo4j blog. They have asked to create replica set. I tried to run “mongod --replSet myDevReplSet” and got the following errorsPlease help me how to solve this problem.",
"username": "midhilesh_elavazhaga"
},
{
"code": "",
"text": "[initandlisten] exception in initAndListen: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminatingPlease check if user has proper permissions on the dirs",
"username": "Ramachandra_Tummala"
}
] | Unable to create replica set: attempted to create a lock file on a read-only directory | 2020-04-03T20:03:19.122Z | Unable to create replica set: attempted to create a lock file on a read-only directory | 5,082 |
null | [
"installation"
] | [
{
"code": "",
"text": "As I was going through the log generated by MongoDB 4.2.3 installer, I realized that compass installation occurs by downloading from https://compass.mongodb.com/api/v2/download/latest/compass-community/stable/windowsDoes this mean that if there are no internet, the compass will not be downloaded nor installed?MSI (s) (AC:70) [14:20:14:822]: Invoking remote custom action. DLL: C:\\WINDOWS\\Installer\\MSIAAA0.tmp, Entrypoint: WixQuietExec64WixQuietExec64: Downloading Compass from https://compass.mongodb.com/api/v2/download/latest/compass-community/stable/windowsWixQuietExec64: Installing Compass",
"username": "Shintaro_Takechi"
},
{
"code": "",
"text": "Does this mean that if there are no internet, the compass will not be downloaded nor installed?Hi,Your understanding is correct: installing Compass is an optional task during MongoDB server installation, so the installer is downloaded separately.If you want to install Compass offline (or separately from the MongoDB server), you can download a standalone Compass installer from the MongoDB Download Centre.Regards,\nStennie",
"username": "Stennie_X"
}
] | Is internet mandatory in order to install compass with MongoDB 4.2.3 installer? | 2020-04-04T01:08:54.740Z | Is internet mandatory in order to install compass with MongoDB 4.2.3 installer? | 2,052 |
null | [
"compass"
] | [
{
"code": "",
"text": "I have a requirement where I cannot suppress the installation of MongoDB Compass (https://docs.mongodb.com/manual/tutorial/install-mongodb-on-windows-unattended/#run-the-windows-installer-from-the-windows-command-interpreter), but I have to suppress the initial setup screen popped up.I am using WiX Bundle and I have the DisplayInternalUI=“no” so main mongodb installation UI is suppressed. However the MongoDB Compass is not.Is it possible to suppress the initial setup screen of MongoDB Compass?",
"username": "Shintaro_Takechi"
},
{
"code": "<MsiPackage SourceFile=\"..\\..\\Payload\\mongodb-win32-x86_64-2008plus-ssl-4.0.12-signed.msi\" Id=\"MongoDB_4.0.12\" Cache=\"yes\" Visible=\"yes\" DisplayInternalUI=\"no\" \n DisplayName=\"MongoDB 4.0.0 Document Database\" Compressed=\"yes\" Permanent=\"yes\"\n InstallCondition=\"NOT Mongo_4_0_12_Installed\">\n\n <!--Use this MsiProperty to pass variables created with the <Variable> element to the MSI package.-->\n <MsiProperty Name=\"INSTALLLOCATION\" Value=\"[MongoDbInstallPathBurnVariable]\" />\n <MsiProperty Name=\"TARGETDIR\" Value=\"[MongoDbInstallPathBurnVariable]\" />\n <MsiProperty Name=\"INSTALLFOLDER\" Value=\"[MongoDbInstallPathBurnVariable]\" />\n <MsiProperty Name=\"ADDLOCAL\" Value=\"Server,ServerNoService,Client,Router,MonitoringTools,ImportExportTools,MiscellaneousTools\"/>\n </MsiPackage>",
"text": "Hey,This is what I use below:",
"username": "Bill_Leibold"
},
{
"code": "",
"text": "Thanks for the reply.I see 4 MsiProperties and first 3 (INSTALLLOCATION/TARGETDIR/INSTALLFODER) seems to be installation destination related properties, so I tried just ADDLOCAL property.While that did suppress the MongoDB Compass, it also seems to not install Compass. Which unfortunately is not what I desire. (If I just wanted to suppress Compass installation all together, I would use SHOULD_INSTALL_COMPASS=“0”)",
"username": "Shintaro_Takechi"
},
{
"code": "",
"text": "SHOULD_INSTALL_COMPASSWhat if you used ADDLOCAL=“all”?",
"username": "Bill_Leibold"
},
{
"code": "",
"text": "Now compass gets installed but with initialization launched. So back to original without ADDLOCAL assigned ",
"username": "Shintaro_Takechi"
},
{
"code": "",
"text": "I decided to download separate mongodb-compass-community-1.20.5-win32-x64.msi and install it as another entry and adopt SHOULD_INSTALL_COMPASS=“0” flag.\nThis way the initialization does not get launched.Thank you very much for you advices.",
"username": "Shintaro_Takechi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can I suppress the initial setup screen for MongoDB Compass? | 2020-04-03T16:47:46.133Z | Can I suppress the initial setup screen for MongoDB Compass? | 3,031 |
null | [
"installation"
] | [
{
"code": "> MongoDB shell version v4.0.3\n> git version: 7ea530946fa7880364d88c8d8b6026bbc9ffa48c\n> OpenSSL version: OpenSSL 1.1.1d 10 Sep 2019\n> allocator: tcmalloc\n> modules: none\n> build environment:\n> distarch: x86_64\n> target_arch: x86_64\nsudo systemctl start mongodFailed to start mongod.service: Unit mongod.service is masked.\n",
"text": "I am running manjaro Linux - not my usual flavour of linux, but I’m having real trouble getting the service working on this distribution. I have mongo 4 installed as followsI have created the mongo /data folder and given permission and mongod (daemon runs). I can then use the mongo command which all works. But I cannot get the service running using systemctrl. For example:sudo systemctl start mongodgivesBut I cannot unmask this using sudo systemctl unmask mongod.service. This makes no difference. How can I reliably get the service running?",
"username": "Brett_Donovan"
},
{
"code": "",
"text": "What error you get while unmaskPlease try thissudo systemctl unmask mongod",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Tried this and the one you above. Makes no difference",
"username": "Brett_Donovan"
},
{
"code": "",
"text": "sudo systemctl start mongodSo command runs but makes no difference to\nsudo systemctl start mongod and you are getting same error?Can you check below\nsystemctl list-unit-files | grep mongod\nsudo systemctl status mongodMay be this file /etc/systemd/system/mongod.service.d needs to be edited and service enabled",
"username": "Ramachandra_Tummala"
}
] | Running on Manjaro Linux | 2020-04-03T09:58:26.735Z | Running on Manjaro Linux | 4,130 |
null | [] | [
{
"code": "",
"text": "When i downloaded Compass from MongoDB Compass Download | MongoDB, it saved as .dmg file. Need assistance on installing.",
"username": "ramya29p"
},
{
"code": "",
"text": "\nimage924×204 4.26 KB\n",
"username": "007_jb"
},
{
"code": "",
"text": "Hi @ramya29pAs @007_jb mentioned, please select your Operating System from the dropdown menu and it will automatically download the right package.Hope it helps!Please feel free to get back to us if you have any other query.Thanks,\nShubham Ranjan\nCurriculum Services Engineer",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Issue on installing Compass | 2020-04-01T16:50:18.882Z | Issue on installing Compass | 1,254 |
null | [] | [
{
"code": "",
"text": "I am using below query\nvar cursor =db.getCollection(‘user_info’).find({})\nwhile (cursor.hasNext()) {\nvar record = cursor.next();\nprint(record.Login +’|’+ record.Email +’|’ + record.LastLoginDate )\n}Query is pulling back the right fields with no problem, but not in the right time format – even though when looking at the field in its normal state, it is in the right format.\nExample:\nSun Dec 15 2019 18:50:37 GMT-0800 (PST) <-- query returns this\nISODate(“2020-04-01T19:29:27.522Z”) **<-- field contains this\n** Any ideas about how to adjust the query to return the date fields in this format?\nYYYY-MM-dd’T’HH:mm:ss.SSS",
"username": "shubham_udata"
},
{
"code": "printprintjsonprint",
"text": "Sun Dec 15 2019 18:50:37 GMT-0800 (PST) ← query returns this\nISODate(“2020-04-01T19:29:27.522Z”) **<-- field contains thisThe query returns the date type only, it is the print method which prints the date as the string version of the date. Use the printjson instead of print.See: Date data type in shell",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hi Prasad…\nI tried with printjson as well.It is giving same result as earlier…",
"username": "shubham_udata"
},
{
"code": "",
"text": "Oops…sorry it worked…Thanks a lot…",
"username": "shubham_udata"
},
{
"code": "",
"text": "when i have added printjson(record.LastLoginDate) it is showing correct format,\nBut when i am using printjson(record.Login +’|’+ record.Email +’|’ + record.LastLoginDate )again the result is showing in string format…how to solve this issue?",
"username": "shubham_udata"
},
{
"code": "var cursor = db.getCollection('user_info').find();\n\nwhile (cursor.hasNext()) {\n\n var record = cursor.next();\n var printStr = record.Login + '|' + record.Email + '|' + JSON.stringify(record.LastLoginDate);\n print(printStr);\n}",
"text": "Okay, try this:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks it worked… As Date is in ISO date(IST)format can you tell me if there is any query or way to convert into PST from…",
"username": "shubham_udata"
}
] | Query to fetch data in right time format below | 2020-04-02T06:33:49.684Z | Query to fetch data in right time format below | 2,971 |
null | [
"queries"
] | [
{
"code": "db.getCollection('my-collection').insertMany([{x: 0, y: 0, z: 0},\n {x: 1, y: 0, z: 0},\n {x: 0, y: 1, z: 0},\n {x: 0, y: 0, z: 1}])\n\n[{x: 1, y: 0, z: 0}, {x: 0, y: 1, z: 0}, {x: 0, y: 0, z: 1}]\n",
"text": "Lets say I have created some recordsand I want to fetch all records that doesnt have x,y and z equal to zero\nso the first record would be filtered out from the results because all of its fields are equal to zero\nand I would get as a result",
"username": "Pablo_Botelho"
},
{
"code": "if the value of any of the fields (x, y, z) is not equal to zero\nthen print the document \nelse ignore\n{ $or: [ { x: { $ne: 0 } }, { y: { $ne: 0 } }, { z: { $ne: 0 } } ] }if the value of any of the fields (x, y, z) is greater than zero\nthen print the document \nelse ignore\n",
"text": "You are trying this:So, the filter is: { $or: [ { x: { $ne: 0 } }, { y: { $ne: 0 } }, { z: { $ne: 0 } } ] }You can also try the following with the same result:[Also, try with less than instead of greater than condition].",
"username": "Prasad_Saya"
}
] | Help with filtering in query | 2020-04-02T21:35:14.418Z | Help with filtering in query | 2,124 |
null | [
"c-driver"
] | [
{
"code": "exception: connect failed\n",
"text": "Hello All,I am writing a module to check if mongod/mongos is running on a give host-port combination. I am making use of the ping command with mongoc driver to achieve this.My mongos server is bound to the IP of the machine and not the hostname. Lets say the IP is 192.168.1.6 and hostname is myHost. In /etc/hosts, I have 192.168.1.6 myHost. mongos is running on port 27020.For me, the following command from shell gives a connection error. This is expected as my mongos is not bound to myHost. It is bound to 192.168.1.6.============================================================================\n>mongo --port 27020 --host myHost --eval “db.adminCommand({ping: 1})”\nMongoDB shell version: 3.2.22\nconnecting to: myHost:27020/test\n2020-03-31T16:37:19.598+0530 W NETWORK [thread1] Failed to connect to 127.0.0.1:27020, in(checking socket for error after poll), reason: errno:111 Connection refused\n2020-03-31T16:37:19.598+0530 E QUERY [thread1] Error: couldn’t connect to server centostemp:27020, connection attempt failed :\nconnect@src/mongo/shell/mongo.js:231:14\n@(connect):1:6============================================================================But, when running the ping command using mongoc driver with host as myHost and port as 27020. It successfully connects which I don’t expect to happen. As a result, my connectivity check module is giving false positives.I suspect there is a dns lookup happening when running the command using mongoc driver. Is there anyway to stop this from happening?Thanks,\nSanthanu",
"username": "santhanu_mukundan"
},
{
"code": "",
"text": "@santhanu_mukundan your description is inconsistent. There is no such thing as being able to bind to the hostname. Any daemon will bind to 1 or more specified IP addresses (or the special “all” address of 0.0.0.0). Your shell output includes “Failed to connect to 127.0.0.1:27020”, which indicates that when the shell resolves the hostname “myHost” that the resolver is returning the address 127.0.0.1. That could be the result of a wide variety of configuration-related things. For instance, despite what is in your hosts file, the system may be configured to consult a DNS server before looking in the hosts file.Also, it doesn’t make sense that you say the server is bound to 192.168.1.6, that /etc/hosts has an entry “192.168.1.6 myHost” and that you expect the ping to fail to connect when you give it the hostname “myHost”. The shell resolving “myHost” to 127.0.0.1 seems suspect given the information that you have provided.Please confirm the address resolution configuration of your system (nslookup and dig might be helpful in this regard). You may also want to consider providing your actual hosts file, the contents of /etc/nsswitch.conf, /etc/resolv.conf (or their equivalents if you use different services) and the complete terminal output showing the server launch, the shell interaction with the server, and the ping of the server.",
"username": "Roberto_Sanchez"
}
] | Mongoc driver ping command giving false positives | 2020-03-31T11:17:32.352Z | Mongoc driver ping command giving false positives | 2,421 |
null | [
"atlas-search"
] | [
{
"code": "",
"text": "The Text Search has followed a rather strange release schedule. M30, Free/M2/M5 … We’re wondering when it will hit M10/M20 ? Is there a date set ?",
"username": "Mark_Lynch"
},
{
"code": "",
"text": "Hi Mark,Atlas’ Full Text Search feature (beta) builds on Apache Lucene which requires provisioning additional processes that run within an Atlas cluster. Originally this feature was only available on M30 or higher dedicated clusters to ensure adequate performance.Full Text Search is now available on shared clusters (M0/M2/M5) because the search infrastructure can also be shared for these deployments.The M10 and M20 dedicated clusters have lower available resources, so there are some performance concerns that we are addresseing before making Full Text Search available for those tiers. There is currently no public date set, but when available this will definitely be widely announced.You could also raise this as a feature suggestion on the MongoDB Feedback Site so that others can upvote and watch the updates.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks for the detailed answer @Stennie_X . Fingers crossed it’s soon.",
"username": "Mark_Lynch"
},
{
"code": "",
"text": "Hi @Mark_Lynch - Atlas Search is now available for M10 and M20 clusters.",
"username": "Doug_Tarr"
},
{
"code": "",
"text": "Oh that’s great, thanks for the headsup. I was checking the blog every day or two but didn’t see it. Looking forward to using this in our products.",
"username": "Mark_Lynch"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas Search in M10/M20 | 2020-02-04T19:29:04.812Z | Atlas Search in M10/M20 | 3,664 |
null | [
"aggregation",
"configuration"
] | [
{
"code": "",
"text": "In this old thread from 2016) it was asked whether there was a way to increase the 100mb in memory limit of each stage of an aggregation pipeline. The responses centered around two points:I believe this subject needs to be revisited for the following reasons:This really comes down to who knows best what the limit should be, the company’s developers and database / server ops teams or the MongoDB developers? The MongoDB developers obviously have much more knowledge on the inner workings of Mongo, but that advice can still be taken into account by having a default limit of 100mb. I believe very strongly that today in 2020 this limit should be configurable so that we can customize the behavior for our own specific situation. The documentation can of course provide a warning that changing the setting can impact performance (although everyone already knows that.)If you want to get into the details of why my company needs to increase this limit, and why it has been holding us back from switching from SQL Server to MongoDB for the last year (despite spending 6 months developing a working Mongo solution) then we can get into specifics. But really that will distract away from the central issue, which is that different customers have different scenarios, and a “one size fits all” solution isn’t appropriate and needs to be customizable.",
"username": "Justin_Toth"
},
{
"code": "",
"text": "Hi Justin,Welcome to the MongoDB Community forum and thank you for the detailed product feedback.This would be a great improvement to suggest on the new MongoDB Feedback site so others can upvote and watch the request. I could copy your suggestion there, but it would be better if you file it directly so you automatically get updates and can follow up on any questions or feedback.A general design goal is to try to have reasonable defaults without an overwhelming variety of tuning knobs (particularly when new features are introduced). However, this is a good example where a configurable option could be beneficial to suit different use cases and available resources.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Posted to https://feedback.mongodb.com/forums/924280-database/suggestions/40081492-allow-configuration-of-100mb-memory-limit-per-aggr.",
"username": "Justin_Toth"
}
] | Allow configuration of 100mb memory limit per aggregation pipeline stage | 2020-04-02T12:51:20.508Z | Allow configuration of 100mb memory limit per aggregation pipeline stage | 4,696 |
null | [] | [
{
"code": "",
"text": "I am using MongoDB version 4.0, and I want to delay mongodb fsyncLock for test purpose. Please can you suggest any way to achieve this?\nAlso is it possible to make MongoDB sleep?",
"username": "Akshaya_Srinivasan"
},
{
"code": "fsyncLocksleep()mongosleep(1000); db.fsyncLock()\n",
"text": "Hi @Akshaya_Srinivasan,The fsyncLock command is sent from a client or driver, so you should be able to add a sleep/delay if the client or driver supports one.This would just be a delay in sending the command to your MongoDB deployment. The server does not have a concept of “sleeping”.For example, using sleep() in the mongo shell to suspend client-side execution for 1 second before sending a command:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks Stennie. Also this helped when tried from python.",
"username": "Akshaya_Srinivasan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB sleep command | 2020-03-24T09:46:40.454Z | MongoDB sleep command | 2,587 |
null | [] | [
{
"code": "startLocation: {\n // GeoJSON\n type: {\n type: String,\n default: 'Point',\n enum: ['Point']\n },\n coordinates: [Number],\n address: String,\n description: String\n },\n locations: [\n {\n type: {\n type: String,\n default: 'Point',\n enum: ['Point']\n },\n coordinates: [Number],\n address: String,\n description: String,\n day: Number\n }\n ]\n{\n \"name\": \"The Test Tourxx\",\n \"duration\": 1,\n \"maxGroupSize\": 1,\n \"difficulty\": \"medium\",\n \"price\": 501,\n \"summary\": \"Nothing exciting here man!\",\n \"imageCover\": \"tour-3-cover.jpg\",\n \"ratingsAverage\": 4,\n \"guides\": [\"5e85e24bd1b3f918dc07bb3d\", \"5e85e262d1b3f918dc07bb3e\"]\n}\n{\n \"status\": \"error\",\n \"error\": {\n \"driver\": true,\n \"name\": \"MongoError\",\n \"index\": 0,\n \"code\": 16755,\n \"errmsg\": \"Can't extract geo keys: { _id: ObjectId('5e85fc64a4031f1734b55515'), startLocation: { type: \\\"Point\\\", coordinates: [] }, ratingsAverage: 4, ratingsQuantity: 0, rating: 4.5, images: [], createdAt: new Date(1585839200315), startDates: [], secretTour: false, guides: [ ObjectId('5e85e24bd1b3f918dc07bb3d'), ObjectId('5e85e262d1b3f918dc07bb3e') ], name: \\\"The Test Tourxx\\\", duration: 1, maxGroupSize: 1, difficulty: \\\"medium\\\", price: 501, summary: \\\"Nothing exciting here man!\\\", imageCover: \\\"tour-3-cover.jpg\\\", locations: [], slug: \\\"the-test-tourxx\\\", __v: 0 } Point must only contain numeric elements\",\n \"statusCode\": 500,\n \"status\": \"error\"\n },\n \"stack\": \"MongoError: Can't extract geo keys: { _id: ObjectId('5e85fc64a4031f1734b55515'), startLocation: { type: \\\"Point\\\", coordinates: [] }, ratingsAverage: 4, ratingsQuantity: 0, rating: 4.5, images: [], createdAt: new Date(1585839200315), startDates: [], secretTour: false, guides: [ ObjectId('5e85e24bd1b3f918dc07bb3d'), ObjectId('5e85e262d1b3f918dc07bb3e') ], name: \\\"The Test Tourxx\\\", duration: 1, maxGroupSize: 1, difficulty: \\\"medium\\\", price: 501, summary: \\\"Nothing exciting here man!\\\", imageCover: \\\"tour-3-cover.jpg\\\", locations: [], slug: \\\"the-test-tourxx\\\", __v: 0 } Point must only contain numeric elements\\n at Function.create (C:\\\\Users\\\\Cody\\\\Desktop\\\\X-Files\\\\NodeJS\\\\4-natours\\\\node_modules\\\\mongodb\\\\lib\\\\core\\\\error.js:43:12)\\n at toError (C:\\\\Users\\\\Cody\\\\Desktop\\\\X-Files\\\\NodeJS\\\\4-natours\\\\node_modules\\\\mongodb\\\\lib\\\\utils.js:149:22)\\n at C:\\\\Users\\\\Cody\\\\Desktop\\\\X-Files\\\\NodeJS\\\\4-natours\\\\node_modules\\\\mongodb\\\\lib\\\\operations\\\\common_functions.js:265:39\\n at handler (C:\\\\Users\\\\Cody\\\\Desktop\\\\X-Files\\\\NodeJS\\\\4-natours\\\\node_modules\\\\mongodb\\\\lib\\\\core\\\\sdam\\\\topology.js:913:24)\\n at C:\\\\Users\\\\Cody\\\\Desktop\\\\X-Files\\\\NodeJS\\\\4-natours\\\\node_modules\\\\mongodb\\\\lib\\\\cmap\\\\connection_pool.js:352:13\\n at handleOperationResult (C:\\\\Users\\\\Cody\\\\Desktop\\\\X-Files\\\\NodeJS\\\\4-natours\\\\node_modules\\\\mongodb\\\\lib\\\\core\\\\sdam\\\\server.js:487:5)\\n at MessageStream.messageHandler (C:\\\\Users\\\\Cody\\\\Desktop\\\\X-Files\\\\NodeJS\\\\4-natours\\\\node_modules\\\\mongodb\\\\lib\\\\cmap\\\\connection.js:270:5)\\n at MessageStream.emit (events.js:311:20)\\n at processIncomingData (C:\\\\Users\\\\Cody\\\\Desktop\\\\X-Files\\\\NodeJS\\\\4-natours\\\\node_modules\\\\mongodb\\\\lib\\\\cmap\\\\message_stream.js:144:12)\\n at MessageStream._write 
(C:\\\\Users\\\\Cody\\\\Desktop\\\\X-Files\\\\NodeJS\\\\4-natours\\\\node_modules\\\\mongodb\\\\lib\\\\cmap\\\\message_stream.js:42:5)\\n at doWrite (_stream_writable.js:441:12)\\n at writeOrBuffer (_stream_writable.js:425:5)\\n at MessageStream.Writable.write (_stream_writable.js:316:11)\\n at Socket.ondata (_stream_readable.js:714:22)\\n at Socket.emit (events.js:311:20)\\n at addChunk (_stream_readable.js:294:12)\"\n}\n{\n \"status\": \"success\",\n \"data\": {\n \"tour\": {\n \"startLocation\": {\n \"type\": \"Point\",\n \"coordinates\": []\n },\n \"ratingsAverage\": 4,\n \"images\": [],\n \"guides\": [\n \"5e85e24bd1b3f918dc07bb3d\",\n \"5e85e262d1b3f918dc07bb3e\"\n ],\n \"_id\": \"5e85fb952754013728fc950c\",\n \"name\": \"The Test Tourxx\",\n \"duration\": 1,\n \"maxGroupSize\": 1,\n \"difficulty\": \"medium\",\n \"price\": 501,\n \"summary\": \"Nothing exciting here man!\",\n \"imageCover\": \"tour-3-cover.jpg\",\n \"locations\": [],\n \"__v\": 0,\n }\n }\n}\n",
"text": "My Model’s geo data implementation portion belowSame Request Body for each case belowResponse when connected local MongoDB belowResponse when connected to MongoDB Atlas(Removed unrelated potion) belowAs it can be seen local db throws error when creating empty array for “locations” but Atlas does it just fine.Please Help me identify the cause.Mongoose: 5.9.7\nMockgoose: Not Used in the project\nMongoDB: 4.2.3(Local and Atlas both)",
"username": "Kebby_Otis"
},
{
"code": "",
"text": "I have same issue except i am not providing any geo data.\nData gets saved in real altas cluster but mongo throws errorWelcome to the community @Kebby_Otis!Are you also using Mockgoose and experiencing the reported issue, or do you mean you have a similar error message in your application? If you are using Mockgoose, the issue you referenced suggests upgrading to Mockgoose 7.x or later.The Mockgoose README also notes the package as deprecated and recommends using mongodb-memory-server instead.So others can help investigate the issue, can you:Regards,\nStennie",
"username": "Stennie_X"
},
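A minimal mongo shell sketch of the failure mode discussed above (hypothetical collection name; assumes a 2dsphere index on startLocation, since a geo index is what makes the server extract geo keys on insert):
// A 2dsphere index triggers geo-key extraction on every insert
db.tours.createIndex({ startLocation: "2dsphere" });

// Fails with "Point must only contain numeric elements": empty coordinates array
db.tours.insertOne({ name: "bad", startLocation: { type: "Point", coordinates: [] } });

// Works: supply real [longitude, latitude] coordinates...
db.tours.insertOne({ name: "ok", startLocation: { type: "Point", coordinates: [-73.97, 40.77] } });

// ...or omit startLocation entirely; 2dsphere indexes skip missing fields
db.tours.insertOne({ name: "also ok" });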
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Can't extract geo keys - Working with Atlas but not Local MongoDB | 2020-04-02T12:02:46.634Z | Can’t extract geo keys - Working with Atlas but not Local MongoDB | 7,115 |
null | [
"indexes"
] | [
{
"code": "",
"text": "Hi,\nWhat is the mongo behavior when a createIndex command is reissued for an existing index on an existing collection?\nWe use spring-mongo to interact with the db and are using @indexed for index creation in application. This means that the createIndex command is reissued every time the application starts up. Is there a performance overhead from this? I would expect this to get ignored by mongo but would love to get a clarification since i couldnt find anything concrete in the documentation.\nAppreciate any leads. Thanks!",
"username": "Sneh_Bharati"
},
{
"code": "db.collection.createIndex()",
"text": "From mongo documentationIf you call db.collection.createIndex() for an index that already exists, MongoDB does not recreate the index.",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Is this applicable for older mongo versions? We use 3.6 and 4.0",
"username": "Sneh_Bharati"
},
{
"code": "createIndexesbackgroundtrue@Indexed(background=true)",
"text": "Hi @Sneh_Bharati,Behaviour for attempted re-creation of an existing index is the same in older versions of MongoDB. Create index commands will not recreate existing indexes and instead return a success message indicating “all indexes already exist” (the underlying server createIndexes command supports creating one or more indexes).However, since you are using MongoDB 3.6 and 4.0 there is an important caveat if your application is ensuring indexes on startup. The default behaviour is to build new indexes in the foreground, which can have significant consequences for a production environment.Foreground index builds on a populated collection in MongoDB 4.0 and older will block all other operations on the database that holds the collection, so you would not want to accidentally have lengthy index builds start unless you also have a planned maintenance window.To avoid this issue you can either set the background property to true in your Spring index annotation using @Indexed(background=true), or use an admin strategy like rolling index builds for a replica set or sharded cluster. See: Index Build Operations on a Populated Collection (MongoDB 4.0).MongoDB 4.2+ uses an optimised index build process to minimise the impact on a production deployment. Collection-level locks are briefly held at the beginning and end of the process, but the rest of the index build runs as a background task. See: Index Builds on Populated Collections (MongoDB 4.2).Unfortunately Spring Data does not currently provide a way to set a global default for background indexes. A relevant issue to upvote and watch is DATAMONGO-1895: Add option to specify default background index building in AbstractMongoConfiguration in Spring Data’s issue tracker.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I think yes\nYou can try it out on some test DB/collectionI tested it on 4.0.5First run created new index\n“createdCollectionAutomatically” : false,\n“numIndexesBefore” : 1,\n“numIndexesAfter” : 2,\n“ok” : 1Second run when i try to create index on same field“numIndexesBefore” : 2,\n“numIndexesAfter” : 2,\n“note” : “all indexes already exist”,\n“ok” : 1",
"username": "Ramachandra_Tummala"
},
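A hedged mongo shell sketch (hypothetical collection name) of the behaviour described above, including the background option that matters on MongoDB 4.0 and older:
// First call builds the index; repeating it is a no-op
db.orders.createIndex({ customerId: 1 });
db.orders.createIndex({ customerId: 1 });
// second call returns { ..., "note": "all indexes already exist", "ok": 1 }

// On 4.0 and older, build in the background to avoid blocking the database
db.orders.createIndex({ customerId: 1, createdAt: -1 }, { background: true });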
{
"code": "",
"text": "he default behaviour is to build new indexes in the foreground, which can have significant consequences for a production environment.Foreground index builds on a populated collection in MongoDB 4.0 and older will block all otherSpring data does allow passing an optional parameter to specify indexing in background vs foreground. I havent tried it out yet - but thats what we are planning to do going forward.\nThanks for your help.",
"username": "Sneh_Bharati"
},
{
"code": "@Indexed(background=true)",
"text": "Spring data does allow passing an optional parameter to specify indexing in background vs foreground.Hi @Sneh_Bharati,Yes, that is the option I mentioned: @Indexed(background=true).However, you also have to include that for every index annotation rather than configuring a global default (as suggested in DATAMONGO-1895).Regards,\nStennie",
"username": "Stennie_X"
}
] | Behavior of createIndex for an existing index | 2020-04-02T08:53:54.072Z | Behavior of createIndex for an existing index | 17,284 |
null | [
"data-modeling"
] | [
{
"code": "SystemSchema: {\n id:\n name:\n .\n .\n statics: [array of numbers]\n}\nStaticSchema: {\n id:\n name:\n}\n$lookupstatics",
"text": "Hey all,I’ve got a question about the structure of my data, and to see if there is a more efficient way of modifying my data to prevent the use of $lookup. This is a rough structure of my data:I am using a $lookup on the statics array, each of which map to a StaticSchema on the id field (Essentially a 1:Many relationship). Is it possible to quickly modify the statics array and replace the numeric ids with the ObjectIDs of the StaticSchemas?",
"username": "Curry"
},
{
"code": "statics$lookup",
"text": "Is it possible to quickly modify the statics array and replace the numeric ids with the ObjectIDs of the StaticSchemas?Yes. Is there a relationship between the numeric ids and the the ObjectIDs (i.e., numeric id is to be replaced with a corresponding ObjectID)? Or, is it just replacing the whole array with newer one with ObjectIDs?I’ve got a question about the structure of my data, and to see if there is a more efficient way of modifying my data to prevent the use of $lookup.Post the $lookup query you are using now.",
"username": "Prasad_Saya"
},
{
"code": "systems{\n \"_id\" : ObjectId(\"5e84afc0cb954cbb64b4dfeb\"),\n \"id\" : 31001421,\n ...\n \"statics\" : [\n 30690,\n 30691\n ],\n}\nstatics30690{\n \"_id\" : ObjectId(\"5e84b0d550824abdb757eacf\"),\n \"id\" : 30690,\n \"name\" : \"Example\"\n}\ndb.systems.aggregate([\n {\n $match: {\n \"id\": 31001421\n }\n },\n {\n $lookup: {\n from: \"statics\",\n localField: \"statics\",\n foreignField: \"id\",\n as: \"staticObjects\"\n }\n }\n]\n",
"text": "Sure! Here is one item in the systems collection:Here is the item from the statics collection that has ID 30690This is the $lookup query mongoose is essentially making:As for the first question, I am trying to replace the numeric ID with the ObjectID",
"username": "Curry"
},
{
"code": "",
"text": "How does this serve “Modifying structure to prevent using lookup”? It only changes the fields in the lookup.",
"username": "Prasad_Saya"
}
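Since the thread leaves the migration itself open, here is a hedged mongo shell sketch of one way to do the replacement the original poster asked about (collection and field names taken from the examples above; verify on a copy of the data first):
// For each system whose statics array still holds numeric ids,
// swap each numeric id for the _id of the matching statics document.
db.systems.find({ "statics.0": { $type: "number" } }).forEach(function (sys) {
  var newRefs = sys.statics.map(function (numericId) {
    var staticDoc = db.statics.findOne({ id: numericId }, { _id: 1 });
    return staticDoc ? staticDoc._id : numericId; // leave unmatched ids untouched
  });
  db.systems.updateOne({ _id: sys._id }, { $set: { statics: newRefs } });
});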
] | Modifying structure to prevent using lookup | 2020-04-01T22:27:56.535Z | Modifying structure to prevent using lookup | 1,926 |
[
"java"
] | [
{
"code": "",
"text": "Following is the code screen shotvalues of variables areafter running mongotemplate aggregate with following payload\nI am getting following errorPlease help me in writing projections to get the Constants.NESTED_CATEGORY_ITEMIDS",
"username": "SatishReddy"
},
{
"code": "{\n \"menuId\": \"5e597a2be08a070bab329ef3\",\n \"categories\": [\n {\n \"categoryName\": \"Dinner\",\n \"level\": 2,\n \"subCategories\": [\n {\n \"level\": 0, \n \"subCategoryName\": \"Fish Taco\",\n \"items\": [ { \"itemId\": \"bar\" }, { \"itemId\": \"foo\" } ]\n }\n ]\n }\n ]\n}\nAggregateIterable<Document> iterable = collection.aggregate(Arrays.asList(\n Aggregates.match(Filters.eq(\"menuId\", \"5e597a2be08a070bab329ef3\")), \n Aggregates.project(\n Projections.fields(\n Projections.computed(\n \"itemIds\",\n new Document(\"$arrayElemAt\", \n Arrays.asList(new Document(\"$arrayElemAt\", \n Arrays.asList(new Document(\"$arrayElemAt\",\n Arrays.asList(\"$categories.subCategories.items.itemId\", 0)), 0)), 1))\n )\n )\n )\n));\nProjections.computed()MappingMongoConverterProjections.computed()",
"text": "Hi @SatishReddy, welcome!Please help me in writing projections to get the Constants.NESTED_CATEGORY_ITEMIDSAssuming that you have the following document structure:Using your example $arrayElemAt, here’s how you can use com.mongodb.client.model.Projections.computed():Please see also MongoDB Java driver: Use Aggregation Expressions.after running mongotemplate aggregate with following payloadIf you’re using spring-data MongoTemplate, make sure that the version supports the driver’s Projections.computed(). The error that you posted looks like related to spring-data Mapping and MappingMongoConverter, make sure that it supports Projections.computed() as well.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thanks @wan. Using Projections.computed() returns Bson. Can you please let me know how to convert Bson to ProjectOperation or how to do using ProjectionOperation.",
"username": "SatishReddy"
},
{
"code": "",
"text": "Hi @SatishReddy,Can you please let me know how to convert Bson to ProjectOperation or how to do using ProjectionOperation.Perhaps you are trying to mix Aggregation between MongoDB Java driver and spring-data-mongodb Projection Operation. The above example is for MongoDB Java driver, if you’re using spring-data-mongodb you could try to use ArrayOperators.ArrayElemAt aggregation operator instead.If you have further questions about the use of spring-data-mongodb, I’d suggest posting a question on StackOverflow: spring-data-mongodb to reach wider audience with the expertise.Regards,\nWan",
"username": "wan"
},
{
"code": "",
"text": "What does the output (the result document fields and values) from the projection look like?",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "@Prasad_SayaUsing following documenthow to get following result using ProjectionOperation",
"username": "SatishReddy"
},
{
"code": "MongoTemplateMongoOperations mongoOps = new MongoTemplate(MongoClients.create(), \"test\");\n\nAggregation agg = newAggregation(\n project()\n .and(arrayOf(\"categories.subCategories.items\").elementAt(0))\n .as(\"items\")\n .andExclude(\"_id\")\n);\n\nAggregationResults<Document> results = mongoOps.aggregate(agg, \"collection\", Document.class);\nresults.forEach(doc -> System.out.println(doc.toJson()));",
"text": "how to get following result using ProjectionOperation{“items”: [ { “itemId”: “bar” }, { “itemId”: “foo” } ]}This is the MongoDB Spring Data (v2.2.6) code using MongoTemplate, and returns the expected output document:",
"username": "Prasad_Saya"
}
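For anyone who wants to sanity-check the projection outside Spring, this is (as far as I can tell) the equivalent raw aggregation in the mongo shell, using the document shape shown earlier in the thread:
db.collection.aggregate([
  { $match: { menuId: "5e597a2be08a070bab329ef3" } },
  { $project: {
      _id: 0,
      // "$categories.subCategories.items" resolves to a triply nested array,
      // so two $arrayElemAt unwraps yield the inner items array itself
      items: { $arrayElemAt: [ { $arrayElemAt: [ "$categories.subCategories.items", 0 ] }, 0 ] }
  } }
])
// -> { "items" : [ { "itemId" : "bar" }, { "itemId" : "foo" } ] }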
] | How to write Java ProjectionOperation computation | 2020-03-18T11:55:29.735Z | How to write Java ProjectionOperation computation | 9,008 |
|
null | [
"data-modeling",
"performance"
] | [
{
"code": "",
"text": "Hello All,I am a newbie to Mongo. I am working on making an API which stores data records of individuals. I am using a Virtual Private server for hosting the API and the server. I am looking at possible mechanisms/schema choices I can use to isolate the data of individuals, so that data records of individuals which belong to a group can be accessed by admin users of that group, despite all the data about individuals being stored in one table(‘group’ is a property of each entry).I have the following queries:Any help appreciated,\nThanks,\nGeorge.",
"username": "George_Joseph"
},
{
"code": "",
"text": "Hi @George_Joseph, welcome!Your question is quite broad. In terms of your questions (data permission and scalability), I’d suggest to check out MongoDB Stitch. It’s a serverless platform built on top of MongoDB Atlas.It has built-in rules system to define permissions, see Stitch: Define Roles and Permissions. See also:Regards,\nWan.",
"username": "wan"
}
] | Mongo schema and scaling considerations | 2020-03-05T21:58:53.345Z | Mongo schema and scaling considerations | 1,762 |
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "Hello - mongoDB beginner user and community beginner, so would be grateful if you could point me towards the right place to pose this question.Running mongodb community server 4.2.3 on Ubuntu 18.04. I am solely using the shell (command “mongo”) at this point. I am using a very simple terminal emulator which does not process ANSI codes (instead, it prints the control character bytes in decimal so they are visible.)My question/problem is this - does mongoDB generate ANSI codes as part of it’s output back to a user?When I enter a command (for example, help), the response I get back includes an echo of the command with some embedded ANSI (ANSI-type?) sequences (each sequence begins with two bytes - [ - followed by another byte or two.Example: When I type help (and hit the enter key),I get this response: help[3G[Jhelp[7Gfollowed by the expected response to a help command:db.help() help on db methods\ndb.mycoll.help() … and so onbut with the addition of a tab character (byte value 9) at the start of each line of text.Here are my specific questions:Thanks very much, and again, as I am new to the community, please assist by pointing me to the correct user group if this is not the appropriate place.Thanks!Dave",
"username": "Dave"
},
{
"code": "mongolinenoiseTERM",
"text": "Running mongodb community server 4.2.3 on Ubuntu 18.04. I am solely using the shell (command “mongo”) at this point. I am using a very simple terminal emulator which does not process ANSI codes (instead, it prints the control character bytes in decimal so they are visible.)Welcome to the MongoDB community @Dave!The mongo shell uses a fork of the linenoise library which requires a subset of ANSI escape sequences based on VT100 features.What terminal emulator and TERM environment setting are you using? I’d recommend upgrading to a more capable terminal client, if possible.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks, Stennie for the quick reply!The ANSI stuff appears only to be generated generated as part of echo from the shell, so that fits very much with “linenoise” which I hadn’t heard of before.Is it possible to tell mongodb shell to suppress all echoes, so that the only output it generates are responses to commands?Of course the proper path for me is to harness mongodb driver for the language I’m using ( C ), but for the very first getting-started phase, I wanted to build on my initial interactions with the shell (using a simple program which I wrote), which have gone well otherwise.Thank you again for the quick and illuminating reply!Dave",
"username": "Dave"
},
{
"code": "helpmongo--quietmongo",
"text": "Is it possible to tell mongodb shell to suppress all echoes, so that the only output it generates are responses to commands?Hi Dave,Can you provide a bit more context on your use case – are you trying to write a script for the shell? What sort of interactions are you scripting? Your example of the help command is a feature specific to an interactive shell session.You can also pass scripts to the shell from the command line, which is generally better suited for automation if you don’t want to use a driver. I would strongly recommend using a supported driver if you want to have more control over I/O and error handling.See Differences Between Interactive and Scripted mongo if you are interested in trying to write non-interactive shell scripts. You can include the --quiet option to suppress the normal startup output and warnings from the mongo shell.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "–quiet! Cool! No time to test till tomorrow morning, but in a nutshell that sounds promising.I am doing this AS A RANK BEGINNER (in particular, I like to write “out to the bare metal” and didn’t feel confident trying to write/debug my own interface to the MongoDB “wire” protocol, at this stage, so I figured scripted interface into the shell would be sufficient, given that I have no need for concurrency, “sharding”, or any other special features (yet.)What I am doing - a simple state machine (written in C) build on top of a home-made telnet client, that watches for a character from mongo (like “>”) and/or a timeout (number of seconds), and given this and state number, do something (send a message to mongoDB like insert one record is the most common case).All of this works well enough (20 records/second seems to be the fixed ceiling I can find and that’s sufficient for now), so I hope(d) to continue this path (scripting vs mongodb shell) while I’m learning.PS I really like MongoDB - the more I read about it’s design, my reaction is “that’s nice!”. Congrats to the designers/architects - as a meager beginner, this is a great introduction to the world of noSQL!Dave",
"username": "Dave"
},
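As a concrete, hedged illustration of the non-interactive approach Stennie describes (hypothetical database and collection names): a scripted invocation produces only the result, with no linenoise echo or ANSI sequences at all.
# --quiet suppresses startup chatter; the value of the --eval expression is the only output
mongo mydb --quiet --eval 'db.sensors.insertOne({ t: new Date(), v: 42 }).acknowledged'
# prints: true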
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | mongoDB shell - escape sequences in output (response) text - i.e., "help" | 2020-04-01T21:49:18.193Z | mongoDB shell - escape sequences in output (response) text - i.e., “help” | 3,692 |
null | [
"dot-net"
] | [
{
"code": "var playerDoc = trainingsCollection.AsQueryable()\n .SingleOrDefault(p => p.ID == playerID);\npublic class Training \n{\n [BsonId]\n public int ID { get; set; }\n\n // The player's skill data as a nested object\n public List<DaySkills> SkillsHistory { get; set; }\n}\npublic class DaySkills\n{\n public short Day { get; set; }\n public byte[] Skills { get; set; }\n}\nvar lastTwoDaySkills = playerDoc.DaySkills\n .TakeLast(2);\n",
"text": "I know how to retrieve a document from a collection.\nThis is just an example:The training object is quite simple:where the DaySkills class is defined here:If I want to retrieve the last two skills in the list, I can simply create a Linq query on the playerDoc:I wonder if I can retrieve the last two DaySkills in the list from the DB without retrieving the entire document.\nThanks forward for any suggestion.",
"username": "Leonardo_Daga"
},
{
"code": "var query = (from t in collection.AsQueryable<Training>()\n select t.SkillsHistory.Take(-2));\n",
"text": "Hi @Leonardo_Daga,I wonder if I can retrieve the last two DaySkills in the list from the DB without retrieving the entire document.You can try the following:Which essentially utilises aggregation operator $slice on a $project stage.Regards,\nWan.",
"username": "wan"
},
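For comparison, the same "last two elements" projection expressed directly in the mongo shell (hypothetical collection name; playerID stands for the value bound in the LINQ query above, and [BsonId] maps ID to _id):
db.trainings.find(
  { _id: playerID },
  { SkillsHistory: { $slice: -2 } } // negative $slice takes from the end of the array
)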
{
"code": "var lastTwoDaySkills = collection.AsQueryable()\n .Where(p => p.ID == playerID)\n .Select(p => p.SkillsHistory.Take(-2))\n .SingleOrDefault();\nvar lastTwoDaySkills = (from t in collection.AsQueryable()\n where t.ID == playerID\n select t.SkillsHistory.Take(-2))\n .SingleOrDefault();\nvar lastTwoDaySkills = collection.AsQueryable()\n .SingleOrDefault(p => p.ID == playerID)\n .SkillsHistory\n .TakeLast(2);\nvar playerDoc = collection.AsQueryable()\n .SingleOrDefault(p => p.ID == playerID);\n\nvar lastTwoDaySkills = playerDoc.SkillsHistory\n .TakeLast(2);\n",
"text": "Thank you @wan for your reply.\nI followed your suggestion and I compared the time needed from the instruction you propose (changed just in sintax), that I report here:Solution1:or its equivalent:Solution 2:with my first attempt:Solution 3:Results were slightly favourable in your approach (especially the second solution I wrote), but surprisingly, separating the query and the data extraction as follows:\nSolution 4:solution 4 works slightly faster than 1 and 3, as the solution 2.I report here the results, for your information (average time for 192 queries like this on the same database):In conclusion, it seems that the solution you proposed doesn’t decrease the access time to the information. Better results I obtained in solution 4 are maybe just a lucky run.I just wonder if the fact I have not used the typed version of AsQueryable respect your proposal and the fact that I added the Where method to restrict the search to a single player means that I’ve missed something from your answer.I published the project with the four solution tested at the following link: https://github.com/LeonardoDaga/MongoDbSample/tree/master/MongoDbSample/MongoDbConsoleDocumentAccessAny further suggestion or recommendation is welcome.King regards,\nLeonardo",
"username": "Leonardo_Daga"
}
] | Querying an array inside a document | 2020-03-28T21:58:13.518Z | Querying an array inside a document | 3,463 |
null | [
"node-js"
] | [
{
"code": "const { MongoClient } = require('mongodb')\n\nconst url = 'mongodb://localhost'\nconst config = {\n connectTimeoutMS: 5000,\n useUnifiedTopology: true\n}\n\nasync function mongodb() {\n console.time('connect')\n try { await MongoClient.connect(url, config) }\n catch (error) { console.error(error) }\n finally { console.timeEnd('connect') }\n}\n\nmongodb()\nMongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017\nconnect: 30034.340ms\n(node:19040) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.\n\nMongoNetworkError: failed to connect to server [localhost:27017] on first connect [Error: connect ECONNREFUSED 127.0.0.1:27017\n\nconnect: 2025.937ms\nconst { MongoClient } = require('mongodb')\n\nconst fetchLimit = 500000\nconst url = 'mongodb://localhost'\nconst config = {\n connectTimeoutMS: 5000,\n socketTimeoutMS: 5000,\n useUnifiedTopology: true\n}\n\nasync function mongodb() {\n let client\n\n console.time('connect')\n try { client = await MongoClient.connect(url, config) }\n catch (error) { console.error(error) }\n finally { console.timeEnd('connect') }\n\n const coll = client.db('reps').collection('req')\n\n console.time('find')\n try { await coll.find({ p: 'XYZ' }).limit(fetchLimit).toArray() }\n catch (error) { console.error(error) }\n finally { console.timeEnd('find') }\n}\n\nmongodb()\nconnect: 28.459ms\nfind: 8731.263ms\nconnect: 70.304ms\nMongoNetworkError: connection 2 to 127.0.0.1:27017 timed out\nfind: 1225.078ms\nfindcountawait coll.countDocuments()",
"text": "Hi Guys! I hope I’m posting this in correct place…I’m working with MongoDB using NodeJS mongodb driver. But I can’t make few options to work… My setup:\nOS: Windows 10 x64\nNodeJS mongodb driver: 3.5.5\nMongoDB: 4.2.5I want to limit:Description of connectTimeoutMS from mongodb.github.io: How long to wait for a connection to be established before timing outDescription from jira.mongodb.org: dictates how long to wait for an initial connection before considering it a timeout. This is used exclusively when we create a TCP/TLS socketSo I tried to connect to disabled DB with the following code:Which produces the following logs:If I will change useUnifiedTopology to false I will get this log:The issue that I have connectTimeoutMS set to 5 000 ms (5 secs) but as you can see it gets timed out in 2 or 30 seconds not in 5 seconds.Next issue is with query execution timeout.Description of socketTimeoutMS from mongodb.github.io: How long a send or receive on a socket can take before timing outDescription from jira.mongodb.org: dictates how long to wait for any operation on an existing connection before timing out. Once the TCP socket has been connected, we use this for our actual operationsSo I tried to make a query into DB with 5 seconds timeout limit:And I got this:I though that after 5 secs I will get timeout error… What’s wrong ?\nEven if I will set socketTimeoutMS to 30 ms I will get the same result…\nBut if I will set it to 20 ms I will get this:If I replace find with count (collection has more than 10 millions docs) and use await coll.countDocuments() I will get Timeout Error after 60 secs (not after 5)Can anybody assist me, I dont understand what’s going on…",
"username": "Vlad_Kote"
},
{
"code": "connectTimeoutMSMongoClient#connectMongoClientserverSelectionTimeoutMSserverSelectionTimeoutMSMongoClientconst client = new MongoClient(..., { serverSelectionTimeoutMS: 5000 });\nawait client.connect();\nsocketTimeoutMSsocketTimeoutMSmaxTimeMStry {\n const collection = client.db().collection('test_collection');\n\n console.time('find');\n await collection.find({ $where: 'sleep(1000)' }).limit(10).maxTimeMS(10).toArray();\n console.timeEnd('find');\n} catch (err) {\n console.timeEnd('find');\n console.dir({ err });\n}\nfind: 44.282ms\n{\n err: MongoError: operation exceeded time limit\n at MessageStream.messageHandler (/home/mbroadst/Development/mongo/node-mongodb-native/lib/cmap/connection.js:261:20)\n at MessageStream.emit (events.js:209:13)\n at processIncomingData (/home/mbroadst/Development/mongo/node-mongodb-native/lib/cmap/message_stream.js:144:12)\n at MessageStream._write (/home/mbroadst/Development/mongo/node-mongodb-native/lib/cmap/message_stream.js:42:5)\n at doWrite (_stream_writable.js:428:12)\n at writeOrBuffer (_stream_writable.js:412:5)\n at MessageStream.Writable.write (_stream_writable.js:302:11)\n at Socket.ondata (_stream_readable.js:722:22)\n at Socket.emit (events.js:209:13)\n at addChunk (_stream_readable.js:305:12) {\n ok: 0,\n errmsg: 'operation exceeded time limit',\n code: 50,\n codeName: 'ExceededTimeLimit',\n operationTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1585574598 },\n '$clusterTime': { clusterTime: [Timestamp], signature: [Object] },\n name: 'MongoError',\n [Symbol(mongoErrorContextSymbol)]: {}\n }\n}\n",
"text": "Hi @Vlad_Kote, welcome to the community forums! I’ll try to answer your questions in two sections below:The connectTimeoutMS option is used by the driver to determine when to timeout an attempt to connect an individual connection to a server (one of many in your connection pool). It does not have a direct relation to MongoClient#connect. When you attempt to connect a MongoClient it attempts server selection, which means that it will attempt for up to serverSelectionTimeoutMS to connect to the cluster before reporting that it was unable to find a suitable server.If you want to get “fast fail” behavior on your connect, you can pass serverSelectionTimeoutMS to your MongoClient:The socketTimeoutMS option corresponds to Node’s Socket#setTimeout method, and guides the behavior around socket inactivity. In your case it seems like you want to guarantee that an operation succeeds or fails in a given time range. Your best bet for this today is to use a combination of socketTimeoutMS (in case the socket is indeed inactive, due to network issues) and maxTimeMS which will cause the operation to fail after the specified time on the server side:in this case will result in:Hope that helps!",
"username": "mbroadst"
},
{
"code": "",
"text": "Thanks a lot, @mbroadst !And yes, ‘serverSelectionTimeoutMS’ exactly what I need.\nAnd ‘maxTimeMS’ method also did the trick here…Looks like I was confused from the beginning…\nI’m working with Mongoose lib which has clear notice in the docs that “we pass Option object into underlying mongodb lib”. But I didn’t find such notice in mongodb lib. That’s why I thought that mongodb couldn’t handle it…So serverSelectionTimeoutMS option is handled by… By whom ? The mongodb lib source uses that param but I can’t find mention about it in their docs mongodb.github.io/node-mongodb-native/3.5/api/. Am I looking in wrong place ?Lib docs hasn’t Connection SettingsBut the MongoDB docs itself has…I will try now to achieve what I want with Mongoose.Again, thanks a lot, @mbroadst",
"username": "Vlad_Kote"
},
{
"code": "serverSelectionTimeoutMSMongoClientMongoClient",
"text": "Glad to hear it helped!You’re not crazy, our documentation is not helping you out right now - but we’re working on that:serverSelectionTimeoutMS is a top-level MongoClient or connection string option that is only supported by the “Unified Topology”. Since this topology is still gated by a feature flag, its documentation was not merged in with MongoClient. In the next patch release of the driver these options will be added, with a note that they are only relevant to the unified topology.We are currently introducing type checking to the driver, which will result in much higher quality API documentation, and intellisense if you happen to be using an editor with LSP supportFinally, we have a larger project in place to rewrite our Reference documentation to include in-depth details on topics such as this. The project is being handled by our fantastic documentation team, so you can expect to have a better experience with our documentation in the future.",
"username": "mbroadst"
},
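Pulling the pieces above together, a hedged sketch of a client configured for fast, bounded failures (same connection string and names as the earlier examples in this thread):
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost', {
  useUnifiedTopology: true,
  serverSelectionTimeoutMS: 5000, // bound how long connect() searches for a server
  socketTimeoutMS: 10000          // guard against a dead socket mid-operation
});

async function boundedFind() {
  await client.connect();
  // maxTimeMS bounds server-side execution of this one operation
  return client.db('reps').collection('req')
    .find({ p: 'XYZ' }).maxTimeMS(5000).limit(10).toArray();
}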
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Connect timeout and execution timeout in nodejs driver | 2020-03-29T23:01:16.068Z | Connect timeout and execution timeout in nodejs driver | 62,303 |
null | [
"data-modeling",
"realm-studio"
] | [
{
"code": "",
"text": "Hi,I’m looking for any tips about making it easy to generate realm model objects across different platforms. It seems quite fiddly getting them matched and I wondered if there’s any good strategies for making it simpler.I saw that it’s possible to generate language specific models using the Realm Studio tool, but the generated code isn’t always the best object structure and it’s a manual process.Ideal would be something like protocol buffers that you could check into a single repo and reference from many projects. Is that too hopeful?",
"username": "Jason_Whetton"
},
{
"code": "",
"text": "Hi Jason - sorry for the long wait …Currently Realm Studio is the only tool that we provide (and I know of) which can generate the language specific code. Although it’s not a separate package, the part of Realm Studio responsible for this is highly decoupled from the rest of Realm Studio: realm-studio/src/services/schema-export at channel/major-13 · realm/realm-studio · GitHub, it should be fairly easy to repurpose for your need.\nI added an issue in the Realm JS repository with a suggestion to build and publish this as a separate package on NPM. Although I don’t know what the priority of getting that solved would be.but the generated code isn’t always the best object structureI would love specific suggestions on how to improve this.",
"username": "kraenhansen"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Model generation across platforms | 2020-03-17T18:02:58.724Z | Realm Model generation across platforms | 3,832 |
null | [
"python"
] | [
{
"code": "",
"text": "I am trying to get long running queries (sec_running >= 5) from MongoDB using Python.Though the command works fine from Mongo console, it throws error or not working as expected when executed using PyMongo. Can someone please guide me on this ?.How can I execute the below code from Python ?.db.currentOp({“secs_running”: {$gte: 5}})I am able to execute “currentOp” using PyMongo, however not able filter for the “secs_running”.Please help, Thanks in advance. !",
"username": "SAN"
},
{
"code": "mongocurrentOpclient = pymongo.MongoClient() \ndatabase = client.admin \nresponse = database.command(\"currentOp\", {\"secs_running\": {\"$gt\": 5}})\n",
"text": "Hi @SAN,Though the command works fine from Mongo console, it throws error or not working as expected when executed using PyMongo. Can someone please guide me on this ?.The db.currentOp() from mongo shell is a wrapper for an admin command currentOp.Using PyMongo, you can issue database commands using Database.command method. For currentOp database admin command you could try the following:Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thanks Wan !.\nLet me try with this script as well.",
"username": "SAN"
},
{
"code": "",
"text": "Hi Wan,The commands gets executed successfully.\nHowever the filter doesn’t work as expected.\nThe result set has all the queries that are running irrespective of the “secs_running” value.Basically it’s the output of “currentOp” as it is.\nFilter {\"$gt\": 5} doesn’t really seems to be working here.Thanks agian.",
"username": "SAN"
}
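One hedged way to narrow this down: MongoDB 3.6+ also exposes current operations through the $currentOp aggregation stage, whose $match filtering happens server-side. Checking in the mongo shell first confirms whether the filter itself works before debugging the PyMongo invocation:
db.getSiblingDB("admin").aggregate([
  { $currentOp: {} },                       // must run on the admin database
  { $match: { secs_running: { $gte: 5 } } } // server-side filter on running time
])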
] | Python script to get long running mongoDB queries using PyMongo | 2020-03-27T05:02:02.677Z | Python script to get long running mongoDB queries using PyMongo | 4,309 |
null | [
"stitch"
] | [
{
"code": "exports = async function(payload, response) {\n const mongodb = context.services.get(\"mongodb-atlas\");\n const eventsdb = mongodb.db(\"mydatabase\");\n const eventscoll = eventsdb.collection(\"mycollection\");\n const result= await eventscoll.insertOne(payload.query);\n var id = result.insertedId.toString();\n if (result) {\n return JSON.stringify(id,false,false);\n }\n return { text: `Error saving` };\n}\n",
"text": "Hi, I’m working on an incoming webhook post request in Stitch and have a question about filtering the payload please – is it possible to filter the payload so that only a select field is inserted into the database collection?\nI’ve used a mongodb youtube tutorial as the basis, which is great and uses an async/await function to insert the entire payload with “insertOne” – but I only want to insert a specific field (e.g. “name”:”John”) from the this payload into the database.\nMany thanks, AndiPS here is the function I’m using",
"username": "a_Jn"
},
{
"code": "const body = payload.body.text(); \nconst document = EJSON.parse(body); \nconsole.log(\"Only Name Field:\", document.name);\n{\"name\": document.name}",
"text": "Hi @a_Jn, welcome!is it possible to filter the payload so that only a select field is inserted into the database collection?For POST webhook, you could retrieve a specific field as below:Please note that this is the value (string), if you would like to insert this as a document you would need to create a document. i.e. {\"name\": document.name}.\nFor more information see Stitch: Incoming WebHooksRegards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Great thank you so much Wan,I’m a total newbie (as you can probably tell=) and trying out Mongo to use as nosql database for future applications.Yes, I’d like to insert just the name field as a new document but I’m struggling to integrate {“name”: document.name} into that function, which is currently working really well to insert the entire payload as a new document.I also have one last question please, to see how to make use of a GET webhook – where we only want to show the id field (not the _id):[{“_id”:{“$oid”:“5e83219bea6a9407daef5b77”},“id”:“3456787”},{“_id”:{“$oid”:“5e8322a71a4ca4049533353b”},“id”:“3456789”},{“_id”:{“$oid”:“5e8322df1ef1af501adbcc9b”},“id”:“3456788”}]https://webhooks.mongodb-stitch.com/api/client/v2.0/app/app1-ifvea/service/httpGET/incoming_webhook/webhookhttpGETPS here is the function I’m using, I’m struggling with the last line, as to only show the id:exports = function(payload) {\nconst mongodb = context.services.get(“mongodb-atlas”);\nconst mycollection = mongodb.db(“mydatabase”).collection(“mycollection”);\nreturn mycollection.find({}).toArray();\n};Many thanks !Andi",
"username": "a_Jn"
},
{
"code": "_idreturn collection.find({}, {_id:0}).toArray();\n",
"text": "Hi @a_Jn,PS here is the function I’m using, I’m struggling with the last line, as to only show the id:You can use projection to exclude the _id field, for example:See also Stitch: collection.find(), especially the projection options.I’m a total newbie (as you can probably tell=) and trying out Mongo to use as nosql database for future applications.Not a problem, we are all learning . You may find MongoDB Stitch Tutorials useful as well.Regards,\nWan.",
"username": "wan"
},
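Putting Wan's two answers together, a hedged sketch of the POST webhook that inserts only the selected field (database and collection names as in the original function above):
exports = async function(payload, response) {
  // Parse the POST body, then keep only the field we care about
  const doc = EJSON.parse(payload.body.text());
  const coll = context.services.get("mongodb-atlas")
    .db("mydatabase").collection("mycollection");
  const result = await coll.insertOne({ name: doc.name }); // only the "name" field
  return result.insertedId.toString();
};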
{
"code": "",
"text": "Thank you so much Wan, that projection works really well for excluding a field, I’m going to dig deeper into the Stitch tutorials, looks like it has super potential for apps! Many thanks Andi",
"username": "a_Jn"
}
] | Stitch filter incoming post webhook payload | 2020-03-30T19:28:09.095Z | Stitch filter incoming post webhook payload | 2,884 |
null | [] | [
{
"code": "Write errors: [BulkWriteError{index=47, code=40333, message='Concurrent operations on the same resource, please try again', details={}}]. ; nested exception is com.mongodb.MongoBulkWriteException: Bulk write operation error on server",
"text": "Hi All,I’m dealing with following exception when doing upsert operations in large dataset, which I couldn’t find the root cause or identify a smaller set of data that consistently reproduce the issue:Write errors: [BulkWriteError{index=47, code=40333, message='Concurrent operations on the same resource, please try again', details={}}]. ; nested exception is com.mongodb.MongoBulkWriteException: Bulk write operation error on serverDoes anyone know what the potential issues are behind this message ? Especially, what kind “resource” that the error refer to ? Or what kind of troubleshooting I can do to tackle this issue.Thanks,\nT",
"username": "Tuan_Dinh"
},
{
"code": "",
"text": "Does anyone know what the potential issues are behind this message ?Hi @Tuan_Dinh,To help investigate this, can you confirm:Especially, what kind “resource” that the error refer to ?I’m having trouble finding this error message in the MongoDB server source code. Are you using a hosted or emulated service?Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "exception=org.springframework.dao.DataIntegrityViolationException: Bulk write operation error on server <server>:27017. Write errors: [BulkWriteError{index=63, code=40333, message='Concurrent operations on the same resource, please try again', details={}}]. ; nested exception is com.mongodb.MongoBulkWriteException: Bulk write operation error on server <server>:27017. Write errors: [BulkWriteError{index=63, code=40333, message='Concurrent operations on the same resource, please try again', details={}}]. backoff={1000ms}",
"text": "Thanks @Stennie_X for the prompt reply.I’m using Spring Web Flux with reactive Mongo, sort of delegate to framework to deal with the driver etc.As for the exception, it is nested within a Spring’s DataIntegrityViolationException:exception=org.springframework.dao.DataIntegrityViolationException: Bulk write operation error on server <server>:27017. Write errors: [BulkWriteError{index=63, code=40333, message='Concurrent operations on the same resource, please try again', details={}}]. ; nested exception is com.mongodb.MongoBulkWriteException: Bulk write operation error on server <server>:27017. Write errors: [BulkWriteError{index=63, code=40333, message='Concurrent operations on the same resource, please try again', details={}}]. backoff={1000ms}I’ve checked: The MongoBulkWriteException is with mongo-driver-core 3.11.2 which is a dependency in our service.And yes, the server is not exactly Mongo. It’s AWS Document DB (but it is known to be mongo underneath with version 3.6 according to their doc)So you reckon either the Spring framework or DocumentDB throws the exception ?Thanks\nT",
"username": "Tuan_Dinh"
},
{
"code": "",
"text": "And yes, the server is not exactly Mongo. It’s AWS Document DB (but it is known to be mongo underneath with version 3.6 according to their doc)Hi @Tuan_Dinh,DocumentDB is actually an emulation of the MongoDB server with many functional differences and incomplete feature support versus the claimed 3.6 version compatibility (both examples are from AWS documentation).The general messaging may be a bit unclear, but as per their documentation on functional differences:Amazon DocumentDB emulates the MongoDB 3.6 API on a purpose-built database engineThe server error you are encountering is specific to DocumentDB, so you will have to contact AWS support or ask on a site like Stack Overflow.Regards,\nStennie",
"username": "Stennie_X"
}
] | Causes behind exception "Concurrent operations on the same resource" | 2020-04-01T02:26:17.843Z | Causes behind exception “Concurrent operations on the same resource” | 4,666 |
null | [
"php"
] | [
{
"code": "",
"text": "Does MongoDB support sessions like the way PHP does? I want to have stored values the can be used across multiple pages.",
"username": "kev_stev"
},
{
"code": "MongoDB\\Driver\\Session",
"text": "Welcome to the community @kev_stev!Does MongoDB support sessions like the way PHP does?MongoDB’s client sessions (and the associated MongoDB\\Driver\\Session class in the PHP driver) provide context for a single client connection and are a different concept from the application sessions you are thinking of. These logical sessions are used to enable MongoDB features like retryable writes and transactions, and cannot be shared between multiple clients or threads. You also cannot write custom data to a MongoDB session; these are for internal resource tracking.I want to have stored values the can be used across multiple pages.If you want to share data between pages in your PHP application, you would continue to use PHP sessions.Regards,\nStennie",
"username": "Stennie_X"
}
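To illustrate what a MongoDB client session is actually for — transaction context rather than user data — here is a hedged mongo shell sketch (hypothetical namespace; transactions require a replica set deployment):
const session = db.getMongo().startSession();
const orders = session.getDatabase("app").getCollection("orders");
session.startTransaction();
orders.insertOne({ item: "x", qty: 1 });
session.commitTransaction(); // nothing user-defined is stored in the session itself
session.endSession();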
] | Does MongoDB support sessions across multiple pages? | 2020-03-31T22:29:24.482Z | Does MongoDB support sessions across multiple pages? | 2,138 |
null | [] | [
{
"code": "",
"text": "Hello all,I’m Rick, CTO & Co-founder of a two sided marketplace application named Sussd.We recently started getting direct feedback through customer interviews which allowed us to gain insight in to what our customers really want, and as a result we’ve pushed our existing backlog back and created a new one. Which has given me the opportunity to get some foundation work done, starting with swapping over from AWS Aurora to MongoDB.I’ve been using MongoDB now since 2014. I’m excited about all the new things that are in the pipeline for Atlas, and looking forward to using them to grow our platform!",
"username": "Rick_Craig"
},
{
"code": "",
"text": " Hi @Rick_Craig and welcome to the community!\nThis forum is a great place to get all the news about things “in the pipeline”.\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Welcome, @Rick_Craig! Glad to have you here and always pleased to learn about companies adapting to real customer feedback – especially when that customer feedback leads them here!",
"username": "Jamie"
},
{
"code": "",
"text": "Hi @Rick_CraigWelcome to the community ",
"username": "dhayalramk"
},
{
"code": "",
"text": "Greetings from Dublin @Rick_Craig. What was your motivation for moving from AWS Aurora to MongoDB?Best wishes,Michael",
"username": "Michael_Jones"
}
] | Good Morning, I am Rick from Northern Ireland | 2020-03-13T11:06:46.703Z | Good Morning, I am Rick from Northern Ireland | 2,127 |
null | [
"sharding"
] | [
{
"code": "mongos> db.my_collection.findOne()\n2020-03-31T16:48:32.280+0000 E QUERY [js] uncaught exception: Error: error: {\n\t\"ok\" : 0,\n\t\"errmsg\" : \"Failed to run query after 10 retries :: caused by :: version mismatch detected for my_db.my_collection\",\n\t\"code\" : 13388,\n\t\"codeName\" : \"StaleConfig\",\n\t\"ns\" : \"my_db.my_collection\",\n\t\"vReceived\" : Timestamp(857, 21),\n\t\"vReceivedEpoch\" : ObjectId(\"5e7e58930972ff979849eb57\"),\n\t\"vWanted\" : Timestamp(855, 33),\n\t\"vWantedEpoch\" : ObjectId(\"5e7e58930972ff979849eb57\"),\n\t\"operationTime\" : Timestamp(1585673310, 2),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1585673311, 7),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"R0DXB77kTWRd47etf7iLrlreUq0=\"),\n\t\t\t\"keyId\" : NumberLong(\"6790649384410284063\")\n\t\t}\n\t}\n} :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDBCommandCursor@src/mongo/shell/query.js:696:15\nDBQuery.prototype._exec@src/mongo/shell/query.js:111:28\nDBQuery.prototype.hasNext@src/mongo/shell/query.js:282:5\nDBCollection.prototype.findOne@src/mongo/shell/collection.js:255:10\n@(shell):1:1\n",
"text": "I was running multiple aggregate/$merge (in python script) over the same collection and suddenly I got an error OperationFailure: “version mismatch detected for my_db.my_collection”Now, in mongo cli, if a try a\ndb.my_collection.findOne()\nI getI tried to do a flushRouterConfig but it does not help.Any idea of how to solve this problem and regain access to the collection ?Context : mongodb 4.2.3, sharded replicated cluster",
"username": "RemiJ"
},
{
"code": "",
"text": "I had a look to the shards mongodb.log and in one of them (and only one), I have warnings like this one :2020-03-31T15:48:23.552+0000 W SHARDING [conn1894082] requested shard version differs from config shard version for my_db.my_collection, requested version is 857|21||5e7e58930972ff979849eb57 but found version 855|33||5e7e58930972ff979849eb57which correspond to the timestamps in the error message…",
"username": "RemiJ"
}
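A hedged idea, since the stale version is reported by a single shard: flushRouterConfig can also be run on shard members (MongoDB 4.0.6+ and 4.2), not just on mongos, and it accepts a namespace. Something like the following, run against the affected shard's primary and then each mongos, may clear the stale cached routing table:
// on the affected shard's primary: refresh the cache for just this namespace
db.adminCommand({ flushRouterConfig: "my_db.my_collection" })
// on each mongos: reset the whole cached routing table
db.adminCommand({ flushRouterConfig: 1 })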
] | Failed to run query after 10 retries :: caused by :: version mismatch detected | 2020-03-31T17:00:23.197Z | Failed to run query after 10 retries :: caused by :: version mismatch detected | 3,091 |
null | [
"production",
"cxx"
] | [
{
"code": "cxx-driver",
"text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.5.0. This release provides support for new features in MongoDB 4.2.Please note that this version of mongocxx requires the MongoDB C driver 1.15.0 or higher.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.The mongocxx 3.5.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions on the MongoDB Community forum in the Drivers, ODMs, and Connectors category tagged with cxx-driver . Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.Sincerely,The C++ Driver Team",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB C++11 Driver 3.5.0 Released | 2020-03-31T17:40:34.712Z | MongoDB C++11 Driver 3.5.0 Released | 1,742 |
[
"c-driver"
] | [
{
"code": "",
"text": "Hello everybody,\nI am new to MongoDB and I’m trying to install the mongo-c-driver and the connector.\nBut I always get the following CMake error: \nCMake_Error1169×295 23.8 KB\n\nDoes anybody have a idea how to fix it?",
"username": "Simon_Reitbauer"
},
{
"code": "..",
"text": "Hi @Simon_Reitbauer,The installation instructions have the last argument to the cmake command as the path to the directory containing CMakeLists.txt. Try adding .. to the end of your cmake command.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "Thank you, now it’s working. Totally forgot that.",
"username": "Simon_Reitbauer"
},
{
"code": "",
"text": "Well, now I am trying to install the mongocxx driver with this command:cmake -G “Visual Studio 16 2019” \\ -DBOOST_ROOT=D:\\boost_1_72_0 \\ -DCMAKE_PREFIX_PATH=C:\\mongo-c-driver \\ -DCMAKE_INSTALL_PREFIX=C:\\mongo-cxx-driver \\ …\nfound on http://mongocxx.org/mongocxx-v3/installation/, but I get the following error: \nCXX_Error1883×921 34.5 KB\nPlease excuse my questions, but I have never installed anything like this.",
"username": "Simon_Reitbauer"
},
{
"code": "",
"text": "Hi Simon,It appears you’re building the C driver with MinGW’s build tools, and the C++ driver with Visual Studio. I actually did not realize this until I tried it, but those need to be consistent. Either build both with MinGW or both with Visual Studio. In your case, since you have the Visual Studio compiler available, try building and installing the C driver with Visual Studio instead of MinGW. Then I think configuring the C++ driver should work.Best,\nKevin",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "So, I tried this version now, but it won’t work. Here is the error:\n\nimage1526×443 21.8 KB\n",
"username": "Simon_Reitbauer"
},
{
"code": "-DBUILD_VERSION=\"1.16.2\"cmake -G \"Visual Studio 16 2019\" -DCMAKE_INSTALL_PREFIX=C:\\mongo-c-driver -DCMAKE_PREFIX_PATH=C:\\mongo-c-driver -DBUILD_VERSION=\"1.16.2\" ..\n",
"text": "Apologies for all of these build frustrations. That is because the C driver is trying to determine the current version from the git history, but is unable to execute git. You can either install git, or more simply specify the version manually by adding -DBUILD_VERSION=\"1.16.2\" to the cmake arguments, i.e.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "So, now I get another error, when I try to execute “msbuild.exe /p:Configuration=RelWithDebInfo ALL_BUILD.vcxproj” or “msbuild.exe INSTALL.vcxproj”: \nimage986×292 12.3 KB\n\nI’ve checked the web but found nothing useful for that. Looks like a Windows error to me.",
"username": "Simon_Reitbauer"
},
{
"code": "cmake-build",
"text": "That may be due to artifacts of the previous build interfering. If you haven’t already, try deleting the cmake-build directory and rebuilding.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "Not, thats not working. I try to contact the Microsoft support.",
"username": "Simon_Reitbauer"
},
{
"code": "",
"text": "So, the c-driver is installed correctly now, building the cxx-driver also works fine. But when I try to install it, I get an error.Command: cmake -G “Visual Studio 16 2019” \\ -DBOOST_ROOT=D:\\boost_1_72_0 \\ -DCMAKE_PREFIX_PATH=C:\\mongo-c-driver \\ -DCMAKE_INSTALL_PREFIX=C:\\mongo-cxx-driver \\ …Error: \nimage1898×267 23.5 KB\n",
"username": "Simon_Reitbauer"
},
{
"code": "-DENABLE_EXTRA_ALIGNMENT=OFF# Building C driver\ncmake -G \"Visual Studio 16 2019\" -DCMAKE_INSTALL_PREFIX=\"C:\\mongo-c-driver-1.16.2\" -DENABLE_EXTRA_ALIGNMENT=OFF ..\ncmake --build . --target INSTALL --config Debug\n# Building C++ driver\ncmake -G \"Visual Studio 16 2019\" -DCMAKE_PREFIX_PATH=\"C:\\mongo-c-driver-1.16.2\" -DBOOST_ROOT=\"C:\\boost_1_72_0\\boost_1_72_0\" -DCMAKE_INSTALL_PREFIX=\"C:\\mongo-cxx-driver-3.4.1\" ..\ncmake --build . --target INSTALL --config Debug\n",
"text": "Hi Simon,Ah, I believe this is an error on newer Visual Studio compilers (see https://jira.mongodb.org/browse/CXX-1678, which is fixed on master but not yet released). This is related to extra alignment specifiers in the C driver, which can be disabled with -DENABLE_EXTRA_ALIGNMENT=OFF to work around this issue (it is recommended to disable the extra alignment anyway, and would be the default if it was not an ABI breaking change).I tested on a Windows machine with VS 2019 to ensure I wasn’t missing anything, and was able to build both with the following:Apologies again for all of the frustrations. The C++ driver is actively tested against Visual Studio 2017, but not yet VS 2019 compilation, but hopefully should be soon.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "Thank you, now everything works fine!",
"username": "Simon_Reitbauer"
},
{
"code": "",
"text": "One last thing: I’ve included everything needed now, but I get an linking error related to mongocxx.lib.\n\nimage1303×67 3 KB\n",
"username": "Simon_Reitbauer"
},
{
"code": "",
"text": "Problem has been solved, thank you for the good support!",
"username": "Simon_Reitbauer"
},
{
"code": "",
"text": "Fantastic, glad to hear!",
"username": "Kevin_Albertson"
}
] | CMake error installing mongo-c-driver | 2020-03-23T23:04:10.982Z | CMake error installing mongo-c-driver | 5,846 |
|
[
"configuration"
] | [
{
"code": "2020-03-31T08:53:51.977+0100 I CONTROL [main] ***** SERVER RESTARTED *****\n2020-03-31T08:53:51.982+0100 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\n2020-03-31T08:53:52.850+0100 W ASIO [main] No TransportLayer configured during NetworkInterface startup\n2020-03-31T08:53:52.851+0100 I CONTROL [main] Trying to start Windows service 'MongoDB'\n\n2020-03-31T08:54:12.058+0100 I CONTROL [main] ***** SERVER RESTARTED *****\n2020-03-31T08:54:12.889+0100 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\n2020-03-31T08:54:12.894+0100 W ASIO [main] No TransportLayer configured during NetworkInterface startup\n2020-03-31T08:54:12.895+0100 I CONTROL [initandlisten] MongoDB starting : pid=6892 port=27017 dbpath=C:\\Users\\Bob\\Sync\\Programming\\MongoDB\\data 64-bit host=Music-PC\n2020-03-31T08:54:12.895+0100 I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2\n2020-03-31T08:54:12.895+0100 I CONTROL [initandlisten] db version v4.2.5\n2020-03-31T08:54:12.895+0100 I CONTROL [initandlisten] git version: 2261279b51ea13df08ae708ff278f0679c59dc32\n2020-03-31T08:54:12.896+0100 I CONTROL [initandlisten] allocator: tcmalloc\n2020-03-31T08:54:12.896+0100 I CONTROL [initandlisten] modules: none\n2020-03-31T08:54:12.896+0100 I CONTROL [initandlisten] build environment:\n2020-03-31T08:54:12.896+0100 I CONTROL [initandlisten] distmod: 2012plus\n2020-03-31T08:54:12.896+0100 I CONTROL [initandlisten] distarch: x86_64\n2020-03-31T08:54:12.896+0100 I CONTROL [initandlisten] target_arch: x86_64\n2020-03-31T08:54:12.896+0100 I CONTROL [initandlisten] options: { config: \"C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongod.cfg\", net: { bindIp: \"127.0.0.1\", port: 27017 }, storage: { dbPath: \"C:\\Users\\Bob\\Sync\\Programming\\MongoDB\\data\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"C:\\Users\\Bob\\Sync\\Programming\\MongoDB\\log\\mongod.log\" } }\n2020-03-31T08:54:12.900+0100 I STORAGE [initandlisten] Detected data files in C:\\Users\\Bob\\Sync\\Programming\\MongoDB\\data created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.\n2020-03-31T08:54:12.901+0100 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3582M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],\n2020-03-31T08:54:13.006+0100 I STORAGE [initandlisten] WiredTiger message [1585641253:6456][6892:140724896882256], txn-recover: Recovering log 17 through 18\n2020-03-31T08:54:13.164+0100 I STORAGE [initandlisten] WiredTiger message [1585641253:164656][6892:140724896882256], txn-recover: Recovering log 18 through 18\n2020-03-31T08:54:13.362+0100 I STORAGE [initandlisten] WiredTiger message [1585641253:361919][6892:140724896882256], txn-recover: Main recovery loop: starting at 17/7808 to 18/256\n2020-03-31T08:54:13.654+0100 I STORAGE [initandlisten] WiredTiger message [1585641253:653905][6892:140724896882256], txn-recover: Recovering log 17 through 18\n2020-03-31T08:54:13.812+0100 I STORAGE [initandlisten] WiredTiger message [1585641253:812105][6892:140724896882256], txn-recover: Recovering log 18 through 
18\n2020-03-31T08:54:13.959+0100 I STORAGE [initandlisten] WiredTiger message [1585641253:958585][6892:140724896882256], txn-recover: Set global recovery timestamp: (0, 0)\n2020-03-31T08:54:13.992+0100 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)\n2020-03-31T08:54:14.001+0100 I STORAGE [initandlisten] Timestamp monitor starting\n2020-03-31T08:54:14.012+0100 I CONTROL [initandlisten] \n2020-03-31T08:54:14.013+0100 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2020-03-31T08:54:14.013+0100 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2020-03-31T08:54:14.013+0100 I CONTROL [initandlisten] \n2020-03-31T08:54:14.023+0100 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>\n2020-03-31T08:54:14.027+0100 I STORAGE [initandlisten] Flow Control is enabled on this deployment.\n2020-03-31T08:54:14.027+0100 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>\n2020-03-31T08:54:14.027+0100 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>\n2020-03-31T08:54:14.031+0100 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>\n2020-03-31T08:54:14.501+0100 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory 'C:/Users/Bob/Sync/Programming/MongoDB/data/diagnostic.data'\n2020-03-31T08:54:14.503+0100 I SHARDING [LogicalSessionCacheRefresh] Marking collection config.system.sessions as collection version: <unsharded>\n2020-03-31T08:54:14.504+0100 I NETWORK [listener] Listening on 127.0.0.1\n2020-03-31T08:54:14.504+0100 I SHARDING [LogicalSessionCacheReap] Marking collection config.transactions as collection version: <unsharded>\n2020-03-31T08:54:14.504+0100 I NETWORK [listener] waiting for connections on port 27017\n2020-03-31T08:54:15.005+0100 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>\n2020-03-31T08:55:47.470+0100 I CONTROL [thread1] Ctrl-C signal\n2020-03-31T08:55:47.470+0100 I CONTROL [consoleTerminate] got CTRL_C_EVENT, will terminate after current cmd ends\n2020-03-31T08:55:47.471+0100 I NETWORK [consoleTerminate] shutdown: going to close listening sockets...\n2020-03-31T08:55:47.472+0100 I - [consoleTerminate] Stopping further Flow Control ticket acquisitions.\n2020-03-31T08:55:47.472+0100 I CONTROL [consoleTerminate] Shutting down free monitoring\n2020-03-31T08:55:47.473+0100 I FTDC [consoleTerminate] Shutting down full-time diagnostic data capture\n2020-03-31T08:55:47.477+0100 I STORAGE [consoleTerminate] Deregistering all the collections\n2020-03-31T08:55:47.477+0100 I STORAGE [consoleTerminate] Timestamp monitor shutting down\n2020-03-31T08:55:47.477+0100 I STORAGE [consoleTerminate] WiredTigerKVEngine shutting down\n2020-03-31T08:55:47.477+0100 I STORAGE [consoleTerminate] Shutting down session sweeper thread\n2020-03-31T08:55:47.478+0100 I STORAGE [consoleTerminate] Finished shutting down session sweeper thread\n2020-03-31T08:55:47.478+0100 I STORAGE [consoleTerminate] Shutting down journal flusher thread\n2020-03-31T08:55:47.577+0100 I STORAGE [consoleTerminate] Finished shutting down journal flusher thread\n2020-03-31T08:55:47.577+0100 I STORAGE [consoleTerminate] Shutting down checkpoint thread\n2020-03-31T08:55:47.577+0100 I STORAGE [consoleTerminate] Finished shutting down checkpoint 
thread\n2020-03-31T08:55:47.698+0100 I STORAGE [consoleTerminate] shutdown: removing fs lock...\n2020-03-31T08:55:47.698+0100 I CONTROL [consoleTerminate] now exiting\n2020-03-31T08:55:47.699+0100 I CONTROL [consoleTerminate] shutting down with code:12\n",
"text": "Hi\nI have been using MongoDB Community edition (version mongodb-win32-x86_64-2012plus-4.2.5-signed) on my PC until yesterday, running Windows 10. Not sure if this is relevant, but for background I am accessing the same DB from a laptop (same versions of MongoDB and Windows10), not at the same time (!), so that I can do development work at home or when away. I have the MongoDB files set up in a folder that is mirrored across devices using sync.com (similar to Dropbox). This has worked all worked fine until yesterday.Yesterday mongoDB would not start as a service on my main PC (note it continues to work fine on my laptop - the service starts and I can access the db no problem). I have tried various options suggested on the internet including --repair, trying to give admin permissions to everything (although think this is unnecessary because it worked without special permissions before and from what I can see most recommendations about this are the unix/linux installs) and finally I have done a full uninstall and clean install. I am at a point where the Windows service will not start, nor will it work if I run it from the command line… unless I remove the --server option and then it works fine.So, if I run the following at the command line the service doesn’t start (this is copied and pasted from the Windows services “path to executable”):\n“C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongod.exe” --config “C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongod.cfg” --serviceHowever, if I run the following at the command line then it works fine and I am able to connect using Compass/run my app:\n“C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongod.exe” --config “C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongod.cfg”Note this is running in the command prompt without admin rights in both cases, which I think suggests this isn’t a rights issue(?)The log file for the failed start isn’t very informative (to me at least, but then I am completely new to this!) I have included the log below that includes both the failed start using the --server option and then a successful run with the --server option removed. Note I have inserted a blank line between the two runs just for clarity.And finally, if I start the service via the Windows Services app I get error 1053 as per the screenshot below. If I look in the Windows Event Viewer (Windows Logs\\System) there are two entries for this. One with the message below and the second says, “A timeout was reached (30000 milliseconds) while waiting for the MongoDB Server service to connect.” (although it takes less than a couple of seconds to fail, not the 30 seconds this suggests).Does anyone have any suggestions as to why this is happening and how I can fix it?",
"username": "Twelve1110"
},
{
"code": "NetworkServiceLocal System accountNetworkServiceCmd+RservicesMongoDB ServerPropertiesLogOnLocal System accountOK",
"text": "I don’t know what the root cause is, but I appear to have fixed the problem.I had set the database service to startup using the NetworkService account in Windows, which I think was the default as part of the installation process (I wouldn’t swear to it, but I don’t know enough about it to change the option). I have now changed this to use the Local System account option and the service now starts and works without any problems. I have other services running as the NetworkService user to not sure why this is an issue.Path to change things:\nPress Cmd+R keys to bring up start menu\nType services and run the app\nLocate MongoDB Server, right-click and select Properties\nAccess LogOn tab\nSelect Local System account radio button\nSelect OKJob done. I don’t know what problems this might cause me later, but I’ll deal with that when I get to it. ",
"username": "Twelve1110"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB won't start as a windows service, but will start at the command line | 2020-03-31T09:12:17.959Z | MongoDB won’t start as a windows service, but will start at the command line | 38,109 |
|
null | [
"database-tools",
"backup"
] | [
{
"code": "{\n \"ts\":{\n \"$timestamp\":{\n \"t\":1585005237,\n \"i\":1\n }\n },\n \"t\":{\n \"$numberLong\":\"1\"\n },\n \"h\":{\n \"$numberLong\":\"0\"\n },\n \"v\":{\n \"$numberInt\":\"2\"\n },\n \"op\":\"i\",\n \"ns\":\"hello.items\",\n \"ui\":{\n \"$binary\":{\n \"base64\":\"q6+dIqy6TJihZj02fwNm7w==\",\n \"subType\":\"04\"\n }\n },\n \"wall\":{\n \"$date\":{\n \"$numberLong\":\"1585005237006\"\n }\n },\n \"o\":{\n \"_id\":{\n \"$oid\":\"5e7942b415f1175f7de3440c\"\n },\n \"name\":\"Raushan\",\n \"branch\":\"cse\"\n }\n }\n",
"text": "mongodump --host=“rs0/localhost:27017,localhost:27018,localhost:27019” --readPreference=secondary -d local -c oplog.rs --query “{”$and\":[{“o.msg”:{\"$ne\":“periodic noop”}},{“ns”:“hello.items”}]}\"-vvv -o /home/anupama/backupec2/full_backI used it to dump hello database from replicaset using oplog dump but I want to know how can i restore the hello database in another new instance from dump oplog file using mongorestore ismongorestore ./backup/inc_back/local/oplog.bsonmy bson dump output",
"username": "raushan_sharma"
},
{
"code": "",
"text": "Is this restore of a DB or a collection?You can use --nsInclude with --db optionsThe --db option for mongodump specifies the source database to dump.\nThe --db option for mongorestore specifies the target database to restore into.",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I tried but not working even oplog file is not getting changed in other instance.",
"username": "raushan_sharma"
},
{
"code": "",
"text": "I am collecting data from oplog.rs. and looking to restore my db hello and collection item from the oplog.",
"username": "raushan_sharma"
},
{
"code": "--drop--drop",
"text": "So restore succeeds but you don’t see changes?\nDid you try --drop option--drop `` ¶Before restoring the collections from the dumped backup, drops the collections from the target database. --drop does not drop collections that are not in the backup.",
"username": "Ramachandra_Tummala"
}
] | Restore oplog Dump into another instance | 2020-03-31T07:02:02.221Z | Restore oplog Dump into another instance | 3,828 |
null | [
"aggregation"
] | [
{
"code": "db.getCollection('reg').aggregate([{\n '$match': {\n '$and': [\n {\n 'companyID': 11\n },\n {\n 'created': {\n '$gte': 1556726597\n }\n },\n {\n 'created': {\n $lt: 1580572997\n }\n }\n ]\n }\n},\n{\n '$project': {\n\n 'testID': 1,\n\n }\n},\n{\n '$group': {\n '_id': '$testID',\n 'registrationsCount': { '$sum': 1 },\n },\n},\n{\n $group: {\n _id: null,\n count: { $sum: 1 }\n }\n}\n]) \n{\n \"_id\" : null,\n \"count\" : 10.0\n} \n{\n \"_id\" : NumberLong(1),\n \"appUserID\" : NumberLong(4294967295),\n \"companyID\" : NumberLong(5),\n \"created\" : NumberLong(1372625588),\n \"testID\" : NumberLong(11),\n \"isCheckIn\" : true,\n \"lastModified\" : NumberLong(1372625588),\n \"source\" : \"upload\",\n \"timeArrived\" : NumberLong(1343062512),\n}\n",
"text": "I need a query that takes multiple ‘companyID’s’ and return the count for each company.Currently this query only does this for one companyID and it does not return the id but just ‘null’ like show below.I understand that I can use the ‘in’ operator for multiple companyID’s but not sure how I would go about having the query return multiple objects of counts for each companyID.The result belowSchema below",
"username": "Jamal_Westfield"
},
{
"code": "{ _id: 1, companyID: 12, testID: 411, created: 1556726597 },\n{ _id: 2, companyID: 12, testID: 612, created: 1556726598 },\n{ _id: 3, companyID: 15, testID: 913, created: 1556726599 }, // created is out of range\n{ _id: 4, companyID: 19, testID: 814, created: 1556726586 }, // companyID doesn't match\n{ _id: 5, companyID: 12, testID: 215, created: 1556726588 },\n{ _id: 6, companyID: 15, testID: 719, created: 1556726591 }\ndb.test.aggregate( [\n { \n $match: { \n companyID: { $in: [ 12, 15 ] }, \n created: { $gt: 1556726585, $lt: 1556726599 }\n } \n },\n { \n $group: { \n _id: \"$companyID\", \n registrationsCount: { $sum: 1 } \n } \n },\n { \n $project: { \n companyID: \"$_id\", \n registrationsCount: 1, \n _id: 0 \n } \n }\n] )\n{ \"registrationsCount\" : 1, \"companyID\" : 15 }\n{ \"registrationsCount\" : 3, \"companyID\" : 12 }\n$match",
"text": "I need a query that takes multiple ‘companyID’s’ and return the count for each company.\n…\nI understand that I can use the ‘in’ operator for multiple companyID’s but not sure how I would go about having the query return multiple objects of counts for each companyID.Consider the following documents in a test collection (similar to that of yours):The following aggregationreturns:Reference: $in query operator used in the above aggregation’s $match stage.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "stID: 411, created: 1556726597 }, { _id: 2, companyID: 12, testID: 612, created: 1556726598 }, { _id: 3, companyID: 15, testID: 913, created: 1556726599 }, // created is out of range { _id: 4, companyID: 19, testID: 814, created: 1556726586 }, // companyID doesn’t match { _id: 5, companyID: 12, testID: 215, created: 1556726588 }, { _id: 6, companyID: 15, testID:@Prasad_Saya Thanks for this but I not exactly what I was after. companyID can have multiple testID;s. So, I was counting all test ids that had a regcount over 1. Only issue was that it returns id as null.I need a way to include multiple company ids that returns the test counts for each of the companyIDs entered.",
"username": "Jamal_Westfield"
},
{
"code": "",
"text": "I need a way to include multiple company ids that returns the test counts for each of the companyIDs entered.Please post what a sample output document looks like.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "{ “_id” : null, “count” : 10.0 }@Prasad_Saya below is the response, this means that company had 10 testID’s with a regcount of 1 or more. The issue is that the second $group makes the id null, which i want as the companyid. The next issue is I want to make the query return a object like below for each companyID I put in the query{\n“_id” : null,\n“count” : 10.0\n}",
"username": "Jamal_Westfield"
},
{
"code": "companyID testID count\n11 112 2\n11 119 1\n11 120 5\n12 145 3\n12 290 2\n\n{ \"companyID \" : 11, \"count\" : 8 }\n{ \"companyID \" : 12, “count\" : 5 }",
"text": "It is not quite clear to me, I am afraid. But, I think the representation of data (after counting the testID’s for the companies) and the output I am thinking is like this:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "@Prasad_Saya That output is correct, Just for clarity - the query you proposed does not output this correct? because the count would need derive from testID",
"username": "Jamal_Westfield"
},
{
"code": "",
"text": "Just for clarity - the query you proposed does not output this correct? because the count would need derive from testIDActually, the count from my previous query returns the same output. Is there separate count field for each testID in the input schema (testID is an identifier, not a counter, I think)?",
"username": "Prasad_Saya"
},
{
"code": "db.getCollection('registrations').aggregate([ {\n '$match' : {\n '$and' : [\n {\n 'companyID' : 7837\n },\n {\n 'created' : {\n '$gte' : 1532131200\n }\n },\n {\n 'created' : {\n $lt : 1556560058\n }\n }\n ]\n }\n },\n {\n '$project' : {\n 'eventID' : 1\n }\n },\n {\n '$group' : {\n '_id' : '$eventID',\n 'registrationsCount' : {'$sum' : 1},\n },\n }\n// { \n// $group: { \n// _id: null, \n// count: { $sum: 1 } \n// } \n// }\n])\n{\n \"_id\" : 240399,\n \"registrationsCount\" : 23.0\n}\n\n/* 2 */\n{\n \"_id\" : 238853,\n \"registrationsCount\" : 23.0\n}\n\n/* 3 */\n{\n \"_id\" : 238104,\n \"registrationsCount\" : 2.0\n}\n\n/* 4 */\n{\n \"_id\" : 237096,\n \"registrationsCount\" : 49.0\n}\n{\n \"_id\" : null,\n \"count\" : 4.0\n}\n",
"text": "db.test.aggregate( [ { $match: { companyID: { $in: [ 12, 15 ] }, created: { $gt: 1556726585, $lt: 1556726599 } } }, { $group: { _id: “$companyID”, registrationsCount: { $sum: 1 } } }, { project: { companyID: \"_id\", registrationsCount: 1, _id: 0 } } ] )@Prasad_Saya sorry Im still beginner level - the below may help clear up;This is the same query as posted by the second group commented outThis is the output - as you can each “_id” is now the testID and each has their own registrationsCountThe output from original query would be the below;",
"username": "Jamal_Westfield"
},
{
"code": "",
"text": "can each “_id” is now the testID and each has their own registrationsCount\n{\n“_id” : 240399,@Prasad_Saya $eventID is the same as when I mention $testID - I just changed for this post",
"username": "Jamal_Westfield"
}
] | Edit query to output multiple IDs | 2020-03-30T21:01:25.675Z | Edit query to output multiple IDs | 7,559 |
null | [
"containers",
"installation"
] | [
{
"code": "",
"text": "I want to setup a raspberry pi 4 as a MongoDB server.\nThe latest official mongoDB docker image does not support the raspberry pi 4 arhitecture.\nIs there any docker image you would recommend I should use?\n(The one I found available and working was 2.4.0 version, I would prefer to work with a more newer version, mostly because I’m just starting to work with this.)\n**Note that I’m a starter in docker, mongodb, raspberry.",
"username": "Emil_Marian_Pasca"
},
{
"code": "",
"text": "Welcome to the community @Emil_Marian_Pasca!What O/S version are you running on your Pi 4? WiredTiger, the default storage engine since MongoDB 3.2, requires a 64-bit O/S.Given the Pi has limited resources I would be inclined to install packages directly rather than using Docker.As an alternative to installing a MongoDB server on your Pi, you could also consider using the MongoDB Atlas free tier to get started quickly.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks for your suggestions, my specs are:\nLinux piOne 4.19.75-v7l+ BST 2019 armv7l GNU/Linux\nDistributor ID: Raspbian\nDescription: Raspbian GNU/Linux 10 (buster)\nRelease: 10\nCodename: busterI’ve switch to the MongoDB Atlass free tier, which suits my current needs, as I’m just following the MongoDB university lessons.\nHowever, I like to have a local sandbox for testing purposes on my raspberry pi, docker helps me keep the OS clean and easy to manage, I would probably consider installing it directly if my project requires this.I have just figured out that I am running a 32 bit O/S. So this is probably the reason I could not install a later version of MongoDB.",
"username": "Emil_Marian_Pasca"
},
{
"code": "",
"text": "I have just figured out that I am running a 32 bit O/S. So this is probably the reason I could not install a later version of MongoDB.Hi Emil,A 64-bit distro is definitely required. I believe Raspian is remaining 32-bit for the foreseeable future, but you might want to look into Ubuntu for Pi.I don’t have a Pi 4 to confirm the installation, but Ubuntu 18.04 LTS on ARM64 is a supported platform for MongoDB 4.2.Regards,\nStennie",
"username": "Stennie_X"
}
] | MongoDB raspberry pi 4 docker image? | 2020-03-30T20:10:01.678Z | MongoDB raspberry pi 4 docker image? | 21,403 |
null | [
"node-js",
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Hello,does Realm Js 5.x.x work with QBS? Is everything available (Permissions, etc)?Thank you.",
"username": "Aurelio_Petrone"
},
{
"code": "",
"text": "Yes. The release note will always list breaking changes: Releases · realm/realm-js · GitHub.",
"username": "Brian_Munkholm"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Does Realm Js 5.x.x work with QBS? | 2020-03-31T08:12:07.420Z | Does Realm Js 5.x.x work with QBS? | 2,069 |
null | [
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Hi there,We use currently use realm studio to access the instance logs but that only shows the logs for the last few hours. How can we access the full log for the entire day/week for example?Cheers.",
"username": "Mo_Basm"
},
{
"code": "",
"text": "Hi Mo,For additional logs or help investigating a Realm Cloud operational issue, please open a new case on the Support Portal.Regards,\nStennie",
"username": "Stennie_X"
}
] | How to access the full realm cloud instance logs? | 2020-03-27T22:27:54.278Z | How to access the full realm cloud instance logs? | 2,388 |