image_url (stringlengths 113-131, ⌀) | tags (sequence) | discussion (list) | title (stringlengths 8-254) | created_at (stringlengths 24) | fancy_title (stringlengths 8-396) | views (int64 73-422k)
---|---|---|---|---|---|---|
null | [
"dot-net",
"atlas-device-sync"
] | [
{
"code": "",
"text": "I had a (working) single-realm realm-app, which has now been partitioned into 3 realms.\nMomentarily worked fine.\nA smaller Android version which just needed to access one of the realms tried to link but failed.\nSorted out the problem (field nullable on one side not the other). This now connects and syncs.\nSince then, the Mother app crashes with the dreaded 0xc0000409 error and tells Microsoft about the problem! (I presume the problem here is the lack of an intermediate JNI-like layer in Dot-net).\nThe Logs show everything is fine.\nThe crash happens at Realms.Realm.GetInstanceAsync(config) call.\nInterestingly, it occurs on all three of the realms individually.\nIs there any way at all of finding out what the issue is?Maybe it’s Embedded objects. A couple of questions:Thanks a lot",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "Ugh! That crash is concerning. Does this reproduce consistently and do you have anything meaningful in the client logs before the crash?To your questions - no, embedded objects don’t need _id and partition field and no, they can’t have their own collections.On a side note, for bug/crash reports, you’ll get faster response times if you file a GitHub issue directly as engineers will prioritize those over community forums (which is why it took so long to get a response here - sorry about that ).",
"username": "nirinchev"
},
{
"code": "",
"text": "Thanks for your reply. It wasn’t so urgent because we have another version without the embedded objects.\nHowever, I do prefer the embedded system.I seem to have found the problem:\nThe Android embedded objects have to have optional bools. These were changed to optional in the Dot-net version and the app no longer crashes. I terminated sync, restarted and the Dot-net mother app has finally synced.\nWorth mentioning in the documentation, although I don’t think there are too many users syncing Android and Dot-net apps yet.Shame you can’t find a way of tunnelling an error or exception through to the user on dot-net.\nAndroid is more informative.The documentation for Embedded objects is really confusing. The Dot-net example shows an Object Id and a partition field, whereas the IOS and Android version don’t.\nOddly, the schema examples show declarations of the string fields like:\n“properties”: {\n“street”: “string”,\n“city”: “string”,\n“country”: “string”,\n“postalCode”: “string”\n}\nHowever I get “invalid JSON schema” when saving these.\nI have to do the full monty\n“properties”: {\n“street”: {bysonType: “string”},\netc\n}Thanks again, I will check out the Github issues.",
"username": "Richard_Fairall"
},
{
"code": "_id_partitionBooleanbooleanbool?bool",
"text": "Hey, thanks for the feedback and glad to hear things are working now! I filed docs tickets to remove the _id and _partition properties from the .NET example - they shouldn’t have been there. I also filed a ticket to fix the JSON Schema declaration.Regarding the nullability of properties - in Java, if you use Boolean, the field will be optional, but boolean will be required. Those are equivalent to bool? and bool in C#.Finally, in the Cloud portal, we have a page that shows the data models for the various languages so you don’t have to manually type it out. So if you start with Android, turn on dev mode, sync your schema, then turn off dev mode, you’ll be able to just copy-paste the C# models, and since those are generated from the JSON Schema, they’ll match exactly the Android types. Here’s an example:Screen Shot 2021-04-23 at 09.58.071644×1504 286 KBObviously, since those are generated, they may not match precisely your coding style preferences, so you may need to adjust them, but should eliminate those pesky hard to track inconsistencies where a type is nullable in one language but required in another.",
"username": "nirinchev"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm.GetInstanceAsync(config) call fails - Could it be embedded objects? | 2021-04-16T15:48:01.478Z | Realm.GetInstanceAsync(config) call fails - Could it be embedded objects? | 2,666 |
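The fix described in this thread (making the booleans optional on the .NET side so they match the Android schema) comes down to how each SDK spells optionality. Below is a minimal C# sketch, assuming Realm .NET 10.x; the class and property names are invented for illustration and are not taken from the thread.

```csharp
using Realms;

// Hypothetical embedded object, used only to show optional vs. required booleans.
public class DeviceSettings : EmbeddedObject
{
    // bool? becomes an optional property, equivalent to a Java `Boolean` field.
    public bool? NotificationsEnabled { get; set; }

    // bool is required, equivalent to a Java primitive `boolean` field.
    public bool IsActive { get; set; }
}
```

If the Android model declares `Boolean` (nullable) while the .NET model declares plain `bool`, the two schemas disagree on whether the field is optional, which is the kind of mismatch that surfaces as a crash or sync error rather than a clear exception.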
null | [] | [
{
"code": "if registry == nil {\n\n registry = bson.DefaultRegistry\n\n}\n\nif val == nil {\n\n return nil, ErrNilDocument\n\n}\n\nif bs, ok := val.([]byte); ok {\n\n // Slight optimization so we'll just use MarshalBSON and not go through the codec machinery.\n\n val = bson.Raw(bs)\n\n}\n\nif !mapAllowed {\n\n refValue := reflect.ValueOf(val)\n\n if refValue.Kind() == reflect.Map && refValue.Len() > 1 {\n\n return nil, ErrMapForOrderedArgument{paramName}\n\n }\n\n}\n\n// TODO(skriptble): Use a pool of these instead.\n\nbuf := make([]byte, 0, 256)\n\nb, err := bson.MarshalAppendWithRegistry(registry, buf[:0], val)\n\nif err != nil {\n\n return nil, MarshalError{Value: val, Err: err}\n\n}\n\nreturn b, nil\n",
"text": "func transformBsoncoreDocument(registry *bsoncodec.Registry, val interface{}, mapAllowed bool, paramName string) (bsoncore.Document, error) {}Multiple conditional sort is not supported from the source code?why?",
"username": "ezreal_pan"
},
{
"code": "",
"text": "I know why but how to solve this problem?",
"username": "ezreal_pan"
}
] | No support for multiple conditional sorts? | 2021-04-23T05:40:19.348Z | No support for multiple conditional sorts? | 1,739 |
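The guard quoted above (refValue.Kind() == reflect.Map && refValue.Len() > 1) rejects multi-key maps for ordered arguments because Go maps do not preserve key order, so the driver cannot tell which sort key should apply first. The usual way around it, and a likely answer to the follow-up question, is to pass the ordered bson.D type instead of bson.M. A sketch against the official Go driver follows; the collection and field names are made up.

```go
package example

import (
	"context"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// findSorted returns documents ordered by score (descending) and then createdAt (ascending).
func findSorted(ctx context.Context, coll *mongo.Collection) (*mongo.Cursor, error) {
	// bson.D preserves element order, so several sort keys are fine here;
	// a bson.M with more than one key would hit the ErrMapForOrderedArgument check above.
	sort := bson.D{
		{Key: "score", Value: -1},
		{Key: "createdAt", Value: 1},
	}
	return coll.Find(ctx, bson.D{}, options.Find().SetSort(sort))
}
```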
null | [] | [
{
"code": "2021-04-06T13:21:09.988+0000 I NETWORK [listener] connection accepted from 10.45.40.185:32792 #2579476 (580 connections now open)\n2021-04-06T13:21:09.989+0000 I NETWORK [conn2579476] received client metadata from 10.45.40.185:32792 conn2579476: { driver: { name: \"PyMongo\", version: \"3.7.1\" }, os: { type: \"Linux\", name: \"Linux\", architecture: \"x86_64\", version: \"4.14.154-99.181.amzn1.x86_64\" }, platform: \"CPython 3.6.9.final.0\" }\n2021-04-06T13:21:09.990+0000 I NETWORK [listener] connection accepted from 10.45.40.185:32794 #2579477 (581 connections now open)\n2021-04-06T13:21:09.990+0000 I NETWORK [conn2579477] received client metadata from 10.45.40.185:32794 conn2579477: { driver: { name: \"PyMongo\", version: \"3.7.1\" }, os: { type: \"Linux\", name: \"Linux\", architecture: \"x86_64\", version: \"4.14.154-99.181.amzn1.x86_64\" }, platform: \"CPython 3.6.9.final.0\" }\n2021-04-06T13:21:15.832+0000 F - [conn2575456] Got signal: 3 (Quit).\n0x562ab49ddda1 0x562ab49dcfb9 0x562ab49dd49d 0x7f155a62a600 0x7f155a629b7d 0x562ab4417e5a 0x562ab4417f18 0x562ab424c11e 0x562ab4255b49 0x562ab4262a5f 0x562ab426635c 0x562ab42668e8 0x562ab307d223 0x562ab307da2d 0x562ab3081101 0x562ab4230df5 0x562ab4934e24 0x7f155a622e75 0x7f155a34b8fd\n----- BEGIN BACKTRACE -----\n{\"backtrace\":[{\"b\":\"562AB25B7000\",\"o\":\"2426DA1\",\"s\":\"_ZN5mongo15printStackTraceERSo\"},{\"b\":\"562AB25B7000\",\"o\":\"2425FB9\"},{\"b\":\"562AB25B7000\",\"o\":\"242649D\"},{\"b\":\"7F155A61B000\",\"o\":\"F600\"},{\"b\":\"7F155A61B000\",\"o\":\"EB7D\",\"s\":\"recvmsg\"},{\"b\":\"562AB25B7000\",\"o\":\"1E60E5A\",\"s\":\"_ZN4asio6detail10socket_ops4recvEiP5iovecmiRSt10error_code\"},{\"b\":\"562AB25B7000\",\"o\":\"1E60F18\",\"s\":\"_ZN4asio6detail10socket_ops9sync_recvEihP5iovecmibRSt10error_code\"},{\"b\":\"562AB25B7000\",\"o\":\"1C9511E\",\"s\":\"_ZN4asio6detail20read_buffer_sequenceINS_19basic_stream_socketINS_7generic15stream_protocolEEENS_17mutable_buffers_1EPKNS_14mutable_bufferENS0_14transfer_all_tEEEmRT_RKT0_RKT1_T2_RSt10error_code\"},{\"b\":\"562AB25B7000\",\"o\":\"1C9EB49\",\"s\":\"_ZN5mongo9transport18TransportLayerASIO11ASIOSession17opportunisticReadIN4asio19basic_stream_socketINS4_7generic15stream_protocolEEENS4_17mutable_buffers_1EEENS_14future_details6FutureIvEERT_RKT0_RKSt10shared_ptrINS0_5BatonEE\"},{\"b\":\"562AB25B7000\",\"o\":\"1CABA5F\",\"s\":\"_ZN5mongo9transport18TransportLayerASIO11ASIOSession4readIN4asio17mutable_buffers_1EEENS_14future_details6FutureIvEERKT_RKSt10shared_ptrINS0_5BatonEE\"},{\"b\":\"562AB25B7000\",\"o\":\"1CAF35C\",\"s\":\"_ZN5mongo9transport18TransportLayerASIO11ASIOSession17sourceMessageImplERKSt10shared_ptrINS0_5BatonEE\"},{\"b\":\"562AB25B7000\",\"o\":\"1CAF8E8\",\"s\":\"_ZN5mongo9transport18TransportLayerASIO11ASIOSession13sourceMessageEv\"},{\"b\":\"562AB25B7000\",\"o\":\"AC6223\",\"s\":\"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE\"},{\"b\":\"562AB25B7000\",\"o\":\"AC6A2D\",\"s\":\"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE\"},{\"b\":\"562AB25B7000\",\"o\":\"ACA101\"},{\"b\":\"562AB25B7000\",\"o\":\"1C79DF5\"},{\"b\":\"562AB25B7000\",\"o\":\"237DE24\"},{\"b\":\"7F155A61B000\",\"o\":\"7E75\"},{\"b\":\"7F155A24D000\",\"o\":\"FE8FD\",\"s\":\"clone\"}],\"processInfo\":{ \"mongodbVersion\" : \"4.0.9\", \"gitVersion\" : \"fc525e2d9b0e4bceff5c2201457e564362909765\", \"compiledModules\" : [], \"uname\" : { \"sysname\" : \"Linux\", \"release\" : \"4.14.154-99.181.amzn1.x86_64\", \"version\" : \"#1 SMP Sat Nov 16 01:38:34 UTC 
2019\", \"machine\" : \"x86_64\" }, \"somap\" : [ { \"b\" : \"562AB25B7000\", \"elfType\" : 3, \"buildId\" : \"1608990BA9F24FFB0C9133E50C74957A69393AE7\" }, { \"b\" : \"7FFECAFE8000\", \"elfType\" : 3, \"buildId\" : \"644D60907530E0AC3CB1910CD1CECD19BFB27BBD\" }, { \"b\" : \"7F155BA44000\", \"path\" : \"/usr/lib64/libcurl.so.4\", \"elfType\" : 3, \"buildId\" : \"CC3772AD47FA099DFDA2B50861CCD92FA719D101\" }, { \"b\" : \"7F155B82B000\", \"path\" : \"/lib64/libresolv.so.2\", \"elfType\" : 3, \"buildId\" : \"9CBEE9AA7ED85AD5BE053B483993D677420A765E\" }, { \"b\" : \"7F155B3CC000\", \"path\" : \"/lib64/libcrypto.so.10\", \"elfType\" : 3, \"buildId\" : \"3270D2720328EEC2846C4B0D993582A0F657F54B\" }, { \"b\" : \"7F155B15B000\", \"path\" : \"/lib64/libssl.so.10\", \"elfType\" : 3, \"buildId\" : \"183215EA0DA6EE9C80A1E3A3319EC2905D1BF6E0\" }, { \"b\" : \"7F155AF57000\", \"path\" : \"/lib64/libdl.so.2\", \"elfType\" : 3, \"buildId\" : \"4663D1734EAE35F43F257D29615C1AFF5E060AE0\" }, { \"b\" : \"7F155AD4F000\", \"path\" : \"/lib64/librt.so.1\", \"elfType\" : 3, \"buildId\" : \"C07056C6DA664000A4DAAF8960AB182A8602E910\" }, { \"b\" : \"7F155AA4D000\", \"path\" : \"/lib64/libm.so.6\", \"elfType\" : 3, \"buildId\" : \"08C69C7E15BA7B4E199D2FDC1DC29B1CC1996BC1\" }, { \"b\" : \"7F155A837000\", \"path\" : \"/lib64/libgcc_s.so.1\", \"elfType\" : 3, \"buildId\" : \"A03C9A80E995ED5F43077AB754A258FA0E34C3CD\" }, { \"b\" : \"7F155A61B000\", \"path\" : \"/lib64/libpthread.so.0\", \"elfType\" : 3, \"buildId\" : \"383B229C0E6E99B4E3BA6FC8B8C096C103226984\" }, { \"b\" : \"7F155A24D000\", \"path\" : \"/lib64/libc.so.6\", \"elfType\" : 3, \"buildId\" : \"8BDBE5043577FC2EA218FAFD7EDF175D219698FB\" }, { \"b\" : \"7F155BCCB000\", \"path\" : \"/lib64/ld-linux-x86-64.so.2\", \"elfType\" : 3, \"buildId\" : \"405C4E6374AAAB00F3A7F7986679078870DC2460\" }, { \"b\" : \"7F155A027000\", \"path\" : \"/usr/lib64/libnghttp2.so.14\", \"elfType\" : 3, \"buildId\" : \"903C20D899C962C2E93B006E3BB7172C83D8ACF4\" }, { \"b\" : \"7F1559DD9000\", \"path\" : \"/usr/lib64/libidn2.so.0\", \"elfType\" : 3, \"buildId\" : \"8B0B0729CFCBDFC58A731E716A5CFE88EFFD45A2\" }, { \"b\" : \"7F1559BB1000\", \"path\" : \"/usr/lib64/libssh2.so.1\", \"elfType\" : 3, \"buildId\" : \"E03CF776B39054AC3B2EA2AB15B161A858B5732C\" }, { \"b\" : \"7F155993C000\", \"path\" : \"/usr/lib64/libpsl.so.0\", \"elfType\" : 3, \"buildId\" : \"09BFE69665CFEEC18F81D8C4A971DCA29310186C\" }, { \"b\" : \"7F15596EF000\", \"path\" : \"/usr/lib64/libgssapi_krb5.so.2\", \"elfType\" : 3, \"buildId\" : \"FE25985243C2977094769887043CD7CE965DEDAD\" }, { \"b\" : \"7F1559406000\", \"path\" : \"/usr/lib64/libkrb5.so.3\", \"elfType\" : 3, \"buildId\" : \"CB869BC8EA16FDF97808C539A9C213E2F4ED73CE\" }, { \"b\" : \"7F15591D3000\", \"path\" : \"/usr/lib64/libk5crypto.so.3\", \"elfType\" : 3, \"buildId\" : \"BCC1AEAE6B693FAB99579E8D18B116AC9555D17F\" }, { \"b\" : \"7F1558FD0000\", \"path\" : \"/usr/lib64/libcom_err.so.2\", \"elfType\" : 3, \"buildId\" : \"AB007F5DF96C66E515542598F5BE1429ED63D86F\" }, { \"b\" : \"7F1558D7D000\", \"path\" : \"/lib64/libldap-2.4.so.2\", \"elfType\" : 3, \"buildId\" : \"76EEFC9EBC6A58F6C21768893861BF4EFBA28B82\" }, { \"b\" : \"7F1558B6E000\", \"path\" : \"/lib64/liblber-2.4.so.2\", \"elfType\" : 3, \"buildId\" : \"79DD9D561E8287839B88B031A4171D4BAE2D2576\" }, { \"b\" : \"7F1558958000\", \"path\" : \"/lib64/libz.so.1\", \"elfType\" : 3, \"buildId\" : \"89C6AF118B6B4FB6A73AE1813E2C8BDD722956D1\" }, { \"b\" : \"7F1558642000\", \"path\" : \"/usr/lib64/libunistring.so.0\", 
\"elfType\" : 3, \"buildId\" : \"2B090A6860553944846E3C227B6AD12F279B304F\" }, { \"b\" : \"7F15582CC000\", \"path\" : \"/usr/lib64/libicuuc.so.50\", \"elfType\" : 3, \"buildId\" : \"3207ED4AD484C205F537B6B9C52665390816FE2B\" }, { \"b\" : \"7F15580BC000\", \"path\" : \"/usr/lib64/libkrb5support.so.0\", \"elfType\" : 3, \"buildId\" : \"1447F994433DA2A94377D03DA49A5E78BEA2AD65\" }, { \"b\" : \"7F1557EB9000\", \"path\" : \"/lib64/libkeyutils.so.1\", \"elfType\" : 3, \"buildId\" : \"37A58210FA50C91E09387765408A92909468D25B\" }, { \"b\" : \"7F1557C9E000\", \"path\" : \"/usr/lib64/libsasl2.so.2\", \"elfType\" : 3, \"buildId\" : \"354560FFC93703E5A80EEC8C66DF9E59DA335001\" }, { \"b\" : \"7F1557A46000\", \"path\" : \"/usr/lib64/libssl3.so\", \"elfType\" : 3, \"buildId\" : \"7693FEC8196F8ADB894C80EDF5AC0822128FC7BF\" }, { \"b\" : \"7F155781F000\", \"path\" : \"/usr/lib64/libsmime3.so\", \"elfType\" : 3, \"buildId\" : \"C779ABB5959D9C27C37DCF8E61A057D104B6F671\" }, { \"b\" : \"7F15574F8000\", \"path\" : \"/usr/lib64/libnss3.so\", \"elfType\" : 3, \"buildId\" : \"E43EC69F6E0BE4B9CF0678F021E519FEEB92A369\" }, { \"b\" : \"7F15572C8000\", \"path\" : \"/usr/lib64/libnssutil3.so\", \"elfType\" : 3, \"buildId\" : \"CDB980E3F163A54FC153EC747FBDA659222AD61B\" }, { \"b\" : \"7F15570C4000\", \"path\" : \"/lib64/libplds4.so\", \"elfType\" : 3, \"buildId\" : \"57C3901BDBF9C1F6150DCE3A269EBC701CF4A948\" }, { \"b\" : \"7F1556EBF000\", \"path\" : \"/lib64/libplc4.so\", \"elfType\" : 3, \"buildId\" : \"E92FA782A5BB19F0AFB6C83D35F176233AEBA151\" }, { \"b\" : \"7F1556C81000\", \"path\" : \"/lib64/libnspr4.so\", \"elfType\" : 3, \"buildId\" : \"BA485B89AE011611C28A3F96AFEE5FC6B9F15B7C\" }, { \"b\" : \"7F15556AF000\", \"path\" : \"/usr/lib64/libicudata.so.50\", \"elfType\" : 3, \"buildId\" : \"D42D574AC100115C507E48AFC346DCD5546B825A\" }, { \"b\" : \"7F155532A000\", \"path\" : \"/usr/lib64/libstdm.so.6\", \"elfType\" : 3, \"buildId\" : \"8791DDD49348603CD50B74652C5B25354D8FD06E\" }, { \"b\" : \"7F1555109000\", \"path\" : \"/usr/lib64/libselinux.so.1\", \"elfType\" : 3, \"buildId\" : \"F5054DC94443326819FBF3065CFDF5E4726F57EE\" }, { \"b\" : \"7F1554ED2000\", \"path\" : \"/lib64/libcrypt.so.1\", \"elfType\" : 3, \"buildId\" : \"8DEE27472DF04C068D3FB7D5EBD80B5829B92EC3\" }, { \"b\" : \"7F1554CD0000\", \"path\" : \"/lib64/libfreebl3.so\", \"elfType\" : 3, \"buildId\" : \"C93088FEDB7ADACD950BDBE9786D807AB9B949B2\" } ] }}\nmongod(_ZN5mongo15printStackTraceERSo+0x41) [0x562ab49ddda1]\nmongod(+0x2425FB9) [0x562ab49dcfb9]\nmongod(+0x242649D) [0x562ab49dd49d]\nlibpthread.so.0(+0xF600) [0x7f155a62a600]\nlibpthread.so.0(recvmsg+0x2D) [0x7f155a629b7d]\nmongod(_ZN4asio6detail10socket_ops4recvEiP5iovecmiRSt10error_code+0x6A) [0x562ab4417e5a]\nmongod(_ZN4asio6detail10socket_ops9sync_recvEihP5iovecmibRSt10error_code+0x68) [0x562ab4417f18]\nmongod(_ZN4asio6detail20read_buffer_sequenceINS_19basic_stream_socketINS_7generic15stream_protocolEEENS_17mutable_buffers_1EPKNS_14mutable_bufferENS0_14transfer_all_tEEEmRT_RKT0_RKT1_T2_RSt10error_code+0x8E) [0x562ab424c11e]\nmongod(_ZN5mongo9transport18TransportLayerASIO11ASIOSession17opportunisticReadIN4asio19basic_stream_socketINS4_7generic15stream_protocolEEENS4_17mutable_buffers_1EEENS_14future_details6FutureIvEERT_RKT0_RKSt10shared_ptrINS0_5BatonEE+0x99) [0x562ab4255b49]\nmongod(_ZN5mongo9transport18TransportLayerASIO11ASIOSession4readIN4asio17mutable_buffers_1EEENS_14future_details6FutureIvEERKT_RKSt10shared_ptrINS0_5BatonEE+0x13F) 
[0x562ab4262a5f]\nmongod(_ZN5mongo9transport18TransportLayerASIO11ASIOSession17sourceMessageImplERKSt10shared_ptrINS0_5BatonEE+0x9C) [0x562ab426635c]\nmongod(_ZN5mongo9transport18TransportLayerASIO11ASIOSession13sourceMessageEv+0x48) [0x562ab42668e8]\nmongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x493) [0x562ab307d223]\nmongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x11D) [0x562ab307da2d]\nmongod(+0xACA101) [0x562ab3081101]\nmongod(+0x1C79DF5) [0x562ab4230df5]\nmongod(+0x237DE24) [0x562ab4934e24]\nlibpthread.so.0(+0x7E75) [0x7f155a622e75]\nlibc.so.6(clone+0x6D) [0x7f155a34b8fd]\n----- END BACKTRACE -----\n",
"text": "MongoDB server suddenly crashes with Got signal: 3 (Quit).Stack trace looks like this:Can someone provide any information on why this crash happens.\nThe crash happens once in a while typically when the load on the node is high.",
"username": "Abhishek_Sinha1"
},
{
"code": "dmesg | egrep -i “killed process”",
"text": "Welcome to the MongoDB Community @Abhishek_Sinha1!Please share some more details about your deployment:The crash happens once in a while typically when the load on the node is high.Does high load also correlate with low free memory or increased swap usage?One likely possibility is the Linux Out-Of-Memory (OOM) process killer looking to free up RAM. If this is the culprit, there should be some evidence in your system logs, eg: dmesg | egrep -i “killed process”.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "@Stennie_X\nMongoDB version: 4.0.9\nOS: { type: “Linux”, name: “Linux”, architecture: “x86_64”, version: “4.14.154-99.181.amzn1.x86_64” }\ndeployment: standaloneHigh load does not relate to low memory.\nAlso, a couple of days back, it crashed without any load at all. There is no OOM scenario. The system had 150GB+ free memory.",
"username": "Abhishek_Sinha1"
},
{
"code": "",
"text": "@Stennie_X can you share any pointers to debug further?",
"username": "Abhishek_Sinha1"
},
{
"code": "mongodmongod",
"text": "Hi @Abhishek_Sinha1I haven’t been able to find any reported issues with Signal 3 so far, and from your description it doesn’t seem like there are any pattern to the crashes at all.However, I would like to suggest some things that may improve the situation:It will be helpful if you can provide any pattern to the crashes (e.g. what operation is Pymongo doing before any crash, how many connections it handles before it crash, time of day of the crashes, etc.), as it’s difficult to pin down causes of issues with no known pattern. However, if there is no pattern, typically it’s a hardware issue with random memory corruption or similar.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "imgonline-com-ua-twotoone-ORPOhGx218P34R3571×1829 1.97 MB Hi @kevinadiThanks for your suggestions. I will see if these can be incorporated.Regarding the pattern, the crash happens when the load on MongoDB is high (around 500 connections). This is during the morning hours (GMT) when we run a lot of jobs on the server. PyMongo operation majorly involves find and update queries.We enabled verbose logging as well on MongoDB.\nI am attaching a few more images of the logs that we captured.",
"username": "Abhishek_Sinha1"
},
{
"code": "",
"text": "Hi @Abhishek_Sinha1Please don’t post screenshots if possible as they are hard to read and not searchable Regarding the pattern, the crash happens when the load on MongoDB is high (around 500 connections).Typically this implies an OS enforced limit of some kind. As MongoDB does not kill itself even under high load (but will obediently push through no matter how long an operation takes), I would look for any OS level setting that limit disk/CPU/memory usage as mentioned in the Production Notes, and double check that all limits are set to the recommended levels.One question I neglected to ask was how did you install MongoDB? Are you following the instructions at Install MongoDB Community Edition on Amazon Linux, using Docker, or some other method? If you’re using Docker or similar method, additional limits may be enforced by the container host in addition to the OS.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "@kevinadiMongoDB is installed by downloading tgz binaries. It is a standalone server.I just noticed this in the log:\n2021-04-20T11:56:50.402+0000 I CONTROL [initandlisten] build environment:\n2021-04-20T11:56:50.402+0000 I CONTROL [initandlisten] distmod: rhel62\n2021-04-20T11:56:50.402+0000 I CONTROL [initandlisten] distarch: x86_64\n2021-04-20T11:56:50.402+0000 I CONTROL [initandlisten] target_arch: x86_64The distmod is rhel62, Could this be causing the issue? Is there any Amazon Linux specific dependency? Since behind the scene, I believe Amazon Linux is on RHEL itself.",
"username": "Abhishek_Sinha1"
},
{
"code": "mongod",
"text": "Hi @Abhishek_Sinha1If the mongod process can run without issues unless it’s under a high load, I don’t think the problem was caused by any dependency issues. If it was, I would think that the process would have multiple issues to start up.I would encourage you to match up the values in your deployment and the Production Notes for any discrepancies, and also use a more recent MongoDB versions and a supported OS as well to minimize the risk of issues.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "@kevinadiWhat I meant was the production server is on Amazon Linux 1 whereas we have used CentOS 6.2 MongoDB package.\nSince the stack trace had some errors from libpthread and C functions, I thought that there could be some incompatibility between these two which is causing these random failures.Is there any way to decode the backtrace which I posted in the first comment in order to understand the issue better?",
"username": "Abhishek_Sinha1"
},
{
"code": "mongod",
"text": "Hi @Abhishek_Sinha1I don’t believe there was a crash, actually. Isn’t mongod was killed by Signal 3 (Quit) in all cases?Unless I’m missing something, I think the more productive way is to investigate how and where that Signal 3 is coming from.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Yes, It has been signal 3 always.",
"username": "Abhishek_Sinha1"
},
{
"code": "",
"text": "@kevinadiAny pointers to debug the signal 3 request?\nWe have gone through dmesg, kern, audit logs but did not find any occurrence of any log sending signal 3.2021-04-06T13:21:15.832+0000 F - [conn2575456] Got signal: 3 (Quit)Here, signal 3 is associated with a connection id and this connection is from PyMongo. Is there a possibility of the client sending a signal on any particular query?Also, we are running MongoDB on NUMA. Could that be a possibility?",
"username": "Abhishek_Sinha1"
},
{
"code": "",
"text": "Hi @Abhishek_Sinha1Here, signal 3 is associated with a connection id and this connection is from PyMongo. Is there a possibility of the client sending a signal on any particular query?I don’t think any official drivers have a “server kill-switch” since that will be quite dangerous and prone to abuse. If a driver during normal operation can bring down a server by sending kill signals by itself (without you instructing it to do so), please file a ticket in the relevant driver’s JIRA project.Also, we are running MongoDB on NUMA. Could that be a possibility?About NUMA, it’s mentioned in the Production Notes: MongoDB and NUMA Hardware.Having said that, I think those are red herrings. I believe we established that:Please correct me if I misunderstand anything.If my understanding is correct, the only issue is tracing how the server got that Signal 3, and from where. I would reiterate my earlier suggestion to check for OS limits (e.g. ulimit), and see if they match or exceed what’s mentioned in the Production Notes. If everything is in order and you can rule out the OS as the culprit, it might be worth upgrading to the latest MongoDB version to see if this still occurs. All else fails, it might be possible that the app is sending it for some reason.Best regards,\nKevin",
"username": "kevinadi"
}
] | MongoDB crashes with signal 3 (quit) | 2021-04-09T12:02:18.257Z | MongoDB crashes with signal 3 (quit) | 5,274 |
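For anyone retracing the checks suggested in this thread, they boil down to a few host-level commands. The sketch below assumes a Linux host running a single mongod; none of the values are specific to the reporter's environment.

```bash
# Was mongod killed by the kernel OOM killer? (suggested earlier in the thread)
dmesg | egrep -i 'killed process'

# Compare the limits the running mongod actually has against the Production Notes
cat /proc/$(pidof mongod)/limits
ulimit -a

# Watch connection counts around the time of the failures (from the mongo shell)
mongo --quiet --eval 'printjson(db.serverStatus().connections)'
```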
null | [
"connecting",
"database-tools"
] | [
{
"code": "mongocli configmongorestore: command not found",
"text": "Hi all.\nI am trying to load a mongodb dump that I have on my local machine into atlas cluster.\nI have not used atlas before apart from a small project. I have read everything that grows under the sun and i think i must have a silly issue that i cannot see.\nI am on MACOS 10.14.6the dump has been created with mongodump command (BSON files). I have:uploaded the dump locally and it works well.created 2 DB on atlas, all goodinstalled mongocli on my machine\nmongocli --version\nmongocli version: 1.15.1mongo --version\nMongoDB shell version v4.2.0configured a user on atlascreated api keys and use mongocli config to set them up locallynow, 1st problem\nmongo “mongodb+srv://cluster0.u7g3w.mongodb.net/beta” --username myusernamereturns this error2021-04-22T15:00:01.266+0800 I NETWORK [js] Marking host cluster0-shard-00-01.u7g3w.mongodb.net:27017 as failed :: caused by :: Location40659: can’t connect to new replica set master [cluster0-shard-00-01.u7g3w.mongodb.net:27017], err: AuthenticationFailed: bad auth : Authentication failed.*** It looks like this is a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network.2021-04-22T15:00:01.348+0800 E QUERY [js] Error: bad auth : Authentication failed. :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):2:6\n2021-04-22T15:00:01.353+0800 F - [main] exception: connect failed\n2021-04-22T15:00:01.353+0800 E - [main] exiting with code 1bad auth : Authentication failed.I have another issue, which does not seem to be related:\nwhen i try to run mongorestore, i get the message:\nmongorestore: command not foundI have added mongo/bin to my PATH though, wondering if what i miss here\nAny idea would be much appreciated!\nThank you",
"username": "Al_D"
},
{
"code": "mongo “mongodb+srv://cluster0.u7g3w.mongodb.net/beta” --username--usernameatlasUsermongo “mongodb+srv://cluster0.u7g3w.mongodb.net/beta” --username atlasUsermongorestore",
"text": "Hi @Al_D,Welcome to the community!bad auth : Authentication failed.This generally means credentials were entered incorrectly. I can see the connection string you posted was:\nmongo “mongodb+srv://cluster0.u7g3w.mongodb.net/beta” --usernameDid you remove the username from this post or is this the full command you had applied?\nA username will need to be supplied after the --username option.For example, connecting to the cluster as user atlasUser:mongo “mongodb+srv://cluster0.u7g3w.mongodb.net/beta” --username atlasUserYou may find the following Atlas documentation useful:If you’re choosing to use the same user credentials that you’re trying to perform the mongoshell connection from and credentials are entered incorrectly, then the mongorestore will also fail.I have added mongo/bin to my PATH though, wondering if what i miss hereHave you tried running mongorestore from a new terminal after adding it to your PATH?Kind Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "mongo “mongodb+srv://cluster0.u7g3w.mongodb.net/beta” --username",
"text": "thank you very much for taking the time to read and answer my question:\nthere was a typo in my command, i indeed passed my username after --username\nmongo “mongodb+srv://cluster0.u7g3w.mongodb.net/beta” --username\nthen was prompted for my password\nthen got the error message above\n(i edited my original post to reflect the change)as for mongorestore and PATH, indeed i did not make it persist\nI have added the new PATH to my .bash_profile now and still face the same error.\nis it part of a different install?\nThank you very much for your help",
"username": "Al_D"
},
{
"code": "mongo “mongodb+srv://cluster0.u7g3w.mongodb.net/beta” --usernameexport $PATHls -lmongorestoremongorestoremongorestoremongorestore",
"text": "Hi @Al_D,Thanks for clarifying the username.there was a typo in my command, i indeed passed my username after --username\nmongo “mongodb+srv://cluster0.u7g3w.mongodb.net/beta” --username\nthen was prompted for my password\nthen got the error message aboveI would believe in this case if you’re still getting the bad Auth error, then I would double check the password if you are certain the username is correct. Alternatively, some additional troubleshooting steps you can take is to either add a new user / password combination or modify the current user’s password and try again.as for mongorestore and PATH, indeed i did not make it persist\nI have added the new PATH to my .bash_profile now and still face the same error.Are you able to post the output from running export $PATH here? In addition to that, please change directories to the bin folder you have added to PATH and run ls -l to ensure mongorestore is in the bin folder. If you are unable to locate mongorestore, you can download MongoDB Database Tools which will contain mongorestore.Alternatively, you can run mongorestore directly from the bin folder without needing to add it to PATH.Hope this helps.Kind Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi, thanks a lot for this",
"username": "Al_D"
}
] | Cannot import dump from local machine to Atlas | 2021-04-22T07:04:49.581Z | Cannot import dump from local machine to Atlas | 3,441 |
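One recurring point in this thread is that mongorestore ships with the separate MongoDB Database Tools package rather than with the server or shell install, so adding the server's bin directory to PATH may not make it available. Once the tools are installed, a restore into the cluster discussed above could look roughly like the sketch below; the username, password and dump path are placeholders rather than values from the thread.

```bash
# Confirm the Database Tools are installed and on PATH
mongorestore --version

# Restore a local mongodump directory into the Atlas cluster
mongorestore --uri "mongodb+srv://<user>:<password>@cluster0.u7g3w.mongodb.net" ./dump
```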
null | [
"app-services-cli"
] | [
{
"code": "",
"text": "I use the realm-cli in my CI/CD pipeline to deploy my realm-app. For awhile now, after I deploy from my CI/CD pipeline I’ll notice that the app gets stuck in a weird state when I enter the UI. It basically thinks that all the previous code of the app is like a draft, and I have to discard that draft. I also think this causes the latest code to not be activated sometimes. Has anyone experienced this?",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "Hi Lukas,What steps did you follow including the realm-cli commands used?Were there any errors along the way that you can share?Is there any other user on the project making changes at the same time as you deploying?Regards\nManny",
"username": "Mansoor_Omar"
}
] | Realm App in weird draft state after CLI deploy | 2021-03-27T13:44:29.798Z | Realm App in weird draft state after CLI deploy | 2,663 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Hello,The dotnet (C#) drivers 2.11 brings supports for Snappy compression.\nWhen the driver is installed from NuGet, this adds 3 content files to the targeted project.\n(snappy32.dll, snappy64.dll, libzstd.dll)As those libraries are defined as content files and not references, they are not copied over the project during a simple NuGet restore.When working with VCS (GIT, svn, …), it then mandatory to add those files to the repository.This is really annoying, because one rule of thumb is to avoid pushing binaries to any VCS.Is there an easy way to workaround this ?Many thanks in advance for your suggestions",
"username": "Gauthier_Rossion"
},
{
"code": "",
"text": "Hello, This impacting gated build check-in failure , its not able to find these but locally its building without any issuesany idea how to proceed further on these?Thanks,\nSateesh",
"username": "Dasari_Sateesh"
},
{
"code": "MongoDB.Driverpackage.json",
"text": "Hi @Gauthier_Rossion and @Dasari_Sateesh,It’s been a while since you posted this question, have you found a solution to this ?I have just tested this with MongoDB.Driver version 2.11.5 (current stable) and it didn’t add those assemblies. Would you be able to confirm that this is the case with current version 2.11.5 ?If this is still an issue for you could you share a minimal example of package.json that is able to reproduce this issue ? Also could you list where in the project directory would you find those assembly files are in ?Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Hi @Wan,I give it another try with the latest change and I didn’t see a difference in the package installation behaviour.The 3 libraries are still added by file reference.However, in the meantime Visual Studio got updated, and build errors does not occur anymore.So I guess the issue I faced were linked to by dev environment.However, my projects are actually not using the ‘snappy’ compression features.As the libraries are still not present even after a NuGet Restore, I suspect the errors will come back once I decide to use them.(Or perhaps some runtime errors)",
"username": "Gauthier_Rossion"
},
{
"code": "",
"text": "I see this as well, and though it doesn’t seem to impact functionality, adding dlls to my project directly is very odd. Here’s what an empty project with just MongoDB.Driver 2.11.6 installed looks like:\nScreenshot 2021-03-15 190008353×511 10.1 KB\nThis is on a .net framework 4.7.2 project.\nIf I install 2.12, it adds more binary references from the new cryptography nuget that MongoDB.Driver depends on.\nIs this by design?",
"username": "Ivan_Milenkovic"
},
{
"code": "dll",
"text": "Hi @Ivan_Milenkovic ,Unfortunately I’m still unable to reproduce this behaviour with MongoDB.Driver version 2.12.2 .If you’re still experiencing this issue, would you be able to provide a minimal reproduce-able example project ? This is so that we can investigate more on what causing the dll being pulled.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Hi @wan,I tried it again, and I noticed that it does not happen on a .net 5 project. I’m assuming it won’t happen on a project that uses the new .csproj file format in general, and I suspect that’s why you aren’t able to reproduce.\nCan you please try to include the nuget package in a fresh project targeting .net framework 4.7.2?Thanks,\nIvan",
"username": "Ivan_Milenkovic"
},
{
"code": "",
"text": "Hi @wan,I can reproduce this when using a project targeting .NET Framework, but I do not get the same problem if I use .NET Core or .NET 5.0.So I think this is an issue only affecting .NET Framework projects, and not .NET Core ones.Regards,\nOwain",
"username": "Owain"
},
{
"code": "net5.0net4.8net4.7.2<ItemGroup>\n <PackageReference Include=\"MongoDB.Driver\" Version=\"2.12.2\" />\n</ItemGroup>\n",
"text": "Hi @Ivan_Milenkovic, @Owain,Unfortunately I’m unable to reproduce this issue using Visual Studio 2019 on a simple project targeting .NET frameworks on .NET Core, net5.0, net4.8, and net4.7.2.\nThe project only has a single dependencyIf you can reproduce this issue consistently, could you please upload a minimal example project into a GitHub public repository for reference ?Regards,\nWan.",
"username": "wan"
},
{
"code": "MongoDB.DriverPackageReference.csprojsnappylibzstdlibmongocryptmongocrypt",
"text": "Hi @wan,This issue is only happening if I use the NuGet Package Manager.If I add just the MongoDB.Driver as a PackageReference to the .csproj file, then the snappy, libzstd, libmongocrypt, and mongocrypt files are not added to the project. However, will this cause any issues if compression or crypto are required? Also, if I use this workaround but reference an older version of the library, then the aforementioned files are added to the project when I update the package using the NuGet Package Manager.I have created a git repo, mongo-snappy, with some replication steps and an example project.Thanks,\nOwain",
"username": "Owain"
},
{
"code": "dlls",
"text": "Hi @Owain,Thank you for taking the time to document the steps with all of the information.\nI have created a tracking ticket CSHARP-3612 for the MongoDB .NET driver developers to look into.Please note that the presence of these dlls in the .NET project should not cause any issues.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | NuGet and Snappy libraries impact on VCS (GIT) | 2020-10-06T05:48:20.230Z | NuGet and Snappy libraries impact on VCS (GIT) | 7,099 |
[] | [
{
"code": "",
"text": "Hello Atlas experts,it seems to be possible to setup a RS where each node is on a different provider.Sounds odd but you get:where as a 3 node RS only provides:Fun fact: according to the UI it is cheaper, I’d assume that the TCO including network traffic will come to something even. For sure the latency will increase, I was told appr. ~5 ms when all nodes are in the same region. This can be interesting but: is this a real functioning setup ?grafik1038×451 45.4 KBIs anyone around who can addon experiences, thoughts, etc. to this?Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi Michael,Great questions!You’re exactly right that with this configuration you maintain majority quorum and hence continuous read and write availability (save for a momentary replica-set level election) in the event of a full cloud provider. Of course that all assumes you have a cross-cloud resilient application tier which isn’t trivial (but with K8s it’s becoming more and more reasonable over time, still early days).You’re definitely right that TCO when you include cross-cloud data transfers (even with compression over the wire) should not be cheaper than being in a single cloud provider. And you might consider running 2x2x1 to ensure that even during maintenance you always have your primary in your preferred provider: that would also increase the cost a bit.Regarding VPC Peering (or Private Endpoints): when you use this you can only reach the portion of your cluster inside the same cloud provider. So in a sharded cluster this means reads and writes (since mongos’s can do the routing to the rest of the cluster) but in a replica set you’d be experiencing a read-only connection if you were peered only to a secondary and couldn’t reach the primary over the network. Some options to consider would be to leverage public IP access lists for cross-provider app tier access, or of course you could run a sharded cluster.There would definitely be latency tradeoffs here: particularly if you’re using the majority write concern. Writes would then acknowledge after hitting two cloud providers: which could be susceptible for less reliable network latency.Cheers\n-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Spin up a Multiprovider 3 Node Cluster on 3 Providers? | 2021-04-21T18:51:07.530Z | Spin up a Multiprovider 3 Node Cluster on 3 Providers? | 1,985 |
null | [
"compass",
"security"
] | [
{
"code": "",
"text": "We have MongoDB 4.4.4 on Server 2016\nWe haven’t username/password authentication set up.\nWe can not log in MongoDB Compass and see the error:an error occurred while loading navigation command hostinfo requires authenticationIf someone could tell me what I’m doing wrong I’d appreciate it. Thanks in advance!",
"username": "moshe_kremen"
},
{
"code": "",
"text": "Are you able to connect with shell?\nAre you using SRV string or individual fields in Compass?Issue could be due to selection of authenticationDatabase or authentication mechanism\nYou can refer to",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "We able to connect with shell . (mongo --port 27017 )\nWe are using individual fields in Compass.\nWe are not using Enable Access Control (authorization: disabled in configuration file) .\nWe don’t select authenticationDatabase in Compass ,\nas we don’t use authentication mechanism meanwhile )",
"username": "moshe_kremen"
},
{
"code": "",
"text": "OK\nFor locally running mongod just need to give port and host as localhost and connectDid you try to restart Compass and see",
"username": "Ramachandra_Tummala"
}
] | MongoDB 4.4.4 on Server 2016 : an error occurred while loading navigation command hostinfo requires authentication | 2021-04-20T07:13:58.255Z | MongoDB 4.4.4 on Server 2016 : an error occurred while loading navigation command hostinfo requires authentication | 23,461 |
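Since the shell connection works while Compass reports that hostinfo requires authentication, one way to narrow this down is to issue the same command from the shell session that already connects. A quick check, assuming the local standalone on port 27017 mentioned above:

```javascript
// In the mongo shell started with: mongo --port 27017
db.adminCommand({ hostInfo: 1 })        // the call Compass issues for its navigation pane
db.adminCommand({ getCmdLineOpts: 1 })  // shows whether security.authorization is actually enabled
```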
[
"app-services-user-auth",
"api"
] | [
{
"code": "",
"text": "Hi there,I’m using Realm and I would like to delete an App User programmatically.\nI’m trying to do so by using Realm Administration API, especially these methods:\nCapture d’écran 2021-04-14 à 17.42.17794×317 16.9 KB\nWhen I use the GET method everything is fine, I’m able to retrieve my user information, however the DELETE method isn’t working.Here is what I do to call the GET method:curl --request GET --header ‘Authorization: Bearer <access_token>’ https://realm.mongodb.com/api/admin/v3.0/groups/<group_id>/apps/<app_id>/users/<user_id>And I do exactly the same to delete but just replacing “GET” by “DELETE”.Is there something I missed?Thanks for your help!",
"username": "Julien_Chouvet"
},
{
"code": "projectIdgroupdIdhttps://realm.mongodb.com/api/admin/v3.0/groups/${projectId}/apps/${appId}/users/${userId}",
"text": "Have you tried this endpoint yet? It uses projectId as opposed to groupdId…(working in node.js)https://realm.mongodb.com/api/admin/v3.0/groups/${projectId}/apps/${appId}/users/${userId}",
"username": "Eric_Lightfoot"
},
{
"code": "projectIdgoupId",
"text": "I’m actually using my projectId as goupId because in this link it’s said that it’s the same.",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "Maybe you can get more specific about how the DELETE request is not working?",
"username": "Eric_Lightfoot"
},
{
"code": "id<user_id>",
"text": "Well unfortunately there is not much to say. I’m running the curl command in order to delete one of the following Realm App User, using the id column for <user_id>:\nCapture d’écran 2021-04-15 à 08.01.451440×608 89.4 KBAs the DELETE request returns nothing I don’t have any clues about what is going wrong.Tell me if you want some specific information.",
"username": "Julien_Chouvet"
},
{
"code": "curl --request DELETE \\\n --header 'Authorization: Bearer <accesstoken>' \\\nhttps://realm.mongodb.com/api/admin/v3.0/groups/<projectID>/apps/<appID>/users/<userId>\n\nnpm i -g mongodb-realm-cli@betarealm-cli users delete",
"text": "Hey @Julien_Chouvet - a few things to note:\na) the DELETE method isn’t going to return anything - I just tried the following which successfully deleted my user:Project ID and Group ID are indeed the same thing and used interchangeably. If you’re using the project ID where your app is located, it shouldn’t cause any issues.If you’re looking for much easier user management, we just introduced it in our new CLI - you can download it via:\nnpm i -g mongodb-realm-cli@betaafter logging in via the CLI, you can runrealm-cli users delete which will provide a list of users that you can select to delete:image1892×278 99.7 KBLet me know if you have any other questions",
"username": "Sumedha_Mehta1"
},
{
"code": "curlcurl",
"text": "Hey @Sumedha_Mehta1,Thanks for your help but unfortunately it’s still not working \nI tried again with your curl command but it did not work (but still working with GET). Is there somewhere some logs produced by the DELETE request that I can use to find what is going wrong?Actually I want to use this REST method on my iOS app to allow a user to remove its account. I first tried directly to do the DELETE from the app but it didn’t work that’s why I’m trying with curl.So, is there another way to delete a user that I can use from my app? Is it possible to do it from a Realm function or with a Trigger?Thanks for your help!",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "Hey Julien - you should be able to do this in a system function or trigger as well. We don’t typically recommend that you do admin API calls from the client, but you could do something like this by calling a system function that executes this API call.Pavel goes through an example here of how to call the Admin API in functions - Custom Function Authentication Problems and Solutions - #2 by Pavel_DuchovnyCan I ask what provider this user has been registered on ream with? (email/pass, anon, etc…)",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Ok thank! I’ll try this way.I tried with both email/password & custom function",
"username": "Julien_Chouvet"
},
{
"code": "exports = function(payload, response) {\n\n const AtlasPrivateKey = <my_private_key>\n const AtlasPublicKey = <my_public_key>\n const AtlasGroupId = <my_group_id>\n const appId = <my_app_id>\n \n // Authenticate to Realm API\n return context.http.post({\n url : \"https://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/login\",\n headers : { \"Content-Type\" : [\"application/json\"],\n \"Accept\" : [\"application/json\"]},\n body : {\"username\": AtlasPublicKey, \"apiKey\": AtlasPrivateKey},\n encodeBodyAsJSON: true\n }).then (respone_cloud_auth => {\n const cloud_auth_body = JSON.parse(respone_cloud_auth.body.text());\n \n // Get the internal appId\n return context.http.get({\n url : `https://realm.mongodb.com/api/admin/v3.0/groups/${AtlasGroupId}/apps`,\n headers : { \"Content-Type\" : [\"application/json\"],\n \"Accept\" : [\"application/json\"],\n \"Authorization\" : [`Bearer ${cloud_auth_body.access_token}`]\n }\n }).then(respone_realm_apps => {\n const realm_apps = JSON.parse(respone_realm_apps.body.text());\n \n var internalAppId = \"\";\n \n realm_apps.map(function(app){ \n if (app.client_app_id == appId){\n internalAppId = app._id;\n }\n });\n \n return context.http.delete({\n url : `https://realm.mongodb.com/api/admin/v3.0/groups/${AtlasGroupId}/apps/${internalAppId}/users/5ee8bc87c6be46a85871365a`,\n headers : { \"Content-Type\" : [\"application/json\"],\n \"Accept\" : [\"application/json\"],\n \"Authorization\" : [`Bearer ${cloud_auth_body.access_token}`]}\n }).then (respone_realm_users => {\n return respone_realm_users.body.text();\n });\n });\n });\n} \n<my_group_id><my_app_id>",
"text": "Hey @Sumedha_Mehta1,I tried to create a webhook to call the Admin API to delete a user but it still doesn’t work.\nI follow the example you gave me:Pavel goes through an example here of how to call the Admin API in functions - Custom Function Authentication Problems and Solutions The GET works perfectly but the DELETE still does nothing.\nHere is my code:For <my_group_id> I use the id provided here:\n\nCapture d’écran 2021-04-18 à 10.09.321438×762 76.4 KB\nAnd for <my_app_id> I use the id provided here:\n\nCapture d’écran 2021-04-18 à 10.10.551437×467 62.9 KB\nThanks!",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "Hey Julien - interesting that the GET works perfectly. The app id you’re using should be the second string in your URL when you’re in Realm, not the app id used for connecting via the SDKs. Does that work in your case?Get your ideas to market faster with a developer data platform built on the leading modern database. MongoDB makes working with data easy.or you can do a request to get it like described here\nhttps://docs.mongodb.com/realm/admin/api/v3/#application-id",
"username": "Sumedha_Mehta1"
},
{
"code": "<my_app_id> // Get the internal appId\n return context.http.get({\n url : `https://realm.mongodb.com/api/admin/v3.0/groups/${AtlasGroupId}/apps`,\n headers : { \"Content-Type\" : [\"application/json\"],\n \"Accept\" : [\"application/json\"],\n \"Authorization\" : [`Bearer ${cloud_auth_body.access_token}`]\n }\n }).then(respone_realm_apps => {\n const realm_apps = JSON.parse(respone_realm_apps.body.text());\n \n var internalAppId = \"\";\n \n realm_apps.map(function(app){ \n if (app.client_app_id == appId){\n internalAppId = app._id;\n }\n });\ninternalAppId",
"text": "Yes this is the one i’m actually using in my request. In the code below I used the app id <my_app_id> in order to retrieve the ‘internal’ app id thanks to this part of the code:I checked the string in the var internalAppId and it’s the same as the one in the URL when I’m in my Realm.",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "The solution to this issue was that the API Key did not have ‘Project Owner’ permissions and mutations to the app were not permitted.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Administration API - DELETE user | 2021-04-14T15:57:37.396Z | Realm Administration API - DELETE user | 7,064 |
null | [
"python",
"security"
] | [
{
"code": "client = pymongo.MongoClient(\"mongodb+srv://LOGIN:WRONGPASSWORD!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!@sandbox.g4tdz.mongodb.net/myFirstDatabase?retryWrites=true&w=majority\")\n",
"text": "Hello I have noticed a weird behaviour in my Python programm.this is my Code fragment that makes me worry:Basically with the right login name i can get access to the dabatabase with the wrong password. Literally any non-zero String works.I dont understand how or why",
"username": "Bogdan_Narusavicius"
},
{
"code": "",
"text": "Hi @Bogdan_NarusaviciusAll you have there is a connection. If you try an operation you’ll get an Exception if authentication is enabled(which it is on Atlas).",
"username": "chris"
}
] | Python pymongo Authentication Password | 2021-04-22T07:43:52.648Z | Python pymongo Authentication Password | 1,640 |
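The point in the reply above is that MongoClient connects lazily: constructing the client performs no I/O, so a wrong password only surfaces when an operation actually reaches the server. A small sketch of how that typically shows up, using a shortened form of the URI from the question; OperationFailure is the exception PyMongo raises when the authentication handshake is rejected.

```python
import pymongo
from pymongo.errors import OperationFailure

# Constructing the client succeeds even with a wrong password: no connection is made yet.
client = pymongo.MongoClient(
    "mongodb+srv://LOGIN:WRONGPASSWORD@sandbox.g4tdz.mongodb.net/?retryWrites=true&w=majority"
)

try:
    # The first real operation forces the handshake, and the bad password surfaces here.
    client.list_database_names()
    print("credentials accepted")
except OperationFailure as exc:
    print("authentication failed:", exc)
```

Because of this, checking credentials up front usually means issuing a cheap command right after constructing the client rather than relying on the constructor.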
null | [
"java",
"crud"
] | [
{
"code": "[\n{\n\"_id\": 12175,\n\"MatchID\": 11978,\n\"Players\": [\n{\n\"PlayerID\": 12063,\n\"PlayerPosition\": \"Captain\",\n\"Points\": 8\n},\n{\n\"PlayerID\": 12041,\n\"PlayerPosition\": \"Player\",\n\"Points\": 3\n},\n{\n\"PlayerID\": 12066,\n\"PlayerPosition\": \"Player\",\n\"Points\": 21\n},\n{\n\"PlayerID\": 12067,\n\"PlayerPosition\": \"Player\",\n\"Points\": 33\n},\n{\n\"PlayerID\": 12064,\n\"PlayerPosition\": \"Player\",\n\"Points\": 0\n},\n{\n\"PlayerID\": 12069,\n\"PlayerPosition\": \"ViceCaptain\",\n\"Points\": 12288\n},\n{\n\"PlayerID\": 12045,\n\"PlayerPosition\": \"Player\",\n\"Points\": 0\n},\n{\n\"PlayerID\": 12074,\n\"PlayerPosition\": \"Player\",\n\"Points\": -3\n},\n{\n\"PlayerID\": 12079,\n\"PlayerPosition\": \"Player\",\n\"Points\": 8\n},\n{\n\"PlayerID\": 12059,\n\"PlayerPosition\": \"Player\",\n\"Points\": 0\n},\n{\n\"PlayerID\": 12054,\n\"PlayerPosition\": \"Player\",\n\"Points\": 0\n}\n],\n\"Points\": 3141\n}\n]\ndb.user_teams.updateMany(\n{\"MatchID\": 11978, \"Players.PlayerID\": 12063},\n\n {\n $set: { 'Players.$.Points' : { $switch: {\n branches: [\n { case: { $eq: [ \"$Players.$.PlayerPosition\", \"Captain\" ] }, then: 16 },\n { case: { $eq: [ \"$Players.$.PlayerPosition\", \"ViceCaptain\" ] }, then: 12 }\n ],\n default: 8\n } } }\n }\n);\ndb.user_teams.updateMany(\n{\"MatchID\": 11978, \"Players.PlayerID\": 12063},\n[\n{\n$set: { 'Players.$.Points' : { $switch: {\nbranches: [\n{ case: { $eq: [ \"$Players.$.PlayerPosition\", \"Captain\" ] }, then: 16 },\n{ case: { $eq: [ \"$Players.$.PlayerPosition\", \"ViceCaptain\" ] }, then: 12 }\n],\ndefault: 8\n} } }\n}\n]\n);\n",
"text": "HiI have my structure like thisNow I am trying to update the player’s point using the below queryBut this is giving me error\ncom.mongodb.MongoWriteException: The dollar ($) prefixed field ‘$switch’ in ‘Players.0.Points.$switch’ is not valid for storage.I also tried withGetting error\ncom.mongodb.MongoWriteException: Invalid set :: caused by :: FieldPath field names may not start with ''.Please help me to resolve this error.",
"username": "Initfusion_Testing"
},
{
"code": "$mapPlayers$condPlayerID$switch$mergeObjectsPointsdb.user_teams.updateMany({\n \"MatchID\": 11978,\n \"Players.PlayerID\": 12063\n},\n[\n {\n $set: {\n Players: {\n $map: {\n input: \"$Players\",\n in: {\n $mergeObjects: [\n \"$$this\",\n {\n $cond: [\n { $eq: [\"$$this.PlayerID\", 12063] },\n {\n Points: {\n $switch: {\n branches: [\n {\n case: { $eq: [\"$$this.PlayerPosition\", \"Captain\"] },\n then: 16\n },\n {\n case: { $eq: [\"$$this.PlayerPosition\", \"ViceCaptain\"] },\n then: 12\n }\n ],\n default: 8\n }\n }\n },\n \"$$this\"\n ]\n }\n ]\n }\n }\n }\n }\n }\n])\nCaptainViceCaptainCaptainViceCaptaindb.collection.updateMany({\n \"MatchID\": 11978,\n \"Players.PlayerID\": 12063\n},\n{\n $set: {\n \"Players.$[c].Points\": 16,\n \"Players.$[v].Points\": 12,\n \"Players.$[cv].Points\": 8\n }\n},\n{\n arrayFilters: [\n {\n \"c.PlayerPosition\": \"Captain\",\n \"c.PlayerID\": 12063\n },\n {\n \"v.PlayerPosition\": \"ViceCaptain\",\n \"v.PlayerID\": 12063\n },\n {\n \"cv.PlayerPosition\": {\n $nin: [\n \"Captain\",\n \"ViceCaptain\"\n ]\n },\n \"cv.PlayerID\": 12063\n }\n ]\n})\n",
"text": "The $switch is a aggregation pipeline operator, regular update query will not allow to use this operation,There is a option update with aggregation pipeline starting from MongoDB 4.2,PlaygroundYou can use other option, arrayFilters $[identifier],Playground",
"username": "turivishal"
},
{
"code": "",
"text": "Hi @turivishalThank you very much. Both methods are working fine.But which technique is preferable and optimized?Once again thank you.",
"username": "Initfusion_Testing"
},
{
"code": "",
"text": "I would prefer second method arrayFilters because it will update in exact matching element,\nI would not prefer first aggregation method because it process for whole array and write again so it may impact performance.",
"username": "turivishal"
},
{
"code": "",
"text": "Great, Thank you for your very quick response. I appreciate your investigation.",
"username": "Initfusion_Testing"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Need help to write update query | 2021-04-22T06:11:54.583Z | Need help to write update query | 3,237 |
null | [
"atlas-device-sync"
] | [
{
"code": "let session = realm.syncSession\nnotificationToken = session!.addProgressNotification(for: .download, mode: .forCurrentlyOutstandingWork) { [weak self] progress in\n DispatchQueue.main.async {\n progressView.setProgress(Float(progress.fractionTransferred), animated: true)\n if progress.isTransferComplete {\n // Continue\n }\n }\n}\n[\n \"Session closed after receiving UNBIND event\",\n \"Session was active for: 9s\"\n]\n",
"text": "Hi there,When a user switches device and logs in, I have a block of code to synchronously sync the data with the servers before progressing:Code looks something like this:However, as soon as it completes, the completion handler is never called and we never progress. This block of code used to work for me (~1.5 weeks ago), but for some reason has stopped working. I know it was synced successfully because when I restart the app and log-in, the data is now successfully loaded locally.In my logs I see this, but otherwise no errors.Logs:Thoughts?",
"username": "Roger_Cheng"
},
{
"code": "",
"text": "@Roger_Cheng What version of the SDK are you using? I believe we just fixed some behavior related to this in the latest version",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_WardSDK:\nRealm Cocoa v10.7.2\nPlatform Version:\nVersion 14.3 (Build 18C66)I will say that about 1.5 weeks ago, when we were testing, realm sync was working just fine for us, but we did migrate to a new, atlas cluster in anticipation of moving to production (from M0 to M2 simply by clicking through the Atlas GUI) and have since struggled with our syncs.Since this post yesterday, we dropped all the data in the underlying databases, terminated the sync and restarted the sync, and while have seen some syncs come through (though not as cleanly with the progress bar as before), are still seeing recurring errors in the logs, with the most recent one being, on repeat:Failed to integrate download after attempting the maximum number of tries: error applying downloaded changesets to mongodb: (NoSuchTransaction) error performing bulk write to MongoDBHere is my url if you’re able to take a look - appreciate you investigating:",
"username": "Roger_Cheng"
},
{
"code": "",
"text": "I will say that about 1.5 weeks ago, when we were testing, realm sync was working just fine for us, but we did migrate to a new, atlas cluster in anticipation of moving to production (from M0 to M2 simply by clicking through the Atlas GUI) and have since struggled with our syncs.For any performance related issues we really would recommend being on a dedicated tier, especially if you are looking to go into production. Any time you upgrade from a shared tier or drop collections you will need to terminate and re-enable sync to regenerate the history - make sure to wait a bit before re-enabling as it cleans up the old history. The re-enabling of sync causes an Initial Sync event which puts a lot of load on your server - doubly so if you are a shared tier user.https://docs.mongodb.com/realm/reference/terminating-and-reenabling-realm-sync/",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks - that must have been what caused the cascade of errors / problems. Very helpful.",
"username": "Roger_Cheng"
},
{
"code": "",
"text": "@Roger_Cheng Do the performance problems persist? Have you tried upgrading to a M10, atleast temporarily to see if it improves your workload?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "The sync has begun working again ever since fully terminating the sync and dropping all the databases. Obviously a sub-ideal solution, but fortunately we’re not live yet. Haven’t had a chance to try M10 yet, but that suggestion is duly noted for the future.",
"username": "Roger_Cheng"
}
] | realm.syncSession disconnects upon completion | 2021-04-19T18:59:40.551Z | realm.syncSession disconnects upon completion | 3,279 |
null | [
"connecting",
"security",
"ruby"
] | [
{
"code": "db_options = {\n \n ssl: false,\n \n pool_size: 40,\n \n pool_timeout: 30\n \n}\n\nclient = MongoClient.from_uri(\"mongodb://appuser:[email protected]/details_info\")\ncol = client.db('details_info').collection('test_report')\ncol.insert(...)\nMongo::AuthenticationError: Failed to authenticate user 'appuser' on db 'details_info'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/db.rb:179:in `issue_authentication'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/mongo_client.rb:275:in `block in apply_saved_authentication'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/mongo_client.rb:274:in `each'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/mongo_client.rb:274:in `apply_saved_authentication'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/util/pool.rb:190:in `checkout_new_socket'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/util/pool.rb:288:in `block (2 levels) in checkout'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/util/pool.rb:279:in `synchronize'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/util/pool.rb:279:in `block in checkout'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/util/pool.rb:272:in `loop'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/util/pool.rb:272:in `checkout'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/mongo_client.rb:563:in `checkout_writer'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/networking.rb:85:in `send_message_with_gle'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/collection.rb:1121:in `block in send_insert_message'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/util/logging.rb:55:in `block in instrument'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/util/logging.rb:20:in `instrument'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/util/logging.rb:54:in `instrument'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/collection.rb:1119:in `send_insert_message'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/collection.rb:1111:in `insert_batch'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/collection.rb:1169:in `insert_documents'D:/Workspace/ruby230/2.3.0/lib/ruby/gems/2.3.0/gems/mongo-1.9.2/lib/mongo/collection.rb:389:in `insert'\n",
"text": "When I login mongodb in my ruby code against ruby driver 1.9.2, always told me theFailed to authenticate user ‘appuser’ on db ‘details_info’logs:When I use ruby driver 2.14.0, the authentication is successfully, but doesn’t work in 1.9.2. could you please give me some help?",
"username": "tracy_ren"
},
{
"code": "",
"text": "When I use ruby driver 2.14.0, the authentication is successfully, but doesn’t work in 1.9.2. could you please give me some help?Hi @tracy_ren,The MongoDB Ruby 1.9.2 driver was released in August, 2013 and predates support for modern MongoDB server features and authentication methods. The latest production release of MongoDB when this driver version was released was MongoDB 2.4.If you are connecting to a newer MongoDB server, you will also have to update your driver version. See Ruby Driver Compatibility for a reference of supported driver and server combinations.Regards,\nStennie",
"username": "Stennie_X"
}
] | Fail to authenticate user on db with ruby driver 1.9.2 | 2021-04-22T08:02:35.133Z | Fail to authenticate user on db with ruby driver 1.9.2 | 3,530 |
null | [] | [
{
"code": "",
"text": "Hello guys I found out about MongoDB Realm, say if i have an existing API using Nodejs and MongoDB which works locally, can i take this API and make it work on MongoDB Realm? also I wanted to make a real time chat and notification but i think I read that MongoDB Realm offers those services? am I correct? or will i need to implement socket ? I want the API to also work on mobile that’s why I’m interested in MongoDB Realm.\nThanks in advance!",
"username": "Ahmed_Omar"
},
{
"code": "",
"text": "Hi @Ahmed_Omar - welcome to the community forum!There’s a dedicated Node.js SDK for Realm, but many of the Realm features (including accessing data and calling Realm functions) is available through any of the MongoDB drivers using Realm’s wire protocol feature – so that’s an option if you don’t want to switch to the SDK (at least not right away).This series of articles describe how to build a chat app using Realm (the sharing of messages is done via Realm Sync rather than sockets – that way it handles the case when a user is offline for a while – when they connect they automatically receive all of the messages they missed).",
"username": "Andrew_Morgan"
}
] | Existing API using Nodejs to MongoDB Realm | 2021-04-22T00:21:22.964Z | Existing API using Nodejs to MongoDB Realm | 1,632 |
null | [
"aggregation",
"queries",
"dot-net"
] | [
{
"code": "",
"text": "Hi guys, we want to know if you can help us with a question.Is there a way to send an “aggregate pipeline” instruction from my application’s backend to be processed asynchronously by Mongo and then go and check its status and determine if it is still running or is finished. All of the above focused on the use of the Mongo driver, more specifically the .net Core driver.Currently we are executing an aggregation pipeline that takes more than 5 minutes and we are invoking it from an azure function that has an execution limit of 5 minutes, so the function dies before the mongo processes the instruction.Thank you.",
"username": "Jose_Alejandro_Benit"
},
{
"code": "",
"text": "Hi @Jose_Alejandro_Benit,Currently we are executing an aggregation pipeline that takes more than 5 minutes and we are invoking it from an azure function that has an execution limit of 5 minutes, so the function dies before the mongo processes the instruction.Depending on the use case, 5 minutes seems like an excessive time. Do you know why the aggregation pipeline process is taking this long to be processed ?If the complication is mainly due to the document schema, please consider to re-design the schema to fit the use case. See also Building With Patterns: A SummaryYou could also try to split the aggregation pipeline into smaller batches. For example, by using $merge operator and materialised views. Alternatively, you could also split the aggregation pipeline into smaller tasks.Regards,\nWan.",
"username": "wan"
}
] | Asynchronous aggregation pipeline from mongo driver | 2021-04-19T21:54:01.147Z | Asynchronous aggregation pipeline from mongo driver | 2,620 |
null | [
"connecting"
] | [
{
"code": "",
"text": "Hi allThis is completely new territory to me. I’ve set up an Atlas M10 instance, established VPC peering between Atlas and my GCP custom network, created a Kubernetes cluster in that same custom network.Now, I create a busybox pod in my cluster, and launch nslookup against my Atlas cluster name. It says it can’t resolve the name. Am I missing something? If it can’t resolve the FQDN name, how my applications even be able to connect using the generated connection string (in Atlas GUI).Please help.",
"username": "Jesum_Yip"
},
{
"code": "",
"text": "btw, SRV records seem to work fine though - i am able to resolve them via DNS. … i must be doing something royally stupid, but I don’t know what it is. ",
"username": "Jesum_Yip"
},
{
"code": "nslookup",
"text": "Hi @Jesum_Yip,Welcome to the community! Thanks for contributing.Glad to hear that using the SRV record works.Now, I create a busybox pod in my cluster, and launch nslookup against my Atlas cluster name. It says it can’t resolve the name.Are you able to provide nslookup command being used as well as the full output?Look forward to hearing from you.Kind Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "/ # nslookup -debug -type=SRV _mongodb._tcp.tyk-mongodb-pri.8sjy7.mongodb.net\nServer: 10.9.80.10\nAddress: 10.9.80.10:53\n\nQuery #0 completed in 10ms:\nNon-authoritative answer:\n_mongodb._tcp.tyk-mongodb-pri.8sjy7.mongodb.net service = 0 0 27017 tyk-mongodb-shard-00-00-pri.8sjy7.mongodb.net\n_mongodb._tcp.tyk-mongodb-pri.8sjy7.mongodb.net service = 0 0 27017 tyk-mongodb-shard-00-01-pri.8sjy7.mongodb.net\n_mongodb._tcp.tyk-mongodb-pri.8sjy7.mongodb.net service = 0 0 27017 tyk-mongodb-shard-00-02-pri.8sjy7.mongodb.net",
"text": "I am new to this, and spent the last few hours trying and learning but I think I understand it better now.When I create a cluster, I should connect to it using the SRV record as this is the cluster’s name. The individual shard names are the nodes in the cluster. The SRV record has a reference to these shard names. By specifying mongodb+srv:// in the connection string, I am telling the driver to please use the SRV record. Hence, the URI that comes after that is the SRV record. Is my understanding correct?image1357×460 11 KB\nThe image shows I am able to connect to it after I deployed a pod with mondb-clients in it. (Don’t worry, I already changed the password).Here is the nslookup output",
"username": "Jesum_Yip"
},
{
"code": "",
"text": "So by performing the connection this way, and I used the private connection (you can see the -pri in the name above), this means all my connectivity is flowing via the VPC peering that I have established. Is this correct? Hence less risk of data being sniffed across the wire.",
"username": "Jesum_Yip"
},
{
"code": "",
"text": "So I think the problem I am now facing is with the helm chart of an app developed using Go. This app is called Tyk (it’s an API gateway).In the yaml file, I have specified the value of the mongoDB connection string exactly as in the screenshot I provided above mongodb+srv://…And during the installation of the components referenced by the helm chart, I am seeing connectivity failures to MongoDB. My tests in manually connecting to the Atlas instance was done using a pod deployed in the same K8 cluster, so I think network, and IP address whitelisting is sorted out. This means my next step is to ask Tyk why this is failing and how to troubleshoot it further.",
"username": "Jesum_Yip"
},
{
"code": "nslookup",
"text": "Hi @Jesum_Yip,Thanks for getting back to me with that information and the nslookup output.The SRV record has a reference to these shard names. By specifying mongodb+srv:// in the connection string, I am telling the driver to please use the SRV record. Hence, the URI that comes after that is the SRV record. Is my understanding correct?Yes, your understanding here is correct. However, the shard names you are referencing are specific to Sharded Clusters. The SRV record references the hostnames of the nodes within your cluster. Since you’ve mentioned this is an M10 cluster, I would assume that this is a standard replica set and not a sharded cluster.So by performing the connection this way, and I used the private connection (you can see the -pri in the name above), this means all my connectivity is flowing via the VPC peering that I have established. Is this correct?Yes, this is also correct.And during the installation of the components referenced by the helm chart, I am seeing connectivity failures to MongoDB. My tests in manually connecting to the Atlas instance was done using a pod deployed in the same K8 cluster, so I think network, and IP address whitelisting is sorted out.It does sound like there is no network, Atlas configuration or cluster issues from your description at this stage. However, to better troubleshoot this would you be able to provide the full connectivity failure errors you’re receiving?Kind Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "All I see in the pod logs is an infinite loop trying to connect to Mongodb. I don’t see the reason for the failure. Let me speak with a tyk representative to see how I can get more detailed error logs.",
"username": "Jesum_Yip"
},
{
"code": "",
"text": "Thanks for the update @Jesum_Yip, please update here if you find a resolution from the tyk representatives so that users in future may also be able to implement the same possible fix.",
"username": "Jason_Tran"
},
{
"code": "mongodb://username:[email protected]:27017,tyk-mongodb-shard-00-01-pri.8sjy7.mongodb.net:27017,tyk-mongodb-shard-00-02-pri.8sjy7.mongodb.net:27017/tyk-dashboard?&authSource=admin",
"text": "I finally got it working. Looks like you are right - the driver doesn’t understand the +SRV keyword in the URI.I had to finally use all the individual sharded clusters. I also couldn’t use ssl=true in the URI. Tyk didn’t like it. Instead, I had to modify the helm chart useSSL value to be TRUE.This is the final URI I used (I am quite sure 27017 is not required because https://docs.mongodb.com/manual/reference/connection-string/ says that it will default to 27017 if no port is specified).mongodb://username:[email protected]:27017,tyk-mongodb-shard-00-01-pri.8sjy7.mongodb.net:27017,tyk-mongodb-shard-00-02-pri.8sjy7.mongodb.net:27017/tyk-dashboard?&authSource=adminI also did a double check to ensure the connection was not going through public internet - I had a look at the database access history and I can see the incoming connections are from a 10.x.x.x private subnet range.Thank you!",
"username": "Jesum_Yip"
},
{
"code": "",
"text": "I finally got it working.Glad to hear @Jesum_Yip! Thanks for the update.As an additional note, while connecting without SRV works, all official MongoDB drivers that are compatible with MongoDB server v3.6+ should support SRV connection URI. The real issue could be related to network configuration on the deployment environment.Kind Regards,\nJason",
"username": "Jason_Tran"
}
] | New to this. Can't connect to Atlas from GCP | 2021-04-21T02:06:14.312Z | New to this. Can’t connect to Atlas from GCP | 5,196 |
null | [
"replication"
] | [
{
"code": "",
"text": "There’s something I don’t understand about hidden nodes.I want to monitor or back up the hidden node’s data using the third-party solution according to the recommendation in the second sentence.\nHowever, it is expected that the hidden node will not be visible in the program by the first sentence.Then what is the meaning of the second sentence?\nUnder what conditions can hidden nodes be used for reporting or backup?",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Hello @Kim_Hakseon,However, it is expected that the hidden node will not be visible in the program by the first sentence.The client application, in this case, means the client program that is connecting to the replica set. The client writes and reads data from the replica set. You specify the replica set’s uri from your client program (e.g., shell, NodeJS or Java program) to connect, read and write. This client program cannot connect to the hidden member in any case (only data replication happens). But, a client program can read data from other secondary members - when appropriate Read Preference is configured.If the replica set’s primary crashes, an election takes place and a new member is elected as primary. And the client program reads and writes from the new primary. In such a scenario, the hidden member cannot become a primary.Use hidden members for dedicated tasks such as reporting and backupsIn a replica set, you can connect to a secondary member directly and perform read operations (e.g., query a collection). This capability allows you to extract the data needed from the hidden member for reporting and such tasks. Note that, the only operation happening on the hidden member is just the replication (copy of the write operations are written to this node).",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "The conclusion of the answer is that the hidden node is invisible in the connection to the replica set (ex:mongodb://host1:27017,host2:27017,host3:27017/?replicaaSet:rs0), and can be seen in the direct access to the hidden node (ex:mongodb://host3:27017/).Thank you so much. ",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | About Hidden Node | 2021-04-22T02:50:53.924Z | About Hidden Node | 3,980 |
null | [
"aggregation",
"queries"
] | [
{
"code": "{\n_id:ObjectId(\"3df61406-a65c-480d-b258-85786caa871e\"),\n\"model\":[\"nnn\",\"mmm\"]\nchildrens: [\"2a865da0-fbe1-4842-954d-8c7b527f8bd1\", \"8d06e6af-5f68-4559-9712-706afed5c0db\"]\n},\n{\n_id:ObjectId(\"2a865da0-fbe1-4842-954d-8c7b527f8bd1\"),\n\"model\":[\"nnn\",\"mmm\"],\nchildrens: [\"4f84d69a-94da-4219-a85d-0293a615da8d\", \"5e55c16f-1b39-48ca-8f79-7acc373dca13\"]\n},\n{\n_id:ObjectId(\"4f84d69a-94da-4219-a85d-0293a615da8d\"),\n\"model\":[\"nnn\",\"mmm\"],\nchildrens: []\n},\n{\n_id:ObjectId(\"5e55c16f-1b39-48ca-8f79-7acc373dca13\"),\n\"model\":[\"nnn\",\"mmm\"],\nchildrens: [\"fb47003b-8b9f-4c1c-bc6c-cf51c0dbb8bc\"]\n},\n{\n_id:ObjectId(\"8d06e6af-5f68-4559-9712-706afed5c0db\"),\n\"model\":[\"nnn\",\"mmm\"],\nchildrens:[]\n},\n{\n_id:ObjectId(\"fb47003b-8b9f-4c1c-bc6c-cf51c0dbb8bc\"),\n\"model\":[\"nnn\",\"mmm\"],\nchildrens:[]\n}\n{\n _id:ObjectId(\"3df61406-a65c-480d-b258-85786caa871e\"),\n \"model\":[\"nnn\",\"mmm\"]\n childrens: [\n {\n _id:ObjectId(\"2a865da0-fbe1-4842-954d-8c7b527f8bd1\"),\n \"model\":[\"nnn\",\"mmm\"],\n childrens: [\n {\n _id:ObjectId(\"4f84d69a-94da-4219-a85d-0293a615da8d\"),\n \"model\":[\"nnn\",\"mmm\"],\n childrens: []\n },\n {\n _id:ObjectId(\"5e55c16f-1b39-48ca-8f79-7acc373dca13\"),\n \"model\":[\"nnn\",\"mmm\"],\n childrens: [\n {\n _id:ObjectId(\"fb47003b-8b9f-4c1c-bc6c-cf51c0dbb8bc\"),\n \"model\":[\"nnn\",\"mmm\"],\n childrens:[]\n }\n ]\n }\n ]\n },\n {\n _id:ObjectId(\"8d06e6af-5f68-4559-9712-706afed5c0db\"),\n \"model\":[\"nnn\",\"mmm\"],\n childrens:[]\n }]\n }\ndb.collection.aggregate([\n {\n \"$match\": {\n \"_id\": \"3df61406-a65c-480d-b258-85786caa871e\"\n }\n },\n {\n $unwind: \"$childrens\"\n },\n {\n $graphLookup: {\n from: \"collection\",\n startWith: \"$childrens\",\n connectFromField: \"childrens\",\n connectToField: \"_id\",\n as: \"childrensList\"\n }\n }\n])\n",
"text": "I’am having document like following.I need to fetch data from root (ObjectId(“3df61406-a65c-480d-b258-85786caa871e”)) to last child. I need a query to get the following output.I have tried the following query, but that was also not giving exact output.",
"username": "Tamilselvan_95764"
},
{
"code": "",
"text": "Hi @Tamilselvan_95764 Welcome to MongoDB Community Forum,See a topic this will help you,",
"username": "turivishal"
},
{
"code": "",
"text": "Hai @turivishal i have checked that topic, but document structure was different, so that query not suits for me.\nCan you please give your solutions here Mongo playground ? Thanks in advance.",
"username": "Tamilselvan_95764"
},
{
"code": "db.collection.aggregate([\n {\n $match: {\n _id: \"3df61406-a65c-480d-b258-85786caa871e\"\n }\n },\n {\n $graphLookup: {\n from: \"collection\",\n startWith: \"$childrens\",\n connectFromField: \"childrens\",\n connectToField: \"_id\",\n depthField: \"level\",\n as: \"children\"\n }\n },\n {\n $unwind: {\n path: \"$children\",\n preserveNullAndEmptyArrays: true\n }\n },\n {\n $sort: {\n \"children.level\": -1\n }\n },\n {\n $group: {\n _id: \"$childrens\",\n parent_id: {\n $first: \"$_id\"\n },\n model: {\n $first: \"$model\"\n },\n children: {\n $push: \"$children\"\n }\n }\n },\n {\n $addFields: {\n children: {\n $reduce: {\n input: \"$children\",\n initialValue: {\n level: -1,\n presentChild: [],\n prevChild: []\n },\n in: {\n $let: {\n vars: {\n prev: {\n $cond: [\n {\n $eq: [\n \"$$value.level\",\n \"$$this.level\"\n ]\n },\n \"$$value.prevChild\",\n \"$$value.presentChild\"\n ]\n },\n current: {\n $cond: [\n {\n $eq: [\n \"$$value.level\",\n \"$$this.level\"\n ]\n },\n \"$$value.presentChild\",\n []\n ]\n }\n },\n in: {\n level: \"$$this.level\",\n prevChild: \"$$prev\",\n presentChild: {\n $concatArrays: [\n \"$$current\",\n [\n {\n $mergeObjects: [\n \"$$this\",\n {\n children: {\n $filter: {\n input: \"$$prev\",\n as: \"e\",\n cond: {\n $in: [\n \"$$e._id\",\n \"$$this.childrens\"\n ]\n }\n }\n }\n }\n ]\n }\n ]\n ]\n }\n }\n }\n }\n }\n }\n }\n },\n {\n $addFields: {\n children: \"$children.presentChild\"\n }\n }\n])\n",
"text": "Please see your solution Playground.",
"username": "turivishal"
}
] | Nested self join in mongodb | 2021-04-21T15:35:34.377Z | Nested self join in mongodb | 4,809 |
[
"mongoose-odm"
] | [
{
"code": " Nom:{type:String, required :true},\n Prenom:{type:String, required :true},\n Birthday: Date,\n Gender:{type:String,\n enum:\"homme\"||\"Femme\" \n},\n Email: {type:String, required :true},\n States:{\n type:String},\n /* enum:\"ariana\",\"beja\" \"benarous\",\"bizerte\",\"gabes\",\"gafsa\",\"jendouba\",\"kairouan\",\"kasserine\",\"kebili\",\"kef\",\"mahdia\",\"manouba\",\"mednine\",\"monastir\",\"nabeul\",\"sfax\",\"sidi bouzid\",\"silliana\",\"sousse\",\"tataouine\",\"tozeur\",\"tataouine\"\"zaghouan\"*/\n Hobbies:{type:String}, this is the multiple select\n Password:{type:String,required:true},\n confirmPassword:{type:String,required:true},\n Nfollowers:{type:Number,default:0},\n Nfollowing:{type:Number,default:0},\n\n}) \n\nexport default mongoose.model('User',userSchema)\n",
"text": "in fact it cause a lot of problems i m building an authentification app and when i submit this message from the server shows upaaa926×472 32.5 KB",
"username": "Nour_ha"
},
{
"code": "Gender: { type: String,\n enum: \"homme\"||\"Femme\" \n}\nGenderGender: \"homme\"GenderGender: { type: String,\n enum: [ \"Homme\", \"Femme\" ]\n}\n",
"text": "Hello @Nour_ha, welcome to the MongoDB Community forum!Since,you can only assign a string value to the Gender, and this value can be one of “homme” or “Femme”. For example,Gender: \"homme\"From the documentation (Mongoose - Built-in Validators) it shows that the Gender can be defined as:",
"username": "Prasad_Saya"
},
{
"code": "Gender",
"text": "i tried at first but when i try to submit the form it shows me \"error :User failed :Fender:`` is not a valid enum value for path Gender also i have to other select menus where when i send data they don t arrive to server when i leave it just a string the error diseppear but no data sent to server same as the other select menu gouvernorat (which means states the user choose 1from 24 options) and the multiple select Cinteret or hobbies where the user can select many options",
"username": "Nour_ha"
},
{
"code": "",
"text": "Hello @Nour_ha, Generally the data from the form is received as strings and these are mapped to the data in the database thru the application. As you know, the application connects to the database via a driver (possibly with mapping using an ODM).",
"username": "Prasad_Saya"
}
] | Select field and multi select and gender and birthday datatypes in mongodb | 2021-04-20T01:39:44.395Z | Select field and multi select and gender and birthday datatypes in mongodb | 7,348 |
|
null | [
"indexes"
] | [
{
"code": "{ \n \"_id\" : ObjectId(\"6080a5c299aecc30a333dfc7\"), \n \"name\" : \"Shakir\", \n \"location\" : \"Ottawa\", \n \"region\" : \"AMER\", \n \"joined\" : 2015\n}\n{ \n \"_id\" : ObjectId(\"6080a5c299aecc30a333dfc8\"), \n \"name\" : \"Chris\", \n \"location\" : \"Austin\", \n \"region\" : \"AMER\", \n \"joined\" : 2016\n}\n{ \n \"_id\" : ObjectId(\"6080a5c299aecc30a333dfc9\"), \n \"name\" : \"III\", \n \"location\" : \"Sydney\", \n \"region\" : \"APAC\", \n \"joined\" : 2016\n}\n{ \n \"_id\" : ObjectId(\"6080a5c299aecc30a333dfca\"), \n \"name\" : \"Miguel\", \n \"location\" : \"Barcelona\", \n \"region\" : \"EMEA\", \n \"joined\" : 2017\n}\n{ \n \"_id\" : ObjectId(\"6080a5c299aecc30a333dfcb\"), \n \"name\" : \"Alex\", \n \"location\" : \"Toronto\", \n \"region\" : \"AMER\", \n \"joined\" : 2018\n}\ndb.getCollection(\"region\").createIndex({ \"region\" : 1, \"joined\" : 1})db.getCollection(\"region\").find({ joined: { $gt: 2015 } }).sort({ region: 1, })\"winningPlan\" : { \n \"stage\" : \"FETCH\", \n \"filter\" : { \n \"joined\" : { \n \"$gt\" : 2015.0\n }\n }, \n \"inputStage\" : { \n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : { \n \"region\" : 1.0, \n \"joined\" : 1.0\n }, \n \"indexName\" : \"region_1_joined_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : { \n \"region\" : [\n\n ], \n \"joined\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 2, \n \"direction\" : \"forward\", \n \"indexBounds\" : { \n \"region\" : [\n \"[MinKey, MaxKey]\"\n ], \n \"joined\" : [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\n",
"text": "hi,\ni want to create a compound index for sort and range filter,\ni dont know why its doing a seperate filter and not using the compound index for the range filter.here is my data:and i created this index:\ndb.getCollection(\"region\").createIndex({ \"region\" : 1, \"joined\" : 1})this is my query:\ndb.getCollection(\"region\").find({ joined: { $gt: 2015 } }).sort({ region: 1, })here is my winning plan:thanks",
"username": "Landau_Yoel"
},
{
"code": "SORTFETCH+filter",
"text": "Hi @Landau_Yoel, and welcome to the community,\nI quote from Sort and Non-prefix Subset of an IndexIf the query does not specify an equality condition on an index prefix that precedes or overlaps with the sort specification, the operation will not efficiently use the index.which means, regardless of your index, mongodb will do a SORT or a FETCH+filter.",
"username": "Imad_Bouteraa"
}
] | Compound index for sort and range | 2021-04-21T23:46:17.391Z | Compound index for sort and range | 1,900 |
null | [
"data-modeling",
"atlas-device-sync"
] | [
{
"code": " name: ‘Users',\nprimaryKey: '_id',\nproperties: {\n\t_id: \"objectId\",\n\t_partition: \"string\",\n\tFirstName: 'string?',\n LastName: 'string?',\n\tMobileNumber: 'int',\n}\n ",
"text": "Hi,I am developing a mobile app using React Native and MongoDB Realm. For backend sync using MongoDB atlas.When schema is changed at client side and synced to atlas server, getting the following error:“The following changes cannot be made in additive-only schema mode:\nProperty ‘Users.FirstName’ has been made optional.”Schema is as follows:\n \nexport const Users = {};\n In a production application, when we made some changes in the schema like “FirstName” field is required previously & later, we make it optional, and we have 10000 of users using the app, then how can we handle this situation.Regards",
"username": "Vishnu_Rana"
},
{
"code": "",
"text": "Hi Vishnu,Changing an existing property to optional is an example of a destructive change and you would usually have to manually update your server side schema to match the destructive change in your client.The following are all considered destructive changes:What does your schema look like in the Realm UI ?\nPlease include the required fields as well. If you have FirstName under the “required” section, you would have to remove it from there to match your client schema as being Optional.Regards\nManny",
"username": "Mansoor_Omar"
}
] | Changes cannot be made in additive-only schema mode | 2021-04-21T07:24:17.345Z | Changes cannot be made in additive-only schema mode | 5,845 |
null | [] | [
{
"code": "",
"text": "Hello. I am currently using MongoDB to store my information that is typed into a page from my website. From here the user inputs info and when they click the button it should send the info to another page to show it outputted to the screen but my team has only got this working on local hosting and our aim is to have it working for use 24/7 globally without running on just localhost. We are hosting our site using AWS amplify and we have tried to use AWS elastic beanstalk and other methods to run globally for anyone to use but keep hitting a brick wall. We are unsure but have done some research on maybe changing the cluster from a free shared tier to a dedicated cluster but would like feedback to see if this will for sure work before committing. Any help or advice would be very much appreciated at this time. Thanks",
"username": "Jack_Haugh"
},
{
"code": "",
"text": "Hi @Jack_Haugh,We are hosting our site using AWS amplify and we have tried to use AWS elastic beanstalk and other methods to run globally for anyone to use but keep hitting a brick wall.Would you be able to elaborate further on the “brick wall” that you’re keep hitting ?Depending on your website use case, there are multiple ways to access your data from MongoDB Atlas. You could try either the following methods:Alternatively, depending on your use case, instead of hosting the website on AWS you could also create React SPA and Statically Host in MongoDB Realm.We are unsure but have done some research on maybe changing the cluster from a free shared tier to a dedicated cluster but would like feedback to see if this will for sure work before committing.Any of the methods mentioned above should work from a free shared tier Atlas cluster.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Hello there @wan . The brick wall I’m referring to is us not reaching our objective of a global use database that will work for our site without the need of the local host and work on a server. (server class: link temporarily removed) This is our server class right now we used for calling the mongo server and we have changed stuff when we attempted going at using amazon web services but to no avail. I will look into the points you mentioned above and see if it could help us out atall. Appreciate your comment back and thanks again.\nRegards\nJack",
"username": "Jack_Haugh"
}
] | Output contents from MongoDB to website for use online | 2021-04-20T13:11:27.840Z | Output contents from MongoDB to website for use online | 1,910 |
[
"dot-net"
] | [
{
"code": "public class ProductMapping\n {\n [BsonId]\n public ObjectId Id { set; get; }\n [BsonElement(\"name\")]\n public string Name { set; get; }\n [BsonElement(\"price\")]\n public decimal Price { set; get; }\n [BsonElement(\"description\")]\n public string Description { set; get; }\n [BsonElement(\"available\")]\n public bool Available { set; get; }\n }\ndatabase.GetCollection<ProductMapping>(\"products\").Find(x => x.Available && x.Price > 900);",
"text": "I’m working with c# driver and have problem with simple linq expression.\nMy mapping class:and my linq query:database.GetCollection<ProductMapping>(\"products\").Find(x => x.Available && x.Price > 900);Don’t know why, but the value 900 is converted to string, hence query doesn’t return any result.\nWhat’s the solution to this that I obviously don’t see?",
"username": "lkurylo"
},
{
"code": "database.GetCollection<ProductMapping>(\"products\").Find(x => x.Available && x.Price > 900);900Decimal128Price[BsonRepresentation(BsonType.Decimal128)]Decimal128new Decimal128(900)DecimalNumber{ \"_id\" : ObjectId(\"608050e3e4251729e6fb4a9e\"), \"fld\" : NumberDecimal(\"12.456\") }[BsonRepresentation(BsonType.Decimal128)]\npublic decimal fld { get; set; }\nfldvar filter = Builders<MyClass>.Filter.Eq(\"fld\", new Decimal(12.456));\nvar list = collection.Find(filter).ToList<MyClass>();\nlist.ForEach(e => Console.WriteLine(e.ToJson()));\n{ \"_id\" : ObjectId(\"608050e3e4251729e6fb4a9e\"), \"fld\" : NumberDecimal(\"12.456\") }",
"text": "my linq query:database.GetCollection<ProductMapping>(\"products\").Find(x => x.Available && x.Price > 900);MongoDB interprets any number (say 900), by default, as a double. I think you will need to specify explicitly that the data type is of type Decimal128.Annotate the Price field [BsonRepresentation(BsonType.Decimal128)], to specify that the data is stored in the MongoDB database as type Decimal128.When querying you can use the MongoDB.Bson.Decimal128 type to specify the filter’s value, by using one of he constructors.; for example, new Decimal128(900).I tried some code. I have a document in my collection as follows. It has a DecimalNumber field:{ \"_id\" : ObjectId(\"608050e3e4251729e6fb4a9e\"), \"fld\" : NumberDecimal(\"12.456\") }My class has the field defined as follows:My query using the fld:The query successfully retrieved the document:{ \"_id\" : ObjectId(\"608050e3e4251729e6fb4a9e\"), \"fld\" : NumberDecimal(\"12.456\") }",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thank you for explanation. BsonRepresentation attribute was enought to get it to work.\nI see I need to get more into mongo data types to fully understand what you provided.",
"username": "lkurylo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | [c#] linq query generates as string instead of decimal/int value | 2021-04-21T14:43:03.987Z | [c#] linq query generates as string instead of decimal/int value | 6,196 |
|
null | [
"security"
] | [
{
"code": "",
"text": "Hello There,Can anyone answer any or all of the following concerns:I have tried looking through MongoDB docs but didn’t find a suitable answer to it, I can find how to set it up but not why would one want to set it up?Note: The only reason I found is that use your Key Management when you need to have control over the keys used to encrypt your data.Please answer any thoughts you have on this. It will be highly appreciated.",
"username": "Anurag_59083"
},
{
"code": "Project Owners",
"text": "Hi @Anurag_59083,Have you had a look at the Encryption at Rest using Customer Key Management documentation?To answer your first question, since this is an additional layer of encryption, it won’t override the default encryption at rest for the cluster’s storage and snapshot volumes. Encryption at rest using the Customer Key Management is optional and will enable database-level encryption for sensitive workloads via the WiredTiger Encrypted StorageEngine. This option allows customers to use their own AWS KMS, Azure Key Vault, or Google Cloud KMS keys to control the keys used for encryption at rest.There is a security white paper available here which describes this further.To answer your second question, you may wish to refer to this statement from the docs, most notably that it is an additional layer of encryption:Atlas Project Owners can configure an additional layer of encryption on their data using their Atlas-compatible customer key management provider with the MongoDB encrypted storage engine.As to “why would anyone do this?”, the answer may depend on your security policy. Atlas is secure by default (in transport and at rest), but individual security policies may vary. This option is available to cater for individuals or organizations requiring this additional protection by having your own keys in addition to what Atlas has provided by default.Also, as noted on the Encryption at Rest using Customer Key Management documentation, configuring Encryption at Rest using your Key Management incurs additional charges for the Atlas project.Hope this helps.Kind Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "That is one great explanation. Thanks a lot for your response.",
"username": "Anurag_59083"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Encryption at Rest using your Key Management | 2021-04-20T17:54:01.442Z | Encryption at Rest using your Key Management | 2,831 |
null | [
"atlas-device-sync",
"flutter"
] | [
{
"code": "",
"text": "Hi,I just tried the new flutter package which was recently uploaded GitHub - realm/realm-dart: Realm is a mobile database: a replacement for SQLite & ORMs., it works but only on iOS and Android. Will it be web compatible later?\nThis is to know if I am waiting for the web version or if I will have to develop an API in Node JS.Thanks for the information ",
"username": "Arnaud_Combes"
},
{
"code": "",
"text": "@Arnaud_Combes Right now we are completely focused on getting our Flutter SDK to GA for mobile only platforms right now. Over the long-term we may look to explore Web compatibility but that is not any time soon.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks for your feedback, I will then make an API in Node JS.",
"username": "Arnaud_Combes"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Dart/Flutter official package | 2021-04-20T12:25:20.399Z | Realm Dart/Flutter official package | 5,191 |
null | [
"swift",
"atlas-device-sync"
] | [
{
"code": "",
"text": "Hi all,I’m developing an iOS app using Realm Sync.\nIs there a way to allow a user to be connected to only one device at a time? Meaning that if he has an active session on one device and wants to connect to another device, it will be automatically disconnected from the first one.Thanks for your help!",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "Wrote something up here that answers a very similar question - Restricting user to login from multiple devices",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Thanks! I managed to do something similar with push notifications ",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "Please share your approach.",
"username": "Sudarshan_Roy"
},
{
"code": "last_device_idlast_device_idlast_device_id",
"text": "For each user I store the id of the last device used to connect (let’s call it last_device_id)\nWhen I log in from a device I check if the current device id is the same as last_device_id.\nIf no, I send a push notification to last_device_id. When the phone receives this notification it logs out from Realm.",
"username": "Julien_Chouvet"
}
] | How do I log a user out from previous devices when they log in on a new one? Realm Apps | 2021-04-01T17:45:42.911Z | How do I log a user out from previous devices when they log in on a new one? Realm Apps | 3,392 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "hello,I wanna get the user information as input in android and upload the information to MongoDB Atlas how it will be possible please send me some tutorials or docs. my question is how to store images offline in the device and then sync later to Atlas when a connection happens with the server.",
"username": "kunal_gharate"
},
{
"code": "",
"text": "Realm is not a good solution for storing blob data. Images can be very large files so leveraging a solution that can handle that type of data is recommended.We’ve be using Firebase Storage and it works very well for that purpose.If the images are small, like thumbnails, Realm can be used for those.",
"username": "Jay"
},
{
"code": "",
"text": "Its single user image around 100kb - 2mb . We cant upload image when user is offline .",
"username": "kunal_gharate"
},
{
"code": "",
"text": "We cant upload image when user is offline .Well, that would be true. If you are offline it would be stored locally and not uploaded. Once the connection is reestablished, it will sync automatically. Do you have some code you’ve tried that you need help with?Did you check out the getting started guide as there’s an abundance of information about storing data in Realm. There’s also a tutorial for Getting Started With Sync you may want to take a look at after your comfortable with the basics.",
"username": "Jay"
},
{
"code": "",
"text": "I am looking for sample for image upload if you have any GitHub repo please share with me",
"username": "kunal_gharate"
},
{
"code": "",
"text": "If your image sizes are 2mb, Realm (as mentioned above) is not a good solution for image storage. If they are 100k then that would be ok but anything much larger is not recommended.",
"username": "Jay"
},
{
"code": "",
"text": "Yes i know its not solution to store image in realm but where we can store image when user is offline",
"username": "kunal_gharate"
},
{
"code": "",
"text": "Store them in files on the drive/internal storage and keep a reference to them in Realm?",
"username": "Jay"
},
{
"code": "",
"text": "How i can get on web side if i store it in mobile",
"username": "kunal_gharate"
},
{
"code": "",
"text": "For the WildAid O-FISH apps, we had this exact problem. User’s need to attach photos to boarding reports while they’re out at sea without internet connectivity.The approach we took was to store the photo (and a thumbnail) in a Realm Object. When the device is back online, that Object gets synced to Atlas. The insert (sync) of the Photo Object into Atlas fires a database trigger, which uploads the image to S3, and replaces the image with the S3 URL. The updated Photo object is then synced back to the mobile apps - freeing up storage.The approach is described in “Realm Data and Partitioning Strategy Behind the WildAid O-FISH Mobile Apps”",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "I am surprised that the images are stored in a Realm object/Atlas.As mentioned in the article, storing large blob data (images) in Realm is generally discouraged for a number of reasons.How the photos are handled is excellently covered but it’s not clear what happens if the image is > 16Mb?Even if it’s quickly offloaded it still seems like it would be a real issue.There is a limit to the size of a single BSON document of 16Mb, and images can easily go beyond that size limit which will lead to intermittent operation.How do you deal with that issue?And as a followup, as cameras get better, the image file size grows so while it may be ok on an iPhone X, when the iPhone 16x++ comes out next year with file sizes of 20Mb, what happens to the app?",
"username": "Jay"
},
{
"code": "",
"text": "We compress the images on the device to ensure that they’re not too large",
"username": "Andrew_Morgan"
},
{
"code": "exports = function(fileName){\n const partition = context.user.id;\n const s3Service = context.services.get(\"appwise-aws\").s3('us-east-1');\n \n // First check if the objet exists\n return s3Service.HeadObject({\n \"Bucket\": \"7apps-scanned-dev\",\n \"Key\": partition + \"/\" + fileName,\n }).then(() => {\n // Data shows etag, last modified, mime type etc, but we don't need those\n return s3Service.PresignURL({\n \"Bucket\": \"7apps-scanned-dev\",\n \"Key\": partition + \"/\" + fileName,\n \"Method\": \"GET\",\n \"ExpirationMS\": 120000,\n })\n });\n // If error happens than returned promise will catch there somewhere\n};\n",
"text": "Hi @Andrew_MorganI implemented a similar approach but only didn’t store a public url but only tag/id of s3 upload, so I could sign a temporary url and also check access permissions for that resource on demand, might be helpful for anyone having the same use case as mine:I was hoping to reduce the size in database, because I was deleting that blob and inserting only the tag/id of s3 resource. However the sync history still keeps blob changes, so I ended up with huge history collection in __realm_sync database.If this approach is in production, can you share some insights about how big would be the history collection?Thanks",
"username": "ilker_cam"
},
{
"code": "",
"text": "I got solutionIf you have single image you can easily store it realm as object and later you can upload it on s3 or any cloud (User compression and cropping to reduce the size of image )store image in internal storage within app folder where no one can edit or delete it store path in realm . Write a trigger where you can check if the internet is available or not if it available call the api and upload the images on the server . if images available in device you can display it",
"username": "kunal_gharate"
}
] | Upload image to MongoDB Atlas using Realm (offline first method) | 2021-01-13T17:01:42.800Z | Upload image to MongoDB Atlas using Realm (offline first method) | 8,629 |
null | [
"atlas-device-sync",
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Hi everybody,In my app, I want to implement sync as Premium feature.My question is, if a user stop paid, I don’t want to allow them use the sync feature, what should I do? Manual sync from Realm Cloud DB to local DB or there’s a better solution?Tks in advance",
"username": "Nguyen_Dinh_Tam"
},
{
"code": "",
"text": "From what I have understood there is no way to pause the syncing, so my understanding is that you would have to use a different realm file and then copy the data over.With MongoDb Realm it seems like it is a lot easier to keep the data models the same in the local realm and a synced realm, but I don’t think this is an easy use case.I plan to do something similiar though. In my app, each user can do well over 1000 edits during a session. If I would commit each of those write transactions to a synced realm it would be expensive. What I was recommended to do in this case was to do my edits locally (on a local realm) and then copy them over to the synced realm when done, which would drastically reduce the sync operations used.Not sure if this helps, but I’d also think thought the database structure from the start to make it easy to move between realms. In my case, it means avoiding relations and top level objects when necessary and favor embedding documents. If one document is a separate unit with few dependencies it would be easier to move the data from local to synced and back.Not sure if this helps, but this is my 2 cents.",
"username": "Simon_Persson"
},
{
"code": "realm.syncSession.pause()",
"text": "Which SDK are you using? Most have an API to pause/resume sync exposed on the Session object. For example, in JavaScript it looks like realm.syncSession.pause().",
"username": "nirinchev"
},
{
"code": "",
"text": "When you pause sync and then resumes. If you make multiple commits to the local database while paused and then resumes. Does that count as one realm sync operation or do you still count every commit?",
"username": "Simon_Persson"
},
{
"code": "",
"text": "I’m thinking about the same, but there’s a problem :\nAn user uses 2 devices : A and B. When he make changes on device A, the local DB on device B is not updated, then when he stop using “premium”, the local DB on device B is not the newest one, he lost data.",
"username": "Nguyen_Dinh_Tam"
},
{
"code": "",
"text": "@Simon_Persson We still count every commit.@Nguyen_Dinh_Tam A user would presumably login with the same account and therefore have the same data on device A and B. If he chose to stop paying for premium service then you could make sure that all changes were uploaded before stopping sync.",
"username": "Ian_Ward"
},
{
"code": "realm.syncSession.pause()",
"text": "Hello, I am using the Swift version for iOS and I cannot find realm.syncSession.pause().\nIt may be private in the iOS SDK.Any thoughts?The problem I am trying to solve is to pause sync to prevent saving a deleted object on an edit screen.Since the object might be deleted by another Application so it will result with a\n’RLMException’, reason: 'Object has been deleted or invalidated.'",
"username": "Georges_Jamous"
},
{
"code": "",
"text": "Sync happened based on partition key where can store partition key in user object and change it when the user plan is expired what data he have it will not get sync when he renew the plan just wrote a thread for update partition key to premium_partition key",
"username": "kunal_gharate"
}
] | Best practice for stop sync | 2020-06-06T05:15:10.822Z | Best practice for stop sync | 5,446 |
null | [
"golang"
] | [
{
"code": "",
"text": "Hello,I have recently updated the Mongo Go driver from v1.3.5 to v1.4.6 and I have noticed that the vendor directory pulled some 35.5k lines of code. After further inspection, I noticed that most of the updated code has nothing to do with Mongo driver, but its dependency on AWS SDK (mostly Credentials and Signer). Please note that I’m not using AWS to run my code. This giant dependency is caused by a rather simple need for some AWS auth utilities and I assume it can be avoided if you replace direct dependency with an interface wrapper and provide different implementations in different packages. That way, I would pull only what I need and have a cleaner update with no need to add dependencies I don’t intend to use. The other option is to simply reimplement a small chunk of the actual SDK MongoDB Driver needs to avoid any dependencies whatsoever because what you need really does not justify having the entire AWS SDK in vendors.Thanks.",
"username": "dusanb"
},
{
"code": "go.mod",
"text": "Hi @dusanb,I’m not sure I fully understand the proposal. AFAIK, Go projects must declare all of their dependencies in the go.mod file and the language will install and build all of the dependencies when compiling the project. In the driver, we do not know at compile time if the application is using AWS authentication, so we have to declare the dependency and unfortunately all of that code gets pulled in even if the authentication mechanism is not used. Can you elaborate on your proposal to show how it would address this issue?– Divjot",
"username": "Divjot_Arora"
},
{
"code": "github.com/aws/aws-sdk-go/aws/credentialsgithub.com/aws/aws-sdk-go/aws/signer/v4Authenticatorgo.mongodb.org/mongo-driver/mongo",
"text": "Hi, @Divjot_Arora,Let me try to elaborate my idea:\nFirst, what made me open this topic: over 35k lines of code for SKD is a lot. That implies that there are probably some issues with the SDK codebase, but also that it should be avoided as a dependency as much as possible. I’m not familiar with all the details of the MongoDB Driver codebase, but I noticed that only an authorization subset of SDK is used by the driver (these two packages, to be more precise: github.com/aws/aws-sdk-go/aws/credentials and github.com/aws/aws-sdk-go/aws/signer/v4).My initial proposal used to be to replace structures from these packages with interfaces defined on the root level and pass them around as interfaces. Provide different implementations of those interfaces in a separate package and use only the implementation you need. It’s similar to using interfaces to enable creating mock objects.However, I dug into the code yesterday a bit more and noticed that driver is using an interface abstraction called d Authenticator and AWS authenticator is used in init method in auth package to register authenticator. That can simply be avoided by forcing the user to pass the entire Authenticator instead of passing only the name. That way, the root package (go.mongodb.org/mongo-driver/mongo) won’t depend on AWS and AWS SDK will be added to dependencies only if AWS Authenticator is used.Regards,\nDusan",
"username": "dusanb"
},
{
"code": "",
"text": "Hello,Did anyone check on this idea? If you think it’s viable, I’m willing to send a PR, but I first need your confirmation that the proposed changes are acceptable.Regards.",
"username": "dusanb"
},
{
"code": "",
"text": "Hi @dusanb,Thanks for the reminder. I looked into this a bit earlier but didn’t come up with anything conclusive. I’ll investigate more on Monday and respond here.– Divjot",
"username": "Divjot_Arora"
},
{
"code": "AuthenticatorClientOptions",
"text": "Hi @dusanb,I looked over your proposal. The actual logic in the AWS authenticator is complex and it would not be reasonable to push the responsibility of maintaining that code onto the user. We could potentially put it in a separate package and add an Authenticator option to the ClientOptions type in the future, but removing the builtin support would be considered a backwards-breaking behavioral change, which we can’t do without a major version bump of the driver (e.g. v1.0 → v2.0). If you think this is worth considering, please file a ticket in our Jira Project and we will consider it at the time of our next major version bump.– Divjot",
"username": "Divjot_Arora"
},
{
"code": "AuthenticatorClientOptions",
"text": "Hi @Divjot_Arora,Thanks for the response. I agree that the idea of not using AWS SDK is not a way to go (the purpose of SDK is exactly to avoid that). By the way, looks like that dependency is also out of date on the MongoDB driver.However, I think that the idea of adding Authenticator to ClientOptions would be a good choice. Unfortunately, there is probably no way to avoid breaking backward compatibility. Despite that, it’s more than worth it. I will open a Jira ticket. If I can help you with development, please let me know.Regards,\nDušan",
"username": "dusanb"
}
] | Mongo Go driver AWS SDK dependency | 2021-04-07T08:21:22.406Z | Mongo Go driver AWS SDK dependency | 3,298 |
null | [
"database-tools"
] | [
{
"code": "mongodumpmongorestoremongodump -d test-db -o mock-datamongorestore -d test-db mock-data/test-db",
"text": "Hi!I am trying to do a mongodump to a json that another dev can use to mongorestore the database. However, when the second dev tries to use this data, it populates the database, but when it comes time to use the data, it doesn’t work (it’s a user-facing application, and the user cannot log in). When I (first dev)\nrestore from this dump, it works fine. Is there possibly some metadata stored in the bson file that is preventing us from sharing this dump across machines?These are the commands we are using to dump and restore\nmongodump -d test-db -o mock-data\nmongorestore -d test-db mock-data/test-dbDoes anyone know what may be happening here and/or how to mitigate? I’ve also tried using mongoexport/import with no success.",
"username": "Sofia_Paganin"
},
{
"code": "mongodumpmongorestoremongorestoredb.version()",
"text": "Welcome to the MongoDB Community @Sofia_Paganin!The mongodump and mongorestore tools should create identical versions of documents, so there is likely some difference between your two environments (for example tool versions, server versions, timezone versions, or restoring into different databases/collections).When I (first dev)\nrestore from this dump, it works fine.This actually suggests the problem might be something happening after the mongorestore. One possibility is that a person or process is removing the data you are relying on for user login.To understand more about your scenario can you please:Confirm the Security Measures you have implemented for the deployment you are restoring into.Confirm the db.version() reported for your source deployment and target deployment.Include more information on the query and error that prevents users from logging in. For example, are they unable to login because data is missing, authentication fails, or some other issue?Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "dump/-dmongorestoremongodump -d test-dbmongorestore dump/",
"text": "Hi Stennie,Thanks so much for the reply! I was working with a teammate and we simplified the commands a bit to use the default dump/ folder since we saw that the -d flag was deprecated our use of mongorestore as shown below.\nScreen Shot 2021-04-19 at 9.55.31 PM1888×176 24.6 KBNow we have the following dump and restore commands:\nmongodump -d test-db\nmongorestore dump/It turns out the error was in how we were setting up our test database, a difference in environment variables! Thank you for the fast response anyways, it’s much appreciated ",
"username": "Sofia_Paganin"
},
{
"code": "",
"text": "Hi @Sofia_Paganin,I appreciate you taking the time to share how you solved the problem!Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo dump and restore across machines | 2021-04-12T22:20:06.458Z | Mongo dump and restore across machines | 4,296 |
null | [
"performance",
"change-streams"
] | [
{
"code": "",
"text": "Is ChangeStream based solely on primary’s oplogs or also secondary’s oplogs in the replicaset? Can reading of ChangeStream through reading secondary’s oplog and scale out secondary numbers in the replicaset? I can’t find any documentation on this specific point. If it is solely based on primary’s oplogs, then the scalability story is through sharding the cluster only? Thanks",
"username": "Hao_Zhang"
},
{
"code": "",
"text": "Hi @Hao_Zhang,Welcome to MongoDB community.As far as I know change stream is based on an aggregation stage eventually. Aggregation respect a readPreference of your connection and therefore you can read from secondary and use change stream if you understand all secondary reads implications.There is a caveat that only majority commited data is visible on change streams.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
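A rough Node.js sketch of what Pavel describes; the connection string, database and collection names here are invented for illustration and are not from this thread:

    const { MongoClient } = require("mongodb");
    // the readPreference in the URI also applies to the aggregation that backs the change stream
    const client = new MongoClient("mongodb://host1,host2,host3/?replicaSet=rs0&readPreference=secondaryPreferred");
    async function watchOrders() {
      await client.connect();
      const changeStream = client.db("shop").collection("orders").watch();
      changeStream.on("change", change => console.log(change.operationType, change.documentKey));
    }
    watchOrders();

As noted above, only majority-committed changes are emitted by the stream.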
] | ChangeStream scalability | 2021-04-19T21:52:40.449Z | ChangeStream scalability | 2,275 |
null | [
"node-js"
] | [
{
"code": "datas[bey._id] = {\n\n latest: bey.latest,\n\n name: bey._id\n\n}\nthis.name = name;\n\nthis.type = type;\n\nthis.image = image;\n\nthis.firstOwner = firstOwner;\n\nthis.level = 1;\n\nthis.xp = 0;\n\nthis.specials = [];\n\nthis.passives = [];\n\nthis.aliases = [];\n\nthis.gen = 1;\n\nbname = name || this.name;\n\nif(id) this.id = id;\n\nelse {\n\nif(this.name !== \"Buddy Bey\"){\n\n if(datas[this.name]){\n\n this.id = datas[this.name].latest || 1;\n\n datas[this.name].latest = (datas[this.name].latest || 1) + 1;\n\n }else{\n\n mongo.db(\"main\").collection(\"ids\").insertOne({_id: this.name, latest: 2});\n\n datas[this.name] = {latest: 2};\n\n this.id = 1;\n\n }\n",
"text": "(Before starting, I should note I am very new to MongoDB and using the drivers.)Hello! I am receiving this error when trying to run my discord bot - or application, if you will.MongoError: MongoClient must be connected before calling MongoClient.prototype.dbI’m not quite sure why this is. Originally, I thought it was simply not connecting fast enough before MongoClient was called, so I set it up on a VPS using Redhat with AWS. However, the error still persists, so I assume it has something to do with my code.Here is the entirety of my code where the error lies:require(“dotenv”).config({path: “path/to/.env”});const uri = process.env.MONGOURL;const MongoClient = require(“mongodb”).MongoClient;const mongo = new MongoClient(uri, {useNewUrlParser: true,useUnifiedTopology: true});let bname = “Beyblade”mongo.connect(err => {console.log(“MongoDB connected for Beyblade.js”);});const ids = mongo.db(“main”).collection(“ids”)const id = ids.find({});const datas = {};Promise.all([id]).then(data => {let beys = data[0];beys.forEach(bey => {});console.log(“Updated data!”);});setInterval(() => {mongo.db(“main”).collection(“ids”).updateOne({_id: bname}, {$set: {latest: datas[bname].latest}});}, 600000);class Beyblade {constructor(name, type, image, firstOwner, id){ids.updateOne({_id: this.name}, {$set: {latest: datas[this.name].latest}});}}}async init(){return true;}}module.exports = Beyblade;I hope you guys can help me out! Thank you in advanced!",
"username": "Cringe_Burst"
},
{
"code": "const MongoClient = require(\"mongodb\").MongoClient;\nconst mongo = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });\nmongo.connect(err => {\n console.log(\"MongoDB connected for Beyblade.js\");\n\n});\nconst ids = mongo.db(\"main\").collection(\"ids\")\nconst id = ids.find({});\n// ...\nmongo.db(\"main\")mongo.connect(...)idsfindmongo.connect(err => {\n console.log(\"MongoDB connected for Beyblade.js\")\n const ids = mongo.db(\"main\").collection(\"ids\")\n ids.find({}).toArray(function(err, result) {\n // the variable result is an array of the documents, do something with them\n console.log(result) // this will print the documents from the collection\n })\n});\n",
"text": "Hello @Cringe_Burst, welcome to the MongoDB Community forum!From your code:The error is suggesting that the program is invoking the mongo.db(\"main\") before the connecting to the server at mongo.connect(...). This is because of the way you have structured the code - the JavaScript coding. The program is trying something before completing something before that.See the NodeJS Driver’s QuickStart and try the following sections, first:And, finally see the topic on Promises and Callbacks; knowing how to use these will solve your current problem.For example, changing your code to this, will connect to the database, get an instance of the collection ids, runs the find method on the collection and prints the collection documents to the console.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "OK great, I changed it to this but now am receiving this error:\n@Prasad_SayaReferenceError: mongo is not definedDo you know why this might be the case?",
"username": "Cringe_Burst"
},
{
"code": "const MongoClient = require('mongodb').MongoClient;\nconst uri = 'mongodb://localhost:27017' // you substitute your uri value here\nconst mongo = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });\nmongo.connect(err => {\n console.log(\"Connected to MongoDB server...\");\n\tconst ids = mongo.db(\"testdb\").collection(\"booksCollection\") // substitute your database and collection names\n\tids.find({}).toArray(function(err, result) {\n console.log(\"find query executed...\") \n console.log(result)\n\t});\n});",
"text": "ReferenceError: mongo is not definedDo you know why this might be the case?You can try this code - it runs fine on my computer:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "It seems like it might work…but I am now encountering another error:ReferenceError: ids is not definedAnd here’s what I changed my code to (feel free to let me know if I did something wrong while changing it)require(“dotenv”).config({path: “path/to/.env”});\nconst MongoClient = require(“mongodb”).MongoClient;\nconst uri = process.env.MONGOURL;\nconst mongo = new MongoClient(uri, {useNewUrlParser: true,useUnifiedTopology: true});\nlet bname = “Beyblade”mongo.connect(err => {\nconsole.log(“MongoDB connected for Beyblade.js”);\nconst ids = mongo.db(“main”).collection(“ids”)\nids.find({}).toArray(function(err, result) {\nconsole.log(“find query executed…”)\nconsole.log(result)\n})\n});const id = ids.find({});\nconst datas = {};Promise.all([id]).then(data => {\nlet beys = data[0];I should also mention I am fairly new to JavaScript. I know it doesn’t completely relate to mongodb, but it’s very important for me to get it figured out. I will say that I truly appreciate you helping me a ton with my project!",
"username": "Cringe_Burst"
},
{
"code": "",
"text": "Ok so SORRY so much for the amount of replies…\nI figured out why it wasn’t working - ids is within the listener while the rest of my code is not. But, I need the rest of my code to be outside the listener so that I can runmodule.exports = Beyblade;…which is very important to be able to do. It’s not really mongodb related now, but I would appreciate it if you could help out? I really appreciate it! (No I really do, it means a lot that you’re helping me!!)~~I think I need something like a global variable - I think that’s what it’s called.",
"username": "Cringe_Burst"
},
{
"code": "",
"text": "It’s not really mongodb related now, but I would appreciate it if you could help out?It is related to the JavaScript programming topic Promises and Callbacks. It is important to understand the programming with these, as the MongoDB NodeJS driver API methods return callback or promise, in general. So, the steps in your program working with MongoDB will be associated with these concepts. There is no other way (I suspect).I suggest you try the examples in the MongoDB NodeJS Driver documentation and adapt them to your own app.Another useful source for callaback and promise programming is the MDN’s Introducing asynchronous JavaScript.",
"username": "Prasad_Saya"
}
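Building on the promises/callbacks advice above, a hedged sketch of one common pattern: keep the connection in its own module, connect with async/await, and only hand out collections once the connection has resolved. The URI, database and collection names below are placeholders rather than anything confirmed in this thread:

    const { MongoClient } = require("mongodb");
    const client = new MongoClient(process.env.MONGOURL, { useNewUrlParser: true, useUnifiedTopology: true });
    let ids; // shared collection handle, set once init() has finished
    async function init() {
      await client.connect(); // nothing touches the database before this resolves
      ids = client.db("main").collection("ids");
    }
    module.exports = { init, getIds: () => ids };

The rest of the app can await init() once at startup and require this module wherever the collection is needed, instead of relying on a global variable.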
] | Having trouble connecting to MongoClient before calling MongoClient.prototype.db | 2021-04-19T03:03:12.991Z | Having trouble connecting to MongoClient before calling MongoClient.prototype.db | 40,582 |
null | [] | [
{
"code": "",
"text": "Hello all,I use community version 4.0. I read the document in https://docs.mongodb.com/manual/reference/method/db.createView/I create a view A by a lookup pipeline which do a joint search with two collections, A1 and A2, and I can see the view A contains some data from A1 and A2, and if I change some data in A1 or A2, the data in view A will be changed accordingly. Then I create a view B, view B is created by a lookup pipeline which do a joint search from collection B1 and view A, currently, I can see the data appears in view A, but not in view B, what would be the problem?Thanks,\nJames",
"username": "Zhihong_GUO"
},
{
"code": "",
"text": "Can someone confirm the view on another view is supported by mongo on which version?",
"username": "Zhihong_GUO"
},
{
"code": "",
"text": "Yes view on view is suppprted by mongo from v3.4\nPlease check doc for more details",
"username": "Ramachandra_Tummala"
}
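A minimal mongo shell sketch of a view defined on top of another view, as described above; the collection, view and field names are invented for illustration and are not from this thread:

    // view A joins collection A1 with collection A2
    db.createView("viewA", "A1", [
      { $lookup: { from: "A2", localField: "a2Id", foreignField: "_id", as: "a2" } }
    ]);
    // view B is defined over collection B1 and joins against view A (supported since 3.4)
    db.createView("viewB", "B1", [
      { $lookup: { from: "viewA", localField: "refId", foreignField: "_id", as: "joined" } }
    ]);
    db.viewB.find().limit(1);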
] | How to create view from another view | 2021-04-20T07:37:30.082Z | How to create view from another view | 2,049 |
[
"java"
] | [
{
"code": ".iterator().into().find().first().find()pom.xmluser_id.into() // Returns an array of folders, given a user id\n public List<Folder> getFolders(String id) {\n if (id == null || id.isEmpty()) return null;\n List<Folder> folders = foldersCollection.find(new Document(\"user_id\", id)).into(new ArrayList<>());\n return folders;\n }\n.iterator().find() public List<Folder> getAllFolders(){\n List<Folder> folders = new ArrayList<>();\n foldersCollection.find().iterator().forEachRemaining(folders::add);\n return folders;\n }\n // Gets a folder from db given folder _id\n public Folder getFolder(String id) {\n if (id == null || id.isEmpty()) return null;\n return foldersCollection.find(new Document(\"_id\", new ObjectId(id))).first();\n }\n.find().into()find().first().find().iterator()\njava.lang.NoSuchMethodError: 'com.mongodb.internal.operation.ExplainableReadOperation com.mongodb.internal.operation.SyncOperations.find(org.bson.conversions.Bson, java.lang.Class, com.mongodb.internal.client.model.FindOptions)'\n\n\tat com.mongodb.client.internal.FindIterableImpl.asReadOperation(FindIterableImpl.java:236)\n\tat com.mongodb.client.internal.FindIterableImpl.asReadOperation(FindIterableImpl.java:40)\n\tat com.mongodb.client.internal.MongoIterableImpl.execute(MongoIterableImpl.java:135)\n\tat com.mongodb.client.internal.MongoIterableImpl.iterator(MongoIterableImpl.java:92)\n\tat com.mongodb.client.internal.MongoIterableImpl.forEach(MongoIterableImpl.java:121)\n\tat com.mongodb.client.internal.MongoIterableImpl.into(MongoIterableImpl.java:130)\n\tat com.bookmarkd.api.daos.FolderDao.getFolders(FolderDao.java:46)\n at com.bookmarkd.FolderTest.GetFolders(FolderTest.java:50) <31 internal lines>\n at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) <9 internal lines>\n at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) <23 internal lines>\npublic class Folder {\n\n @BsonId\n @JsonIgnore\n private ObjectId oid;\n\n @JsonProperty(\"_id\")\n @BsonIgnore\n private String id;\n\n @JsonProperty(\"user_id\")\n private String userId;\n\n private String name;\n private String icon;\n private boolean shareable;\n\n // Constructor, getters, and setters... \n}\n",
"text": "I’m attempting to build my own application and backend API to further my understanding of the Mongo Java driver after completing the M220J course, but am running into a blocker.I’m creating a bookmark managing app and rebuilding the backend in Java using the Mongo driver and Spring Boot. The error appears when using .iterator() or .into() after .find() to query my Atlas database. The error DOES NOT APPEAR when using .first() after .find().My pom.xml includes version 4.2.2 of mongodb-driver-sync. I’m using Java 15.0.1.The folders collection uses a CodecRegistry, much like how the Users or Sessions collection is created in the mflix app, meaning a query on foldersCollection should return documents of type Folder. I’ve verified that the data has the same fields in the database and Folder.java class.In the code, the user_id field is currently stored as a String, not ObjectId, in the Folder documents (acts like a foreign key) This does not cause the error. The .into() trick was taken from this article talking about Mongo and Java Pojos.This code also does not work, which uses .iterator() after .find(), and excludes the id to get all folders. It also does not work if I include the id in the query.I have verified that this code works with a test in Java and in Postman:It looks like it’s pointing towards the .find() method when I use .into(), but I’ve confirmed that .find() works when chained with .first(), so I don’t think .find() is the issue. The same error appears when using .iterator().Lastly, here’s my Folder class and an image a of a few documents in the Atlas database:\nimage788×956 83.5 KB\n",
"username": "Ian_Goodwin"
},
{
"code": "",
"text": "Hi Ian,\nI tested your code locally and it works just fine.\nIt looks like maven or you IDE is using an old version of driver-core.\nif you are setting it manually in your pom, remove it. the mongodb-driver-sync will take care of it.\notherwise, clean your caches. if it still doesn’t work, share your pom file\nRegards",
"username": "Imad_Bouteraa"
},
{
"code": "<!-- THIS MAKES THE APP WORK! -->\n\n<!-- https://mvnrepository.com/artifact/org.mongodb/mongodb-driver-sync -->\n<dependency>\n\t<groupId>org.mongodb</groupId>\n\t<artifactId>mongodb-driver-sync</artifactId>\n\t<version>4.2.3</version>\n</dependency>\n<!-- https://mvnrepository.com/artifact/org.mongodb/mongodb-driver-core -->\n<dependency>\n\t<groupId>org.mongodb</groupId>\n\t<artifactId>mongodb-driver-core</artifactId>\n\t<version>4.2.3</version>\n</dependency>\n<!-- https://mvnrepository.com/artifact/org.mongodb/bson -->\n<dependency>\n\t<groupId>org.mongodb</groupId>\n\t<artifactId>bson</artifactId>\n\t<version>4.2.3</version>\n</dependency>\n",
"text": "Sorry for the late reply Imad_Bouteraa, but the solution was actually a version issue between three Maven dependencies. I thought that you only needed mongodb-driver-sync in the pom.xml, but for me, I also had to include the same version of mongodb-driver-core and the bson artifact from org.mongodb.If I didn’t specify all three dependencies in the pom.xml file, the error occurs because mongodb-driver-sync runs at version 4.2.2, but the other dependencies run at version 4.2.1. I looked in the External Libraries folder to see the versions.I was able to update all three to version 4.2.3 by specifying them in the pom.xml, and the app works!I wish the documentation could have been clearer about requiring all three in the pom.xml, but there could have been an issue on my end in Maven or IntelliJ when resolving the dependencies. I tried so many times to comment out the dependency, reload the project, clear cache, and redownload, only to have the same error come back!",
"username": "Ian_Goodwin"
},
{
"code": "",
"text": "Glad for you I wish the documentation could have been clearer about requiring all three in the pom.xml, but there could have been an issue on my end in Maven or IntelliJ when resolving the dependenciesmongodb-driver-sync define its required dependencies with the exact versions. it should be Maven or IntelliJhttps://mvnrepository.com/artifact/org.mongodb/mongodb-driver-sync/4.2.3check this link, it may help in future troubleshootingLet's look at the java.lang.NoSuchMethodError and some ways to handle it.Regards,",
"username": "Imad_Bouteraa"
}
] | java.lang.noSuchMethodError with .iterator or .into when using .find | 2021-04-09T14:39:23.997Z | java.lang.noSuchMethodError with .iterator or .into when using .find | 10,677 |
|
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.0.24 is out and is ready for production deployment. This release contains only fixes since 4.0.23, and is a recommended upgrade for all 4.0 users.\nFixed in this release:",
"username": "Jon_Streets"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.0.24 is released | 2021-04-20T19:15:28.787Z | MongoDB 4.0.24 is released | 2,417 |
null | [] | [
{
"code": "monogcli altas backup shnapshot .....\n",
"text": "Hello DevOps experts,\nI was under the impression that I can download an atlas snapshot via mongocli with something like:Checking the docs I find a lot but no download option, is there somewhere a hidden gem or am I out of luck here?Going forward and checking the Atlas API I also don’t find an option to download an / the latest snapshot at all. Can anyone please confirm that this is not possible?The basic question solve is: I want to script the download of the latest Atlas snapshot. Did anyone solved that and can share the solution here?Thanks a lot and best regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi @michael_hoeller,The gem is that the download of a snapshot is when you issue a restore command with type download:https://docs.mongodb.com/mongocli/master/reference/atlas/backup-restore-start/#argumentsSo you basically restore to a download link you can than curl or wget to a file…You can do the same with curl and rest api …Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to script the download of the latest Atlas snapshot | 2021-04-20T13:13:57.784Z | How to script the download of the latest Atlas snapshot | 3,132 |
null | [
"crud",
"sharding"
] | [
{
"code": "upsert: true_id",
"text": "Hello everyone,\nfrom what I understand, if a collection is sharded on a key that is not the _id field, there is no guarantee of unicity for the _id field across different shards (https://docs.mongodb.com/manual/core/sharding-shard-key/#unique-indexes).Then, I don’t understand this statement about the update_one operator:I understand why the shard key is needed, in order to ensure the operation really updates 1 single item. But why can it be replaced by the _id?Thanks !",
"username": "Daniele_Tessaro"
},
{
"code": "",
"text": "Hi @Daniele_TessaroWelcome to MongoDB community.To my understanding the update one targets one document. So either you need to specify a shard key or an _id if your application enforce this uniqueness.If its not unique you might get unexpected behaviour…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks for the response. The thing is, as per my original post, you can’t enforce uniqueness on _id across shards, unless it is the shard key. So I don’t understand.D.",
"username": "Daniele_Tessaro"
},
{
"code": "",
"text": "I think the documentation expects your application is enforcing uniqueness… Otherwise you can expect wierd results … Like not the correct document being upserted",
"username": "Pavel_Duchovny"
}
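To make the two filter options above concrete, here is a small sketch; the collection name, shard key and values are hypothetical, not taken from this thread:

    // filter contains the shard key, so mongos can route the update to a single shard
    db.orders.updateOne(
      { customerId: 42, _id: ObjectId("507f1f77bcf86cd799439011") },
      { $set: { status: "shipped" } }
    );
    // filter by _id alone: only safe when the application guarantees _id is unique across shards
    db.orders.updateOne(
      { _id: ObjectId("507f1f77bcf86cd799439011") },
      { $set: { status: "shipped" } }
    );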
] | Update_one on sharded collection using _id | 2021-04-16T22:34:12.838Z | Update_one on sharded collection using _id | 4,799 |
null | [
"aggregation",
"performance"
] | [
{
"code": " {\n \"$match\": {\n \"isDeleted\": false,\n \"tenant_id\": ObjectId( \"5ec2a723a73af34fd5964c93\" ),\n \"$or\": [\n {\n \"emails\": {\n \"$exists\": true,\n \"$not\": {\n \"$size\": 0\n }\n }\n },\n {\n \"cellphones\": {\n \"$exists\": true,\n \"$not\": {\n \"$size\": 0\n }\n }\n }\n ]\n }\n }, \n { \"$lookup\": {\n \"from\": \"events\",\n \"let\": { \"cId\": \"$_id\" },\n \"pipeline\": [\n { \"$match\": \n { \"$expr\":\n { \"$and\":\n [ \n { \"$eq\": [ \"$contact_id\", \"$cId\"] }\n ] \n }}},\n {\n \"$group\":{\n \"_id\":\"$contact_id\",\n \"6059ff5a2aa6a105ae85d7f1\": {\n \"$sum\": { \n \"$cond\": [ \n { \"$and\":\n [ \n { \"$eq\": [ \"$channel\", \"email\" ] },\n { \"$eq\": [ \"$event\", \"open\"] },\n { \"$eq\": [ \"$campaign_id\", ObjectId( \"60648747f78ba3fd5b00e8ba\" ) ] }\n ] \n }, 1, 0 ]\n }\n }\n }\n } \n ],\n \"as\": \"events\"\n }},\n {\n \"$unwind\": {\n \"path\": \"$events\",\n \"preserveNullAndEmptyArrays\": true\n }},\n {\n \"$addFields\": {\n \"6059ff5a2aa6a105ae85d7f1\": \"$events.6059ff5a2aa6a105ae85d7f1\"\n }},\n {\n \"$project\": {\n \"events\": 0\n }},\n { \"$lookup\": {\n \"from\": \"events\",\n \"let\": { \"cId\": \"$_id\" },\n \"pipeline\": [\n { \"$match\": \n { \"$expr\":\n { \"$and\":\n [ \n { \"$eq\": [ \"$contact_id\", \"$cId\"] }\n ] \n }}},\n {\n \"$group\":{\n \"_id\":\"$contact_id\",\n \"6075ecec3319af23fd597b0f\": {\n \"$sum\": { \n \"$cond\": [ \n { \"$and\":\n [ \n { \"$eq\": [ \"$channel\", \"email\" ] },\n { \"$eq\": [ \"$event\", \"open\"] },\n { \"$eq\": [ \"$campaign_id\", ObjectId( \"601c1b8343e5614118d6afa5\" ) ] }\n ] \n }, 1, 0 ]\n }\n }\n }\n } \n ],\n \"as\": \"events\"\n }},\n {\n \"$unwind\": {\n \"path\": \"$events\",\n \"preserveNullAndEmptyArrays\": true\n }},\n {\n \"$addFields\": {\n \"6075ecec3319af23fd597b0f\": \"$events.6075ecec3319af23fd597b0f\"\n }},\n {\n \"$project\": {\n \"events\": 0\n }},\n { \"$lookup\": {\n \"from\": \"tagcontacts\",\n \"let\": { \"cId\": \"$_id\" },\n \"pipeline\": [\n { \"$match\": \n { \"$expr\":\n { \"$and\":\n [ \n { \"$eq\": [ \"$contact_id\", \"$cId\"] }\n ] \n }}},\n {\n \"$group\":{\n \"_id\":\"$contact_id\",\n \"6075ecec3319afd31e597b0c\": {\n \"$sum\": { \n \"$cond\": [ { \"$eq\": [ \"$tag\", ObjectId( \"60478086f4ac576583614c56\" ) ] }, 1, 0 ]\n }\n }\n }\n } \n ],\n \"as\": \"tags\"\n }},\n {\n \"$unwind\": {\n \"path\": \"$tags\",\n \"preserveNullAndEmptyArrays\": true\n }},\n {\n \"$addFields\": {\n \"6075ecec3319afd31e597b0c\": \"$tags.6075ecec3319afd31e597b0c\"\n }},\n {\n \"$project\": {\n \"tags\": 0\n }},\n { \"$lookup\": {\n \"from\": \"tagcontacts\",\n \"let\": { \"cId\": \"$_id\" },\n \"pipeline\": [\n { \"$match\": \n { \"$expr\":\n { \"$and\":\n [ \n { \"$eq\": [ \"$contact_id\", \"$cId\"] }\n ] \n }}},\n {\n \"$group\":{\n \"_id\":\"$contact_id\",\n \"6075ecec3319af462d597b0b\": {\n \"$sum\": { \n \"$cond\": [ { \"$eq\": [ \"$tag\", ObjectId( \"606f1b593b1622a3ec817f80\" ) ] }, 1, 0 ]\n }\n }\n }\n } \n ],\n \"as\": \"tags\"\n }},\n {\n \"$unwind\": {\n \"path\": \"$tags\",\n \"preserveNullAndEmptyArrays\": true\n }},\n {\n \"$addFields\": {\n \"6075ecec3319af462d597b0b\": \"$tags.6075ecec3319af462d597b0b\"\n }},\n {\n \"$project\": {\n \"tags\": 0\n }},\n {\n \"$match\": {\n \"$and\": [\n {\n \"$and\": [\n {\n \"6075ecec3319af23fd597b0f\": 0\n },\n {\n \"6059ff5a2aa6a105ae85d7f1\": 0\n }\n ]\n },\n {\n \"$and\": [\n {\n \"6075ecec3319afd31e597b0c\": {\n \"$lt\": 1\n }\n },\n {\n \"6075ecec3319af462d597b0b\": {\n \"$lt\": 1\n }\n }\n ]\n }\n ]\n }\n },\n {\n \"$count\": 
\"Quantos\" \n } \n], \n{ \"allowDiskUse\": true })´´´\n",
"text": "The aggregate below runs in less than 1 second without the final $count stage. But with the $count it takes 562 seconds to run (8 vcpus and 62GB RAM). The count result is 212436.Any directions for having a faster count?",
"username": "Admin_MlabsPages_mLa"
},
{
"code": "",
"text": "Hello @Admin_MlabsPages_mLa,\nMay you provide the explain(“executionStats”) output for both cases? (with and without $count)\nRegards,",
"username": "Imad_Bouteraa"
},
{
"code": "",
"text": "As a matter of fact the explains don’t say much… For both cases the index being used is one that is compounded by the first $match fields.",
"username": "Admin_MlabsPages_mLa"
}
] | Slow aggregate $COUNT | 2021-04-19T21:16:00.202Z | Slow aggregate $COUNT | 4,219 |
null | [
"student-developer-pack"
] | [
{
"code": "",
"text": "My student pack was approved on GitHub. But when I visit the MongoDB student pack page it does not let me proceed. Can you get me approved quickly, please?",
"username": "mukund_mundhra"
},
{
"code": "",
"text": "\nyou can see the image for reference",
"username": "mukund_mundhra"
},
{
"code": "",
"text": "Hi @mukund_mundhraWelcome to the forum! Unfortunately, this data is coming from GitHub and we’re not able to manually approve these requests. Could you please check with GitHub if you have the student or teacher discount applied to your GitHub account? Our experience is that sometimes students receive the teacher discount, instead of the student pack.Thank you!Lieke",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "I have the student discount applied image1219×593 44.1 KB\nCan you please look into this?",
"username": "mukund_mundhra"
},
{
"code": "",
"text": "Closing this topic! The issue was solved after the next try ",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "",
"username": "Lieke_Boon"
}
] | Student pack was approved on GitHub but not on MongoDB student pack page | 2021-04-19T14:19:34.004Z | Student pack was approved on GitHub but not on MongoDB student pack page | 5,960 |
null | [
"node-js",
"data-modeling"
] | [
{
"code": "",
"text": "Hi!\nI am developing an application with Mongodb, with roles. I have a collection called users (with username, password, role, active and _id) and another collection that have each roles (student, teacher and admin) that have it’s reference fields or subdocument.I did it so, because the relation itself is with other collections (exams, tutors, bugs and so on) is not with the user entity, but also the subtype (student, teacher and admin).The _id field for the user and the subtype collection is the same. I created a transaction in Mongodb to create a user and the document according to the role that is in the User collection (in collection Students or Teachers or Admins) with the same id which is in the Users collection.Also, the _id in subtype collection’s is a reference field to the User Collection (is primary key and reference field). I am not sure if this is Ok, i have serious doubts.Ok. The idea, in the part of Express and Mongo, is that when we access a protected record of the application, we pass a Token with the user ID (in headers). In this way, we can access user and validate if it exists, his role and is the owner of the entity we want to modify (ex: one student shouldn’t modify profile data of other users).Problems with this:If we want to restrict access to the document in a REST service, we would have to take into account if the IDIt is user, subtype or other entity that relates to the subtype.It becomes strange if we make the populate of the user’s data through the Subtype ID field (it would appear as an ID instead of for example, task and it is not apparently configurable).From the URL it is difficult to see the collection we are accessing, since it is not always so obvious.To sum up, i am confused building an application that uses roles, and the collections depending on an “abstract user”, because the subtypes have different set of fields, and how me should manage in a real application.As Front i am using Angular 2+. Could you give me a hand? Thank you,",
"username": "Antonio_Ubeda_Montero"
},
{
"code": "",
"text": "Please, help me! I posted ten days ago this question and i can’t solve it without your help",
"username": "Antonio_Ubeda_Montero"
},
{
"code": "",
"text": "Hey @Antonio_Ubeda_Montero!Sorry for the delay! Let’s see if we can help you out. Before we address how to manage related collections of data, as this is totally possible in MongoDB (we will get to this in a moment). However, this approach works great when using an RDBMS, but isn’t recommended when using MongoDB. I wrote more about this on the MongoDB Developer Hub . tl;dr: I would personally recommend embedding the role within the user document, unless you have a good reason to keep it separate, and I am not seeing a reason here (correct me if I’m wrong though ). Embedding this data directly is going to save you a lot of effort.Alright, now that we’ve addressed the schema design, let’s address how to do this with your existing schema design. I want to address the issues in your OG post:If we want to restrict access to the document in a REST service, we would have to take into account if the IDIt is user, subtype or other entity that relates to the subtype.One of the downsides of keeping this data separate is that you will need to make at a minimum two queries to your database to get all the data you need. This isn’t bad, it’s just something that’s true in this case.It becomes strange if we make the populate of the user’s data through the Subtype ID field (it would appear as an ID instead of for example, task and it is not apparently configurable).This is true, you will need to make a query for this.From the URL it is difficult to see the collection we are accessing, since it is not always so obvious.This isn’t an issue, since Angular 2 follows the MVC pattern, so your data model should be kept separate from the controller on the frontend anyways. There is no need for the frontend to know what your data model on your backend looks like. So, this shouldn’t be an issue.To sum up, I think it’s going to be more effective for us to discuss why you are separating our this data and figure out the best data model for you application. Looking forward to hearing from you!",
"username": "JoeKarlsson"
},
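As a rough illustration of the embedding Joe recommends above (the field values and collection name are invented, not taken from this thread), a single users document can carry both the role and the role-specific data:

    // one document in a "users" collection; no separate Students/Teachers/Admins collections needed
    db.users.insertOne({
      username: "amontero",
      password: "<hashed password>",
      role: "student",
      active: true,
      // fields that only make sense for students live in an embedded subdocument
      student: {
        enrolledExams: [ObjectId("507f191e810c19729de860ea")],
        tutorId: ObjectId("507f191e810c19729de860eb")
      }
    });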
{
"code": "",
"text": "Hi.First of all, thank you for reply. In Mongodb modeling course says that is best to use sub documents than reference and that all data that needs to query together should be stored together inside the same collection.But if the number of elements is too big (over 50000) is better keeping separate into another collection. Also because mongo when you query a document gets all document and if it can be stored in RAM must access to the hard disk (that is slower).This is the reason because i decided to reference rather than using subdcuments (in some cases i use it). The reason for i keep separate User and sub types is for two reasons:These are the reasons. I am stucked with this.How would be the best way to manage this? Thank you in advance.",
"username": "Antonio_Ubeda_Montero"
},
{
"code": "",
"text": "I replied in a different message. Sorry, is in a comment below",
"username": "Antonio_Ubeda_Montero"
},
{
"code": "",
"text": "Hey Antonio, Sounds good! Can you clarify what kind of help you need to manage your collections? Are you looking for help putting together queries? Are you looking for scalability tips? Are you looking for help integrating it into your front end? Please advise. Thank you!",
"username": "JoeKarlsson"
},
{
"code": "",
"text": "Hi Joe.The main problem i have now is when doing a signin: Should i use a transaction to insert into the User’s collection and the subtype collection? The id’s of User and Subtypes (student, teacher and admin) should be the same or should be different and refere to the User’s collection and if they are different, in the case of managing a JWT that keep the user’s id how would be the best way to identify if the user has the rights to Insert/Delete/Update one document (for instance, one student shouldn’t modify the data of another student).In student we have one _id, in subtypes another (or not?) and maybe in another collections another more _id. Ufff! I am new with Mongo and i have overwhelmed And of course, in the Front (Angular) i want to keep independence of User’s guard.Yeah, the main problem is with my backend in Mongo to connect well with front-end.",
"username": "Antonio_Ubeda_Montero"
},
{
"code": "",
"text": "Thank you for your time.",
"username": "Antonio_Ubeda_Montero"
},
{
"code": "",
"text": "Putting transactions around everything in MongoDB is a bit of a code smell for me. I wouldn’t recommend it in this instance. If you are nervous about it, you can up the write concerns for your DB.Keep the IDs unique, but be sure to add a new field that tracks the relationship in the user document. Managing UUIDs yourself is a good way to get a collision.It sounds like your making this a little more difficult than it needs to be. Can I ask what this is for? Is this a student project? Do you anticipate more than 500,000 users on your service? Honestly, if this is an MVP, I would make it easy, and refactor your schema as your app scales and grows. You know what they say, “Premature optimizations is the root of all evil.”",
"username": "JoeKarlsson"
},
{
"code": "",
"text": "It’s a portfolio to show to companies, in order to hire me. I am a Cobol programmer (yes, prehistoric ) that since a few years ago lost his jobs and bet to transform to Web technologies. But the job market in Spain is too hard: for a Junior profile they ask 2 years of experience and to know a lot of technologies.The design is thought on the principles of SOLID and design patterns. I like to mantain this as open as i can for, in the future be more scalable.But you are all right. Doing transactions is not good… I am learning by myself and i can’t know anybody to have a guide of how it is developed in real life proyects.How would you do?",
"username": "Antonio_Ubeda_Montero"
},
{
"code": "",
"text": "That’s so cool! That’s a great idea! If this is just for a personal portfolio site, designing it for a massively scalable schema is probably overkill for your use case. That’s great you’re following SOLID priniciples, but it’s important to know how to denormalize your data in order to take advantage of MongoDB documents and it’s features. It’s hard to unlearn RDBMS schema design best practices. This is very normal, so don’t stress about it. I would recommend checking our the MongoDB Developer Hub or check out a couple of course at university.mongodb.com. Those both are a great place to start I would also recommend checking out these courses:Another great option is to check out Angular projects that use MongoDB to see how others integrate it and model their data. But everyone learns differently Does that help? Anything else I can help you with?",
"username": "JoeKarlsson"
},
{
"code": "",
"text": "I have already taken M320. I’ll try taking M220JS, but i did two courses of Angular (one introduction and another with MEAN with UDemy but the cases didn’t match the logic in this case). I am nowadays without job and no incomes… It’s difficult for me be patient.Thank you anyway. I will keep searching a solution to this design problem.",
"username": "Antonio_Ubeda_Montero"
},
{
"code": "",
"text": "We can talk about it here too! Can you post your schema here so we can look at it more? I want to see the structure.",
"username": "JoeKarlsson"
},
{
"code": "",
"text": "I but a snippet and general logic. Could i send you it in private? I don’t feel confortable publishing all my schema in public because it is a real project.\nThank you.",
"username": "Antonio_Ubeda_Montero"
},
{
"code": "",
"text": "You can, but I would very much prefer to do this in public, that way everyone can benefit If you have any fields you want to keep private, just remove them or alter them and put in fake data if you need to. Thank you!",
"username": "JoeKarlsson"
},
{
"code": "",
"text": "I would prefer in private. People knows enough to have a idea of the problem, and later we can give a solution and a brief description of the problem, with more data if you can. How can i send you the Schema? Thanks in advance",
"username": "Antonio_Ubeda_Montero"
},
{
"code": "",
"text": "You can DM me on here ",
"username": "JoeKarlsson"
},
{
"code": "",
"text": "Sure. I have sent you a message today. Thank you.",
"username": "Antonio_Ubeda_Montero"
},
{
"code": "",
"text": "Hi @JoeKarlsson ! I sent you eleven days ago the schema. Could you help me, please?\nI did all the courses you recommend me, but none of them could help me with that.\nThank you,",
"username": "Antonio_Ubeda_Montero"
},
{
"code": "",
"text": "Please, can anybody help me? This doubt is waiting since more than one month and my project is stopped. I did all the recomendations, take the courses recommended and send my collection’s schema…",
"username": "Antonio_Ubeda_Montero"
}
] | How to manage a DB with collections with different fields | 2021-03-07T11:59:39.282Z | How to manage a DB with collections with different fields | 12,049 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "We are migrating the data from Oracle to mongoDB, tables have column with number data type e.g. number(19) what should be the data type mongoDB? int32, int64 or decmal128?",
"username": "Saurabh_Shinge"
},
{
"code": "number(19)int64byte\t1 byte (8-bits)\nint32\t4 bytes (32-bit signed integer, two's complement)\nint64\t8 bytes (64-bit signed integer, two's complement)\nuint64\t8 bytes (64-bit unsigned integer)\ndouble\t8 bytes (64-bit IEEE 754-2008 binary floating point)\ndecimal128\t16 bytes (128-bit IEEE 754-2008 decimal floating point)\n",
"text": "Welcome to the MongoDB Community @Saurabh_Shinge!I believe the number(19) Oracle type is an 8 byte “big integer” which would be equivalent to an int64 in MongoDB.For comparison, here are the sizes of numeric types from the BSON spec:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks @Stennie_X.\nSo do you think using decimal128 instead of int64 will affect the database size and performance?Regards,\nSaurabh",
"username": "Saurabh_Shinge"
},
{
"code": "Decimal128int64int64Decimal128int64number(19)",
"text": "Hi @Saurabh_Shinge,Decimal128 is twice as big (16 bytes) as an int64, so your field values will be comparatively larger in an uncompressed format (i.e. in the WiredTiger cache). Performance depends on factors like the configuration, resources, and workload of your deployment. The best way to predict the outcome on your database size and performance would be for you to test in a representative environment.I would choose a representation that is appropriate for the data you are storing. If you need accurate decimal floating-point representation or integers outside the range of an int64, Decimal128 would be appropriate. If you’re just migrating existing data, I think int64 would suffice.I would try selecting some of your current maximum field values from the number(19) column and confirm they can be accurately represented in your target MongoDB field type.Regards,\nStennie",
"username": "Stennie_X"
}
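A quick way to act on Stennie's suggestion of checking your maximum values is to store one of them explicitly as an int64 and as a decimal128 and read it back; the collection name and numbers below are only examples, not values from this thread:

    // mongo shell sketch
    db.migrationCheck.insertOne({
      asLong: NumberLong("9223372036854775807"),       // largest value an int64 can hold
      asDecimal: NumberDecimal("9999999999999999999")  // 19 nines exceed int64, but fit comfortably in decimal128
    });
    db.migrationCheck.find().forEach(doc => printjson(doc));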
] | Data Type for migration from Oracle to mongoDB | 2021-04-17T05:44:36.775Z | Data Type for migration from Oracle to mongoDB | 2,693 |
null | [
"aggregation",
"performance"
] | [
{
"code": " {\n \"$match\": {\n \"isDeleted\": false,\n \"tenant_id\": ObjectId( \"5ec2a723a73af34fd5964c93\" ),\n \"$or\": [\n {\n \"emails\": {\n \"$exists\": true,\n \"$not\": {\n \"$size\": 0\n }\n }\n },\n {\n \"cellphones\": {\n \"$exists\": true,\n \"$not\": {\n \"$size\": 0\n }\n }\n }\n ]\n }\n }, \n { \"$lookup\": {\n \"from\": \"events\",\n \"let\": { \"cId\": \"$_id\" },\n \"pipeline\": [\n { \"$match\": \n { \"$expr\":\n { \"$and\":\n [ \n { \"$eq\": [ \"$contact_id\", \"$$cId\"] }\n ] \n }}},\n {\n \"$group\":{\n \"_id\":\"$contact_id\",\n \"6059ff5a2aa6a105ae85d7f1\": {\n \"$sum\": { \n \"$cond\": [ \n { \"$and\":\n [ \n { \"$eq\": [ \"$channel\", \"email\" ] },\n { \"$eq\": [ \"$event\", \"open\"] },\n { \"$eq\": [ \"$campaign_id\", ObjectId( \"60648747f78ba3fd5b00e8ba\" ) ] }\n ] \n }, 1, 0 ]\n }\n }\n }\n } \n ],\n \"as\": \"events\"\n }},\n {\n \"$unwind\": {\n \"path\": \"$events\",\n \"preserveNullAndEmptyArrays\": true\n }},\n {\n \"$addFields\": {\n \"6059ff5a2aa6a105ae85d7f1\": \"$events.6059ff5a2aa6a105ae85d7f1\"\n }},\n {\n \"$project\": {\n \"events\": 0\n }},\n { \"$lookup\": {\n \"from\": \"events\",\n \"let\": { \"cId\": \"$_id\" },\n \"pipeline\": [\n { \"$match\": \n { \"$expr\":\n { \"$and\":\n [ \n { \"$eq\": [ \"$contact_id\", \"$$cId\"] }\n ] \n }}},\n {\n \"$group\":{\n \"_id\":\"$contact_id\",\n \"6075ecec3319af23fd597b0f\": {\n \"$sum\": { \n \"$cond\": [ \n { \"$and\":\n [ \n { \"$eq\": [ \"$channel\", \"email\" ] },\n { \"$eq\": [ \"$event\", \"open\"] },\n { \"$eq\": [ \"$campaign_id\", ObjectId( \"601c1b8343e5614118d6afa5\" ) ] }\n ] \n }, 1, 0 ]\n }\n }\n }\n } \n ],\n \"as\": \"events\"\n }},\n {\n \"$unwind\": {\n \"path\": \"$events\",\n \"preserveNullAndEmptyArrays\": true\n }},\n {\n \"$addFields\": {\n \"6075ecec3319af23fd597b0f\": \"$events.6075ecec3319af23fd597b0f\"\n }},\n {\n \"$project\": {\n \"events\": 0\n }},\n { \"$lookup\": {\n \"from\": \"tagcontacts\",\n \"let\": { \"cId\": \"$_id\" },\n \"pipeline\": [\n { \"$match\": \n { \"$expr\":\n { \"$and\":\n [ \n { \"$eq\": [ \"$contact_id\", \"$$cId\"] }\n ] \n }}},\n {\n \"$group\":{\n \"_id\":\"$contact_id\",\n \"6075ecec3319afd31e597b0c\": {\n \"$sum\": { \n \"$cond\": [ { \"$eq\": [ \"$tag\", ObjectId( \"60478086f4ac576583614c56\" ) ] }, 1, 0 ]\n }\n }\n }\n } \n ],\n \"as\": \"tags\"\n }},\n {\n \"$unwind\": {\n \"path\": \"$tags\",\n \"preserveNullAndEmptyArrays\": true\n }},\n {\n \"$addFields\": {\n \"6075ecec3319afd31e597b0c\": \"$tags.6075ecec3319afd31e597b0c\"\n }},\n {\n \"$project\": {\n \"tags\": 0\n }},\n { \"$lookup\": {\n \"from\": \"tagcontacts\",\n \"let\": { \"cId\": \"$_id\" },\n \"pipeline\": [\n { \"$match\": \n { \"$expr\":\n { \"$and\":\n [ \n { \"$eq\": [ \"$contact_id\", \"$$cId\"] }\n ] \n }}},\n {\n \"$group\":{\n \"_id\":\"$contact_id\",\n \"6075ecec3319af462d597b0b\": {\n \"$sum\": { \n \"$cond\": [ { \"$eq\": [ \"$tag\", ObjectId( \"606f1b593b1622a3ec817f80\" ) ] }, 1, 0 ]\n }\n }\n }\n } \n ],\n \"as\": \"tags\"\n }},\n {\n \"$unwind\": {\n \"path\": \"$tags\",\n \"preserveNullAndEmptyArrays\": true\n }},\n {\n \"$addFields\": {\n \"6075ecec3319af462d597b0b\": \"$tags.6075ecec3319af462d597b0b\"\n }},\n {\n \"$project\": {\n \"tags\": 0\n }},\n {\n \"$match\": {\n \"$and\": [\n {\n \"$and\": [\n {\n \"6075ecec3319af23fd597b0f\": 0\n },\n {\n \"6059ff5a2aa6a105ae85d7f1\": 0\n }\n ]\n },\n {\n \"$and\": [\n {\n \"6075ecec3319afd31e597b0c\": {\n \"$lt\": 1\n }\n },\n {\n \"6075ecec3319af462d597b0b\": {\n \"$lt\": 1\n }\n }\n ]\n }\n ]\n }\n },\n {\n 
\"$count\": \"Quantos\" \n } \n], \n{ \"allowDiskUse\": true })´´´",
"text": "The aggregate below runs in less than 1 second without the final $count stage. But with the $count it takes 562 seconds to run (8 vcpus and 62GB RAM). The count result is 212436.Any directions for having a faster count?",
"username": "Admin_MlabsPages_mLa"
},
{
"code": "$count$count$lookup$match$or$exists: true",
"text": "Hi @Admin_MlabsPages_mLaWithout the $count, the aggregation was not executed. It simply returns the cursor for the query, but not execute it. This is true for official drivers and the mongo shell. This is why without $count your query returns in less than a second. It’s because technically the server did nothing.Once you iterate on the cursor by fetching documents from it (or performing a count), the cursor is then executed for real and that’s why your query takes a longer time to return.As to why it takes that much time to return, it’s worth noting that your pipeline has 18 stages, with four $lookup and a $match stages with $or and $exists: true clauses that may not be able to use any index.Best regards,\nKevin",
"username": "kevinadi"
}
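A small Node.js illustration of Kevin's point that the pipeline only runs when the cursor is consumed (the collection and pipeline variables are placeholders):

    // building the cursor is cheap and returns immediately; the server has done no work yet
    const cursor = collection.aggregate(pipeline, { allowDiskUse: true });
    // pulling documents (or appending a $count stage) is what actually executes the pipeline on the server
    const docs = await cursor.toArray();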
] | Slow aggregate when $count | 2021-04-15T14:40:27.746Z | Slow aggregate when $count | 4,805 |
null | [
"installation"
] | [
{
"code": "",
"text": "Hi, Is MongoDB supported for Windows Server 2012 ?Thank you",
"username": "Luke_Skywalker1"
},
{
"code": "",
"text": "Hi @Luke_Skywalker1,The MongoDB Production Notes include the supported operating systems for each MongoDB server release series.According to this reference, Windows Server 2012 is supported In MongoDB 3.6 through 4.2. Server versions older than MongoDB 3.6 are end of life and no longer receive maintenance or security updates. MongoDB 3.6 also reaches end of life this month (April 2021), so MongoDB 4.0 or 4.2 are more recommendable choices.For a new installation I would choose the latest version of MongoDB available for your operating system and the latest minor release in that series. For Windows Server 2012, that would currently be MongoDB 4.2.13.If you want to use MongoDB 4.4 (the latest production release series), you will need Windows Server 2016 or newer as your O/S.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_X,Thank you",
"username": "Luke_Skywalker1"
}
] | MongoDB on Server 2012R2 | 2021-04-20T04:43:19.292Z | MongoDB on Server 2012R2 | 8,744 |
null | [
"sharding",
"monitoring"
] | [
{
"code": "\"block-manager\" : {\n \"allocations requiring file extension\" : 100087,\n \"blocks allocated\" : 2788802,\n \"blocks freed\" : 2750269,\n \"checkpoint size\" : 492033282048,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 6119456768,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 498166542336,\n \"minor version number\" : 0\n},\n\"block-manager\" : {\n \"allocations requiring file extension\" : 88776387,\n \"blocks allocated\" : 2968371230,\n \"blocks freed\" : 2904117984,\n \"checkpoint size\" : 592490942464,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 2130100224,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 594622980096,\n \"minor version number\" : 0\n},\n\t\t\"creationString\" : \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u\",",
"text": "I’ve a 2 shard cluster running, and was wondering why 1 node in the cluster was operating at a significantly high CPU throughput than the other.For one of my largest collections, I noticed that the values within the block-manager output of collections stats where significantly different:vsI was trying to find out what these fields mean, the collection is balanced, and the file size in bytes is fairly even. So in that case why is the blocks allocated and freed so different?I tried to find documentation on the meaning of these values but no luck.The creation string on both shards for this collection are identical as well:\n\t\t\"creationString\" : \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u\",",
"username": "fergus"
},
{
"code": "mongodmongodmongodmongod",
"text": "Hi @fergusThe short answer is, each mongod manages its own storage independently of other mongod in the cluster or replica set. Thus it’s not uncommon to see each mongod behaving slightly differently, even though logically they should be identical. This is due to differences in actual conditions within each hardware situation.For replica sets, they should be close to each other, but for sharded clusters, it’s not that simple since some part of the shard may work harder than others. For example, non-sharded collections live in a database’s Primary Shard (each database is different).It’s worth mentioning that WiredTiger simply follows the instructions of the associated mongod, so if the block manager is busier in one part of the cluster, it means that the server is also putting more data into disk in that instance.Best regards,\nKevin",
"username": "kevinadi"
}
] | WiredTiger Block Manager differences | 2021-04-09T23:05:05.140Z | WiredTiger Block Manager differences | 2,423 |
null | [
"crud",
"mongoose-odm"
] | [
{
"code": " const options = {\n $addToSet: { whoLikes: userId },\n $inc: { likesCount: 1 },\n new: true,\n };\n\ncollection.findByIdAndUpdate({ _id: postId }, options)\n",
"text": "What I want is increment likesCount only if whoLikes gets a new record. Right now likesCount incrementing all the time doesn’t matter if new records has been inserted or ot in whoLikes array. Any idea how to achieve it with one query?I’m using mongoose, node.js",
"username": "Dmitriy_Blot"
},
{
"code": "findOneAndUpdate_idconst filter = { _id: postId, whoLikes: { $ne: userId }}\nconst update = { \n $addToSet: { whoLikes: userId },\n $inc: { likesCount: 1 }\n}\nconst options = { new: true }\ncollection.findOneAndUpdate(filter, update, options)likesCountwhoLikes",
"text": "Hello @Dmitriy_Blot, welcome to the MongoDB Community forum!You may want to use the findOneAndUpdate method in this case. Since, you want to update on a condition in addition to the _id filter.Run the update:collection.findOneAndUpdate(filter, update, options)The above update will increment likesCount only if whoLikes gets a new record.",
"username": "Prasad_Saya"
}
] | Updated fields only if addToSet get updated | 2021-04-20T01:22:30.350Z | Updated fields only if addToSet get updated | 1,848 |
null | [] | [
{
"code": "const ObjectID = require(\"mongodb\").ObjectID;\n\nvar id = new ObjectID().toString(), ctr = 0;\nvar timestamp = id.slice(ctr, (ctr+=8));\nvar machineID = id.slice(ctr, (ctr+=6));\nvar processID = id.slice(ctr, (ctr+=4));\nvar counter = id.slice(ctr, (ctr+=6));\nconsole.log(\"id:\", id);\nconsole.log(\"timestamp:\", timestamp);\nconsole.log(\"machineID:\", machineID);\nconsole.log(\"processID:\", processID);\nconsole.log(\"counter:\", counter); \nconsole.log(\"timestamp:\", parseInt(timestamp, 16));\nconsole.log(\"machineID:\", parseInt(machineID, 16));\nconsole.log(\"processID:\", parseInt(processID, 16));\nconsole.log(\"counter:\", parseInt(counter, 16)); \n",
"text": "I know about ObjectId how it works internally and its bytes occupation from both docs: bson-types/#objectid and method-ObjectId,Is it possible to decript like below without using any mongodb functions?Result:id: 607c3511f6c1fb54f91046da\ntimestamp: 607c3511\nmachineID: f6c1fb\nprocessID: 54f9\ncounter: 1046daAnd convert that parts in Integer:Result:timestamp: 1618752785\nmachineID: 16171515\nprocessID: 21753\ncounter: 1066714I am sure this is not a valid process to decript objectID, please suggest is there any way to get accurate partition?",
"username": "turivishal"
},
{
"code": "",
"text": "I am sure this is not a valid process to decript objectID, please suggest is there any way to get accurate partition?Hi @turivishal,Assuming your driver is generating ObjectIDs in this format (there are some historical variations), you can decode as you’ve suggested without using any MongoDB functions.For example, as per your documentation links the middle 5 bytes are expected to be a random value. Some drivers used to compose this from a machineID and processID (per your interpretation). The 4-byte timestamp prefix is generally the only reliable information to decode; the rest of the bytes help with uniqueness for independently generated ObjectIDs.Did any of your results differ from what was expected? If so, what specific driver & version are you using to create the ObjectIDs?Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you for your reply,I have not implemented yet, before that i just want to confirm if its accurate at-least 90% then i can think about it,I have project in NodeJS, and i am using mongoose npm latest version, and i think mongoose using MongoDB NodeJS Driver,",
"username": "turivishal"
}
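One way to cross-check the manual slicing above is to compare the timestamp part with the driver's own getTimestamp(); a small Node.js sketch, using an arbitrary freshly generated id:

    const ObjectID = require("mongodb").ObjectID;
    const id = new ObjectID();
    // the first 4 bytes (8 hex characters) are the creation time in seconds since the epoch
    const manualSeconds = parseInt(id.toString().slice(0, 8), 16);
    console.log(manualSeconds === Math.floor(id.getTimestamp().getTime() / 1000)); // expected: true

As Stennie notes above, only the timestamp prefix is reliably decodable; the remaining bytes are a random value plus a counter in current drivers, so treating them as machine and process IDs is not dependable.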
] | Is it possible to decript ObjectId in partitions from string? | 2021-04-18T13:44:08.322Z | Is it possible to decript ObjectId in partitions from string? | 4,325 |
null | [
"swift"
] | [
{
"code": "public lazy var memberships: AnyPublisher<[Membership], Never> = {\n Just(())\n .receive(on: DispatchQueue.main)\n .flatMapLatest { _ in\n try! Realm()\n .objects(MembershipObject.self)\n .collectionPublisher\n .map { $0.map { Membership(managedObject: $0) }}\n .assertNoFailure()\n }\n .share(replay: 1)\n .eraseToAnyPublisher()\npublic lazy var memberships: AnyPublisher<[Membership], Never> = {\n try! Realm()\n .objects(MembershipObject.self)\n .collectionPublisher\n .map { $0.map { Membership(managedObject: $0) }}\n .assertNoFailure()\n .share(replay: 1)\n .eraseToAnyPublisher()\n}\n",
"text": "My goal is to lazily create a publisher of [Membership]. I want to use realm to implement the logic of when the collection updates, but I don’t want to expose Realm API in the interface. I want tom app to Combine publisher and my own model object.My code for this below seems to work, but I wonder if there might be a simpler/cleaner way to do it:Here’s the version that I wanted to write:But I ran into threading issues related to the fact that collectionPublisher requires a run loop, but the calling thread might not have one.My general question… (I’ve just started using realm) How can I do this better? And in particular is there a better way to deal with threading issues then to just process everything on main as I’m doing in first working example?Thanks,\nJesse",
"username": "Jesse_Grosjean"
},
{
"code": "",
"text": "Welcome @Jesse_Grosjean - I’d probably take a look at our SwiftUI and Combine quickstart here:\nhttps://docs.mongodb.com/realm/sdk/ios/integrations/swiftui/You’ll probably want to freeze the results and pass it to the publisher. We’ve actually did a meetup presentation on this topic here -We’ve been running a series on SwiftUI and Realm at our user group - sign up here to get notified of future events -\nhttps://live.mongodb.com/realm-global-community/",
"username": "Ian_Ward"
}
] | Clean way to deal with threading issues going from Realm to lazy AnyPublisher | 2021-04-19T23:47:53.023Z | Clean way to deal with threading issues going from Realm to lazy AnyPublisher | 2,164 |
null | [
"crud"
] | [
{
"code": "",
"text": "I want to check if value exist in Collection or not.\ne.g. I have Collection of cart Products, I want to check if that product is already present in Collection then i don’t want to insert product else in insert that product into cart Collection.",
"username": "Its_Me"
},
{
"code": "",
"text": "Take a look at upsert in https://docs.mongodb.com/drivers/node/fundamentals/crud/write-operations/upsert/",
"username": "steevej"
}
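A hedged sketch of the upsert steevej points to, written as it might appear inside a Realm webhook function; the linked service name ("mongodb-atlas"), the database, collection and field names, and the pre-parsed item object are assumptions to adapt, not details confirmed in this thread:

    // assume `item` was parsed from the incoming request body elsewhere in the webhook
    const cart = context.services.get("mongodb-atlas").db("shop").collection("cart");
    // insert the product into the cart only if it is not already there;
    // upsert: true turns a non-matching update into an insert
    const result = await cart.updateOne(
      { userId: item.userId, productId: item.productId },
      { $setOnInsert: { userId: item.userId, productId: item.productId, addedAt: new Date() } },
      { upsert: true }
    );
    // result.upsertedId is only set when a new document was actually inserted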
] | How to Check if value exist in collection or not using MongoDB Webhook? | 2021-04-19T19:45:50.485Z | How to Check if value exist in collection or not using MongoDB Webhook? | 1,829 |
null | [
"swift",
"atlas-device-sync"
] | [
{
"code": "",
"text": "I’m following the RChat tutorial to setup a basic chat app.I can create user accounts and they populate my app user list within Realm and my User collection within Atlas.Upon logging in though, the user immediately disconnects.I have a feeling it has something to do with async open being set to false, although the tutorial doesn’t mention that it’s a requirement.Here’s the output of the xcode console:2021-04-19 11:48:10.044112-0700 BonkLink[4337:91775] Sync: Connection[1]: Session[1]: client_reset_config = false, Realm exists = false, async open = false, client reset = false2021-04-19 11:48:10.191365-0700 BonkLink[4337:91775] Sync: Connection[1]: Connected to endpoint ‘3.210.32.164:443’ (from ‘192.168.1.193:52816’)2021-04-19 11:48:11.352449-0700 BonkLink[4337:91775] Sync: Connection[1]: Disconnected2021-04-19 11:48:11.353044-0700 BonkLink[4337:91775] Sync: Connection[2]: Session[2]: client_reset_config = false, Realm exists = true, async open = false, client reset = falseUser realm location: /Users/guygreenleaf/Library/Developer/CoreSimulator/Devices/E93C2E78-1650-497A-88FC-B469E32EF5E2/data/Containers/Data/Application/50B09FF1-EBAD-4270-ACAE-0065E43BA116/Documents/mongodb-realm/bonklink-cytqk/607dd069e94ba6566559491a/%22user%3D607dd069e94ba6566559491a%22.realmBusyCounter now out of range.Continuing2021-04-19 11:48:11.461192-0700 BonkLink[4337:91775] Sync: Connection[2]: Connected to endpoint ‘3.210.32.164:443’ (from ‘192.168.1.193:52818’)2021-04-19 11:48:12.294444-0700 BonkLink[4337:91775] Sync: Connection[2]: DisconnectedIn my logs on Realm when the connection ends, I noticed this:Logs:\n[\n“Connection was active for: 0s”,\n“Checking if can sync a write for partition = user=607a08427eb1f7538e7478fc”,\n“Partition key = user; partition value = 607a08427eb1f7538e7478fc”,\n“Checking if partitionKey(607a08427eb1f7538e7478fc) matches user.id(607a08427eb1f7538e7478fc) – false”,\n“Checking if can sync a write for partition = user=607a08427eb1f7538e7478fc”,\n“Partition key = user; partition value = 607a08427eb1f7538e7478fc”,\n“Checking if partitionKey(607a08427eb1f7538e7478fc) matches user.id(607a08427eb1f7538e7478fc) – false”\n]Which leads me to believe something is wrong with my partitioning? I’m not sure at this point.UPDATE: I’ve now got my logs returning TRUE on checking the partitionkey and user.id, but for some reason i’m getting this:Session closed after receiving UNBIND eventWhich is causing an immediate disconnect.Any advice on this is greatly appreciated!",
"username": "Zip_Chat"
},
{
"code": "",
"text": "@Zip_Chat If you are not getting any error in the server or clients logs then this commonly points to an error in the code where a strong reference is not held to the realm variable. When the code reaches the end of the closure the garbage collector cleans up the references and tears down the sync connection. Be sure to persist the realm reference in a place where the lifecycle is not getting torn down.",
"username": "Ian_Ward"
}
] | RealmSync connection won't stay open (Swift) | 2021-04-19T18:53:25.605Z | RealmSync connection won’t stay open (Swift) | 2,194 |
null | [
"dot-net"
] | [
{
"code": "public IEnumerable <string> Get_AllCollections_in_DB ()\n {\n var client = new MongoClient (\"mongodb: // localhost: 27017\");\n MongoServer server = client.GetServer ();\n MongoDatabase database = server.GetDatabase (\"Uni_Training\");\n return database.GetCollectionNames ();\n }\n",
"text": "Hello friends!I got an error when I queryed all the collections names in a mongodb database? Can you answer for me the API return error“{” error “:” The GuidRepresentation for the reader is CSharpLegacy, which requires the binary sub type to be UuidLegacy, not UuidStandard. “}”",
"username": "chu_tinh"
},
{
"code": "var client = new MongoClient(\"mongodb://localhost:27017\"); // mind the spaces in the connection string\nvar database = client.GetDatabase(\"Uni_Training\");\nvar collectionNames = database.ListCollectionNames().ToList(); // mind the .ToList() call",
"text": "Hello @chu_tinh,You are using “Legacy” API of the driver which is present only for backward compatibility purposes.Consider using the regular API:",
"username": "Mikalai_74493"
},
{
"code": "",
"text": "Thank you for guidelines! (@Mikalai_74493)Chu Xuân Tình (Viet Nam)",
"username": "chu_tinh"
},
{
"code": "",
"text": "This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | The GuidRepresentation for the reader is CSharpLegacy, which requires the binary sub type to be UuidLegacy, not UuidStandard | 2021-04-18T23:04:47.998Z | The GuidRepresentation for the reader is CSharpLegacy, which requires the binary sub type to be UuidLegacy, not UuidStandard | 3,845 |
null | [
"app-services-user-auth"
] | [
{
"code": "AuthError Error - confirmation requiredInvalidSession Error - invalid sessionPending User LoginInvalidSession Error - invalid session",
"text": "I tried to follow the iOS guide to link an anonymous user to the newly created with email and password https://docs.mongodb.com/realm/sdk/ios/advanced-guides/link-user-identities/The first time it failed with an error AuthError Error - confirmation required. It was descriptive enough so I just go through the confirmation flow first.The second time it failed with an error InvalidSession Error - invalid session. I thought it might be because the user is in the Pending User Login state so I decided to proceed with login.The third time it failed with the same error InvalidSession Error - invalid session when I tried to log in as anon, then login as email/password and link after that.",
"username": "Anton_P"
},
{
"code": "user.identities[].providerTypelocal-userpassanon-user",
"text": "I also tried to link with anon credentials after i log in with email/password. It reports success but it doesn’t link anything it just switches to anonymous account and after I log in next time with email/password the anon data is not preserved.I also checked user.identities[].providerType. There is always just one. It’s local-userpass after email/password login or anon-user after anon login/link.",
"username": "Anton_P"
},
{
"code": "InvalidSession Error - a user already exists with the specified providerInvalidPassword Error - invalid username/password",
"text": "When I try to login with anon and then link with an existing and not pending email/password account it reports InvalidSession Error - a user already exists with the specified providerWhen I try to link to non-existing email/password user it reports InvalidPassword Error - invalid username/passwordI tried to delete the user and create it again. It worked when I tried to link the first time only after the user was confirmed. I can’t be sure what was the root cause but more likely the consequence of my actions. I tried to link accounts before the user was confirmed via the email link.Even though it worked in the end the app might not know the exact moment the user was confirmed (the confirmation might be done on the web also) so even if I try to link users just before the first login it might fail and go into a failed state.",
"username": "Anton_P"
}
] | Unable to link anonymous user to email/password user. Invalid session error | 2021-04-19T18:13:39.148Z | Unable to link anonymous user to email/password user. Invalid session error | 2,920 |
null | [
"aggregation",
"crud"
] | [
{
"code": "db.collection.updateMany({},\n[\n {\n $set: {\n new_field: {\n $avg: \"$sources.value\"\n }\n }\n }\n])\n{\n _id: (ID),\n sources: [{value: 10, ...}, {value: 20, ...}]\n}\npymongo.UpdateOne({filter}, {'$set': {'new_field': {'$avg': \"$sources.value\"}}})$setsources",
"text": "With mongo shell, I can do something like this:When the data looks like this:The output is as expected, with the average being added to the document.I am trying to reproduce this with PyMongo, to no avail.\nWhen I try this: pymongo.UpdateOne({filter}, {'$set': {'new_field': {'$avg': \"$sources.value\"}}}), I get this error:The dollar ($) prefixed field ‘$avg’ in ‘new_field.$avg’ is not valid for storageWhen I try wrapping the whole $set dictionary in a string, it puts the whole string as the value instead of executing it.Is there a proper way to have PyMongo use an aggregation during a set?Note: For the sake of the post, I am trivializing the example. The real problem is a more complex way of calculating the average given different conditions, and the sources object is more than just a list of one value to take the average of.",
"username": "Reid_Gahan"
},
{
"code": "pymongo.UpdateOne({filter}, [{'$set': {'new_field': {'$avg': \"$sources.value\"}}}])\n",
"text": "Hello @Reid_Gahan, Welcome to MongoDB Community Forum,pymongo.UpdateOne({filter}, {’$set’: {‘new_field’: {’$avg’: “$sources.value”}}})Update part should be in array bracket because its aggregation pipeline, that you have used correctly in your mongo shell query but not in pymongo query, try,",
"username": "turivishal"
},
{
"code": "$addToSet$setOnInsertUpdateOne()Updateone({filter}, [{'$set': {}, '$addToSet': {}}])\nUpdateone({filter}, [{'$set': {}}, { '$addToSet': {}}])\nUpdateOne({filter}, {'$set': {}, '$addToSet': {}, etc...})\n$set",
"text": "I may have trivialized too much… that worked as desired, but I also need to do a $addToSet and $setOnInsert in the UpdateOne().If I do:It fails with error:pymongo.errors.OperationFailure: A pipeline stage specification object must contain exactly one fieldIf I do:It fails with error:pymongo.errors.OperationFailure: Unrecognized pipeline stage name: ‘$addToSet’For context:worked prior to adding the aggregation steps to the $set step. Now if I try it in that format, I get aforementioned error about dollar prefixed field.",
"username": "Reid_Gahan"
},
{
"code": "Updateone({filter}, [{'$set': {}, '$addToSet': {}}])$addToSet",
"text": "Updateone({filter}, [{'$set': {}, '$addToSet': {}}])$addToSet is not aggregation pipeline stage, its update operator you can’t use this way in aggregation pipeline.I would suggest to read below documents you will get idea, There are 2 ways to update documents,1) Regular Update:SyntaxOperators2) Update with aggregation pipeline:",
"username": "turivishal"
},
{
"code": "$setUpdateOneUpdateMany",
"text": "Great, thank you for the clarification there. This made sense, with the distinction between Update Document and Aggregation Pipeline. The usage of $set for both cases threw me off, so thank you for explaining.For posterity sake, my solution is to do all the UpdateOnes for the collection, and then do an UpdateMany that will run the aggregation pipeline.",
"username": "Reid_Gahan"
}
] | Updates with Aggregation Pipeline | 2021-04-16T18:05:54.571Z | Updates with Aggregation Pipeline | 5,102 |
null | [
"atlas-functions",
"graphql"
] | [
{
"code": "project( id: $id ) {\n id\n name\n contributors {\n id\n name\n }\n tasks {\n id\n name\n ...\n }\n}\n",
"text": "I am following along with the web tutorial for the task app.I notice that some fetching and updating uses functions and React state instead of Apollo GraphQL, such as fetching a user’s projects, fetching a project’s contributors, and adding a contributor to a project.I’d like to avoid this pattern and route all client-server interactions through GraphQL. Is this possible on the server side with custom resolvers? Was there a reason functions were chosen for the tutorial?For example, say the user selects a project, I’d expect to be able to execute a cohesive query such as the following in order to get more details about a project:I’m aware I could use local-only fields and a client-side schema and transform the results of the functions on the client. Though this adds unnecessary weight and complexity to the frontend, so I’d like to avoid it.",
"username": "Andrewd"
},
{
"code": "",
"text": "Hey Andrew,Welcome to the forums! I wrote the web tutorial so hopefully I can clear things up for you. In general YES you can use GraphQL custom resolvers instead of direct function calls.The primary reason we don’t currently do this is for interoperability between the various tutorial clients, e.g. the Android app doesn’t use GraphQL so we just call the functions directly. That said it should be pretty straightforward to create custom resolvers that wrap these functions. You can use function context to call the existing functions from your custom resolver functions.This is a great idea - I’ll look into updating the web tutorial to do this since I think it makes sense for everyone to use it. In the meantime please let me know if you hit any trouble implementing the custom resolvers!",
"username": "nlarew"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Tutorial: Are functions necessary? Or could the same thing be accomplished with custom resolvers? | 2021-04-19T01:45:41.558Z | Tutorial: Are functions necessary? Or could the same thing be accomplished with custom resolvers? | 2,049 |
[
"kafka-connector"
] | [
{
"code": "",
"text": "Hi Team,I am looking at the MongoDB source connector (v1.3) and it allows you to create DLQ’s.\nerrors.deadletterqueue.topic.nameCan you let me know which Converter is being used by the connector for the DLQ?\nAlso, is there a property I can use to override this converter and give my own custom converter?Thanks.",
"username": "syed_mohammad_Saif"
},
{
"code": "",
"text": "Hi @syed_mohammad_Saif,The data is sent as an extended json string. There are no plans at this time to support alternative converters.I’ve added DOCS-14368 to track updating the configuration documentation.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Kafka Source connector DLQ Converter | 2021-04-19T15:15:53.972Z | MongoDB Kafka Source connector DLQ Converter | 1,897 |
|
null | [
"app-services-user-auth"
] | [
{
"code": "let credentials = RLMAppCredentials.init(jwt: token)\nself.realmApp!.login(withCredential: credentials, completion: { (syncUser, error) in\n...\n}\n",
"text": "Has anyone been able to login to MongoDB Realm using a JWT token generated by Firebase? I believe I have everything set up correctly in RealmUI: enabled the Custom JWT Authentication, specified the algorithm (RS256), entered my public key. In my code I have Firebase generate the token which has the aud, sub, exp, and iat values set in the payload. I create the credentials and then try to login:The login fails and the error I get back isError Domain=realm::app::ServiceError Code=2 “authentication via ‘custom-token’ is unsupported” UserInfo={NSLocalizedDescription=authentication via ‘custom-token’ is unsupported, realm::app::ServiceError=InvalidSession}I have no idea why the server thinks it’s a custom token.",
"username": "Nina_Friend"
},
{
"code": "",
"text": "@Nina_Friend Can you share your custom auth code please?",
"username": "Ian_Ward"
},
{
"code": "const jwt = require('jsonwebtoken');\nconst fs = require('fs');\nconst key = fs.readFileSync(\"jwtRS256.key\");\nexports.realmFirebaseAuth = functions.https.onCall((data, context) => { \n const uid = context.auth.uid; \n const payload = { }; \n const token = jwt.sign(payload, { key: key, passphrase: \"<My PassPhrase>\" }, { algorithm: 'RS256', subject: uid, audience: \"<Realm App Id>\", expiresIn: \"1d\"}); \n return { token:token };\n});",
"text": "I assume you mean my Google function for getting the token. Here it is:",
"username": "Nina_Friend"
},
{
"code": "",
"text": "@Nina_Friend I was trying to confirm your JWK configuration on the Realm custom auth side because this should be supported - mongodb - Is it possible to use Firebase signInWithEmailAndPassword in Stitch Custom Authentication/ Stitch Auth - Stack OverflowWhat version of the Realm SDK are you using?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "My SDK in 10.0.0-beta.2.",
"username": "Nina_Friend"
},
{
"code": "",
"text": "@Nina_Friend What does your Custom JWT configuration looks like in the Realm Cloud UI ?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "I just tried my code again. I had the wrong Realm App Id being passed to jwt.sign. I fixed that but now I’m getting a different error when I call realmApp.login:Error Domain=realm::app::ServiceError Code=47 “Invalid Key: Key must be PEM encoded PKCS1 or PKCS8 private key” UserInfo={NSLocalizedDescription=Invalid Key: Key must be PEM encoded PKCS1 or PKCS8 private key, realm::app::ServiceError=AuthError}Here is my set up in the Realm UI:\nProviders1956×1588 304 KB",
"username": "Nina_Friend"
},
{
"code": "",
"text": "I had authentication working with the Realm Cloud Platform. I got it going a year ago using the detailed instructions that were on the old Realm website. It would be really helpful to have step-by-step instructions like that again for the MongoDB platform.",
"username": "Nina_Friend"
},
{
"code": "",
"text": "Error Domain=realm::app::ServiceError Code=47 “Invalid Key: Key must be PEM encoded PKCS1 or PKCS8 private key” UserInfo={NSLocalizedDescription=Invalid Key: Key must be PEM encoded PKCS1 or PKCS8 private key, realm::app::ServiceError=AuthError}This is a server side error not a client one. I would guess that the JWT is being generated incorrectly",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hi! Did you get this resolved? I wrote the guide for the Realm Cloud Function and I am using Firebase with MongoDb Realm now.In MongoDb Realm there is no need for the cloud function to authenticate. You can pass the firebase token from firebaseUser.getIdToken() and pass that directly into MongoDbRealm JWT authentication. All you need to do is to configure the JWT authentication to use https://www.googleapis.com/service_accounts/v1/jwk/[email protected] as JWK url and specify your firebase project name as audience, then you are done.As a bonus, authentication will be faster since you skip the roundtrip to your cloud function ",
"username": "Simon_Persson"
},
{
"code": "{\n \"aud\": \"realmtest-ufdbt\",\n \"sub\": “[email protected]”,\n \"exp\": 1596288789,\n “user_data\": {\n \"email\": “[email protected]”\n },\n \"iat\": 1596245589\n}\n",
"text": "I would check the content of the JWT token in https://jwt.io. Basically you need to make sure that it has aud property with the realm app id, a sub property with the user’s email, an exp with the expiration date of the token, and possibly some user_data metadata with at least an email or a name so that it can set the name in the User’s table, finally include an iat property with time of creation.Something like thisWe are working on a JWT authenticator for Realm, and got the JWT authentication to work with the new MongoDB Realm, so I know it is possible.",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "We investigated this issue and it turns out we are not processing the Firebase tokens with the correct crypto format - we are looking to fix that now. We should be able to check the type of the key after parsing the provided public key PEM and then choose the correct x509 parsing function.in the meantime, I just tried this using an openssl-generated keypair and it works fine, so you may want to try that out// generate a public and private RSA key pair\nopenssl genrsa -out private.pem 4096// export the RSA public key to PEM\nopenssl rsa -in private.pem -outform PEM -pubout -out public.pem",
"username": "Ian_Ward"
},
{
"code": "",
"text": "That is weird… I got the firebase auth working in my app using the mobile SDKs? Data was synced fine, but I was only using a public, read only Realm. But this isn’t supposed to be working?",
"username": "Simon_Persson"
},
{
"code": "",
"text": "Thanks for your suggestion, Simon. That is a much easier way of setting up the Firebase authentication.",
"username": "Nina_Friend"
},
{
"code": "",
"text": "I have struggled with the exact same issue. Custom JWT Token authentication is not sufficiently described in the MongoDB documentation. At least, for the integration with Firebase, I have written a step-by-step guide which should be very easy to follow. You can find it on Medium here",
"username": "Nils_Ackermann"
}
] | Problems authenticating using JWT token | 2020-07-06T16:33:06.974Z | Problems authenticating using JWT token | 6,521 |
null | [
"connecting"
] | [
{
"code": "",
"text": "Hello,I am having some problems connecting to the database from a server hosted at fastcoment and they asked me about the MongoDB server IP address so they can check more. Is there I way I can find out which is the IP address of the server where my DB is located?",
"username": "Ioana_Catalina_E"
},
{
"code": "",
"text": "Hi @Ioana_Catalina_E - welcome to the forums!You can find the details around the MongoDB server hostname in the connection string your application uses to connect to the database (See this docs page for details - https://docs.mongodb.com/manual/reference/connection-string/).You can then use nslookup or a similar tool to determine what the IP of the server is based on it’s hostname.Hope this helps and if you want to try out this commands, you can always use Atlas’s Free Tier to create a cluster and then test these commands mentioned to see how it works before checking your other server.Kindest regards,\nEoin",
"username": "Eoin_Brazil"
}
] | MongoDB server IP address | 2021-04-17T12:10:59.535Z | MongoDB server IP address | 12,173 |
null | [
"node-js"
] | [
{
"code": " script.\n document.addEventListener('DOMContentLoaded', function (databook) {\n var calendarEl = document.getElementById('calendar');\n var initdate = new Date();\n var calendar = new FullCalendar.Calendar(calendarEl, {\n function(){\n var eventsArray=[];\n bookdata.forEach(function (element){\n eventsArray.push({\n title:element.title,\n start:element.start,\n end:element.end })\n })\n },\n initialView: 'dayGridMonth',\n timeZone:'Europe/Athens',\n initialDate: initdate,\n handleWindowResize:true,\n\n headerToolbar: {\n left: 'prev,next today',\n center: 'title',\n right: 'dayGridMonth,timeGridWeek,timeGridDay'\n },\n eventTimeFormat:{\n hour: 'numeric',\n minute: '2-digit',\n\n },\n eventDisplay:'auto',\n views:{\n timeGrid:{\n formatDateTime:'DD/MM/YYYY HH:mm'\n }\n },\n events:eventsArray\n\n\n });\n calendar.addEvent()\n calendar.render();\n\n });\n ```",
"text": "Hello. Iam trying to send an array of events objects from my mongodb to fullcalendar. What i want is to pass the events array as a simple argument inside the calendar function here is my code.Note that databook is my events array fetching from my express/node js backend, but when i run calendar then it throws me error Undefined eventsArray. Any suggestion is welcomed thank you",
"username": "petridis_panagiotis"
},
{
"code": "",
"text": "Hi @petridis_panagiotis,If i read the code correctly the eventsArray is defined only in the callback function and cannot be used outside.I recommend init it outside of the function , maybe next to initdate.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks, i will try, in general the problem is that i can not pass json objects to client side. Is there any way to construct my calendar on backend, then pass my calendar object as argument to res.render and then render it on front end?",
"username": "petridis_panagiotis"
}
] | Full Calendar interfacing with MongoDB | 2021-04-17T15:09:45.096Z | Full Calendar interfacing with MongoDB | 5,131 |
[
"app-services-cli"
] | [
{
"code": "",
"text": "Hi!I just noticed some new files appear when I exported a realm app via the cli.Are there any release notes about how these environments work, or is it still under development?It will be great to get this going as currently I have a folder for dev and prod withing the project and have to have a script to run through some of the .json files to turn off scheduled triggers in dev etc.!Thanks!Screen Shot 2021-02-15 at 12.44.303922×1602 274 KB",
"username": "Adam_Holt"
},
{
"code": "",
"text": "Hi Adam,Thanks for posting your first question and welcome to the community!I have confirmed with our team that the addition of the “environments” folder in your export is part of a new feature that we are still working on.At this time the CLI should not read or process any files in this folder so it can be ignored for now. I will look out for new information regarding this feature and circle back around to update this thread with any new documentation that is released.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "Great, thanks Manny!Looking forward to the new functionality.Thanks ",
"username": "Adam_Holt"
},
{
"code": "",
"text": "This feature is live now. Please see the relevant documentation below:https://docs.mongodb.com/realm/config/environments/#std-label-appconfig-environment",
"username": "Mansoor_Omar"
}
] | New files from CLI for environments | 2021-02-15T00:54:23.797Z | New files from CLI for environments | 1,901 |
|
null | [
"replication",
"monitoring"
] | [
{
"code": "",
"text": "Hi all i met a strange question .Mongodb version 4.0.12 communityAn replset with node A(primary node),B(sec node),C(sec node) i want to copy node B to new machine and then remove it .My detail operation is login the pri and set the node B priority is 0 and hidden is true then reconfig it . i see it work and node B transfer to hidden node , next login the node B and exec db.fsyncLock() , it also work , next i scp node B dir to new machine then start it success and add it in the replset .so as for now in the replset i have 4 nodes , ABCD ,but in the pri i exec rs.status() i see node B lastHeartbeatRecv still was t1(the time i add node C in the replset), and it won’t changed . but lastHeartbeat is still change to the last.And if i exec rs.status() in the node B i just can see node A and node C i know it cause of node B was locked and can not update config , but in the result the node A and node C lastHeartbeat and lastHeartbeatRecv still was t1.I exec the db.setLogLevel(2,“replication.heartbeats”); in the pri node and i can see in the mongod.log the node B heartbeat request to node A .So why lastHeartbeatRecv is not updated in the node A and node C and node D ?",
"username": "zhijia_zhang"
},
{
"code": "",
"text": "the detail operation like this :\nin the primary node choose a sec node (NODEB)set the priority 0 and hidden true and reconfig it .\na.members[1].priority=0\na.members[1].hidden=true\nrs.reconfig(a)then in the NODEB that was hidden exec\ndb.fsyncLock()you can see whatever in the pri node and NODEB that config version was same and lastHeartbeat and lastHeartbeatRecv was still updated.next you can scp -r NODEB dir to the new machine and then start it and add it to the replset . TIME1after that you can exec rs.status in the pri node you will see that NODEB lastHeartbeat is still updated but lastHeartbeatRecv was TIME1 and will not change anymore.\nalso you can exec rs.status in the NODEB and you will see pri node and sec node’s lastHeartbeat and lastHeartbeatRecv was TIME1 and also not change anymore.i exec db.setLogLevel(2,“replication.heartbeats”); in the pri node and sec node .\nin the node log i can see NODEB send request to the pri node and also pri node will send response to the NODEB ,\nso why lastHeartbeatRecv was not changed in the pri and other sec node ? and why in the NODEB pri and other sec lastHeartbeat and lastHeartbeatRecv was not changed",
"username": "zhijia_zhang"
}
] | Why lastHeartbeatRecv can not updated | 2021-04-16T03:31:17.716Z | Why lastHeartbeatRecv can not updated | 1,864 |
null | [
"aggregation",
"dot-net",
"atlas-search",
"text-search"
] | [
{
"code": "",
"text": "Using C# how can I give current date in origin of near object in search stage. I’ve already tried DateTime.Now & new Date() but it always throw error on aggregation.Here is the my complete question:",
"username": "Waleed_Nasir"
},
{
"code": "new BsonDateTime(origin)",
"text": "could you try to usenew BsonDateTime(origin) to convert the date to a BsonDataTime.",
"username": "Marcus"
},
{
"code": "",
"text": "@Marcus Thanks for you response.I’ve tried what you’ve told me but still it is throwing same error.You can see my codevar nearCreateDateObject = new NearClauseSearchModel\n{\npath = “CreatedDate”,\norigin = new BsonDateTime(DateTime.Today.Date),\npivot = Convert.ToUInt64(“7776000000”),\nscore = GenericObject(“boost”, GenericObject(“value”, createDateScore))\n};and this is the final bson.\n“near”:{\n“path”:“CreatedDate”,\n“origin”:“2021-04-17T19:00:00Z”,\n“pivot”:“NumberLong(”“7776000000\"”)\",\n“score”:{\n“boost”:{\n“value”:1\n}\n}\n}",
"username": "Waleed_Nasir"
}
] | Command aggregate failed: Remote error from mongot :: caused by :: \"origin\" must be a date, number, or geoPoint (from \"compound.should[1].near\") | 2021-04-18T00:52:06.216Z | Command aggregate failed: Remote error from mongot :: caused by :: \”origin\” must be a date, number, or geoPoint (from \”compound.should[1].near\”) | 4,261 |
null | [
"atlas-triggers",
"api"
] | [
{
"code": "",
"text": "I want to Create a Scheduled Trigger by using MongoDB webhook Call.\ne.g i will call webhook and pass parameter to it using post request and paramter are time. Then that Webhook should Create a Scheduled Trigger for Execution at given time.",
"username": "Its_Me"
},
{
"code": "",
"text": "Hi @Its_Me,Are you looking to schedule a reoccurring event or just scheduled based on your document data?See the following solution for document basef trigger trickDatabase Triggers based on document dateOtherwise you can use the administration api for realm to create triggers via context.http service on your webhookThanks\nPavel",
"username": "Pavel_Duchovny"
}
] | How to Create a Scheduled Trigger using MongoDB Webhook by passing its parameters | 2021-04-17T10:55:10.134Z | How to Create a Scheduled Trigger using MongoDB Webhook by passing its parameters | 4,421 |
null | [] | [
{
"code": "echo$document['dateField']echo \"Date Played: \".$round['roundDate'].\"<br>\";intecho \"Date Played: \".date(\"DD/MM/YYYY\", $round['roundDate']).\"<br>\";",
"text": "I have a returned query, with documents that contain MongoDB dates.If I simply echo the $document['dateField'], it displays as:echo \"Date Played: \".$round['roundDate'].\"<br>\";Date Played: 1620255600000I would prefer this as:Date Played: 15/04/2021Or:Date Played: 15th April 2021I have tried, which wants an int for the 2nd parameter:echo \"Date Played: \".date(\"DD/MM/YYYY\", $round['roundDate']).\"<br>\";But this errors with:Fatal error : Uncaught TypeError: date(): Argument #2 ($timestamp) must be of type ?int, MongoDB\\BSON\\UTCDateTime given in /var/www/html/selectComp.php:39 Stack trace: #0 /var/www/html/selectComp.php(39): date(‘DD/MM/YYYY’, Object(MongoDB\\BSON\\UTCDateTime)) #1 {main} thrown in /var/www/html/selectComp.php on line 39I know this is just a date conversion, but it always confuses me…",
"username": "Dan_Burt"
},
{
"code": "$round['roundDate']->toDateTime()->format(\"d M Y\");",
"text": "Fixed with (but not played with other format styles / options):$round['roundDate']->toDateTime()->format(\"d M Y\");",
"username": "Dan_Burt"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Display UTCDateTime in PHP | 2021-04-15T09:58:45.961Z | Display UTCDateTime in PHP | 7,285 |
[
"queries",
"indexes"
] | [
{
"code": "kit1kit2$orkit1kit2",
"text": "I have a collection with almost 2.4 million docs (and growing strongly).I have the following unique, compound index for the fields kit1 and kit2:Screenshot 2021-04-17 at 18.21.02903×73 8.48 KBWhen I run a simple $or query where I check for both the kit1 and kit2 field I was expecting that it would use the index. But the explain feature tells me that there’s no index available for this query:Screenshot 2021-04-17 at 18.20.12886×704 59.2 KBWhile Atlas is still fast (50 ms for the COLLSCAN), I want this query to use an index.Question 1:\nWhy doesn’t it use the existing index?Question 2:\nWhat index do I need to add or how can I change my query to let it use the existing index?Thanks in advance!",
"username": "Andreas_West"
},
{
"code": "$or$or$or",
"text": "Hello @Andreas_West, the reason is this: $or Clauses and Indexes. It says:When evaluating the clauses in the $or expression, MongoDB either performs a collection scan or, if all the clauses are supported by indexes, MongoDB performs index scans. That is, for MongoDB to use indexes to evaluate an $or expression, all the clauses in the $or expression must be supported by indexes. Otherwise, MongoDB will perform a collection scan.So, for the query to perform an index scan you need to have indexes on both the fields - individually.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks @Prasad_Saya. Is it because the index is an unique one?I thought that it would be redundant to have the following 3 indexes:kit1, kit2 as compound index\nkit1\nkit2My (rather simplistic) thinking was that it could take the 1st index for all queries that require either the kit1 or kit2 or kit1 & kit2 field.But reading your answer and seeing the explain I was obviously wrong.",
"username": "Andreas_West"
},
{
"code": "(kit1 == \"x\" OR kit2 == \"y\" )kit1kit2kit1+kit2kit2kit1kit1kit2 kit1+kit2kit1",
"text": "Is it because the index is an unique one?Nope.The query needs indexes on the fields on both side of the or. That is if your query filter is:(kit1 == \"x\" OR kit2 == \"y\" )then, there should be indexes on kit1, kit2, which would be used individually. So, if you have the following three indexes:kit1, kit2 as compound index\nkit1\nkit2I think you can keep the first compound index (kit1+kit2) and the index on kit2 (and drop the index on kit1). Another option is drop the compound index (in case you don’t have use of it elsewhere) and retain the two individual single field indexes on kit1 and kit2.See this note about Compound Index Prefixes to understand why an index on kit1+kit2 can be used in lieu of an index on kit1 only.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Why is MongoDb not taking an existing, unique compound index for an $or query? | 2021-04-17T10:25:00.986Z | Why is MongoDb not taking an existing, unique compound index for an $or query? | 7,787 |
|
null | [
"aggregation",
"crud",
"golang"
] | [
{
"code": "type MyStruct struct {\n Field1 map[string]struct{}\n Field2 []string\n Index int\n Field4 SomeOtherStruct\n}\n\n\n// this works, however using $inc instead of set does not work. That's ok, I can deal.\ncollection.UpdateOne(ctx, bson.M{\"id\": id, \"version\": version}, mongo.Pipeline{\n\t\t{{Key: \"$set\", Value: bson.M{\"index\": 3}}},\n\t\t{{Key: \"$addFields\",\n\t\t\tValue: bson.M{\n\t\t\t\t\"field2\": bson.M{\n\t\t\t\t\t\"$arrayElemAt\": []interface{}{\"$field2\", \"$index\"},\n\t\t\t\t},\n\t\t\t},\n\t\t}},\n\t})\n\n// A little frustrating I can't use $index here, but again I'll live.\ncollection.UpdateOne(ctx, bson.M{\"id\": id}, bson.D{\n\t{Key: \"$set\", Value: MyStruct{}}, {Key: \"$push\", Value: bson.M{\n\t\t\"field2\": bson.M{\n\t\t\t\"$each\": myStringArray,\n\t\t\t\"$position\": \"$index\", // can't lookup the index here\n\t\t},\n\t}})\n\n// But wait, I can't use the struct in a pipeline?!? (Hint: it complains about empty fields). Ok, starting to reconsider using mongo now...\ncollection.UpdateOne(ctx, bson.M{\"id\": id, \"version\": version}, mongo.Pipeline{\n\t\t{{Key: \"$set\", MyStruct{}}}, // doesn't like the empty fields here.\n\t\t{{Key: \"$addFields\",\n\t\t\tValue: bson.M{\n\t\t\t\t\"field2\": bson.M{\n\t\t\t\t\t\"$arrayElemAt\": []interface{}{\"$field2\", \"$index\"},\n\t\t\t\t},\n\t\t\t},\n\t\t}},\n\t})\n",
"text": "I’m using the go driver. Some of these issues are likely due to the go driver, but I believe the bulk is due to the API itself.Here’s some inconsistencies:And there’s no fix! I can’t use addFields, or set with a lookup on arrayElemAt outside of the pipeline, and I can’t use set with the struct that may have empty fields in a pipeline.Doing 2 queries to accomplish this is not even the frustrating part. The frustrating part is the lack of documentation around what is allowed to be strung together under what circumstances. Maybe I’ll understand it more with time. But API design is important, and it’s clear there is room for improvement here",
"username": "Sean_Teeling"
},
{
"code": "// this works, however using $inc instead of set does not work. That's ok, I can deal.\ncollection.UpdateOne(ctx, bson.M{\"id\": id, \"version\": version}, mongo.Pipeline{\n\t\t{{Key: \"$set\", Value: bson.M{\"index\": 3}}},\n\t\t{{Key: \"$addFields\",\n\t\t\tValue: bson.M{\n\t\t\t\t\"field2\": bson.M{\n\t\t\t\t\t\"$arrayElemAt\": []interface{}{\"$field2\", \"$index\"},\n\t\t\t\t},\n\t\t\t},\n\t\t}},\n\t})\n$inc$addFields$set{{Key: \"$set\", Value: bson.M{\"index\": 3}}}n{ $addFields: { $add: [ \"$index\", n ] } }$set$addFields// A little frustrating I can't use $index here, but again I'll live.\ncollection.UpdateOne(ctx, bson.M{\"id\": id}, bson.D{\n\t{Key: \"$set\", Value: MyStruct{}}, {Key: \"$push\", Value: bson.M{\n\t\t\"field2\": bson.M{\n\t\t\t\"$each\": myStringArray,\n\t\t\t\"$position\": \"$index\", // can't lookup the index here\n\t\t},\n\t}})\n$push\"$position\": \"$index\", // can't lookup the index here// But wait, I can't use the struct in a pipeline?!? (Hint: it complains about empty fields). Ok, starting to reconsider using mongo now...\ncollection.UpdateOne(ctx, bson.M{\"id\": id, \"version\": version}, mongo.Pipeline{\n\t\t{{Key: \"$set\", MyStruct{}}}, // doesn't like the empty fields here.\n\t\t{{Key: \"$addFields\",\n\t\t\tValue: bson.M{\n\t\t\t\t\"field2\": bson.M{\n\t\t\t\t\t\"$arrayElemAt\": []interface{}{\"$field2\", \"$index\"},\n\t\t\t\t},\n\t\t\t},\n\t\t}},\n\t})\n{{Key: \"$set\", MyStruct{}}}, // doesn't like the empty fields here.bson.D{{\"$set\", MyStruct{}}}\nbson.D{{Key: \"$set\", Value: MyStruct{}}}\n",
"text": "Hello @Sean_Teeling, welcome to the MongoDB Community forum! I have tried to address your issues (three of them).The $inc is a MongoDB Query Language’s (MQL) update operator. The MQL operators do not work with Aggregation Pipeline’s $addFields and $set stages. In the above update operation, you are using a pipeline to perform the updates, so use the pipeline operators.Why MQL and Pipeline for update operations?Most of the general purpose updates can be handled with the MQL operators, and for more complex operations the pipeline is the tool to go with as aggregation operators are a larger and versatile set of tools.Now consider your code: {{Key: \"$set\", Value: bson.M{\"index\": 3}}}\nSince, you want to increment the value by n, you can also write it as (in native code):\n{ $addFields: { $add: [ \"$index\", n ] } }Note: The $set and $addFields are the same - when used within a pipeline.Reference:This update operation is an MQL update operation. The $push you are using is a MQL update operator used for working with array fields.\"$position\": \"$index\", // can't lookup the index hereYes, that error is correct and is as expected. With MQL update operators you cannot assign the document field values to other document fields (i.e., use them on the right-hand side of an update operation, as you are trying). This is where you should be using the pipeline, which allows assigning same document field values.About your code: {{Key: \"$set\", MyStruct{}}}, // doesn't like the empty fields here.You can use either of the following two - both worked for me:",
"username": "Prasad_Saya"
}
] | Inconsistent API for updates and aggregation | 2021-04-16T22:22:04.784Z | Inconsistent API for updates and aggregation | 2,092 |
null | [] | [
{
"code": "{\n \"data\": {\n \"plans\": {\n \"1\": \"14\",\n \"2\": \"20\",\n \"3\": \"40\"\n }\n }\n}\n{ \"_id\": { \"$oid\": \"5fe3ff5d909016064978f2bd\" }, \"plans\": [null, \"14\", \"20\", \"40\"] }\n\n",
"text": "I have a simple .json which I am trying to import:When I use MongoDB Compass to directly import the json file, the plans object is converted into an array:Am I doing something wrong? Or can I not use numbers as keys in JSON",
"username": "promisetech"
},
{
"code": "",
"text": "I think it is doing what you told it to do \nWhat is an object with numbered indices other than, precisely, an array?\nSo Compass inserts your JSON as an array with a null value for the 0th offset.",
"username": "Jack_Woehr"
},
{
"code": "{\n \"data\": {\n \"plans\": {\n \"1\": \"14\",\n \"2\": \"20\",\n \"3\": \"40\",\n \"5\": \"50\"\n }\n }\n}\n",
"text": "I never thought of it like that. But what if my object isn’t in sequence?",
"username": "promisetech"
},
{
"code": "plans : [ null , 14 , 20 , 40 , null , 50 ]\n> db.test.insertOne( { _id:1 , plans : { \"1\": \"one\" , \"3\" : \"three\" } } )\n{ \"acknowledged\" : true, \"insertedId\" : 1 }\n> db.test.find()\n{ \"_id\" : 1, \"plans\" : { \"1\" : \"one\", \"3\" : \"three\" } }\n",
"text": "You would end up with:Note that plans.X would gives the same value whether plans is an array or an object. Array are simply, more efficient as no storage is use for the key (index). Except may be for sparse array.Also note that with mongo (the old shell, I do not know about mongosh), you do not end up with an array.",
"username": "steevej"
},
{
"code": "mongoimportmongoimport> mongoimport --db dbName --collection collectionName <fileName.json \n{\n \"_id\": { \"$oid\": \"5fe3ff5d909016064978f2bd\" },\n \"data\": {\n \"plans\": {\n \"1\": \"14\",\n \"2\": \"20\",\n \"3\": \"40\"\n }\n }\n}\n{ \n \"_id\": { \"$oid\": \"5fe3ff5d909016064978f2bd\" }, \n \"plans\": [null, \"14\", \"20\", \"40\"] \n}\n",
"text": "Hello, @steevej and @Jack_Woehr, I am not sure why its working properly through mongoimport command and why its not through mongo compass,Import through mongoimport: (Working)This results exact input result:Import from mongo compass: (Not Working)This results:Ultimately both are same JSON import and using same JSON file, but results are different why it is happening?",
"username": "turivishal"
},
{
"code": "mongomongoshmongoimportcompass",
"text": "Every time one adds a tool layer on top of an existing layer one adds complexity and something new to debug.\nIf I were troubled by the phenomenon @turivishal is describing here is the way I would explore this:In fact, you have mostly already done all of this.\nThe answers you get are what the truth is!\nIf Compass does something different, the answer is in the Compass source, which is publicly available.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "@Jack_Woehr Now that we’ve all acknowledged that this is in fact a bug and not correct syntax, Isn’t there a process for bugs to be submitted within the mongodb community?",
"username": "promisetech"
},
{
"code": "",
"text": "https://jira.mongodb.org/plugins/servlet/samlsso",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "The issue has been submitted https://jira.mongodb.org/browse/COMPASS-4548It will take some days to process.",
"username": "turivishal"
},
{
"code": "",
"text": "Isn’t there a process for bugs to be submitted within the mongodb community?Hi @promisetech,There is indeed a standard process to Submit a Bug Report for MongoDB Compass. The short scoop is that bug reports should be submitted to the COMPASS project in MongoDB’s JIRA issue tracker. The Bug Report documentation link includes a screenshot with some further details, as well as the procedure for submitting Feature Requests.Thanks @turivishal for being proactive and submitting (and sharing) a bug report.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "doc = {\n \"data\": {\n \"plans\": {\n \"2\": \"20\",\n \"1\": \"14\",\n \"3\": \"40\"\n }\n }\n}\n{\n \"data\": {\n \"plans\": {\n \"1\": \"14\",\n \"2\": \"20\",\n \"3\": \"40\"\n }\n }\n}\ndoc = {\n \"data\": {\n \"plans\": {\n \"v2\": \"20\",\n \"v1\": \"14\",\n \"v3\": \"40\"\n }\n }\n}\n{\n \"data\": {\n \"plans\": {\n \"v2\": \"20\",\n \"v1\": \"14\",\n \"v3\": \"40\"\n }\n }\n}\nmongoimportMapmongoimport",
"text": "Am I doing something wrong? Or can I not use numbers as keys in JSONHi @promisetech,JSON requires key names to be strings, so using numeric strings is technically valid. However, JavaScript has some interesting assumptions (and unexpected coercion) for numeric values so I’d recommend against using numeric strings as keys.Here’s an example of JavaScript producing unexpected results for what appear to be similar objects.a) Doc with string key names that could be interpreted as numbers:==> JavaScript sorts the object keysb) Doc with string key names that are alphanumeric:==> Object keys are maintained in insertion order!I am not sure why its working properly through mongoimport command and why its not through mongo compass,As my example above demonstrates, the order of keys in a generic JavaScript Object is not defined (or guaranteed to be preserved). In particular, there is some legacy handling for key names that can be parsed as a 32-bit integer: 164 - v8 - V8 JavaScript Engine - Monorail. JavaScript does have order-preserving types like Map objects and arrays, but the default Object behaviour is almost what you expect (except when it isn’t).Since Compass is a JavaScript application, it inherits some of these legacy quirks that have to be coded around.The mongoimport tool is written in Go, so its JSON implementation does not have to deal with the added quirks of JavaScript handling.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "responses = {\n \"101\":\"text\",\n \"201\":'text\n}\nresponses = [null, null , .......\"text\", null, null, ........... \"text\"]db = pymongo.MongoClient(MONGODB_URI).texam\nf = db.responses.find()\nfor i in f: \n temp = {}\n flag = False\n for n, j in enumerate(i['responses']): \n if j != None:\n temp[str(n)] = j\n else: \n flag = True\n print(i)\n if flag:\n db.responses.update_one(i, {\"$set\":{'responses':temp}})\n",
"text": "I also encountered the same problem I took a backup of a collection in JSON using compass and after some modifications I had to imported the same JSON file.Each document of my collection has a responses object which is parsed as an array after import.Original objectafter importresponses = [null, null , .......\"text\", null, null, ........... \"text\"]I wrote this script. This helped me out cleaning multiple collections.I had to use flag as some documents were right.",
"username": "Harsh_Singhvi"
}
] | Mongodb import object with numbers as keys results in array | 2020-12-25T03:07:03.966Z | Mongodb import object with numbers as keys results in array | 9,869 |
null | [
"swift"
] | [
{
"code": "",
"text": "I’m trying to migrate a UIKit app to SwiftUI.I am having issues figuring out how to bundle a populated Realm.Please let me know if you have an example this swift code.Thank you.",
"username": "Paul_Simmons"
},
{
"code": "",
"text": "There should not be any significant difference as there are no UI elements to Realm. The code and process should be the same.Do you have existing code you’re having difficulty with or you just don’t know how to do it in general?If you need to get started, the current documentation is lacking a bit on this topic but the legacy documentation has a section about Bundling A Realm",
"username": "Jay"
}
] | Bundling a Realm with SwiftUI | 2021-04-16T13:42:10.259Z | Bundling a Realm with SwiftUI | 1,724 |
null | [
"crud",
"golang"
] | [
{
"code": "{\n \"_id\" : ObjectId(\"60784451b5d2b589bcb138e8\"),\n \"regions\" : [ \n {\n \"name\" : \"us-east-1\",\n \"steps\" : [ \n {\n \"name\" : \"step0\",\n \"number\" : 0,\n \"completed\" : false,\n \"successful\" : false,\n \"message\" : \"\"\n }, \n {\n \"name\" : \"step1\",\n \"number\" : 1,\n \"completed\" : false,\n \"successful\" : false,\n \"message\" : \"\"\n }\n ]\n }, \n {\n \"name\" : \"us-west-2\",\n \"steps\" : [ \n {\n \"name\" : \"step0\",\n \"number\" : 0,\n \"completed\" : false,\n \"successful\" : false,\n \"message\" : \"\"\n }, \n {\n \"name\" : \"step1\",\n \"number\" : 1,\n \"completed\" : false,\n \"successful\" : false,\n \"message\" : \"\"\n }\n ]\n }\n ]\n}\n\nfunc UpdateStep(id string, step *Step) bool {\n\tctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)\n\tdefer cancel()\n\tcollection := config.Singleton().Database.Collection(\"deploys\")\n\tobjID, _ := primitive.ObjectIDFromHex(id)\n\tfilter := options.ArrayFilters{\n\t\tFilters: bson.A{bson.M{\"_id\": objID}, bson.M{\"x.name\": step.Region}, bson.M{\"y.name\": step.Name}}}\n\topts := options.Update().SetUpsert(false)\n\tupdate := bson.M{\n\t\t\"$set\": bson.M{\n\t\t\t\"regions.$[x].steps.$[y].completed\": step.Completed,\n\t\t\t\"regions.$[x].steps.$[y].successful\": step.Successful,\n\t\t},\n\t}\n\tret, err := collection.UpdateOne(ctx, filter, update, opts)\n\tif err != nil {\n\t\treturn false\n\t}\n\n\treturn true\n}\n",
"text": "my document:What I wanted to do: update the particular step of a particular region.golang code snipet:Got error when ran:\nmultiple write errors: [{write errors: [{No array filter found for identifier ‘x’ in path ‘regions.[x].steps.[y].completed’}]}, {}]Any help is greatly appreciated.",
"username": "mystery_bbs"
},
{
"code": "arrayFiltersUpdateOneUpdateOneUpsertArrayFilters// Create an instance of an options and set the desired options\nupsert := true\narrayFilters := options.ArrayFilters{ // .... }\nupdateOpts := options.UpdateOptions{\n ArrayFilters: &arrayFilters\n Upsert: &upsert\n}\nret, err := collection.UpdateOne(ctx, filter, update, &updateOpts)filterobjIDfilter := {bson.M{\"_id\": objID}",
"text": "Hello @mystery_bbs, welcome to the MongoDB Community forum!What I wanted to do: update the particular step of a particular region.To update a nested array based upon a condition matching the nested array element’s field, you need to use the arrayFilters option of the UpdateOne operation .The UpdateOne method takes the arguments: ctx, filter, update and opts. There are two UpdateOptions you are working with; these are the Upsert and the ArrayFilters. The code for update options can look like this in your case.And, use the update options:ret, err := collection.UpdateOne(ctx, filter, update, &updateOpts)Your filter field still requires a definition - it needs to be based upon what you want to filter upon. I think you want to filter upon the objID. Then the filter would be:filter := {bson.M{\"_id\": objID}For usage of array filters see this post (with native code, not golang): Updating nested array of objects using $addToSet",
"username": "Prasad_Saya"
},
{
"code": "func UpdateStep(id string, step *Step) bool {\n\tctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)\n\tdefer cancel()\n\tcollection := config.Singleton().Database.Collection(\"deploys\")\n\tobjID, _ := primitive.ObjectIDFromHex(id)\n\tfilter := bson.D{primitive.E{Key: \"_id\", Value: objID}}\n\tarrayFilters := options.ArrayFilters{Filters: bson.A{bson.M{\"x.name\": step.Region}, bson.M{\"y.name\": step.Name}}}\n\tupsert := true\n\topts := options.UpdateOptions{\n\t\tArrayFilters: &arrayFilters,\n\t\tUpsert: &upsert,\n\t}\n\tupdate := bson.M{\n\t\t\"$set\": bson.M{\n\t\t\t\"regions.$[x].steps.$[y].completed\": step.Completed,\n\t\t},\n\t}\n\tret, err := collection.UpdateOne(ctx, filter, update, &opts)\n\tif err != nil {\n\t\tfmt.Printf(\"error updating db: %+v\\n\", err)\n\t\treturn false\n\t}\n\n\treturn true\n}\n",
"text": "@Prasad_Saya thank you for the help! This works:",
"username": "mystery_bbs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Updating a document in an array of array | 2021-04-15T18:53:50.156Z | Updating a document in an array of array | 10,529 |
[] | [
{
"code": "select dbo.VisitNo(u.id) as visitNo , o.id, o.PatientId, u.VisitDate\nfrom dbo.Observation o \njoin dbo.ProspectiveFollowUp u on u.rootid = o.Id\norder by o.PatientId\nCREATE FUNCTION dbo.VisitNo(@Id int)\nRETURNS INT\nAS\nBEGIN\n\n DECLARE @VisitDate date, @RootId int\n SELECT @VisitDate=VisitDate, @RootId=RootId FROM dbo.ProspectiveFollowUp WHERE Id=@Id\n\n RETURN (SELECT COUNT(1) FROM dbo.ProspectiveFollowUp WHERE RootId = @RootId AND VisitDate <= @VisitDate)\n \nEND\n{\n \"_id\",\n \"values\":[\n {\n \"Id\",\n \"PatientId\",\n \"ProspectiveFollowUp\":[\n \"Id\",\n \"RootId\",\n \"VisitDate\"\n ]\n }\n ]\n}\ndb.dbo_ObservationJSON.aggregate([\n { $unwind: '$values' },\n {\n $project: {\n _id: 0,\n Id: '$values.Id',\n PatientId: '$values.PatientId',\n VisitDate: '$values.ProspectiveFollowUp.VisitDate'\n }\n },\n { $unwind: '$VisitDate' },\n { $sort: { PatientId: 1 } }\n])\nvar id = 4\n\nvar result = db.dbo.ObservationJSON.aggregate([ \n{ $unwind: '$values' }, \n{ $unwind: '$values.ProspectiveFollowUp' }, \n{ $project: { Id: '$values.ProspectiveFollowUp.Id', RootId: '$values.ProspectiveFollowUp.RootId', VisitDate: '$values.ProspectiveFollowUp.VisitDate', _id:0 }}, \n{ $match: { Id: id }} \n]).toArray()[0]\n\nvar totalResult = db.dbo_ObservationJSON.aggregate([{\n $unwind: {\n path: '$values'\n }\n}, {\n $unwind: {\n path: '$values.ProspectiveFollowUp'\n }\n}, {\n $project: {\n Id: '$values.ProspectiveFollowUp.Id',\n RootId: '$values.ProspectiveFollowUp.RootId',\n VisitDate: '$values.ProspectiveFollowUp.VisitDate'\n }\n}, {\n $match: {\n RootId: result.RootId,\n VisitDate: {\n $lte: result.VisitDate\n }\n }\n},{$count: 'total'}]).toArray()[0]\n",
"text": "I’m trying to grasp the mongodb concepts by translating some of our sql queries into mongo aggregation framework.I have an sql code:The dbo.VisitNo is implemented as:result:\nMy document in Mongo has following structure:The values array has always one element, but that’s how the data was imported. ProspectiveFollowUp has at least one record.Creating query for retrieving the data was rather easy:The harder part is the custom function itself. I can’t think outside od tsql world yet, so I have hard time getting this to work. I have translated the function into mongo the following way:But don’t know how to integrate it into the aggregation pipeline above. Can I write the entire sql query equivalent into one mongo aggregate expression?",
"username": "lkurylo"
},
{
"code": "db.dbo_ObservationJSON.aggregate([\n { $unwind: '$values' },\n { $unwind: { path: '$values.ProspectiveFollowUp', \"includeArrayIndex\": \"index\" } },\n {\n $project: {\n _id: 0,\n VisitNo: { $add: ['$index', 1] },\n RootId: '$values.ProspectiveFollowUp.RootId',\n PatientId: '$values.PatientId',\n VisitDate: '$values.ProspectiveFollowUp.VisitDate'\n }\n },\n {\n $sort: {\n PatientId: 1\n }\n }\n]);\n",
"text": "I got it to work the following way:Can anybody suggest me, can to write it differently, without the index for example?",
"username": "lkurylo"
}
] | Translate sql query into mongodb query | 2021-04-15T10:19:39.337Z | Translate sql query into mongodb query | 2,464 |
|
null | [
"monitoring"
] | [
{
"code": "mongod 8955 root *366u IPv4 120213606 0t0 TCP testmanager:33445->node03:49816 (CLOSE_WAIT)\nmongod 8955 root *367u IPv4 120213789 0t0 TCP testmanager:33445->node03:49860 (CLOSE_WAIT)\nmongod 8955 root *368u IPv4 120402126 0t0 TCP testmanager:33445->node03:49864 (CLOSE_WAIT)\nmongod 8955 root *369u IPv4 120437763 0t0 TCP testmanager:33445->node03:49866 (CLOSE_WAIT)\n2021-02-24T22:50:04.692+0000 I - [listener] pthread_create failed: Resource temporarily unavailable\n2021-02-24T22:50:04.692+0000 W EXECUTOR [conn480782] Terminating session due to error: InternalError: failed to create service entry worker thread\n2021-02-24T22:50:05.589+0000 I - [listener] pthread_create failed: Resource temporarily unavailable\n2021-02-24T22:50:05.589+0000 W EXECUTOR [conn480783] Terminating session due to error: InternalError: failed to create service entry worker thread\n",
"text": "Hi Guys,We are running 32k connections on mongodb so facing below error on log file. After suggestions from mongo community to check pid_max and threads-max having little bit high only and number of connections are opened which means sockets are quite high and not closed like below.cat /proc/sys/kernel/pid_max 4194304\ncat /proc/sys/kernel/threads-max 94465After some time socket descriptors reaching max limit. And mongo throwing an errror for thread creation.https://jira.mongodb.org/browse/SERVER-17687Below observation copied from above jira ticket.If the issue is not the system-wide limit on the number of threads then the resource exhaustion is somewhere else. You’ll need to investigate what resource is being exhausted (memory and number of file descriptors / sockets are the usual suspects) or simply lower the number of threads. If you’re not using connection pooling you’re probably running out of sockets (netstat -a | grep TIME_WAIT may help).As per analysis, sockets descriptors getting exhausted and mongo thread creating is getting failed. Any suggestions why sockets are not getting closed or any workaround for this.",
"username": "Vasanth_M.Vasanth"
},
{
"code": "2021-04-15T10:53:15.304+0000 I - [listener] pthread_create failed: Resource temporarily unavailable\n2021-04-15T10:53:15.305+0000 W EXECUTOR [conn12574] Terminating session due to error: InternalError: failed to create service entry worker thread\n",
"text": "I don’t know if this is the same issue, but it may help.Our Mongo instances started to crash after an upgrade from Ubuntu 16.04 to 18.04. The only error in the > Mongo logs was the same as above:I also found the following log in syslog:Apr 15 10:53:15 mongo-server kernel: [65474.378840] cgroup: fork rejected by pids controller in /system.slice/mongod.serviceSo I checked the service again and could see the tasks limit was set to 4618:my-user@mongo-server:~$ sudo service mongod status\n● mongod.service - High-performance, schema-free document-oriented database\nLoaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\nActive: active (running) since Wed 2021-02-24 15:21:38 UTC; 1 months 20 days ago\nMain PID: 17990 (mongod)\nTasks: 287 (limit: 4618)\nCGroup: /system.slice/mongod.service\n└─17990 /usr/bin/mongod --quiet --config /etc/mongod.confI changed this in the /lib/systemd/system/mongod.service file to unlimited by adding the below within [Service]TasksMax=infinityThen ran the below on all Mongo instances and this removed the Task limitsudo systemctl daemon-reload\nsudo service mongod restartThis has worked for us so far, hopefully it helps someone else.",
"username": "Niamh_Gibbons"
}
] | Pthread_create failed: Resource temporarily unavailable in mongo | 2021-03-25T06:01:28.026Z | Pthread_create failed: Resource temporarily unavailable in mongo | 9,768 |
null | [] | [
{
"code": "",
"text": "Hello , I am Willi and use Ubuntu . I just have run the installatiion of server and clients on the dir-sql-m folder.Error happened , as always :This was the error spitted out from the terminal by calling the mongod:MongoDB shell version v3.6.8\nconnecting to: mongodb://127.0.0.1:27017\n2021-04-15T14:50:49.097+0200 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused\n2021-04-15T14:50:49.098+0200 E QUERY [thread1] Error: couldn’t connect to server 127.0.0.1:27017, connection attempt failed :\nconnect@src/mongo/shell/mongo.js:257:13\n@(connect):1:6\nexception: connect failed\nAny clue abouzt what I am doing falsely in installation?Thanks, man, for any hints !!!",
"username": "willie_EkaIndLand"
},
{
"code": "",
"text": "You must start the mongod server.",
"username": "steevej"
},
{
"code": "",
"text": "Hey Steeve, thx 4 thr reply, but when i entered the cmd. / mongosh, then it throws me kinda same error anyway… Clue about it?",
"username": "willie_EkaIndLand"
},
{
"code": "",
"text": "Post a screenshot of what you do that shows the issue you are having.",
"username": "steevej"
}
] | Failure of connecting to mongo Server / Connection Refused | 2021-04-15T10:46:54.448Z | Failure of connecting to mongo Server / Connection Refused | 9,114 |
null | [
"queries",
"swift",
"atlas-device-sync"
] | [
{
"code": "func didUpdatedPills(pillAmount: Int, isAlarm: Bool) {\n try! appDelegate.realm!.write {\n if medObj.pillsIntStock.value != pillAmount{\n medObj.pillsIntStock.value = pillAmount\n\n }\n if medObj.refillRemainder.value != isAlarm{\n medObj.refillRemainder.value = isAlarm\n\n }\n \n }\n \n \n }\n",
"text": "Hello Realm Team ,\nI am facing issue when updating realm object. Realm object updated successfully then got crashHere is my codeAnd Crashlibc++abi.dylib: terminating with uncaught exception of type NSException\n*** Terminating app due to uncaught exception ‘RLMException’, reason: ‘Realm accessed from incorrect thread.’\nterminating with uncaught exception of type NSException\nCoreSimulator 732.18.0.2 - Device: iPhone 12 Pro Max (07EE0930-EF65-4C15-ADED-C5F7EF67DB8C) - Runtime: iOS 14.2 (18B79) - DeviceType: iPhone 12 Pro Max",
"username": "Muhammad_Awais"
},
{
"code": "",
"text": "Hi @Muhammad_Awais,how are you calling this function – e.g., from the main thread or in a callback?",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "call back and initialising realm on appDelegate using OpenSync",
"username": "Muhammad_Awais"
},
{
"code": "mainDispatchQueue.main.async { ... }",
"text": "The exception is indicating that you’re attempting to work with the realm from a thread other than the one that opened it.If the realm was opened in the main thread then you could try wrapping your callback code inside a DispatchQueue.main.async { ... } block.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Resolved i was using realm object in another thread",
"username": "Muhammad_Awais"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Crash after updating realm | 2021-04-13T05:48:26.684Z | Crash after updating realm | 4,719 |
[] | [
{
"code": "select u.RootId, u.VisitDate, count(u.VisitDate) over(partition by u.rootid order by u.visitdate) as VisitNo\nFrom dbo.ProspectiveFollowUp u",
"text": "I have following json structure:I want to group it into this format (column VisitNo):\n\nHow can I do this using aggregation framework? Can’t find any similar example. I thought maybe to use the $accumulator but don’t know how to do it.I want an equivalent of sql window function:",
"username": "lkurylo"
},
{
"code": "db.dbo_ObservationJSON.aggregate([\n { $unwind: '$values' },\n { $unwind: { path: '$values.ProspectiveFollowUp', \"includeArrayIndex\": \"index\" } },\n {\n $project: {\n _id: 0,\n // Id: '$values.ProspectiveFollowUp.Id',\n RootId: '$values.ProspectiveFollowUp.RootId',\n VisitDate: '$values.ProspectiveFollowUp.VisitDate',\n VisitNo: { $add: ['$index', 1] }\n }\n },\n {\n $sort: {\n RootId: 1\n }\n }\n])",
"text": "I finally got it to work. But is there any othere, maybe better solution? Without the index usage?",
"username": "lkurylo"
}
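For reference, on servers that support it (MongoDB 5.0 and later), the $setWindowFields stage gives a direct equivalent of the SQL window function quoted in the question, without relying on the array index. A sketch reusing the collection and field names from the post above:

db.dbo_ObservationJSON.aggregate([
  { $unwind: '$values' },
  { $unwind: '$values.ProspectiveFollowUp' },
  {
    // Roughly COUNT(*) OVER (PARTITION BY RootId ORDER BY VisitDate)
    $setWindowFields: {
      partitionBy: '$values.ProspectiveFollowUp.RootId',
      sortBy: { 'values.ProspectiveFollowUp.VisitDate': 1 },
      output: { VisitNo: { $documentNumber: {} } }
    }
  },
  {
    $project: {
      _id: 0,
      RootId: '$values.ProspectiveFollowUp.RootId',
      VisitDate: '$values.ProspectiveFollowUp.VisitDate',
      VisitNo: 1
    }
  },
  { $sort: { RootId: 1, VisitNo: 1 } }
])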
] | Equivalent of window function | 2021-04-16T09:06:29.182Z | Equivalent of window function | 1,761 |
|
null | [
"aggregation"
] | [
{
"code": "loginDate{ \"_id\" : \"n2LXm3pzpbruqpPWm\", \"userId\" : \"ouFd8iwxDbxpLrNpQ\", \"loginDate\" : ISODate(\"2021-04-07T18:12:04.867Z\"), \"logoutDate\" : ISODate(\"2021-04-07T18:12:37.098Z\"), \"minutes\" : 1 }\n{ \"_id\" : \"vSveSbd3bbHGeKCPv\", \"userId\" : \"ouFd8iwxDbxpLrNpQ\", \"loginDate\" : ISODate(\"2021-04-07T18:12:04.867Z\"), \"logoutDate\" : ISODate(\"2021-04-07T18:52:48.390Z\"), \"minutes\" : 41 }\n{ \"_id\" : \"LNL4hpfWhZ7SFfqAC\", \"userId\" : \"ouFd8iwxDbxpLrNpQ\", \"loginDate\" : ISODate(\"2021-04-08T15:00:47.425Z\"), \"logoutDate\" : ISODate(\"2021-04-08T17:00:14.512Z\"), \"numCardsClicked\" : 16, \"numNewTG\" : 7, \"minutes\" : 119 }\n{ \"_id\" : \"9hGcpdaso2DQYcvTk\", \"userId\" : \"ouFd8iwxDbxpLrNpQ\", \"loginDate\" : ISODate(\"2021-04-08T17:04:53.973Z\"), \"logoutDate\" : ISODate(\"2021-04-08T17:19:44.931Z\"), \"numCardsClicked\" : 1, \"minutes\" : 15 }\n{ \"_id\" : \"HThgQYW28L2aQb8uu\", \"userId\" : \"ouFd8iwxDbxpLrNpQ\", \"loginDate\" : ISODate(\"2021-04-08T17:04:53.973Z\"), \"logoutDate\" : ISODate(\"2021-04-08T19:36:45.179Z\"), \"numCardsClicked\" : 5, \"minutes\" : 152 }\n{ \"_id\" : \"7qvdsdsMhvkb2bHQh\", \"userId\" : \"ouFd8iwxDbxpLrNpQ\", \"loginDate\" : ISODate(\"2021-04-08T21:25:10.617Z\"), \"logoutDate\" : ISODate(\"2021-04-08T22:09:55.682Z\"), \"numCardsClicked\" : 4, \"minutes\" : 45 }\n{ \"_id\" : \"4mXw2wGhwRjPYzJQB\", \"userId\" : \"ouFd8iwxDbxpLrNpQ\", \"loginDate\" : ISODate(\"2021-04-09T00:15:29.073Z\"), \"logoutDate\" : ISODate(\"2021-04-09T00:20:46.113Z\"), \"minutes\" : 5 }\n{ \"_id\" : \"9eBXyjXXpwaSpeA2C\", \"userId\" : \"ouFd8iwxDbxpLrNpQ\", \"loginDate\" : ISODate(\"2021-04-09T00:15:29.073Z\"), \"logoutDate\" : ISODate(\"2021-04-09T03:48:07.185Z\"), \"numCardsClicked\" : 4, \"minutes\" : 213 }\n$group: {\n _id: { $dateToString: { format: \"%Y-%m-%d\", date: \"$loginDate\"} },\n docs: {\n $sum: 1\n }\n}\n_id: \"2021-04-08\"\ndocs: 4\n\n_id: \"2021-04-09\"\ndocs: 2\n\n_id: \"2021-04-07\"\ndocs: 2\ngroup by date, userIdsum",
"text": "Hi everyone,still struggling with aggregations and here especially with the $group stage.I have the following data about user sessions with loginDate:The above snapshot is for 1 specific user only.I can get the sum of docs by date with the following aggregations:Based on the above data sample, I get 3 docs back:This is how far I got with the help of SO and the MongoDb documentation.However, I want to get achieve two more things:How can I do this?sum(minutes)\nsum(numCardsClicked)\nsum(numNewTG)How can I achieve that?Thanks in advance!",
"username": "Andreas_West"
},
{
"code": "$group: {\n _id: { \n date: { $dateToString: { format: \"%Y-%m-%d\", date: \"$loginDate\"} },\n user: \"$userId\"\n },\n sum_minutes: { $sum: \"$minutes\" },\n sum_numCardsClicked: { ... },\n sum_numNewTG: { ... }\n}",
"text": "@Andreas_West, you can use the group stage with some modifications, as shown below:",
"username": "Prasad_Saya"
},
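For completeness, a sketch of the whole pipeline with the two elided sums written out, assuming they follow the same pattern as the first one; the collection name is a placeholder since the question does not give it:

// 'sessions' is a placeholder collection name.
db.sessions.aggregate([
  {
    $group: {
      _id: {
        date: { $dateToString: { format: "%Y-%m-%d", date: "$loginDate" } },
        user: "$userId"
      },
      sum_minutes: { $sum: "$minutes" },
      sum_numCardsClicked: { $sum: "$numCardsClicked" }, // missing fields add nothing to the sum
      sum_numNewTG: { $sum: "$numNewTG" }
    }
  }
])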
{
"code": "",
"text": "Thanks so much, @Prasad_Saya. Once I entered your changes it all makes so much sense.",
"username": "Andreas_West"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Daily aggregation of user sessions - group by date, userId | 2021-04-16T09:04:41.884Z | Daily aggregation of user sessions - group by date, userId | 2,965 |
null | [
"vscode"
] | [
{
"code": "",
"text": "I have installed the VS Code entension for Mongo. Unfortunatelly the print function that I can use in CLI is missing here. How can I print something to the console for debug purposses?",
"username": "lkurylo"
},
{
"code": "print()printconsole.log",
"text": "The print() function is there and it should work. Can you share the code you are trying to run?Note that print and console.log will print in the VS Code Output panel, while the result of the playground is displayed in an editor.image1680×1025 66.6 KB",
"username": "Massimiliano_Marcon"
},
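A minimal playground snippet illustrating the behaviour described above (the database and collection names are just examples): the value of the last statement appears in the Playground Results editor, while print/console.log output goes to the VS Code Output panel.

// MongoDB Playground in VS Code
use('test');

print('this line shows up in the Output panel');
console.log('so does this one');

// The result of the last statement is what the Playground Results editor shows.
db.getCollection('example').find().limit(1).toArray();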
{
"code": "$accumulator:{\n init: function() { \n print('init function')\n return {result:0}; \n },\n //...",
"text": "I expected to see the text in Playground Result, not in Output window. Thank you \nIs there any way to debug inside $accumulation stages? E.g",
"username": "lkurylo"
},
{
"code": "",
"text": "No, unfortunately that does not work. That function is executed by the server.",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Print function in vscode extension is missing | 2021-04-16T08:21:39.416Z | Print function in vscode extension is missing | 5,687 |
null | [
"student-developer-pack"
] | [
{
"code": "",
"text": "A few days ago, my student pack was approved on GitHub. But once I visit the MongoDB student pack page it keeps showing this message:Welcome and thanks for logging in. Unfortunately, we are unable to verify your status as a participant in the GitHub Student Developer PackIs there anything I have to do to validate the student account on MongoDB?",
"username": "Hadi_Fadlallah"
},
{
"code": "",
"text": "Hi @Hadi_FadlallahThank you for your message and welcome to the forum! Can you check your GitHub account if the Student Pack was applied? Our app automatically checks if the GitHub Student Pack is attached to your GitHub account.If this is not working, can you send me more information in a DM?Thank you so much!Lieke",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "Thanks @Lieke_Boon, Yes the Github Student Pack was activated on April 2 as shown in the image below:",
"username": "Hadi_Fadlallah"
},
{
"code": "",
"text": "@Hadi_Fadlallah Thank you for this! I sent you a DM.",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "This was solved! Closing the topic ",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "",
"username": "Lieke_Boon"
}
] | Student pack not approved on GitHub but not on MongoDB website | 2021-04-09T20:58:37.539Z | Student pack not approved on GitHub but not on MongoDB website | 6,236 |
null | [
"app-services-user-auth",
"graphql"
] | [
{
"code": "String firebaseIdToken = await FirebaseAuth.instance.currentUser.getIdToken();\nprint(\"Token - \"+ firebaseIdToken);\n",
"text": "I am authenticating the user using the Firebase phone authentication method and I get a ID token returned using the below code -Here is my short of my return token data - eyJhbGciOiJSUzI1NiIsImtpZCI6IjFkZTgwNjdhODI5OGE0ZTMzNDRiNGRiZGVkMjVmMmZiNGY0MGYzY2UiLCJ0eXAiOiJKV1QifQ.eyJpc3MiOiJodHRwczovL3NlY3VyZI want to use MongoDB Realm graphql api to connect to my backend MongoDB collection. Since MongoDB does not have an official Graphql package for Flutter, I will be using graphql_flutter package.How do I use the above token to connect to my MongoDB Realm graphql api using their Custom JWT authentication method and the graphql_flutter package? Please advise!",
"username": "Sumesh_Chandran"
},
{
"code": "",
"text": "Hey Sumesh - the JWT Auth Provider should work with Firebase Auth. You can pass in the access token into the package after getting it from Realm like described in this docs section",
"username": "Sumedha_Mehta1"
}
] | Is it possible to connect to MongoDB Realm graphql api using the JWT token returned from Firebase | 2021-04-11T23:41:03.248Z | Is it possible to connect to MongoDB Realm graphql api using the JWT token returned from Firebase | 2,752 |
[
"java"
] | [
{
"code": "",
"text": "Hi, is there any way to achieve Jackon like @JsonAnyGetter and @JsonAnySetter in a Bson POJOs?On this page we will provide Jackson @JsonAnyGetter and @JsonAnySetter example. @JsonAnyGetter is annotated at non-static, no-argument method to serialize a Java Map into JSON.",
"username": "Maciej_Jedwabny"
},
{
"code": "",
"text": "I didn’t use this one, I had to write my own mapper… this looks promising;https://mongojack.org/tutorial.html",
"username": "coderkid"
},
{
"code": "",
"text": "HI @Maciej_Jedwabny,This scenario isn’t currently supported with the PojoCodec. It might be achievable via a custom Convention but that would require some intense manipulation of the ClassModel and PropertyModel implementations.It could be simpler to use a custom Codec / CodecProvider for this scenario.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Thanks for answers. I have managed this but it required copying literally whole org.bson.codecs.pojo.* package to add tiny modifications. Default pojo implementation is impossible to extend because of scopes and finals.",
"username": "Maciej_Jedwabny"
},
{
"code": "",
"text": "Hi,Would you consider making a PR or filing a ticket and sharing the code as the basis for the start of the work? Its a good way to help extend the library and add new features.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Sure, I will work on that, in the upcoming weekend hopefully.",
"username": "Maciej_Jedwabny"
},
{
"code": "",
"text": "I didn’t use this one, I had to write my own mapper… this looks promising;https://mongojack.org/tutorial.html Does it handle Mongo’s DateTime and UUIDs? Or it uses Jackson’s behavior and stores them as Strings?",
"username": "Bogdan_Mart"
}
] | Java POJO codec + any getter/any setter | 2020-02-24T19:41:40.992Z | Java POJO codec + any getter/any setter | 3,723 |
|
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "Hi there,\ncan you kindly say how can we programatically Deactivate a user until the next they log in. I see there is an option to disable a user but that does not seem to work as Deactivation. So here is the flow\nUser deactivates his/her account so they kind of become invisible until next time they log in again.\nKind regards,\nBehzad",
"username": "Behzad_Pashaie"
},
{
"code": "class User: Object {\n @objc dynamic var _id.....\n @objc dynamic var _partitionKey...\n @objc dynamic var favorite_food = \"\"\n @objc dynamic var active = true //a bool\n}\n",
"text": "Wouldn’t an ‘active’ flag associated with the user do that? It would indicate whether the user is active or inactive and when populating say, a list of users, the filter would only include active users.I don’t know what your coding platform is but if you have a User object stored in Realm SwiftThen to make the user inactive, set the active flag to false. When they next log in, set to true",
"username": "Jay"
},
{
"code": "",
"text": "Hi @Jay,\nThanks for the quick answer. Actually I also was thinking about a flag. But I was not sure and thought better to check for other likely solution which I might not be aware of. So we define the AKTIV flag in user customData.BTW I use react js.Kind regards,\nBehzad ",
"username": "Behzad_Pashaie"
}
] | How deactivate user account instead of deleteing the account | 2021-04-15T13:36:18.524Z | How deactivate user account instead of deleteing the account | 2,369 |
null | [
"database-tools",
"backup"
] | [
{
"code": "time mongodump --gzip --archive=mongo-$(date +\"%Y-%m-%d\").archive.gz--repair--oplogtime mongorestore --objcheck --drop --maintainInsertionOrder --gzip --archive=mongo-2021-04-13.archive.gzadmin 0.000GB\nconfig 0.000GB\nfireworks 0.000GB\njerry 0.001GB\nlocal 0.000GB\n# more...\nsimulations 0.024GB\nsimulations17.497GB--preserveUUID--maintainInsertionOrder--objcheck--nsInclude=\"simulations.*\"show dbspymongosimulationssimulations",
"text": "Howdy!\nI’m copying the contents of an old mongodb 3.2.11 running on a Google Compute Engine (GCE) VM to a fresh installation of mongodb 4.4 on a new GCE VM.Creating a new VM lets us revisit VM parameters, test the server before switching over, and leave behind unknown state on the old VM.The Mongo docs don’t promise that an archive dumped from one mongo release can be restored into a newer mongo release. They do say to use the release of mongodump that goes with the source mongodb and the release of mongorestore that goes with the destination mongodb.What I did so far:Notes:Q1. How to make mongorestore restore all of the simulations DB?\nQ2. How to verify that it did, at least to the level of document counts and such?Thanks so much!",
"username": "Jerry_Morrison"
},
{
"code": "db.stats()mongorestore-vvvvvKilledmongorestoretimemongorestoremongorestoremongorestoresimulations.historyshow dbssimulations",
"text": "A2. The mongo shell command db.stats() gives clear stats on a db.Rerunning mongorestore with more verbosity -vvvvv didn’t log any new info, but I finally noticed Killed at the end of the mongorestore command, before the time stats! So mongorestore ran out of memory. A1. Run mongorestore from a VM with enough memory or swap enabled. (GCE VMs have swap disabled by default.)mongorestore is up to 81GB on simulations.history while show dbs shows 12.661GB for the entire simulations db. Apparently those stats aren’t comparable.",
"username": "Jerry_Morrison"
},
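On the verification question (Q2), a sketch of the kind of check that can be run in the shell against both the source and the destination and then compared; the database name is taken from the post, the rest is generic:

const dbToCheck = db.getSiblingDB("simulations");

// High-level storage and object stats for the whole database.
printjson(dbToCheck.stats());

// Per-collection document counts.
// (On the old 3.2 source use .count() instead of .countDocuments().)
dbToCheck.getCollectionNames().forEach(function (name) {
  print(name + ": " + dbToCheck.getCollection(name).countDocuments());
});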
{
"code": "mongorestoresimulation.historymongorestore --nsInclude=\"simulations.history\" [etc.]",
"text": "mongorestore (version 100.3.1) required a humongo 128GB of RAM to restore the simulation.history collection.I managed that by temporarily resizing the Compute Engine VM. Memory usage while running mongorestore --nsInclude=\"simulations.history\" [etc.]:Screen Shot 2021-04-15 at 12.22.39 PM1582×492 43.4 KBThis is while restoring one collection with storageSize: 18753490944.In comparison, mongodump (version go1.7.4) dumped and gzipped that 9GB archive on the same VM as the running mongo server 3.2.11, a VM with only 1.75GB RAM and no swap space.",
"username": "Jerry_Morrison"
},
{
"code": "",
"text": "Maybe “steady state” use:\nScreen Shot 2021-04-15 at 12.23.15 PM1582×492 42.4 KB",
"username": "Jerry_Morrison"
}
] | How to make mongorestore restore all the data? | 2021-04-15T00:11:36.517Z | How to make mongorestore restore all the data? | 7,422 |
null | [
"atlas-device-sync",
"app-services-user-auth"
] | [
{
"code": "",
"text": "Abstract\nHi, we’ve a use-case where we don’t force first-time users to register in order to use our app. However, they can start creating content. Right now we’re creating anonymous accounts for these users and sync their content. Once they decide to create an account, we link their account with the authentication method they prefer. (Manual, Google, FB or Apple)The problem\nWe don’t wanna sync contents of anonymous accounts unless they sign in to an existing account or create a new one.Questions\nIs it possible to prevent cloud syncing for anonymous users. (To save some resources) Or should we create users on demand. If we create users on demand (email or with oauth provider) how can we migrate the content created without user.Thanks!",
"username": "ilker_cam"
},
{
"code": "",
"text": "Hi @ilker_cam,\nwhat are you using as your partitioning key?",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Hi @Andrew_Morgan\nuser id",
"username": "ilker_cam"
},
{
"code": "",
"text": "I ended up with migrating the local realm to synced realm upon login and register",
"username": "ilker_cam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Confusion about registering/linking users | 2021-04-06T16:51:50.400Z | Confusion about registering/linking users | 2,078 |
[] | [
{
"code": "Type: Bug",
"text": "For some reason, we’re using mongo 3.6 version. We want to search all possible bugs about 3.6 version to decide whether to apply those patches to production. So I need a way to search by Type: Bug. Is that possible in Jira ?image1732×634 68.3 KBThanks.",
"username": "Lewis_Chan"
},
{
"code": "",
"text": "It will be great to sort the searched results by resolved time, so users can apply them one by one.",
"username": "Lewis_Chan"
},
{
"code": "",
"text": "I think your best bet here is to use the minor releases of the v3.6 branch. We’re currently on v3.6.23 and you can download these binaries here, use a package manager on linux, or build from source to stay up to date with the latest patches and bug fixes.Regarding your original question on querying JIRA for all bug fixes in v3.6, you can try this JQL query.",
"username": "Daniel_Pasette"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to search issue by Type in jira? | 2021-04-15T07:28:02.498Z | How to search issue by Type in jira? | 3,610 |
|
null | [
"python"
] | [
{
"code": "",
"text": "I have a problem on my mind. Now when I start with pymongo flask will it try to reconnect to mongodb on every request or will it make a single connection?It’s a simple question but I just started I want to learn.",
"username": "Mehmet_97757"
},
{
"code": "",
"text": "PyMongo uses a connection pool, opening connections as needed and reusing them:https://pymongo.readthedocs.io/en/stable/faq.html#how-does-connection-pooling-work-in-pymongo",
"username": "Bernie_Hackett"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Python connection pymongo | 2021-04-15T11:20:07.826Z | Python connection pymongo | 2,469 |
null | [
"swift",
"atlas-device-sync"
] | [
{
"code": "",
"text": "Hello,We are in the early planning stages of migrating from CouchbaseDB to MongoDB Atlas and Realm Sync for our iOS mobile POS application. One of the features we currently offer our merchants is offline sync between devices through peer-to-peer with Couchbase Mobile. From what I understand Realm Sync SDK doesn’t currently have any support for syncing between offline devices through peer-to-peer or multi-tier syncing to a physical server with a centralized Realm. We are wanting to still offer the offline sync feature to our merchants.We are wanting to move away from peer-to-peer and implement a physical server solution instead. My thoughts are running a mongod instance that our iOS apps can keep up-to-date for our merchants with spotty internet connection. The physical server will act as a messenger essentially, I don’t want to bog anyone down with the specifics of the current design.Device A saves a document to local Realm Sync while not connected to internet, and sends the document to the physical server through a socket connection. The physical server will emit the document to all the connected Devices B through D. The devices B through D will handle some business logic then save the document to their Local Realm.I am concerned this could cause problems with the Realm Sync environment, I am not sure what potential issues there are, but it feels wrong to save a document manually synced from another device that will be synced through Realm Sync to all the devices when internet is regained.Is there any reason(s) I shouldn’t try to implement a solution like this?Sorry for asking such a hypothetical question.Thanks.",
"username": "Kaya_Click"
},
{
"code": "",
"text": "Hi @Kaya_Click, welcome to the community forum!I just wanted to test my understanding of your requirement…You’re looking to sync data between devices within a store over a local network, even when that store is disconnected from the internet. When the store is connected to the internet, it can then sync the data with a central copy of the data.MongoDB Realm Sync relies on devices being connected to the internet to sync data via your Realm cloud service. A device can work offline for hours, and as soon as it gets internet connectivity Realm Sync performs a 2-way sync with the backend Realm service, where changes are persisted in Atlas and synced to other devices.Copying @Ian_Ward in case he’s seen customers implementing this tiered approach.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "What we have seen users do in the past for this use case is to use one of our server or desktop SDKs, like node.js or .NET Core, to implement a type of web server that can be deployed locally at the branch locations - and then clients can use web requests to get or mutate data from this server even when the branch is offline, once connectivity is re-established, any changes will be replicated to the cloud.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "So how would we be designing this realmsync wise on the app? Would the iPads exclusively connect and operate off the webserver even when internet is up? Or would we be able to fall back to this only when internet access is down and rely on the normal realmsync capabilities when it is nominal?",
"username": "Kaya_Click"
},
{
"code": "",
"text": "@Kaya_Click They would only connect to the Webserver - the mobile devices wouldn’t use realm sync at all, only the Webserver. They would just connect to the local web server via REST. Although you could use a non-sync Realm to store data locally on the device if you wanted.Of course, this is only needed if you want to support the use case of devices sharing data between themselves while the branch location is offline. For most use cases, we find that using realm sync on the end-client device to be fine since the biggest need from the customer is for the user to be continue to work whether they or the branch is offline - and then replicating changes once the connection is re-established.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thank you very much @Ian_Ward and @Andrew_Morgan. I like this solution, I’m glad to hear other users have implemented like this.",
"username": "Kaya_Click"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Any known issues with manually syncing offline devices through a physical server? | 2021-04-13T20:07:13.586Z | Any known issues with manually syncing offline devices through a physical server? | 3,610 |
[
"aggregation",
"queries",
"node-js"
] | [
{
"code": "Matcheskit1kit2Matcheskit1kit2kitRemove') and want to return the identifier that is not const kitsWithRelationArray = await MatchesRaw.aggregate([\n { $match: { $or: [{ kit1: kitRemove }, { kit2: kitRemove }] } },\n { $project: { kit: { $cond: { if: { $eq: [\"$kit1\", kitRemove] }, then: \"$kit2\", else: \"$kit1\" } }, _id: 0 } },\n ], { session: mongoSession }).toArray();\nkitMatcheskit1kit2kit",
"text": "Hi everyone,seeking help with an aggregations query. I have a collection Matches which has transactional data in it, identified by kit1 and kit2.This is a sample doc (other fields hidden for simplicity):{ “_id” : “P5M5NzYGJ5aKk5FSv”, “kit1” : “7jVr2Hul1Vae3bqRVhyKA”, “kit2” : “7lljS20fzXF-uv2aAh3WE” }I’m querying for all Matches docs where either kit1 or kit2 is equal to a certain identifier (kitRemove') and want to return the identifier that is not kitRemove` (using Meteor framework in my app - hence the use of JavaScript code):This is the result I get from the above query:\nI’m facing two problems with the result:Example:{ “_id” : “troqybYEBCPB97cDr”, “kit1” : “7jVr2Hul1Vae3bqRVhyKA”, “kit2” : “85n_Re9XRQCiYQd-VxzhV” }\n{ “_id” : “YKXKhaNf7xCrihFPM”, “kit1” : “7jVr2Hul1Vae3bqRVhyKA”, “kit2” : “85n_Re9XRQCiYQd-VxzhV” }Question 1: How can avoid getting duplicate values like this?[“7lljS20fzXF-uv2aAh3WE”, “7pgnWX288a3V0RdIF-zlq”, “85n_Re9XRQCiYQd-VxzhV”]Question 2: How can I alter the query to get an array of strings (kit) back?Many thanks in advance, still learning all the bits and pieces of MongoDb’s powerful functions.",
"username": "Andreas_West"
},
{
"code": "$project$groupkit{ \n $group: { \n _id: null, \n \"distinct_kits\": { $addToSet: \"$kit\" } \n } \n}\ndistinct_kitskitsWithRelationArrayvar distinct_kits_array = kitsWithRelationArray[0].distinct_kitsdistinct_kits_array$match",
"text": "Hello @Andreas_West, here are some ideas and solutions.Question 1: How can avoid getting duplicate values like this?In your query, after the $project stage, include this new $group stage, to get the distinct kit values.Then extract the distinct_kits from the kitsWithRelationArray:var distinct_kits_array = kitsWithRelationArray[0].distinct_kitsThe distinct_kits_array will be an array of distinct kit values - your final result.Question 2: How can I alter the query to get an array of strings (kit) back?There is no way to get an array of strings from the aggregation directly - this is because the aggregation result is always a cursor of document(s) (or object(s)). You need to extract the values as per your application needs within the application code (as I had shown above).NOTE: I had noticed that in your query’s $match stage you are checking for kit1 or kit2 matching. I am assuming you don’t need the check for kit1 and kit2 are matching.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks @Prasad_Saya for taking time to look at my problem and come up with a solution.Will implement the additional $group stage step!It gets rid of the duplicate values and I think I can work around the fact that it’s now an array of length 1 with an object which then has the final array in it that I want:I am assuming you don’t need the check for kit1 and kit2 are matching.That is correct, they can never be the same.",
"username": "Andreas_West"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Aggregation with transactional data should return unique value only | 2021-04-15T09:29:10.323Z | Aggregation with transactional data should return unique value only | 4,630 |
|
[] | [
{
"code": "",
"text": "Hi,it tried to filter a lookup field in Series with the following query: {name:“Test1”} but it didn’t work.image1920×1030 225 KBis it possible to filter a lookup field in Series so the graph only shows one line?\ni need this query for embedding my chart with JavaSDK.",
"username": "Ruben"
},
{
"code": "",
"text": "Hi @Ruben -For questions like this it’s useful to understand how Charts builds the aggregation pipeline. This is documented at Backing Aggregation Pipeline. Lookup fields aren’t explicitly listed (I’ll try to get that fixed) but they are included in step 5, Calculated Fields.Since the Lookup stage happens after the query bar, the lookup field can’t be affected by the query. To fix this you have two options:HTH\nTom",
"username": "tomhollander"
},
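A rough illustration of the first option above: a lookup added to the chart's data source pipeline runs before the query bar, so the joined field exists by the time {name: "Test1"} is evaluated. Collection and field names here are hypothetical, since the actual schema is not shown in the thread.

[
  {
    $lookup: {
      from: "series",          // looked-up collection (hypothetical)
      localField: "seriesId",  // hypothetical local key
      foreignField: "_id",
      as: "series"
    }
  },
  { $unwind: "$series" },
  { $addFields: { name: "$series.name" } }  // now filterable from the query bar
]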
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Filtering a lookup field in Series with query in MongoDB Charts | 2021-04-14T09:21:31.984Z | Filtering a lookup field in Series with query in MongoDB Charts | 4,066 |
|
null | [
"react-native"
] | [
{
"code": "",
"text": "Tutorial LinkWhen attempting to run the application in the iOS simulator, I receive a number of errors including issues with target iOS with Flipper; I’ve attempted to update the target OS from 8.0 to 14.4 however this leads to further problems. Has anyone been able to resolve this that can provide a link to an updated Podfile, Podfile.lock, or rn.workspace file to get it working?",
"username": "Andrew_W"
},
{
"code": "npm installrn.workspaceiOSrnBuild SettingsteamDevelopment TeamPodsTarget OS8.014.4npx react-native run-iossudo gem install cocoapods",
"text": "RESOLVED!Steps to resolve:You’ll receive an AuthProvider issue when the application runs but I’m guessing that’s expected until the front-end is set up properlyNOTE: You may want to update CocoaPods on your system as well with sudo gem install cocoapods",
"username": "Andrew_W"
},
{
"code": ".bashrc.zshrc",
"text": "I should add that a lot of these issues stemmed from using Node Version manager (NVM) to install Node. Once I commented out all references in my .bashrc/.zshrc file I was able to get past this as apparently prefixing is not supported. Give this a shot yourself. Your mileage may vary. You may need to completely remove NVM as well as make sure Node wasn’t installed with homebrew either.",
"username": "Andrew_W"
},
{
"code": "",
"text": "Thanks for circling back to let everyone know the solution!",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Task Tracker React Native Tutorial Fails on iOS run | 2021-04-10T21:05:20.650Z | Task Tracker React Native Tutorial Fails on iOS run | 2,642 |
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "is it possible to run this at realm function?db.getReplicationInfo () .timeDiffHours",
"username": "Arthur_Fan"
},
{
"code": "db.getReplicationInfo()mongoTypeError: 'getReplicationInfo' is not a function\n\tat exports (function.js:5:17(25))\n\tat function_wrapper.js:3:29(21)\n\tat <eval>:11:8(3)\n\tat <eval>:2:15(7)\n",
"text": "db.getReplicationInfo() is a helper function in the mongo shell and so isn’t available to a Realm function…What problem are you trying to solve with this function?",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "I try to set up a throttling at realm function to avoid opLog < 1hr when I am doing a bulk insert().My idea was, if db.getReplicationInfo().timeDiffHours < 1 then I should slow down the consumer for bulk insertion.any workaround?please advise, Thanks!",
"username": "Arthur_Fan"
},
{
"code": "",
"text": "One option could be to configure and react to Atlas replication oplog alerts (https://docs.atlas.mongodb.com/reference/alert-conditions/#replication-oplog). There are a number of ways to consume those alerts, but it’s going a bit beyond my knowledge and the scope of this category.I’d suggest posting a question to Ops and Admin - MongoDB Developer Community Forums in addition to keeping this topic open.",
"username": "Andrew_Morgan"
}
] | getReplicationInfo at realm? | 2021-04-13T00:39:19.355Z | getReplicationInfo at realm? | 2,299 |
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "We are trying to break the mongo DB document into chunks in order to fit into Kafka message with the help of $unwind operation through MongoSourceConnector(pipeline aggregation).org.apache.kafka.connect.errors.ConnectException: com.mongodb.MongoCommandException: Command failed with error 20 (IllegalOperation): ‘$unwind is not permitted in a $changeStream pipeline’ on server :27017. The full response is {“operationTime”: {\"$timestamp\": {“t”: 1614932863, “i”: 4}}, “ok”: 0.0, “errmsg”: “$unwind is not permitted in a $changeStream pipeline”, “code”: 20, “codeName”: “IllegalOperation”, “$clusterTime”: {“clusterTime”: {\"$timestamp\": {“t”: 1614932863, “i”: 4}}, “signature”: {“hash”: {\"$binary\": {“base64”: “x5sWtboaMhg5aSSMWLYNswP3zKE=”, “subType”: “00”}}, “keyId”: 6880913173017264129}}}\nat com.mongodb.kafka.connect.source.MongoSourceTask.setCachedResultAndResumeToken(MongoSourceTask.java:508)Kindly suggest if this is a supported feature through MongoSourceConnector or do we have any workaround for above use case.",
"username": "vinay_murarishetty"
},
{
"code": "$addFields$match$project$replaceRoot$replaceWith$redact$set$unset",
"text": "HI @vinay_murarishetty,The Source connector uses change streams functionality to provide change stream events. MongoDB only supports certain aggregation stages when using change streams:See: https://docs.mongodb.com/manual/changeStreams/#modify-change-stream-output for more information.Other pipeline stages are not supported by MongoDB so can’t be used with the connector.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Is there any alternative way to achive above use case if not with pipeline ?",
"username": "vinay_murarishetty"
},
{
"code": "",
"text": "Hi @vinay_murarishetty,Unfortunately, if you are hitting the 16MB limit then the only option is to reduce the amount of data the change stream cursor produces. Publishing both the fullDocument and updateDescription for very large documents could be the cause.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Thanks for the information.myaarpmedicare",
"username": "Arlene_Boyer"
}
] | Error running $unwind operation in mongo source connector pipeline | 2021-03-05T11:58:46.804Z | Error running $unwind operation in mongo source connector pipeline | 2,807 |
null | [
"security"
] | [
{
"code": "",
"text": "Hi everyone!, is there away to implement a password policy in mongo besides kerberos and ldap integration?",
"username": "Oscar_Cervantes"
},
{
"code": "",
"text": "You can check this linkhttps://jira.mongodb.org/browse/SERVER-7363",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Password policy | 2021-04-14T14:58:01.857Z | Password policy | 2,424 |
null | [
"dot-net",
"unity"
] | [
{
"code": "",
"text": "Hi,I’ve imported the libraries for MongoDB from NuGet in Visual Studio 2019, on Mac. The namespaces and MongoClient are recognised in VS 2019, but Unity errors saying namespace and MongoClient can’t be found. I’m at a complete loss. I’ve been struggling for days on this.Thanks,\nJon",
"username": "Jonathan_Peplow"
},
{
"code": "",
"text": "OK, so I solved my own issue. I needed to include the MongoDB drivers in a folder called ‘Plugins’ inside of Unity, ‘Assets’ folder. Hmm did I miss this somewhere in the official tutorials on this site? Also, it’s not very ideal how I went about obtaining the DLLs, browising the package folders inside of the VS 2019 project files…? Thanks.Jon",
"username": "Jonathan_Peplow"
},
{
"code": "",
"text": "Did you manage to build your application without issues?",
"username": "christian_aubert"
}
] | Unity can't find MongoClient | 2020-10-26T19:08:03.156Z | Unity can’t find MongoClient | 3,845 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "Within the MongoDB Realm > App Users UI, there’s the option to Revoke all sessions for individual users. We used this option to test with and found we’re unable to detect the user needs to reauthenticate with the Realm Cocoa SDK. We checked RLMUser::state and RLMUser::isLoggedIn; both indicate the RLMUser is logged in. Furthermore RLMUser::sessionForPartitionValue returns a valid session and we can see in the debug console that the server returns a HTTP 401 error stating the user need to authenticate, so how does one detect the user needs to reauthenticate ?",
"username": "Mauro"
},
{
"code": "\n \n }\n }\n \n \n/**\n An error associated with network requests made to the authentication server. This type of error\n may be returned in the callback block to `SyncUser.logIn()` upon certain types of failed login\n attempts (for example, if the request is malformed or if the server is experiencing an issue).\n \n \n - see: `RLMSyncAuthError`\n */\n public typealias SyncAuthError = RLMSyncAuthError\n \n \n/**\n An enum which can be used to specify the level of logging.\n \n \n - see: `RLMSyncLogLevel`\n */\n public typealias SyncLogLevel = RLMSyncLogLevel\n \n \n/**\n A data type whose values represent different authentication providers that can be used with\n \n ",
"text": "@Mauro Do you have an error handler on the client? See here:And there should be an auth error here:",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward Yes, the error handler is set, and for a sanity check I just re-tested and the result is the same; the error handler is not being called in this instance.",
"username": "Mauro"
},
{
"code": "",
"text": "@Mauro Can you file an issue here with code snippet and steps to reproduce please - GitHub - realm/realm-swift: Realm is a mobile database: a replacement for Core Data & SQLite",
"username": "Ian_Ward"
}
] | Realm Sync Detect User Session Revoked | 2021-03-25T11:30:54.262Z | Realm Sync Detect User Session Revoked | 2,239 |
null | [
"aggregation",
"golang",
"views"
] | [
{
"code": "",
"text": "Hello all,\nI create a view A by a lookup pipeline which do a joint search with two collections, A1 and A2, and I can see the view A contains some data from A1 and A2, and if I change some data in A1 or A2, the data in view A will be changed accordingly. Then I create a view B, view B is created by a lookup pipeline which do a joint search from collection B1 and view A, currently, I can see the data appears in view A, but not in view B, what would be the problem?Thanks,\nJames",
"username": "Zhihong_GUO"
},
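For reference, MongoDB does allow a view to be defined on top of another view; a minimal sketch of the shape described above, with hypothetical collection, view, and join-key names (the actual pipelines are not shown in the post):

// View A joins collections A1 and A2.
db.createView("viewA", "A1", [
  { $lookup: { from: "A2", localField: "a2Id", foreignField: "_id", as: "a2" } }
]);

// View B joins collection B1 with view A; both the source of createView
// and the "from" of $lookup may themselves be views.
db.createView("viewB", "B1", [
  { $lookup: { from: "viewA", localField: "aId", foreignField: "_id", as: "a" } }
]);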
{
"code": "",
"text": "Hello, more information: I use community version 4.0. I read the document in https://docs.mongodb.com/manual/reference/method/db.createView/",
"username": "Zhihong_GUO"
}
] | Create view on another view | 2021-04-14T05:34:58.950Z | Create view on another view | 3,064 |
null | [
"compass"
] | [
{
"code": "mongod.conf:\nnet:\n tls:\n mode: requireTLS\n certificateKeyFile: <location of .pem file>\n certificateKeyFilePassword: \"<password>\"\n CAFile: <location of CA .pem file>\n2020-03-24T13:34:25.089+0000 I NETWORK [listener] connection accepted from <IP Address:Port number of client> (1 connection now open)\n2020-03-24T13:34:25.106+0000 E NETWORK [conn180] no SSL certificate provided by peer; connection rejected\n2020-03-24T13:34:25.106+0000 I NETWORK [conn180] Error receiving request from client: SSLHandshakeFailed: no SSL certificate provided by peer; connection rejected. Ending connection from <IP Address:Port number of client> (connection id: 180)\n2020-03-24T13:34:25.106+0000 I NETWORK [conn180] end connection <IP Address:Port number of client> (0 connections now open)\n",
"text": "I have enabled TLS with server and client validation but unable connect using Compass client, any ideas what this error means and how I can fix it?Client cert being used is in .cer format.Error message received:",
"username": "Kiran_K"
},
{
"code": "",
"text": "Looks like compass is not sending the client cert on connect. I have not used client certs since mongo university and never on Compass.SSL does need to be set to Server and Client Validation in Compass\nimage997×731 34.9 KB\n",
"username": "chris"
},
{
"code": "",
"text": "Tried the exact same settings as per the screen shot and attached the required certificate .pem files but I still get the same error message. Is there any other way of testing the connectivity to check what is wrong?",
"username": "Kiran_K"
},
{
"code": "",
"text": "You can try it with mongo cli as well to make sure all your certificates are in order/rule out Compass.",
"username": "chris"
},
{
"code": "",
"text": "A post was split to a new topic: SSL peer certificate validation failed: unable to verify the first certificate",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Unable to connect to Mongo with Server and Client validation using TLS | 2020-03-24T18:19:15.770Z | Unable to connect to Mongo with Server and Client validation using TLS | 8,582 |
null | [
"aggregation",
"queries"
] | [
{
"code": "{'_id': ObjectId('6068da8878fa2e568c42c7f1'),\n 'first': datetime.datetime(2018, 1, 24, 14, 5),\n 'last': datetime.datetime(2018, 1, 24, 15, 5),\n 'maxid13': 12.5,\n 'minid13': 7.5,\n 'nsamples': 13,\n 'samples': [{'c14': 'C',\n 'id1': 3758.0,\n 'id10': 0.0,\n 'id11': 274.0,\n 'id12': 0.0,\n 'id13': 7.5,\n 'id15': 0.0,\n 'id16': 73.0,\n 'id17': 0.0,\n 'id18': 0.342,\n 'id19': 6.3,\n 'id20': 1206.0,\n 'id21': 0.0,\n 'id22': 0.87,\n 'id23': 0.0,\n 'id6': 2.0,\n 'id7': -79.09,\n 'id8': 35.97,\n 'id9': 5.8,\n 'timestamp1': datetime.datetime(2018, 1, 24, 14, 5),\n 'timestamp2': datetime.datetime(2018, 1, 24, 9, 5)},\n {'c14': 'C',\n 'id1': 3758.0,\n 'id10': 0.0,\n 'id11': 288.0,\n 'id12': 0.0,\n 'id13': 8.4,\n 'id15': 0.0,\n 'id16': 71.0,\n 'id17': 0.0,\n 'id18': 0.342,\n 'id19': 6.3,\n 'id20': 1207.0,\n 'id21': 0.0,\n 'id22': 0.69,\n 'id23': 0.0,\n 'id6': 2.0,\n 'id7': -79.09,\n 'id8': 35.97,\n 'id9': 6.2,\n 'timestamp1': datetime.datetime(2018, 1, 24, 14, 10),\n 'timestamp2': datetime.datetime(2018, 1, 24, 9, 10)},\n .\n .\n .\n .\nmatchfirstsamples.id13samples.id9samples.timestamp1samples.id13samples.id9sortfirst",
"text": "Hello guys.I am using bucket pattern for time-series.My data look like this:In the most cases in the match stage i use first with samples.id13 andsamples.id9 and in some other queries i use samples.timestamp1 with samples.id13 and samples.id9 ,and in sort stage sometimes i use first.Should i create 4 single index on them or a compound(with the correct order)?",
"username": "harris"
},
{
"code": "",
"text": "Hi @harris,Compound index are always better versus index intersections which are usually not well ranked by the query planner and usually any other solution is preferred by the query planner.I don’t know which query parameters here are a range or an equality so I can’t make a proposition. But I’d recommend to follow the ESR rule to make sure that you avoid in-memory sorts and scan as little index entries as possible.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "cursor = mydb1.mongodbbuckethour.aggregate([\n\n {\n \"$match\": {\n \"first\": {\"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\" :datetime.strptime(\"2020-12-31 23:05:00\", \"%Y-%m-%d %H:%M:%S\")}\n }\n },\n { \"$unwind\": \"$samples\" },\n{\n \"$match\": {\n \"first\": {\"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\" :datetime.strptime(\"2020-12-31 23:05:00\", \"%Y-%m-%d %H:%M:%S\")}\n }\n },\n\n{\n \"$group\": {\n \"_id\": {\"$dateToString\": { \"format\": \"%Y-%m-%d \", \"date\": \"$first\" }},\n\n\n \"avg_id13\": {\n \"$avg\": \"$samples.id13\"\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"day\":\"$_id\",\n \"avg_id13\": 1\n }\n },\n {\"$sort\": {\"day\": -1}}\n])\nfirstsortrangecursor = mydb1.mongodbbuckethour.aggregate([\n\n {\n \"$match\": {\n \"first\": {\"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\" :datetime.strptime(\"2020-12-31 23:05:00\", \"%Y-%m-%d %H:%M:%S\")},\n \"samples.id13\": {\n \"$gt\": 5\n }\n\n }\n },\n { \"$unwind\": \"$samples\" },\n{\n \"$match\": {\n \"first\": {\"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\" :datetime.strptime(\"2020-12-31 23:05:00\", \"%Y-%m-%d %H:%M:%S\")},\n \"samples.id13\": {\n \"$gt\": 5\n }\n }\n },\n\n{\n \"$group\": {\n \"_id\": {\"$dateToString\": { \"format\": \"%Y-%m-%d \", \"date\": \"$first\" }},\n\n\n \"avg_id13\": {\n \"$avg\": \"$samples.id13\"\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"day\":\"$_id\",\n \"avg_id13\": 1\n }\n },\n {\"$sort\": {\"day\": -1}}\n])\nsamples.id13first",
"text": "Thank you for you help! @MaBeuLux88\nSo for example if we have this query:So with the esr rule i should create an single index on first because i use him in sort and also in range?And for this query:It should be better to use an compound index on samples.id13 for equality and first for sort and also in range?",
"username": "harris"
},
{
"code": "samplesid13{\"first\":1, \"samples.id13\": 1}",
"text": "In the first pipeline, the 3rd stage ($match) is redundant with the first one. It’s not removing any other doc from the pipeline after the $unwind stage.In the second pipeline though, it makes sense because the first $match is checking that at least one of the sub documents in the samples array has one id13 > 5 (so you have less docs to unwind). Then after the $unwind, you want to eliminate all the sub docs that don’t respect this contraints, so you have to repeat that condition which this time is applied on all the (sub) docs. So for the second pipeline match (date, id13) => unwind => match (id13). Repeating the date filter doesn’t add anything after the unwind as all the docs at this stage at already in the range.Now, I didn’t know you were using the aggregation pipeline and the use of indexes is actually a bit different here.In your case, once you have passed the $unwind stage, you can’t use indexes anymore. So index on {\"first\":1, \"samples.id13\": 1} is the optimal index. Both queries can use it. The first pipeline will only use the first part though.I would also recommend that the $project should always be the very last stage. If done earlier in the pipeline, it could prevent automated optimization.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
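Creating the suggested index from the mongo shell would look like this (collection name taken from the pipelines above):

// Supports the range filter on "first" and, in the second pipeline,
// the pre-$unwind filter on "samples.id13".
db.mongodbbuckethour.createIndex({ first: 1, "samples.id13": 1 })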
] | Multiple index vs single index in multiple columns | 2021-04-13T18:13:15.062Z | Multiple index vs single index in multiple columns | 3,709 |
null | [
"data-modeling"
] | [
{
"code": " users\n firstname :\n lastname :\n .....\n location :{\n longitude: 46,5412\n latitude: -21.6546 \n }\n matchedUsers : [user1.id, user2.id........]\n const users = await User.find({ \n location: { $nearSphere: { $geometry: { type: \"Point\", coordinates: [ 0.4539, 49.3784 ] }, $maxDistance: 10000 } },\n matchedUsers: {$ne: req.params.userId}\n })\n",
"text": "Hi, i’m new to MongoDB and I’m building a Tinder like app and but i’m not sure what is the best way to model my data. The first approach came to my mind was doing something like this :Because i want to fetch users by their location and also i don’t want any user that is my matchedUsers list.\nsomething like this :this works fine but like i said i’m new to MongoDB and i’m worried as my list keep growing. Is this a good way to structure my data or there is a better way?",
"username": "Ali_Khodr"
},
{
"code": "",
"text": "Hi @Ali_Khodr,The only problem with this query is that $ne is not a selective operator and cannot utelize indexes.On the other hand if the geo index can filter out most results I would say this filtering is fine.I assume that when both users match you will probably create a matching document , consider utilizing Atlas triggers for that Edited:\nIf you are afraid that for some users the list will grow significantly consider using the outlier pattern:The Outlier Pattern helps when there's exceptionally large records occasionally occurring in your data setBest\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny,First of all, thanks for your reply. I read the article, it is interesting but unfortunately i don’t think that the outlier pattern could be useful in my use case.Actually, i didn’t explain well my use case. In my app when a user clicks on the search button, the app will suggest another user to chat with. So there is no this notion of matching. Maybe SuggestedUsers will be better naming than matchedUsers.So basically, what i’am trying to do is to fetch a nearby user without suggesting the same user twice.",
"username": "Ali_Khodr"
},
{
"code": " users.update(\n { userId: \"149064515180820987\" },\n {\n $push: {\n SuggestedUsers: {\n $each: [ { userId: \"xxx\", added: new Date()} ],\n $sort : { added : -1},\n $slice: -300\n }\n }\n }\n )\n",
"text": "@Ali_Khodr, interesting question…I think that the way you attack it should be fine however, you cant let this array grow unbound thats a know anti pattern and should be avoided.Therofore , please consider using $push with $slice and sort to keep the array in reasonable sizes otherwise it will be hard to manage.Think if you can keep 200-300 users in there and allow some recycling of users for performance reasons.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_DuchovnyThanks for your help",
"username": "Ali_Khodr"
},
{
"code": "",
"text": "3 posts were split to a new topic: Scaling a data model using bloom filters",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Data modeling for Tinder like app | 2020-09-27T04:42:29.316Z | Data modeling for Tinder like app | 5,328 |
null | [
"production",
"ruby",
"mongoid-odm"
] | [
{
"code": "",
"text": "This patch release in the 7.2 series repairs a Rails 6.1 incompatibility in the Mongoid config generator and fixes several user-reported issues. Please see Release 7.2.2 · mongodb/mongoid · GitHub for the complete list of changes.",
"username": "Oleg_Pudeyev"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Mongoid 7.2.2 released | 2021-04-14T15:42:57.538Z | Mongoid 7.2.2 released | 2,942 |
null | [
"aggregation",
"performance"
] | [
{
"code": "db.getCollection('events').aggregate([\n{\n \"$addFields\": {\n \"6059ff5a2aa6a105ae85d7f1\": {\n \"$cond\": [\n {\n \"$and\": [\n {\n \"$eq\": [\n \"$campaign_id\",\n ObjectId( \"604baadfa1a21c0c6d03901f\" )\n ]\n },\n {\n \"$eq\": [\n \"$event\",\n \"processed\"\n ]\n },\n {\n \"$eq\": [\n \"$channel\",\n \"email\"\n ]\n }\n ]\n },\n 1,\n 0\n ]\n },\n \"6059ff5a2aa6a103a585d7f0\": {\n \"$cond\": [\n {\n \"$and\": [\n {\n \"$eq\": [\n \"$campaign_id\",\n ObjectId( \"604baadfa1a21c0c6d03901f\" )\n ]\n },\n {\n \"$eq\": [\n \"$event\",\n \"open\"\n ]\n },\n {\n \"$eq\": [\n \"$channel\",\n \"email\"\n ]\n }\n ]\n },\n 1,\n 0\n ]\n }\n }\n},\n{\n \"$group\": {\n \"_id\": \"$contact_id\",\n \"6059ff5a2aa6a105ae85d7f1\": {\n \"$sum\": \"$6059ff5a2aa6a105ae85d7f1\"\n },\n \"6059ff5a2aa6a103a585d7f0\": {\n \"$sum\": \"$6059ff5a2aa6a103a585d7f0\"\n }\n }\n }\n ], \n{ \"allowDiskUse\": true })\n {\n \"$cursor\" : {\n \"query\" : {},\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"production.events\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {},\n \"queryHash\" : \"8B3D4AB8\",\n \"planCacheKey\" : \"8B3D4AB8\",\n \"winningPlan\" : {\n \"stage\" : \"COLLSCAN\",\n \"direction\" : \"forward\"\n },\n \"rejectedPlans\" : []\n }\n }\n }, \n {\n \"$cursor\" : {\n \"query\" : {},\n \"fields\" : {\n \"contact_id\" : 1,\n \"_id\" : 0\n },\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"production.events\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {},\n \"queryHash\" : \"14AB7FAF\",\n \"planCacheKey\" : \"14AB7FAF\",\n \"winningPlan\" : {\n \"stage\" : \"PROJECTION_COVERED\",\n \"transformBy\" : {\n \"contact_id\" : 1,\n \"_id\" : 0\n },\n \"inputStage\" : {\n \"stage\" : \"DISTINCT_SCAN\",\n \"keyPattern\" : {\n \"contact_id\" : 1\n },\n \"indexName\" : \"contact_id_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"contact_id\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"contact_id\" : [ \n \"[MinKey, MaxKey]\"\n ]\n }\n }\n },\n \"rejectedPlans\" : []\n }\n }\n }, \n {\n \"$groupByDistinctScan\" : {\n \"newRoot\" : {\n \"_id\" : \"$contact_id\"\n }\n }\n }\n ],\n \"stages\" : [ \n {\n \"$cursor\" : {\n \"query\" : {},\n \"fields\" : {\n \"6059ff5a2aa6a103a585d7f0\" : 1,\n \"6059ff5a2aa6a105ae85d7f1\" : 1,\n \"campaign_id\" : 1,\n \"channel\" : 1,\n \"contact_id\" : 1,\n \"event\" : 1,\n \"_id\" : 0\n },\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"production.events\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {},\n \"queryHash\" : \"8B3D4AB8\",\n \"planCacheKey\" : \"8B3D4AB8\",\n \"winningPlan\" : {\n \"stage\" : \"COLLSCAN\",\n \"direction\" : \"forward\"\n },\n \"rejectedPlans\" : []\n }\n }\n }, \n",
"text": "Hi there!I am struggling with an aggregation that runs super fast when separate stages but becomes super slow when combining them.The aggregation is on an about 4MM documents collection.If I run the $addFields alone it takes 1 sec and if I run the $group alone it takes 1 sec. But the combo takes 70 seconds (or more).Fields are indexed.When running the $addFields explain givesFor the $group alone explain shows:for the combo explain is:Thoughts???",
"username": "Admin_MlabsPages_mLa"
},
{
"code": "$match$match: { \n \"campaign_id\": ObjectId(\"604baadfa1a21c0c6d03901f\"), \n \"channel\": \"email\" \n}\ncompound indexcampaign_idchannel$group {\n \"$group\":{\n \"_id\":\"$contact_id\",\n \"6059ff5a2aa6a105ae85d7f1\": {\n \"$sum\": { \n $cond: [ { $eq: [ \"$event\", \"processed\" ] }, 1, 0 ]\n }\n },\n \"6059ff5a2aa6a103a585d7f0\": {\n \"$sum\": { \n $cond: [ { $eq: [ \"$event\", \"open\" ] }, 1, 0 ]\n }\n }\n }\n }",
"text": "Hello @Admin_MlabsPages_mLa, welcome to the MongoDB Community forum!I suggest use a $match stage as the first stage of the aggregation with the following conditions:The match stage will benefit from a compound index on the two fields used above, campaign_id and channel.Then use the following $group stage to complete the aggregation.",
"username": "Prasad_Saya"
},
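For reference, the compound index suggested above can be created from the shell like this (collection name taken from the original pipeline):

// Supports the equality filters on campaign_id and channel in the new $match stage.
db.events.createIndex({ campaign_id: 1, channel: 1 })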
{
"code": " db.getCollection('contacts').aggregate([\n {\n \"$match\": {\n \"isDeleted\": false,\n \"tenant_id\": ObjectId( \"5ec2a723a73af34fd5964c93\" ),\n \"$or\": [\n {\n \"emails\": {\n \"$exists\": true,\n \"$not\": {\n \"$size\": 0\n }\n }\n },\n {\n \"cellphones\": {\n \"$exists\": true,\n \"$not\": {\n \"$size\": 0\n }\n }\n }\n ]\n }\n }, \n { \"$lookup\": {\n \"from\": \"events\",\n \"let\": { \"cId\": \"$_id\" },\n \"pipeline\": [\n { \"$match\": \n { \"$expr\":\n { \"$and\":\n [ \n { \"$eq\": [ \"$contact_id\", \"$$cId\"] },\n { \"$eq\": [ \"$channel\", \"email\" ] }, \n { \"$eq\": [ \"$event\", \"open\" ] }, \n { \"$eq\": [ \"$campaign_id\", ObjectId( \"60648747f78ba3fd5b00e8ba\" ) ] }\n ] \n }}},\n {\n \"$addFields\": {\n \"6075ecec3319af23fd597b0f\": 1\n }}\n ],\n \"as\": \"events\"\n }},\n {\n \"$unwind\": {\n \"path\": \"$events\",\n \"preserveNullAndEmptyArrays\": true\n }},\n{\n \"$group\": {\n \"_id\": \"$_id\",\n \"emails\": {\n \"$first\": \"$emails\"\n },\n \"6075ecec3319af23fd597b0f\": {\n \"$sum\": \"$events.6075ecec3319af23fd597b0f\"\n }\n }\n}\n], \n{ \"allowDiskUse\": true })\n",
"text": "The solution above make the query a little faster and gave me insigth to refactor my aggregate…In this new query all goes fine until the $group stage… The $group is taking 50 seconds… Any idea on how to make it faster?",
"username": "Admin_MlabsPages_mLa"
},
{
"code": "explain",
"text": "Hello @Admin_MlabsPages_mLa,This looks like another (or different) aggregation query. I suggest you make another post with a properly formatted code (please use the code tags), sample input documents and the output from the explain (with “executionStats” mode) which is run with the aggregation query.",
"username": "Prasad_Saya"
}
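As a concrete illustration of the request above, this is one way to capture an aggregation explain plan in "executionStats" mode from the mongo shell. The collection name and pipeline are placeholders taken from the thread, not a prescribed solution.

```javascript
// Returns per-stage execution statistics (documents examined, execution time,
// index usage) instead of only the winning query plan.
db.getCollection('contacts')
  .explain("executionStats")
  .aggregate(
    [ /* the $match / $lookup / $unwind / $group pipeline from the post above */ ],
    { allowDiskUse: true }
  )
```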
] | Super slow aggregation when combining stages | 2021-04-08T21:12:56.275Z | Super slow aggregation when combining stages | 3,351 |
null | [
"dot-net"
] | [
{
"code": "public class Template : MongoModel, ITemplate\n {\n [BsonRequired]\n [BsonElement(\"Name\")]\n public string Name { get; set; }\n [BsonRequired]\n [BsonElement(\"Key\")]\n public string Key { get; set; }\n\n [BsonElement(\"Desc\")]\n public string Description { get; set; }\n [BsonRequired]\n [BsonElement(\"PS\")]\n public long PageSize { get; set; }\n \n [BsonElement(\"Gps\")]\n public List<FieldGroup> Groups { get; set; }\n \n [BsonElement(\"EGUID\")]\n public string EntityGuid { get; set; }\n \n [BsonDefaultValue(false)]\n [BsonElement(\"STemp\")]\n public bool IsSystemTemplate { get; set; }\n\n [BsonElement(\"CTemp\")]\n public bool HasCustomTemplate { get; set; }\n\n [BsonElement(\"CCmds\")]\n public List<ITemplateCommand> CollectionCommands { get; set; }\n [BsonElement(\"ECmds\")]\n public List<ITemplateCommand> EditorCommand { get; set; }\n }\nBsonClassMap.RegisterClassMap<Template>(opt =>\n {\n opt.AutoMap();\n });\n",
"text": "Hi All,I am facing issue related to element name in mongo collection. My class declaration isAs you can see i have assigned BsonElement to all properties. But when I save the document i am seeing property names, instead it should take values from BsonElement attribute. For some strange reason it is working for embedded document but not main document.I wrote a extension method to register the call mapping and calling that from Start of application.I am using Mongodb 4.2.1 community and c# driver version is 2.10.4I want to know why [BsonElement] attribute is not honored.",
"username": "Veeresh_Angadi"
},
{
"code": "",
"text": "Compass View1088×473 41 KB\nThis is how records look in compass",
"username": "Veeresh_Angadi"
},
{
"code": "",
"text": "same issue here (I have class with bselements that do not work in external classlib). have you resolved the issue?",
"username": "Przemyslaw_Michalski"
}
] | C# mongodb BsonElement mapping not working as expected | 2020-10-22T09:42:31.017Z | C# mongodb BsonElement mapping not working as expected | 6,268 |
null | [
"node-js"
] | [
{
"code": "users.insertOne()",
"text": "Hi, I’m following a node.js tutorial and having trouble creating a user in MongoDB Atlas.\nIt’s connecting fine using just the db URI but once I try to create a user it fails with\n“(node:64425) UnhandledPromiseRejectionWarning: MongooseError: Operation users.insertOne() buffering timed out after 10000ms”The node.js command that tries to create the user is\nnew User({\nusername: profile.displayName,\ngoogleId: profile.id\n}).save().then((newUser) => {\nconsole.log('new user created: ’ + newUser);\n});I’d appreciate some help.THanks.",
"username": "George_Ormanis"
},
{
"code": "",
"text": "I am also facing a similar kind of issue.",
"username": "Indrajit_Rathod"
},
{
"code": "",
"text": "I am not sure you can do that on Atlas in a free account. I may be wrong.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Did you solve this issue?",
"username": "Ke_Xu"
},
{
"code": "",
"text": "Standard Mongo API createUser cannot be used to create database user on Atlas. You may consultfor more information.However, you may use the Atlas API, https://docs.atlas.mongodb.com/reference/api/database-users-create-a-user to create database user. But as Jack_Woehr mentioned, this might also not work on free tier. For example, you do not have access to server logs on the free tier.",
"username": "steevej"
},
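For illustration, a sketch of the Atlas API request described in the answer above, based on the linked documentation page. The API key pair, project (group) ID, user name, password, and roles are all placeholders; the call requires a programmatic API key with appropriate project permissions and uses HTTP digest authentication.

```sh
# Create an Atlas database user via the Atlas Admin API (v1.0).
curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
  --header "Content-Type: application/json" \
  --request POST "https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/databaseUsers" \
  --data '{
    "databaseName": "admin",
    "username": "appUser",
    "password": "changeMe",
    "roles": [{ "databaseName": "myAppDb", "roleName": "readWrite" }]
  }'
```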
{
"code": "",
"text": "Were you able to solve this? Am running into the same bug and I can’t seem figure it out.",
"username": "solo"
},
{
"code": "",
"text": "Please read the thread attentively. It is not a bug. It is a feature.",
"username": "steevej"
},
{
"code": "",
"text": "4 posts were split to a new topic: Creating users in MongoDB Atlas using Node.js",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Node.js cant create user in MongoDB Atlas | 2020-12-18T00:06:14.836Z | Node.js cant create user in MongoDB Atlas | 13,063 |
null | [
"atlas-triggers"
] | [
{
"code": "",
"text": "It says “maximum database trigger count for cluster size=‘M0’ is 5”",
"username": "Grzegorz_Golec"
},
{
"code": "maximum database trigger count for cluster size='M2' is 10",
"text": "Hi @Grzegorz_Golec, welcome to the community forum.I’m assuming that you’re raising the fact that the error message is referring to the wrong size of Atlas cluster?I just successfully created 10 database triggers on an M2 and then got this error when attempting to add the 11th: maximum database trigger count for cluster size='M2' is 10. Can you confirm that this is an M2?If your concern is the limited number of triggers, one approach is to make sure you only have 1 trigger per collection and then add logic to the function to handle inserts, deletes, etc. diferently.",
"username": "Andrew_Morgan"
}
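A minimal sketch of the consolidation approach suggested above: configure a single database trigger for all operation types on a collection and branch inside the function on the change event. The function body is hypothetical and not taken from the thread.

```javascript
// Realm / Atlas App Services trigger function: one trigger per collection,
// branching on the change event's operation type.
exports = function (changeEvent) {
  switch (changeEvent.operationType) {
    case "insert":
      // handle newly inserted documents (changeEvent.fullDocument)
      break;
    case "update":
    case "replace":
      // handle modified documents
      break;
    case "delete":
      // handle deletions (changeEvent.documentKey holds the _id)
      break;
  }
};
```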
] | Cant create more than 5 triggers on M2 cluster | 2021-04-13T20:23:16.872Z | Cant create more than 5 triggers on M2 cluster | 2,502 |