Dataset columns: image_url (string, 113–131 chars), tags (sequence), discussion (list), title (string, 8–254 chars), created_at (string, 24 chars), fancy_title (string, 8–396 chars), views (int64, 73–422k).
null
[ "atlas-functions", "realm-web" ]
[ { "code": "", "text": "Hi everyone,\nI use Realm for hosting my website. Is there any way to retrieve/get Image from Realm Hosting using Realm Functions and convert them to Base64? I looked for MongoDB docs but no luck.\nThank you for any recommendations.", "username": "Tuan_Nguyen1" }, { "code": "", "text": "Could you use the image’s URL to fetch it from within your function code?I haven’t come across a way to fetch hosted files directly.", "username": "Andrew_Morgan" }, { "code": "realm-cli login --api-key XXX --private-api-key YYY\nrealm-cli export --app-id images-flxwq --as-template --for-source-control --include-hosting\nImages/\n├── auth_providers\n│ └── api-key.json\n├── config.json\n├── environments\n│ ├── development.json\n│ ├── no-environment.json\n│ ├── production.json\n│ ├── qa.json\n│ └── testing.json\n├── functions\n├── graphql\n│ ├── config.json\n│ └── custom_resolvers\n├── hosting\n│ ├── files\n│ │ ├── index.html\n│ │ ├── lolcat10.jpg\n│ │ ├── lolcat11.jpg\n│ │ ├── lolcat12.jpg\n│ │ ├── lolcat13.jpg\n│ │ ├── lolcat14.jpg\n│ │ ├── lolcat15.jpg\n│ │ ├── lolcat1.jpg\n│ │ ├── lolcat2.jpg\n│ │ ├── lolcat3.jpg\n│ │ ├── lolcat4.jpg\n│ │ ├── lolcat5.jpg\n│ │ ├── lolcat6.jpg\n│ │ ├── lolcat7.jpg\n│ │ ├── lolcat8.jpg\n│ │ └── lolcat9.jpg\n│ └── metadata.json\n├── services\n│ └── mongodb-atlas\n│ └── config.json\n└── values\n\n10 directories, 26 files\n#!/usr/bin/env bash\nfor file in `ls Images/hosting/files/*jpg`\ndo\n base64 -w 0 \"$file\" > \"$file\"-base64.txt\ndone\n$ ls -l Images/hosting/files/\ntotal 1576\n-rwxr-xr-x 1 polux polux 742 Jun 1 14:27 index.html\n-rwxr-xr-x 1 polux polux 15067 Jun 1 14:27 lolcat10.jpg\n-rw-r--r-- 1 polux polux 20092 Jun 1 14:32 lolcat10.jpg-base64.txt\n-rwxr-xr-x 1 polux polux 45186 Jun 1 14:27 lolcat11.jpg\n-rw-r--r-- 1 polux polux 60248 Jun 1 14:32 lolcat11.jpg-base64.txt\n-rwxr-xr-x 1 polux polux 22496 Jun 1 14:27 lolcat12.jpg\n-rw-r--r-- 1 polux polux 29996 Jun 1 14:32 lolcat12.jpg-base64.txt\n-rwxr-xr-x 1 polux polux 91193 Jun 1 14:27 lolcat13.jpg\n-rw-r--r-- 1 polux polux 121592 Jun 1 14:32 lolcat13.jpg-base64.txt\n-rwxr-xr-x 1 polux polux 10118 Jun 1 14:27 lolcat14.jpg\n-rw-r--r-- 1 polux polux 13492 Jun 1 14:32 lolcat14.jpg-base64.txt\n-rwxr-xr-x 1 polux polux 52967 Jun 1 14:27 lolcat15.jpg\n-rw-r--r-- 1 polux polux 70624 Jun 1 14:32 lolcat15.jpg-base64.txt\n-rwxr-xr-x 1 polux polux 10794 Jun 1 14:27 lolcat1.jpg\n-rw-r--r-- 1 polux polux 14392 Jun 1 14:32 lolcat1.jpg-base64.txt\n-rwxr-xr-x 1 polux polux 40502 Jun 1 14:27 lolcat2.jpg\n-rw-r--r-- 1 polux polux 54004 Jun 1 14:32 lolcat2.jpg-base64.txt\n-rwxr-xr-x 1 polux polux 42785 Jun 1 14:27 lolcat3.jpg\n-rw-r--r-- 1 polux polux 57048 Jun 1 14:32 lolcat3.jpg-base64.txt\n-rwxr-xr-x 1 polux polux 28546 Jun 1 14:27 lolcat4.jpg\n-rw-r--r-- 1 polux polux 38064 Jun 1 14:32 lolcat4.jpg-base64.txt\n-rwxr-xr-x 1 polux polux 34118 Jun 1 14:27 lolcat5.jpg\n-rw-r--r-- 1 polux polux 45492 Jun 1 14:32 lolcat5.jpg-base64.txt\n-rwxr-xr-x 1 polux polux 75977 Jun 1 14:27 lolcat6.jpg\n-rw-r--r-- 1 polux polux 101304 Jun 1 14:32 lolcat6.jpg-base64.txt\n-rwxr-xr-x 1 polux polux 34714 Jun 1 14:27 lolcat7.jpg\n-rw-r--r-- 1 polux polux 46288 Jun 1 14:32 lolcat7.jpg-base64.txt\n-rwxr-xr-x 1 polux polux 9326 Jun 1 14:27 lolcat8.jpg\n-rw-r--r-- 1 polux polux 12436 Jun 1 14:32 lolcat8.jpg-base64.txt\n-rwxr-xr-x 1 polux polux 148264 Jun 1 14:27 lolcat9.jpg\n-rw-r--r-- 1 polux polux 197688 Jun 1 14:32 
lolcat9.jpg-base64.txt\n/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAkGBxMTEhUTExMVFhUXGBcXFxcXGBcaFhgYGhcXFxcXHhcaHSggGholHRcXITEiJSkrLi4uFx8zODMtNygtLisBCgoKDg0OGhAQGy0lICUtLS0tLS0rLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS4tLf/AABEIALYBFAMBIgACEQEDEQH/xAAcAAABBQEBAQAAAAAAAAAAAAAEAAIDBQYBBwj/xABLEAACAQMCAwUEBwQIAwUJAAABAgMABBESIQUxQQYTIlFhFHGBkQcjMqGx0fBCUsHhFRYkM2JygpI0Q/FTVIOTsiU1RFVzo7O0wv/EABoBAAMBAQEBAAAAAAAAAAAAAAABAgMEBgX/xAAkEQACAgICAgMBAQEBAAAAAAAAAQIREiEDMRNRBCJBYbHwkf/aAAwDAQACEQMRAD8A8d4Rw2W4mSGBC8rE6VBAJwCx3JA5An4Vru0/Ar6PE15ayxRLpDN4WUeQyrHGTsM9SKh+h0f+2bP3y/8A4ZK9q7dZ/o/i/dOZ21HvEYlRAO4gLKuQdWFxJgbEueuaBUYmfgt4IQBw+5CaeY7ltsc8JISflXmg7J3kkPtSQMbd3CrJlMFmkEajBbO7kDlX1IpAmtyJW1+zyaYOSyf3OWJ5ZU4A/wA5rC27E8DUsmhjfqSn7pPEwSu3kdvhQCVGE7EcCvrS4lgks5WkEYkZFaLIQkqGy0gBBKkbHpWs4VezXMQmtrK4ljIzqHdLv1ADyAsRyyoI9a9Tms0M7TA+NYmhYehIkXPu3/3VjeDpL/R3Bu4eNXARgsjsiSf2WXKeEEk+LVjH7FAYozMXHFbCxxyPI0ndCIALIJN8qQ5UKRpOcmp3uLoTLAbC5ErI0irm33RSqsc97jYsvXO9RR28n9MF5ggk9vgDCMsUybUciwB5Y6V6hNJH3vtB+1EWtsf/AFZIcfH7PzoFR5rDNdu8ka8PuC8RUONVv4Syhl/5u+RvtVLxLi8iTiNreRJldUMR0s5ZgrKBoYg5z516lYD+08V8fd/3Xj/c/sw8Xw5/CvK+GXot7gXJnkvu7vExIFZpJ1NtoJUDOSqsx/8ADpMeKLbiNxd26d5c2ksEWQDIxjZFzsNXduxXcjcjFdVrwxd8LK4MONWsBM6cZ1CMv3hGN/s59Kuu2wnltJ57KZZLeVlNzDMh7xBiNW0asGPCqGKMM8yOeK2Rc/0hGuTp9lkOOme9iGcUUFI8iHFJ3gS5EMwtncIs31Wkkyd0DjXrxq/w1JBYXU6d7Da3E0W+JAYwrY2JVXdWYbcwMHpVrx4Y7PMBti9kA9McRkArY8JOmPg4GwMQyBsD/YyeVQ+NPstSro8r7waQyqzFnWNUAAfvGcRhMMRhtRwQccqk4tHPaiPv7OePvXWKPJhOp25L4ZDjPrtV6bYNxfRtg8RLkdPq4BN89QB99aP6Qsz2qsykdxxC309MqJkj1b8x9Z91RHiW7Kc2YT2W575oPYrjvUjErLmDaMkgNnvcEZBGx6VLYWV3LEk0djcNG6h1YGA5UjIOO9zy9M17Be2aGR5gfGkMkTD0bTIufdg/76yfAp2Ww4EFYgM8KsAcal9jnOk+YyAcegq/FEWbPPpYzI6JHHK0ru0XdAKsgdFLMGEjKFIC55+VWFrwq8thlrG40llXJa3+0zBVH971JA+NLtuGW74lMjvHJC6yRsjaSrexRDOR6E/OtP8ASHeyi64TGsrqkkimRQfC+JbbGodcaiaca6JlsojHdd5IvsFxqRVZxm32VtWk/wB710t8qCiju5YluEsJjCU7xXBg3TGrVgyZ5V65JBGJrpw+ZGhjV0/dVRNoPxy3+2sLxJgOz1lm9az+oiwy85T7O2If9XP/AE1ZNHknFOB3N0011bwO8MahpGyg0juxJyLZPhIO1Qv2K4gnc6rVx35Cw+KPxkqXA+1t4QTvjlXov0OapLfiNsftPBGQD/ijkjH4CvULqyWV7QL/APCzguPL+ySqo/8AuoaAo8W7K8Eu4w9stlO00ZBlUGLSpcah4zJpzjG2c+lWgmfvfZzBMLnO1vpXvCMZ1g6tBTA+1q09M52rX3F0TZ8akVsH2qRQy7EaIbaLYj/Ka0ctuv8AS0b4Gr2OVc9SO/iI+WT86ylxRbs0U2jzG+a4gaNJrOdJJWCRL9URIx/ZDq5QHG+GIp4NyJJI/YbjXEiySDVb+FG16WJ73G/dt8q17sWsrMsST7eu5OT/AMVKOZ9NqK+lDVFw2/lhGZXjVXOdwmQjH4Izn40eGIeSRhzJcgwj2G4zPnud7fx4QyHH1u3hBO+OVSyx3SukRsLkPIHKLqt9wmnVv33TUvPzrYXH97wP/wAT/wDRkoJGH9YFAvDMe7nzb9LbwQcv83OjwxDySM1I0qSrFPbzQs6O66zEQyoVDf3btvll511l6+dN7SY/pZsXjXHgnBjPK1JeL6se/wD/AJqU1hyRUZUjWDtbApI6DcVZPvQkiVBRXODmlRRSlVWSeediePLY3sN0yFxEWJUEAnVGycz/AJs/CvU7n6QhfWt5FaWfdvc5EjvMN2aNYy2kDOdCqByGwrxQlcbCrbstfPE+U5ghq7Wc/Z7tFx+7ae3nHDyBFDJEQZ4wW1mLBG3L6s/Oq6aW8NmbX2QZN17Rq7+PGPa/adOMc8eHPnVv2d4wl1EHB8Q2YeR86sWSs82VRXW/H71bm4l9iBjmSMBPaI8q6B1Zs43BBX/bVdwO9u4LW0glsVmltcGGRJ1SMERtH4wd9QR3XYMDsdul+VrhFLNjxRnJFuWuHu/ZwGN3FOIu9TJRIBEfHyByKIueKXjCYeybSXcFyP7RH4ViNuzJy5kwnf8AxVc6KbopZseKArTjlwst27WGpbkplfaI9gsQjIO3XB+dUdpw2SOVp4LdLfRLFJBCZNaeCMxyIWX7IYM3uyNjyrU6KYRSc2NRRXcU4vPPDcQw2ZtmuT9dNJKjqCVWMsqISWOlABnSPOi07VXIKs1gGuVjMQmEyC3IJBJ0k6wMqDp0kjlmpNNLu6PIwwRScKupFsPYrqwW4XXI7N36Kjs0zyhtJGRu3Wi+C9pLmKC2jnse9lt10xvHPGE+x3Y1hsEHScHAYdRR4tiakHDWOcU85CxiZvh8txDdpdtEJnY3Ekio6oFeXQqBS+5VUUrnG+M+lHzdpL2SKeOa1165dcX10X1aBkZE5b4K8/Wj57Nl3KkCoNFT5JIeCZFF2puhcXMhs8xzJGqp38eVdFdWbONwQV/203gfaCWG2tIJOH62tlQK/fxgB1jMesDHkzD41MUphSjyyH40UXF7ea4W9kdVWW51EIrZVfqliRdZAycICTyyTU3aTiVxcz2UwtdAtTqIMyEviSFsAgbbRnn51brbk7AGpV4cxOCKUZy/AcUR/wBbpxNPL7FtLHHGB7RHto73J5de8HyqoHHVNhBZXPDhMII0UEzpjWkZQOBj1Pzo3ivDcgryPn5HzrBcUimU6W2
I66tj671tCWWv0xncd/gX2Y7U/wBESCaSIyhrdYXCsAQ6srA5PMfa+daXsr9JcjS3c62jPHPKjIveopTRCkZG43zoB2868u4raM4OqRfPdhWy7M8P7mBEPPGW953pck3BD41ky54DxyWGO6iuLbv4buWWZkSRVkQyHdMkhWXSq7ggg5ox+1l0b9b4QqFWIwezFxrKMwdn7zGkSagu3LC8980EopwUVz+eRv4kF8R7TTOLdILLuoYZ1nZHlQySEOXKqQSqjUxOSfIYom47XTSNdCSyJhuIkTR38eQQrq5zjByGX/bVcBT8U/PIPEgl+0s5ewb2P/hNWR38fjzbtDtttu2aeeO4vFvU4aFlw4lbv01SalVR0xtpHyoQrT1H50eeQvEiDi0sE0rytwiMs6y69U0bZlfRpk3GMjDcv3qi4ZCyQwo321jjVt87qoB39450UM041MuRy7KjBIhdaGkj/KjGqJ6goAkj361yjBHnzpVViPERRHDWOvAOMioCK7aSBWzv8Oddxym47J8als5A2coT418x1r2u2nWVFkQ5VhkGvm1pJj9mM49a3/0Y9pZIm9nuPCjHwE/sk/wrFr2zXvpHqxFcxUpWmMtSBGRXCtPxSIpDI8U3FTYrmKAICKfbwEknyruKKt4iNxv+P86EDZKpXAxRCc6HVV2zsenr7qmGfl99U2yUgkMuDkbfdTI7aI7d2vy/jUYXUDj76jjl3570ZBRIODwNg4YegO331KtjDuvdjHmefz51FLORn0z929RxzFt/Pf7qMkFMKigjXIUAUxxUYXfenyj5+VGQUB3Vqr++sN2t7LLcKVP20J0sD16j3Gt+UyQf1mh7qMeVJ72P+Hz/AP1aInWNkYb5OfIb1ul2FbK84erHON+W1VknCFG5+AG1ZzuRcaRSrT0aiZ4D5YHT/rQ5WsqNLHpTtVRinUgJFp55e+ohT2Y/KgDope+uaqcBQAwrUbipiKQ9ffQhAjA0qkINKqFR57N2ciuEaS2dFK/aQnCA43GGOqPfPPKnHMVk7q0khk0yIUYEbH8QeRHqKnu3aOYlCVPQjY1qeFdpYJ1EF7GrAnZ9hpJJyVfnEfT7J/w12purMJJXRDHMMDfpQ11MTyOD0I6VpL3srpXVbnvkAGR/zVHTK/tD1FZyaP0rl0mda+yPUPo27T+0RCCVsyoMAnmyj+NbZhXzpwy+eCUSRnDKc7dfSvd+zfG0u4VlTnyYdQ3Ue6tE7MZwxLEim1IVppFMg5iuYp5FcXekMaFo6GQoMlagiUDc/wAqn7xW21D+NUtEvYRrDDJX5Gh3Qnkc+/8AMVPHGoHWgOIXrRglME9M4z86mT1sqK3oKi2OD99RS2+JVPnk/IV5zxP6Q7hWCrAGJYpjPMqTqwPIYO5wOtSWXbiSZ0SRNDHkwO3iyAT6chz61m5a6NfG77N3xScAMeWFJ+486m4cRz9MD5VjjdTzBgcIu41EZJ6E4yB02JNFW9xNHpUyKVx9obHI6FcnoOlSpfoOGqNRbZJd3OwJA+BrkMrE5Fea3n0hTqdMNvlSxHizknBZuXLABJ91H9m+2E05GYlAK6shsgrtvk45Z5etXekxeN7PRIxnmKZIV5CorOfUMn5DpXXjA5D51ona0ZNU9kDxZO1Qvbnyz64qxSMDrk/rpyFOA/X86WIXRTvaZ5jP3D58zQc/CtWygj3HH4Vomx76hcelJxQ1JmPm4FIu+QR6ZzQLnScHIPqMVutJ9KGnhU7NpPwFQ4FqZkEfzp2c1bXPBIjvGdB9OXy5VS3ttNDzXUPNRv8AjUOLRaaZKRXQTQMd+pO+QfJhg0YJhUjocRXG2+NdBrpG/u2oERE0q6wpVQjxTi+dYJGMigWq448NgfWqdjXZxO4ow5lU2a3svxuSFBoLYUnbfA9QQcg/j1zWte4tL1CZAYpsfbVRqPoYxjvfeu/oOvnfAn2YeuasGcg5Gdqym/s0bQX1TJ+P8Cmt/GR3kXSWPJTPLDdUb0apuw/aZrSfJyY32cdMefwqfhfauSNsSEkHALYBbGrJ1Kdpc7jxb7neraXs5b3g7y2ZIpD+yCTA53wvnE+2dPr1oVDk3Wz12CYOqupBVhkEdQa6wrz3sPxia2lFjeKyZ/umb7JP7obkQcdDW7ub1E+0wHvNMyolpyr6VUPxgMcRoWPy/nV3wqGVhmTAH7uMfjSi0xNUd0k8h+VBXMZyAdt855D54q9DhTyGP10qVlB9RVNWJOiqFzp2IGfQH8aFurlP2wD6frNWc9gT1x8j+NV95YgDB38z+jWM8jaOJ5/x/gEFw5VHOSSyqASwJ+1uOQJPXHp6WXD+yqq4MjeEADHXA5Y8uvrWgjhCZ0qF93Unf51O0BALMSCdIGFJ0+7zb/08zWcU2aSkvwrOIcCjcowlZQjZCALoJAOQds4+NS2HAYYwwZ3cEsSGwFGs5wABnAzgbnkK4xJ+rVgrMCikbhAc6nG/i/OuW1wwwjMpZMIxYeFhsVYr0BHyOfWtdUTuqKTjvYZJVZVbAzkYxkN1z0J6dKn4BwWO3JVkdho0DbAABzjmSSSOfLb56eK31cyO8C525kE5xqAAZegONqCDHOd+uRyI+FQ010Cld2WHDnVFAUaV8uWKKkIJBO3SoLOPI229+f4iju5JGGxj0rXjRjNkGSM9SaUfqcnr+ugrsEWk7ZqRlx0zVtEpjRk/ZG3U9PnTGUjr8qcrH9r+VSavICpqxgjS9Cfuod48/oijJD6D9fGh2fz2pDRA8A8s0NLCvkc/GiWuFztkn3nFQNKTyA+B3qdFUynvuG69sD8Ko7iwaPlsPTcfLNaZteds0NejUMEDPmNqiky7aM9HeMPI/cfvqdL8E77HyIx9/Wq+5yGO1MEm2Kz6NKsue/HnSqnGn1HuJH4GuVVk4mB4vH4D6VRmtNfx5RvdWYBrr4Hox+QvsH8FbxEeYq4ZT51Q8LP1g9c1oQnnUc2pGnBuNFdcIc1Lwy+lhfUjY5ZU7o2NxqU7Hff06Yp80dQGOpUjVxPS+BdrknVIZow+6jSxy+w+0jHZiCAdJw2CcE4GbufhfffWQSgqeYbOoehPMe4/fXjunar3gna2aFl1szAafED9aFGfDk7OuCdm+YodPslcbe4nsfAOFvCuWKeew/iatZ+JBemfcd/lWV7Pdq0lTUzLjYGQbJkkgB1O8TcvTfnyzq4dLY35774OfcTnNV+UjBpp7KC77SRhsFJF/wAWZMD34U1NJxgsuY5A23Nd/mCBWkj2/wCgpT2wceLPuqcX7HkvRk4O1zLtIMgftIAfmASfuq0t+LRTDwNk+WDn5EbV3iEWhdoxgcx4fz3quitYpvCmzDmN1denn+FZuy/r2WejBBI1Y3C8s+RPkP17o7qVtJLMOW5A2PPwIDzHm3M/hELaZNg5A33O+wG3Pc77VJbW+nBY6mHMk8uuw89zVUybRBwvh+nLtnUdgP3V8vT1rvErLOl1HiXZvNl/iRzHx86K9oGcE86cJxjc/r30klVDcn2Q2bAgMpJXmV21K3UqenLkdv4lnc5xv7sA+Xng0JdWRP
ijOG+5h/Oh57S6I8M4Hn4Ry/X63qlfol0FXF8IvteE+TMufgM5ND2nGJJGwEKqOpXGfnUb8MbGo6WYc85I+/OPz99DF3U4aIKPMEY/EY+VDtAkmjRGU7Yx61PJuMj40Jw0HHNfmf40azEbEfHmK1TszaoEAJ9PvqYLj0+O5qOSTHIfdUSM2csx9wFS3RXaCQ+eX8KinTI3x8TXQc/st8TiopU65Pu50xAU0YA2IHu/jUUEQHI59aluWx50MrM3L+X3dazkaI7c48/uoEpkHDD4flRrI36PP4VV3aMDyKnofOp/o16KPioIY5/l86q2NXt9DrHiG45EVSSwaTUTRpDo5mlXMVypKMg+SDvWacYJFas25NZq9j0uwPnXbxNfhzcyZHath1PrWjaQ+YrNKN9qvUQ4FPkSYcTase7H0ppJ9KY4PpUYbzrJxNsiXUfSmkUtVdNOjXhdtklndvE2uNyjDqPwIOxHoa3HZrtyUwsmF2Ox/u2YtnO/91zI2yvLYYrBV0DNI2lBS7Poa14qkpwrlW6KTzxjcdHGCDkHqPdRPdHOXmdh+4vhHzAB++vAuGcTmhI0kFQc6GOVzjGQM+E4J3GK9D7PdtVKBJc5wAVcjUd8eF+Te44NByT4nHo202k5UR6F6Y6nzPn7t65aFi4WNNhz0jLZ6Zbko953qputUmkwgOgZe8XLLIq6hqwAd/DnbY8sZrZWuGSMWroiA+IFSSRgjTucq2cc99qIxbZk2loIi4XlcZwCDkZJG+/WhLzgunxAnHXH5VZ8Kt5EUiWTWxJOcAYHltR1dDgujKTqWnZhrlFXdtwTtjPptUUTxlQRsTsAeeauuM6Y322yM+h51QXHCoZGL62XUckDkT8Dv8a5pRpmiegmK80nT+15HIJHmDyYVL3wO+CD5HIPzGxoQQYwvjYDcMVUqPieVFRqx35j0O3yO3yIpKxuiRY2JyNJHzNExW5HPSR7vz2qFIt+WT8QfganjkYEZJI+8e/89+VNITGywY3QYPpjSf4VDb8SP2XXSfuo7WPLHu/KpDaqwwQK0S9EWA3ewyNvwqKFl6mjJ7IgYHL5gfGg0VhtQ1uwT0SlBjOflTGcD1/Xurphz6H02P5Go2i/68x8ulFADzENy2+dQkBBgHenTAA8l36imsiqRnryzy+dQy0ZbiPGiJcMdOwwDkczzqxhnaQeGRD6ZB/mKsH4bqOogN64zRENsq/sr7wBU4luRnLuCTyIPnjI/mKp72LO5A1frevQWUHpj4VW3/DQw3AI9c/jQ42hRlTMEVpVa3HC8MQCfjSrCmbZIxzvjr91VN3HqJJ01bPG1B3Vs+eVaLReDfsqxGBy/CiF3GBUj2z4+zTkjYD7NXkS+N/8gWRDQ7A0XNnyoY+6tEzJpjADUw5CmKKfQ9orh1IVdLU2lmpOyztPjcjlTKVBRuuyPHdI0u32cldJOoHlseePQ5G1bnhnaePZydTDmUHi8t1H2vhXhyORyJB9KkhuHUhlYgg5BB3p5GE/jqTPpC07UZww+sjPN030nyYc1+VXnC+KLKNs+h6H4181cM7UTQuHU5wc75yfMEjn8c1uuEduoXOoSezStgHUC0JI5ZXOFG/MYPvqlyNdnNyfFklaPTuM2xMyM391jBPkc9fLpRtrw23wdCqfUHPp57UzgvEO9iBfSGI3CnUh9VP7SkUHfr3DGVNh5fs+XwrSv05bfQr7h/deIZZev7y+vqKFkjUtkHG3Ll57+tc4vxsMi4yu++T5fEZG4qtlukyuduZU+oyNs9D+VZSaXRrFNrZaBCOT5+G4+JrrTHky5z5jI/P8aGk4rhU+ydQOnIwT16dOW9T295qG+ny59fQ0lQOzqE9NJHkW5fOjY3x0+/NC3AQjxYPocGgwyrsjAf4T9k+7NWtEvZeGeg7hQdxQwO3iHyJ/A1WXEyg+EkfHH3biqb0TRaCToRTjIKroWYjOc1Ky55fIilY6FIqZzk79OlcS3O+jSR5efxoWaEkEZwefOiOGSsBhunX0qO2X0hJIQcFSKJjQHmoPv/W9TS3igZY+7/rQjXufPHlyFJUv0HbDAo5AAfroKGuyg2zk+QH6xQzzs3oPIfnzNQgU8vQsfZA8IJ6fKlUrEeVKpLPLD61zFOx6101NnpcbB3FQSRUY4PlUbKamwx1srJ7eq+aIitA8eelCyw+lUpGU+NNdFAwNJDRtzbnyoInFadnByxxOtXKRNcpGaY4Uqbmu5pFpjq7Tc12gqztKlXaCi97OdqrizIETZjzvG2Svw6r8POvVuA9uIbwKuvRJ1jcjLf5W/aH314YRXaak0c/J8aMto+grttRKcmJyDtn1I9SNqg43buIsxYA5nO5Ix+e2K8r4F23nhASX6+LyY/WL/lf+DfdXqPA+P295HiJ9x9pW2dd87r5ZHMZFFWck4Sg9lHN2kMQLGI6mxg6gc/4fERo+H8adb9sZs/WWz6DyO2nHTfJGfiTVlfWqKmllBC88505/L3VVrbIh1JBscciQp8jvUbQWmXP9Y9QH9nB6gMT/AAG386IXjM2NoAB65I+ZJquiMnPTjPXwjA8s4yaRuJM7oceeF/FsmmmyWkE3cl1IMxrH/pzkeuP50BbM52ceL3AZ+6jo4g26Phv8x/Pb5UQY3/bb47EVptk2RRNjfOM+8ffRsdyeR3HQjmKi7onlT0tfOl0Ls5PADuCc9Dnr6jpvTE7xht4QfifhRSxiu4qWy0RdyB646n76QqQ1z9b0hjWFNNOxTSNyc/CgREW9P186VIj9bUqaEeSGIeXr7hXO6Hl6/CpyRz9fmfyqC4u0Td2A955n+VFHr3JJWxd2Pl7/AICuFR1z8z8qg/pOH/tU+fM1E3Eov+0T5jn50Yv0Q+fi9r/1BLp6n5moXUeZ+dQtxGPpInlzHzqGS+j6SL86MWZP5HH7Q+YeRPzqvuEPMb++iTeR/vqPiOVQvcx/vr8+lNJo5eWUJqm0BrID76eDTZyh/aXPnmoFnxzI94q6PnS+n6E5rtRJIDyOaRmUdRU0NTXslzTgahE6/vCurMvLIzSplqa9k1IGo2kA5mud+v7w+dKmXmv1k+aWah9oT94fOl7Qn7w+dDTKzj7RNUttcMjBlJVl+ywOCPcRvQvtCfvD505JVPIg/GltA3CWrRvOC9v3wEul7wDlIoHeL6leT8+mD762tpeoy64frAceIc888EHdT6HFeHs4Xmce+rPhfaFofEjYxsGVtJHp1BH+Egj8aZy8nAr+rPV+/mkJAKrjoT4vl0qMQTqfCGbzXL5+RoDs326t5xol0d4dhuFzy6E8/if4UTxHt1bImIbpcgZAUoT5nmMHketFNHNX4WUvZp3Ks0roeZBOfhVtbWIQAambHMnl8ulYCy+lJNZDMCudiyFcj/QWx8jyq/tvpDsyAZGEYPLJDA/LxfdRuxOLo1IXHIVzTVOna+wKhva4MHOCXA5c/tYNc/rbYf8AfLf/AMxaKYrLgj9bUzFVLdrLH/vlv/5i/nUlv2lsnIVLqBmOwAkXJPkBnejFhZY4pBKrrntJZxsySXUKOpIZW
dQQeoI6GnXXHrWNVeS4hVZBqQlx4l8x5j1oxCywqJ2oCftHZqqsbqAK/wBkl1w2Dg4OeldseLwTZEE0chG5CMGwDsM0UwsILUq4WNKkB5MW/l+dF8E4tJa3MU8QjJ2iIkUsCskkYYjBGDtsarwc8/0PKlJIBpY8g8THrhRIhOw9BmqXZ6f5UVPhkn6/w9l7V8bdOK8PslSPupiZGJU6wya8AHOMbdRRnaLhpjteKyELiWN3THMBbVYznbY5U1hO1Ha+zl4zw+6SYtDCriV+7lAXOvGxTJ5jkDVjxPt1YtDxRRcEmcEQju5vFm1RNspt4wRviug8q4st+3013HwpHtI4SogzOz/aVBGpDJgjxc/PlUnEeIxJZy8ZULqksYkQYGzlnKjHnrlUf6ay3bbj3C76xijN9IkkMRKokcoDv3YARsx4IyMc+tZ+47SW7cAgsu9/tCyoWj0yDCiZm+1p04xg86LCMG2kW/0TxhrHilsQCe6Dg9fHDImfnH99ekX/AAdCeGKFUCGZSRgcltJgB+FeS/RX2kt7O5na6k0Rywhc6XYalfYYVT0Y1t+D/SZw/wBpuzLPpj7yNoGMcp1L3KoxGEyNww3xzpRdo1+RDHlkkibjkqyWHHCAPC8qDYfsWsIP35qk+leMe28DwB/ejO3P621qptO2Np/R3F4nlxNcz3ckSaJMssgGg504GfUirTth2i4Tdmyn9sbvbSSJlRY5NLAywGTVmPosZOx6VRik76M39PkeOJppAGLWPlt/zZq9I+iyNIuGWMbqC1x3rZwP2u8l/wDSAK89+l7inDrxlura6eScKkRiEbhO7BkbVlkG+W861PCvpC4bbxcNgDq/dIEkkKS5gIg0lh4NyzeHboTS/S5L6Rr+gdpZCLs7xGPAzHLdpy/dlC/woPtMg/qvYnAzqg3683q3k7WcHltr21kvNCXE0rhljlzpk0MWGYyPtaudVlzxvg8/CIrB75k7rdSscmomNn7vOYyNwQaDNaZD9B8kZW9jjaNL1lBgaQA4XQwBA5lQ+7AeYrVcKujLx5RLZm3lWzlDsdDLMO9h0urL9oDxc9xnBxyrzD6N7+wWO5ivSkMsig290YyzQvoZSVcDKMp0sNxuOdb6++kqwXiVq4laSKOCaGWdUbSGkMLK2MZYfVHOkHGqhdFzTcmG3cY/oxdh/wC9COQ/+bPVv2qa6W+skEUHsbTRhnx9d3uJWAG+NOFHSsf2j7Y8PjtoraC478tercOyo2mNDeG5Yk46Z04GScZxVf2l7YWkvHLO6jnLW0QTvG0y6VP137BXJPiG4HWmQotnoHEDdf0tbo8UAs8uYmA+taQW7agwzjTu3ToKpvpahX+jZ2nSMSC4UWzKviA1rpyw5HGsHltWZXtda/1hN4Zz7Lo0h9MukN3CqfBpyN8jOKK7TdsbKWwvoUm1SS3BeNe7lGpS8RByUwPsnn5UrKSaaI/oTsVjMt3IudUiWkWfNiHkIz/p/wBpqs7ZwD+sDnyubEY6bpBU3YvtutqttbXEERgSd5O/JdnjZu8YSaAp3AYrkedB9rOO29xxb2uIKIEktAZQrhpNDozyFSobwjwcjtGKhNYqv4bSy8km17/w9ivESe5urSWKJ4VtonAKDOZGnVsny+rXHlvWPnlgXhnAnudHdCW21lwNP/CzaS3pq08/KrC47e8LSWe5S5eR5IUiESwzf8sysMEoNyZOpwMVk5u1Ni/D+EwOO/8AZ5IDcw907YVbeVHyGXS2ksDgE5xtV2c9M0H0v3Uq2VwklsJ4JCpgnTRi32jChxz+2HIYbYcCs/8ARJiDhXEb1FX2iMyBHZQSAkKOo92piSOtWXHe1PDksL23trgS+0AiC3SN1WANEiacMAEUMpkOcbscDPOi7B8etIrO9sLqUwC41FJSpZPHEsbA4GzArnfAOaVqy0ngz0ySwjXjMEoRQ8tnP3hAHi0yQFSfMjUd6pO0nFXteDy3MYUvFeOyhgSpIvmxkAgkfGqy/wDpQsRxW3dXZreOCWJ5gjaQ0jIwIXGoqO7AyB+16VU/SR2usW4ZJZW1x7RJLM0uVVgqK07TnJIx104G/XaqIpmp+kLtVNHw+yYLHm97tJcq2wkjDHR4tjk7ZzWx43GGMvtCRtaiA6tS6n1ZOrYZOnT6c68W7d9rbOey4XFFMHeB4TKulxoCxqrblQDgjpmtw30k8L9uMvtS92bbu86Jca+9LYxozyNAiy+iyRYuG2auWLXDSsC25JYyy7n/ACqdz5VU9joWtLG+kiTVcR3csCHGpikcqxxxDP7OCcDlls0Hwf6ROGQW/DIe8jcxKqyOUkzbkW7Kzr4NyWJTbo5pQ9uuGOL63N33SyTd/HNokwQREzY8IIYOjbbcxigC+4OI2400yW0kDSWja+8QIXZZUw+ATk4IBPoKqDJeNxG4N4kSaYU7ru+Ri76XSWyx8W2/Knn6SuGNxGOf2kCMWroSUl2dpUYKRpznAPptWE4/Pwz+1yQcTlZmQyRqUOJJS8rd0S0f2BkY3H2jSatUNOmbSftBbKxHeZxz0qzDPllQRSr5/mvHY5LHPvx8MDYClWfiKyNmW/M08N+ZrlKsT2KO6vzpjDPP3mlSoB9Akke/Pnv8qic/fSpUGUktjATn9dOld/6/jSpUMzicP40vL13pUqRaFjb3mmHf40qVIGQSjPxoR0xuOXlSpVcGzh54J3Y6M5olErlKqZxRYVHDRCrSpVlI1Q6nq1KlUmqHLzomNq5SpMYRG3nyp1hJtIPj9w/OlSpETA7wdfSgX50qVVEyZV39uBuKBNKlXRHozY000mlSq0YsbXCaVKqM2NJppNKlVIxkcpUqVBJ//9k=\n", "text": "Hi @Tuan_Nguyen1,I’m not sure what you are trying to do in the end with the Base64 images but here is my attempt:Firstly, I create a little website with a bunch of images and hosted them in Realm Hosting.\nimage679×927 59.8 KB\nSecondly, I retrieve the files locally using the realm-cli:Which gives me a Realm folder:Now that I have my images locally, it’s easy to process them and retrieve the base64 value for each:This creates a bunch of files next to my images:Here is the content of one of the base64 files:I hope this helps,Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
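As a follow-up to Andrew's suggestion of fetching the hosted file by URL from inside a function, a rough sketch is shown below. Treat it as untested: the hosting URL pattern is a placeholder, and the toBase64() call on the HTTP response body is an assumption about the Functions runtime rather than confirmed behaviour, so check both against the current App Services documentation.

```javascript
// Hypothetical Realm/Atlas function: fetch a hosted image over HTTP and return it as Base64.
// The URL pattern and the toBase64() call on the response body are assumptions.
exports = async function (fileName) {
  const url = `https://<your-app-id>.mongodbstitch.com/${fileName}`; // placeholder hosting URL
  const response = await context.http.get({ url: url });
  // response.body should be a BSON binary in the Functions runtime
  return response.body.toBase64();
};
```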
Retrieve Images from Realm Hosting
2021-06-01T09:14:26.422Z
Retrieve Images from Realm Hosting
2,789
null
[ "swift", "atlas-device-sync" ]
[ { "code": "write: falsedo {\n try realm.write {\n realm.delete(task)\n self.taskTableView.reloadData()\n }\n} catch let err as NSError {\n print(err.localizedDescription)\n}\n", "text": "What’s the process for handling a sync’d object that’s deleted, but the Rules are set to prevent a write.In other words, in the console, the Define Permissions for the App are set to write: falseAnd then in the app data is deletedThe end result is while an error is thrown, which is caught by the catch block, the task object is still deleted locally.How does one roll that back or prevent the object from being deleted locally?Or is there a totally different approach?", "username": "Jay" }, { "code": "", "text": "Hi @Jay, unfortunately, the SDKs don’t know whether the user is allowed to sync writes to the realm.I think this means that the app has to figure that out. I’d suggest doing it when opening the Realm.If the sync-write rule is complex and/or needs access to other data, then I’d suggest implementing the logic in a Realm function which can then be called by both the sync-write rule and your app.", "username": "Andrew_Morgan" } ]
Cancel delete of Sync'd Object
2021-05-31T21:59:18.620Z
Cancel delete of Sync’d Object
1,750
null
[ "atlas-device-sync", "monitoring" ]
[ { "code": "", "text": "I’ve been playing with Realm, and I noticed that the number of requests you charge for don’t match the activity in the app. Or, I cannot explain how you calculate the number of requests. The documentation on this is straightforward, but, in my mind, the numbers don’t match.I used your Swift Sync example and run the app and added 10 items. This generated 11 writes (1 group and 10 items).The number of requests shown is 18.In the logs, I can see 12 additive changes. 4 connection starts and 3 connection ends. 2 session starts and 2 ends. 1 login. 1 sync other. In total: 36 OK requests.I understand 11 writes would generate 11 requests. What are the remaining 7 requests?I didn’t run any functions or triggers. I only launched the app 1 time. In addition to your example code, I also had 2 additional model classes, so 4 model classes/objects in total.Update:Next day and I can see 19 requests - even though I did not launch the app a single time. Why is that? Is any of the activity related to signing in the admin app generating additional requests?", "username": "Lukasz_Ciastko" }, { "code": "", "text": "It’s probably a bit late for you @Lukasz_Ciastko but this might help other people:I experienced a similar issue with 36 requests (and above an hour of sync runtime) overnight even though the app was not launched. In my case, Realm was running the following requests every 3 minutes (in logs):It turned out the additive changes in question were due to old bits of schema that were accidentally leftover in my client schema. I’m not sure why this was an issue while the app was not launched though - it seems that Realm was continuously trying to make those changes (but wasn’t complaining about not being able to).After deleting these bits of schema from my client schema and restarting Sync (not sure if this step was necessary, and I understand this may not be feasible for apps in prod), the problem disappeared.Hope this helps people with similar issues.", "username": "Laekipia" } ]
Number of requests doesn’t add up
2021-03-26T22:15:26.924Z
Number of requests doesn’t add up
1,967
null
[ "mongoose-odm", "security" ]
[ { "code": "return (author.save())\nconst mongoose = require('mongoose');\nconst Schema = mongoose.Schema;\n\nconst authorSchema = new Schema({\n name: String,\n age: Number\n});\n\nmodule.exports = mongoose.model('Author', authorSchema);\n\n\nconst graphql = require('graphql');\nconst Book = require('../models/book');\nconst Author = require('../models/Author');\nconst _ = require('lodash');\n\nconst {\n GraphQLObjectType,\n GraphQLString,\n GraphQLSchema,\n GraphQLID,\n GraphQLInt,\n GraphQLList\n} = graphql;\n\nconst BookType = new GraphQLObjectType({\n name: 'Book',\n fields: ( ) => ({\n id: { type: GraphQLID },\n name: { type: GraphQLString },\n genre: { type: GraphQLString },\n author: {\n type: AuthorType,\n resolve(parent, args){\n //return _.find(authors, { id: parent.authorId });\n }\n }\n })\n});\n\nconst AuthorType = new GraphQLObjectType({\n name: 'Author',\n fields: ( ) => ({\n id: { type: GraphQLID },\n name: { type: GraphQLString },\n age: { type: GraphQLInt },\n books: {\n type: new GraphQLList(BookType),\n resolve(parent, args){\n //return _.filter(books, { authorId: parent.id });\n }\n }\n })\n});\n\nconst RootQuery = new GraphQLObjectType({\n name: 'RootQueryType',\n fields: {\n book: {\n type: BookType,\n args: { id: { type: GraphQLID } },\n resolve(parent, args){\n //return _.find(books, { id: args.id });\n }\n },\n author: {\n type: AuthorType,\n args: { id: { type: GraphQLID } },\n resolve(parent, args){\n //return _.find(authors, { id: args.id });\n }\n },\n books: {\n type: new GraphQLList(BookType),\n resolve(parent, args){\n //return books;\n }\n },\n authors: {\n type: new GraphQLList(AuthorType),\n resolve(parent, args){\n //return authors;\n }\n }\n }\n});\n\nconst Mutation = new GraphQLObjectType({\n name: 'Mutation',\n fields: {\n addAuthor: {\n type: AuthorType,\n args: {\n name: { type: GraphQLString },\n age: { type: GraphQLInt }\n },\n resolve(parent, args){\n let author = new Author({\n name: args.name,\n age: args.age\n });\n return (author.save())\n }\n }\n }\n});\n\nmodule.exports = new GraphQLSchema({\n query: RootQuery,\n mutation: Mutation\n})\n\n;\n\n(node:31482) MongoError: (Unauthorized) not authorized on admin to execute command { \ninsert: \"authors\", documents: [[{name gyfdgyiszukjfheusdzyih} {age 88} {_id \nObjectID(\"60af9c682215ea7afad86f4c\")} {__v 0}]], ordered: false, writeConcern: { w:\n\"majority\" }\n\n", "text": "I used mongoose and Graphql to send my queries to the database but for some reason it doesn’t let me create documents. I have tried creating a new user with full admin privileges it hasn’t worked I tried changing the default user password but it didn’t work.I rechecked my mongoose model no errors so what might be the problem.FYI the problem arose with the return (author.save()) and the database connects normallyAlso posted on Stack Overflow:", "username": "saeed_Almulla" }, { "code": "", "text": "Hello @saeed_Almulla, welcome to the MongoDB Community forum!I have tried creating a new user with full admin privileges it hasn’t worked I tried changing the default user password but it didn’t work.The proper procedure to create users with proper access on MongoDB data is to follow the steps from following tutorial. If you had followed it you should be able to do the desired operations. Please verify what you had tried.", "username": "Prasad_Saya" } ]
Not authorized on admin to execute insert
2021-05-27T22:46:02.139Z
Not authorized on admin to execute insert
7,236
null
[ "queries" ]
[ { "code": "groups.findOne( { 'members.username': { $in: [ 'john22', 'david7' ] } } ){ groups: [\n\n {\n\n id:1,\n\n name:\"aliens\",\n\n members:[\n\n {\n\n username:\"john22\",\n\n joineddate:222222222,\n\n removed: false\n\n },\n\n {\n\n username:\"david7\",\n\n joineddate:3333333333,\n\n removed: false\n\n },\n\n {\n\n username:\"william4\",\n\n joineddate:444444444444,\n\n removed: false\n\n },\n\n ]\n\n },\n\n {\n\n id:2,\n\n name:\"stars\",\n\n members:[\n\n {\n\n username:\"john22\",\n\n joineddate:111111111111,\n\n removed: false\n\n },\n\n {\n\n username:\"david7\",\n\n joineddate:111111111111,\n\n removed: false\n\n },\n\n ]\n\n }\n\n] }", "text": "hi everyone,\nFirst of all, my English is not good, I apologize if I am rude.\nI need your help with a query. I tried many things but failed.I’ve written a little database example below.\nI want to find the group whose members match exactly with the given username array.groups.findOne( { 'members.username': { $in: [ 'john22', 'david7' ] } } )As a result, the first group is returning. But there is another user in the first group. Whereas the array matches exactly with the second group. How should I write the query for exact match. Please help me.Thank you", "username": "MUSTAFA_SAHIN" }, { "code": "find$exprfind", "text": "Hello @MUSTAFA_SAHIN, welcome to the MongoDB Community forum!If you are looking for exact match with the elements of an array you need to use $setEquals aggregate operator. To use this operator within the find method’s filter, you use it with another operator $expr. The $expr allows using aggregate operators within the find method.", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you @Prasad_Saya. I will try this. I hope I can.", "username": "MUSTAFA_SAHIN" }, { "code": "", "text": "I tried but it didn’t work. Could you please write a query for the example above? @Prasad_Saya", "username": "MUSTAFA_SAHIN" }, { "code": "", "text": "What did you try? Please post your code.", "username": "Prasad_Saya" }, { "code": "groups.find({$expr:{$eq:['members.username',[ 'john22', 'david7' ]]}})", "text": "I am writing according to the example above.groups.find({$expr:{$eq:['members.username',[ 'john22', 'david7' ]]}})", "username": "MUSTAFA_SAHIN" }, { "code": "$setEquals$eq$setEqualstrue", "text": "I had mentioned about using the $setEqualsoperator. Try it, instead of the $eq. The $setEquals is defined as follows, and it is what you are trying:Returns true if the input sets have the same distinct elements.", "username": "Prasad_Saya" }, { "code": "", "text": "I also need help answering this question. Thank you for asking this question.", "username": "Faruk_AYDIN" }, { "code": "({$expr:{$eq:['members.username',[ 'john22', 'david7' ]]}})", "text": "({$expr:{$eq:['members.username',[ 'john22', 'david7' ]]}})This should have worked - what was the result when you used this?", "username": "Asya_Kamsky" }, { "code": "", "text": "Unfortunately, it didn’t work. The blank result has returned. Thank you for your interest, I still haven’t found the solution for this.", "username": "MUSTAFA_SAHIN" }, { "code": "", "text": "Do you have another idea? 
@Asya_Kamsky", "username": "MUSTAFA_SAHIN" }, { "code": "groups.find({$and:[{'members.username':{$all:arg.userNameArray}},{'members':{$size:arg.userNameArray.length}}]})", "text": "I think I found a solution for this.groups.find({$and:[{'members.username':{$all:arg.userNameArray}},{'members':{$size:arg.userNameArray.length}}]})This gives the results I wanttnx", "username": "MUSTAFA_SAHIN" }, { "code": "({$expr:{$eq:['members.username',[ 'john22', 'david7' ]]}})", "text": "({$expr:{$eq:['members.username',[ 'john22', 'david7' ]]}})This should have worked - what was the result when you used this?It does with $members.username rather than members.username.However it does not work if the order is not the same. That is why $setEquals as hintedIf you are looking for exact match with the elements of an array you need to use $setEquals aggregate operator.is the proper solution.", "username": "steevej" }, { "code": "$all", "text": "I actually think the solution with $all operator that OP eventually find is better here!", "username": "Asya_Kamsky" }, { "code": "id:1 has members with username: \"john22\", \"david7\", \"william4\"\nid:2 has members with username: \"john22\", \"david7\"\ngroups.findOne( { 'members.username': { $in: [ 'john22', 'david7' ] } } )id:2", "text": "I think the OP wants to get the second document as the result.The data (short version):I want to find the group whose members match exactly with the given username array.groups.findOne( { 'members.username': { $in: [ 'john22', 'david7' ] } } )As a result, the first group is returning. But there is another user in the first group. Whereas the array matches exactly with the second group. How should I write the query for exact match.I think, as per the OP’s note above, the solution is to get the exact match ( ‘john22’, ‘david7’ ) would be, to find the document with id:2 only.Well, it turns out there are quite a few operators to deal with when working with matching array data with array data (exact matches, partial matches, etc.). Finally it is a matter of the requirement, not choice.", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Help for a query finding a group whose members match exactly with a given string
2021-05-28T08:46:28.383Z
Help for a query finding a group whose members match exactly with a given string
2,751
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi,What is the best way to design an audit trail log for update and delete operations?1 Collection -> 1 Audit trail collection or all collection -> 1 Audit trail collection?", "username": "Christian_Angelo_15065" }, { "code": "", "text": "@Christian_Angelo_15065,The best way to audit MongoDB databases is with the enterprise audit features\nhttps://docs.mongodb.com/manual/core/auditing/A simpler approach can be using application code with change stream or MongoDB Atlas triggers to be fired on every update delete recording what happened.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Does it also store the user id who edited or deleted the document?", "username": "Christian_Angelo_15065" }, { "code": "", "text": "Auditing does gather users that performed an operation and you can filter on the actual operation:Having said that, be aware that auditing might add some overhead to overall database activity so we do not recommend tracking a high volume operation if its not obsoletely necessary.Pavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "We are making an HRIS system in which we need to log all changes that happened on an employee’s data.", "username": "Christian_Angelo_15065" } ]
Audit trail logs model
2021-05-30T07:20:16.724Z
Audit trail logs model
1,903
null
[ "node-js", "connecting" ]
[ { "code": "MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common \nreason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/\n at NativeConnection.Connection.openUri (D:\\Node-Jonas\\Node-Projects\\node_modules\\mongoose\\lib\\connection.js:828:32)\n at Mongoose.connect (D:\\Node-Jonas\\Node-Projects\\node_modules\\mongoose\\lib\\index.js:335:15)\n at Object.<anonymous> (D:\\Node-Jonas\\Node-Projects\\server.js:21:4)\n at Module._compile (internal/modules/cjs/loader.js:1138:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1158:10)\n at Module.load (internal/modules/cjs/loader.js:986:32)\n at Function.Module._load (internal/modules/cjs/loader.js:879:14)\n at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)\n at internal/main/run_main_module.js:17:47 {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n setName: null,\n maxSetVersion: null,\n maxElectionId: null,\n servers: Map {\n 'cluster0-shard-00-00-g4ttz.mongodb.net:27017' => [ServerDescription],\n 'cluster0-shard-00-01-g4ttz.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n compatibilityError: null,\n logicalSessionTimeoutMinutes: null,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n commonWireVersion: null\n }\n}\n\n", "text": "while trying to connect with mongodb i’m getting this error :i already whitelisted my ip but still getting the problem.also can’t connect to compass. it was all fine even yesterday and so don’t know why all of suuden this problem occured.need help.Thanks", "username": "Md_Sahalan_Hasan" }, { "code": "", "text": "i have same problem ", "username": "ali_bangi" }, { "code": "", "text": "I have the same problem", "username": "Alejandra_Monroy" }, { "code": "", "text": "The IP address seen by Atlas is not necessarily the IP address of your workstations. If behind a router that does NAT or using VPN the address seen is the one of the router or the exit point of the VPN.First try to Allow Access From Anywhere. If it works it is most likely because you whitelisted an IP address that is not your public address. Find your public IP with https://www.whatismyip.com/.If the above does not work then we need more information to help.", "username": "steevej" }, { "code": "", "text": "What additional information can I provide?I also have my IP whitelisted, checked it against the whatismyip, and enabled Allow Access From Anywhere. I still cannot connect.I have also tried resetting the password, making a new user, making a new project/database/user and none of them can connect, either.It is strange because it was working yesterday, I installed regex, did an audit for a red vulnerability alert, and shortly thereafter it stopped and I have not been able to connect since.ETA: I did a stash and the issue does seem to be something that happened when I npm installed a package and did the audit; the file w/strings is in .gitignore but I am able to connect now.", "username": "Aya_Shirai" }, { "code": "", "text": "I am also facing the same problem.", "username": "Ayush_Shalya" }, { "code": "", "text": "Have you checked with MongoDB support? 
Are these issues continuing?", "username": "Andrew_Davidson" }, { "code": "", "text": "well I have faced same problem on Atlas : MongoDB …so than I went to network access section and then edit it again nd set my current id address again …It worked !! I don’t know its permanent solution or what but it may helpful to you.", "username": "Depak_dadlani" }, { "code": "", "text": "thanks @Depak_dadlani this worked for me", "username": "nithin_samudrala" }, { "code": "", "text": "yes worked for me. Thankss", "username": "Mansi_Sharma" }, { "code": "", "text": "I have the same problem, tried all the above but still not working. any other ideas?", "username": "111542" }, { "code": "", "text": "I have same problem. I 'm trying from c-panel, running script to connect atlas server, even though my IP is whitelisted. Can anyone help me on this please…", "username": "Pavan_Naik1" }, { "code": "", "text": "Hi @111542 @Pavan_Naik1 (and anyone else who is having trouble connecting to their cluster),There are many different causes for issues connecting to clusters. For the community to best help you, please start start new discussions topics with more details of your environment including:A good starting point is the documentation guide to Set Up Atlas Connectivity.You may also find this previous forum discussion helpful: Error: couldn't connect to server 127.0.0.1:27017 - #16 by Stennie.Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Can't connect to MongoDB. Could not connect to any servers in your MongoDB Atlas cluster
2020-09-24T19:38:07.663Z
Can’t connect to MongoDB. Could not connect to any servers in your MongoDB Atlas cluster
128,884
null
[ "server", "installation" ]
[ { "code": "", "text": "Hello All,i have installed mongo server on my computer, started the server, it shows ‘Active’, i have developed a spring boot project where i used Mongodb as database, i connected mongodb server from my projrct. i set\nspring.data.mongodb.host=localhost\nspring.data.mongodb.port=27017\nspring.data.mongodb.database=mvdb\nbut when i run this the project from my computer, there EOF error occured.please help me to resolve it", "username": "Alauddin_Tuhin" }, { "code": "mongosh", "text": "Hi @Alauddin_Tuhin and welcome in the MongoDB Community !First thing to determine before running into the Java code is: is the MongoDB server running correctly and can you connect to it using mongosh or the mongo shell?If not, please try to find why and share the mongodb logs if you can’t find the issue.If it’s all running fine and the issue comes from the Java + Spring Boot project, then have a look to this repo and try to compare because mine is connecting just fine. MongoDB Blog Post: REST APIs with Java, Spring Boot and MongoDB - GitHub - MaBeuLux88/java-spring-boot-mongodb-starter: MongoDB Blog Post: REST APIs with Java, Spring Boot and MongoDBCheers,\nMaxime.", "username": "MaBeuLux88" } ]
MongoDB Server Inactive Automatically
2021-05-31T17:45:48.762Z
MongoDB Server Inactive Automatically
3,079
null
[ "replication", "golang" ]
[ { "code": "serverStatusgetParameterClientvar client *mongo.Client\nclient, _ = mongo.Connect(\n ctx,\n options.Client().ApplyURI(\n \"mongodb://repl-1,repl-2,repl-3/?replicaSet=my-repl\"\n )\n\nresult := client.Database(\"admin\").\n RunCommand(\n ctx,\n bson.D{{Key: \"serverStatus\", Value: 1}},\n options.RunCmd().Member(\"repl-1\")\n )\n// do something with result\n", "text": "I’m writing an internal tool to wrap some of the common things we do to our MongoDB replica sets, and as part of that I occasionally find myself wanting to create direct connections to individual servers in the replica set (for example to call the serverStatus command or getParameter command).At the moment I do this by creating extra Clients in addition to the “main” replica-set client, but I was wondering if there’s some way I can use the “main” client to direct commands to a given member?I think I can do this by assigning a tag to each member with its hostname, but I was wondering if there’s already some built in way of doing this that doesn’t involve configuring the replica set some special way.I’d love to be able to do something like this:", "username": "Sarah_Hodne" }, { "code": "readPreference", "text": "Hi @Sarah_Hodne and welcome in the MongoDB Community !I think what you are trying to do is here:You need to tag the members and then use the readPreference option to target the member of your choice with your read operation. I hope this also works for the admin commands you are trying to send.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Direct connections from replica set client
2021-05-31T19:33:34.131Z
Direct connections from replica set client
2,858
null
[ "text-search" ]
[ { "code": "", "text": "Text Search Languages docs have no korean and japanese.\nand your docs says mongodb support utf-8, why not support these languages?", "username": "anlex_N" }, { "code": "", "text": "Hi @anlex_N,Atlas Search is using Lucene in the background and do support both these languages.Use a language analyzer to create search keywords in your Atlas Search index that are optimized for a particular natural language.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "please update your docs: Text Search Languages", "username": "anlex_N" }, { "code": "$text", "text": "Sorry, I think my post was confusing actually now that I read it again.The text index and the $text operator are ONE thing. Atlas Search is ANOTHER new service available only in Atlas that offer way more flexibility and options.The first can be used for simple text search and only supports a few languages. If you want more options, languages, text pre-processing (analyzers, …), etc, have a look to Atlas Search. It’s only available in Atlas because it requires more hardware and software contrary to the first option.", "username": "MaBeuLux88" }, { "code": "", "text": "so postgresql is preferable over mongodb. i hope mongodb community can support these languages.", "username": "anlex_N" }, { "code": "", "text": "@anlex_N Atlas Search (a search engine) is superior to Postgres (not a search engine) in almost every way when it comes to interpreting natural language.Have you tried it out in the free tier?", "username": "Marcus" }, { "code": "", "text": "no, i don’t care about atlas search. because i have no money. i love community, love open source.\ni think the gap between mongo and postgresql will only widen. mongodb community can not support CJK languages, it is really your very big Strategic error. and i can not find your mongodb community roadmap, so sad… but postgresql have roadmap and execute to the letter.", "username": "anlex_N" }, { "code": "", "text": "I care about open source and community, too. That’s why I volunteer my time to work on Apache Lucene, the kernel of Atlas Search. There’s a lot to do in the community. We need help if you have time.If you have no money, the free tier is perfect for you. If you want an inferior user experience, Postrgres for search is the better choice.", "username": "Marcus" } ]
Can MongoDB support Korean and Japanese text search?
2021-03-26T03:37:01.695Z
Can MongoDB support Korean and Japanese text search?
5,609
null
[ "aggregation" ]
[ { "code": "", "text": "We have an aggregation pipeline that takes roughly 15-20 minutes to fully complete. During this pipeline we are calculating aggregated values based on the properties on the documents. In the event a value on one of these documents is changed via an update from another process, how does that affect the results of the pipeline currently in progress?In general when programming objects use references to each other. In the event this object changes, all references to the object are affected. This is why we have to worry about Deep vs Shallow copies in some programming languages.So my question is; if during this 10 minute aggregation pipeline a document is changed and its values updated, does that change the results that we WOULD have gotten if the document itself had not been changed?I hope that makes sense.", "username": "Wyatt_Baggett" }, { "code": "", "text": "Hi @Wyatt_Baggett,Simple question but not so easy to answer .I guess it’s theoretically possible to read from a consistent snapshot if you are using the read concern “snapshot” in a multi-document transaction which is itself part of a causal consistent session… But multi-document transactions are limited to 60s by default so not very useful in your case, unless you increase transactionLifetimeLimitSeconds to some value > 20 min… but running such transactions would generate a non-negligible cache pressure I guess on the mongod server which would have to keep in memory all the different snapshots and it’s most probably not worth it.So my simple answer is “yes”. The final value will be impacted if some write operations are interfering with data used in the aggregation during its execution.That being said, once you have passed the first few (match and sort) steps of your aggregation, your mongod is then working on an in-memory copy of your documents so at that point, your final result wouldn’t be impacted anymore I guess as the copy can’t possibly be in sync with the initial document.My little finger is also telling me that you should keep an eye on the MongoDB 5.0 version which should come out in a couple of months now and COULD contain a solution to this problem…You didn’t hear that from me.Cheers,\nMaxime.PS: Dear judge, this post can’t be a proof. I didn’t write this. I swear. It wasn’t me. I was at the cinema at that time.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Updates to documents in aggregation pipelines affecting results?
2021-05-11T22:44:56.806Z
Updates to documents in aggregation pipelines affecting results?
1,695
null
[ "react-native" ]
[ { "code": "", "text": "does anyone have an example of how to use realm mongodb with react native?", "username": "Samuele_Cervietti" }, { "code": "", "text": "We have an example here - https://docs.mongodb.com/realm/tutorial/react-native/", "username": "Ian_Ward" } ]
Realm MongoDB: how to use it with React Native?
2021-05-31T20:25:19.785Z
Realm MongoDB: how to use it with React Native?
1,854
null
[]
[ { "code": "", "text": "Atlas documentation says, “Atlas only charges paused clusters for storage. Atlas does not charge for any other services or data transfer on paused clusters.”Well, could anyone in the community please help me in estimating the storage charges exclusively? I want to estimate the price of a cluster that is currently paused, but I haven’t found any references for this; please help:)Thanks!", "username": "SwamyNaidu_Ch" }, { "code": "", "text": "Hi @SwamyNaidu_Ch and welcome in the MongoDB Community !It depends on the storage only. The exact price per hour is indicated when you try to pause a cluster:image693×664 52 KBThis is an example for an M10 cluster on AWS eu-west-1 with storage size set to 20GB.In my case here, the price is divided by 4.166 and instead of costing 70.13$/month is will now cost 17.28$/month approximately.I have another M20 that also runs with 20GB storage space and in this case, I’m down from 0.22 => 0.024 again.I hope this helps a bit.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks @MaBeuLux88 this is exactly what I need", "username": "SwamyNaidu_Ch" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Price of a paused Cluster
2021-05-28T12:30:28.794Z
Price of a paused Cluster
4,390
null
[ "aggregation" ]
[ { "code": "totalholes{\n \"compId\": \"607019361c071256e4f0d0d5\",\n \"courseId\": \"608952e3abebbd503ba6e115\",\n \"playerId\": \"609d0993906429612483cea0\",\n \"holes\": {\n \"1\": 2,\n \"2\": 1,\n \"3\": 2,\n \"4\": 2,\n \"5\": 2,\n \"6\": 2,\n \"7\": 0,\n \"8\": 2,\n \"9\": 3\n },\n \"playerName\": \"Tiger Woods\",\n \"total\": 16\n}\nholesgroup by$sum[{\n $match: {\n \"playerName\": \"Dan Burt\"\n }\n}, {\n $group: {\n _id: \"$playerName\",\n \"total\": {\n $sum: \"$total\"\n }\n }\n}]\n{\n \"_id\" : \"Tiger Woods\",\n \"total\" : 45\n}", "text": "If I use the Aggregate Framework, just to do a simple “group by” function, how do I retain the original document contents? Is there a preferred practice?In my case I have a collection of golf scores, such as (note, total equals the sum of the holes array values):Tiger may have multiple documents, for multiple rounds that I want to sum the total, but I also want to show all the holes (iterate over the holes sub-array).If I apply a simple group by function, it only returns the $sum'ed total.Returns:", "username": "Dan_Burt" }, { "code": "$push$$ROOT", "text": "You can use $push and create an array of whatever fields you want to keep. You can use $$ROOT to keep the entire original document but it’s not really a good idea when you know which fields you actually will need - it’s better to just push those fields then.Asya", "username": "Asya_Kamsky" }, { "code": "holescourseId{\n \"_id\" : \"Tiger Woods\",\n \"total\" : 45\n \"holes\" : {\n \"course1\" : { ... },\n \"course2\" : { ... },\n \"courseN\" : { ... }\n }\n}", "text": "is this the normal / preferred method?I can try some testing scenarios tonight, but can you indicate how I would do it with documents structured as above, so that the holes array in each document would be preserved?They would actually be tagged against the courseId value, so would that become an array key?Something like:", "username": "Dan_Burt" }, { "code": "holes$push{\n _id: \"$playerName\",\n total: {\n $sum: \"$total\"\n },\n rounds: { $push: \"$holes\" }\n}\ncourseId$groupkeyvalue", "text": "So I have been able to add in the holes array using a $push as follows:Which returns a document as:Can I put the courseId from each of the source documents (from before the $group) so I can reference it later - basically so my PHP page can loop through these and add the scores to the correct tables.Ideally, it would look like what I posted in my previous thread. Is this just posting the array key as well as the value?", "username": "Dan_Burt" }, { "code": "", "text": "Bump! Any one with suggestions?", "username": "Dan_Burt" }, { "code": "rounds: { $push: \"$holes\" }rounds: { $push: {course:\"$courseId\", holes: \"$holes\" } }\nholes", "text": "rounds: { $push: \"$holes\" }You would do the same thing but adding a subdocument:Alternatively, you could group by course pushing holes - then if you want a single document per player you can group again pushing course which now has holes array… Not sure exactly the format you want but you can definitely get it with some permutation of these techniques.", "username": "Asya_Kamsky" } ]
Group By but retaining the original document contents
2021-05-28T15:56:50.688Z
Group By but retaining the original document contents
9,351
null
[ "aggregation" ]
[ { "code": "db.players.insertMany([\n { _id: 1, name: \"Miss Cheevous\", scores: [ 10, 5, 10 ] },\n { _id: 2, name: \"Miss Ann Thrope\", scores: [ 10, 10, 10 ] },\n { _id: 3, name: \"Mrs. Eppie Delta \", scores: [ 9, 8, 8 ] }\n]); db.players.find( {$expr: { $function: {\n body: function(name) { return hex_md5(name) == \"15b0a220baa16331e8d80e15367677ad\"; },\n args: [ \"$name\" ],\n lang: \"js\"\n} } } ); \nError: error: {\n\"ok\" : 0,\n\"err msg\" : \"Unrecognised expression '$function'\"\",\n\"code\" : 168,\n\"codeName\" : \"InvalidPipeline Operator\" \n}\n", "text": "Above is the error while making query on collection , ‘players’.", "username": "Arindam_Biswas2" }, { "code": "db.players.finddb.players.aggregate()", "text": "Hi Arindam,You can only use the $function aggregator within an $aggregation expression. So instead of db.players.find try: db.players.aggregate().I wrote a blog post that goes into it a lot more that you can find here: https://www.mongodb.com/how-to/use-function-accumulator-operators/ ", "username": "ado" }, { "code": "", "text": "ok, I get the same output with 1. $where and $function 2. $expr and $function operator which are used for querying the collection.\nI have another query. - how to add md5 hash value in mongodb collections?\nIn the above example hex_md5(name) == “15b0a220baa16331e8d80e15367677ad”\nHow do we get this 32 digits hexadecimal number?", "username": "Arindam_Biswas2" }, { "code": "db.players.insertMany([\n { _id: 1, name: \"Miss Cheevous\", scores: [ 10, 5, 10 ] },\n { _id: 2, name: \"Miss Ann Thrope\", scores: [ 10, 10, 10 ] },\n { _id: 3, name: \"Mrs. Eppie Delta \", scores: [ 9, 8, 8 ] }\n]); db.players.find( {$expr: { $function: {\n body: function(name) { return hex_md5(name) == \"15b0a220baa16331e8d80e15367677ad\"; },\n args: [ \"$name\" ],\n lang: \"js\"\n} } } ); \n { \"_id\" : 2, \"name\" : \"Miss Ann Thrope\", \"scores\" : [ 10, 10, 10 ] }\n$function", "text": "When I run this I get back:Is it possible you’re not connecting to the correct version of MongoDB? $function was introduced in 4.4.", "username": "Asya_Kamsky" }, { "code": "", "text": "I get your point. But , still you didn’t tell me the specification for incorporating hex_md5 method in MongoDB Shell.\nHow do you get hex_md5 = “15b0a220baa16331e8d80e15367677ad” ? Any hexadecimal 32 digits can not be used instead.", "username": "Arindam_Biswas2" }, { "code": "$functionmd5md5$functionaggregatefind > db.players.find( {}, {name:1, md5: { $function: { body: function(name) { return hex_md5(name); }, args: [ \"$name\" ], lang: \"js\" } } } );\n{ \"_id\" : 1, \"name\" : \"Miss Cheevous\", \"md5\" : \"5e89555bdf7601c64b5f7984a76bf7a0\" }\n{ \"_id\" : 2, \"name\" : \"Miss Ann Thrope\", \"md5\" : \"15b0a220baa16331e8d80e15367677ad\" }\n{ \"_id\" : 3, \"name\" : \"Mrs. Eppie Delta \", \"md5\" : \"fe698b4989844ab8d786ad263c5d7350\" }", "text": "Your example code uses $function in the query predicate to test for equality with md5. If you want to return md5 then you need to use $function in the projection part of the query to create the new field. You can do this with aggregate command in 4.2 or if you are already on 4.4 you can just use aggregation expressions in find. Using the same sample data:", "username": "Asya_Kamsky" } ]
$function(aggregation)
2021-03-16T06:43:02.545Z
$function(aggregation)
3,169
null
[ "android" ]
[ { "code": "User user = ClientApp.currentUser();\n\nFunctions functionsManager = ClientApp.getFunctions(user);\nList<String> args = Arrays.asList(\"[email protected]\");\n\nfunctionsManager.callFunctionAsync(\"GetClient\", args, String.class, result -> {\n if (result.isSuccess()) {\n Log.v(\"Collection Found\", \"La collection est trouvée : \" + result.get());\n UserContent = result.get();\n } else {\n Log.e(\"Collection Not Found\", \"La collection n'est pas trouvée : \" + result.getError());\n }\n});\n\nmText.setValue(UserContent);\n}\n", "text": "Hi everybody,I work on a student project and discover the MongoDB world (I go back to school at 35 years old !!! :D)After build a simple android app and finally connect my App to Realm through email/password authentification, I would like to show a part of an Atlas collection.\nAt the end, it will be to show just a collection’s array but for begin, and understand how it work … show a collection should be good.But …of course … it doesn’t work and I don’t understand why.I have a collection called “consumer” with 2 consumers inside.\nconsumer 1 & 2\n- _Id\n- email\n- passwordI would like to show one of the two consumers when i give the email adress.\nFor this, I wrote :public HomeViewModel() {\nmText = new MutableLiveData<>();This code call the following function :exports = function(arg){\nlet collection = context.services.get(“4proj”).db(“4projDB”).collection(“consumer”);\nreturn collection.findOne({email: arg});\n};And I have the following error message :E/Collection Not Found: La collection n’est pas trouvée : BSON_DECODING(realm::app::CustomError:1102): Error decoding value {“value”:null}\norg.bson.BsonInvalidOperationException: readString can only be called when CurrentBSONType is STRING, not when CurrentBSONType is NULL.\nat org.bson.AbstractBsonReader.verifyBSONType(AbstractBsonReader.java:690)\nat org.bson.AbstractBsonReader.checkPreconditions(AbstractBsonReader.java:722)\nat org.bson.AbstractBsonReader.readString(AbstractBsonReader.java:457)\nat org.bson.codecs.StringCodec.decode(StringCodec.java:39)\nat org.bson.codecs.StringCodec.decode(StringCodec.java:28)\nat io.realm.internal.jni.JniBsonProtocol.decode(JniBsonProtocol.java:87)\nat io.realm.mongodb.FunctionsImpl.invoke(FunctionsImpl.java:65)\nat io.realm.mongodb.functions.Functions$1.run(Functions.java:146)\nat io.realm.internal.mongodb.Request$1.run(Request.java:57)\nat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)\nat java.util.concurrent.FutureTask.run(FutureTask.java:266)\nat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)\nat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)\nat java.lang.Thread.run(Thread.java:923)I suppose that the return is not String, but I’m not really sure.Do someone may help me to understand ?", "username": "Florian_Vigier" }, { "code": "4proj4projDBconsumer", "text": "Hi @Florian_Vigier,I notice that you’re getting this error:E/Collection Not Found: La collection n’est pas trouvéeJust to confirm – is your MongoDB service in your Realm app named 4proj and does the associated cluster contain a database named 4projDB which in turn contains a collection named consumer?If you run the Realm function directly from the Realm UI, does it produce the expected results?", "username": "Andrew_Morgan" }, { "code": "", "text": "Hi Andrew,Thanks for your answer and your time.For be more cluear :\nMy Realm app is nammed “ClientApp”\n“4proj” is my Atlas Cluster.\nInside this cluster, I 
have a collection named “consumer”.So i would like say : YES !If I run manually this function trough the Realm UI, and just add argument value by export command, the function work well.\nI did the same function which modify an account in the consumer collection. This function work well also through Realm UI but not when i try to call it with android program.I guess the issue is on the Android code and not on the function code.", "username": "Florian_Vigier" }, { "code": "", "text": "Thanks @Florian_Vigier.@Mohit_Sharma could you please take a look at the Android code?", "username": "Andrew_Morgan" }, { "code": "", "text": "@Andrew_Morgan: Thanks for tagging in.@Florian_Vigier: Would it be possible for you to share your GitHub public repo url with us, so that I can have look and get back to you.", "username": "Mohit_Sharma" }, { "code": "", "text": "Hello @Mohit_Sharma,Thank you for your answer and your proposition.Of course it’s possible.So please find below my GitHub link.ClientApp. Contribute to FlorianVigier/ClientApp development by creating an account on GitHub.As it’s completely new for me to work on an Android application, work with a database and even work in Javascript … feel free to tell me what’s not good. It will be every time helpfull.", "username": "Florian_Vigier" }, { "code": "", "text": "@Florian_Vigier: Thanks for sharing the same, had a quick look at the project but couldn’t find the above code in the repo, therefore can’t help.Although I have raised a PR for fixing an issue that I found and had to fix it in order to run & validate the app.", "username": "Mohit_Sharma" }, { "code": "", "text": "It’s normal that you couldn’t find the code shared before … this code is a Realm Function declared on the Realm web site.image963×1072 52.1 KBIn this case, the function name is GetClient and have just to show a result.In the same way, this function have to edit a user.\nimage683×529 17.1 KBBoth doesn’t work when called but the application but work well by the UI.Is it enough to help me ? if not a can try to give you access to the project directly since it’s just a scool work with nothing confidential !!! Ps: Thanks for the fix. I will have a look this evening.", "username": "Florian_Vigier" }, { "code": "", "text": "For be simple, I would like to do the same as https://docs.mongodb.com/realm/sdk/android/examples/call-a-function/#call-a-function-by-nameSo first, I do the same as the example, copy the code even if it’s a little bit stupid and create a function called “sum”The function :\n\nimage661×599 22.6 KB\nBut when I call the function, it doesn’t work since it don’t find the function even if the function exist.My debug mod > show the function which has been called but seems to doesn’t exit.\n\nimage1366×575 74.1 KB\n", "username": "Florian_Vigier" }, { "code": "", "text": "@Florian_Vigier: I just tried the same thing and works fine on my end. 
Can you check once in the settings of sum function whether it’s private or not?", "username": "Mohit_Sharma" }, { "code": "", "text": "@Mohit_Sharma : All the function I tried are “public”.May you share a simple example ?\nBy this way I would find (I hope) a difference and by the way my mistake.", "username": "Florian_Vigier" }, { "code": "", "text": "@Florian_Vigier: You can have a look at this, You can have a look at this, GitHub - mongodb-developer/HelloRealmFunction", "username": "Mohit_Sharma" }, { "code": "", "text": "Hi,Sorry for the delay, I had some exams to pass.\nThank you for the example.I find what’s wrong on my app > 2 things\n1/ For each call, it seems it’s necessary to say to the app it’s the currentUser who call the function even after a successful login\n2/ I had to create a new User user = App.currentUser since I already have an user = AtomicReference which is not compatible.So finally it work well.Thank you so much for your help.", "username": "Florian_Vigier" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Show collection content with a function
2021-04-19T14:06:14.891Z
Show collection content with a function
4,449
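A note on the thread above: the BSON_DECODING error shown there is what the Realm Java SDK reports when it tries to decode a function result as String.class but the function returned null (or a document) instead of a string. A minimal sketch of a Realm function that always returns a string, so the String codec on the Android side can decode it; the service, database and collection names are taken from the thread, and the "not found" message for the null case is an assumption:

    exports = function(arg) {
      const collection = context.services.get("4proj").db("4projDB").collection("consumer");
      // findOne() resolves to null when no document matches, which the String codec cannot decode
      return collection.findOne({ email: arg }).then(doc => {
        if (doc === null) {
          return "not found";
        }
        // JSON.stringify() turns the document into a plain string the Android side can read
        return JSON.stringify(doc);
      });
    };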
null
[ "server", "installation" ]
[ { "code": "", "text": "I have installed mongodb-community4.4 via homebrew. I can make mongodb.community start and stop succefully however when I run brew services the status is ‘error’.Name Status User Plist\[email protected] error jbs /Users/jbs/Library/LaunchAgents/[email protected]", "username": "Jacob_Shortman" }, { "code": "", "text": "Check mogod.log.It may give more details on why it is failing\nIt could be permissions issues\nHow are you stop/starting service\nas sudo or normal user?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Okay so my work around which seems to work is simply:Download the mongodb tgz. file. Initiate, with sudo authority, mongod using a path to the location of the download/bin and a --dbpath to a local file location. Something like:sudo /Users/…/mono1/bin/mongod --dbpath /Users/…/mono1/dataThen I can start mongo and it connects to the local host fine. The only issue is memory. When my local drive fills up the server gets interrupted and mongo looses connection to the host. I assume the only solution is an external hard drive?", "username": "Jacob_Shortman" }, { "code": "", "text": "@Jacob_Shortman\nCheck the mongodb log for error. If it says space issue then definitely you need to add external hard drive.Thanks\nBraj Mohan", "username": "BM_Sharma" } ]
[email protected] status error whilst trying to run
2021-05-30T17:49:04.786Z
[email protected] status error whilst trying to run
6,254
null
[ "connecting", "php" ]
[ { "code": " try {\n $DB_CONNECTION_STRING=\"mongodb://localhost:27017\";\n $con = new Client($DB_CONNECTION_STRING);\n $db = $con->test;\n $collection = $db->tester;\n $collection->insertOne(['name'=>'Tom', 'email'=>'[email protected]']);\n }\n catch (\\Exception $e) {\n print_r($e->getMessage());\n } \nserverSelectionTryOnce", "text": "Hi,I’m using the PHP driver for the first time and get this error message. I try to get a connection from a php based host to a mongoDB container.\nI’ve installed MongoDB extension version > 1.8. The connection to MongoDB through the MongoDB Client seems to be ok (maybe it’s not), but the insertOne function doesn’t work.\nDon’t know if something is missing. There are some infos here according this error message, but none of them worked in my case. Would be very grateful for help.Here’s the code snippet, I use for testing the connection:The output is here:No suitable servers found (serverSelectionTryOnce set): [connection refused calling ismaster on ‘localhost:27017’]And here is some information from the phpinfo:", "username": "M_W" }, { "code": "insertOnelocalhost:27017mongo mongodb://localhost:27017", "text": "When the client is created, no connection to the server is established, which is why you see the failure only when you run insertOne. The exception you’re seeing is the result of the driver trying to find a server to send this command to, which it can’t because nobody is listening on localhost:27017. You mentioned you want to connect “to a mongoDB container”, are you sure the connection string is correct?To confirm that you are trying to connect to the right server by calling the mongo shell: mongo mongodb://localhost:27017. I would expect this to fail as well, so you’ll have to find where MongoDB is listening on your PHP host and use the correct connection string. I can’t help you without more information on your setup, so please do elaborate on how you run your setup if you need more help.", "username": "Andreas_Braun" }, { "code": "", "text": "Thank you very much, Andreas. That was the hint I needed (had already suspected that the error was caused by this). After reconfiguring the container/network settings (also changing the connection string) i can connect.", "username": "M_W" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
No suitable servers found: PHP error
2021-05-31T08:09:39.239Z
No suitable servers found: PHP error
9,580
null
[]
[ { "code": "import React, { FC, useEffect, useRef, useCallback } from 'react'\nimport ChartsEmbedSDK from \"@mongodb-js/charts-embed-dom\"\nimport { Chart } from '../Models/Chart'\n\n// Fill interface with values. The baseURL and chartID are retrieved from MongoDB.\nconst ChartElement: FC<Chart> = ({ name, baseUrl, chartId, width, height, startDatum, eindDatum}) => {\n const refChartPosts = useRef(null);\n const sdk = new ChartsEmbedSDK({\n baseUrl: baseUrl,\n });\n\n const chart = sdk.createChart({\n chartId: chartId,\n showAttribution: false,\n });\n\n// Rendering the graph.\n const renderChart = useCallback(async (ref) => {\n try {\n var start = Date.now();\n await chart.render(ref);\n var end = Date.now();\n var time = new Date(end - start);\n console.log(\"rendered chart \" + \"'\" + name + \"'\" + \" after \" + time.getSeconds() + \" seconden\")//logging details\n } catch (e) {\n console.error(e);\n }\n\n }, []);\n // Returning reference.\n const setRefChart = useCallback(\n (ref) => {\n if (ref) {\n renderChart(ref);\n console.log(Date() + \"setting reference chart \" + \"'\" + name + \"'\" + \"...\")//logging details\n }\n refChartPosts.current = ref;\n },\n [renderChart]\n );\n\n \n // React thing where it is checked whether the start or end date has been changed.\n // Then filterchart is called\n useEffect(() => {\n filterChart(startDatum, eindDatum)\n }, [startDatum, eindDatum]);\n \n // useCallback causes the chart to be filtered after rendering\n const filterChart = useCallback((startDate, endDate) => {\n try {\n chart.setFilter({ createdAt: { \"$gte\": new Date(startDate), \"$lte\": new Date(endDate) } }); // gte: greater than / equal to, lte: less than / equal to\n } catch (e) {\n console.error(e);\n }\n }, []);\n\n // Graph is created in html with calling the graph.\n return (\n <>\n <div id={name} ref={setRefChart} style={{ width: width, height: height }}></div>\n </>\n );\n};\n\nexport default ChartElement;\n", "text": "Hi,We are building a Charts dashboard with javaSDK embedding in a React application using Typescript.\nWe already created a filter that filters our charts on a selected period of time. Our next goal is to filter a Lookup field that we are using. This raises the following questions:is it possible to create a filter on a lookup field from MongoDB Charts?is it possible to put a aggregation pipline query in a filter for embedding a chart?For your information: In de code below we’re creating our component and rendering our chart to embed it in the application.", "username": "Ruben" }, { "code": "", "text": "Hi Ruben, I think you could make use of Aggregation Pipeline. It is possible to query a pipline field. See https://docs.mongodb.com/manual/core/aggregation-pipeline/", "username": "Wilbert_Gotink" }, { "code": "", "text": "Thanks Wilbert. I was looking for this. ", "username": "Mario_Yip" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Creating a filter containing a lookupfield with JavaSDK embedding
2021-04-21T08:08:23.677Z
Creating a filter containing a lookupfield with JavaSDK embedding
2,200
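Following the aggregation-pipeline suggestion in the thread above, a filter on a looked-up field generally belongs in the chart's own data-source pipeline rather than in an embedding filter. A minimal sketch of such a pipeline; the collection and field names here are invented for illustration, not taken from the thread:

    [
      {
        "$lookup": {
          "from": "departments",
          "localField": "departmentId",
          "foreignField": "_id",
          "as": "department"
        }
      },
      { "$unwind": "$department" },
      { "$match": { "department.name": "Sales" } }
    ]

With the lookup done in the chart's pipeline, the resulting flattened field can then be used like any other field when building or filtering the chart.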
null
[ "migration" ]
[ { "code": "", "text": "Hi Team,Can i get any help in migration 500-700 GB data from GCP to Azure , what is the best option i can choose?Thanks\nRajaram A\n733888180", "username": "Ayyadevara_Rajaram" }, { "code": "", "text": "Welcome to the MongoDB Community @Ayyadevara_Rajaram!Your options will depend on the type of deployment you have (standalone, replica set, or sharded cluster) but assuming you are migrating a cluster between the same version of MongoDB in both cloud services, some common approaches are:Most users want to avoid downtime and choose one of the first two approaches.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Migration from GCP to AZure
2021-05-29T09:40:27.551Z
Migration from GCP to AZure
4,346
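To illustrate the low-downtime style of migration referred to in the reply above (growing the existing replica set into the destination cloud before retiring the original members), a mongo shell sketch; the hostname and the member index are placeholders, not values from the thread:

    // Add the new Azure-hosted node as a non-voting, priority-0 member so it can
    // perform its initial sync without affecting elections
    rs.add({ host: "azure-node-1.example.net:27017", priority: 0, votes: 0 })

    // Once it has caught up, promote it (adjust the index to match your config),
    // repeat for the remaining nodes, then remove the original GCP members
    cfg = rs.conf()
    cfg.members[3].priority = 1
    cfg.members[3].votes = 1
    rs.reconfig(cfg)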
null
[ "aggregation" ]
[ { "code": " {\n \"department_id\" : 1, \n \"title\" : \"Department1\",\n \"departmentType_id\" : 11\n }\n {\n \"department_id\" : 2, \n \"title\" : \"Department2\",\n \"departmentType_id\" : 12\n }\n {\n \"department_id\" : 3, \n \"title\" : \"Department3\",\n \"departmentType_id\" : 13\n }\n {\n \"department_id\" : 4, \n \"title\" : \"Department4\",\n \"departmentType_id\" : 14\n }\n {\n \"department_id\" : 5, \n \"title\" : \"Department5\",\n \"departmentType_id\" : 15\n }\n {\n \"department_id\" : 6, \n \"title\" : \"Department6\",\n \"departmentType_id\" : 16\n }\n {\n \"department_id\" : 7, \n \"title\" : \"Department7\",\n \"departmentType_id\" : 17\n }\n{\n \"departmentDepartmentRelationshipId\" : 100,\n \"relationshipTitle\" : \"Department1 is parent of Department2\",\n \"department1_id\" : 1,//reference of department with department_id: 1\n \"department2_id\" : 2,//reference of department with department_id: 2\n \"rootParentDepartment_id\" :1,//reference of department with department_id: 1\n \"rootParentDepartment_id\" : 1,//reference of department with department_id: 1\n}\n{\n \"departmentDepartmentRelationshipId\" : 200,\n \"relationshipTitle\" : \"Department2 is parent of Department3\",\n \"department1_id\" : 2,//reference of department with department_id: 2\n \"department2_id\" : 3,//reference of department with department_id: 3\n \"rootParentDepartment_id\" :1,//reference of department with department_id: 1\n \"rootParentDepartment_id\" : 1,//reference of department with department_id: 1\n}\n{\n \"departmentDepartmentRelationshipId\" : 210,\n \"relationshipTitle\" : \"Department2 is parent of Department7\",\n \"department1_id\" : 2,//reference of department with department_id: 2\n \"department2_id\" : 7,//reference of department with department_id: 7\n \"rootParentDepartment_id\" :1,//reference of department with department_id: 1\n \"rootParentDepartment_id\" : 1,//reference of department with department_id: 1\n}\n{\n \"departmentDepartmentRelationshipId\" : 300,\n \"relationshipTitle\" : \"Department3 is parent of Department4\",\n \"department1_id\" : 3,//reference of department with department_id: 3\n \"department2_id\" : 4,//reference of department with department_id: 4\n \"rootParentDepartment_id\" :1,//reference of department with department_id: 1\n \"rootParentDepartment_id\" : 1,//reference of department with department_id: 1\n}\n{\n \"departmentDepartmentRelationshipId\" : 400,\n \"relationshipTitle\" : \"Department4 is parent of Department5\",\n \"department1_id\" : 4,//reference of department with department_id: 4\n \"department2_id\" : 5,//reference of department with department_id: 5\n \"rootParentDepartment_id\" :1,//reference of department with department_id: 1\n \"rootParentDepartment_id\" : 1,//reference of department with department_id: 1\n}\n{\n \"departmentDepartmentRelationshipId\" : 500,\n \"relationshipTitle\" : \"Department4 is parent of Department6\",\n \"department1_id\" : 4,//reference of department with department_id: 4\n \"department2_id\" : 6,//reference of department with department_id: 6\n \"rootParentDepartment_id\" :1,//reference of department with department_id: 1\n \"rootParentDepartment_id\" : 1,//reference of department with department_id: 1\n}\n{\n \"_id\" : ObjectId(\"60af33d948d800a1315e96f6\"),\n \"departmentDepartmentRelationshipId\" :100,\n \"childs\" : [ \n {\n \"departmentDepartmentRelationshipId\" : 200,\n \"childs\" : [ //child could by more than one\n {\n \"departmentDepartmentRelationshipId\" : 300,\n \"childs\" : [//child 
could by more than one\n {\n \"departmentDepartmentRelationshipId\" : 400,\n \"childs\" : []\n }\n {\n \"departmentDepartmentRelationshipId\" : 500,\n \"childs\" : []\n }\n\n ]\n },\n {\n \"departmentDepartmentRelationshipId\" : 210,\n \"childs\" : []\n }\n ]\n }\n ]\n}\n", "text": "I have a MongoDB collection Department as documents with the following format:I have a MongoDB collection departmentDepartmentRelationship as documents with the following format:Now my model where i define hierarchical structure is:is it possible to fetch above hierarchical model in a single query and instead of its references i want departmentDepartmentRelationship JSON object from departmentDepartmentRelationship collection ?\ni’m an absolute beginner in mongoDB , i would be very thankful if anyone can help me.", "username": "MANZAR_ABBAS" }, { "code": "", "text": "Hi @MANZAR_ABBAS,What you are doing looks a bit SQLish to me.Have a look to this doc:Maybe something will help?Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
How can i fetch hierarchical data with their refrences in mongodb?
2021-05-30T22:05:10.658Z
How can i fetch hierarchical data with their refrences in mongodb?
3,123
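For the hierarchy question above, $graphLookup is the usual single-query tool for walking parent/child references like these, although it returns a flat list of matched relationship documents rather than the nested shape shown, so the tree itself is normally rebuilt in application code. A sketch against the collections described in the thread (collection names assumed to be departmentDepartmentRelationship and Department):

    db.departmentDepartmentRelationship.aggregate([
      { $match: { departmentDepartmentRelationshipId: 100 } },
      {
        // Recursively follow child departments: each relationship's department2_id
        // is matched against the department1_id of other relationships
        $graphLookup: {
          from: "departmentDepartmentRelationship",
          startWith: "$department2_id",
          connectFromField: "department2_id",
          connectToField: "department1_id",
          as: "descendantRelationships",
          depthField: "level"
        }
      },
      {
        // Optionally pull in the referenced department documents as well
        $lookup: {
          from: "Department",
          localField: "descendantRelationships.department2_id",
          foreignField: "department_id",
          as: "descendantDepartments"
        }
      }
    ])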
https://www.mongodb.com/…6_2_1024x259.png
[]
[ { "code": "", "text": "Hi forum,I want to create a chart to look up the total registered users per month per ID.\nWhy does the cumulative option disappear when adding an item to series?In the image below, you can see on the left that it works…when it’s just 1 ID. The image on the right is what I have for now.\nHow can I add multiple lines (multiple ID’s) that are also showing their cumulative values per month.aaaa1844×467 116 KBThanks in advance!Mario", "username": "Mario_Yip" }, { "code": "", "text": "Hi Mario -This is a limitation of the current version of the product. We didn’t have time to get this working with multiple series, but we will try to get to this before too long.Tom", "username": "tomhollander" }, { "code": "", "text": "Thanks Tom. Do you have an indication of when this new feature will be live?", "username": "Mario_Yip" }, { "code": "", "text": "At this stage we don’t have a timetable for doing this, but it is on our backlog.Tom", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Why does the cumulative option disappear when adding item to series
2021-05-26T07:49:42.530Z
Why does the cumulative option disappear when adding item to series
1,969
null
[ "dot-net" ]
[ { "code": "System.Reflection.TargetInvocationException : Exception has been thrown by the target of an invocation.\n ----> System.IO.FileLoadException : Could not load file or assembly 'MongoDB.Driver.Legacy, Culture=neutral, PublicKeyToken=null'. Invalid pointer (0x80004003 (E_POINTER))\n", "text": "We are trying to upgrade a project to .net core 3.1 and also needed to update the MongoDb drivers that we use.\nUpdated the version to 2.12.2 and this error started to happen:I looks like the ClientDocumentHelper.CreateDriverDocument method was change a while ago that now checks for the legacy driver, however that throws exceptions in certain test projects.\nIt looks to start fine in the actual component and unit test but we also have some Specflow tests running and there we cannot get past this exception.\nAnyone any clue what to do or would could cause this?", "username": "Willem_Peters" }, { "code": "", "text": "Hi !Have you found a solution for this ?Same problem for us, trying to run specflow tests with webapifactory including mongoclient loading.", "username": "Alexis_Bachelet" }, { "code": "", "text": "We ended up having to add a package reference to the mongocsharpdriver package, could not solve it any other way.", "username": "Willem_Peters" } ]
C# MongoDB.Driver .net core 3.1 exception
2021-05-06T15:45:03.087Z
C# MongoDB.Driver .net core 3.1 exception
4,452
null
[ "crud" ]
[ { "code": "{\n { \"_id\" : \"628073739057102858\",\n \"company\" : {\n \"name\" : \"Unnamed Company\",\n \"funds\" : 10000,\n \"employees\" : [ \n {\n \"name\" : \"Owen\",\n \"gender\" : \"Male\",\n \"pay\" : 1378,\n \"trait\" : \"Independent\",\n \"ips\" : \"14.0\",\n \"sales\" : 0\n }\n ],\n \"stats\" : {\n \"max_employees\" : 2,\n \"experience\" : 0,\n \"totalincome\" : 0,\n \"sales\" : 0\n },\n },\n \"__v\" : 0\nsalesemployees.forEach( person => {\n if (person.name == \"Owen\"){\n // Do Something\n }\n })\n", "text": "I have this data structure}I was wonder how I could access the object inside of employees, such as Owen and set one of it’s properties.For example I would like to set his sales properties to a different value, maybe something like 5.Normally if this was in code I would just loop through the employees array to find the object property, for example:However I have no clue how to achieve this as a MongoDB function. As normal I tried code snippets from stack-overflow and tried to skim through the documentation however I don’t seem to have any luck. Any help would be much appreciated.", "username": "eitzen_N_A" }, { "code": "db.coll.update({}, { $set: { \"employees.$[elem].sales\": 2 } }, { arrayFilters: [ { \"elem.name\": \"Owen\" } ]})\n", "text": "Hi @eitzen_N_A,You will need to use array filters update with $[] opertors :\nhttps://docs.mongodb.com/manual/reference/operator/update/positional-all/#update-nested-arrays-in-conjunction-with----identifier--Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "You’re a life saver, thanks Pavel!!", "username": "eitzen_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Update property of a nested object inside an array
2021-05-28T17:10:27.777Z
Update property of a nested object inside an array
6,037
null
[ "indexes" ]
[ { "code": "", "text": "can create multiple indexes together with foreign key relations?", "username": "Harish_Vagjiyani" }, { "code": "", "text": "Hi @Harish_Vagjiyani , welcome to the community.\nI am not really sure but are you looking for something like: Compound Indexes.\nCan you please explain a little more about what you are trying to achieve?Also, I would recommend you to go through the Indexes Documentation.In case you have any doubts, please feel free to reach out to us.\nThanks and Regards.\nSourabh Bagrecha,\nCurriculum Services Engineer", "username": "SourabhBagrecha" } ]
Indexes relations
2021-05-30T06:23:54.563Z
Indexes relations
1,640
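To make the compound-index suggestion above concrete: MongoDB has no foreign-key constraints, so relations are usually modeled by embedding or by storing a reference field, and a compound index simply covers several fields of one collection at once. A small hypothetical mongo shell example; the collection and field names are invented:

    // A compound index over two fields of the same collection
    db.orders.createIndex({ customerId: 1, orderDate: -1 })

    // An index on the field that references another collection,
    // which is what supports $lookup-style joins in place of foreign keys
    db.payments.createIndex({ orderId: 1 })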
null
[ "react-native" ]
[ { "code": "new Realm({...})Realm.open()useEffect\nimport React from 'react';\nimport {...} from 'react-native';\n\n// Redux\nimport { useSelector, useDispatch } from 'react-redux';\n\nimport Realm from 'realm';\nimport { TheSchemaOne, TheSchemaTwo } from 'far-far-away-land';\n\n[...]\n\nuseEffect(() => {\n Realm.open({\n schema: [TheSchemaOne, TheSchemaTwo],\n }).then(realm => {\n \n // Send items from database to state using Redux\n // Or, insert some items in DB?\n\n // The closing:\n return () => realm.close();\n // Or\n // return () => { realm.close() }\n \n });\n}, []);\n\n\nexport const TheSchemaOne = {...};\n\n\n[...]\n\n// Inside useEffect\n\nconst realm = new Realm({\n schema: [TheSchemaOne, TheSchemaTwo],\n});\n\n// Do some queries (no inserts)\n\n// The closing\nrealm.close();\n\n", "text": "Hola! I’m hoping this is the correct category?Got an issue with Realm with React Native (v0.64.1). There is also a confusion with new Realm({...}) and Realm.open(). A good example on where to use the latter?This post is about this error:…already opened on the current thread with a different schema.With React Native (RN), I rely on useEffect to load my state on screen load, a “componentDidMount-like” behaviour:Example taken from here.The above executed on, say, the “index.js” and inside the “index.js” file I have many components that perform some “realm actions”. Noticed I have closed the instance so why am I getting error when performing more operations? For example, should I copy the contents from “useEffect” into a function then call that function, I get the error. How I defined my schema?EDIT:Funny, but this is what works:I’m not happy pushing an app like this to production when I don’t know what’s the issue and I’m still confused when to to use both a/synchronously methods. And sometimes I get Access to invalidated Results objects. Not sure why the SO answer says to keep it open when the official docs says we should close? To get around that error, while closing, I have to map the array and return a cloned object.", "username": "Sylar_Ruby" }, { "code": "existingRealmFileBehavior schema,\n path: getDBPath(),\n sync: {\n user: app.currentUser,\n partitionValue: new ObjectID(getCatalogId()),\n error: (error) => {\n console.log(error.name, error.message)\n },\n existingRealmFileBehavior: { type: \"openImmediately\" }\n }\n}\n", "text": "@Sylar_Ruby I think what you’ll want to do is use Realm.open but then pass the extra parameter existingRealmFileBehavior -See the writeup here Realm issue with sync · Issue #3739 · realm/realm-js · GitHubGenerally, we’d recommend just opening the realm once via a Singleton pattern on App Start and using a Provider to pass it around to different Child Contexts with a hook. Then on app close you can call realm.close() in your lifecycle teardown. I’m not sure of your exact use case but this design pattern is the most common - there may be other reasons you want to do this open/close in every component so its not a one size fits all.We will be going over this and some of our other best practices in our community event here - Realm JavaScript for React Native applications - MongoDB Atlas App Services & Realm - MongoDB Developer Community Forumsif you can join we can address this and other questions", "username": "Ian_Ward" }, { "code": "", "text": "Thanks, @Ian_Ward. That seems to work and what also works is InteractionManager. The closing does work but I needed to wrap the open inside InteractionManager. 
I think I’ll take a step back and read the best practices.EDIT:I’ve went with using a context route and all is well.", "username": "Sylar_Ruby" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm not closing?
2021-05-27T13:15:28.292Z
Realm not closing?
3,777
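A rough sketch of the singleton/provider pattern recommended in the thread above, for anyone reading along. The schema imports are placeholders, and this is only one way to wire it up under those assumptions, not the SDK's official API:

    import React, { createContext, useContext, useEffect, useState } from 'react';
    import Realm from 'realm';
    import { TheSchemaOne, TheSchemaTwo } from './schemas'; // placeholder import

    const RealmContext = createContext(null);

    export function RealmProvider({ children }) {
      const [realm, setRealm] = useState(null);

      useEffect(() => {
        let realmInstance;
        Realm.open({ schema: [TheSchemaOne, TheSchemaTwo] }).then(r => {
          realmInstance = r;
          setRealm(r);
        });
        // Close exactly once, when the provider unmounts (normally app teardown)
        return () => {
          if (realmInstance && !realmInstance.isClosed) {
            realmInstance.close();
          }
        };
      }, []);

      return <RealmContext.Provider value={realm}>{children}</RealmContext.Provider>;
    }

    // Hook used by child components instead of opening their own Realm
    export const useRealm = () => useContext(RealmContext);

Child screens call useRealm() instead of calling Realm.open() themselves, which avoids the "already opened on the current thread with a different schema" situation described in the thread.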
null
[ "swift", "atlas-device-sync" ]
[ { "code": " // Some of the sample code videos uses this as well, which appears to be the same as above.\n // I am assuming both accomplish the same.\n // @objc dynamic var _id: ObjectId = ObjectId.generate()\n\n @objc dynamic var _id = NSUUID().uuidString\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n @objc dynamic var uuid = NSUUID().uuidString\n override static func primaryKey() -> String? {\n return \"uuid\"\n }\nfinal class UserInfo: Object {\n @objc dynamic var someVar = \"\"\n @objc dynamic var anotherVar = \"\"\n\n @objc dynamic var uuid = NSUUID().uuidString\n override static func primaryKey() -> String? {\n return \"uuid\"\n }\n}\nvar uuidvar _id", "text": "I have an iOS app that was created before the acquisition of Realm by Mongo. I am in the process of updating the code to work with Realm Synch.Reading through the documentation it states that I should add the following to the class declarations I wish to synch:All of the class declarations I have already use:Here is an exampleMany of my classes look like this:Can I use what I already have using the var uuid instead of var _id. I have about 40 classes and need to convert and would prefer not to do that, if possible.Thx in advance.", "username": "Xavier_De_Leon" }, { "code": "_id_idstringintobjectId_id_id //Some of the sample code videos uses this as well, which appears to be the same as above.\n // I am assuming both accomplish the same.\n // @objc dynamic var _id: ObjectId = ObjectId.generate()\nObjectIdUUIDObjectIdUUID", "text": "Welcome to the forums!To work with sync, objects MUST use _id as their primary keyPrimary Key _id RequiredTo work with Realm Sync, your data model must have a primary key field called _id . _id can be of type string , int , or objectId .So you will want to change your models now, before moving to a sync environment.Some interesting information here ObjectId with this noteIf an inserted document omits the _id field (value), the MongoDB driver automatically generates an ObjectId for the _id field.and then the other part of the questionWell, kind of. The point is to generate unique values, which they both do. However, Realm Documents are BSON so the ObjectId may better align with that format. IMO, I would be using ObjectId going forward.This is from the docs and including it for completenessObjectId and UUID (Universal Unique Identifier) both provide unique values that can be used as identifiers for objects. ObjectId is a MongoDB-specific 12-byte unique value. UUID is a standardized 16-byte unique value. Both types are indexable and can be used as primary keys.ObjectId has additional features like being able to retrieve the timestamp directly from the object.", "username": "Jay" } ]
Can I use value besides `var _id` as primaryKey() for MongoDB Synch?
2021-05-29T01:08:48.267Z
Can I use value besides `var _id` as primaryKey() for MongoDB Synch?
4,146
null
[ "replication" ]
[ { "code": "", "text": "As far as I know, Replication Deployment does not recommend an even number of members.However, if I use an even number of members, and I give the configuration settings of one of them to {priority:0, vote:0}, then this Replication Deployment is an odd number of members?\nOr can I solve the problem of an even number of members?", "username": "Kim_Hakseon" }, { "code": "", "text": "Hi @Kim_Hakseon,The recommendation is based on voting members so if you have an even number of replica set members (for example, 4) but an odd number of voting members (3), that is still following the general guidance.You can include an even number of voting members in your replica set configuration, but thiat does not improve fault tolerance. For example: the strict majority of a 3 member replica set is 2; the strict majority of a 4 member replica set is 3. In both cases there is a fault tolerance that allows a primary to be elected (or sustained) in the event a single voting member of the replica set is unavailable.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks to you, I solved my curiosity. ", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Question of Replication Node Configuration Setting - votes
2021-05-28T01:37:05.623Z
Question of Replication Node Configuration Setting - votes
1,841
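A small mongo shell sketch of the configuration discussed above: a four-member replica set where one member carries data but neither votes nor can become primary, leaving three voting members. The member index 3 is just an example and should match your own configuration:

    cfg = rs.conf()
    cfg.members[3].priority = 0   // never eligible to become primary
    cfg.members[3].votes = 0      // does not count toward the election majority
    rs.reconfig(cfg)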
null
[]
[ { "code": "", "text": "Hello hello!My name is Tim I’ve been part of the MongoDB family based in Sydney, Australia since May 2019 as an IT engineer. This week I’ve moved to the Developer Relations team as a Community Forum Engineer! My role involves forum administration, onboarding, and customisation. I’m looking forward to collaborating with everyone to make the forums a better place to learn, grow, and hang out! If you have any suggestions regarding the improvement of the forums and such, please start a new Site Feedback discussion!Let’s build together Cheers,\nTim", "username": "TimSantos" }, { "code": "", "text": " Welcome to DevRel @TimSantos! We’re very excited to have you join the team and looking forward to your contributions .Cheers,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Welcome Tim Happy to see the team growing! I am looking forward to collaborating with you to make the forums the best place to learn, grow, and get to know.Cheers,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Welcome Tim,Happy to have you here working on making these forums an even better place!Cheers,\nMike", "username": "Michael_Grayson" }, { "code": "", "text": "Awesome to have you here Tim! Thanks in advance for all your efforts and help to improve forum experience for all users.", "username": "hpgrahsl" } ]
🌱 Hi everyone! Tim here from MongoDB!
2021-05-04T01:26:19.485Z
:seedling: Hi everyone! Tim here from MongoDB!
4,188
https://www.mongodb.com/…8_2_1024x352.png
[ "connecting", "atlas" ]
[ { "code": "", "text": "Hi Team,Am unable to connect to atlas cluster through mongo shell or compass from my local machine.Steps i didAdd my current machine IP in ‘IP Access list’ settings in ‘Atlas Network Access’.Connect through mongo shell :\nmongo “mongodb+srv://gettingstarted.63ixx.mongodb.net/sample_airbnb” --username kunalEntered passwordError ‘You have failed to connect to a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network.’Turned off my firewall in my local machine.Tried again , but failed to connect with same error message as above.No idea why the connection is getting refused again and again . Can someone point where i am doing wrong.error1338×460 11.2 KB", "username": "Kunal_Kumar" }, { "code": "kunal", "text": "Hi Kunal,Error: Authentication failedThe above error generally indicates credentials were entered incorrectly. You can try changing the password for user kunal in the Database Access section of the Atlas UI or create another temporary test user to verify that the credentials are being entered correctly.You may find the following Atlas documentation for this useful as well:Hope this helps.Kind Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi Jason,I already tried changing my username and password many times but no luck.I have created one new account (different email id) in Atlas and configured cluster but while connecting through mongo shell i am facing same issue with my new account toomongo mongodb+srv://clusterone.qnrfp.mongodb.net/MongoDB shell version v4.4.5connecting to: mongodb://clusterone-shard-00-00.qnrfp.mongodb.net:27017,clusterone-shard-00-01.qnrfp.mongodb.net:27017,clusterone-shard-00-02.qnrfp.mongodb.net:27017/?compressors=disabled&gssapiServiceName=mongodb&ssl=trueImplicit session: session { “id” : UUID(“9ffaa6dc-31d2-4af7-8bcf-9c96fb387ea7”) }\nMongoDB server version: 4.4.6Error while trying to show server startup warnings: user is not allowed to do action [getLog] on [admin.]MongoDB Enterprise atlas-11qgt0-shard-0:SECONDARY> db.auth(“m1003-admin” , “m1003-pass”)\nError: Authentication failed.\n0Wonder why ‘secondary server’ is picked up when connecting as belowmongo mongodb+srv://clusterone.qnrfp.mongodb.net/ -u m1003-admin -p m1003-pass\nMongoDB shell version v4.4.5\nconnecting to: mongodb://clusterone-shard-00-00.qnrfp.mongodb.net:27017,clusterone-shard-00-01.qnrfp.mongodb.net:27017,clusterone-shard-00-02.qnrfp.mongodb.net:27017/?compressors=disabled&gssapiServiceName=mongodb&ssl=true\nImplicit session: session { “id” : UUID(“d8ca331c-e0e4-41a7-8608-b17c7c0aac53”) }\nMongoDB server version: 4.4.6\nMongoDB Enterprise atlas-11qgt0-shard-0:SECONDARY>", "username": "Kunal_Kumar" }, { "code": "", "text": "Hi Jason,Thanks for reply. 
I have replied to your comment earlier, this one is just to add information that i created new account, new database user but all are failing with same error messageSeems, there are some other issues which i am not able to figure out.I have attached screenshots of my atlas configuration - Network Access, Database Access , Error MessageIP_Whitlisting1338×460 29 KBFailedToLogin1350×477 32.4 KBFailedToConnect1345×494 17.9 KB", "username": "Kunal_Kumar" }, { "code": "ssl=truereplicaSetreplicaSet", "text": "Hi @Kunal_Kumar,Thanks for getting back to me with that information and confirming you’ve tried different combination of credentials.Wonder why ‘secondary server’ is picked up when connecting as belowBased off your screenshots, I can see the connection attempts are being made to 3 hosts with a few other options such as ssl=true . However, I cannot see the replicaSet value. For troubleshooting purposes, would you be able to try connecting using the standard connection string format? To get this, please follow the below steps:Let me know how this goes.Kind Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Error: Authentication failedIn addition to my previous comments, I have also noticed that no authentication database is being passed through when you are connecting with the DNS seed list format which explains this error now.", "username": "Jason_Tran" }, { "code": "", "text": "Hi Jason,Thanks for pointing my mistake and helping me to look into right direction.My bad, i was not passing argument -authenticationDatabase admin while connecting through mongo shell or compass.Issue got resolved and able to do database operations through both medium (Mongo Shell , Compass)Once again, many thanks fro your prompt assistance.Cheers,", "username": "Kunal_Kumar" }, { "code": "nslookup -q=TXT clusterone.qnrfp.mongodb.net\ntextServer:\t\t127.0.0.1\nAddress:\t127.0.0.1#53\n\nNon-authoritative answer:\nclusterone.qnrfp.mongodb.net\ttext = \"authSource=admin&replicaSet=atlas-11qgt0-shard-0\"\n", "text": "Glad to hear @Kunal_KumarAs a side note, when connecting using mongoshell and using the DNS seed list format, a DNS TXT Record look up should be performed as well in which the record should contain connection options, to be added as parameters to the dynamically constructed connecting string.Since these were not being returned whilst you were using the DNS seed list format, it could indicate a possible DNS configuration issue.One way of testing would be to perform a nslookup as shown in the below example using the cluster you provided in this post:The output should contain the connection options (please note take note of the returned text value):Kind Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Not able to connect to MongoDB Atlas using Mongo Shell or MongoDB Compass
2021-05-29T04:31:26.178Z
Not able to connect to MongoDB Atlas using Mongo Shell or MongoDB Compass
12,417
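For reference, the fix described at the end of the thread above amounts to telling the driver which database holds the user's credentials; with the standard (non-SRV) connection string that is the authSource parameter. A sketch using the hostnames and replica set name that appear in the thread, with the database and password left as placeholders:

    mongo "mongodb://clusterone-shard-00-00.qnrfp.mongodb.net:27017,clusterone-shard-00-01.qnrfp.mongodb.net:27017,clusterone-shard-00-02.qnrfp.mongodb.net:27017/?ssl=true&replicaSet=atlas-11qgt0-shard-0&authSource=admin" --username m1003-admin

The SRV (mongodb+srv) format normally picks up authSource and replicaSet from the DNS TXT record, which is why the nslookup check in the thread is a useful diagnostic when those options seem to be missing.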
null
[ "python", "connector-for-bi" ]
[ { "code": "", "text": "hi there,\nIs there a way to use BI connector inside the python program to write SQL queries… Just curious.", "username": "Prem_Singh" }, { "code": "", "text": "Hi @Prem_Singh - that’s a great question - and the answer is yes. The BI connector emulates a MySQL server, but on top of MongoDB, so you can use any compatible MySQL driver for Python to connect to your data from Python. Here’s an example I wrote for connecting to our open COVID-19 data set.Let me know if you have any problems.Mark", "username": "Mark_Smith" }, { "code": "def main():\n import mysql.connector\n try:\n connection = mysql.connector.connect(user='readonly', password='myreads',\n host='Mongo Bi host',\n port='3307',\n\n database='insightsdb',\n auth_plugin='mongosql_auth')\n cursor = connection.cursor()\n query = \"SELECT * from global_and_us limit 20\"\n cursor.execute(query)\n\n for i in cursor:\n print(i)\n\n cursor.close()\n connection.close()\n except Exception as e:\n print(e)\n", "text": "Hi @Mark_Smith ,\nThe above is not working, as it is giving Authentication plugin ‘mongosql_auth’ is not supportedCan you please help?", "username": "Sufiyan_Ansari" } ]
Using BI connector in python
2020-08-17T09:19:11.476Z
Using BI connector in python
3,564
null
[ "python", "crud" ]
[ { "code": "", "text": "Hi guys. Something weird is happening with my API. I use python to make my Android App communicating with MongoDB (Community). I have two collections, one for Data, and one for Metadata. To modify documents in the Metadata collection I use $set. I use $addToSet to add new data to an array in the Data collections. However, I’m noticing some weird behavior.This is the workflow, which is very trivial. The App sends a JSON with data and metadata, I receive it, I allocate each to two different variables, and then I update each collections with the respective info. Since data comes from the same individual, the ID is the same. So I use (python code):db.Metadata.update_one({’_id’: username}, {$set: {‘some key’: userMetadata}})\ndb.Data.update_one({’_id’: username}, {$addToSet: {some other ‘key’: userData}})Now, the weird thing is that it seems that the first operation cancels out the second. Indeed, I can see changes in the user document in the Metadata collection, but I can’t see changes in the user document in the Data collection. The most weird thing happens when I reverse the order of the operations. so I use:db.Data.update_one({’_id’: username}, {$addToSet: {some other ‘key’: userData}})\ndb.Metadata.update_one({’_id’: username}, {$set: {‘some key’: userMetadata}})In this case, it works perfectly and I see both changes in Metadata and Data collections for the user. I think some priority issue is going on. The problem is that I don’t know which one. And the problem is that I can’t lose user data, especially the ones in Data (Metadata are less important). So even though the reverse order works, I’m still afraid that in some situations (e.g. my server slows down, out of memory issues, other mongo issues) I will lose data due to such a priority issue.Do you have any idea what is going on here?Thanks", "username": "Marco_D" }, { "code": "$set$addToSetupdate_one", "text": "Hello @Marco_D, welcome to the MongoDB Community forum!The order in which the two collections are updated do not matter, in terms of priority of $set and $addToSet - there is no such rule. Each write or update operation on a single document is atomic, and there is no way there can be any interference in the two individual update_one operations.You can include some sample input data, collection documents, and the actual code in your application for a closer look. Also, specify the versions of the MongoDB, Python and PyMongo.That said, it is possible there is a runtime exception happening on one of the operations and you may want to catch runtime exceptions and deal with them in your application’s code.", "username": "Prasad_Saya" } ]
Priority between $set and $addToSet
2021-05-28T17:16:21.704Z
Priority between $set and $addToSet
4,475
null
[ "atlas-functions" ]
[ { "code": "", "text": "Hi everyone, I wonder if there is any way we can create new collection right from Realm 3rd Party Service? I need this function because I have dynamic collections that need to be created.\nThank you.", "username": "CSKH_ePlus" }, { "code": "", "text": "Hi @CSKH_ePlus, welcome to the community forum!We probably need some extra information on what you’re trying to do before properly answering your question.One thing I would mention is that in MongoDB, you can create collections simply by inserting a document into a collection with a name that hasn’t been used before.", "username": "Andrew_Morgan" }, { "code": "", "text": "Thank you so much. That seems a good way to do it.", "username": "CSKH_ePlus" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to create new Collection From Realm 3rd Party Service?
2021-05-27T03:21:16.549Z
How to create new Collection From Realm 3rd Party Service?
2,013
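Building on Andrew's point above that a collection springs into existence on first insert, a Realm function that targets a dynamically named collection could be sketched like this; the service name mongodb-atlas is the usual default linked-cluster name and the database name is a placeholder:

    exports = function(collectionName, doc) {
      const db = context.services.get("mongodb-atlas").db("myDatabase");
      // Inserting into a name that does not exist yet creates the collection implicitly
      return db.collection(collectionName).insertOne(doc);
    };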
null
[ "security" ]
[ { "code": "openssl verify -CAfile test-ca.pem test-server1.pem", "text": "I’m following exactly instructions from these two pages https://docs.mongodb.com/manual/appendix/security/appendixA-openssl-ca/\nhttps://docs.mongodb.com/manual/appendix/security/appendixB-openssl-server/But when runningopenssl verify -CAfile test-ca.pem test-server1.pemI got this errorerror 7 at 0 depth lookup: certificate signature failure\nerror test-server1.pem: verification failed\n139886075573568:error:0407008A:rsa routines:RSA_padding_check_PKCS1_type_1:invalid padding:…/crypto/rsa/rsa_pk1.c:66:\n139886075573568:error:04067072:rsa routines:rsa_ossl_public_decrypt:padding check failed:…/crypto/rsa/rsa_ossl.c:588:\n139886075573568:error:0D0C5006:asn1 encoding routines:ASN1_item_verify:EVP lib:…/crypto/asn1/a_verify.c:170:Anyone please knows why? Thank you", "username": "Nam_Le" }, { "code": "", "text": "Took me almost a day to find out. Call me crazy or whatever, but apparently the common name between intermediate and root certificate must be different. I set them to be equal when using self certificates.https://jamielinux.com/docs/openssl-certificate-authority/create-the-intermediate-pair.htmlUse the intermediate key to create a certificate signing request (CSR). The details should generally match the root CA. The Common Name , however, must be different.Hope that helps someone.", "username": "Nam_Le" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Creating OpenSSL Server Certificates for Testing Failed
2021-05-28T11:35:28.046Z
Creating OpenSSL Server Certificates for Testing Failed
7,409
null
[ "aggregation" ]
[ { "code": "", "text": "Hello,\nIt seems that $bucket aggregation functionality (https://docs.mongodb.com/manual/reference/operator/aggregation/bucket/#mongodb-pipeline-pipe.-bucket) is missing from the node driver.\nIs there a specific reason for this, or a plan to include it?\nCan anyone suggest a workaround for $bucket aggregation using node?I had assumed the node driver would support all features of the mongo client.\nThanks", "username": "Mike_Monteith" }, { "code": "const { MongoClient } = require(\"mongodb\");\n\nconst uri = \"mongodb://localhost\";\n\nconst client = new MongoClient(uri, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\n\nasync function run() {\n try {\n await client.connect();\n\n const db = client.db('test');\n const artists = db.collection('artists');\n\n await artists.drop();\n\n await artists.insertMany([\n { \"_id\" : 1, \"last_name\" : \"Bernard\", \"first_name\" : \"Emil\", \"year_born\" : 1868, \"year_died\" : 1941, \"nationality\" : \"France\" },\n { \"_id\" : 2, \"last_name\" : \"Rippl-Ronai\", \"first_name\" : \"Joszef\", \"year_born\" : 1861, \"year_died\" : 1927, \"nationality\" : \"Hungary\" },\n { \"_id\" : 3, \"last_name\" : \"Ostroumova\", \"first_name\" : \"Anna\", \"year_born\" : 1871, \"year_died\" : 1955, \"nationality\" : \"Russia\" },\n { \"_id\" : 4, \"last_name\" : \"Van Gogh\", \"first_name\" : \"Vincent\", \"year_born\" : 1853, \"year_died\" : 1890, \"nationality\" : \"Holland\" },\n { \"_id\" : 5, \"last_name\" : \"Maurer\", \"first_name\" : \"Alfred\", \"year_born\" : 1868, \"year_died\" : 1932, \"nationality\" : \"USA\" },\n { \"_id\" : 6, \"last_name\" : \"Munch\", \"first_name\" : \"Edvard\", \"year_born\" : 1863, \"year_died\" : 1944, \"nationality\" : \"Norway\" },\n { \"_id\" : 7, \"last_name\" : \"Redon\", \"first_name\" : \"Odilon\", \"year_born\" : 1840, \"year_died\" : 1916, \"nationality\" : \"France\" },\n { \"_id\" : 8, \"last_name\" : \"Diriks\", \"first_name\" : \"Edvard\", \"year_born\" : 1855, \"year_died\" : 1930, \"nationality\" : \"Norway\" }\n ]);\n\n const result = await artists.aggregate( [\n {\n $bucket: {\n groupBy: \"$year_born\", // Field to group by\n boundaries: [ 1840, 1850, 1860, 1870, 1880 ], // Boundaries for the buckets\n default: \"Other\", // Bucket id for documents which do not fall into a bucket\n output: { // Output for each bucket\n \"count\": { $sum: 1 },\n \"artists\" :\n {\n $push: {\n \"name\": { $concat: [ \"$first_name\", \" \", \"$last_name\"] },\n \"year_born\": \"$year_born\"\n }\n }\n }\n }\n },\n {\n $match: { count: {$gt: 3} }\n }\n ] ).toArray();\n\n console.log(result);\n } finally {\n await client.close();\n }\n}\nrun().catch(console.dir);\n[\n {\n _id: 1860,\n count: 4,\n artists: [ [Object], [Object], [Object], [Object] ]\n }\n]\nmkdir nodemdb\ncd nodemdb\nnpm init -y\nnpm install mongodb\nvim index.js // put the above code in this file\nnode index.js\n", "text": "Hi @Mike_Monteith and welcome in the MongoDB Community !I built a little test using these 2 pages:And I get this result:To run this I did:It works fine by me .\nI hope this helps.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "You’re right. 
Thank you Maxime. My problem is actually that Microsoft Azure's weird fake mongo product doesn't support $bucket ", "username": "Mike_Monteith" }, { "code": "", "text": "No news here :-).That's why we are running compatibility tests on DocumentDB and CosmosDB and they are both failing hard.GitHub - mongodb-developer/service-tests: correctness test runner for the MongoDB API endpoint and instructions for performance testing.Here are the latest results I have for CosmosDB.\n[screenshot: CosmosDB compatibility test results]\nAlso, just saying… But when we run these tests on Atlas it costs us less than $1 for a cluster for 30min. It's more about $0.50 in reality as an M30 cluster costs $0.59/hour and I just need 30min. But on CosmosDB, it costs us more than $200 for a single run… Not joking.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "I wish I had the power to choose which cloud provider we use at work \nIt would not be Azure", "username": "Mike_Monteith" }, { "code": "", "text": "MongoDB Atlas is also available on Azure. You don't even have to sell them a new Cloud Provider !The big difference is that MongoDB Atlas provides the REAL MongoDB on Azure…", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$bucket functionality missing from DocumentDB
2021-05-26T15:48:32.397Z
$bucket functionality missing from DocumentDB
1,934
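If the target service supports $group and $switch but not $bucket, the bucketing from the artists example in the thread above can be approximated by computing the bucket boundary yourself. A sketch, with no guarantee that the emulated API accepts these operators either:

    db.artists.aggregate([
      {
        $group: {
          _id: {
            $switch: {
              branches: [
                { case: { $lt: ["$year_born", 1840] }, then: "Other" },
                { case: { $lt: ["$year_born", 1850] }, then: 1840 },
                { case: { $lt: ["$year_born", 1860] }, then: 1850 },
                { case: { $lt: ["$year_born", 1870] }, then: 1860 },
                { case: { $lt: ["$year_born", 1880] }, then: 1870 }
              ],
              default: "Other"   // anything >= 1880, like $bucket's default
            }
          },
          count: { $sum: 1 },
          artists: { $push: { name: { $concat: ["$first_name", " ", "$last_name"] }, year_born: "$year_born" } }
        }
      }
    ])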
null
[ "production", "ruby" ]
[ { "code": "", "text": "This patch release in the 4.12 series of bson-ruby fixes the following issue:", "username": "Dmitry_Rybakov" }, { "code": "", "text": "", "username": "system" } ]
Bson-ruby 4.12.1 released
2021-05-28T15:55:47.081Z
Bson-ruby 4.12.1 released
2,935
null
[]
[ { "code": "", "text": "I ran into this problem and solved it.\nfirst answer is the user created?\nif yes, then look for the problem in the connection code. I can tell you something after analyzing your code for connecting with MongoDB.\nNot the code for creating the User, but the code for creating the connection.", "username": "nb_no" }, { "code": "", "text": "I ran into this problem and solved it.Were you really able to create an Atlas database user with the node-js driver?", "username": "steevej" }, { "code": "", "text": "this is how the user entry looks like in MongoDB Compass{“_id”:{“$oid”:“60afb2309bec572f1486d455”},“email\":\"[email protected]”,“password”:“$2a$10$iqPN8mgnvEHkRmZXa.tEW.fVuUEsN.m8OPC6eSX0HgiHoMNAdIZMW”,“__v”:0}", "username": "nb_no" }, { "code": "", "text": "this is how the user entry looks like in MongoDB Compass:\nuser MongoDB835×570 54.8 KB", "username": "nb_no" }, { "code": "", "text": "While your collection is named users, this is not a user that can access an Atlas database. It might give access to an application that has access to Atlas, but that user cannot access Atlas directly.See Node.js cant create user in MongoDB Atlas - #5 by steevej", "username": "steevej" }, { "code": "", "text": "I mean the user in the database created by the application. I saw a construct in the question: “… creating a user in MongoDB Atlas”. if it was written: “… database user”, I would not write my answer.", "username": "nb_no" } ]
Creating users in MongoDB Atlas using Node.js
2021-05-27T22:45:39.370Z
Creating users in MongoDB Atlas using Node.js
3,607
null
[ "performance", "change-streams" ]
[ { "code": "", "text": "We are trying to build an application which tries to watch a collection and identify a specific field being written to it as part of a db commit. While this has worked out well for the most part, we are struggling to keep up with the db changes associated to one particular collection in our clusterThe collection that is being tailed slow can typically generate large (1MB+) oplog entries, and this arises due to the usage of addToSet against certain large array fields in it.The watch operation we run only projects a single json field (~400 bytes) in an effort to remain efficient, but the getmore queries (printed in mongod during the watch operation) is only able to scan very few documents before hitting the 16mb bson size limit in reslen. The queries take time (150ms+) and the rate of processing is never able to keep up with our load.It would be good to understand if the filter pipeline used with the watch operation is indeed of any help in this scenario or if there is a better way to watch & process changes involving large oplog entries.", "username": "Prasanth_R" }, { "code": "", "text": "hi @Prasanth_R, do you observe this on a shared cluster? If yes, there is a ticket tracking improvements for this use case https://jira.mongodb.org/browse/SERVER-48694\nIt is on our roadmap, stay tuned for the updates.", "username": "Katya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Efficiency of change streams when large oplog entries are created
2020-06-08T13:33:05.274Z
Efficiency of change streams when large oplog entries are created
3,079
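For readers of the thread above, the pipeline passed to watch() can reduce what is sent back to the application, but it runs after the server has already read and decoded each oplog entry, which is consistent with the behaviour described there (trimmed output, largely unchanged scan cost). A Node.js sketch with a hypothetical field name:

    const pipeline = [
      // Only surface updates that touched the field we care about (hypothetical name)
      { $match: { operationType: "update", "updateDescription.updatedFields.statusField": { $exists: true } } },
      // Drop the potentially huge fullDocument from what is returned to the client
      { $project: { fullDocument: 0 } }
    ];

    const changeStream = collection.watch(pipeline, { batchSize: 500 });
    changeStream.on("change", event => {
      // event.updateDescription.updatedFields.statusField holds the new value
      console.log(event.documentKey, event.updateDescription.updatedFields.statusField);
    });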
null
[ "server", "installation" ]
[ { "code": "", "text": "Dear all,New to community so please forgive me if placement is poor.Had an issue following the community install of mongodb 3.6.23 on an Amazon Linux 2 AMIafter setting up the code /etc/yum.repos.d/mongodb-org-3.6.repo[mongodb-org-3.6]\nname=MongoDB Repository\nbaseurl=Index of x86_64\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc>sudo yum install -y mongodb-org-3.6.23 mongodb-org-server-3.6.23 mongodb-org-shell-3.6.23 mongodb-org-mongos-3.6.23 mongodb-org-tools-3.6.23Delta RPMs disabled because /usr/bin/applydeltarpm not installed.\nmongodb-org-server-3.6.23-1.am FAILED\nhttps://repo.mongodb.org/yum/amazon/2/mongodb-org/3.6/x86_64/RPMS/mongodb-org-server-3.6.23-1.amzn1.x86_64.rpm: [Errno 14] HTTPS Error 404 - Not Found ] 0.0 B/s | 0 B --:–:-- ETA\nTrying other mirror…\nBlockquotethis happens for all the packages, I found that the url to the repo is incorrect and hence the issueby installing the RPM direct the packages install without issue as a workaround (note the 2 just before .x86_64rpm) the package is using a 1, I think this either a repo or AWS issue?repo.mongodb.org/yum/amazon/2/mongodb-org/3.6/x86_64/RPMS/mongodb-org-3.6.23-1.amzn2.x86_64.rpmPlease could someone direct me if this is my mistake or is the repo broken? I cannot see a fix in the config file and not sure if to log a case with AWS as this is a Mongo repo and Mongo instructionsAppreciate any help you can give", "username": "Nathan_Harris" }, { "code": "yum install deltarpm\n", "text": "Hi @Nathan_Harris and welcome in the MongoDB Community !Well, first of all, MongoDB 3.6 reached end of life in April 2021, so it’s not supported anymore. This could be the issue here.Whichever MongoDB product you’re using, find the support policy quickly and easily.Can we start by upgrading all this to MongoDB 4.4? Looks like 3.6.22+ was compatible with Amazon Linux 2 at the time though.More installation details here: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-amazon/Also that’s not a typo?\nimage833×240 24.9 KB\nAlso looks likecouldn’t hurt.\nSource: centos - Do I need to do something about \"Delta RPMs disabled\"? - Unix & Linux Stack ExchangeCheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi Maxime,Thank you for the much needed help, the config file was a typo but only in the blog (must have hit the> key), the deltarpm has fixed the issue though, looking at the migration path now…Thank you Nathan", "username": "Nathan_Harris" } ]
MongoDB on AWS Linux 2 Community package install issue
2021-05-26T08:09:22.107Z
MongoDB on AWS Linux 2 Community package install issue
4,113
null
[ "queries", "mongoose-odm" ]
[ { "code": "{ _id: { type : String, unique: true},\n personal: { type: Object, default: {\n wallet: 0,\n bank: 1000,\n inventory: [Object]\n }},\n company: { type: Object, default: {\n name: \"Unnamed Company\",\n funds: 0,\n employees: [Object],\n stats:{\n max_employees: 2,\n experience: 0.0,\n totalincome: 0,\n sales: 0\n },\n },\n },\n", "text": "I can’t seem to find a way to push elements into a nested array. I’ve tried exploring the MongoDB documentation along with surfing stack-overflow and yet I can’t find a solution. My schema look like this:}This works and I can successfully create this and retrieve data such as the funds, wallet, bank etc. However I’m struggling to set value.One instance that I am working on now is trying to push elements to the “employees” array inside of “company.”I have the object that I want to push and whatever attempt I try, whether it be a code snippet from stack-overflow or trying to make something myself from the documentation.Any code snippets or advice to overcome this would be greatly appreciated.", "username": "eitzen_N_A" }, { "code": "{\n \"firstname\": \"Maxime\",\n \"surname\": \"Beugnet\",\n \"role\": \"Senior Developer Advocate\"\n}\n{\n \"_id\": \"123\",\n \"company\": {\n \"name\": \"MongoDB\",\n \"employees\": []\n }\n}\n> db.companies.insertOne({\n \"_id\": \"123\",\n \"company\": {\n \"name\": \"MongoDB\",\n \"employees\": []\n }\n})\n{ acknowledged: true, insertedId: '123' }\n> db.companies.updateOne({ \"_id\": \"123\" }, { $push: { \"company.employees\": { \"firstname\": \"Maxime\", \"surname\": \"Beugnet\", \"role\": \"Senior Developer Advocate\" } } })\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0\n}\n> db.companies.findOne()\n{\n _id: '123',\n company: {\n name: 'MongoDB',\n employees: [\n {\n firstname: 'Maxime',\n surname: 'Beugnet',\n role: 'Senior Developer Advocate'\n }\n ]\n }\n}\n", "text": "Hi @eitzen_N_A,So if I sum up, your problem is:You want to add this employee:Into the employee array of a document that looks like this:If that’s the case then you can do:I hope this helps,Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Maxime you genius, thank you so much! Very much appreciated!!", "username": "eitzen_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Pushing elements to nested array
2021-05-28T12:52:16.483Z
Pushing elements to nested array
13,294
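Since the schema in the thread above is a Mongoose schema, the same update can also be issued through the Mongoose model; a small sketch, assuming a model named Company compiled from that schema (the model name is a placeholder, the update mirrors the shell command from the accepted answer):

await Company.updateOne(
  { _id: "123" },
  { $push: { "company.employees": { firstname: "Maxime", surname: "Beugnet", role: "Senior Developer Advocate" } } }
);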
null
[ "queries", "dot-net", "indexes" ]
[ { "code": "var = Builders<MyObject>.Filter.Gte(x => x.DocumentDate, fromTimestamp)\n\t\t\t\t& Builders<MyObject>.Filter.Lte(x => x.DocumentDate, toTimestamp)\n\t\t\t\t& Builders<MyObject>.Filter.ElemMatch(x => x.Customer, x => x.Customer == 1)\n\t\t\t\t& Builders<MyObject>.Filter.ElemMatch(x => x.Language, x => x.LanguageShort == \"en\")\n\t\t\t\t& Builders<MyObject>.Filter.ElemMatch(x => x.SomeArray, Builders<MyArrayObject>.Filter.In(i => i.SomeProperty, arrayOfValuesToMatch));\nDocumentDatematchFilter var results = await Collection.Aggregate()\n\t.Match(matchFilter)\n\t.Project(someProjectionFilter)\n\t.ToListAsync();\nvar results = await Collection.Aggregate()\n\t.Match(match-filter-with-indexed-properties)\n\t.Match(match-filter-NOT-indexed-properties)\n\t.Project(someProjectionFilter)\n\t.ToListAsync();\nDocumentDate", "text": "I couldn’t measure this on my own, so can someone explain what is more performant (faster) and best practice.Match filter that I use now isProperty DocumentDate is ONLY indexed in my document (so in matchFilter I combine indexed and not indexed properties)When writing aggregations likeis it better to have two matches, one following other, likewhere first match would match only indexed properties, and second match would match unindexed properties.Please note that adding extra indexes is not an option, since I have many different filters, where DocumentDate is common one.Ty,\nMarko", "username": "Marko_Saravanja" }, { "code": "$match$match$match$match", "text": "Hello @Marko_Saravanja,… is it better to have two matches, …No, it is not. The query operation doesn’t change if you have your filter spread over multiple $match stages or with a single $match stage. It doesn’t affect your query performance either.Further explained in the sub-topic $match + $match Coalescence:", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Separate indexed from unindexed properties in match filter
2021-05-28T12:46:53.866Z
Separate indexed from unindexed properties in match filter
2,008
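A small illustration of the coalescence behaviour described in the answer above: the two pipelines below are treated identically by the optimizer, so splitting indexed and unindexed predicates into separate $match stages changes nothing (collection and field names are taken from the question and are placeholders):

// Two consecutive $match stages...
db.myObjects.aggregate([
  { $match: { DocumentDate: { $gte: fromTimestamp, $lte: toTimestamp } } },
  { $match: { Language: { $elemMatch: { LanguageShort: "en" } } } }
])
// ...are coalesced into a single $match before execution, equivalent to:
db.myObjects.aggregate([
  { $match: { DocumentDate: { $gte: fromTimestamp, $lte: toTimestamp }, Language: { $elemMatch: { LanguageShort: "en" } } } }
])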
null
[ "performance", "server", "upgrading" ]
[ { "code": "", "text": "I am trying to move the collection data from replica set to shard cluster.\nIn the shard cluster, we split shard into empty collections that generated the same index.After that, I am trying to load with json file using mongoimport command.The following is our disk usage of before and after their migration value.\nstorage size\n162G → 225G\nindex size\n_id Index: 53G → 94G\nSingle Field Index: 22G, 24G, 23G → 67G, 68G, 65GDisk usage seems to double after migration.\nWhat is the cause of this problem?", "username": "111115" }, { "code": "", "text": "Hi @111115 and welcome in the MongoDB Community !Could you please explain the architecture of your replica set before the migration and the new architecture of the sharded cluster?\nUsually, sharding is overkill before 1TB unless there is another good reason for sharding (planning to reach 1TB+ soon, geo distribution, data sovereignty, etc).\nAlso, how did you distribute your nodes on your physical machines?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi!\nThank you for your prompt reply.Before migration, the architecture was physically using one primary, secondary, and arbiter server.As you said, our existing DB server was using more than 1TB, so we decided to build a new shard cluster.The architecture of the new shard cluster consists of three servers.Currently, there are only a limited number of servers available, so we have configured it like this.\nIn the future, we are planning to increase the config, mongos, and shard servers.\nmongodb sharded cluster488×530 44.4 KBOf course, I thought it would be too much to migrate all the data from the current server to the new server.\nSo, until a new server was added, only a few DBs were going to be migrated.However, as I explained in the first question,\ndisk usage increased rapidly during the migration by sharding by collection,\nso I checked the collection on Mongos server with stats command.We expected to have more storage capacity because of the _id hashed index.\nAnd it was right.\nBut unfortunately, Additionally,\nit seems to cost more than twice the storage size and index size\ncompared to the same number of documents per collection.", "username": "111115" }, { "code": "_idmongodmongod", "text": "Hey,So… I don’t know where to begin because there are so many things to say here and this raises so many new questions.I don’t have a clear explanation for the increase of sizes you noticed to be honest but many things can explain a part of it…Also one thing that I don’t understand in what you are saying:In the shard cluster, we split shard into empty collections that generated the same index.One collection in a RS == one collection in a sharded cluster. If the collection isn’t shared, it will live in its associated primary shard. If it’s sharded, it will be distributed across the different shards according to the shard key.Another thing that might not be obvious but Sharded Clusters come with an overhead. They require more computing power and more machines. If you only have 3 machines available, then it’s more efficient to run a RS on them rather than deploying a sharded cluster & sharing the machines to host multiple nodes on the same machines. It’s also an anti pattern because sharded clusters are designed to be Single Point of Failure safe and here every single one of your 3 machines is a SPOF that will take the entire cluster down with them if one of them fails. 
This architecture is OK for a small dev environment, but definitely not ready for a production environment.We expected to have more storage capacity because of the _id hashed index.This doesn’t make sense to me. Why would an index give you more storage capacity? It’s the opposite. An index costs RAM & storage.Also choosing the right shard key is SUPER important.A hashed index on the _id is the VERY LAST trick in the book you can use to create a shard key. 99% of the time, there is a better solution and a better shard key that will improve read performances dramatically and still distribute the write operations evenly across the different shards.\nA good shard key should optimize as much as possible your read operations by limiting the reads to a single shard and distribute the writes evenly across all the shards. The shard key must not increase (or decrease) monotonically and the rule “data that is access together must be stored together” also applies more than ever. Ideally, data that you will query “together” should be in the same chunk as much as possible. Using a hashed _id does exactly the opposite. It’s basically does a random spread of all the documents across the entire cluster chunks, and consequently shards. Unless your use cases falls in the 1% special cases, most of your queries will be scatter gather queries and they aren’t efficient at all.\nimage960×114 17.1 KB\nAlso, I see you are using Arbiter nodes and they are usually not recommended in production.\nMongoDB Atlas doesn’t use them at all for instance… Anywhere.There are already a few topics in the community forum covering why arbiters are generally a bad idea.Regarding the initial question of the sizes… It doesn’t make sense to me. Unless the questions I asked above can help to find something suspicious.Don’t forget that RS needs to be on 3 different machines to enforce HA and sharding == scaling but it shouldn’t cost you your HA status. Each mongod should be on its dedicated machine with the correct amont of ressource to work correctly.If your 3 nodes RS is already at about 1TB data, you probably have about 150GB RAM on each node so splitting your dataset in 2 => 500GB and squeezing 2 mongod on the same node (2*500GB) with still150GB RAM is exactly the same problem but now with an extra layer of complexity and overhead. A sharded cluster needs a bunch of admin collections to work. They shouldn’t be big enough to be noticeable and create the big difference in sizes that you noticed. So I guess there is something else here.I hope this helps a bit .\nCheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi.\nThank you for explaining in as much detail as possible.Did you use the same MongoDB versions in the former RS and in the sharded cluster?\nFormer RS uses version 4.2.3 and dbs in sharded cluster are configured with version 4.4.5Did you use the same configuration ? Not using a different WiredTiger compression algorithm 1 by any chance?\nYes, the DB version is different, but it has the same configuration. 
The storage engine is wiredTiger.Are these stats just about a single collection?\nI haven’t migrated all the DB collections yet, so I showed you about one of the largest collections.\ncommand : db.collection.stats(scale:1024*1024*1024)\nI referred to the storageSize, indexSizes data.Does that collection contains exactly the same data?\nIt’s exactly the same data.\nHowever, the number of collection documents in the before migration DB is higher .\nBecause the Former RS is still in operation and that collection data is still increasing.How did you gather these stats?\nAs I said above, it is the stats result for one collection.\nOther collections are also much higher than the disk usage of the former dbAnother thing that might not be obvious but Sharded Clusters come with an overhead. They require more computing power and more machines. If you only have 3 machines available, then it’s more efficient to run a RS on them rather than deploying a sharded cluster & sharing the machines to host multiple nodes on the same machines. It’s also an anti pattern because sharded clusters are designed to be Single Point of Failure safe and here every single one of your 3 machines is a SPOF that will take the entire cluster down with them if one of them fails. This architecture is OK for a small dev environment, but definitely not ready for a production environment.We created three physical servers, one (config, Mongos) server and two shards, and configured RS.\nThese two shard servers should be combined into one shard and used as RS.\nIn the future, six new machine will be Added.\nIs it okay to configure one primary and one secondary per shard without an Arbiter server?Also choosing the right shard key is SUPER important.As for Shard key, I think it was my mistake.\nThat collection is mostly tried aggregated querying for dates, it would be better to make it a shard key with a date field.\n(I’m sorry that I don’t explain exactly what fields are there because it’s confidential information.)\nThanks to your advice, I will consider choosing the shard key well when sharding not only that collection but also other collections.For some collections, only the query is performed for the _id, in which case, is it okay to use it as a hashed index for the _id if the collection is large?I look forward to your reply.\nThank you.", "username": "111115" }, { "code": "_id_id", "text": "Is it okay to configure one primary and one secondary per shard without an Arbiter server?Absolutely not. If you do that, both nodes are a SPOF because you need to be able to reach the majority of the voting members of a RS to be able to elect a primary… With 2 nodes, the majority is at 2. So if one of them fails, then you won’t have a primary anymore (either because it the node that failed or because the primary will step down to secondary as it cannot reach the majority of the nodes).Arbiter are also a bad idea in general because they do not help to move the majority-commit point. If your secondary is down (for maintenance operation or to perform a backup using the file system), only your primary can perform a write operation (arbiter don’t have data so no writes) so only a single node gets the data. It’s less than 2 so that data isn’t committed on a majority of the nodes. 
Meaning it can potentially be rolled back if something happens to the primary before the secondary can catch up and it can’t be read by change streams for example.It’s also creating a cache pressure on the primary which has to keep track of what is majority committed and what’s not to answer correctly the queries using the different readConcern options.For some collections, only the query is performed for the _id, in which case, is it okay to use it as a hashed index for the _id if the collection is large?Yes, but you really only have queries on the _id and no other fields, no aggregations using something else? Where do you get that _id from then? If it’s from another collection, that looks like a join that could be avoided by embedding the document.\nIf using this shard key distributes the writes evenly across all the chunks (so shards by extention) and allow targeted read (no scatter gather), then it’s technically perfect.Just to reiterate on your architecture, you want to avoid SPOFs at all cost because they can make your entire cluster instable. For example, at the moment you have your 3 config nodes on the same physical machine… Losing this machine will instantly make the entire cluster unavailable until you can restore this machine. If you lose this hard drive… You will have to restore your entire cluster from a backup because your chunk definitions won’t be aligned anymore with your data in your shards which is a terrible situation to be in…\nAlso, the point of having a sharded cluster is to scale up read and write operations. But you have only a single mongos which will bottleneck your performances. And also make the entire cluster unavailable if it dies.I recommend this free course on MongoDB University to get more details:Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Regarding the storage and index sizes. I really can’t tell without more investigations. There must be a reason but it’s out of my reach.\nDon’t forget to size your RAM correctly so there is enough space in RAM to store the indexes + the working set + have some spare RAM for queries, sort in memory, aggregations, etc.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Why is disk usage too high when migrating from 4.2 to 4.4?
2021-05-24T05:37:33.530Z
Why is disk usage too high when migrating from 4.2 to 4.4?
4,498
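One way to answer the compression question raised in the thread above is to compare what collStats reports on the old and the new cluster; a quick mongosh check (collection name is a placeholder):

// The WiredTiger creation string includes the block_compressor in use (snappy, zlib, zstd or none)
db.myCollection.stats().wiredTiger.creationString
// Per-index sizes, scaled to MB, for comparing the _id and single-field indexes mentioned above
db.myCollection.stats({ scale: 1024 * 1024 }).indexSizes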
null
[]
[ { "code": "", "text": "Hello, My env is 3 nodes centos cluster, my mongodb version is 2.6…when I use replica set in 3 nodes cluster, I adjuest system time rollback for 3 hours, and I stop the primary node mongod server, it failed to elect the new primary between the rest two secondary nodes", "username": "Xu_Han" }, { "code": "", "text": "Hi @Xu_Han and welcome in the MongoDB Community !What does this mean? What command did you run?I adjuest system time rollback for 3 hoursAlso, MongoDB 2.6 reached End Of Life in October 2016 and isn’t supported since then.\nPlease upgrade to a more recent version of MongoDB.Whichever MongoDB product you’re using, find the support policy quickly and easily.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "date --set=\"-3 hours\"systemctl stop mongod", "text": "Hello @MaBeuLux88Thanks for your reply, I run date --set=\"-3 hours\" in each node, then I run systemctl stop mongod to stop the primary node mongod service, the rest of two secondary nodes can’t trigger the election to elect the new primary. It start election and elect the primary 3 hours later. I test this condition in Mongodb version 3.2 too, ver 3.2 still has this kind of issue. But the latest 4.4 is OkThanks,\nXu", "username": "Xu_Han" }, { "code": "", "text": "The nodes probably thought they couldn’t be elected because they had more recent entries in the oplog. Old stuff die hard . This was fixed at some point apparently.The real solution in an update .Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Election issue when system time rollback
2021-05-27T11:56:42.168Z
Election issue when system time rollback
1,994
null
[ "node-js" ]
[ { "code": "exports.handler = async function(context, event, callback) {\n \n console.log('require realm');\n const Realm = require(\"realm\");\n \n console.log(\"event\", event);\n \n const user = event.from;\n console.log(\"user\", user);\n const enquiry = event.enquiry\n const body = event.body\n \n // NBP01\n const app = new Realm.App({ id: \"nbp01-*****\" });\n const apiKey = \"*************************\";\n\n const timestamp = new Date();\n const status = \"twilio\";\n const easyid = \"E0010\";\n const code = \"action\";\n const userid = user\n const device = \"twiliofunction\"\n const datastr = userid + enquiry\n const datadoc = { \"enquiry\": enquiry, \"body\": body }\n const theArgs = [timestamp, status, easyid, code, userid, device, datastr, datadoc];\n \n if (!apiKey) {\n throw new Error(\"Could not find a Realm Server API Key.\");\n }\n \n const credentials = Realm.Credentials.serverApiKey(apiKey);\n \n try {\n const user = await app.logIn(credentials);\n console.log(\"Successfully logged in!\", user.id);\n \n console.log(\"Calling Realm Function\");\n const result = await user.callFunction(\"sevent\", theArgs)\n console.log(\"Function result\", result);\n user.logOut\n return callback(null, result);\n\n } catch (err) {\n console.error(\"Failed to log in\", err.message);\n return callback(error);\n }\n\n}\n", "text": "I am using the Realm JS SDK in the ‘Function’ (Node 12) Environment of Twilio.When i use the function for the first time i get the following error …UnhandledPromiseRejectionWarning: Unhandled promise rejection: Error: make_dir() failed: Read-only file system Path: /var/task/mongodb-realm/\nException backtrace: /var/task/node_modules/realm/compiled/napi-v4_linux_x64/realm.node(ZN5realm4util4File11AccessErrorC1ERKSsS4+0x2d) [0x7fd2bb1a654d]\n/var/task/node_modules/realm/compiled/napi-v4_linux_x64/realm.node(_ZN5realm4util12try_make_dirERKSs+0x166) [0x7fd2bb298ef6]\n/var/task/node_modules/realm/compiled/napi-v4_linux_x64/realm.node(+0x4bbebd…Subsequent calls all work fine and i can use the function multiple times without any issues.\nI guess its trying to write to a private filesystem for some reason - for the first time ??Here is the code …", "username": "Damian_Raffell" }, { "code": "", "text": "The error is occurring here …\nconst app = new Realm.App({ id: “nbp01-*****” });", "username": "Damian_Raffell" }, { "code": "/var/task/mongodb-realm/", "text": "Hi @Damian_Raffell, what happens if you try to write directly to /var/task/mongodb-realm/ (just to confirm that this is the issue)?", "username": "Andrew_Morgan" }, { "code": "Realm.defaultPath", "text": "Looks like you can only write to the tempdir in a Twilio Function: Utilize Temporary Storage to Read & Write in a Twilio Function. Maybe try setting Realm.defaultPath to somewhere in your function’s tempdir", "username": "Andrew_Morgan" }, { "code": "", "text": "I figured that the same issue might exist for Lambda functions and a quick search came up with this… Realm not working in AWS Lambda (serverless framework)", "username": "Andrew_Morgan" }, { "code": " try {\n app = new Realm.App({ id: id });\n } catch(err) {\n console.log(\"Building Realm.App failed with : \", err.message);\n app = new Realm.App({ id: id });\n console.log(\"Built Realm.App again!\");\n }\n", "text": "Hi Andrew … thanks for helping out.Yep … i found most of that info too…Strange thing is that if i catch the first error and just run it again - it works ok !\nThere must be some shonkey code in there ? 
The Twilio Function only calls a Realm Function (no DB work).\nI will try setting Realm.defaultPath … but i think it wants a fully qualified path ?", "username": "Damian_Raffell" } ]
Using JS SDK in Twilio Node - writing to private filesystem error?
2021-05-21T10:20:48.786Z
Using JS SDK in Twilio Node - writing to private filesystem error?
3,095
null
[]
[ { "code": "", "text": "Disclaimer: Realm NewbiePlease think of my project similar to a website builder, where websites are created through an admin web app. Each new website requires it’s own MongoDB database and Realm App.The database stuff is all fine and achievable through the SDKs. I can also create the Realm App along with required functions through the Realm Administration API. However my Realm App needs a specific JSON schema (not generated) and custom resolvers. I cannot see any way that I can set those through the Administration API or the SDK. Am I missing it - it might be a terminology issue?Currently the only way I can see to create a fully configured Realm App is through an import using the CLI which isn’t ideal from a web application programming perspective. Any assistance is much appreciated.", "username": "David_Peck" }, { "code": "generate_schema", "text": "Hi @David_Peck, welcome to the community forum!I’m not an expert on the MongoDB Realm CLI (you may attract better-qualified eyeballs in the “MongoDB Realm” category.One API endpoint I did notice in the docs is generate_schema, which could work if you’d set up sample data?", "username": "Andrew_Morgan" }, { "code": "", "text": "Thanks for responding.Unfortunately I don’t think so. The generate schema creates a schema from sample data but because unfortunately what it creates is not suitable for our needs so we have modified it. Also I think that call generates a schema for you, but I don’t know if it sets it or just returns it in the response.", "username": "David_Peck" }, { "code": "POST", "text": "As it’s a POST, I’d assumed that it would add the schema to the app, but the docs aren’t explicit.I’ll dig a bit more to see if there’s something we’re missing.", "username": "Andrew_Morgan" }, { "code": "", "text": "The answer is that it’s a work-in-progress. Currently you’d set the schema through the rules endpoint but it will be moved from there to a more specific endpoint very soon. So, the short-term recommendation is to use the CLI.There may also be an undocumented endpoint that works at an app level that could be used. We’re checking if this is something suitable to use/document (and won’t become obsolete with the separation of schema and rules).", "username": "Andrew_Morgan" }, { "code": "", "text": "That’s a great answer (albeit not the one I was hoping for). Thank you Andrew, and please do let me know if there are any undocumented endpoints.", "username": "David_Peck" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Programmatically duplicate Realm Apps
2021-05-27T10:23:30.639Z
Programmatically duplicate Realm Apps
2,718
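To make the short-term CLI recommendation above concrete, a sketch using the legacy realm-cli export/import pair (app IDs and paths are placeholders, and the exact flags depend on the realm-cli version; the newer CLI replaces these commands with pull/push):

realm-cli export --app-id source-app-abcde --output ./source-app
realm-cli import --app-id new-app-fghij --path ./source-app --strategy replace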
https://www.mongodb.com/…22f0a8afeacb.png
[ "aggregation" ]
[ { "code": "db.collection.aggregate(\n [\n { \"$limit\": 200000 },\n {\n \"$group\":\n {\n \"_id\":\n {\n \"TID\": \"$TID\",\n\t\t\t\t\t\t\t\"Opt\": \"$Opt\",\n \"DSN1\": \"$DSN1\",\n\t\t\t\t\t\t\t\"DSN2\": \"$DSN2\",\n \"Column\": \"$Column\",\n\t\t\t\t\t\t\t\"Row\": \"$Row\",\n \"CSN\": { \"$substr\": [\"$CSN\", 2, -6] }\n },\n \"details\": {\n \"$push\":\n {\n \"A\": \"$A\",\n \"B\": \"$B\",\n \"C\": \"$C\",\n \"D\": \"$D\",\n \"timestamp\": \"$timestamp\"\n }\n }\n }\n }\n ],\n {\n allowDiskUse: true\n }\n); \n\n", "text": "Hi ,I’ve a goal to query data from collection which is huge > 2TB, I need to query only the data I need as CSV file to process in Python. What I need to do while query data isRight now I can do pipeline like this.Output from this pipeline is like {\"_id\": { set of “details” according to _id} } as pictureimage719×448 4.18 KBFrom this point I’m not sure how to sort by “details.timestamp” and select only certain range of “details”. Moreover, how can I break output as 1 “_id”: to 1 “details” , not 1 “_id” to set of “detail” which currently I have? 1 “_id” to 1 “details” is more suitable for my process.Please kindly advise.Thanks.", "username": "Preutti_Puawade" }, { "code": "{\n$unwind : \"$details\"\n}\n", "text": "Hi @Preutti_Puawade,Is it possible to filter by using $match and $sort before the limit and group? If yes you should consider doing so and index those fields in the order of predicates going by Equility , Sort and Range of the queried fields.To break the results I believe you will need an $unwind stage in the end to have each pushed element as a single document :After this unwind you can also sort and filter the unwinded documents but be aware that it will all be in memory and cannot use indexes therefore to optimise the pipeline try to filter the documents as much as you can in first stage of the pipeline.To export to csv you can use a scripting language like python and pymongo or use mongoexport when exporting a view which you can define with your pipeline…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi@Pavel_Duchovny thanks for advise. From my knowledge, I don’t think I can filter by using $match and $sort before the limit and group. Maybe if I can break down my operation you can guide me more,I group as _id by TID, Opt, DSN1, DSN2, Column, Row, CSN to have partition of data. Then push “details” in the group and want to sort details by details.timestamp. 
Then I want to query only 30th - 180th of details in each _id and unwind them before export as CSV.Currently I have done this by pull original document no aggregation and perform this action in Python using pandas and the code is just belowdf[‘RN’] = df.sort_values([‘timestamp’], ascending=[True]) \n.groupby([‘DSN1’,‘DSN2’,‘TID’,‘Column’,‘Row’,‘Opt’,‘CSN’]) \n.cumcount() + 1df2=df.loc[(df[‘RN’] >= 30) & (df[‘RN’] <= 180)]do you have suggestion or alternative ?Thanks.Regards,Ohm", "username": "Preutti_Puawade" }, { "code": "timestamp{ timestamp : 1}db.collection.aggregate(\n[\n{ \"$sort\" : { timestamp : 1 } },\n{ “$limit”: 200000 },\n{\n“$group”:\n{\n“_id”:\n{\n“TID”: “$TID”,\n“Opt”: “$Opt”,\n“DSN1”: “$DSN1”,\n“DSN2”: “$DSN2”,\n“Column”: “$Column”,\n“Row”: “$Row”,\n“CSN”: { “$substr”: [\"$CSN\", 2, -6] }\n},\n“details”: {\n“$push”:\n{\n“A”: “$A”,\n“B”: “$B”,\n“C”: “$C”,\n“D”: “$D”,\n“timestamp”: “$timestamp”\n}\n}\n}\n},\n{ $unwind : {\n path: \"$details\",\n includeArrayIndex: \"arrIndex\"\n}\n},\n{ $match : { \"arrIndex\" : { \"$gte\" : 30 }, \"arrIndex\" : { \"$lte\" : 180 }}}\n],\n{\nallowDiskUse: true\n}\n); \n", "text": "@Preutti_Puawade,I think you should be able to sort on the beginning with timestamp field. If you need to do a ASC sort for example create index on { timestamp : 1}.Something like that might be ok for you.Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "\"arrIndex\" {\n \"$sort\": {\n \"_id\": 1.0,\n \"timestamp\": 1.0\n }\n", "text": "\"arrIndex\"@Pavel_Duchovny Thank you very much. I’ve spent time of the whole yesterday to do this and got something quite close to your code, mine put “$sort” after “$group”I’ll try yours in case it got better performance. Thank you again for helping on this. Appreciated your kindly advise.Thanks & Regards,Ohm.", "username": "Preutti_Puawade" }, { "code": "db.collection.aggregate(\n [\n {\n \"$match\": {\n \"$and\": \n [\n\t\t\t\t\t { \"TID\": /XP/ },\n { \"timestamp\": { \"$gte\": \"2021-05-19 00:00:00\" } },\n { \"timestamp\": { \"$lte\": \"2021-05-27 23:59:59\" } },\n { \"Col\": /^\\d.*$/ },\n { \"MT0\": /^\\d.*$/ },\n { \"Row\": /^\\d.*$/ },\n { \"TRM\": /^\\d.*$/ },\n { \"WRLFTR_0\": /^\\d.*$/ },\n { \"WRLFTR_1\": /^\\d.*$/ }\n ]\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"TID\": \"$TID\",\n \"Opt\": \"$Opt\",\n \"DSN1\": \"$DSN1\",\n \"DSN2\": \"$DSN2\",\n \"Col\": \"$Col\",\n \"Row\": \"$Row\",\n \"CSN\": \"$CSN\", \n },\n \"details\": {\n \"$push\": { // partition over\n \"MT0\": \"$MT0\",\n \"WRLFTR_0\": \"$WRLFTR_0\",\n \"WRLFTR_1\": \"$WRLFTR_1\",\n\t\t\t\t\t\t\"TRM\": \"$TRM\",\n \"timestamp\": \"$timestamp\"\n }\n },\n }\n },\n {\n \"$sort\": {\n \"_id\": 1.0,\n \"timestamp\": 1.0 // order by timestamp\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$details\",\n \"includeArrayIndex\": \"array_idx\" // row_number\n }\n },\n {\n \"$match\": { // select only order 30 - 180 in array\n \"$and\": [\n {\n \"array_idx\": {\n \"$gte\": 30.0\n }\n },\n {\n \"array_idx\": {\n \"$lte\": 180.0\n }\n }\n ]\n }\n },\n {\n \"$sort\": { // sort TRM for percentile calculation\n \"details.TRM\": 1.0\n }\n }, \n {\n \"$group\": { // group parameter back to array by partition \"_id\" \n \"_id\": \"$_id\",\n \"timestamp\": { \"$push\": {\"$toDate\":\"$details.timestamp\"} },\n \"MT0\": { \"$push\": {\"$toDouble\":\"$details.MT0\"} },\n \"TRM\": { \"$push\": {\"$toDouble\":\"$details.TRM\"} },\n \"WRLFTR_0\": { \"$push\": {\"$toDouble\":\"$details.WRLFTR_0\"} },\n \"WRLFTR_1\": { \"$push\": {\"$toDouble\":\"$details.WRLFTR_1\"} },\n 
}\n },\t\t\t\n { // reporting \"_id\" and calculating \n \"$project\": {\n \"_id\": 0,\n \"TID\": \"$_id.TID\",\n \"Col\": {\n \"$toDecimal\": \"$_id.Col\"\n },\n \"Row\": {\n \"$toDecimal\": \"$_id.Row\"\n },\n \"CSN\": {\n \"$substr\": [\n \"$_id.CSN\",\n 2.0,\n -6.0\n ]\n },\n \"Opt\": \"$_id.Opt\",\n \"PID\": \"$_id.PID\", \n \"DSN1\": \"$_id.DSN1\",\n \"DSN2\": \"$_id.DSN2\",\n \"CBU_ID\": \"$_id.CBU_ID\", \n \"MAX_TRM\": {\n \"$max\": \"$TRM\"\n },\n \"Q3_TRM\": { \n \"$arrayElemAt\": [ \"$TRM\", {\"$floor\": { \"$multiply\": [0.75,{\"$size\": \"$TRM\"}] }} ] // pick up number at 75th percent of all data as Percentile 75\n },\n \"AVG_TRM\": {\n \"$avg\": \"$TRM\"\n },\n \"MAX_WRLFTR_0\": {\n \"$max\": \"$WRLFTR_0\"\n },\n \"MAX_WRLFTR_1\": {\n \"$max\": \"$WRLFTR_1\"\n },\n \"MAX_MT0\": {\n \"$max\": \"$MT0\"\n },\n \"MAX_timestamp\": {\n \"$max\": \"$timestamp\"\n },\n \n }\n },\n ],\n {\n \"allowDiskUse\": true\n }\n);\n", "text": "Just to record of complete pipeline:", "username": "Preutti_Puawade" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Need advice, cannot sort and select required document from group and push
2021-05-25T04:34:38.829Z
Need advice, cannot sort and select required document from group and push
3,552
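Following the mongoexport suggestion made early in the thread above, one way to get the pipeline output as CSV is to wrap the pipeline in a view and export the view; a sketch with placeholder names (queries against the view remain subject to the memory limits discussed above):

// mongosh: materialize the aggregation as a read-only view
db.createView("detailsSummary", "collection", [ /* the pipeline recorded above */ ])

mongoexport --uri="mongodb://host:27017/mydb" --collection=detailsSummary --type=csv --fields=TID,Col,Row,CSN,MAX_TRM,Q3_TRM,AVG_TRM,MAX_timestamp --out=summary.csv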
null
[ "atlas-device-sync", "app-services-user-auth" ]
[ { "code": "Sign in with Applelet credentials = Credentials.apple(idToken: \"identityToken\")\napp.login(credentials: credentials)\nSign in with AppleSign in with Apple", "text": "Hello,\nI am working on an iOS app developed with Swift. I managed to implement the Sign in with Apple functionality and register a user with MongoDB Realm. Right after I call the login with the identityToken from Apple:I get the RealmUser and can sync with the database in Atlas. However, the RealmUser instance won’t be logged in forever (which is expected behaviour I guess, even if I found a post online which suggested otherwise). So maybe I am missing something regarding that process but I don’t get the identityToken without showing the Sign in with Apple button again.\nSo does anyone know what the Best Practise is regarding Logging in the user with Sign in with Apple after registering successfully?Thanks\nTobi", "username": "tobi_wan_kenobi" }, { "code": "if app.currentUser != nil {\n self.loggedIn = true\n // or some other indicator that the user is already logged in\n}\n", "text": "You can try this. I’m not using signin with apple, but another custom JWT called CosyncJWT.Anyway, try:Once I used this, it checks for the cached realm user and skips the login process. The only reason you would need to show Sign In With  again would be if they actually logged out.", "username": "Kurt_Libby1" }, { "code": "AppState_ = realmApp.currentUser?.logOut()app.currentUser", "text": "Hi @Kurt_Libby1,thanks for your answer. I think my question wasn’t clear enough. I was actually wondering what to do in the case the user is logged out. I was looking for a way to login the user again without showing the Sign in With Apple Button like you would do with email and password (Just store it in the keychain), when the user has something like “Remember me” activated. But I guess it’s not possible with Sign in with Apple.However, I looked at my implementation again where I tried to adopt some concepts from the RChat app example where they have an AppState that handles login and logout. I saw that they logout the user in the init method:_ = realmApp.currentUser?.logOut()So every time the app starts the user gets logged out which I don’t want. But I guess in the example they only do it to show the login on purpose every time the app starts.\nSo can I actually assume that app.currentUser is never nil (logged out) unless I explicitly logout the user?", "username": "tobi_wan_kenobi" }, { "code": "@State private var isLoggedIn: Bool = false\nSpacer().onAppear(perform: {\n if app.currentUser != nil {\n self.isLoggedIn = true\n } \n})\n", "text": "Hey Tobi,I think that is correct. When a realm has already been downloaded, there is a cached user, so you can check for that user and then just access the realm.If you call logOut, that remove the cached user.At that point you would need to authenticate ( SIW or any other authentication) in order to create a new authenticated realm user. Then, as long as you don’t call logOut, just check for the user.I am using a similar AppState, but just in my ContentView.swift file at the top level.Then somewhere in my first screen where people can log in I haveThat is enough to then route them to another View so that they never see the log in screen unless they log out.Hope that helps.–Kurt", "username": "Kurt_Libby1" }, { "code": "logOutt aware that there should be always a cached user until I call ", "text": "Hi Kurt,thanks so much for your help and your code examples. 
In the end I solved the problem by just removing the logOut call. But I wasn't aware that there should always be a cached user until I call logOut.\nSo thanks for your input - Tobi", "username": "tobi_wan_kenobi" } ]
Login user after Sign in with Apple
2021-05-26T15:34:55.552Z
Login user after Sign in with Apple
2,779
null
[ "realm-web" ]
[ { "code": "const appId = 'myAppId';\n const appConfig = {\n id: appId,\n timeout: 100000,\n };\nconst app = new Realm.App(appConfig);\nconst mongodb = app.currentUser.mongoClient(\"mongodb-atlas\");\n", "text": "when i’m using above code in react native getting error “TypeError: app.currentUser.mongoClient is not a function”", "username": "Ayushi_Mishra" }, { "code": "", "text": "Same Issue:\non going through dov i found “To access a collection, create MongoDB service handle for the user that you wish to access the collection with:”there is some meaning in \"MongoDB service handle for the users, which i am unable to understand.HELP", "username": "CXLAB_MV" }, { "code": "currentUserapp.logIn(<Credentials>)mongoClientconst appId = 'myAppId';\nconst appConfig = {\n id: appId,\n timeout: 100000,\n};\nconst app = new Realm.App(appConfig);\n\n// Session authentication using the anonymous user\nconst credentials = Realm.Credentials.anonymous();\nawait app.logIn(credentials);\n\nconst mongodb = app.currentUser.mongoClient(\"mongodb-atlas\");\n", "text": "Hi @Ayushi_Mishra,Welcome to the community!From the code snippet you have provided, it does not look like you have authenticated the app session. The currentUser attribute for your app will be null as you have not called app.logIn(<Credentials>). In order to have access to the mongoClient function, all that needs to be done is to login against one of our included authentication providers like belowAs a note, our tutorial is extremely helpful in showing how to build a simple tracker app using React Native.I hope this solves your problem.Cheers,\nGiuliano", "username": "giulianocelani" } ]
TypeError: app.currentUser.mongoClient is not a function
2021-02-09T19:19:36.701Z
TypeError: app.currentUser.mongoClient is not a function
3,669
null
[ "performance", "security" ]
[ { "code": "", "text": "environment:We want to have our MongoDB data stored on a LUKS encrypted partition. The servers have NVMe SSD drives. Are there any gotchas to be aware of? Or would it just be expected to work? I understand that there will be some level of performance impact, but I’m trying to get a sense if using LUKS will make MongoDB practically unusable.", "username": "AmitG" }, { "code": "", "text": "Hi @AmitG,LUKS and other disk encryption methods should be transparent to applications like MongoDB, but performance outcomes will depend on your workload and hardware resources.The MongoDB server isn’t explicitly tested with LUKS, but there haven’t been any reports of significant problems that would lead to caveats in our MongoDB Production Notes.MongoDB’s supported solution for encryption at rest is the Encrypted Storage Engine available in MongoDB Enterprise Server.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
LUKS encryption on MongoDB data partition
2021-05-26T18:35:15.360Z
LUKS encryption on MongoDB data partition
2,375
null
[ "crud" ]
[ { "code": "_id: \"...\", list: [ {w:\"text1\"}, {w:\"text2\"}, {w:\"text3\"}....]find({ list: {w: \"text2\" } })list: [{w:\"text2\"}]", "text": "Data\n_id: \"...\", list: [ {w:\"text1\"}, {w:\"text2\"}, {w:\"text3\"}....]with this query find({ list: {w: \"text2\" } }) this return the whole list, how can I return only matched documents?\nexpected: list: [{w:\"text2\"}]", "username": "Alii_N_A" }, { "code": "db.partial.insert({ list: [{w:\"text1\"}, {w:\"text2\"}, {w:\"text3\"} ] })\n\ndb.partial.aggregate([\n { $project: { list: { $filter: { input: \"$list\", as: \"item\", cond: { $eq: [ \"$$item.w\", \"text2\" ] } } } } }\n]) \n", "text": "You could use $filter aggregation pipeline operator to return a subset of an array that match the condition.So, using your example, it would look like this:Mahi", "username": "mahisatya" } ]
Return partial array
2021-05-27T18:58:47.883Z
Return partial array
2,368
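If only the first matching element is needed, a plain find with an $elemMatch projection is an alternative to the $filter aggregation shown above; $filter returns every matching element, while the projection below returns only the first one:

db.partial.find(
  { "list.w": "text2" },
  { list: { $elemMatch: { w: "text2" } } }
)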
null
[ "java" ]
[ { "code": "", "text": "Hi, I need to find a document in my collection with some information about the player and if there isn’t any, insert a new one. I know in C# is FirstOrDefault(), which returns null if nothing is found. Unfortunately, in JAVA, the collection of documents cant be null. Is there some other way to do it?Code I have so far: mongo - Pastebin.com", "username": "dlabaja" }, { "code": "", "text": "Hi @dlabaja,I guess the best way is to use a method such as FindOneAndUpdate with an upsert:true having only setOnInsert clause only.This means that the document will try to be fetched and if not existing the upsert part of $setOnInsert will occur.https://mongodb.github.io/mongo-java-driver/4.2/apidocs/mongodb-driver-sync/com/mongodb/client/MongoCollection.html#findOneAndUpdate(com.mongodb.client.ClientSession,org.bson.conversions.Bson,java.util.List,com.mongodb.client.model.FindOneAndUpdateOptions)You can learn more on this article from my colleague:\nhttps://www.mongodb.com/quickstart/java-setup-crud-operations/Let me know if that helps?Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi, thanks for a fast response\nI’m not sure if I understand it, this is my code so farBson filter = eq(“uuid”, event.getPlayer().getUniqueId().toString());\nBson updateOperation = push(“uuid”, event.getPlayer().getUniqueId().toString());\nUpdateOptions options = new UpdateOptions().upsert(true);\nObject updateResult = coll.updateOne(filter, updateOperation, options);and log: log - Pastebin.com", "username": "dlabaja" }, { "code": "findOneAndUpdate()Bson filter = eq(“uuid”, event.getPlayer().getUniqueId().toString());\n Bson insertDoc = setOnInsert(“uuid”, event.getPlayer().getUniqueId().toString());\nUpdateOptions options = new UpdateOptions().upsert(true);\nDocument userDocument = coll.findOneAndUpdate(filter, insertDoc, options);\n", "text": "Hi @dlabaja,What I suggest is to use findOneAndUpdate() method instead of updateOne:I haven’t tested this code and it is based on the Java article and APIThis way if the document exists it will be returned and if not use setOnInsert to set a new document.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi tried it but still getting MongoTimeout exception. It seems that it cant find the document, so it tries again and again until the exception", "username": "dlabaja" }, { "code": "", "text": "Hi @dlabaja,I actually think that your issue is with the Java version and your client certificate being out of date:Issues connecting to Atlas Database - #4 by Pavel_DuchovnySee the above way to try and fix it.Thanks", "username": "Pavel_Duchovny" }, { "code": "", "text": "Ok, it seems I solved it - using JDK 12 wasn’t a good option, so I updated it to 16 instead. Thanks for your support and have a nice day ", "username": "dlabaja" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Document exist problem
2021-05-23T10:11:16.381Z
Document exist problem
3,344
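For reference, the upsert-with-$setOnInsert pattern suggested above, expressed in mongosh so the expected behaviour can be checked before wiring it into the Java driver (collection and field names follow the thread); note that in the Java driver this method takes FindOneAndUpdateOptions rather than UpdateOptions:

db.players.findOneAndUpdate(
  { uuid: playerUuid },
  { $setOnInsert: { uuid: playerUuid } },
  { upsert: true, returnNewDocument: true }  // returns the existing document, or the newly inserted one
)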
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 5.0.0-rc0, the first release candidate of MongoDB 5.0, is out and is ready for testing. This is the culmination of the past year’s worth of development and marks the first release under the new rapid release cadence. Please review the release notes for more about the exciting new features, along with upgrade procedures and instructions on how to report an issue. Here are some of the highlights with more to come as we approach MongoDB.live in July:\nAccelerating developer productivity and application performance across more workloads:\n\nEven higher database resilience and efficiency:\n\nMongoDB 5.0 Release Notes | Changelog | Downloads\n\n-- The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 5.0.0-rc0 is released
2021-05-27T16:04:15.398Z
MongoDB 5.0.0-rc0 is released
4,788
null
[ "replication" ]
[ { "code": "", "text": "I would like to know if there is a way to perform a selective data replication to another mongodb. I experimented with zone sharding, however, that does not look like an option that will work for us.My use case is 2 sets of replicasets. They are located in different parts of the world. applications should only query the local db. headquarter_replicaset contains all data. division_replicaset should only contain a subset of data from headquarter_replicaset.Zone sharding is not the right solution as data being sharded to division_replicaset are no longer available in headquarter_replicaset. mongo query from headquarters will be slow as it need to retrieve the sharded data from division_replicaset. Maybe I implemented zone sharding wrongly.That is why I want to see if anyone out there implemented a way to selectively replicate data to remote nodes based on query results or through oplog filtering.Any help is apprepriated.\nThanks, Eric", "username": "Eric_Wong" }, { "code": "", "text": "I would setup a change stream server that monitor only the databases/collections I want to be replicated from one replica set to the other.MongoDB triggers, change streams, database triggers, real time", "username": "steevej" }, { "code": "", "text": "I was going to suggest upvoting this idea on Database: Top (211 ideas) – MongoDB Feedback Engine but to my surprise I couldn’t find this suggestion there! Which is strange because I know we’ve had other users ask for it…Asya", "username": "Asya_Kamsky" } ]
Selective replication
2021-05-26T09:31:33.000Z
Selective replication
2,736
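A minimal sketch of the change-stream approach suggested above, assuming the division's subset can be identified by a field on each document (connection strings, namespace and the filter are placeholders; note that delete events carry no fullDocument, so scoping deletes needs a different strategy in a real implementation):

const { MongoClient } = require("mongodb");

async function replicateSubset() {
  const hq = await MongoClient.connect("mongodb://hq-host/?replicaSet=hq", { useUnifiedTopology: true });
  const division = await MongoClient.connect("mongodb://division-host/?replicaSet=div", { useUnifiedTopology: true });
  const source = hq.db("app").collection("orders");
  const target = division.db("app").collection("orders");

  // Forward only the changes that belong to this division
  const stream = source.watch(
    [ { $match: { "fullDocument.division": "emea" } } ],
    { fullDocument: "updateLookup" }
  );

  stream.on("change", async (change) => {
    if (change.fullDocument) {
      // insert / update / replace: upsert the full document on the division side
      await target.replaceOne({ _id: change.fullDocument._id }, change.fullDocument, { upsert: true });
    }
  });
}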
null
[ "crud" ]
[ { "code": "[{\n $set: { \n health: { \n $cond: { \n if: { $gt: [ { $add: [ \"$health\", 20 ] }, 150 ] }, \n then: 150, \n else: { $add: [ \"$health\", 20 ] } \n } \n },\n $inc: {\n bandageamount: -1,\n },\n $pull: {\n items: [\"Bandage\"]\n }, \n }\n}\n]\n", "text": "Hello everyone, for some reason i get this error when i try to run this:Could anyone help me with this?", "username": "JasoO" }, { "code": "", "text": "Your $set is not terminated correctly so your following $inc is actually part of the $set object rather than being another object. Since top level field cannot start with $ then you get the given error.", "username": "steevej" }, { "code": "", "text": "What would be the way to fix that?", "username": "JasoO" }, { "code": "", "text": "It would be to terminate $set at the right place, just before the $inc. You do that by moving the last closing brace } just before the comma that is supposed to separate the $set object from the $inc object.", "username": "steevej" }, { "code": "$inc[]$sethealth$inc$pull[{$set: { \n health: { \n $cond: { \n if: { $gt: [ { $add: [ \"$health\", 20 ] }, 150 ] }, \n then: 150, \n else: { $add: [ \"$health\", 20 ] } \n } \n },\n bandageamount: {$sum:[\"$bandageamount\", -1]},\n items: {$filter:{\n input:\"$items\",\n cond:{$ne:[\"$$this\",\"Bandage\"]}\n }}\n}} ]", "text": "The issue here is that you are mixing two different syntax types here.Update takes either regular modifiers ($inc is one of those) or aggregation syntax, if you are using pipeline syntax, which you are ([]).So your $set for field health is fine, but you are not allowed to mix in $inc operator nor $pull. What you want is to express the whole thing as an aggregation:", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoError: Invalid $set :: Caused by :: FieldPath field names may not start with "$"
2021-05-26T14:07:00.014Z
MongoError: Invalid $set :: Caused by :: FieldPath field names may not start with “$”
28,763
null
[]
[ { "code": "const mongoClient = require(\"mongodb\").MongoClient;\n\nlet client;\n\nasync function connect() {\n if (!client) {\n client = await mongoClient.connect(\"mongodb://localhost:27017/somedb\", {\"useUnifiedTopology\": true});\n }\n const db = client.db();\n\n return db;\n}\n\nconst dbCollections = [\"A\", \"b\", \"c\", \"d\", \"e\", \"f\"];\n\nasync function createCollections() {\n const db = await connect();\n\n return Promise.all(\n dbCollections\n .map(collection => db.createCollection(collection))\n );\n}\n\n\nasync function dropCollections() {\n const db = await connect();\n for (const collection of dbCollections) {\n const resposne = await db.collection(collection).drop();\n console.log(`deleted ${collection}`, resposne);\n }\n}\n\ncreateCollections().then(() => dropCollections());\n\n", "text": "We are trying to delete all the collections using latest mongodb nodejs driver v 3.6. It deletes the collections but at the same time drops the database as well. Is this expected behaviour? Here is the code.", "username": "Ashish_Modi" }, { "code": "mongo", "text": "Hello @Ashish_Modi, welcome to the MongoDB community forum!This seems to be the behavior with NodeJS driver (I got similar result with my own code). This also turned out true with mongo shell.", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks @Prasad_Saya for your reply. Can I ask which version of mongodb are you using?", "username": "Ashish_Modi" }, { "code": "", "text": "My MongoDB version was 4.2.8 and NodeJS driver’s was v3.6.", "username": "Prasad_Saya" } ]
Dropping All Collections, drops database | Nodejs Driver
2021-05-27T12:45:30.732Z
Dropping All Collections, drops database | Nodejs Driver
2,095
null
[ "queries", "node-js" ]
[ { "code": "", "text": "I’m working with javascript and mongoDB. Is there a way to search without it being case sensitive? So I’m looking for a document, which has one of it’s properties set to “Apple”, but it should also return for “apple”, “APPLE” or “aPpLe”I could just make the search query lower case from the javascript side and then match it to the values, but I didn’t have this in mind when creating the data so I would have to go over them and edit everything to lower case, so figured I’d ask for a possible easier solution", "username": "Lord_Wasabi" }, { "code": " db.createCollection(\"fruits\", { collation: { locale: 'en_US', strength: 2, caseLevel: false } } )\n\n db.fruits.insert( [ { type: \"apple\" },\n { type: \"aPpLe\" },\n { type: \"APPLE\" } ] )\n \n db.fruits.find( { type: \"apple\" } ) //this will match and return all three docs\n\n //specify at the query level\n db.fruitsQueryLevel.insert( [ { type: \"apple\" },\n { type: \"aPpLe\" },\n { type: \"APPLE\" } ] )\n \n db.fruitsQueryLevel.find({ type: \"apple\" }).collation( { locale: \"en_US\", strength: 2, caseLevel: false } ) //will return all three docs\n", "text": "Hi @Lord_WasabiCollation to the rescue. It allows language-specific rules for string comparison like lettercase and more. The documentation goes into a lot more detail.This could also be set the index level, or specified directly in the query.Here’s a snippet in action. It has been tested on 4.4Let me know if this helps.Cheers,\nMahi", "username": "mahisatya" }, { "code": ".collationawait client.db.itemdata.findOne({displayName: input}).collation( { locale: \"en_US\", strength: 2, caseLevel: false } )", "text": "Hello, thank you for your reply. Upon trying out the method where I specify it at the query level, I’m getting an error stating .collation is not a function. Do you have any idea about what did I do wrong?This is my code:", "username": "Lord_Wasabi" }, { "code": "", "text": "With the node-js driver you specify the collation in a slightly different way.See Class: Collection for more details.", "username": "steevej" }, { "code": "", "text": "This helped to make it work, thank you", "username": "Lord_Wasabi" }, { "code": "field db.coll.updateMany({}, [ {$set:{lowerCaseField:{$toLower:\"$field\"}}} ])\nfield:{$toLower:\"$field\"}", "text": "Keep in mind that any indexes created with a specific collation can only be used with queries that specify the same collation, so be sure you want this collation used for everything in this collection…If you did want to convert your existing data in a particular field to be lower case, or create a new field that contains the same string in lower case, you can do that with a single multi-update (as long as you’re on 4.2 or later):The above creates a new field but you could do the same update with field:{$toLower:\"$field\"} to change existing field from mixed case to all lower case.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Thank you for the answer, can you specify what you mean exactly in the first paragraph? What I used this system for seems to work fine, but I’m interested what I should be keeping in mind about it", "username": "Lord_Wasabi" }, { "code": "", "text": "When you create a collection specifying a particular collation, all the indexes and queries for that collection will by default use that collation. 
@mahisatya made a reference to that when he said collation can also be specified at the index level, or directly in the query.The caution is that if you create an index with one collation, queries that use a different collation cannot use that index and will fall back to a collection scan instead.Asya", "username": "Asya_Kamsky" } ]
Quick question about search queries
2021-05-26T14:27:14.300Z
Quick question about search queries
2,389
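A sketch of the node-js driver call from the thread above with the collation passed in the options object, per the Collection docs that were linked (the client.db.itemdata handle is the poster's own wrapper and is kept as-is):

const doc = await client.db.itemdata.findOne(
  { displayName: input },
  { collation: { locale: "en_US", strength: 2, caseLevel: false } }
);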
null
[ "change-streams" ]
[ { "code": "{\n \"_id\": 1111,\n \"staticName\": {\n \"dynamicName_1\": {\n \"dynamicName_2\": {\n \"staticName_2\": \"static name number 2\",\n \"staticName_3\": \"static name number 3\",\n \"staticArrayToTrackChanges\": [\n {\n \"static_do_not_track_1\":1, \n \"static_do_not_track_2\":2, \n \"static_track_this_for_changes\":111\n },\n {\n \"static_do_not_track_1\":3, \n \"static_do_not_track_2\":4, \n \"static_track_this_for_changes\":112\n }\n ] \n }\n }\n } \n}\ndynamicName_1dynamicName_2static_track_this_for_changesdynamicName_1dynamicName_2updateDescription.updatedFieldsChangeStreamDocumentstaticArrayToTrackChangesstatic_track_this_for_changes", "text": "MongoDB document structure that I need to watch for changes is something like below:Every property with keyword static inside name is property that has static (identical) name inside every document.Properties dynamicName_1 and dynamicName_2 are Dictionary keys that have dynamic values and are different for each document.I need to be able to track changes for property static_track_this_for_changes . Every document will have this property, but under different dynamicName_1 and dynamicName_2 keys.Also updateDescription.updatedFields of ChangeStreamDocument should contain only staticArrayToTrackChanges object that contains changed static_track_this_for_changes .Any help toward right solution for this is very appreciated ", "username": "Marko_Saravanja" }, { "code": "X.Y.staticName_1{\n \"_id\": 1111,\n \"staticName\": {\n \"staticStuff_1\": {\n \"type\": \"dynamicName_1\",\n \"staticStuff_2\": {\n \"someNumber\": \"dynamicName_2\",\n \"staticName_2\": \"static name number 2\",\n \"staticName_3\": \"static name number 3\",\n \"staticArrayToTrackChanges\": [\n {\n \"static_do_not_track_1\":1, \n \"static_do_not_track_2\":2, \n \"static_track_this_for_changes\":111\n },\n {\n \"static_do_not_track_1\":3, \n \"static_do_not_track_2\":4, \n \"static_track_this_for_changes\":112\n }\n ] \n }\n }\n } \n}\nstaticName.staticStuff_1.staticStuff_2.staticName_1staticName.staticStuff_1.staticStuff_2.staticName_1$or", "text": "Hi @Marko_Saravanja and welcome in the MongoDB Community !To my knowledge, it’s not possible to track only these fields in subdocuments if you don’t know the path to these. If by any chance, dynamicName_1 and 2 are actually limited to a few values, you could use $or and $exists all the different combinations of X.Y.staticName_1.But I think the easiest path here it to change the model and avoid dynamic names in fields. 
It’s usually a nightmare to deal with dynamic fields in the backend.Would it be possible to update the document to something like this for example?Then it’s trivial to use a $exists on staticName.staticStuff_1.staticStuff_2.staticName_1 and staticName.staticStuff_1.staticStuff_2.staticName_1 with $or.I’m trying to ask around if someone has a smarter / better idea but no chance so far.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "$objectToArray", "text": "I assume you know how many levels deep these static fields are?You could use $objectToArray expression to convert a document into array of key value pairs - you can now match on the “value” part - though the document you show is quite complex so you would need to apply this technique a couple of levels deep…Asya", "username": "Asya_Kamsky" }, { "code": "{\n \"_id\": 1111,\n \"staticName\": [\n {\n \"type1\": \"dynamic_type1_1\",\n \"prop1\": \"some_value\",\n \"type2\": \"dynamic_type2_2\"\n },\n {\n \"type1\": \"dynamic_type1_1\",\n \"prop1\": \"some_value\",\n \"type2\": \"dynamic_type2_3\"\n },\n {\n \"type1\": \"dynamic_type1_2\",\n \"prop1\": \"some_value\",\n \"type2\": \"dynamic_type2_4\"\n }\n ]\n}\ntype1 = some valuetype1 = some valuetype2matchtype1 = dynamic_type1_11111staticNametype1 = dynamic_type1_1staticNametype1 = dynamic_type1_2", "text": "Hi Maxime, thank you for your welcome.\nWe have agreed to remove dynamic names from mongo schema, and use static ones.\nAlso, we have removed one level of nesting, and merged two dynamics into one object (we now have more objects in array, but that is fine), so now object is something likeIssue that I am having now is as following:I do not know is this the right place to continue our discussion, so feel free to point that out if necessary.Cheers,\nMarko", "username": "Marko_Saravanja" }, { "code": "", "text": "Hi Asya,\nappreciated very much your time invested in providing solution, but it did struck me the issues that we would have while fighting with dynamic names on backend side, so we decided to remove dynamics completely.Cheers,\nMarko", "username": "Marko_Saravanja" }, { "code": "type1 = dynamic_type1_11111staticNametype1 = dynamic_type1_1staticNametype1 = dynamic_type1_2$filter", "text": "search condition is type1 = dynamic_type1_1 . Document with id 1111 matches filter. From that document, inside array staticName , i should project results to include only those objects where type1 = dynamic_type1_1 . 
In document above, that means I should project only first two objects from array staticName and exclude third one where type1 = dynamic_type1_2 .You can do this using $filter expression.", "username": "Asya_Kamsky" }, { "code": "$filterdb.coll.drop();\ndb.coll.insertOne({\n \"_id\": 1111,\n \"staticName\": [\n {\n \"type1\": \"dynamic_type1_1\",\n \"prop1\": \"some_value\",\n \"type2\": \"dynamic_type2_2\"\n },\n {\n \"type1\": \"dynamic_type1_1\",\n \"prop1\": \"some_value\",\n \"type2\": \"dynamic_type2_3\"\n },\n {\n \"type1\": \"dynamic_type1_2\",\n \"prop1\": \"some_value\",\n \"type2\": \"dynamic_type2_4\"\n }\n ]\n});\n\ndb.coll.createIndex({'staticName.type1': 1});\n\ndb.coll.aggregate([\n {\n '$match': {\n 'staticName.type1': 'dynamic_type1_1'\n }\n }, {\n '$project': {\n 'staticName': {\n '$filter': {\n 'input': '$staticName', \n 'as': 'value', \n 'cond': {\n '$eq': [\n '$$value.type1', 'dynamic_type1_1'\n ]\n }\n }\n }\n }\n }\n]);\n[\n {\n _id: 1111,\n staticName: [\n {\n type1: 'dynamic_type1_1',\n prop1: 'some_value',\n type2: 'dynamic_type2_2'\n },\n {\n type1: 'dynamic_type1_1',\n prop1: 'some_value',\n type2: 'dynamic_type2_3'\n }\n ]\n }\n]\n", "text": "Excellent news that it was possible to change the previous document model.\nI wrote a little script to show you the result using $filter @Marko_Saravanja.I took the liberty to add a little index in the mix that you need ;-). I bet you will probably need the same on type2 .Result of this little aggregation:Another naive approach would be to use $match + $unwind + $match + $group on _id with $push to recreate the array but it’s just a lot more processing…Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Another naive approach would be to use $match + $unwind + $match + $group on _id with $push to recreate the array but it’s just a lot more processing…I think by “naive” you meant to say “really bad”, “inefficient” and “definitely unadvised” ", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB ChangeStream watch for multiple nested property changes
2021-05-21T10:52:21.945Z
MongoDB ChangeStream watch for multiple nested property changes
4,429
null
[ "aggregation", "java", "performance" ]
[ { "code": "", "text": "Through a JAVA app , I have an aggregate pipeline which runs in , say 15 seconds , showing so in op_msg in mongodb log.\nWhen I copy paste and run it on mongodb SHELL , it returns instantly .The returned data is not much , and i do use toArray() on mongodb shell , just to make sure that the data is read as well .Does the op_msg contain the amount of time it takes to send data over network ?BOTH the JAVA app and the mongo shell are running on the same machine", "username": "Homer_Kommrad" }, { "code": "", "text": "Hi @Homer_Kommrad and welcome in the MongoDB Community !There is obviously something wrong. The execution time should be the same on both systems.\nWould it be possible to share the Java code and the code you are using in the Mongo Shell to reproduce the same query?\nWould it be possible to also share a few fake documents so we have a better idea of what is happening and also what indexes exist on this collection?You could also try to use explain in Java and the Mongo Shell to see the differences and what is taking time.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Slow query from mongodb log runs so fast on shell, slow when called from JAVA
2021-05-27T09:43:15.459Z
Slow query from mongodb log runs so fast on shell, slow when called from JAVA
2,525
null
[]
[ { "code": "", "text": "Hi,\nI am using Mongo ver 4.2 and want to rename the database, Can someone assist me in how can we rename the Mongo DB as we do it in MySQL.", "username": "pushkar_sawant" }, { "code": "use mydb;\ndb.getCollectionNames().forEach(function(coll) { db.getSiblingDB(\"admin\").runCommand({ renameCollection: \"mydb.\" + coll, to: \"test.\" + coll , writeConcern : { \"w\" : \"majority\" }}) } );\n", "text": "Hi @pushkar_sawant,Welcome to MongoDB Community.For non-Atlas clusters you can use the renameCollection command to rename all namespaces one by one between 2 database names, for example to rename all collections from database “mydb” to “test”:Alternatively, you can use dump and restore for all collections from one database to another:Best regards,\nPavel", "username": "Pavel_Duchovny" } ]
How to rename mongodb directly like we do in MySQL Database?
2021-05-26T04:50:49.557Z
How to rename mongodb directly like we do in MySQL Database?
5,200
null
[ "compass", "connecting", "mongodb-shell", "database-tools" ]
[ { "code": "", "text": "I have two clusters (one development, one production). The dev cluster is M30, v4.4.6; the prod cluster is M40, v4.4.6. They have the same network access rules.I can connect to the dev cluster using Compass, and the MongoSH Beta terminal in Compass connects as well. The command prompt is “Enterprise atlas-xxxx-shard-0 [primary] >”.I can also connect to the prod cluster using Compass, but the MongoSH Beta terminal in Compass does not connect. The command prompt is “>” and no command works; all give the following error message: “MongoServerSelectionError: Server selection timed out after 30000 ms”. I do not understand why Compass itself can connect to the database, but the mongo shell in Compass cannot.", "username": "Jeffrey_Pinyan" }, { "code": "", "text": "Hey @Jeffrey_Pinyan! This definitely sounds like a bug. We would like to get some additional info to help us investigate. Can you please share connection information for both clusters (excluding all personal information of course) and also can you please tell us if this happens consistently with those two clusters for you?", "username": "Sergey_Petushkov" }, { "code": "", "text": "Connection information being the full hostname and port?This is happening consistently: we have never noticed any problems getting mongo shell in Compass to work with the dev cluster, and the problem on the prod cluster has been constant since at least May 7.", "username": "Jeffrey_Pinyan" }, { "code": "", "text": "We are mostly interesting in connection options, the query params that are part of connection string, it would also help to know if you are using an srv record to connect or directly connecting to one of the nodes. What version of Compass are you on? There were no GA Compass releases around 7th of May, the last one was in April, so if you haven’t updated Compass around this date, maybe something changed in your prod cluster configuration?", "username": "Sergey_Petushkov" }, { "code": "mongodb+srv://user:[email protected]/test?authSource=admin&replicaSet=atlas-XXX-shard-0&w=majority&readPreference=primary&appname=MongoDB%20Compass&retryWrites=true&ssl=truemongodb+srv://user:[email protected]/test?authSource=admin&replicaSet=XXX-XXX-shard-0&w=majority&readPreference=primary&appname=MongoDB%20Compass&retryWrites=true&ssl=true", "text": "This is Compass 1.26.1 (the latest stable release, I think?).Connecting to the dev cluster via: mongodb+srv://user:[email protected]/test?authSource=admin&replicaSet=atlas-XXX-shard-0&w=majority&readPreference=primary&appname=MongoDB%20Compass&retryWrites=true&ssl=trueConnecting to the prod cluster via: mongodb+srv://user:[email protected]/test?authSource=admin&replicaSet=XXX-XXX-shard-0&w=majority&readPreference=primary&appname=MongoDB%20Compass&retryWrites=true&ssl=trueI note that the replicaSet name is differently formatted for dev than for prod.", "username": "Jeffrey_Pinyan" }, { "code": "replicaSetmongosh", "text": "Yeah, you’re right 1.26.1 is the latest stable one and as I mentioned was released in April. Hmm, nothing jumps out as suspicious when looking at the connection info, it’s the same except this replicaSet difference that you spotted.So just for some context, from 1.26.1 mongosh beta is using Node.js driver 4 (which is also a beta at the moment) internally and it might be the root cause for the issue, although it’s still weird that this started happening for you later than the actual Compass update. 
If it’s not too much work can you try connecting to your production cluster (the one that times out in Compass) with mongosh CLI and tell us if it works?", "username": "Sergey_Petushkov" } ]
Compass connects to database, but MongoSH Beta window does not
2021-05-21T15:56:00.318Z
Compass connects to database, but MongoSH Beta window does not
4,415
null
[ "database-tools" ]
[ { "code": "", "text": "Hello,\nIn MongoDB 4.2.14, mongodump is not able to decrypt the private key of a PEM file despite the fact that --sslPEMKeyPassword is provided with the correct password.It displays below error:\n2021-05-25T07:25:50.601+0000 Failed: can’t create session: error configuring the connector: error configuring client, can’t load client certificate: tls: failed to parse private keyThis seems same issue as reported in Jira issue TOOLS-2755\nhttps://jira.mongodb.org/browse/TOOLS-2755?jql=project%20%3D%20TOOLS%20AND%20component%20%3D%20mongodump%20AND%20text%20~%20\"\\\"failed%20to%20parse%20private%20key\\\"\"So I updated this Jira issue, providing certificates and mongod.conf permitting to easily reproduce it.Does anybody faced same issue and found a correct workaround ?Regards", "username": "Philippe_Bailleux" }, { "code": "", "text": "Jira issue TOOLS-2755 was closed, so I created TOOLS-2878.Also I found a workaround:\nI have rebuilt the Docker image of MongoDB 4.2.14 with the “MongoDB Database Tools” mongodb-database-tools-ubuntu2004-x86_64-100.3.1.deb downloaded from Download MongoDB Command Line Database Tools | MongoDB.\nI confirm that mongodump 100.3.1 doesn’t have the issue.", "username": "Philippe_Bailleux" } ]
Mongodump "failed to parse private key"
2021-05-25T08:12:16.070Z
Mongodump “failed to parse private key”
2,895
null
[ "atlas-functions" ]
[ { "code": "exports = async function handler(changeEvent) {\n try {\n const serviceAccount = JSON.parse(context.values.get(\"serviceAccountSecret\"));\n const { PubSub } = require(\"@google-cloud/pubsub\");\n const pubsub = new PubSub({ projectId: serviceAccount.project_id, credentials: serviceAccount });\n const messageData = JSON.stringify(changeEvent);\n const dataBuffer = Buffer.from(messageData);\n console.log(\"Before pubsub\")\n await pubsub.topic(\"slides-change-stream-events\").publish(dataBuffer);\n console.log(\"After pubsub\")\n } catch (e) {\n console.log(e);\n }\n}\n > ran on Mon May 24 2021 09:01:24 GMT-0700 (Pacific Daylight Time)> took 1m30.011253723s> error:execution time limit exceeded> logs:Before pubsub", "text": "My function executes in 250 milliseconds locally but times out at 92 seconds in the Cloud.Here’s my code:I’ve tested it with all sorts of logging. It will log “Before pubsub” but then hang on the await call to publish a message. Works great locally. No error is thrown in the logs so I am unable to debug it.Error Output:\n > ran on Mon May 24 2021 09:01:24 GMT-0700 (Pacific Daylight Time)\n> took 1m30.011253723s\n> error:\nexecution time limit exceeded\n> logs:\nBefore pubsub", "username": "Govind_Rai" }, { "code": "google-cloud/pubsub", "text": "Hey,Just a random idea, but maybe you had to put your local IP address on a an IP access list so the Google Cloud service knows it needs to expect messages from this particular IP address and did add Realm’s IP address into the same list?Also, I guess you were able to import google-cloud/pubsub into the Realm dependencies?Not sure this will help but… At least I tried !Cheers,\nMax.", "username": "MaBeuLux88" }, { "code": "@google-cloud/pubsub", "text": "@MaBeuLux88 Thanks for your reply, and I appreciate your trying! . The IP address is not an issue and I was able to import the dependencies. I’m also working with Mongo Support.Here is the current update from support:I also found some errors on our server side indicating an issue with finding modules which could explain why it’s taking so long.After doing some further research, it appears that the @google-cloud/pubsub package does not appear to be supported by Realm along with some other GCP packages such as: The rep will be trying to see if there is a way to alleviate the issue.Currently it seems the MongoDB Functions offering is just way too primitive. For example, I need to write the changes to both pubsub and sqs but installing both npm packages takes the compressed limit of node_modules over 10mb. So I’m limited to very small amount of node_modules which is not realistic.Thanks,\nGovind", "username": "Govind_Rai" }, { "code": "", "text": "I also assumed it was a dependency issue.You can read more about it here:https://docs.mongodb.com/realm/functions/upload-external-dependencies/\nimage1469×240 22.7 KB\nThis is still a beta feature for this reason. It’s clearly not finished and not working as it should at the moment for many libs… I also suffer for the same problem on a few of my projects . Hopefully this will be resolved soon!Let me know if your rep or support contact has a better alternative but I have big doubts for now…In one of my project I needed slackify-markdown so the only “smart” way I found to get away with this was this:Mini Node.js application to which I can send via REST some HTML content and retrieve Slack MD content. 
- GitHub - MaBeuLux88/slackify-html-service: Mini Node.js application to which I can send via ...I created a little REST service that I hosted in a corner of an S3 server that I have that answers to my Realm post queries and answers the result I wanted to execute in my Realm function in the first place…Silly but… Once the dependencies are fixed and not beta anymore, it will be easy to fix my project and replace the call by the actually module function.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi Govind,As per our discussion in the support ticket, the aforementioned dependencies are currently not supported but we are working to provide this in the near future.I am keeping track of this work and will circle back around to this thread and update if there is a change to these packages being supported.Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error: Execution Time Limit Exceeded using google-cloud/pubsub
2021-05-24T16:02:57.123Z
Error: Execution Time Limit Exceeded using google-cloud/pubsub
4,439
https://www.mongodb.com/…e_2_1024x477.png
[]
[ { "code": "", "text": "image1579×737 44.4 KBMy cluster couldn’t be connected and fetching data.\nThe status “We are deploying your changes (current action: configuring MongoDB)” keep showing for 24 hours?What can I do now \nplease help", "username": "An_S_n_Nguy_n" }, { "code": "", "text": "Hi @An_S_n_Nguy_n and welcome in the MongoDB Community !Can I ask you to go to:This will get you in touch with our live Atlas Support team who will be able to troubleshoot this issue further for you.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Why my cluster configure for too long (over 24 hours)
2021-05-26T17:29:22.956Z
Why my cluster configure for too long (over 24 hours)
4,251
null
[ "node-js" ]
[ { "code": "", "text": "I’m interested in using the Kerberos library to handle enterprise authentication for a Node.js app I’m writing.Based on the README, there is a requirement for a C++ toolchain to build binaries for the computer’s platform/architecture.However, I see in the Github releases page that there are a number of prebuilt bindings for various Node versions and architectures: Release v1.1.5 · mongodb-js/kerberos · GitHubIs there a way to use these binaries, so that a C++ toolchain isn’t required? Similar to how node-sass or other C++ modules provide bindings.", "username": "Matt_Steele" }, { "code": "npm install", "text": "Hi Matt, the pre-built binaries will be used by default when you install the library via npm install; it only falls back to compilation via node-gyp if no binaries are available for your system.", "username": "Eric_Adum" }, { "code": "", "text": "I just tried this out on my personal device and it’s working great, thank you!For use on our corporate intranet, we have a need to mirror these binaries inside our firewall. Other projects with native dependencies provide environment variables/npm configuration parameters, such as node-sass: GitHub - sass/node-sass: Node.js bindings to libsassIs this achievable with the kerberos module? If not, would you be open to adding it (or accepting a PR with this functionality)?", "username": "Matt_Steele" }, { "code": ".npmrcnpm installkerberos_binary_host=http://overriden-host.com/overriden-path\n", "text": "Glad to help! Yes this is possible via prebuild-install, which the kerberos module uses to install prebuilds. You need to add a line like this to .npmrc before running npm install:See the prebuild-install docs for the full details: GitHub - prebuild/prebuild-install: Minimal install client for prebuilds.", "username": "Eric_Adum" }, { "code": "", "text": "Thanks for the docs @Eric_Adum! I wasn’t aware of this package.While reading the prebuild-install docs, I see they now recommend to use prebuildify + node-gyp-build, which bundles binaries into the NPM package itself: GitHub - prebuild/prebuild-install: Minimal install client for prebuilds.If you think this is a feasible approach, I’d be happy to help build this out and submit a PR; I’d prefer to do this work than stand up and maintain a mirror of binaries inside my firewall ", "username": "Matt_Steele" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Kerberos NPM module - how do prebuilt binaries work?
2021-05-24T21:51:24.759Z
Kerberos NPM module - how do prebuilt binaries work?
3,554
null
[]
[ { "code": "", "text": "Hi everyone, my name is Tom and I am from San Diego, California. I am learning all I can about JavaScript and Mongo DB. I’m actually in a JavaScript Bootcamp at the moment and this week we are studying quite a few different database types incliding Mongo (obviously), Redis, SQL, mySQL, and Postgre. I hope to learn all I can from you all about MongoDB. Thanks in advance!", "username": "Tom_Phillips" }, { "code": "", "text": "Hi Tom, welcome to the community Glad to have you here.Looking forward to sharing the knowledge and learning in the process.Regards,\nMahi", "username": "mahisatya" }, { "code": "", "text": "Thank you Mahi! ", "username": "Tom_Phillips" } ]
How is everybody doing?
2021-05-26T19:40:58.018Z
How is everybody doing?
3,075
null
[]
[ { "code": "", "text": "Looking for the URL for the checksum files for mongodb community. Is there a root folder or anything we can browse?", "username": "Cameron_Boyer" }, { "code": "", "text": "Hi @Cameron_Boyer, welcome to the community.I am not sure if there is place to browse the checksum files. Maybe someone will chime in here with that information if it exists.However, there is this to help you verify the integrity of the package - https://docs.mongodb.com/manual/tutorial/verify-mongodb-packages/#use-sha-256Hopefully, this helps. Let me know.Thanks,\nMahi", "username": "mahisatya" }, { "code": "", "text": "Links and checksums:Try MongoDB Atlas products free. Developers can choose to use in the cloud or download locally. Either way, our software makes it easy to work with data.Try MongoDB Atlas products free. Developers can choose to use in the cloud or download locally. Either way, our software makes it easy to work with data.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Checksum for linux direct download
2021-05-26T17:42:10.501Z
Checksum for linux direct download
2,486
null
[]
[ { "code": "", "text": "I am trying to configure atlas search to search from news collection which is going to consists of news in multiple languages. I did not found any way to use multiple language analyzer, is there any way through which I can accomplish this thing and i want to use the news heading field for autocompletion, can i use autocompletion without using autocomplete in mapping?", "username": "Siddharth_49956" }, { "code": "", "text": "@Siddharth_49956 Check out the multi analyzer option in Atlas Search. It will allow you to do exactly what you want.", "username": "Marcus" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can we use multiple lucene language analyzer in atlas search?
2021-05-26T10:59:46.378Z
Can we use multiple lucene language analyzer in atlas search?
2,234
null
[ "node-js" ]
[ { "code": "", "text": "Hey, I’m working on setting up a discord.js database (for a bot), and I’m getting this warning. b51538378d | SourceBinExample code for one of my commands is ddd7c0aacb | SourceBin (I know there are some now unused variables. Just didn’t want to delete until change is complete)I used to just store it all in a local file, but was informed that it’d be better to move it to a database. Nobody in the servers could figure out what was wrong.Before, the bot worked and just said the warning. Now, it doesn’t work. I contacted discord.js support servers, and they said to contact you as it was an error on this end. Thank you!", "username": "Katz_Kingdom" }, { "code": "", "text": "did you find the fix of this problem @Katz_Kingdom", "username": "Kaustubh_Rai" } ]
UnhandledPromiseRejectionWarning: MongoError: Topology closed
2020-06-29T19:30:18.902Z
UnhandledPromiseRejectionWarning: MongoError: Topology closed
5,957
null
[]
[ { "code": "", "text": "In my old legacy realm cloud application, I was using a method where I would have a local realm when a user first starting using the app.Eventually, if they decide to start paying for sync, I would migrate that realm to a synced realm.I’m working on migrating my app to MongoDB Realm from Realm Cloud currently (this is a react-native app).My thought is that instead of doing this difficult client side migration when a user toggles sync on/off, that I could just start off with an anonymous user so that there is only ever one schema / one type of Realm.Then, if they pay for syncing later, I can create a JWT login to gate access to authenticate the user and then use that on new devices to link to the same data set.@Kenneth_Geisshirt mentioned that it is possible to link credentials in v10+ at the end of this github issue: Convert Realm Cloud Anonymous User into an authenticated User · Issue #2211 · realm/realm-js · GitHubI’m just wondering if there is any example of how to do this? Does creating a login automatically use the previous anonymous user’s realm user? Are there any drawbacks? I also read somewhere that the anonymous user isn’t permanent and data may be lost, which is obviously not usable in production.Would love to hear how anyone else is solving this issue.Thanks.–Kurt", "username": "Kurt_Libby1" }, { "code": "", "text": "Hi @Kurt_Libby1, I’ve not tried it myself but the React Native SDK does include a feature to link identities: https://docs.mongodb.com/realm/sdk/react-native/advanced/link-identities/", "username": "Andrew_Morgan" }, { "code": "", "text": "Thank you!That was exactly what I was looking for. Working on it now.Wondering this still though:Are there any drawbacks? I also read somewhere that the anonymous user isn’t permanent and data may be lost, which is obviously not usable in production.Is there any case that would cause the anonymous user to lose their data prior to linking to an auth service?–Kurt", "username": "Kurt_Libby1" } ]
Starting out with Anonymous auth before Authenticating
2021-05-25T20:09:33.175Z
Starting out with Anonymous auth before Authenticating
1,975
null
[ "data-modeling" ]
[ { "code": "post_id: \"post_1\"\ncomments: [comment_1, comment_2, comment_3, ..., comment_10000]\ncontent: \"Here is a row of post collection\"\ncomment_id: \"comment_1\",\ncontent: \"Here is a row of comment collection\",\npost.find()\ncomment_id: \"comment_1\",\npost_id: \"post_1\"\ncomment.find( {post_id} )\n", "text": "My situation: I have a post collection where each post have many comments and comments collections.andNow, I can easy retrieve a post with comments by query:then lookup or populate (using mongoose). I call this as SCHEMA_1, follow one-to-many pattern.After, I think about one-to-sqiullions patterns, then build other SCHEMA_2, that using reverse reference in comment collection, like:Now, I aslo can easy retrieve comments of a post by query:Just one query with post_id had index.OVERALL query for read:MY QUESTION IS: Which schema is better performance for read query ? I don’t actually deeply understand how lookup work behind with array of references. In SCHEMA_1, if for each reference of comment, lookup need travel comment collection one (like correlate subquerry in SQL), it’s terrible if array of objectID is large.\nAnd if I follow one-to-sqiullition patterns, I have a thought that performance is better because it just run one query, no need to JOIN.How can I understand this problem ?", "username": "Dung_Ha_Quy" }, { "code": "db.post.find( { postId: \"My Data Model\" } )postIdcommentIdcommentId$lookuppostIdpostId$lookup", "text": "Hello @Dung_Ha_Quy, welcome to the MongoDB Community forum!The questions you had posted are valid, but, the answers will be clear (I think) if you know the amount of data and the important queries in your application - that is the application’s functionality in little more detail. These aspects can help determine how to organize or model the data.For example, in your application a post can have about 10 to 20 comments, with each comment has a few (about 10 lines) of text. This will help think about storing all the comment data within each post itself. Then the design and the queries like will be simple as all the related data is stored together.db.post.find( { postId: \"My Data Model\" } )The query will fetch post related data like title, content, date, etc., and also all the comments associated with the post. You can use projection to control the fields required in a particular page of the application. You can also, query for specific comment. Both the postId and commentId fields can be indexed for fast access.But, if the number of comments is very large, and ever growing (only some extraordinary posts have many comments - an unlikely scenario), then store commentId values within the post as an array field. The comments are stored within a separate collection.Then, you can always query a post for its information with a simple query. To find a specific comment information you will need to do a “join” operation - the $lookup aggregation query to access details of comments. Note that even the lookup operations can use indexes. Ideally, you use lookup queries where the performance may not be very important.An option, would be to store the postId value (and even the post title) in the comments collection. 
This will help query the comments collection using the postId (or post title), without a $lookup of the post collection.Note that, for each option you will also have to take into consideration the other operations on the collection like - inserts, updates and deletes.Some relevant questions that can help in modeling the data:", "username": "Prasad_Saya" }, { "code": "", "text": "@Prasad_Saya Thank you so much for your answer !\nActually, i have a little bit unclearly about performance, exactly PERFORMANCE of READ query if array of comment objectID is large, maybe UNBOUNED. And I just don’t know the comparision performance of two scenarior - to read amounts of comment of a post:Using lookup if store array comment objectID in post ( I’m worry about performance here because I saw from document: Each objectID need to travel to comment collection one, so array of 10000 objectID will need to travel comment collection 10000 times, or maybe I misunderstand this ??? )Just store postID in each comment, then using only ONE query. Is this better performance than lookup above ?.And of course, there are other CRUD operator, but I just focus on READ operator.\nCan you explain to me more about lookup operation, thank u !", "username": "Dung_Ha_Quy" }, { "code": "$lookupexplain", "text": "Actually, i have a little bit unclearly about performance, exactly PERFORMANCE of READ query if array of comment objectID is large, maybe UNBOUNED.First of all, you should not have an unbounded array field in your document - this will be a bad design and will lead to problems later on. There can be an array field with large number of bounded elements. You can define index on an array field - these are called as Multikey Indexes.I suggest, you create some samples of data in your collections and try some queries - including the $lookup queries. Use the explain with “executionStats” mode and study the query plans.", "username": "Prasad_Saya" } ]
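The two read paths described above look roughly like this in the shell; collection and field names are illustrative only:

```javascript
// Option A: comment ids stored in an array on the post, joined on demand.
db.posts.aggregate([
  { $match: { _id: "post_1" } },
  { $lookup: {
      from: "comments",
      localField: "commentIds",
      foreignField: "_id",
      as: "comments"
  } }
]);

// Option B: parent reference stored on each comment; one indexed query, no join.
db.comments.createIndex({ postId: 1 });
db.comments.find({ postId: "post_1" });
```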
Build a schema for post and comments: Array of comments objectID OR post objectID in each comment
2021-05-26T07:08:04.653Z
Build a schema for post and comments: Array of comments objectID OR post objectID in each comment
17,542
null
[ "security" ]
[ { "code": "net:\n port: 27017\n bindIp: 127.0.0.1,10.0.0.4\n\n ssl:\n mode: requireSSL\n CAFile: /etc/ssl/ca.pem\n PEMKeyFile: /etc/ssl/mongodb.pem\n clusterCAFile: /etc/ssl/ca.pem\n clusterFile: /etc/ssl/mongodb.pem\n\t allowConnectionsWithoutCertificates: true\n\n security:\n clusterAuthMode: x509\ndb.serverStatus().security\n{\n \"SSLServerSubjectName\" : \"CN=hostname\",\n \"SSLServerHasCertificateAuthority\" : true,\n \"SSLServerCertificateExpirationDate\" : ISODate(\"2021-01-24T10:37:32Z\")\n}\n", "text": "Hello,I’m trying to setup x509 Cluster Authentication using Let’s encrypt certificate.\nBut getting the following error:2020-10-29T09:29:40.764+0000 I REPL_HB [replexec-1] Error in heartbeat (requestId: 555) to “hostname”:27017, response status: Unauthorized: not authorized on admin to execute command { replSetHeartbeat: “repl-01”, configVersion: 1, hbv: 1, from: “hostname:27017”, fromId: 0, term: 27, $replData: 1, $clusterTime: { clusterTime: Timestamp(1603880275, 1), signature: { hash: BinData(0, 6775444445555FB8F9AEC2FE5566A791EAD5C1824), keyId: 6886403334444455553 } }, $db: “admin” }This certificate is working fine for the client authentication, but not for internal membership authentication.Configuration:Output from MongDB admin db:I know, that clusterCAFile and clusterFile are overhead parameters in case if already used CAFile, PEMKeyFile, but just in case tried put them also.The internal authentication works using keyFile.Also it works with cert signed by self-signed CA, but not with Let’s Encrypt certificate.\nIs it something like limitation for using Let’s Encrypt certificate for internal authentication?Version of MongoDB 3.6Much appreciate for any help.", "username": "Alexey_Roshka" }, { "code": "", "text": "It’s hard to debug this remotely. Maybe someone else will still chime in and provide more useful help but if not you may find it useful to check out the M310 course on MongoDB university. It takes you through all the steps to set x509 and keyfile authentication up correctly in the first Chapter.", "username": "Naomi_Pentrel" }, { "code": "", "text": "From this articlethe requirements for the certificate for internal authentication are the following:The Distinguished Name (DN), found in the member certificate’s subject, must specify a non-empty value for at least one of the following attributes: Organization (O), the Organizational Unit (OU) or the Domain Component (DC).But in Let’s Encrypt certificate the Subject only contains domain name like this: CN = mongo-cl-01.example.com\nI’m right in my assumption, that the issue can be, that there is no OU, O or DC in the Subject of Let’s Encrypt certificate ?", "username": "Alexey_Roshka" }, { "code": "", "text": "I’m facing the same problem as you. Did you find a way to solve it?", "username": "Nam_Le" } ]
Let's Encrypt certificate for internal membership authentication
2020-10-29T12:10:47.805Z
Let’s Encrypt certificate for internal membership authentication
2,872
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Hello guys, i am facing troubles in reseting user password. I am receiving thee the reset link from realm but can not successfully reset the password. I also tried making a function but could not achieve this goal.", "username": "Panashe_Makomo" }, { "code": "", "text": "Hi @Panashe_Makomo, could you share the code that’s trying to reset the password?What error are you seeing?", "username": "Andrew_Morgan" } ]
Reset Password Using URL Link
2021-05-17T14:51:41.190Z
Reset Password Using URL Link
2,037
null
[]
[ { "code": "", "text": "Hi!I have a data stream setup in AWS Firehose, which is connected to my Realm app’s webhook (in 3rd Party Services). It is working great so far.Now I have setup a second Firehose stream. The data from that one can be processed through the same webhook I mentioned above. However, I’m wondering if there’s any risk for clashes in case Firehose I and Firehose II request the same webhook simultaneously. How would that be handled? Do the requests queue up and then run synchronically in that case?I couldn’t find anything on the subject, so I’ll test asking here. I can of course just set up another webhook that runs in parallell, it just felt so potentially redundant, seeing how the code would look identical on the two wehooks.Thank you in advance!", "username": "Patrick_Mamba" }, { "code": "", "text": "Hi @Patrick_Mamba, I’ve not seen anything to suggest that there would be a problem with multiple sources accessing the same webhook. In fact, that’s what you’d expect to happen. e.g. if you were calling the webhook from a web app then you could expect thousands of browsers to be calling the same webhook.Of course, there’s no conflict detection/resolution between different calls to the webhook and so you have to accept “last write wins”.", "username": "Andrew_Morgan" } ]
Simultaneous requests to the same Realm webhook
2021-05-26T08:31:11.544Z
Simultaneous requests to the same Realm webhook
1,381
null
[ "data-modeling", "performance" ]
[ { "code": "", "text": "We want to move collection from one database to another on the same server. We have about 30 million documents in each collections. We tried doing it using the bulk insert in C# but its taking very long time. Is there any better way to clone the collections.Thank you,\nSJ", "username": "Jason_Widener1" }, { "code": "", "text": "Hi @Jason_Widener1,Have you looked at mongodump and mongorestore to accomplish this task? The indexes have to be rebuilt if you go down this path, after restoring.And, regarding the C# program, are you running it on the mongod host? If not, I would give that a try if possible, and compare the run times.Regards,\nMahi", "username": "mahisatya" } ]
Moving to new DB - Clone Collections
2021-05-25T21:01:01.502Z
Moving to new DB - Clone Collections
1,698
null
[ "queries", "graphql" ]
[ { "code": "_id__baas_transaction: ObjectId(\"[someId]\")email_id__baas_transaction", "text": "I am trying to debug errors I am experiencing when trying to update some documents in one of my collections. After looking closer, I noticed that documents got updated with an additional field:_id__baas_transaction: ObjectId(\"[someId]\")Some context:\nThe given document is created, then updated via GraphQL query (Realm) and then (by mistake) updated again with the same data. During the second update I get an error:Error: reason=“role \\“server\\” in \\“main-db.Users\\” does not have update permission for document with _id: ObjectID(\\”[differentId]\\\"): could not validate document: \\n\\taccount: email is required\"; code=“SchemaValidationFailedWrite”; untrusted=“update not permitted”; details=map[]\"I investigated the issue, and it seems that the document cannot be updated because of the missing email field. That is to be expected. What I do not know is what is _id__baas_transaction and what document is it pointing to.", "username": "_alex" }, { "code": "", "text": "Hi Alex – There doesn’t appear to be any corruption taking place here, this is simply an internal field that Realm uses to understand whether a document changes as Rules are evaluated / Writes are made. It is not cleaned up on operation completion to minimize load on the database. It should not prevent any operations from completing or impact schema validation for your data.", "username": "Drew_DiPalma" }, { "code": "", "text": "Thank You, @Drew_DiPalma. This explains a lot.Do you know if:", "username": "_alex" }, { "code": "", "text": "Hi Alex – I don’t believe there’s a use for this field, it’s really only for internally tracking. It is not cleaned up automatically, but may change as further write are made.", "username": "Drew_DiPalma" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Document corrupted with _id__baas_transaction: ObjectId("...") field?
2021-05-24T15:24:22.534Z
Document corrupted with _id__baas_transaction: ObjectId(“…”) field?
4,537
null
[ "data-modeling" ]
[ { "code": "", "text": "Hello everyone!Sorry if my question is silly, as I have just started to learn MongoDB, but I can not figure out how to solve one problem, using MongoDB. Therefore ask your advices here.The main problem - how to work with timezones in MongoDB? In case if we just need to store time - here is no problem and everything is clear, but so if we don’t have time but only timezone and need to know current time, applying particular timezone ?As an example, what I mean by that here is small example which I would like to achieve.Imagine we have users in collection, every user has its own timezone stored in field timezone.\nWe have scheduled task to send notifications to user, but, time for send it has to be strictly, let’s say, from 9:00 am to 6:00 pm. And of course, in particular moment of time every user will have different time.\nThe question is: how to select those users which are in period of 9:00 am - 6:00 pm by their local time?–\nIn this article: https://docs.mongodb.com/manual/tutorial/model-time-dataIts advised instead of timezone store offset, in this case it’s easy to reconstruct time, but this time will not always be correct. Therefore in my opinion it’s harmful advise.Imagine that we created record at first of January with Germany timezone (UTC +1) so offset will be 3600. But once German will switch to summer time (29 March 2:00 am) it has to be UTC +2. So we will convert time wrong after that period. That’s why i don’t really like solution with offset.Here is $where operator which could help, but unfortunately there is no way to work with timezones in JavaScript.Any advise or thoughts in that?Thank you in advance,\nBest regards,\nAlex", "username": "Alex_Kapustin" }, { "code": "Europe/Berlin", "text": "I’m confused - why do you think $where would help?As far as storing time zone, don’t store offset from UTC, store the actual time zone like Europe/Berlin", "username": "Asya_Kamsky" }, { "code": "", "text": "Hello Asya!\nThank you for your time replying me!Yes, that’s what i want to store. Store only timezone name, such you mentioned, Europe/Berlin, because storing offset will require update all the users in a future once DST turns on or off occurred.\nBut how can i convert current time applying timezone from document? (Is there any tool in MongoDB to work with timezones?) And then, how to compare it to allowed time range ?\nI can see only one way doing some calculations in MongoDB query - $where, that’s why I was thinking it might help.", "username": "Alex_Kapustin" }, { "code": "", "text": "Handling and displaying time and date in the user timezone is a presentation layer issue not a data layer issue. For this reason it depends on which driver you use. 
For example, with the node.js driver, you could use Date.prototype.toLocaleTimeString() - JavaScript | MDN to do it.", "username": "steevej" }, { "code": "$dateToStringtimezone$dateToString$dateToPartsdb.tz.find()\n{ \"_id\" : \"Asya\", \"tz\" : \"America/New_York\" }\n{ \"_id\" : \"Stennie\", \"tz\" : \"Australia/Sydney\" }\n{ \"_id\" : \"Alex\", \"tz\" : \"Australia/Sydney\" }\n{ \"_id\" : \"Joe\", \"tz\" : \"Europe/Dublin\" }\nFetched 4 record(s) in 2ms\n> db.tz.aggregate({$addFields:{now:{$dateToString:{date:new Date(), timezone:\"$tz\"}}}})\n{ \"_id\" : \"Asya\", \"tz\" : \"America/New_York\", \"now\" : \"2020-02-16T11:14:02.734Z\" }\n{ \"_id\" : \"Stennie\", \"tz\" : \"Australia/Sydney\", \"now\" : \"2020-02-17T03:14:02.734Z\" }\n{ \"_id\" : \"Alex\", \"tz\" : \"Australia/Sydney\", \"now\" : \"2020-02-17T03:14:02.734Z\" }\n{ \"_id\" : \"Joe\", \"tz\" : \"Europe/Dublin\", \"now\" : \"2020-02-16T16:14:02.734Z\" }\n$$NOW", "text": "I definitely understand the issue given that usually storing UTC and then displaying it in the “local” format is pretty simple with various helper functions (you can even do it server side as $dateToString takes timezone).What I would recommend is using either $dateToString or antoher aggregation function $dateToParts to deal with this - depending on what you want to do with result.If you are on 4.2 you can even generate current time server-side with $$NOW expression.", "username": "Asya_Kamsky" }, { "code": "", "text": "Thank you again Asya!In case of aggregate - it will be possible to filter results by $match operator. Just need to think over, how can I apply rules to select only users which are in desired time window.But wouldn’t it be very heavy query to execute aggregation for every single document in collection ?Thnaks!", "username": "Alex_Kapustin" }, { "code": "", "text": "Hello Steevej, I think you did not understand my question.I do not want to display time. I want to filter users which are in specific time window applying their own timezone.", "username": "Alex_Kapustin" }, { "code": "", "text": "Indeed. I hope you did not lost too much time investigating my link.", "username": "steevej" }, { "code": "$expr$matchfinddb.tz.find({$expr:{$eq:[0, {$hour:{date:new Date(), tz:\"$tz\"}}]}})\n{ \"_id\" : \"Joe\", \"tz\" : \"Europe/Dublin\" }\n", "text": "Take a look at $expr operator - it allows you to use aggregation expressions in $match stage or find command.Like this:This finds all users for whom the time now is between midnight and 1am (aka hour=0).wouldn’t it be very heavy query to execute aggregation for every single documentWell, if you need to query by something, then you query by it. If you are able to constrain the results by another field that could be indexed, then it would help, otherwise, it’s like any other collection scan - not optimal and if this is a large collection and this query needs to run frequently I would recommend figuring out how you can index this query.", "username": "Asya_Kamsky" }, { "code": "", "text": "Thank you very much Asya for your flawless support!", "username": "Alex_Kapustin" } ]
Work with dates and timezones
2020-02-16T02:20:44.301Z
Work with dates and timezones
58,447
null
[ "queries" ]
[ { "code": "// pretend this is a collection.\nconst documents = [{bar: true}, {foo: true}];\nconst query = {$or: [{foo: true}, {bar: true}]};\ndocuments.findOne(query) // which document will this return?", "text": "// pretend this is a collection.\nconst documents = [{bar: true}, {foo: true}];const query = {$or: [{foo: true}, {bar: true}]};\ndocuments.findOne(query) // which document will this return?I need to create a query like if foo, else bar. Only return bar if foo doesn’t exist.\nSo my question is; is the or operator deterministic so that the order in the array determines which will be output first? If not, what would the best way to have a query to do such.", "username": "ruby" }, { "code": "if the field foo exists then return the value of foo \nelse return the value of bar", "text": "Hello @ruby, welcome to the MongoDB Community forum!documents.findOne(query) // which document will this return?The query returns both the documents.I need to create a query like if foo, else bar. Only return bar if foo doesn’t exist.This is not clear. You mean:", "username": "Prasad_Saya" }, { "code": "{ \"$or\" :\n [\n { \"foo\" : true } ,\n { \"$and\" : [ { \"foo\" : { \"$exists\" : false } } , { \"bar\" : true } ] }\n ]\n}\n", "text": "Please be respectful.Your reply is unnecessary.I disagree with the above, since your question// which document will this return?has been clearly and unquestionably answered withThe query returns both the documents.Since you did not clarify the followingif the field foo exists then return the value of foo\nelse return the value of barI will assume that this is what you want and try to propose something.Of course, this is untested.", "username": "steevej" }, { "code": "findOne()find()sort()nullbooleansort()foodb.mydata.insert([\n { \"_id\": \"foo\", \"foo\" : true },\n { \"_id\": \"bar\", \"bar\" : true },\n { \"_id\": \"foobar\", \"foo\" : true, \"bar\" : true }\n])\n// Find a document where `foo` or `bar` is true, but prefer `foo`\n> db.mydata.find({$or: [{foo: true}, {bar: true}]}).sort({foo:-1}).limit(1)\n{ \"_id\" : \"foo\", \"foo\" : true }\n{ \"_id\": \"foobar\", \"foo\" : true, \"bar\" : true }foo:truefoo_id_id// Find a document where `foo` or `bar` is true, but prefer `foo` ordered by _id (descending)\n> db.mydata.find({$or: [{foo: true}, {bar: true}]}).sort({foo:-1, _id:-1}).limit(1)\n{ \"_id\" : \"foobar\", \"foo\" : true, \"bar\" : true }\n", "text": "I need to create a query like if foo, else bar. Only return bar if foo doesn’t exist.\nSo my question is; is the or operator deterministic so that the order in the array determines which will be output first? If not, what would the best way to have a query to do such.Welcome to the MongoDB Community @ruby!The behaviour of findOne() is described in the MongoDB server documentation:If multiple documents satisfy the query, this method returns the first document according to the natural order which reflects the order of documents on the disk. In capped collections, natural order is the same as insertion order. 
If no document satisfies the query, the method returns null.Natural sort order is not deterministic (see: What is the default sort order when none is specified?).However, if you want more deterministic results you can use a find() query with an explicit sort() which will follow the rules of Comparison/Sort Order.In particular, you can use the fact that null values will be sorted before boolean values to create a sort() which returns a document with foo first, if present.Example data:This query could also return { \"_id\": \"foobar\", \"foo\" : true, \"bar\" : true }, since multiple documents matching foo:true have equal value in a sort on foo. The order of result documents may not change frequently, but you should not rely on the same document being returned.If you want a more stable order you could add additional fields to the sort criteria. For example, adding _id to the sort (since _id is guaranteed to be present and unique):If this is going to be a common query, you should also Create an Index to Sort Query Results.Regards,\nStennie", "username": "Stennie_X" } ]
Deterministic OR Operator
2021-05-25T11:07:29.877Z
Deterministic OR Operator
2,313
null
[ "configuration" ]
[ { "code": "May 20 07:00:11 kernel: [2380339.617830] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/,task=mongod,pid=301659,uid=114\nMay 20 07:00:11 kernel: [2380339.617943] Out of memory: Killed process 301659 (mongod) total-vm:1837512kB, anon-rss:341936kB, file-rss:0kB, shmem-rss:0kB, UID:114 pgtables:1076kB oom_score_adj:0\ndb version v4.4.6\nBuild Info: {\n \"version\": \"4.4.6\",\n \"gitVersion\": \"72e66213c2c3eab37d9358d5e78ad7f5c1d0d0d7\",\n \"openSSLVersion\": \"OpenSSL 1.1.1f 31 Mar 2020\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"ubuntu2004\",\n \"distarch\": \"aarch64\",\n \"target_arch\": \"aarch64\"\n }\n}", "text": "Hello everyone… has anyone had success running MongoDB on a Raspberry PI 4 with moderate data volumes? My instance keeps getting killed by the kernel’s OOM handler\":If I manually restart the server, it runs for a while then dies again. Here’s my build information;", "username": "Jeff_Smith" }, { "code": "", "text": "You are running out of memory.You might try to adjust wiredTiger configuration. See https://docs.mongodb.com/manual/core/wiredtiger/", "username": "steevej" } ]
MongoDB on a Raspberry Pi 4
2021-05-25T14:53:38.911Z
MongoDB on a Raspberry Pi 4
2,218
null
[]
[ { "code": "_db = db.db(\"myFirstDatabase\");\nTypeError: Cannot read property 'db' of undefined\n connectToServer: function (callback) {\n client.connect(function (err, db) {\n _db = db.db(\"myFirstDatabase\");\n return callback(err);\n console.log(\"Successfully connected to MongoDB.\");\n });\n }\ndberrmern-stack-example/mern/server/node_modules/mongodb/lib/utils.js:691\n throw error;\n ^\n\nTypeError: Cannot read property 'db' of undefined\nat mern-stack-example/mern/server/db/conn.js:14:16\n at mern-stack-example/mern/server/node_modules/mongodb/lib/utils.js:688:9\n at mern-stack-example/mern/server/node_modules/mongodb/lib/mongo_client.js:257:23\n at connectCallback (mern-stack-example/mern/server/node_modules/mongodb/lib/operations/connect.js:365:5)\n at mern-stack-example/mern/server/node_modules/mongodb/lib/operations/connect.js:552:14\n at connectHandler (mern-stack-example/mern/server/node_modules/mongodb/lib/core/sdam/topology.js:286:11)\n at cb (mern-stack-example/mern/server/node_modules/mongodb/lib/core/sdam/topology.js:684:18)\n at mern-stack-example/mern/server/node_modules/mongodb/lib/cmap/connection_pool.js:348:13\n at mern-stack-example/mern/server/node_modules/mongodb/lib/core/sdam/server.js:282:16\n at Object.callback (mern-stack-example/mern/server/node_modules/mongodb/lib/cmap/connection_pool.js:345:7)\n", "text": "I’ve tried doing the How to Use MERN Stack: A Complete Guide but keep getting an error with the server/db/conn.js file on line 13:There error is:Connecting via the Node code generated by Atlas works just fine. Also, I cloned the GitHub repo and get the exact same error. The actual function call is this:But, literally, nothing is getting assigned to the db parameter when called. The Atlas connection code only has one parameter: err.I’ve tried this on both macOS 11.3.1 and Debion 10.9 and the exact same thing happens. Since it happens on the GitHub code, I know that I’m not the one causing it by accidentally doing something wrong. And, since the Atlas connection code works, I know that the URI is correct.Please advise on what I need to do to fix this and then please update the website and GitHub with the information as well. Thank you!Version Numbers:Node: 14.17.0\nNpm: 6.14.13While this is easy to reproduce and you can get the error info yourself, here it is in all of its glory:", "username": "Blaztek" }, { "code": "", "text": "Hi @Blaztek,Welcome to MongoDB community.I have tested this guide in the past and it worked.How did you install the MongoDB driver? What is its version?It might be that you are failing to connect to the Atlas cluster… Have you verified you can access it via a mongo shell connection from the howt you run the tutorial from?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "You’re right. The password on Atlas - I typed it wrong, so it wasn’t connecting properly. It works now.Thank you so much!", "username": "Blaztek" }, { "code": "", "text": "I will look into a better error message on this scenario as you should get an error in that case.Thanks for highlighting Happy coding…", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Blaztek,We have fixed the code to have better connection error handling.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MERN Stack Tutorial Error - NOT
2021-05-20T19:52:19.761Z
MERN Stack Tutorial Error - NOT
4,675
null
[ "swift" ]
[ { "code": "class PersonClass: Object {\n var addressList = List<EmbeddedAddress>()\n}\nclass EmbeddedAddress: EmbeddedObject {\n @objc dynamic var address = \"\"\n @objc dynamic var linked_neighborhood: Neighborhood!\n}\nclass Neighborhood: Object {\n @objc dynamic var name = \"\"\n let linkingAddresses = LinkingObjects(fromType: EmbeddedAddress.self, property: \"linked_neighborhood\")\n}\nlet hood = realm.objects(Neighborhood.self)....filter for some neighborhood\nfor addr in hood.linkingAddresses {\n print(addr.address)\n}\n", "text": "Before going too far with this, does the Swift SDK support using EmbeddedObjects with LinkingObjects?Here’s the use casesuppose we have a person class with a List of embedded addressesand then the EmbeddedAddress looks like thisbut then we also want to have a Neighborhood object that tracks back to those addresseswith this setup we can retrieve a neighborhood from Realm and then show all of the addressesThis seems to be working but… is it supposed to be since EmbeddedObjects are not officially Managed Objects?", "username": "Jay" }, { "code": "", "text": "The issue we are running into which prompted the question actually ties back to a problem discovered previously where when working with an object with embedded objects, if you want to create an unmanaged copy of an object that has embedded objects, you need to make a deep copy of the embedded objects.While it seems unrelated, it is. More info to follow.", "username": "Jay" }, { "code": "", "text": "Looping back around to the original questiondoes the Swift SDK support using EmbeddedObjects with LinkingObjects?Is using an EmbeddedObject as a Linking object a thing?Would appreciate any input from a Realmer here - just want know if that’s something that is working by design or if it just accidentally works and will be removed in a future release.", "username": "Jay" }, { "code": "", "text": "Hi @Jay, sorry for the delay. I posted a question around this to one of our internal developer lists and wanted to give a bit of time for people to raise a red flag.It feels it’s something that isn’t not supported, but it isn’t the typical use of the feature and so there’s always a risk that you hit a bug that isn’t covered by our testing.Sorry for the less than definitive answer!", "username": "Andrew_Morgan" }, { "code": "", "text": "@Andrew_MorganThanks so much for reaching out to your team and providing an update.We are trying to leverage EmbeddedObjects in situations where data needs a ‘format’ and ties to a single object so yet-another managed is object is not needed; like a persons address as in my initial example.In the future, I hope we can get a little more clear definition of the functionality, along with some eyeballs on creating an unmanaged version of the EmbeddedObject as well.", "username": "Jay" } ]
EmbeddedObject with LinkingObjects
2021-05-15T16:12:20.806Z
EmbeddedObject with LinkingObjects
3,272
null
[]
[ { "code": "sudo mongod --repair --dbpath /var/lib/mongodb\nsudo chown -R mongodb:mongodb /var/lib/mongodb \nsudo chown mongodb:mongodb /tmp/mongodb-27017.sock\n", "text": "Hello,I have been trying to recover my data store from the physical location. I was running MongoDB Community Version 4.0.10 and accidentally deleted the /bin directory. However, my MongoDB data was located at /var/lib/mongodb and had 105 files in it. According to the log generated while deletion, I got the confirmation that none of the files of the data directory was deleted.So I copied the entire data from “/var/lib/mongodb” to my local machine and kept it in the same location, installed the same version of MongoDB, which is 4.0.10, but could not manage to retrieve the data.After that, I have run the following command to see if I can repair the data:This process also did not help as I received the message “Aborted” affer I executed it. In the log file of MongoDB located at /etc/mongod.log, I have found a line that says:…\nread checksum error for 4096B block at offset 249856: block header checksum of 0x1c71aba3 doesn’t match expected checksum of 0x5881888d\"}}\n…I am sure that none of the files in the server data directory has been deleted.I have also used the following commands to give permissions to my local data directory:But I still could not recover the data. Can someone please help?", "username": "Tarif_Ezaz" }, { "code": "pgrep mongodpkill mongod", "text": "Even though the binary file was deleted it is quite likely that mongod was still running. Copy the files while mongod is running would result in the files not matching/being correct as you have experienced.If the original system is still running, you can check for the running mongod process pgrep mongod and kill it with pkill mongod then copying the file should be successful.", "username": "chris" }, { "code": "", "text": "Unfortunately, the original system is not running. So, can’t do this anymore. Is there any other way?", "username": "Tarif_Ezaz" }, { "code": "", "text": "Short answer: no.Longer answer:", "username": "chris" }, { "code": "sudo apt-get install -y mongodb-org=4.0.10 mongodb-org-server=4.0.10 mongodb-org-shell=4.0.10 mongodb-org-mongos=4.0.10 mongodb-org-tools=4.0.10\nmongod --dbpath /var/lib/mongodb\nmongo\nuse new_database\ndb.newCollection.insert({hi: \"helo\"})\ndb.newCollectoin.stats()\n\t\t\"uri\" : \"statistics:table:collection-44--4912736618580810110\",\nsudo cp /home/user/old_database/collection-12--4912736618580810442.wt /var/lib/collection-44--4912736618580810110.wt\nsudo mongod --dbpath /var/lib/mongodb --repair\nmongod --dbpath /var/lib/mongodb\ndb.newCollection.find({}).pretty()\n", "text": "So, this is how to problem was solved.Step 1:\nWe have installed the same version of MongoDB that was used in the server. For our case, the version was 4.0.10. We have used the following command to download the specific version:Step 2:\nThe version installed at step 1 was storing it’s data on /var/lib/mongodb. So, we started the MongoDB server usingStep 3:\nWe have started the mongo terminal client using the command:Step 4:\nAfter entering into the client, we have created a new database using the commandStep 5:\nWe have created a dummy collection inside our “new_database” using the command:Step 6:\nWe have checked the stats of our new collection using:Step 7:\nFrom the output file generated at step 6, we have looked for the keyword “uri” to see under which .wt file, our “newCollection” collection is stored. 
The uri key in the output should look something like this:Step 8:\nWe have kept a local copy of our server database at a location, let’s call it /home/user/old_database. Inside it, let’s say, there’s a .wt file called “collection-12–4912736618580810442.wt”. We copied this .wt file to our new .wt file that represents the “newCollection” collection. We used the command:Our intention is to make the new database consider the server .wt file as a distorted version of new databases’ .wt file.Step 9:\nWe stopped the MongoDB client and server and run the repair command on our new database:Step 10:\nAfter a successful repair, we have seen the exit code 0, as output of our command in line 9. So, we restarted the MongoDB server again, just like step 2:Step 11:\nWe started the mongo client again, choose the new_database again and looked at the collection like this:Fortunately, we could see the old documents stored in one of the collections of our old_database.Step 12:\nWe repeated steps 1 to 11 again for other .wt files in the old_database.We have learned this technique from: Repairing MongoDB When WiredTiger.wt File is Corrupted | by Ido Ozeri | MediumHope this write-up will help some other teams in the future. Please don’t forget to keep regular backup as well ", "username": "Tarif_Ezaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Recover MongoDB Data from Physical Storage
2021-05-23T12:04:33.269Z
Recover MongoDB Data from Physical Storage
7,758
null
[ "atlas-device-sync" ]
[ { "code": "Dog{ \"price\": { \"bsonType\": \"string\" } }Dog : RealmObject{ price: String? = null }Dog{ “price”: { “bsonType”: “double” } }", "text": "Hello,\nI have this scenario,I have an json schema definition in realm app on mongo cluster:\nDog{ \"price\": { \"bsonType\": \"string\" } }and a realm Object on android app like so,\nDog : RealmObject{ price: String? = null }The sync on the app is working fine upto this point.\nNow, I make an destructive schema change on the realm app on mongo server i.e. I update the datatype of price to double like so:\nDog{ “price”: { “bsonType”: “double” } }Now the sync is not working on the app and throw client error: Failed to transform received changeset: Schema mismatch.I can solve this by updating the app’s Dog realm object but in my case there is possibility that the user does not update the app with new updated realm object code.\nFor that user, the sync wont work because the his app is outdated and has outdated Dog realm object schema. This is my problem.Most of the examples and solutions I saw on the internet help with migrating after local realm object schema changes. I want to handle schema changes from backend on the app such that sync doesn’t not fail for someone with outdated realm object.", "username": "Lakshmi_Narasimhan_M" }, { "code": "“price2”: { “bsonType”: “double” }pricepriceprice2", "text": "Hi @Lakshmi_Narasimhan_M, welcome to the community!In this particular case, I’d turn it into a non-destructive schema change (at the expense of some data duplication).I’d add a new optional attribute: “price2”: { “bsonType”: “double” }, while leaving price unchanged. I’d also add database triggers to make sure that price and price2 are kept consistent with each other.Old versions of the mobile app should continue to work, while new versions can use the new attribute.", "username": "Andrew_Morgan" } ]
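To make the suggested non-destructive change concrete, here is a rough sketch of the kind of database trigger function described in the reply, keeping the legacy string price and the new numeric price2 consistent. It assumes an Atlas/Realm database trigger on the Dog collection with "Full Document" enabled; the service name "mongodb-atlas" and the database name "store" are assumptions, not details from the original post.

// Hedged sketch of a database trigger function; names are placeholders.
exports = async function (changeEvent) {
  const doc = changeEvent.fullDocument;
  if (!doc) { return; }

  const dogs = context.services.get("mongodb-atlas").db("store").collection("Dog");
  const update = {};

  // Old app versions only write the string field: derive the numeric copy.
  if (doc.price !== undefined && doc.price2 === undefined) {
    update.price2 = parseFloat(doc.price);
  }
  // New app versions only write the numeric field: derive the string copy.
  if (doc.price2 !== undefined && doc.price === undefined) {
    update.price = String(doc.price2);
  }

  // Only write when one side is missing, which also prevents the trigger
  // from endlessly re-firing on its own updates.
  if (Object.keys(update).length > 0) {
    await dogs.updateOne({ _id: changeEvent.documentKey._id }, { $set: update });
  }
};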
Full sync handling after destructive schema in backend (Realm app on mongodb.com)
2021-05-18T13:38:53.099Z
Full sync handling after destructive schema in backend (Realm app on mongodb.com)
2,061
https://www.mongodb.com/…6_2_1024x181.png
[]
[ { "code": "", "text": "Been following this How to Use MERN Stack: A Complete Guide. I have configured all the files as mentioned. Both react and node apps run. But when I create a record, data does not seem to pass to the db. Because when I use the Records List page, no records are being shown, even after I created multiple records using Create page. I created the Atlas account and configured a cluster there. When I check the Atlas cluster, I don’t see any database created either, let alone records.\n\nmernError11364×242 5.86 KB\nAlso, I should note that, the github repo’s code is somewhat different to the code given in the Tutorial page.", "username": "Sandun_Akalanka" }, { "code": "", "text": "Hi @Sandun_Akalanka,Welcome to MongoDB community.Do you see any errors in your browser developer tools or in the backend server terminal?Make sure your atlas connection doesn’t throw any errors and you can connect to it from a mongo shell using the placed connection string in your config.envWe need to update the tutorial with some better error handling the git repo is updated with it. So I suggest you to pull latest copy.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
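Since the symptom in this thread is usually a silently failing insert or a wrong connection string, adding explicit error handling to the record-creation route makes the cause visible. The sketch below is only an illustration in the spirit of the tutorial; the ./db/conn helper, the recordRoutes router and the "records" collection name are assumptions rather than the tutorial's exact code.

// Hedged sketch: surface insert errors instead of failing silently.
const express = require("express");
const recordRoutes = express.Router();
const dbConnect = require("./db/conn"); // assumed helper exposing getDb()

recordRoutes.route("/record/add").post(async (req, res) => {
  try {
    const dbo = dbConnect.getDb();
    const result = await dbo.collection("records").insertOne({
      name: req.body.name,
      position: req.body.position,
      level: req.body.level,
    });
    console.log("Inserted _id:", result.insertedId); // proves the write reached Atlas
    res.json(result);
  } catch (err) {
    console.error("Insert failed:", err); // auth/network errors from a bad connection string show up here
    res.status(500).json({ error: err.message });
  }
});

module.exports = recordRoutes;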
MERN Stack Tutorial -
2021-05-24T18:11:54.253Z
MERN Stack Tutorial -
2,169
null
[ "queries", "security" ]
[ { "code": "", "text": "I am using one db and when I run the command“show collections”I get the following output…\nWarning: unable to run listCollections, attempting to approximate collection names by parsing connectionStatusWould like to know what does it mean?", "username": "ragarwal1_N_A" }, { "code": "readWriteAnyDatabaseORuse dbName\ndb.createUser({ user: \"USERNAME\", pwd: \"PASSWORD\", roles: [ \"readWrite\", \"dbAdmin\" ] })\ndb.system.users.find().pretty()Warning: unable to run listCollections, attempting to approximate collection names by parsing connectionStatus\n> show dbs\n> db.system.users.find().pretty()\n", "text": "Hi @ragarwal1_N_A,Generally, it means that you haven’t given “readWriteAnyDatabase” role to the user for the “admin” database OR, It could be probably because authentication is disabled…!!Can you clarify to me that you have created a user for the specific database.?If not, you can create a user by using the following command:And try to use this to connect to the MongoDB database.Meanwhile, by using db.system.users.find().pretty() command check the authentication status of the user.For more info read here.Hope it helps!!Kind Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thanks… it seems to be access issue. I have raised the access request to the specific db. I will come back if the issue is still unresolved.", "username": "ragarwal1_N_A" }, { "code": "", "text": "Hi @ragarwal1_N_A,Yeah, it could be due to the authorization issue as I mentioned it doesn’t have a specific role granted.Yeah, you can reach out if you need any further assistance.Kind Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
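For completeness, this is roughly what granting the missing permission looks like from the mongo shell once the user already exists. It assumes the user was created in the admin database; "appUser" and "myDatabase" are placeholder names.

// Hedged sketch; run as a user with userAdmin privileges.
use admin
db.grantRolesToUser("appUser", [
  { role: "readWrite", db: "myDatabase" },  // enough for listCollections on that database
  // or, more broadly: "readWriteAnyDatabase"
]);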
Understanding command output
2021-05-24T17:17:21.070Z
Understanding command output
13,774
null
[]
[ { "code": "", "text": "I have a specific question about charting that maybe you can answer for me. I have a question in regard to lookup fields. The question is the following:", "username": "Sergi_Martinez" }, { "code": "$lookup[{$set: { fieldName: { $toString: '$fieldName' } } }]$lookup", "text": "Hi @Sergi_Martinez - Lookup fields use the MongoDB $lookup aggregation stage under the covers, and this does a strict equality check against the field values, so a string and an Object ID will never match. There are a couple of directions you could consider: Let me know if either of these work.\nTom", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
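The conversion Tom suggests can also be sketched as a plain aggregation, which is what a Charts lookup field runs behind the scenes. The collection and field names below are invented for illustration and are not taken from the original question; the only point is converting the string reference so it compares equal to the ObjectId on the other side before the $lookup.

// Hedged sketch (mongo shell); "orders", "customers" and "customerId" are placeholder names.
db.orders.aggregate([
  // Convert the stored string so it matches the ObjectId _id in the looked-up collection.
  { $set: { customerObjId: { $toObjectId: "$customerId" } } },
  { $lookup: {
      from: "customers",
      localField: "customerObjId",
      foreignField: "_id",
      as: "customer"
  } }
]);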
Charts lookup fields
2021-05-24T15:39:43.195Z
Charts lookup fields
3,416
null
[ "node-js" ]
[ { "code": "", "text": "After upgrading the NodeJS driver package from 3.6.6 to 3.6.8 i encountered an unexpected performance issue when trying to insertMany() with 10K documents. I tracked down the issue and it seems that lib/operations/bulk_write.js line 80 in v3.6.8 the culprit. That is the reindexing operation performed on the insertedIds array.In 3.6.6 the entire array was created from scratch which isnt possible in 3.6.8 due to restrictions not allowing the override of ‘r.insertedIds’. On my machine 3.6.6 is approximately 100x faster than 3.6.8 on the mentioned set with 10000 documents.Is this a known Issue? Should I open a Ticket about this?", "username": "Jan_Schwalbe" }, { "code": "", "text": "Hi, thanks so much for catching this!It appears that there was an incomplete backport of some bulk operations related code from 4.0 to 3.6 that caused duplicate loops over the result array.We created the following ticket to track the fix: NODE-3309.Best regards,\nDaria", "username": "dariakp" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
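For anyone who wants to confirm the regression (or its fix) locally, a rough timing harness like the one below is enough. It only assumes a locally running mongod and the mongodb package pinned to the driver version under test; the database and collection names are placeholders.

// Hedged sketch: time insertMany() of 10,000 small documents.
// Run once with mongodb@3.6.6 and once with mongodb@3.6.8 to compare.
const { MongoClient } = require("mongodb");

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017", { useUnifiedTopology: true });
  const coll = client.db("perfTest").collection("docs");
  await coll.deleteMany({});

  const docs = Array.from({ length: 10000 }, (_, i) => ({ i: i, payload: "x".repeat(100) }));

  const start = Date.now();
  await coll.insertMany(docs);
  console.log("insertMany of " + docs.length + " docs took " + (Date.now() - start) + " ms");

  await client.close();
}

main().catch(console.error);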
NodeJS driver bulk write performance issue after upgrade from v3.6.6 to v3.6.8
2021-05-22T15:34:19.091Z
NodeJS driver bulk write performance issue after upgrade from v3.6.6 to v3.6.8
1,936
null
[ "java" ]
[ { "code": "", "text": "I guess that before Loom becomes a reality (i.e. in a few years), there is no plan for a non-reactive “async” driver anymore. Jumping to a reactive approach is kinda a big deal (this is an understatement). I get that one can isolate it at the mongodb level turning every subscriber result into a CompletionStage or the like, but that’s still a significant refactoring effort. Am I alone with those concerns? Started thinking about it more recently because the next release (5.0) won’t be supported by 3.12.x.", "username": "Jean-Francois_Lebeau" }, { "code": "", "text": "While the 3.12 driver won’t support all the new features of MongoDB 5.0, we are still running the entire 3.12 test suite against 5.0 and ensuring that all the tests still pass.As for the callback-back async driver, I wouldn’t say you’re alone, but we did not see much usage of this driver, as almost everyone doing async standardized on Reactive Streams.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "That’s good news.The callback-style async driver is a bit cumbersome, but a CompletableFuture-style one, similar to what Lettuce does for Redis, could have been a good alternative. With Loom in the picture, some will definitly resist the jump to the Reactive approach anyway.", "username": "Jean-Francois_Lebeau" }, { "code": "", "text": "At the time that we created the async driver our minimum supported Java version was 1.6, so CompletableFuture wasn’t an option. And by the time Java 8 became our minimum supported version (about a year ago), the Reactive Streams driver was already well established. This is the first time I recall anyone asking for a CompletableFuture-based API (though I may have forgotten). Not that it wouldn’t be useful to some, but there would be a cost burden in maintaining it.Regards,\nJeff", "username": "Jeffrey_Yemin" } ]
Stuck on Java driver 3.12.x for foreseeable future
2021-05-21T17:13:35.647Z
Stuck on Java driver 3.12.x for foreseeable future
2,595
https://www.mongodb.com/…13baf751789d.png
[]
[ { "code": "", "text": "I’m currently working on a Java spigot plugin and getting a NoClassDefError with my MongoDB. I tried almost everything, but still can’t fix it.My pom.xml: pom.xml - Pastebin.com\nMy code: code - Pastebin.com\nMaven classes: \nmavenClasses695×386 26.2 KB\n", "username": "dlabaja" }, { "code": "", "text": "Hi @dlabaja,3.9.1 is a bit old.\nCan you try 4.2.3 instead?You can check out this repo which is also using the sync driver and works without issues: GitHub - mongodb-developer/java-quick-start: This repository contains code samples for the Java Quick Start blog post seriesIf that doesn’t solve your issue, I guess you have a path issue somewhere.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi, thanks for the quick response. I updated the driver and copied my source code to the project you sent, but I’m still getting the same error.log: log - Pastebin.com\nproject (the new one I made from your repo): java-quick-start-master - Google Drive", "username": "dlabaja" }, { "code": "$ mvn clean compile \n[INFO] Scanning for projects...\n[INFO] \n[INFO] ---------------------------< me.dlabaja:dbl >---------------------------\n[INFO] Building dblPlugin 1.0-SNAPSHOT\n[INFO] --------------------------------[ jar ]---------------------------------\nDownloading from sonatype: https://oss.sonatype.org/content/groups/public/org/spigotmc/spigot-api/1.16.5-R0.1-SNAPSHOT/maven-metadata.xml\nDownloading from spigotmc-repo: https://hub.spigotmc.org/nexus/content/repositories/snapshots/org/spigotmc/spigot-api/1.16.5-R0.1-SNAPSHOT/maven-metadata.xml\nDownloaded from spigotmc-repo: https://hub.spigotmc.org/nexus/content/repositories/snapshots/org/spigotmc/spigot-api/1.16.5-R0.1-SNAPSHOT/maven-metadata.xml (1.5 kB at 1.9 kB/s)\nDownloading from spigotmc-repo: https://hub.spigotmc.org/nexus/content/repositories/snapshots/org/spigotmc/spigot-api/1.16.5-R0.1-SNAPSHOT/spigot-api-1.16.5-R0.1-20210519.225132-83.pom\nDownloaded from spigotmc-repo: https://hub.spigotmc.org/nexus/content/repositories/snapshots/org/spigotmc/spigot-api/1.16.5-R0.1-SNAPSHOT/spigot-api-1.16.5-R0.1-20210519.225132-83.pom (10 kB at 30 kB/s)\nDownloading from spigotmc-repo: https://hub.spigotmc.org/nexus/content/repositories/snapshots/com/google/guava/guava/21.0/guava-21.0.pom\nDownloading from sonatype: https://oss.sonatype.org/content/groups/public/com/google/guava/guava/21.0/guava-21.0.pom\nDownloaded from sonatype: https://oss.sonatype.org/content/groups/public/com/google/guava/guava/21.0/guava-21.0.pom (7.0 kB at 13 kB/s)\nDownloading from spigotmc-repo: https://hub.spigotmc.org/nexus/content/repositories/snapshots/com/google/guava/guava-parent/21.0/guava-parent-21.0.pom\nDownloading from sonatype: https://oss.sonatype.org/content/groups/public/com/google/guava/guava-parent/21.0/guava-parent-21.0.pom\nDownloaded from sonatype: https://oss.sonatype.org/content/groups/public/com/google/guava/guava-parent/21.0/guava-parent-21.0.pom (9.7 kB at 44 kB/s)\nDownloading from spigotmc-repo: https://hub.spigotmc.org/nexus/content/repositories/snapshots/net/md-5/bungeecord-chat/1.16-R0.4/bungeecord-chat-1.16-R0.4.pom\nDownloading from sonatype: https://oss.sonatype.org/content/groups/public/net/md-5/bungeecord-chat/1.16-R0.4/bungeecord-chat-1.16-R0.4.pom\nDownloaded from sonatype: https://oss.sonatype.org/content/groups/public/net/md-5/bungeecord-chat/1.16-R0.4/bungeecord-chat-1.16-R0.4.pom (978 B at 4.9 kB/s)\nDownloading from spigotmc-repo: 
https://hub.spigotmc.org/nexus/content/repositories/snapshots/net/md-5/bungeecord-parent/1.16-R0.4/bungeecord-parent-1.16-R0.4.pom\nDownloading from sonatype: https://oss.sonatype.org/content/groups/public/net/md-5/bungeecord-parent/1.16-R0.4/bungeecord-parent-1.16-R0.4.pom\nDownloaded from sonatype: https://oss.sonatype.org/content/groups/public/net/md-5/bungeecord-parent/1.16-R0.4/bungeecord-parent-1.16-R0.4.pom (10 kB at 51 kB/s)\nDownloading from spigotmc-repo: https://hub.spigotmc.org/nexus/content/repositories/snapshots/org/spigotmc/spigot-api/1.16.5-R0.1-SNAPSHOT/spigot-api-1.16.5-R0.1-20210519.225132-83.jar\nDownloading from spigotmc-repo: https://hub.spigotmc.org/nexus/content/repositories/snapshots/net/md-5/bungeecord-chat/1.16-R0.4/bungeecord-chat-1.16-R0.4.jar\nDownloading from spigotmc-repo: https://hub.spigotmc.org/nexus/content/repositories/snapshots/com/google/guava/guava/21.0/guava-21.0.jar\nDownloaded from spigotmc-repo: https://hub.spigotmc.org/nexus/content/repositories/snapshots/org/spigotmc/spigot-api/1.16.5-R0.1-SNAPSHOT/spigot-api-1.16.5-R0.1-20210519.225132-83.jar (1.3 MB at 3.9 MB/s)\nDownloading from sonatype: https://oss.sonatype.org/content/groups/public/com/google/guava/guava/21.0/guava-21.0.jar\nDownloading from sonatype: https://oss.sonatype.org/content/groups/public/net/md-5/bungeecord-chat/1.16-R0.4/bungeecord-chat-1.16-R0.4.jar\nDownloaded from sonatype: https://oss.sonatype.org/content/groups/public/com/google/guava/guava/21.0/guava-21.0.jar (2.5 MB at 6.3 MB/s)\nDownloaded from sonatype: https://oss.sonatype.org/content/groups/public/net/md-5/bungeecord-chat/1.16-R0.4/bungeecord-chat-1.16-R0.4.jar (149 kB at 311 kB/s)\n[INFO] \n[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ dbl ---\n[INFO] Deleting /tmp/java/java-quick-start-master/java-quick-start-master/target\n[INFO] \n[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ dbl ---\n[INFO] Using 'UTF-8' encoding to copy filtered resources.\n[INFO] Copying 1 resource\n[INFO] \n[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ dbl ---\n[INFO] Changes detected - recompiling the module!\n[INFO] Compiling 5 source files to /tmp/java/java-quick-start-master/java-quick-start-master/target/classes\n[INFO] ------------------------------------------------------------------------\n[INFO] BUILD SUCCESS\n[INFO] ------------------------------------------------------------------------\n[INFO] Total time: 9.929 s\n[INFO] Finished at: 2021-05-20T13:59:33+02:00\n[INFO] ------------------------------------------------------------------------\npackage me.dlabaja.dbl;\n\nimport com.mongodb.ConnectionString;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport org.bson.Document;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\npublic class FakeMain {\n\n public static void main(String[] args) {\n try (MongoClient mongoClient = MongoClients.create(new ConnectionString(\"mongodb://localhost\"))) {\n List<Document> databases = mongoClient.listDatabases().into(new ArrayList<>());\n databases.forEach(db -> System.out.println(db.toJson()));\n }\n }\n\n}\nMay 20, 2021 2:19:01 PM com.mongodb.diagnostics.logging.Loggers shouldUseSLF4J\nWARNING: SLF4J not found on the classpath. 
Logging is disabled for the 'org.mongodb.driver' component\n{\"name\": \"admin\", \"sizeOnDisk\": 81920.0, \"empty\": false}\n{\"name\": \"config\", \"sizeOnDisk\": 28672.0, \"empty\": false}\n{\"name\": \"local\", \"sizeOnDisk\": 356352.0, \"empty\": false}\nMongoClient", "text": "Looks like I can compile it without issues.But there is no main method to run your code, so I have no idea how to run it.\nI added a FakeMain class in your code base to see if I could run it:And it ran without issuesJust the SLF4J warning.I have no idea how you are actually using and running this code. But the problem comes from the packaging I guess because I’m guessing you are using this as a jar file. The mongodb driver is probably not packaged correctly within this jar. So it compiles in your IDE, but the class is missing at runtime.Also, on a side note, I find this very suspicious that you have to create a new MongoClient each time a players joins the world. I think it would be better to have only a single instanciation of it for the entire application and not create a new one each time an event happens. Much better performances I guess.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi, thanks again for a response.\nI’m running this as a plugin, so instead of main() there is onEnable(). Then I’m exporting it into my plugins folder, where it starts together with my server.Plugins folder and inside the jar: image1084×845 68.4 KBI think you are true about missing due to compile because I can’t find any MongoDB classes in the exported jar.", "username": "dlabaja" }, { "code": "", "text": "Your JAR packager should be configured to be a fat JAR with all the dependencies in it.\nUsually I use maven-assembly-plugin for this.", "username": "MaBeuLux88" }, { "code": "", "text": "Thank you very much, it works!!!Just a final question, can I build this directly to the plugin folder?", "username": "dlabaja" }, { "code": "", "text": "No idea . I don’t know enough about the project itself . But it it’s driven by maven as well, I don’t see why not.", "username": "MaBeuLux88" }, { "code": "", "text": "Just found a way to do it, you can add dependencies manually in modules\nimage726×556 25.3 KBThanks for your help, I’m really grateful for it, and I’m not sure what would I do without you.", "username": "dlabaja" }, { "code": "pom.xml", "text": "Isn’t this supposed to be managed by the pom.xml file?", "username": "MaBeuLux88" }, { "code": "", "text": "I don’t know, the main thing is that it’s working ", "username": "dlabaja" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Getting NoClassDef with MongoDB driver
2021-05-19T15:54:41.388Z
Getting NoClassDef with MongoDB driver
3,262
null
[ "data-modeling", "atlas-device-sync" ]
[ { "code": "", "text": "From https://docs.mongodb.com/realm/sync/conflict-resolution/#custom-conflict-resolution, it doesn’t explicitly states the implementation details of handling custom conflict resolution.I imagine this is how it would need to work but can someone validate my understanding?Suppose we have an object with a status flag (status = high, medium, low) and say we always want status = high to win.I assume thatwe need to put this object into a list, sync it back, and when the list gets synced back to Atlas, trigger will need to be initiated, call the function, resolve the conflict based on custom logic (ie. loop thru the list and check which status = high), and then delete the rest of the items in the list, then persist the actual record it back to Atlas, such that the resolution results are reflected back on the connected devices?Can someone shed some light around this and validate my understanding? Thanks!", "username": "Bob_Fish" }, { "code": "status=high", "text": "Welcome to the forums!We don’t go into too much detail on custom conflict resolution because it will vary depending on the application and the regular conflict resolution works for most cases. That said I think the workflow that you outlined should work.One thing to consider is that your client apps will likely also see the intermediate states, i.e. they’ll see the status=high object get added to the list and then a little bit later see the other objects deleted as a result of the trigger update. This may or may not matter too much for your use case but be prepared to handle it if it comes up.", "username": "nlarew" } ]
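A rough sketch of the trigger half of the workflow outlined above might look like the function below: when the synced list changes, it picks the entry whose status is "high" (the custom rule), writes that back as the winning value and clears the list so every device converges on it. The database, collection and field names are placeholders, not taken from the thread.

// Hedged sketch of a database trigger function implementing a "high wins" rule.
exports = async function (changeEvent) {
  const doc = changeEvent.fullDocument;
  if (!doc || !Array.isArray(doc.pendingChanges) || doc.pendingChanges.length === 0) {
    return; // nothing to resolve (also avoids re-firing on our own cleanup write)
  }

  // Custom rule: an entry with status "high" always wins; otherwise keep the latest entry.
  const winner =
    doc.pendingChanges.find((c) => c.status === "high") ||
    doc.pendingChanges[doc.pendingChanges.length - 1];

  const items = context.services.get("mongodb-atlas").db("appDb").collection("Item");

  // Persist the resolved value and empty the list; synced clients will see both changes.
  await items.updateOne(
    { _id: changeEvent.documentKey._id },
    { $set: { status: winner.status, pendingChanges: [] } }
  );
};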
Custom Conflict Resolution implementation
2021-05-24T17:16:17.273Z
Custom Conflict Resolution implementation
2,574
null
[ "atlas-functions" ]
[ { "code": "413 {\"error\":\"read size limit exceeded\",\"error_code\":\"ReadSizeLimitExceeded\"}async function deploy() {\n const accessToken = await getAccessToken();\n console.log(\"accessToken\", accessToken);\n const body = {\n \"can_evaluate\": {},\n \"name\": \"slidesTriggerHandler\",\n \"private\": true,\n \"source\": fs.readFileSync(path.join(__dirname, \"./functions/slides/dist/index.js\"))\n };\n const appId = await getAppId();\n console.log(\"appId\", appId);\n const projectId = \"someprojectId\";\n const url = `https://realm.mongodb.com/api/admin/v3.0/groups/${projectId}/apps/${appId}/functions`;\n const res = await fetch(url, {\n method: \"POST\",\n headers: {\n \"Content-Type\": \"application/json\",\n \"Accept\": \"application/json\",\n \"Authorization\": `Bearer ${accessToken}`,\n },\n body: JSON.stringify(body)\n });\n\n if (res.ok) {\n console.log(await res.json());\n } else {\n console.log(res.status);\n console.log(await res.text());\n }\n}\n", "text": "Trying to create a function returns\nhttps status code 413 with a message of\n {\"error\":\"read size limit exceeded\",\"error_code\":\"ReadSizeLimitExceeded\"}example node.js code that uses node-fetch npm package for http:Also feedback, it’s highly frustrating that there is no information description on error codes on any of the APIs.", "username": "Govind_Rai" }, { "code": "source", "text": "Hi @Govind_Rai, welcome to the community!This error seems to be indicating that the source attribute is too large. If it’s a large file then you could try refactoring it into multiple, smaller functions.", "username": "Andrew_Morgan" }, { "code": "", "text": "Hi @Andrew_Morgan,\nThat is the case. Thanks for that.Some feedback for the Functions Team:\nAllow larger function sizes: For example Google Cloud Functions allows;\n100MB (compressed) for sources.\n500MB (uncompressed) for sources plus modules.Have dependencies be automatically picked up/installed from a package.json fileAllow unlimited logging per execution and don’t truncate logs (currently you only get 10 logs and logs are truncated at 256 chars). Also allow formatted logging, currently we have to scroll right and read a really long non-wrapping lines.When testing functions in the UI, if you have a long running function whether or purpose or on error, you don’t really know if the function has passed or failed. Adding a spinner when a function is executing will help with UX (I had to learn this after realizing my function is timing out but it took me a while to understand that and I though the UI was broken).Some feedback for the Realm Admin API docs team:Please create a SDK for the Realm Admin API to help consumers avoid writing repetitive code\nPlease document errors codes and messages.Thanks for your help.", "username": "Govind_Rai" }, { "code": "", "text": "Hi @Govind_Rai, thanks for circling back with this feedback.", "username": "Andrew_Morgan" } ]
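Since the limit applies to each function's source, one workaround in the spirit of the reply is to split the bundle into several smaller Realm functions and have the entry point call them with context.functions.execute. The helper names below ("slidesFetch", "slidesTransform") are hypothetical; each would be created as its own, smaller function through the same Admin API endpoint.

// Hedged sketch of the entry-point function after splitting a large source file.
exports = async function (payload) {
  // Each helper keeps its own source comfortably under the per-function size limit.
  const raw = await context.functions.execute("slidesFetch", payload);
  return context.functions.execute("slidesTransform", raw);
};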
Creating function using Realm Administration API emits read size limit exceeded errors
2021-05-21T17:21:51.441Z
Creating function using Realm Administration API emits read size limit exceeded errors
2,668
https://www.mongodb.com/…24867d235134.png
[ "atlas-functions" ]
[ { "code": "", "text": "I am using Realm to create a serverless function that is able to pull JSON data from an API (roughly 500 text records) via an API (Octoparse). However, I am receiving an “Execution Time Limit Exceeded” error, every time I make the request.When I make the request using Postman or LocalHost, it is able to execute in 1.8s. Why does it take longer than the allotted 90 seconds with MongoDB Realm?\nScreen Shot 2021-05-08 at 2.37.12 PM1014×166 22.8 KB\n", "username": "Tyler_Huyser" }, { "code": "", "text": "Also, just to clarify, the total size of the response is 390.58 KB.", "username": "Tyler_Huyser" }, { "code": "", "text": "A post was split to a new topic: Error: Execution Time Limit Exceeded using google-cloud/pub-sub", "username": "Stennie_X" } ]
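For reference, the usual shape of such a fetch inside a Realm function uses the built-in context.http client, roughly as below. The URL and header are placeholders and this sketch does not by itself explain the timeout reported above; it only shows the minimal structure of the call.

// Hedged sketch; endpoint and auth header are assumptions.
exports = async function () {
  const response = await context.http.get({
    url: "https://example-api.octoparse.com/export", // placeholder endpoint
    headers: { "Authorization": ["Bearer <token>"] }, // header values must be arrays of strings
  });
  // response.body is binary; decode it before parsing the JSON payload.
  return JSON.parse(response.body.text());
};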
Error: Execution Time Limit Exceeded
2021-05-08T18:39:46.501Z
Error: Execution Time Limit Exceeded
3,730
null
[ "dot-net" ]
[ { "code": "MongoDB.Driver.MongoNotPrimaryException: Server returned not master error.\n at MongoDB.Driver.Core.Operations.RetryableReadOperationExecutor.Execute[TResult](IRetryableReadOperation`1 operation, RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.ReadCommandOperation`1.Execute(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindCommandOperation`1.Execute(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindOperation`1.Execute(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindOperation`1.Execute(IReadBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.DefaultLegacyOperationExecutor.ExecuteReadOperation[TResult](IReadBinding binding, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollection.ExecuteReadOperation[TResult](IClientSessionHandle session, IReadOperation`1 operation, ReadPreference readPreference)\n at MongoDB.Driver.MongoCursor`1.GetEnumerator(IClientSessionHandle session)\n at MongoDB.Driver.MongoCollection.UsingImplicitSession[TResult](Func`2 func)\n at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source)\n at MongoDB.Driver.GridFS.MongoGridFS.FindOne(IMongoQuery query, Int32 version)\n", "text": "I catch MongoNotPrimaryException when I use FindOne on MongoGridFS during a replica set election. I expected that driver will retry read request. mongo-csharp-driver/RetryableReadOperationExecutor.cs at master · mongodb/mongo-csharp-driver · GitHub\nIn fact, retry works for ordinary collection but not for gridfs. It seems it is because MongoGridFS use _server.RequestStart which in turn use ChannelReadWriteBinding while ordinaty collection use WritableServerBinding (in case ReadPreferenceMode.Primary)It is bug or intented behavior? What need to be done for retrying?driver version: 2.11.6.0stack trace:", "username": "111518" }, { "code": "retryReads=trueCollection.findfindOnereadPreferencereadPreferenceprimaryPreferredeventual consistency", "text": "Hi @111518 and welcome in the MongoDB Community !Your cluster must be 3.6 or above. If you are 3.6.X or 4.0.X, your URI must make it explicit that you want to use retryable reads: retryReads=true. Clusters 4.2.X and above will automatically enable retryable reads (and writes for that matter).Also note that MongoDB 3.6 isn’t supported anymore so if you are not at 4.0 at least, it’s time to level up .I’m guessing you are using C# here. So your driver version should be at least compatible with 4.2 so apparently 2.9 or above which seems to be the case here but an upgrade to 2.12 wouldn’t hurt .Finally─according to the doc─you need to use Collection.find which is apparently the only operation in GridFS that supports retryable reads. I’m not sure findOne is covered by this feature…Source: https://docs.mongodb.com/manual/core/retryable-reads/#retryable-read-operationsFinally, I think there is an even easier solution for this issue: readPreference.By default, your MongoDB driver reads with the read preference Primary. It means that if you don’t have a primary (which, indeed, is briefly the case during an election) then you can set your readPreference to primaryPreferred which would allow the driver to fall back on a secondary for some read operations during this brief period. 
In this case, you are accepting the eventual consistency of the data (as you are reading from secondaries) but as soon as the primary is back, you will go back to read on the (new) primary.I hope this helps .Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "readPreferenceprimaryFindfindOne", "text": "Yes, I am using C#, mongodb 4.2. I want readPreference primary onlyIt seems that availability of retries depends on method. Find supports retries, findOne doesn’t. For me, it qis quite confusing behaviour because method’s names don’t tell anything about it and documentation as well. I also use writes and it seems that driver doesn’t support retryabe writes for gridfs. So I think I will catch such exceptions and retry by myself in my code.", "username": "111518" } ]
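For readers following the last suggestions, both options from the reply are normally expressed directly in the connection string, roughly as below; the host names and replica set name are placeholders. On a 4.2+ cluster retryReads is already on by default, so the read preference is the part doing the work during an election.

mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0&retryReads=true&readPreference=primaryPreferred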
Retry doesn't work for gridfs
2021-05-20T10:38:09.576Z
Retry doesn&rsquo;t work for gridfs
2,403
https://www.mongodb.com/…bf_2_1024x34.png
[ "connecting", "golang" ]
[ { "code": "", "text": "Capture1596×54 3.62 KBHey everyone ! I am having a problem I attached to my screenshot. I am using Golang for interacting with the MongoDb but I am having server selection error I don’t know why. I tried everything even adding my ip in Whitelist ,Network access, Turning off the firewall everything I did to get over this problem but still getting the same error my connection link is also correct. Could any body help me out of this problem?Thanks any help will be appreciated greatly", "username": "Gamming_N_A" }, { "code": "mongo", "text": "Hello @Gamming_N_A, welcome to the MongoDB Community forum!You may want to include the code you are trying to connect to the MongoDB server and a clear copy of the error message (may be as formatted text)Also, see the topic on Getting Started at MongoDB Blog: Quick Start: Golang & MongoDB - Starting and Setup.Are you able to connect to the server via mongo shell or Compass?", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you soo much Prasad_Saya it is solved.", "username": "Gamming_N_A" } ]
Unable to connect 0 - Server selection error
2021-05-19T11:04:57.137Z
Unable to connect 0 - Server selection error
2,320
null
[ "aggregation" ]
[ { "code": "", "text": "@Lauren_Schaefer I just watched your MongoDB & Node.js: Aggregation & Data Analysis (Part 2 of 4). I have a collection called reviews in which I have 2 foreign keys (product and user). I am using aggregate to query the reviews collection, where I want to match reviews with rating >= 3 and then $lookup to populate product and user. It is giving me results, but as an array; what I want is for the result to show only the name of the product and the name of the user instead of the complete documents. What should be the stages of the pipeline for such an aggregation? Thanks in advance.", "username": "Noor_Arman" }, { "code": "", "text": "Hi @Noor_Arman - Welcome to the community! I think $project is what you’re looking for. $project lets you define the fields that should be included in the next stage. Those fields can either be existing fields or newly calculated fields. If $project isn’t what you’re looking for, can you include a sample document from each collection as well as the final output you’re hoping for?", "username": "Lauren_Schaefer" } ]
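Putting the reply together with the original question, a pipeline along the following lines returns just the two names. The collection and field names are assumptions about the poster's schema (ObjectId references stored in product and user, and a name field on both referenced collections), not details taken from the thread.

// Hedged sketch (mongo shell); names are placeholders.
db.reviews.aggregate([
  { $match: { rating: { $gte: 3 } } },
  { $lookup: { from: "products", localField: "product", foreignField: "_id", as: "product" } },
  { $lookup: { from: "users", localField: "user", foreignField: "_id", as: "user" } },
  // $lookup always produces an array; unwind the single matching document out of each.
  { $unwind: "$product" },
  { $unwind: "$user" },
  // Keep only the fields the question asks for.
  { $project: { _id: 0, rating: 1, productName: "$product.name", userName: "$user.name" } }
]);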
Adjusting $lookup results from an Array to a document
2021-05-23T09:01:44.772Z
Adjusting $lookup results from an Array to a document
4,791
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Session[2]: Failed to parse, or apply received changeset: ERROR: ArrayInsert: Invalid prior_size (list size = 4, prior_size = 0)", "username": "Amit_Chavan" }, { "code": "", "text": "Hi @Amit_Chavan, welcome to the community! Which SDK are you using? Are you able to describe the steps/code that can reproduce this issue?", "username": "Andrew_Morgan" } ]
Realm Data delete from atlas but getting this issue
2021-05-20T12:29:16.754Z
Realm Data delete from atlas but getting this issue
2,173
null
[]
[ { "code": "", "text": "Hi All,Can I please request a hand / a pointer towards docs that shows how to enable the ‘Export Data’ with the three-dots UI interface element, for an unauthenticated chart embedded in an iframe?The chart displays the data, auto-refreshes properly etc etc, but does not yet allow a generic end user to ‘interact’ with the chart via the three dots nor to select the ‘export data’ via the iframe-d charts…thanks in advance…(first time poster, long time user of mongodb…)", "username": "samos" }, { "code": "", "text": "Hey Sam -Thanks for using Charts! Unfortunately this is not a supported feature of embedded charts today. For feature requests, the place to go is feedback.mongodb.com - although this specific request is already on our radar and we hope to get to it soon.thanks\nTom", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to enable export from unauthenticated iframe chart?
2021-05-24T04:18:08.042Z
How to enable export from unauthenticated iframe chart?
1,866
null
[]
[ { "code": "2021-05-21T16:19:15.348+0000\tfound collection admin.rates bson to restore to admin.rates\n2021-05-21T16:19:15.348+0000\tfound collection metadata from admin.rates to restore to admin.rates\n2021-05-21T16:19:15.348+0000\tfound collection admin.savings bson to restore to admin.savings\n2021-05-21T16:19:15.348+0000\tfound collection metadata from admin.savings to restore to admin.savings\n2021-05-21T16:19:15.348+0000\tfound collection admin.system.roles bson to restore to admin.system.roles\n2021-05-21T16:19:15.348+0000\tfound collection metadata from admin.system.roles to restore to admin.system.roles\n2021-05-21T16:19:15.348+0000\tfound collection admin.system.users bson to restore to admin.system.users\n2021-05-21T16:19:15.348+0000\tfound collection metadata from admin.system.users to restore to admin.system.users\n2021-05-21T16:19:15.348+0000\tfound collection admin.system.version bson to restore to admin.system.version\nmongodump --uri mongodb+srv://$(db_user):$db_password@$(db_hostname) --oplogmongorestore --host mongo-restore-mongodb-0.mongo-restore-mongodb-headless.dr-backups.svc.cluster.local,mongo-restore-mongodb-1.mongo-restore-mongodb-headless.dr-backups.svc.cluster.local -vvvv --authenticationDatabase admin --username root --password <password> dump --drop", "text": "Hi,I am facing an issue with the mongorestore operation where in I have the fullbackup dump which is encrypted using vault transit secrets engine, when decrypted I can find all the databases and collections present. However when using mongorestore to restore to the seed instance the logs include found collection to restore to but never really restore to the instance. Some sample logs as belowThe command used for fullbackup is mongodump --uri mongodb+srv://$(db_user):$db_password@$(db_hostname) --oplogThe command used for restore is mongorestore --host mongo-restore-mongodb-0.mongo-restore-mongodb-headless.dr-backups.svc.cluster.local,mongo-restore-mongodb-1.mongo-restore-mongodb-headless.dr-backups.svc.cluster.local -vvvv --authenticationDatabase admin --username root --password <password> dump --dropStrangely though restoring to the local mongodb instance works fine, also to the new instance in mongodb atlas just facing issues with the helm chart based instance setup in the k8s cluster.Would appreciate any help with regard to this", "username": "Rachitha_Rajagopal" }, { "code": "", "text": "Please check mongo tools documentation.What is the version you are using?\n–host will be of the form–host=replSetName/hostname1:port,…Also when you use oplog with mongodump corresponding param need to pass while restore", "username": "Ramachandra_Tummala" }, { "code": "replSetName", "text": "Updating the --host by including replSetName helped. Thanks for all the help ", "username": "Rachitha_Rajagopal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
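For reference, the working shape of the two commands (the replica set name prefixed in --host, and --oplogReplay to match the --oplog taken at dump time) is roughly the following; the set name, host names and credentials are placeholders.

# Hedged sketch; "rs0", hosts and credentials are placeholders.
mongodump --uri "mongodb+srv://user:password@cluster-hostname" --oplog

mongorestore \
  --host "rs0/mongo-restore-mongodb-0.example.svc.cluster.local:27017,mongo-restore-mongodb-1.example.svc.cluster.local:27017" \
  --authenticationDatabase admin --username root --password "<password>" \
  --oplogReplay --drop \
  dump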
Mongorestore is failing to perform full restore of all the databases and collections to the seed instance installed using bitnami helm charts
2021-05-21T16:36:01.390Z
Mongorestore is failing to perform full restore of all the databases and collections to the seed instance installed using bitnami helm charts
3,750
null
[]
[ { "code": "", "text": "How to install realm database on windows phone 10 using c#, UWP", "username": "wael_elarby" }, { "code": "", "text": "Have you tried the instructions in our Getting Started guide?", "username": "nirinchev" }, { "code": "", "text": "Yes, I tried but the installation failed", "username": "wael_elarby" }, { "code": "", "text": "How did it fail? Its difficult for us to help you without seeing the details of your environment, how you installed, and what error you get when you attempt to install", "username": "Ian_Ward" }, { "code": "", "text": "I want to install realm database on windows mobile phone version 10 using UWP by c#\nMy environment :\nMicrosoft Visual Studio Professional 2015\nVersion 14.0.25431.01 Update 3\nMicrosoft .NET Framework\nVersion 4.8.04084Installed Version: ProfessionalWindows Phone SDK 8.0 - ENU 00322-50050-03552-AA993\nWindows Phone SDK 8.0 - ENUVisual Studio Tools for Universal Windows Apps 14.0.25527.01\nThe Visual Studio Tools for Universal Windows apps allow you to build a single universal app experience that can reach every device running Windows 10: phone, tablet, PC, and more. It includes the Microsoft Windows 10 Software Development Kit.Target : Universal Windows\nTarget version : windows 10 build 10586", "username": "wael_elarby" }, { "code": "", "text": "What is the error you’re getting?", "username": "nirinchev" }, { "code": "", "text": "Restoring packages for ‘MasHandHeld’.\nRestoring packages for E:\\last_lap\\CT50\\work\\projects\\MasHandHeld - BB\\MasHandHeld\\MasHandHeld\\project.json…\nRealm 10.1.4 is not compatible with UAP,Version=v10.0.\nSome packages are not compatible with UAP,Version=v10.0.\nRealm 10.1.4 is not compatible with UAP,Version=v10.0 (win10-arm).\nSome packages are not compatible with UAP,Version=v10.0 (win10-arm).\nRealm 10.1.4 is not compatible with UAP,Version=v10.0 (win10-arm-aot).\nSome packages are not compatible with UAP,Version=v10.0 (win10-arm-aot).\nRealm 10.1.4 is not compatible with UAP,Version=v10.0 (win10-x64).\nSome packages are not compatible with UAP,Version=v10.0 (win10-x64).\nRealm 10.1.4 is not compatible with UAP,Version=v10.0 (win10-x64-aot).\nSome packages are not compatible with UAP,Version=v10.0 (win10-x64-aot).\nRealm 10.1.4 is not compatible with UAP,Version=v10.0 (win10-x86).\nSome packages are not compatible with UAP,Version=v10.0 (win10-x86).\nRealm 10.1.4 is not compatible with UAP,Version=v10.0 (win10-x86-aot).\nSome packages are not compatible with UAP,Version=v10.0 (win10-x86-aot).\nPackage restore failed for ‘MasHandHeld’.\nPackage restore failed. Rolling back package changes for ‘MasHandHeld’.\n========== Finished ==========", "username": "wael_elarby" }, { "code": "", "text": "Per the Supported Platforms section in the docs, Realm supports UWP apps using .NET Standard 2.0 targeting Fall Creators Update or later (Build 16299). According to the MS Docs, you need VS 2017 or later to target that version of the UWP SDK. Can you try upgrading to a newer VS version and try setting Fall Creators Update or a newer build version as the target of your project.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm DB and windows phone 10
2021-05-22T19:21:55.324Z
Realm DB and windows phone 10
2,063
null
[ "transactions" ]
[ { "code": "", "text": "Hi,How is ‘write follows read’ guarantee maintained if a read request from a client goes to a follower server?\nSay client’s view of clusterTime is 2, and a particular mongodb server’s clusterTime is 1. If a read request goes to the server with afterClusterTime 2, the server needs to update its clusterTime to make sure any subsequence write requests will happen at later time. To make sure that the clusterTime is updated, the server needs to make sure it ticks the clusterTime by doing a no-op write?\nIts easy if the server where the read request is sent is a the primary. But what if the request goes to follower replica?\nThere are two scenarios hereThanks,\nUnmesh", "username": "Unmesh_Joshi" }, { "code": "", "text": "A supplementary question.\nI see that mongodb read concern handling does two checks.", "username": "Unmesh_Joshi" }, { "code": "", "text": "I see this discussion, which is related.I think my question in this post is related. And I think the no-op writes which happen as part of read requests, is critical to move the operationTime beyond the the time requested in read?", "username": "Unmesh_Joshi" } ]
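For readers who want to see the client-side half of this guarantee, a causally consistent session in the Node.js driver looks roughly like the sketch below. It only shows how the session carries operationTime/clusterTime between operations; it does not show the server-side no-op or waiting behaviour the question is really about. The connection string and namespace are placeholders.

// Hedged sketch; assumes a replica set is reachable at the placeholder URI.
const { MongoClient } = require("mongodb");

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017/?replicaSet=rs0", { useUnifiedTopology: true });
  const coll = client.db("test").collection("orders");

  const session = client.startSession({ causalConsistency: true });
  try {
    // The operationTime of this read is recorded on the session...
    await coll.findOne({ _id: 1 }, { session: session, readPreference: "secondaryPreferred" });
    // ...so this later write in the same session is causally ordered after it,
    // which is the "write follows read" guarantee discussed above.
    await coll.updateOne({ _id: 1 }, { $set: { checked: true } }, { session: session });
  } finally {
    session.endSession();
  }
  await client.close();
}

main().catch(console.error);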
Implementing 'Write follows Read' guarantee of causal consistency
2021-05-02T04:37:10.392Z
Implementing &lsquo;Write follows Read&rsquo; guarantee of causal consistency
3,720
null
[ "queries", "replication" ]
[ { "code": "", "text": "Hi All,\nI am facing an issue of data replication. Means, We are using replica set (1 Primary, 1 Secondary and 1 Arbiter) and through Java Application we are inserting documents (100 documents in a minute). For 1 day I can see those documents in data base after 1 day there was delay to insert the document. At java application says inserted the documents but we can’t see those documents in database when we make a query. It display after 30 minute. So it is a 30 minute delay.\nBut, If I use standalone mongo db Node (means without replication) then it works fine without any delay.\nPrimary and secondary servers are different and this is the configuration of secondary.systemLog:\ndestination: file\npath: F:\\MongoDB\\logs\\rs-node-2.log\nstorage:\nwiredTiger:\nengineConfig:\ncacheSizeGB: 18\ndbPath: F:\\MongoDB\\data\\force-rs\\rs-node-2\nnet:\nbindIp: 0.0.0.0\nport: 27018\nreplication:\nreplSetName: force-rs\nenableMajorityReadConcern: false\nsecurity:\nauthorization: enabledKindly suggestion what I can do.", "username": "RGupta5" }, { "code": "", "text": "There is not enough information for us to help.Primary and secondary servers are differentDifferent configuration? Different machine?Setting the port to 27018 makes me think you they are on the same hardware.What are the resources of the machine(s)? RAM, cpu, disks, …How are they connected?", "username": "steevej" }, { "code": "", "text": "Hello Steeve\nNow, we are getting this delay with single node mongo DB after 2 days. We are using virtual SDD disk.\nThis is the configuration of Single NodesystemLog:\ndestination: file\npath: E:\\MongoDB\\logs\\rs-node-1.log\nstorage:\ndbPath: E:\\MongoDB\\data\\force-rs\\rs-node-1\nnet:\nbindIp: 0.0.0.0\nport: 27017We are using Mongo Java Driver 3.11.2 and Mongo DB 4.0.Here is the screen shot of CPUHere is the disk activity", "username": "RGupta5" }, { "code": "", "text": "We are using virtual SDD disk.I am not sure what you mean by that.Is your machine real or virtual?Now, we are getting this delay with single node mongo DB after 2 days.If I understand the statement above, it means that you have the same delay in a single node configuration as with the PSA configuration.What is the average size of your document?Any indexes?", "username": "steevej" } ]
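Two checks that usually narrow this kind of delay down: issue the inserts with an explicit majority write concern (in a PSA set the arbiter cannot acknowledge writes, so the acknowledgement then waits for the data-bearing secondary and a lagging or unreachable secondary becomes visible immediately), and look at the replication lag directly. A minimal mongo shell sketch of both, with a placeholder namespace:

// Hedged sketch; "testdb.events" is a placeholder namespace.
// 1) Make the acknowledgement wait for the secondary as well as the primary:
db.getSiblingDB("testdb").events.insertMany(
  [ { ts: new Date(), v: 1 } ],
  { writeConcern: { w: "majority", wtimeout: 5000 } }
);

// 2) Inspect how far behind the secondary is (name used by MongoDB 4.0-era shells;
//    newer shells call it rs.printSecondaryReplicationInfo()):
rs.printSlaveReplicationInfo();
rs.status().members.forEach(function (m) {
  print(m.name + " " + m.stateStr + " optimeDate=" + m.optimeDate);
});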
Replica set data latency
2021-05-22T09:16:49.732Z
Replica set data latency
2,773