image_url | tags | discussion | title | created_at | fancy_title | views
---|---|---|---|---|---|---
[
"node-js"
] | [
{
"code": "",
"text": "Hi, I am facing an error while deploying my app to Heroku.Framework: Express on NodeJsThe error is-> Error: Cannot find module ‘./mongodb_aws’This is the error log:\nScreenshot from 2020-12-26 02-28-141070×758 170 KB",
"username": "Gamers_Subscribe_Ple"
},
{
"code": "mongodb_awsnode_modulesnpm install",
"text": "Hi @Gamers_Subscribe_Ple, welcome to the forums!The error is-> Error: Cannot find module ‘./mongodb_aws’Could you provide the version of MongoDB Node.js driver that you are using ?\nDo you have a similar named file on the project by chance? i.e. mongodb_awsHave you tried resetting the project ? i.e. move node_modules directory to a temporary location, then perform npm install again.If the solution above still does not work for you, please provide a minimal project that could reproduce the issue that you are seeing.Regards,\nWan.",
"username": "wan"
}
] | Getting error while production on Heroku | 2020-12-25T22:00:59.538Z | Getting error while production on Heroku | 3,578 |
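The fix suggested above boils down to confirming which Node.js driver version the project actually resolves and then reinstalling node_modules. A minimal sketch of that check, assuming the standard npm layout (the file name check-driver.js is made up for illustration; on driver releases that restrict deep imports, `npm ls mongodb` gives the same answer):

```js
// check-driver.js: print the locally resolved driver version so it can be
// compared with what Heroku installs at build time. The internal mongodb_aws
// module belongs to the MONGODB-AWS auth support added around the 3.6 driver line.
const pkg = require('mongodb/package.json');
console.log('mongodb driver version:', pkg.version);
```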
|
null | [] | [
{
"code": "sudo apt-get install gcc-8-aarch64-linux-gnu g++-8-aarch64-linux-gnu\n$ sudo dpkg --add-architecture arm64\n$ sudo apt-get update\n$ sudo apt-get install libssl-dev:arm64 libcurl4-openssl-dev:arm64\n\n$ git clone -b r4.4.0 https://github.com/mongodb/mongo.git\n$ cd mongo\n\n# Consider using a virtualenv for this\n$ python3 -m pip install --user -r etc/pip/compile-requirements.txt\n\n$ python3 buildscripts/scons.py --ssl CC=/usr/bin/aarch64-linux-gnu-gcc-8 CXX=/usr/bin/aarch64-linux-gnu-g++-8 CCFLAGS=\"-march=armv8-a+crc -mtune=cortex-a72\" --install-mode=hygienic --install-action=hardlink --separate-debug archive-core{,-debug}\n-mtune=cortex-a72\n-mtune=cortex-a53\n/usr/bin/objcopy --only-keep-debug build/opt/mongo/mongo build/opt/mongo/mongo.debug\nscons: building terminated because of errors.\nbuild/opt/mongo/mongo failed: Error 1\n",
"text": "Hello good morning, I’m creating a small application and my idea is to have a small embedded in an old raspberry pi 3 that I have. In my application I use mongo 4.4 and when I tried to install it I found that there were no pre-builds for arm64 so I started to investigate and found the next thread:In this thread the following is mentioned to create a build using cross-compiling:and made some modifications to adapt it to the rapsberry pi 3:Maybe the mistake is precisely that I ventured to just change that. However, I also did the test without making any modifications and the mistake is exactly the same(a53):In doing some more research I found that apparently instead of /usr/bin/objcopy you should use /usr/bin/arm-linux-gnueabihf-objcopy but here I miss too much.Basically I come to ask if I’m on the right track and if anyone knows how to fix it or if they’ve already encountered this same problem.Thank you very much.",
"username": "Alvaro_Hernandez"
},
{
"code": "objcopyCCCXXscons ... OBJCOPY=/path/to/some/objcopyscons ... OBJCOPY=/usr/bin/arm-linux-gnueabihf-objcopy",
"text": "Hi Alvaro -If you need to customize which objcopy is used, you can specify that on the SCons invocation just like you would the path to CC or CXX, like so:scons ... OBJCOPY=/path/to/some/objcopy.In your case, it looks like you would want scons ... OBJCOPY=/usr/bin/arm-linux-gnueabihf-objcopyHope that helps,\nAndrew",
"username": "Andrew_Morrow"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.4 ARM64 builds for Raspberry Pi 3 Debian Buster | 2020-12-11T12:09:51.761Z | MongoDB 4.4 ARM64 builds for Raspberry Pi 3 Debian Buster | 5,278 |
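Putting Andrew's hint together with the aarch64 toolchain installed earlier in the thread gives something like the sketch below. Note the objcopy path is an assumption: for an arm64 target the matching binary is normally /usr/bin/aarch64-linux-gnu-objcopy, while the arm-linux-gnueabihf-objcopy mentioned above belongs to the 32-bit hard-float toolchain.

```sh
# Hedged sketch: same invocation as above, with OBJCOPY pointed at the aarch64 binutils
python3 buildscripts/scons.py --ssl \
  CC=/usr/bin/aarch64-linux-gnu-gcc-8 \
  CXX=/usr/bin/aarch64-linux-gnu-g++-8 \
  OBJCOPY=/usr/bin/aarch64-linux-gnu-objcopy \
  CCFLAGS="-march=armv8-a+crc -mtune=cortex-a53" \
  --install-mode=hygienic --install-action=hardlink \
  --separate-debug archive-core{,-debug}
```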
null | [
"dot-net",
"atlas-device-sync"
] | [
{
"code": "private static async Task<bool> SynchroniseRealm(Realm realm, string driverName, bool upload, bool download, ProcessResult processResult, List<ProcessLog> processLogs)\n{\n\tstring task = string.Empty;\n\tbool synchronised = false;\n\tvar sw = new Stopwatch();\n\tsw.Start();\n\n\ttry\n\t{\n\t\tvar session = realm.GetSession();\n\t\tThread.Sleep(250);\n\n\t\tusing (var cts = new CancellationTokenSource())\n\t\t{\n\t\t\tcts.CancelAfter(TimeSpan.FromSeconds(_realmTimeout));\n\n\t\t\tif (upload)\n\t\t\t{\n\t\t\t\ttask = \"Upload\";\n\t\t\t\tawait SynchroniseRealmData(session, download).CancelAfter(cts.Token).ConfigureAwait(true);\n\t\t\t}\n\n\t\t\tif (download)\n\t\t\t{\n\t\t\t\ttask = \"Download\";\n\t\t\t\tawait SynchroniseRealmData(session, download).CancelAfter(cts.Token).ConfigureAwait(true);\n\t\t\t}\n\t\t}\n\n\t\tsynchronised = true;\n\t}\n\n\tcatch (OperationCanceledException ex)\n\t{\n\t\tstring msg = $\"{task} for {driverName} failed to complete within {_realmTimeout} seconds.\\nError: {ex.GetFullMessage()}\";\n\t\tService.LogEntry(EventLogEntryType.Error, msg);\n\t\tvar processLog = new ProcessLog(processResult.ProcessId, processResult.DriverId, EventLogEntryType.Error, nameof(UpdateRealmData), nameof(SynchroniseRealm), msg, ex.StackTrace.HasValue() ? ex.StackTrace : null);\n\t\tprocessLogs.Add(processLog);\n\t\t_restartService = true;\n\t}\n\n\tcatch (Exception ex)\n\t{\n\t\tService.LogException(new CrsException(nameof(UpdateRealmData), nameof(SynchroniseRealm), ex.GetFullMessage()));\n\t\tvar processLog = new ProcessLog(processResult.ProcessId, processResult.DriverId, EventLogEntryType.Error, nameof(UpdateRealmData), nameof(SynchroniseRealm), ex.Message, ex.StackTrace.HasValue() ? ex.StackTrace : null);\n\t\tprocessLogs.Add(processLog);\n\t}\n\n\tfinally\n\t{\n\t\tlock (appLock)\n\t\t{\n\t\t\tprocessResult.RealmSynchronised += sw.Elapsed;\n\t\t}\n\t}\n\n\treturn synchronised;\n}\n\nprivate static async Task SynchroniseRealmData(Session session, bool download)\n{\n\tif (download)\n\t\tawait session.WaitForDownloadAsync().ConfigureAwait(true);\n\telse\n\t\tawait session.WaitForUploadAsync().ConfigureAwait(true);\n}\n",
"text": "I am currently having a problem where none of my Realms can download the latest changes. This is affecting all of my users. Executing the following code results in a timeout exception after 300 seconds;",
"username": "Raymond_Brack"
},
{
"code": "",
"text": "Hi @Raymond_Brack,I suggest contacting MongoDB Cloud Support for help investigating this issue. It sounds like this code was previously working so perhaps there is an operational issue with your cloud service.When you contact support, it would be helpful to provide the version of Realm SDK and cloud service that you are using (Realm Legacy Cloud or MongoDB Realm) to assist with investigation.Regards,\nStennie",
"username": "Stennie_X"
}
] | Realm Failing to Download | 2021-01-04T22:22:29.804Z | Realm Failing to Download | 1,714 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.0.22 is out and is ready for production deployment. This release contains only fixes since 4.0.21, and is a recommended upgrade for all 4.0 users.\nFixed in this release:",
"username": "Jon_Streets"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.0.22 is released | 2021-01-04T21:10:29.270Z | MongoDB 4.0.22 is released | 2,274 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.4.3 is out and is ready for production deployment. This release contains only fixes since 4.4.2, and is a recommended upgrade for all 4.4 users.\nFixed in this release:",
"username": "Jon_Streets"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.4.3 is released | 2021-01-04T21:07:16.879Z | MongoDB 4.4.3 is released | 2,467 |
null | [
"connecting"
] | [
{
"code": "2021-01-03T10:15:04.722-0500 I NETWORK [js] Marking host cluster0-shard-00-01.l4a1z.mongodb.net:27017 as failed :: caused by :: Location8000: can't authenticate against replica set node cluster0-shard-00-01.l4a1z.mongodb.net:27017 :: caused by :: Authentication failed.\n2021-01-03T10:15:04.722-0500 E QUERY [js] Error: can't authenticate against replica set node cluster0-shard-00-01.l4a1z.mongodb.net:27017 :: caused by :: Authentication failed. :\nDB.prototype._authOrThrow@src/mongo/shell/db.js:1685:20\n@(auth):6:1\n@(auth):1:2\nexception: login failed\n",
"text": "Hi,I got this error from command line ‘mongo’ (per connect method from the company site). Could someone point me to the right direction? I migrated to atlas from classic mongo about month so ago.Thank you!!Mike",
"username": "Michael_Chen"
},
{
"code": "Authentication failed",
"text": "Authentication failedThis usually means wrong user name or password. Verify your credentials.",
"username": "steevej"
},
{
"code": "mongomongomongomongo --version",
"text": "Welcome to the MongoDB community @Michael_Chen!As @steevej suggested, a likely issue is that you have the wrong credentials for connecting to Atlas.Note that the credentials for Atlas User Access (logging in via the Atlas UI) are not the same as those for a Database User. If you are trying to use an email address as the username in your mongo shell connection, it is likely that is an Atlas User rather than a Database User.I would also check that you are using a version of the mongo shell which is at least as new as the server version your Atlas cluster is using. If you are using a significantly older shell, it may be lacking required support for TLS encryption or compatible authentication methods.For more suggestions on authentication issues, please see the Atlas guide to Troubleshoot Connection Issues.If you are still having trouble connecting, please provide an example of the mongo command line you are using (with any user/password/cluster details replaced) as well as the output of mongo --version.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I think I got it. You all right.Am bit lazy to figure out after classic mongo to atlas mongo why the code is not returning the expected result. I can see the data is there (via online dashboard)const client = new MongoClient(uri, { useNewUrlParser: true });client.connect(err => {\nconst collection = client.db(“mydb”).collection(“mycollection”);\n// dont seems show any my data in ‘collection’ ???\n});",
"username": "Michael_Chen"
}
] | Command line login fail | 2021-01-03T18:54:45.716Z | Command line login fail | 3,629 |
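On the last snippet in this thread: the callback never actually runs a query, so no documents are printed. A hedged Node.js sketch (database and collection names are the poster's placeholders, and the connection string is a stand-in for the real Atlas URI):

```js
const { MongoClient } = require('mongodb');

const uri = 'mongodb+srv://<dbUser>:<dbPassword>@<cluster>.mongodb.net/test'; // placeholder

async function main() {
  const client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });
  await client.connect();
  try {
    // find() only builds a cursor; toArray() (or iteration) actually fetches documents
    const docs = await client.db('mydb').collection('mycollection').find({}).toArray();
    console.log(docs.length, 'documents found');
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```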
null | [
"queries"
] | [
{
"code": "{\"$accommodates\": {\"$gt\": 6}, \"reviews\": {\"$size\": 50}}\n{\"$gt\": {\"$accommodates\": 6}, \"reviews\": {\"$size\": 50}}\n",
"text": "i’m following mongo db course 001, chapter 4, lab 1 of array operators.\ni’ve got the following quiz, and the query to solve it, but atlas ui tells me there’s something wrong in the query.\nwhat’s wrong?What is the name of the listing in the sample_airbnb.listingsAndReviews dataset that accommodates more than 6 people and has exactly 50 reviews?also tried this:",
"username": "bb8"
},
{
"code": "",
"text": "Questions related to MongoDB university are better served in the university course specific forum.atlas ui tells me there’s something wrong in the query.What is the exact error? Posting a screenshot is the best way as we see the context in which the issue is happening.One thing is sure is that if accomodates is a field name, then like reviews it does not take a leading dollar sign.",
"username": "steevej"
},
{
"code": " {\"accommodates\": {\"$gt\": 6}, \"reviews\": {\"$size\": 50}}\n",
"text": "Hi @bb8,I think the issue is using a $ which is unneeded before the Field name. TryThanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "yes, that was the problem, thanks",
"username": "bb8"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Array operators query issue | 2021-01-03T18:53:57.455Z | Array operators query issue | 4,868 |
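For completeness, the corrected filter as it would be run from the mongo shell against the lab dataset, projecting just the listing name (field names are the ones used in the course):

```js
use sample_airbnb
db.listingsAndReviews.find(
  { accommodates: { $gt: 6 }, reviews: { $size: 50 } },
  { name: 1, _id: 0 }
)
```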
null | [] | [
{
"code": "",
"text": "I am getting unhandled exception while running the command (C:\\ProgramFiles\\MongoDB\\Server\\4.4\\bin) \" mongod --dbpath “C:\\ data\\ db”\nalso tried \"mongod --dbpath=“C:\\data\\db”The issue is that mongod server runs for few minutes and then it stops by giving “immediate exit due to Unhandled exception”\nKindly look to it.",
"username": "Akansha_Saxena1"
},
{
"code": "",
"text": "Please follow the instructions. You do not need to start mongod in the M001 course. You will be using an Atlas cluster.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @Akansha_Saxena1,I hope you found @steevej-1495’s response helpful.Let us know if you are still facing any issues.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | I have issue with the starting of Mongdb | 2020-12-29T18:58:16.081Z | I have issue with the starting of Mongdb | 1,538 |
null | [
"python"
] | [
{
"code": "",
"text": "according to documentation,\nschema could be added when using mongo shell to add a collection\nHowever does pymongo suppose this ?",
"username": "first_name_last_name"
},
{
"code": "",
"text": "I use python with MongoDB but I always do schema in Node.js",
"username": "Jack_Woehr"
},
{
"code": "mongo",
"text": "Welcome to the community!Can you provide more information on the documentation reference you are referring?MongoDB has flexible schema, so unlike tabular databases there is no central catalog or required schema for a collection. You can define Schema Validation rules for inserts and updates if more rigor is required.Pymongo (and other MongoDB drivers) support the same functionality available via the mongo shell, so if you can provide some details on the documentation you are reviewing we may have more specific suggestions.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "You might find your answer here: Defining data schema using pymongo - #2 by MaBeuLux88",
"username": "Shane"
}
] | Pymongo schema support | 2020-12-25T19:02:34.449Z | Pymongo schema support | 10,537 |
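To make the answers above concrete: PyMongo does not enforce a schema client-side, but it can pass the same $jsonSchema validator the shell would when creating a collection. A minimal sketch, with the connection string, collection name, and fields invented for illustration:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumption: local test server
db = client["mydb"]

# Server-side schema validation, equivalent to passing a validator in the shell's
# db.createCollection() options.
db.create_collection(
    "users",
    validator={
        "$jsonSchema": {
            "bsonType": "object",
            "required": ["email"],
            "properties": {
                "email": {"bsonType": "string", "description": "must be a string"},
            },
        }
    },
)
```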
null | [
"golang"
] | [
{
"code": "bsonoptions.SliceCodec().SetEncodeNilAsEmpty(true)registry := bsoncodec.NewRegistryBuilder()\nbsoncodec.DefaultValueEncoders{}.RegisterDefaultEncoders(registry)\nnilSliceCodec := bsoncodec.NewSliceCodec(bsonoptions.SliceCodec().SetEncodeNilAsEmpty(true))\nregistry.RegisterDefaultEncoder(reflect.Slice, nilSliceCodec)\nopts.SetRegistry(registry.Build())\nno decoder found for interface {}RegisterDefaultEncoders",
"text": "I’m attempting to use bsonoptions.SliceCodec().SetEncodeNilAsEmpty(true), but I’m having issues building the registry. This is what I have so far:When I do not override the registry, my tests work fine. When I add this hunk of code though, I start getting back this error: no decoder found for interface {}.\nMy current guess is that RegisterDefaultEncoders isn’t registering the proper encoder for interfaces, but I’m a little unclear about how to specify them.@Isabella_Siu - any ideas on what I might be doing wrong here?",
"username": "TopherGopher"
},
{
"code": "bson.NewRegistryBuilderbsoncodec.NewRegistryBuilder",
"text": "Hi @TopherGopher,With this code, you’re only registering encoders, and instead, you likely want to do the followng:nilSliceCodec := bsoncodec.NewSliceCodec(bsonoptions.SliceCodec().SetEncodeNilAsEmpty(true))\nregistry := bson.NewRegistryBuilder().RegisterTypeEncoder(reflect.Slice, nilSliceCodec).Build()The difference is that bson.NewRegistryBuilder pre-registers all the default encoders and decoders (which can then be overwritten) while bsoncodec.NewRegistryBuilder sets up an empty registry builder.",
"username": "Isabella_Siu"
}
] | How can I use a codec for 'kind' for setting the NilAsEmpty SliceCodec? | 2020-12-29T00:46:06.819Z | How can I use a codec for ‘kind’ for setting the NilAsEmpty SliceCodec? | 1,966 |
[] | [
{
"code": " {\n $search: {\n index: 'autocomplete',\n autocomplete: {\n query: args.searchText,\n path: 'title',\n tokenOrder: 'sequential',\n fuzzy: {\n maxEdits: 2,\n },\n },\n },\n },\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"title\": [\n {\n \"foldDiacritics\": false,\n \"maxGrams\": 7,\n \"minGrams\": 3,\n \"tokenization\": \"edgeGram\",\n \"type\": \"autocomplete\"\n }\n ]\n }\n }\n}\n",
"text": "I have this queryThis indexGetting this result… I would think salesforce would be first? Any idea what setting I can tinker with?Screen Shot 2021-01-03 at 11.31.57 PM926×326 14.2 KB",
"username": "Anthony_Comito"
},
{
"code": "",
"text": "I thought maybe I just needed to sort my aggregation by a score field, but my understanding is atlas search/the $search step is doing that by default?So not sure if its my $search options or the index settings that are not giving me what I’m aiming for",
"username": "Anthony_Comito"
},
{
"code": "",
"text": "Here’s another where it is giving me the intended result, but it seems to be the last result for some reasonScreen Shot 2021-01-04 at 9.15.20 AM866×468 30.4 KB",
"username": "Anthony_Comito"
}
] | Trying to understand atlas search autocomplete result | 2021-01-04T05:48:05.383Z | Trying to understand atlas search autocomplete result | 2,325 |
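$search does order results by relevance, so one way to understand the ranking above is to project the score and compare it across documents while tuning minGrams/maxGrams. A hedged sketch of the same pipeline with the score exposed (searchText stands in for args.searchText):

```js
[
  {
    $search: {
      index: 'autocomplete',
      autocomplete: {
        query: searchText,           // placeholder for args.searchText
        path: 'title',
        tokenOrder: 'sequential',
        fuzzy: { maxEdits: 2 }
      }
    }
  },
  {
    $project: {
      title: 1,
      score: { $meta: 'searchScore' }  // inspect why e.g. "salesforce" ranks where it does
    }
  }
]
```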
|
null | [
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "hi,seems my realm cloud instance is down again: (since 9.00am the server is restarting every few minutes)https://samay.de1a.cloud.realm.iohelp would be greatly appreciated. our production apps ar down as a result…thanks",
"username": "rouuuge"
},
{
"code": "",
"text": "Hi,Thanks for flagging this issue. It looks like your instance (on the basic tier) has been suffering from memory and CPU pressure and this caused a number of restarts.We have now applied some changes to help prevent these restarts from happening. Could you please confirm if you see any improvement from your end?Kind Regards,\nMarco",
"username": "Marco_Bonezzi"
},
{
"code": "",
"text": "thank you marco, looks good so far",
"username": "rouuuge"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Cloud Legacy Instance is down | 2021-01-04T11:50:16.202Z | Realm Cloud Legacy Instance is down | 3,360 |
null | [
"php"
] | [
{
"code": "",
"text": "hi. I am using php and gridFS.We want to upload more than 100.000 files from a folder. But mongodb stops, maybe because of memory??? I used a loop with storeFile() or put().Is there a better way like bulk or stream to avoid this???",
"username": "Infos_and_Facts"
},
{
"code": "php -vmax_execution_time",
"text": "Welcome to the MongoDB community @Infos_and_Facts!mongodb stops, maybe because of memory??? I used a loop with storeFile() or put().Please provide some more details:A common error for long running web tasks is exceeding PHP’s max_execution_time (30 seconds by default) or a timeout in the web server configuration. If you are logging or displaying errors in your PHP code, execution timeouts should result in a fatal error message similar to: “Fatal error: Maximum execution time of 30 seconds exceeded …”.Regards,\nStennie",
"username": "Stennie_X"
}
] | GridFS Upload stops | 2021-01-03T09:44:44.070Z | GridFS Upload stops | 1,996 |
null | [] | [
{
"code": "",
"text": "Hello there, I have the following document structure in my collection movies.The problem here is if I search for a soundtrack (via text search), the wrong movie is returned because the title and title_synonyms often have values wich are kind of near. Is there a way to set how much a field influences the score of a document?",
"username": "Richard_N_A"
},
{
"code": "",
"text": "Hi @Richard_N_A,Welcome to MongoDB community!To achieve this I would suggest to use the compound syntax to search each field path seperately :Use the compound operator to combine multiple operators in a single query and get results with a match score.Now on the field you want to boost you can adjust it scoring by either using score boosting or having a constant score with a high enough value over the others:Normalize or modify the score assigned to a returned document with the boost, constant, embedded, or function operator.Best\nPavel",
"username": "Pavel_Duchovny"
}
] | Custom scoring for Atlas Search | 2021-01-02T20:44:27.310Z | Custom scoring for Atlas Search | 3,574 |
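A sketch of what Pavel describes, searching the two fields as separate clauses and boosting title so exact title hits outrank synonym hits. Field names are taken from the question; the query text and boost value are arbitrary examples:

```js
{
  $search: {
    compound: {
      should: [
        {
          text: {
            query: "some soundtrack name",
            path: "title",
            score: { boost: { value: 5 } }   // title matches weigh ~5x more
          }
        },
        {
          text: {
            query: "some soundtrack name",
            path: "title_synonyms"
          }
        }
      ]
    }
  }
}
```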
[
"app-services-user-auth"
] | [
{
"code": "",
"text": "Hi all! I have database and collection. In my collection I have some data which I want display in app. Do I need register user first to get access to collections? Right now I’m authentication user with anonymous credential. if auth success I’m opening database with partitionValue. I wonder is it possible to open database without any authentication? If not is it possible to change anonymous user to as a Email/Password user? I also implemented Email/Password registration and login. But I have 2 users 1) anonymous user, 2)Email/Password user. I wan’t change anonymous user as a registered userСнимок экрана 2020-12-31 в 14.32.101780×908 174 KB",
"username": "nomadic_warrior"
},
{
"code": "",
"text": "My understanding of Realm is that in order to open a Realm, you need a configuration object, and in order to create a configuration object, you need a logged in user (anonymous or otherwise). You can always log in as an anomymous user, display stuff, and then logout prior to loging in again as a email/password user. MongoDB Realm can support multiple authentication providers, so this should not be a problem.Richard Krueger",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "To add slightly to Richard’s answer – within Realm you also have the option to Link User Identities. This should allow you to start with an anonymous user and then link an email/password identity when the user creates it (giving you two separate identities linked to a single User/ID).",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Thank you very much Mr. Richard Krueger! I understood that before to get access to any database or collection I have to setup user auth first. But it’s hard to me to understand turn Anonymous user to Email/Password user. I won’t want have user have 2 different account, 1) Anonymous 2)Email/Password. I read MongoDB removes anonymous users credentials if user has no activity.",
"username": "nomadic_warrior"
},
{
"code": "",
"text": "Thank you very much! I didn’t knew about Link User, I think there was no mentioned on Task Tracking tutorial.",
"username": "nomadic_warrior"
}
] | Anonymous, Email/Password, iOS | 2020-12-31T09:10:43.726Z | Anonymous, Email/Password, iOS | 2,617 |
|
null | [
"compass"
] | [
{
"code": "",
"text": "Hi, I want to filter a text field and match against part of a word (LIKE in sql). Is it possible to do this in the Documents tab ? I could not find any documentation / examples on the mongo website.\nThanks",
"username": "Jon_C"
},
{
"code": "LIKEREGEXP{myfield:/pattern/}",
"text": "Hi @Jon_C,You can use regular expressions in your filter criteria within Compass. Regular expressions provide a superset of SQL’s LIKE functionality with more advanced pattern matching options similar to REGEXP functions in SQL.For example: {myfield:/pattern/} would perform a case-sensitive substring match for the given pattern.I would draw your attention to the Index Use information on regular expressions, as many regular expression queries cannot use indexes effectively.Some useful references that may make your SQL experience more relatable:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using compass to filter Like query | 2021-01-03T22:22:35.898Z | Using compass to filter Like query | 41,891 |
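A few variations of that filter as typed into Compass's filter bar; myfield and pattern are placeholders:

```js
{ myfield: /pattern/ }    // case-sensitive substring match, like SQL LIKE '%pattern%'
{ myfield: /pattern/i }   // case-insensitive variant
{ myfield: /^pattern/ }   // anchored prefix, like LIKE 'pattern%' (can use an index on myfield)
```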
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "Hi,\nI’m currently designing my database classes/schemas and I like the embedded object feature. To understand the syncing process I just have one question.\nIf I change one property of an embedded object, does Realm Sync has to sync the whole object + the parent object also? Or which parts actually get synced?:\na) only the property\nb) the whole embedded object (which might include children , if there are any)\nc) the whole embedded object + all parentsThank you",
"username": "David_Funk"
},
{
"code": "",
"text": "Hi David – This depends a little on what the actual change to the database is. Realm generally sends diffs/changes at the leaf-level so if a single property is updated then the change sent to the client would be the property + metadata describing the change (most closely corresponding with A).One area where you may see additional information being sent as a part of the change is if you are performing ‘replaces’ at the document-level directly to MongoDB – In this case we will send the entire new top-level document, corresponding to C).",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Embedded Object Sync | 2020-12-31T07:39:58.119Z | Embedded Object Sync | 2,105 |
null | [
"containers",
"configuration"
] | [
{
"code": "prod-master1.2.5.10prod-mongodb1.2.5.11pingbindIp: 0.0.0.0/etc/mongod.confbindIpbindIp: localhost,1.2.5.10prod-mongodb1.2.5.10\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Cannot assign requested address\"}\n",
"text": "Hello MongoDB people,I am currently setting up a fresh mongoDB deployment on Digital Ocean. I have 2 droplets:Both droplets are in the same DO region, both droplets are in the same (standard/default) VPC of the DigitalOcean region, both have private IPs, both can ping the other one via the private IP!Until now, I was running a bindIp: 0.0.0.0 setup in my /etc/mongod.conf and everything was fine. Now I want to secure this setup for going live.If I understand bindIp correctly I would now change it to bindIp: localhost,1.2.5.10 to only allow the prod-mongodb droplet itself (= localhost) and my other main droplet (= 1.2.5.10) to access my mongo DB. correct?When I try this my mongod service can’t start up anymore and I end up with the following error:I now have read so many tutorials and stuff and I really can’t find any clues on what I am doing wrong! Please help me out here!Thanks a bunch, best regards\nPatrick",
"username": "Patrick_Schubert"
},
{
"code": "bindIPbindIpifconfig -a | grep \"inet\"prod-mongodb1.2.5.11127.0.0.1,1.2.5.111.2.5.10",
"text": "Welcome to the MongoDB community @Patrick_Schubert!The bindIP directive determines which local network interfaces the MongoDB process listens to, not the specific remote addresses that are allowed to connect.The “Cannot assign requested address” error indicates you are trying to bind to an address that is not a local network interface for that droplet.The only valid values for bindIp are local network interfaces for the MongoDB process. For example, on Linux any local IPs would appear in the output of ifconfig -a | grep \"inet\".If prod-mongodb has 1.2.5.11 as a local network interface, you could bind to 127.0.0.1,1.2.5.11.To limit remote connections to those originating from 1.2.5.10 you need to configure appropriate firewall settings. See Network Hardening and the MongoDB Security Checklist for more details.Regards,\nStennie",
"username": "Stennie_X"
}
] | DigitalOcean Setup using private IPs - SocketException: Cannot assign requested address | 2021-01-03T18:54:26.797Z | DigitalOcean Setup using private IPs - SocketException: Cannot assign requested address | 4,202 |
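Putting that together for this setup, a hedged sketch: the addresses are the ones from the question, and the firewall line assumes ufw is in use on the droplet (DigitalOcean's cloud firewall achieves the same thing):

```yaml
# /etc/mongod.conf on prod-mongodb (bind to local interfaces only)
net:
  port: 27017
  bindIp: 127.0.0.1,1.2.5.11
```

```sh
# allow mongod connections only from prod-master, assuming ufw
sudo ufw allow from 1.2.5.10 to any port 27017 proto tcp
```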
null | [
"app-services-user-auth",
"graphql"
] | [
{
"code": "",
"text": "Are there any examples out there that implement full user authorization with graphql mutations?I’m having a hard time setting up a single collection to allow an authenticated user to:I would expect this to be fairly straightforward and would love to see how someone else has done it.",
"username": "Travis_N_A"
},
{
"code": " {\n \"name\": \"insert\",\n \"apply_when\": {},\n \"insert\": true,\n \"delete\": false,\n \"search\": false,\n \"fields\": {\n \"_partition\": {},\n \"address\": {},\n \"description\": {},\n \"user_id\": {\n \"write\": true\n },\n },\n \"additional_fields\": {}\n },\ndoes not have insert permission for document with _id: ObjectID(\\\"5fc....\\\"): user_id cannot be written to\"; code=\"ArgumentsNotAllowed\";query {\n listings(\n query: {\n user_id: {\n user_id: \"5f......\"\n }\n }\n \n ){\n _id\n }\n}\n[]query {\n listings(\n query: {\n user_id: {\n _id: \"5f...\"\n }\n }\n \n ){\n _id\n }\n}\nRead: true",
"text": "So far what I’ve managed is to:Adding the user_id field as write: true was necessary to get past an error: does not have insert permission for document with _id: ObjectID(\\\"5fc....\\\"): user_id cannot be written to\"; code=\"ArgumentsNotAllowed\";But even with this I can’t seem to filter to just these records.returns []; as does:I definitely see a matching record in the collection.\nI’ve enabled “Search Documents” and marked all fields as Read: true (which I don’t want).Insecurely, this setup also allows me to insert a document with any random made-up user-id.",
"username": "Travis_N_A"
},
{
"code": "WriteAll additional fieldsaddress cannot be written to\"; code=\"ArgumentsNotAllowed\"",
"text": "In addition, despite checking Write for All additional fields, fields are not writable unless I specifically name them. You can see that here with address.Screen Shot 2020-11-29 at 1.41.43 PM724×341 20.7 KBBefore adding address to this I received: address cannot be written to\"; code=\"ArgumentsNotAllowed\"",
"username": "Travis_N_A"
},
{
"code": "",
"text": "Hey Travis,Just noticed you have a “_partition” field in your schema. Do you happen to be using Sync simultaneously? In that case your Rules will actually have to be set in the Sync section of the UI https://docs.mongodb.com/realm/mongodb/define-roles-and-permissions/If that’s not the case, do you mind linking your app url here so I can take a look at your permissions. Generally for your use-case you would set the “apply when” to something like this:and check “insert” and “read”",
"username": "Sumedha_Mehta1"
},
{
"code": "_partition_partition: \"ff\"\"<Owner ID Field>\": \"%%user.id\"",
"text": "Hi Sumedha, I’m not using sync and I don’t think I’m correctly using the _partition field. Its just something I added based on some of the docs. I am always sending _partition: \"ff\" to keep it happy, but haven’t made any further use of it. I am only using authentication and Graphql. Do you think I should remove it?I was using \"<Owner ID Field>\": \"%%user.id\" initially but changed to using a function based on this thread.\nI had it setup as you describe.Is it safe to post the app-id here - isn’t that the only secret for a web app? Wouldn’t posting it make it usable by any passers-by?",
"username": "Travis_N_A"
},
{
"code": "",
"text": "A few things:. _partition is a field that is required for Sync. If any part of the documentation made this unclear or hard to understand, let us know so we can fix itI responded to the user.id issue in another thread. Let me know if that worked (permissions for allowing users to only see their document).for insert a document, you can use the same isOwner role and check “insert” (as you already have). The other insert role may be unnecessary unless you add a separate “apply when” expression.",
"username": "Sumedha_Mehta1"
},
{
"code": "user_idroot",
"text": "Thanks for your help Sumeda.I responded to the user.id issue in another thread. Let me know if that worked (permissions for allowing users to only see their document).That did help. I have a field of user_id that is an ObjectId. In the other thread the meaning of root was unclear to me. Now I know that it is the top of the document being evaluated, regardless of whether that document is being read from the backend or coming from a graphql mutation and being evaluated for potential insert.For anyone else following along please check the other thread if trying to get object authorization working.Since resolving that I was able to remove all the other permissions (as you suggested) which were just there to allow me to keep going on the front-end until this could be resolved.. _partition is a field that is required for Sync. If any part of the documentation made this unclear or hard to understand, let us know so we can fix itI have removed the _partition field. I put this in very early on - not sure which document I was reading at that time.It would be really helpful if there were a working sample app that implemented authorization using any major web framework.",
"username": "Travis_N_A"
},
{
"code": "query MyListings($userId: ObjectId!) {\n listings(query: {\n user_id: {\n _id: $userId\n }\n }) {\n _id\n description\n }\n}\n{\n \"data\": {\n \"listings\": []\n }\n}\nffffffffffffffffffffffffquery ListingsWithDescription($description: String!) {\n listings(query: {\n description: $description\n }) {\n _id\n description\n }\n}\n",
"text": "So I thought I had this working, but as soon as I wanted to allow some fields to be public I ran back into these issues.I added a new role that has “read” for some fields and put it to the right of the existing role. This worked and I could now retrieve all records from any user using graphql.\nHowever, I don’t want to always get all of them, I want an individual user to see their own records only most of the time. To that end I modified my graphql query to filter by the user’s own ID:This unfortunately returns no results:And yes, I’m sure the userID is correct and on the records. I copy-pasted it and even altered some records to have their user_id set to ffffffffffffffffffffffff and ran it with that - no luck.To validate that I’m using the query-input types correctly I also ran this query which did return results:Not sure where to go from here, this seems like a continuation of the initial problem. I’ve been unable to resolve it.ObjectId clearly behaves differently as indicated by the other thread - but its not clear how to filter on an ObjectId field.\nHow can I filter to a specific user’s listings?",
"username": "Travis_N_A"
}
] | Graphql web examples with user authorization? | 2020-11-29T21:21:38.301Z | Graphql web examples with user authorization? | 4,232 |
null | [
"graphql",
"app-services-data-access"
] | [
{
"code": "mutation($user: ObjectId!, $comment: String!) {\n insertOneComment(data: {\n comment: $comment\n user: $user\n }) {\n _id\n }\n}\n{\n \"user\": \"%%user.id\"\n}\n{\n \"user\": \"<SOME_USER_OBJECT_ID>\"\n}\n",
"text": "Hello,I am trying to get a straightforward user check working when inserting a comment over Apollo GraphQL.The request should only be allowed if the user field matches the currently authenticated user.In Realm rules, under owner I have tried lots of variations, including hardcoding the user ID to validate. Unfortunately, I can’t get the rule to work.Hardcoded testI read on another thread that this could be due to the client passing the user as an ObjectId and not a String. However, if I try a String this doesn’t work as GraphQL is expecting an ObjectId.Any help would be greatly appreciated.",
"username": "NIALLO"
},
{
"code": "user",
"text": "Welcome to the community Niall -Can you link your app URL so I can take a closer look at your schema/rules?If user is a field with an ObjectId type in your schema and you’re not using custom user data, this should work unless there are other rules preventing this or a typo in the schema/user. I believe the reason hardcoding the object Id isn’t working either is because it is being treated as a string, and thus not passing the rule.",
"username": "Sumedha_Mehta1"
},
{
"code": "%%user.id%%oidToString%%stringToOid {\n \"%%true\": {\n \"%function\": {\n \"name\": \"equalStrings\",\n \"arguments\": [\n \"%%user.id\",\n \"%%root.user\"\n ]\n }\n }\n }\nexports = function(arg1, arg2){\n return String(arg1) == (String(arg2));\n};",
"text": "Hey Niall - similar to the post you read said, %%user.id is actually a string. Therefore, you’re comparing an objectId to a string and it’s not passing the rule.To get around this you can use a function (although we are introducing expressions such as %%oidToString and %%stringToOid very soon)rule:function (equalStrings, System Function):",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Thank you Sumedha, this is excellent.Where do I add the equalsString function in the format that you provided? I presume it can’t sit within the Rule as it’s expecting a JSON expression.Does it need to be placed in the custom functions section? I have tried that and it’s having trouble with the formatting provided.",
"username": "NIALLO"
},
{
"code": "equalStrings",
"text": "It needs to be added in the “Functions” section (accessed via the side nav). The name of the function has to be the same as what is referred to in your rules (mine was called equalStrings) and be a System function, the logic has to return whether the two strings (or objectIds) are equal.What formatting trouble were you running into?",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "It’s working now, thanks for your help ",
"username": "NIALLO"
},
{
"code": "",
"text": "@Sumedha_Mehta1 I think I’m running into this issue as well. I’m unable to assign the user through graphql (even graphiql).Do you know if this support has been added in yet? Or is this function still needed?Thanks!",
"username": "Travis_N_A"
},
{
"code": "",
"text": "@Travis_N_A I’m not sure what you mean by “assign the user” hereDo you mean assign rules/roles via GraphQL?",
"username": "Sumedha_Mehta1"
},
{
"code": "query {\n listings(\n query: {\n user_id: {\n _id: \"5fb......\"\n }\n } \n ){\n..... \n",
"text": "I mean is this function necessary still to match a user up with their records using graphql. Do I still need this function to match up a user with their own records, or does it now properly handle user id strings when comparing? Specifically I’m sending:",
"username": "Travis_N_A"
},
{
"code": "%%root.user%%root.user_iduser",
"text": "Looking at your app, it seems like you copied and pasted from the snippet above, but you would have to replace %%root.user to %%root.user_id since that is what your field is called. (in the first example it was user).",
"username": "Sumedha_Mehta1"
},
{
"code": "%%oidToString%%stringToOid%%oidToString%%stringToOid",
"text": "To get around this you can use a function (although we are introducing expressions such as %%oidToString and %%stringToOid very soon)Is this still necessary though? Earlier in the thread you mentioned “… we are introducing expressions such as %%oidToString and %%stringToOid very soon” as a replacement.",
"username": "Travis_N_A"
},
{
"code": "",
"text": "Yes, you should be able to find relevant expressions here - https://docs-mongodbcom-staging.corp.mongodb.com/realm/docsworker-xlarge/oid/services/expression-variables.html#operators",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "That link shows a 404 “NoSuchKey”",
"username": "Travis_N_A"
},
{
"code": "",
"text": "That link is for an internal docs staging site, here’s the link for the live docs: https://docs.mongodb.com/realm/services/expression-variables#ejson-conversion",
"username": "nlarew"
}
] | Permission rule for owner user, Apollo GraphQL | 2020-10-14T09:27:23.848Z | Permission rule for owner user, Apollo GraphQL | 5,705 |
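With the EJSON conversion operators described on that page, the string-comparison function above shouldn't be needed any more. A hedged example of the owner rule's "apply when", using the user field from earlier in the thread and assuming %stringToOid behaves as documented on the linked page:

```json
{
  "user": {
    "%stringToOid": "%%user.id"
  }
}
```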
null | [] | [
{
"code": "",
"text": "Hi, i am not able to find options to access in-browser IDE to execute chapter 1 Lab exercise.",
"username": "Ramya_Rks"
},
{
"code": "",
"text": "You should be able to open it",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thanks! got it.",
"username": "Ramya_Rks"
},
{
"code": "",
"text": "",
"username": "Shubham_Ranjan"
}
] | Not able to find options to access in-browser IDE | 2021-01-03T07:40:01.571Z | Not able to find options to access in-browser IDE | 1,939 |
null | [
"performance",
"capacity-planning"
] | [
{
"code": "",
"text": "Currently we are with cassandra DB and we need to start using the other NoSQL for one of our below use case.\nat very high-level usecase requirements are:\nVery large incoming data volumes\nVery huge reads per second ( 200K TPS, around 60% by sharded key and 40% by only indexed keys)\nMore Writes (80K, less than read)\nThe data is time series data(Users/Wireless devices Movement for every 2 sec)\nAnalytics over datafew queries likeMy understanding with other project is that mongo is good for storing billions of documents and retrieve them… But do not have idea on how mongo will do for time series data and some queries related to that.We go with mongo Atlas, we will not manage mongo cluster by our own.",
"username": "Great_Info"
},
{
"code": "",
"text": "Hi @Great_Info,MongoDB is absolutely perfect to handle this time series related data. The thing I see is that you have high volume of reads and writes, so if you could spend some time on Data modelling, it would be the best. Some things to help you along in how to model your data are Bucketing Pattern, caching/precomputing your results by use case (if it helps)… So you might wanna look at different patterns and techniques. These links would help you a lot, so take some time to read them and then start implementing.\nA lot of patterns from this → Building with Patterns: A Summary | MongoDB Blog\nhttps://docs.mongodb.com/manual/tutorial/model-computed-data/\nhttps://docs.mongodb.com/manual/applications/data-models-relationships/\nhttps://docs.mongodb.com/manual/core/data-model-operations/Cheers…!",
"username": "shrey_batra"
}
] | Mongo performances for time series data | 2021-01-02T20:46:52.544Z | Mongo performances for time series data | 3,406 |
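As a concrete illustration of the bucketing pattern mentioned above, a small shell sketch, with collection and field names invented, that groups the 2-second movement samples into one document per device per hour instead of one document per sample:

```js
db.movements.updateOne(
  { deviceId: "device-42", bucketStart: ISODate("2021-01-02T10:00:00Z") },
  {
    $push: { samples: { ts: ISODate("2021-01-02T10:14:02Z"), lat: 51.50, lon: -0.12 } },
    $inc:  { count: 1 }
  },
  { upsert: true }
)
```

This keeps document counts and index sizes bounded and turns "recent movement for one device" reads into a handful of documents rather than thousands.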
null | [
"java"
] | [
{
"code": "-- error --\n2021-01-02 09:52:17.399 ERROR 18688 --- [reactor-http-nio-2] org.mongodb.driver.operation \n\n\n: Callback onResult call produced an error.\n\nreactor.blockhound.BlockingOperationError: Blocking call! jdk.internal.misc.Unsafe#park\n\tat java.base/jdk.internal.misc.Unsafe.park(Unsafe.java)\n\tat java.base/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)\n\tat java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:885)\n\tat java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:917)\n\tat java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1240)\n\tat java.base/java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:267)\n\tat java.base/java.util.concurrent.LinkedBlockingQueue.offer(LinkedBlockingQueue.java:409)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1347)\n\tat java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118)\n\tat java.base/java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:714)\n\tat com.mongodb.internal.connection.DefaultConnectionPool.getAsync(DefaultConnectionPool.java:157)\n\tat com.mongodb.internal.connection.DefaultServer.getConnectionAsync(DefaultServer.java:105)\n\tat com.mongodb.internal.binding.AsyncClusterBinding$AsyncClusterBindingConnectionSource.getConnection(AsyncClusterBinding.java:131)\n\tat com.mongodb.internal.async.client.ClientSessionBinding$SessionBindingAsyncConnectionSource.getConnection(ClientSessionBinding.java:140)\n\tat com.mongodb.internal.operation.OperationHelper.withAsyncConnectionSource(OperationHelper.java:730)\n\tat com.mongodb.internal.operation.OperationHelper.access$200(OperationHelper.java:68)\n\tat com.mongodb.internal.operation.OperationHelper$AsyncCallableWithConnectionAndSourceCallback.onResult(OperationHelper.java:750)\n\tat com.mongodb.internal.operation.OperationHelper$AsyncCallableWithConnectionAndSourceCallback.onResult(OperationHelper.java:738)\n\tat com.mongodb.internal.async.ErrorHandlingResultCallback.onResult(ErrorHandlingResultCallback.java:48)\n\tat com.mongodb.internal.async.client.ClientSessionBinding$WrappingCallback.onResult(ClientSessionBinding.java:208)\n\tat com.mongodb.internal.async.client.ClientSessionBinding$WrappingCallback.onResult(ClientSessionBinding.java:196)\n\tat com.mongodb.internal.binding.AsyncClusterBinding$1.onResult(AsyncClusterBinding.java:105)\n\tat com.mongodb.internal.binding.AsyncClusterBinding$1.onResult(AsyncClusterBinding.java:99)\n\tat com.mongodb.internal.connection.BaseCluster$ServerSelectionRequest.onResult(BaseCluster.java:432)\n\tat com.mongodb.internal.connection.BaseCluster.handleServerSelectionRequest(BaseCluster.java:299)\n\tat com.mongodb.internal.connection.BaseCluster.selectServerAsync(BaseCluster.java:155)\n\tat com.mongodb.internal.connection.SingleServerCluster.selectServerAsync(SingleServerCluster.java:42)\n\tat com.mongodb.internal.binding.AsyncClusterBinding.getAsyncClusterBindingConnectionSource(AsyncClusterBinding.java:99)\n\tat com.mongodb.internal.binding.AsyncClusterBinding.getReadConnectionSource(AsyncClusterBinding.java:84)\n\tat com.mongodb.internal.async.client.ClientSessionBinding.getReadConnectionSource(ClientSessionBinding.java:58)\n\tat 
com.mongodb.internal.operation.OperationHelper.withAsyncReadConnection(OperationHelper.java:677)\n\tat com.mongodb.internal.operation.FindOperation.executeAsync(FindOperation.java:689)\n\tat com.mongodb.internal.async.client.OperationExecutorImpl$1$1.onResult(OperationExecutorImpl.java:86)\n\tat com.mongodb.internal.async.client.OperationExecutorImpl$1$1.onResult(OperationExecutorImpl.java:74)\n\tat com.mongodb.internal.async.client.OperationExecutorImpl.getReadWriteBinding(OperationExecutorImpl.java:177)\n\tat com.mongodb.internal.async.client.OperationExecutorImpl.access$200(OperationExecutorImpl.java:43)\n\tat com.mongodb.internal.async.client.OperationExecutorImpl$1.onResult(OperationExecutorImpl.java:72)\n\tat com.mongodb.internal.async.client.OperationExecutorImpl$1.onResult(OperationExecutorImpl.java:66)\n\tat com.mongodb.internal.async.client.ClientSessionHelper.createClientSession(ClientSessionHelper.java:60)\n\tat com.mongodb.internal.async.client.ClientSessionHelper.withClientSession(ClientSessionHelper.java:51)\n\tat com.mongodb.internal.async.client.OperationExecutorImpl.execute(OperationExecutorImpl.java:66)\n\tat com.mongodb.internal.async.client.AsyncMongoIterableImpl.batchCursor(AsyncMongoIterableImpl.java:167)\n\tat com.mongodb.reactivestreams.client.internal.MongoIterableSubscription.requestInitialData(MongoIterableSubscription.java:45)\n\tat com.mongodb.reactivestreams.client.internal.AbstractSubscription.tryRequestInitialData(AbstractSubscription.java:177)\n\tat com.mongodb.reactivestreams.client.internal.AbstractSubscription.request(AbstractSubscription.java:100)\n\tat reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.onSubscribe(FluxConcatMap.java:235)\n\tat com.mongodb.reactivestreams.client.internal.MongoIterableSubscription.<init>(MongoIterableSubscription.java:39)\n\tat com.mongodb.reactivestreams.client.internal.Publishers.lambda$publish$0(Publishers.java:43)\n\tat com.mongodb.reactivestreams.client.internal.FindPublisherImpl.subscribe(FindPublisherImpl.java:175)\n\tat reactor.core.publisher.FluxSource.subscribe(FluxSource.java:66)\n\tat reactor.core.publisher.Flux.subscribe(Flux.java:8147)\n\tat reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onNext(MonoFlatMapMany.java:195)\n\tat reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)\n\tat reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:73)\n\tat reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)\n\tat reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784)\n\tat reactor.core.publisher.MonoSupplier.subscribe(MonoSupplier.java:61)\n\tat reactor.core.publisher.Mono.subscribe(Mono.java:4046)\n\tat reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:103)\n\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onError(MonoFlatMap.java:172)\n\tat reactor.core.publisher.FluxFilter$FilterSubscriber.onError(FluxFilter.java:157)\n\tat reactor.core.publisher.FluxMap$MapConditionalSubscriber.onError(FluxMap.java:259)\n\tat reactor.core.publisher.Operators.error(Operators.java:196)\n\tat reactor.core.publisher.MonoError.subscribe(MonoError.java:52)\n\tat reactor.core.publisher.MonoDeferContextual.subscribe(MonoDeferContextual.java:55)\n\tat reactor.core.publisher.Flux.subscribe(Flux.java:8147)\n\tat reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:199)\n\tat 
reactor.core.publisher.MonoFlatMapMany.subscribeOrReturn(MonoFlatMapMany.java:49)\n\tat reactor.core.publisher.Flux.subscribe(Flux.java:8133)\n\tat reactor.core.publisher.FluxUsingWhen.subscribe(FluxUsingWhen.java:93)\n\tat reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)\n\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)\n\tat reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784)\n\tat reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)\n\tat reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)\n\tat reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:210)\n\tat reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:210)\n\tat reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784)\n\tat reactor.core.publisher.MonoIgnoreThen$ThenAcceptInner.onNext(MonoIgnoreThen.java:305)\n\tat reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784)\n\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)\n\tat reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784)\n\tat reactor.core.publisher.MonoZip$ZipCoordinator.signal(MonoZip.java:251)\n\tat reactor.core.publisher.MonoZip$ZipInner.onNext(MonoZip.java:336)\n\tat reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onNext(MonoPeekTerminal.java:180)\n\tat reactor.core.publisher.FluxDefaultIfEmpty$DefaultIfEmptySubscriber.onNext(FluxDefaultIfEmpty.java:99)\n\tat reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:73)\n\tat reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)\n\tat reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784)\n\tat reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)\n\tat reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)\n\tat reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:295)\n\tat reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:337)\n\tat reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784)\n\tat reactor.core.publisher.MonoCollect$CollectSubscriber.onComplete(MonoCollect.java:159)\n\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:142)\n\tat reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:259)\n\tat reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:142)\n\tat reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:383)\n\tat reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:396)\n\tat reactor.netty.http.server.HttpServerOperations.onInboundNext(HttpServerOperations.java:540)\n\tat reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:94)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat 
reactor.netty.http.server.HttpTrafficHandler.channelRead(HttpTrafficHandler.java:252)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)\n\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)\n\tat io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)\n\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)\n\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\n",
"text": "Hi all, I am writing a REST API which is reactive, i.e using Reactive Mongo drivers.\nAll works fine when the concurrency level is 100, however increase the concurrency level to 200, blockhound reports blocking calls in Mongo.\nI am using Mongod community on windows. Below is the error. I am using blockhound to detect blocking calls. This happens only when concurrency above 100.\nAny suggestions on how to resolve this error?",
"username": "major1mong"
},
{
"code": "",
"text": "This is happening at 100 concurrent tasks because the default connection pool size in the driver is 100. So once you hit that number your task now has to wait on a resource to become available: a connection to the database on which to send the operation.This waiting is done in largely done in a non-blocking manner. The blocking that is being reported is just submitting a task to an ExecutorService of unbounded size, so it will never actually block significantly. If you look at the stack trace, the reported blocking is when offering an item to an unbounded blocking queue that the executor service uses to keep track of the submitted tasks.So I think this is a false-positive report from blockhound. However, while the blocking should not be a problem, you still may have throughput limitations. The connection pool is a finite resource that all the reactive tasks have to share, so if you have more tasks than connections, then tasks will have to wait for a connection, even if that waiting does not block any threads. To increase concurrency, you can increase the connection pool max size, but there are limits to how effective that will be, and you may just move the concurrency problem from the client application to the database.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Thank you Jeff for your detailed response. As you said, it may be a false positive, so I shall submit this issue to Blockhound team for their response as well.\nSo if I had to whitelist a mongo method, request your suggestion as to which class and method to whitelist.",
"username": "major1mong"
},
{
"code": "",
"text": "I have posted this question on stackoverflow as well. Hope it is in accordance with the policy.",
"username": "major1mong"
},
{
"code": "com.mongodb.internal.connection.DefaultConnectionPool#getAsync(SingleResultCallback<InternalConnection>)",
"text": "From the stack trace I imagine you would add com.mongodb.internal.connection.DefaultConnectionPool#getAsync(SingleResultCallback<InternalConnection>) to the allow-list, though be warned that as this is not part of the public API of the driver it is subject to change in future releases without warning.",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Having blocking operations in Reactive application using MongoDB reactive drivers | 2021-01-02T05:20:13.473Z | Having blocking operations in Reactive application using MongoDB reactive drivers | 7,103 |
null | [
"data-modeling",
"atlas-device-sync",
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "What’s the best migration strategy to handle duplicate primary keys across different Realms?In my legacy realm sync instance, I’m assigning each user their own realm and saving some common API data (like Food info) to those realms using the IDs from the API as the primary key (like a foodId). That means I have multiple legacy realms with objects that share a primary key.This was fine pre-MongoDB Realm, but now since all user data is stored in collections, I can’t use those same primary keys without syncing issues. Unfortunately, users were able to mutate this data, so migrating it to a common partition is not an option.What’s a good approach to migrating this data over without conflicts?I’ve considered making my new schema contain both the API ID and an autogenerated _id. This would require me to make queries and updates client side with only the API ID and I wouldn’t get the benefits of primary key indexing. Is this my only option?",
"username": "Obi_Anachebe"
},
{
"code": "",
"text": "users were able to mutate this data“This data”? Does this mean the user can change the primary key on objects?I think it would help if we knew how your primary keys were being used; did you directly reference the primary key in code or were references made to the object itself instead? Did you use the primary key to perform updates? That kind of information will determine how the data is transitioned.",
"username": "Jay"
},
{
"code": "realm.add(object, update: .modified)foodId// on successful fetch from API\nlet foodEntity = FoodEntity()\nfoodEntity.name = json.foodName\nfoodEntity.id = json.foodId // 'id' is the primary key\n\nrealm.write {\n realm.add(foodEntity, update: . modified)\n}\n// FoodEntity:\n\n{\n \"name\" : \"Eggs\",\n \"id\" : \"food_id\"\n}\nlet foodEntity = realm.objects(FoodEntity.self).filter(\"id == food_id\")[0]\n\nrealm.write {\n foodEntity.name = \"Organic Eggs\"\n realm.add(foodEntity, update: .modified)\n}\n// FoodEntity:\n\n{\n \"name\" : \"Eggs\",\n \"id\" : \"food_id\"\n}\n// FoodEntity:\n\n{\n \"name\" : \"Organic Eggs\",\n \"id\" : \"food_id\"\n}\n",
"text": "users were able to mutate this data“This data”? Does this mean the user can change the primary key on objects?I think it would help if we knew how your primary keys were being used; did you directly reference the primary key in code or were references made to the object itself instead? Did you use the primary key to perform updates? That kind of information will determine how the data is transitioned.Primary keys were never muted. Other fields could be mutated though like the food name or nutrition info.The primary key is directly referenced in code when querying objects by ID. Since all objects have primary keys, updates are performed using realm.add(object, update: .modified)Example:In a legacy Realm, User A and User B both get a Food object from the API. Both users save that object to their respective realms with a primary key of foodId.Their realms now both look like this:User B decides to edit the name of the food they just savedAnd now User A’s realm looks like:But User B’s realm looks like:So now when I’m migrating over their legacy data, I run into an issue where both of those objects have the same primary key, but different fields values.I hope this example was clear.",
"username": "Obi_Anachebe"
},
{
"code": "@objc dynamic var _primaryKey == ObjectID() // or UUID().uuidStringlet foodEntity = FoodEntity()\nfoodEntity.name = json.foodName\nfoodEntity.id = json.foodId // <- where did this come from? How generated?\nrealm.write {\n foodEntity.name = \"Organic Eggs\"\n realm.add(foodEntity, update: .modified)\n}\n",
"text": "This is a great is example of why disassociating an objects primary key from other data is a good idea. Going forward this is a good design pattern:@objc dynamic var _primaryKey == ObjectID() // or UUID().uuidStringHow were the primary key’s (id) for each users FoodEntity generated in the first place? Your object looks likes thisLooking at your write, it’s not based on a primary key, it’s just updating that object regardless of what the primary key isBecause of that it’s not clear how you’re using the primary key of id.As a side note, there’s no reason to filter for objects that have primary keys if you know the key - it can be accessed directly. So instead of thislet foodEntity = realm.objects(FoodEntity.self).filter(“id == food_id”)[0]you can do thislet specificFood = realm.object(ofType: FoodEntity.self, forPrimaryKey: “1234”)",
"username": "Jay"
},
{
"code": "",
"text": "The primary keys come from a 3rd party API. In this example, there is a common database of foods each with their own unique IDs. I poorly architected this codebase a few years ago, unfortunately.Because of that it’s not clear how you’re using the primary key of id.I’m not sure what you mean by this. The primary key never gets generated or updated client-side. We set the primary key as the ID of an object in a 3rd party database. Every other field except the primary key can be updated client-side. The primary key is really just used for performing queries and upserts.Thanks for taking the time to help out with this btw.",
"username": "Obi_Anachebe"
},
{
"code": "food_id_1111 { <- the key\n foodName = \"Pizza\"\n}\n\nfood_id_2222 {\n foodName = \"Burger\"\n}\n\nfoood_id_3333 {\n foodName = \"Tacos\"\n}\nFoodEntity {\n id = food_id_1111 <- matches the key from the food database?\n}\nFoodEntity {\n id = food_id_1111 <- matches the key from the food database?\n}",
"text": "Are you saying you’re taking the primary key from the ‘master food database’ and making that the primary key of the FoodEntity for each user? In other words suppose your json database contains a pizza foodThen when User_A creates a food item it’s thisand when User_B creates a food item it’s the same data with the same primary key?",
"username": "Jay"
},
{
"code": "food_id_1111 { <- the key\n foodName = \"Pizza\"\n}\n\nfood_id_2222 {\n foodName = \"Burger\"\n}\n\nfoood_id_3333 {\n foodName = \"Tacos\"\n}\nFoodEntity {\n id = food_id_1234 <- matches the key from the food database?\n}\nFoodEntity {\n id = food_id_1234 <- matches the key from the food database?\n}\n",
"text": "Are you saying you’re taking the primary key from the ‘master food database’ and making that the primary key of the FoodEntity for each user? In other words suppose your json database contains a pizza foodThen when User_A creates a food item it’s thisand when User_B creates a food item it’s the same data with the same primary key?Yes, that’s exactly what I’m doing",
"username": "Obi_Anachebe"
},
{
"code": "food_id_1111 {\n foodName = \"Pizza\"\n}\n\nfood_id_2222 {\n foodName = \"Burger\"\n}\n\nfoood_id_3333 {\n foodName = \"Tacos\"\n}\nclass UserClass: Object {\n @objc dynamic var _id = UUID().uuidString\n let favoriteFoods = List<FoodEntity>()\n}\nlet foodEntity = realm.objects(FoodEntity.self).filter(\"id == food_id\")[0]",
"text": "Well, ouch.Obviously primary keys much be unique but my question from above stills stands; how are you currently using the primary keys? Are you storing the actual primary keys in Lists or are you storing reference to the objects (which is how it should be)?Suppose a user has a List of his favorite foods. Here are some food items based on the FoodEntity stored in the ‘master food database’then the user is thisso what’s stored in favoriteFoods are the FoodEntity objects, not the primary keys.I know you said thislet foodEntity = realm.objects(FoodEntity.self).filter(\"id == food_id\")[0]above but where is the food_id coming from in that code?I am attempting to come up with a migration path here but it’s dependent on what specifically you’re doing/how your storing the actual primary keys",
"username": "Jay"
},
{
"code": "class UserClass: Object {\n @objc dynamic var _id = UUID().uuidString\n let favoriteFoods = List<FoodEntity>()\n}\nlet foodEntity = realm.objects(FoodEntity.self).filter(\"id == food_id\")[0]FoodAPI.getFoodsForQuery(query: \"Eggs\") { foodJsons, error in\n // Populate list with foodJsons\n}\nlet foodEntity = realm.object(ofType: FoodEntity.self, forPrimaryKey: foodJson.foodId)\nif let food = foodEntity {\n // use food entity\n} else {\n // use food json \n}\n",
"text": "Are you storing the actual primary keys in Lists or are you storing reference to the objects (which is how it should be)?I’m storing references to the objects exactly as you described like this:I know you said thislet foodEntity = realm.objects(FoodEntity.self).filter(\"id == food_id\")[0]above but where is the food_id coming from in that code?The food ID is coming from an API. A user can query the API for a list of foods using a search query.When a user clicks on one of those foods, we query their realm first to see if they have an edited version of that food saved locally.",
"username": "Obi_Anachebe"
},
{
"code": "class FoodClass: Object {\n @objc dynamic var _id = ObjectId() //this is the new primary key\n @objc dynamic var _partition = \"\" //the users uid\n @objc dynamic var foodId = \"\" //this will be a copy of the old primary key\n @objc dynamic var someProperty = \"\"\n\n override static func indexedProperties() -> [String] {\n return [\"foodId\"]\n }\n}\nif let maybeFood = realm.objects(FoodClass.self).filter(\"foodId == %@\"), foodIdFromDatabase).first {\n //do something with maybeFood\n}\n_partitionprimaryKeyfoodIdfoodId",
"text": "Got it. I think the best solution (kind of what you said in your original question) is to change the local model and the query.and the query would change toThe migration should be pretty straightforward - the code would iterate over each users existing FoodEntity, create a new FoodClass (per above) assigning their uid to the object’s _partition property and copying the old primaryKey property to the foodId property.They can then be written to a collection for sync’ing and will be specific to each user by their partition key.Indexing can be a bit tricky and situational - in this case however, it seems you’ll be running equality queries frequently so adding indexing on the foodId property would be appropriate and improve performance.",
"username": "Jay"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What's the best migration strategy to handle duplicate primary keys across different Realms? | 2020-12-24T20:18:30.006Z | What’s the best migration strategy to handle duplicate primary keys across different Realms? | 6,636 |
null | [
"react-native"
] | [
{
"code": "",
"text": "Whenever I execute a write transaction in my React Native app, the app freezes. Sometimes it freezes for a fraction of a second, sometimes it’s much much longer. This is happening in production (release mode on a new iOS device). I’m trying to figure out a way to get the realm logic off of the main thread, but it looks like React Native just isn’t great at doing that (maybe I’m overlooking something very obvious though).My ideas so far have been:It would be great to get some guidance on what strategies have been successful in the past and what the best practices are for improving Realm performance on React Native or at least refactoring so as to not affect the user directly.",
"username": "Peter_Stakoun"
},
{
"code": "nodejs-mobile-react-native",
"text": "Another solution I recently stumbled across is the nodejs-mobile-react-native npm package that provides a React Native bridge for this project: Node.js for Mobile Apps. I’m not sure how it compares to some of the other “multithreading”/web worker libraries out there (that really don’t look great), so it might be something to consider.",
"username": "Peter_Stakoun"
}
] | React Native performance issues on write | 2020-12-31T22:20:52.665Z | React Native performance issues on write | 2,447 |
null | [
"android",
"kotlin"
] | [
{
"code": "taskApp.emailPasswordAuth.registerUser(email, password)taskApp.emailPasswordAuth.registerUserAsync(email, password)",
"text": "I’m attempting to setup authentication. As reference, I’m mostly looking to https://docs.mongodb.com/realm/tutorial/android-kotlin/#enable-authentication\nI’m running into some trouble with that tutorial.First off, it seems that .emailPassword has been deprecated in favor of .emailPasswordAuthSo, I switch to something like this…\ntaskApp.emailPasswordAuth.registerUser(email, password)\nor\ntaskApp.emailPasswordAuth.registerUserAsync(email, password)But I’m getting a ‘Name already in use’ error. The error is also showing up in my realm.mongoDB logsI’ve set up registration successfully with realm synch on Android before. I seem to remember that the registerUser functions were not working correctly, and that I ultimately had to switch to set up a token system and switch to JWT authentication in order to get registration working.Is that android-kotlin tutorial still valid? What’s the most efficient way to get authentication working with android-kotlin?",
"username": "Ryan_Goodwin"
},
{
"code": "emailPasswordName already in use",
"text": "Where is the indicator that the property emailPassword is deprecated? As far as I can see it should still be valid from https://realm.io/docs/java/10.0.1/api/io/realm/mongodb/App.html#getEmailPassword--The Name already in use error indicates that the user is already registered. Maybe you have a pending user registration or something alike. Try inspecting the Realm App users according to https://docs.mongodb.com/realm/users/create",
"username": "Claus_Rorbech"
},
{
"code": "emailPasswordCredentials.jwt(\"<token>\")",
"text": "Hi Claus, thanks for the reply.The indication for me that emailPassword is deprecated comes from attempting to access the method in android studio. If I try to access the property as demonstrated in the Kotlin tutorial I linked, It comes up as an unresolved reference in Android Studio.I cloned the tutorial app separately and I did not have the same problem for whatever reason. The taskApp object that I set with the same code from the tutorial, does have the emailPassword property.\ntaskApp = App(\nAppConfiguration.Builder(BuildConfig.MONGODB_REALM_APP_ID)\n.build())I gave up and just switched back to my original pattern using Credentials.jwt(\"<token>\")So I guess it’s something with my project, but I’ve given up and gone pack to using JWT authentication for now.",
"username": "Ryan_Goodwin"
},
{
"code": "",
"text": "I solved a portion of this issue. A major challenge here is that tutorial code is frequently outdated.But the error I was receiving is not related to this, or any issue with Realm, but an issue with my server code.I’m using Mongoose via a node backend. I’m also using mongoose-unique-validator to ensure a unique email for each user. It turns out that this validation fails on db access errors. So my backend was receiving db access errors, but they were manifesting as ‘Name already in use’ errors.I discovered that my database user, found under Atlas > Database Access did not have the correct permissions, breaking several of my database interactions. Editing my user to change ‘database user permissions’ to ‘atlas admin’ resolved the errors.My server code had been functional previously. So not sure what happened there.",
"username": "Ryan_Goodwin"
}
] | Kotlin registration problems | 2020-11-23T15:32:47.877Z | Kotlin registration problems | 3,517 |
null | [
"aggregation",
"node-js"
] | [
{
"code": "{\n \"_id\" : \"1293jnqke\",\n \"name\" : \"name_string\",\n \"other_data\" : [{\n \"data\" : ...\n }]\n}\n{\n [ {\"name\" : \"23rd name\"},\n {\"name\" : \"2nd name\"},\n {\"name\" : \"78th name\"},\n {\"name\" : \"99th name\"},\n {\"name\" : \"53rd name\"},\n {\"name\" : \"11th name\"},\n ...]\n}\n",
"text": "Hi, so let’s say I have a random array of n integers [23, 2, 78, 99, … ] where 0 < n < collection.count(), and each element is within this range as well.In my collection, it is currently indexed by _id, and say it contains the following data:How should I go about grabbing the data, in chunks of 50, which correspond to each element in the array? So with the example integer array I have, I will slice it down to the first 50 elements, and then I wish to run a find() on the collection, extracting the names of the 23rd, 2nd, 78th, 99th, etc elements, based on the natural ordering of the collection (no need for any sorting beforehand). So the output should look like this:One way of course is by looping through the array with findOne() and skip() but I don’t think that is very efficient, is it possible to execute this in a single query?",
"username": "Ajay_Pillay"
},
{
"code": "{\n \"_id\" : \"1293jnqke\",\n \"name\" : \"name_string\",\n positon : 1,\n \"other_data\" : [{\n \"data\" : ...\n }]\n},\n{\n \"_id\" : \"1294jnqke\",\n \"name\" : \"name_string\",\n positon : 2,\n \"other_data\" : [{\n \"data\" : ...\n }]\n}\n...\ndb.coll.find({position : {$in : [23, 2, 78, 99, … ]}})\ndb.coll,aggregate([{$facet: {\n search: [ {$limit : 100},{$project : {_id: 0, name : 1}} ]\n}}, {$unwind: {\n path: \"$search\",\n includeArrayIndex: 'index',\n preserveNullAndEmptyArrays: true\n}}, {$match: {\n \"index\" : {$in : [23, 2, 78, 99]}\n}}, {$replaceRoot: {\n newRoot: \"$search\"\n}}]);\n",
"text": "Hi @Ajay_Pillay,To begin with the optimal solution is to add a position field for each document and index it. This will allow an index search for the documents, each new record should be increased in value:This way you can run :If the above schema design is not possible, I would recommend creating a materialized view which will have a calculated position field, you can use $merge or change stream logic to create it.However, I managed to create a “performance” limited aggregation to fetch this logic, it uses a limit stage which should specify the max number + 1 from the input search array (eg. 99 + 1 = 100), $facet to have an array and create an index using the $unwind:Please note that this query s suboptimal as it can’t utilise index and the $facet will allow the created document to be max 16MB of allowed BSON size. Therefore use it only if the amount of data scanned s limited otherwise it will error out.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny, thank you for the depth and clarity of your answer. I think in my scenario I will probably implement the first method where there is an indexed position field, but I appreciate the thought put into the second method, I will definitely be testing that out as well. Thank you!",
"username": "Ajay_Pillay"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Fetching nth document based on integer array | 2020-12-26T14:37:13.941Z | Fetching nth document based on integer array | 5,428 |
null | [
"data-modeling"
] | [
{
"code": "[\n {name : 'Electronic' , path: ''},\n \n {name :'Mobile' , path :'/Electronic'}, \n {name :'LG' , path :'/Electronic/Mobile'},\n {name :'S330' , path :'/Electronic/Mobile/LG'},\n {name :'Samsung' , path :'/Electronic/Mobile'},\n {name :'Galaxy 10' , path :'/Electronic/Mobile/Samsung'},\n \n {name : 'Laptop' , path: '/Electronic'},\n {name :'HP' , path :'/Electronic/Laptop'}, \n {name :'Pavilion 2000' , path :'/Electronic/Mobile/HP'},\n {name :'DELL' , path :'/Electronic/Laptop'}, \n {name :'D2000' , path :'/Electronic/Mobile/DELL'},\n \n {name : 'Clothes' , path: ''},\n \n {name :'Men' , path :'/Clothes'}, \n {name :'Socks' , path :'/Clothes/Men'},\n {name :'Pants' , path :'/Clothes/Men'}, \n \n \n {name :'Women' , path :'/Clothes'}, \n {name :'Skirt' , path :'/Clothes/Women'},\n {name :'Hat' , path :'/Clothes/Women'},*\n]\n\n\n[\n {\n name:'Electronic', children:[\n {name:'Mobile', children:['LG', 'Samsung']},\n {name:'Laptop', children:['HP', 'DELL']}\n ]\n },\n{\n name:'Clothes', children:[\n {name:'Men', children:['Socks', 'Pants']},\n {name:'Women', children:['Skirt', 'DELL']}\n ]\n },\n]\n",
"text": "Hello Can any one help me to change the structure of that array from materialize path to this structureor if this is not a good idea can any body help me to figure out how to work with nested menu for materialize path please i get stack please any help ?",
"username": "Hamza_Miloud_Amar"
},
{
"code": "",
"text": "Hi @Hamza_Miloud_Amar,Welcome to MongoDB community.I believe you need to use $split operator to split the “/” delimiter and iterate over the array in an aggregation command.If you need to preserve this to a target you can use $merge stage or $out to a new collection.Now the way you should organise your hirerchical data is related to the way you access/modify the data.For example, If you need to access the tree for every root it makes sense to store it together as long as you not cross 16MB per doc.However, if you need to get only subpart of the tree / explode them on clicks it might make sense to store them in pointer docs.Suggest reading this:The Tree Pattern - how to model trees in a document databaseBest\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "Products (leaves)\n{\n name 'Pavilion 2000'\n brand HP\n}\n\nBrands (last level before leaves)\n{\n name HP\n parents [laptops,mobile] //categories\n children ['Pavilion 2000', .....] //products\n}\n\nCategories (internal nodes)\n{\n name Electronic\n parents []\n children [laptop,mobile]\n}\n",
"text": "HelloDynamic documents are hard,and nested dynamic documents are even harder\nDynamic meaning unknown keys/nested level etc\nIts very hard to create/add/access/search on themStatic nested documents are easier,but still hard\nIts hard to search on them,things like object to array and back make things complicated\nOn trees we need easy way to get parents/childrenAlternative to this is using arrays,containing children/parents references\nDynamic/Static arrays are easy to create/add/access/search on them\nTree structure is saved in arrays,not as embedded documentsSomething like this maybeThere is university course that has videos on how to model trees in MongoDB\nM320 Data modelling",
"username": "Takis"
}
] | Change structure of materialize path to children | 2020-12-30T21:48:08.046Z | Change structure of materialize path to children | 1,753 |
null | [
"aggregation"
] | [
{
"code": "{\n\tresource: {\n\t\tname: \"PROJ01\", \n\t\tversion: 1,\n\t\towner: \"\"\n\t\t},\n\tappInfos: [\n\t\t{\n\t\tapp_key: \"APP01\",\n\t\tsize: 20mb,\n\t\tmetadata:{\n\t\t\tdeployOn: \"aws\",\n\t\t\tstatus: \"running\",\n\t\t\treason:{}\n\t\t\t}\n\t\t},\n\t\t{\n\t\tapp_key: \"APP01\",\n\t\tsize: 20mb,\n\t\tmetadata:{\n\t\t\tdeployOn: \"azure\",\n\t\t\tstatus: \"failed\",\n\t\t\treason:{\n\t\t\t\tmessage: \"Connectivity Issue\",\n\t\t\t\terrorCode: \"CONNECTIVITY_ERROR\"\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\tapp_key: \"APP02\",\n\t\tsize: 20mb,\n\t\tmetadata:{\n\t\t\tdeployOn: \"aws\",\n\t\t\tstatus: \"running\",\n\t\t\treason:{}\n\t\t\t}\n\t\t},\n\t\t{\n\t\tapp_key: \"APP02\",\n\t\tsize: 20mb,\n\t\tmetadata:{\n\t\t\tdeployOn: \"azure\",\n\t\t\tstatus: \"failed\",\n\t\t\treason:{\n\t\t\t\tmessage: \"Connectivity Issue\",\n\t\t\t\terrorCode: \"CONNECTIVITY_ERROR\"\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t]\n}\n{\n\tresource: {\n\t\tname: \"PROJ01\", \n\t\tversion: 1,\n\t\towner: \"\"\n\t\t},\n\tappInfos: [\n\t\t{\n\t\tapp_key: \"APP01\",\n\t\tsize: 20mb,\n\t\tmetadata:[{\n\t\t\tdeployOn: \"aws\",\n\t\t\tstatus: \"running\",\n\t\t\treason:{}\n\t\t\t},\n\t\t\t{\n\t\t\tdeployOn: \"azure\",\n\t\t\tstatus: \"failed\",\n\t\t\treason:{\n\t\t\t\tmessage: \"Connectivity Issue\",\n\t\t\t\terrorCode: \"CONNECTIVITY_ERROR\"\n\t\t\t\t}\n\t\t\t}\n\t\t]},\n\t\t{\n\t\tapp_key: \"APP02\",\n\t\tsize: 20mb,\n\t\tmetadata:[{\n\t\t\tdeployOn: \"aws\",\n\t\t\tstatus: \"running\",\n\t\t\treason:{}\n\t\t\t},\n\t\t\t{\n\t\t\tdeployOn: \"azure\",\n\t\t\tstatus: \"failed\",\n\t\t\treason:{\n\t\t\t\tmessage: \"Connectivity Issue\",\n\t\t\t\terrorCode: \"CONNECTIVITY_ERROR\"\n\t\t\t\t}\n\t\t\t}\n\t\t\n\t\t]}\n}",
"text": "I have documents in mongo collection like this:I want to combine metadata on bases of “app_key” by using aggregation and expected output should be like this:",
"username": "Pradip_Kumar"
},
{
"code": "_id_id$unwindappInfos$groupresourceapp_keymetadata$groupappInfos { $unwind: \"$appInfos\" },\n {\n $group: {\n _id: {\n app_key: \"$appInfos.app_key\",\n resource: \"$resource\"\n },\n size: { $first: \"$appInfos.size\" },\n metadata: { $push: \"$appInfos.metadata\" },\n resource: { $first: \"$resource\" }\n }\n },\n {\n $group: {\n _id: \"$_id.resource\",\n appInfos: {\n $push: {\n app_key: \"$_id.app_key\",\n size: \"$size\",\n metadata: \"$metadata\"\n }\n }\n }\n }\n",
"text": "Hello @Pradip_Kumar Welcome to MongoDB Developer Community,You can use $unwind stage, Deconstructs an array field from the input documents to output a document for each element. Each output document is the input document with the value of the array field replaced by the element.\nand $group stage, Groups input documents by the specified _id expression and for each distinct grouping, outputs a document. The _id field of each output document contains the unique group by value.Your can design your query like, this is not tested, and my approach is you can do something like this for your expected result,",
"username": "turivishal"
},
{
"code": "",
"text": "Hi @turivishal Thanks for quick reply. I’ll implement it and test it against my collection. Thanks again for solution.",
"username": "Pradip_Kumar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Group and push objects in array if they having same key in mongodb document | 2021-01-01T10:18:47.828Z | Group and push objects in array if they having same key in mongodb document | 19,960 |
null | [
"aggregation"
] | [
{
"code": " db.getCollection('BlogPostLikers').aggregate([\n {$match:{likerId:ObjectId('5eb17cf53f0000020ac10c75')}},\n {$unionWith: { coll: \"PostOrBlogs\", pipeline: [{ $match: { postType: 'T' } }] }} ,\n {$unionWith: { coll: \"PaySlip\", pipeline: [{ $match: { employeeId: ObjectId('5eb17cf53f0000020ac10c75') } }] }} ,\n\n ])\n",
"text": "Think that I have 3 collections. I want to run $unionWith to fetch data from those three collections.Will this be parallel or sequential ?",
"username": "Md_Mahadi_Hossain"
},
{
"code": "$unionWith$facet",
"text": "Hi @Md_Mahadi_Hossain,Each stage in aggregation is sequential to the previous one even $unionWith.the $facet stage should run multiple pipelines in one stage, however, I am not sure if it happens in a multi or single threaded way.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "$unionWith",
"text": "If $unionWiths are not parallel then what is the use of it. Because we can send multiple requests using threads to run query on multiple collections by drivers.",
"username": "Md_Mahadi_Hossain"
},
{
"code": "$unionWith",
"text": "Hi @Md_Mahadi_Hossain,The idea of $unionWith is that you can query different data sets from various collections and include them into one document stream. In later stages after the union you can aggregate or transform this data using the aggregation frameworks.A good example is shown in our documentation is creating a sales report from quarterly gathered collections:Best regards,\nPavel",
"username": "Pavel_Duchovny"
}
] | Does $unionWith run parallel query on independent collections? | 2020-12-31T20:54:02.494Z | Does $unionWith run parallel query on independent collections? | 2,989 |
null | [
"atlas-device-sync",
"react-native"
] | [
{
"code": "",
"text": "Good Morning,\nI’m using Realm in React Native, however when I add a new field in my schema, when generating a new release version of my app, it will have a white screen.\nDoes anyone know how to solve?\nI’m changing the SchemaVersion just right.",
"username": "Mobile_Syntesis"
},
{
"code": "",
"text": "@Mobile_Syntesis We will need more information from you. Like what schema you have, what schema you are migrating to, how you migrate, and then the error you get when you run your current set-up",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This happens to me a lot and I still don’t know exactly why. I usually end up just undoing my changes to the schema until it works and then trying to make them again one by one. I’ve noticed it sometimes (maybe always?) happens when I try to make properties optional.",
"username": "Peter_Stakoun"
}
] | Realm white screen in React Native | 2020-07-30T17:12:52.639Z | Realm white screen in React Native | 2,577 |
null | [
"mongodb-shell"
] | [
{
"code": "MongoDB Enterprise replica001:PRIMARY> db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )\n{\n \"featureCompatibilityVersion\" : {\n \"version\" : \"4.0\"\n },\n \"ok\" : 1,\n \"operationTime\" : Timestamp(1609422524, 1),\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1609422524, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"00U9i7OroDkKT0fyn18KvNX7dM8=\"),\n \"keyId\" : NumberLong(\"6895568894216372225\")\n }\n }\n}\n{\n \"featureCompatibilityVersion\" : {\n \"version\" : \"4.0\"\n }\n}\n",
"text": "Hi Team,I want to avoid bottom part of the output in MongoDB.For ex:In the above output, I want to avoid the output part starting from “ok:1”\nHow can i do that…I want an output similar to the following:Please suggest me how I can achieve this.Thanks in advance",
"username": "venkata_reddy"
},
{
"code": "> db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } ).featureCompatibilityVersion\n{ \"version\" : \"4.0\" }\n",
"text": "Note that the output is an object so you can access any member.The following will not be exactly like you want but very close.",
"username": "steevej"
},
{
"code": "",
"text": "Hi steevej,Cool Stuff \nThank you…Happy New Year!!!",
"username": "venkata_reddy"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to avoid the output part starting from "Ok:1" | 2020-12-31T13:59:01.804Z | How to avoid the output part starting from “Ok:1” | 1,629 |
[] | [
{
"code": "",
"text": "It is 31st December, 2020 - here where I live (the southern India). 2021 is nearby.One of the things that is talked about a new year is the resolutions. I had never thought about this aspect in the past years. But, lot of things happen in life. It is generally a learning thing as one passes thru it. The childhood days are the most fun to learn and explore - after that there is some “purpose” and you have to “think” and make “decision” about it (and the fun part of it leaves).Over the last years, I have been paying attention to what is called as - taking care of oneself. And, I see it is not a simple matter. It is not about learning new things, getting stuff or achieving something. It requires some awareness about yourself and looking at it little closely. So, this is part of my resolution and an ongoing thing for rest of the years.This includes the food you eat, do things you enjoy, and be in decent health. The physical and psychological health. I actually started my New Year resolution activities few weeks before the New Year started in a not so planned way. And, there was no “todo” list.I turned vegetarian few years back. I had to figure some stuff in the process - about food, its nutrition, likes, etc. Few weeks back I started trying sprouts - the mung bean sprouts. I had come to know about then as healthy food. I started making them, and after a few tries, I did get good sprouts. They are delicious as a snack or as part of the meal. I tried them as part of salad, sauteed them, in a soup, etc., but the best is to eat as they are (crunchy and nutty) and in small quantity (a cup full). I grow them with bottled water and don’t store them.I plan to grow some microgreens sometime later and I have collected some material about it already.I also had to look at my physical exercise. I get my exercise mostly from doing my chores and walk when I get out. Lately, I found the need for some stretching exercises. The work and other tasks need that I spend lot of time in front of a computer. The posture, the continued usage, can have its effects on the body. I have not had any injuries to worry about - but, often know about them from others. Pain in the neck, back and hands are common, I see. I have experienced these once each of them. These were mitigated with some quick action - getting proper task chair, checking the posture, and doing some exercises which strengthen the associated muscles and also keep them supple and relaxed.So, I have a new task chair. Keeping a second one allows change chairs and postures during my work or otherwise is helping. I am also doing exercises for the back and neck - and some of them have strange names like “bird dog” and “iliopsoas stretch” - but they are effective.In the last few years I have been reading quite a bit - fiction, non-fiction, philosophy / religion (and then there are software related). 2020 was rather bleak, with not much reading. Now, I have ordered some books already, got one book and started studying it (it happens to be about a modern computer programming language). The learning of the new thing is a new programming language and I like it already.So, my resolutions are already “in progress” and I wish you the best!P.S.: The sprouts I made last week:sprouts600×605 228 KB",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | On resolutions and stuff | 2020-12-31T10:06:57.499Z | On resolutions and stuff | 4,687 |
|
null | [
"devops"
] | [
{
"code": "",
"text": "hello ,\nnow i have a very big log file exceed 280 GB,\ncan i mange or delete this file ?,\nthanks a lot",
"username": "Abdelrahman_N_A"
},
{
"code": "",
"text": "On linux? Try this, or a variation as this one still retains the compressed & rotated files for a long time.",
"username": "chris"
},
{
"code": "logrotatelogrotate",
"text": "Welcome to the MongoDB Community @Abdelrahman_N_A!The best approach would be setting up automatic log rotation and archiving as suggested by @chris. Options will vary depending on your O/S version, but on Linux or Unix logrotate is a standard utility.You can also trigger log rotation using MongoDB’s logrotate administrative command, which might be useful as a first step to allow you to compress or archive your current large log file. I wouldn’t outright delete recent log files until you are certain you won’t need them for any diagnostic purposes.If you need further suggestions, please confirm your O/S version so we can provide more relevant recommendations.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "thanks a lot , we tried it on windows server 2019 ,\nwhen we used “db.adminCommand( { logRotate : 1 } )” command , mongo created a new file for log and rename the old file automatically\nreference: https://docs.mongodb.com/manual/tutorial/rotate-log-files/",
"username": "Abdelrahman_N_A"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Managing or delete log file | 2020-12-30T20:10:25.280Z | Managing or delete log file | 14,603 |
null | [
"app-services-data-access"
] | [
{
"code": "\"apply_when\": {\n \"owner\": {\n \"%stringToOid\": \"%%user.data.id\"\n }\n},\nFailed: failed to import app: failed to migrate permissions: unknown operator '%stringToOid'",
"text": "Hi!\nI have Rules like these which help to compare string value stored in JWT token to the owner field of type ObjectId:It works perfectly if I edit it from the Realm’s UI.But if I try to deploy the same configuration automatically from Github I get Failed: failed to import app: failed to migrate permissions: unknown operator '%stringToOid' in the Deploy logs and no changes are deployed.\nDo you have an idea what I may be doing wrong?",
"username": "dimaip"
},
{
"code": "",
"text": "Hi! thanks for reporting - this looks like a bug. I’ve filed it with the engineering team but as a temporary workaround, you can either\na) continue using the UI to deploy your application\nb) write a function that converts an objectId to a string and use that within rules (example here)",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Thank you so much for the response!\nI think we’ll use the UI for now knowing that it’d soon be fixed, but it’s good to know there’s a workaround!",
"username": "dimaip"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error on deploy from Github: unknown operator '%stringToOid'" | 2020-12-22T11:07:00.900Z | Error on deploy from Github: unknown operator ‘%stringToOid’” | 3,982 |
null | [
"aggregation"
] | [
{
"code": "use('drug')\ndb.drug_insert.aggregate( \n [ {\n $unionWith: {\n coll: { \"$count\": \"total\" } \n }\n } ]\n)\n",
"text": "Hi,everyone:my code is here:but i get a error as below:\nError validating $unionWith value. err=Error getting coll field in $unionWith err=Expected ‘coll’ to be string, but got primitive.D insteadany help?my mongodb version is 4.4",
"username": "hj_zhang"
},
{
"code": "collpipelinedrug_insertdb.drug_insert.aggregate( \n [ {\n $unionWith: {\n coll: \"drug_insert\", pipeline : [{ \"$count\": \"total\" } \n }]\n } ]\n)\n",
"text": "Hi @hj_zhang,Welcome to MongoDB community!Your syntax for the unionWith stage is not complete.You need to specify a collection name to field coll and your pipeline goes into field pipeline.For example to add the count of drug_insert to the end of the query:But to be honest you can just use the $count stage directly to just get the count.Best\nPavel",
"username": "Pavel_Duchovny"
}
] | Error validating $unionWith value | 2020-12-31T02:17:58.968Z | Error validating $unionWith value | 1,692 |
null | [
"compass",
"connecting"
] | [
{
"code": "",
"text": "Maybe I’m in way over my head on this, but I harmlessly thought I would poke around with MongoDB and see what it could do with some JSON data I’ve been poking at… Except I can’t connect Compass to the database instance on Atlas. I’ve been through the documentation for both Compass and Atlas related to connecting one to the other. Maybe I’m slow. I’ve read through the postings I could find on this site. No help. The problem is SO simple, I’m embarrassed to have to ask about it.Here’ what I got.All of that produces the following connection string:Blockquote\nmongodb+srv://bob:[email protected]/testSeems straightforward… BUTThe Organization Access Manager tells me that the username and email address should be the same. So, is \" mongodb+srv://bob: \" in the connection string correct? Should it be \" mongodb+srv://[email protected]: \" ?I’m unclear about the format of the password in the connection string. Is it or ducks4Sale or <>?I’ve tried just about every combination of the above, so started checking other things.My ISP is Xfinity. We use their stock model and router. When connecting to the DB instance, Atlas detected my IP address without a problem. I’ve done nothing else with this setting. I can’t find that I should be using a particular port on the router.The user, [email protected], has admin privileges.There is something called an admin database for authentication, right? Do I need to do something there? I doesn’t seem like I should.As an FYI, I’m running Ubuntu 18.04, the Bionic Beaver, if that makes a difference.All help will be appreciated. I’d give out chocolate chip cookies, but that covid thing… well, you know.",
"username": "Christopher_Scott"
},
{
"code": "mongodb+srv://bob:[email protected]/test",
"text": "Your mongodb uri mongodb+srv://bob:[email protected]/test should be sufficient.\nRemember you have to go in the Atlas web interface to “Network Access” and whitelist the IP you are logging in from.",
"username": "Jack_Woehr"
},
{
"code": "<password><>",
"text": "Welcome to the MongoDB Community @Christopher_Scott!If you copy your Atlas connection string to the clipboard and open MongoDB Compass, Compass should recognise the format and offer to use it to set up a new connection. You can also copy and paste manually if a prompt doesn’t appear.The Organization Access Manager tells me that the username and email address should be the same. So, is \" mongodb+srv://bob: \" in the connection string correct? Should it be \" mongodb+srv://[email protected]: \" ?Organisation Access is for logging into the Atlas management UI, which would be an email address (or user name for legacy accounts).To connect to a cluster you need to set up a Database User with access to your Atlas cluster and make sure the IP you are connecting from has been added to the IP Access List for your cluster.I’m unclear about the format of the password in the connection string. Is it or ducks4Sale or <>?It looks like you have quoted a correct connection string. The <password> example is meant to be a placeholder for you to replace with your actual password (not including the <>).For full steps, please see: Connect via Compass in the Atlas documentation. I suspect the confusing step is that you are trying to connect to your cluster with Atlas org credentials rather than a database user.If the steps don’t work as expected, please confirm your specific version of Compass and any error messages or outcomes.Also: I’m assuming that you used an example username and password here, but if those are real credentials I would definitely change them ;-).Regards,\nStennie",
"username": "Stennie_X"
}
] | NuBe With Compass Connection Issues | 2020-12-31T04:43:24.837Z | NuBe With Compass Connection Issues | 1,728 |
null | [
"devops"
] | [
{
"code": "",
"text": "hello,\nin version 4.4 anybody have a concern or recommendation about “Compact” command,\nnote : my database exceeded 3.5 TB,\nthanks a lot",
"username": "Abdelrahman_N_A"
},
{
"code": "compactcompactcompactfile bytes available for reusedb[\"collectionname\"].stats().wiredTiger[\"block-manager\"]mongocompactcompactcompactcompactcompactcompactsecondaryhidden",
"text": "Hi @Abdelrahman_N_A,If you were using a server release of MongoDB earlier than 4.4, I’d definitely have serious concerns about blocking side effects of a compact operation in production. Removing the blocking behaviour was one of the improvements included in the MongoDB 4.4 release, so that is a positive change from previous releases.However, although compact will no longer block CRUD operations for the database containing the collection being compacted, there could still be a significant impact on your working set if you are compacting a large collection.Before running compaction I would check that this might be useful to do based on:It is normal to have some reusable space for a collection with active updates. Excessive reusable space is typically the result of deleting a large amount of data, but can sometimes be related to your workload or the provenance of your data files.The outcome of a compact operation is dependent on the storage contents, so I would draw your attention to the note on Disk Space in the compact documentation:On WiredTiger, compact attempts to reduce the required storage space for data and indexes in a collection, releasing unneeded disk space to the operating system. The effectiveness of this operation is workload dependent and no disk space may be recovered. This command is useful if you have removed a large amount of data from the collection, and do not plan to replace it.If this is a production environment, I would hope you have a replica set or sharded cluster deployment so you can minimise the operational impact.If you have many large collections to compact (or want a more likely outcome of freeing up disk space), Re-syncing a Secondary Member of a Replica Set via initial sync will rebuild all of the data files by copying over the data from another member. If compact doesn’t end up freeing up enough space, this would be the next procedure to run.If you do decide to run compact in a production environment, I would minimise the operational impact by:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Compact command concerns | 2020-12-30T20:10:27.052Z | Compact command concerns | 4,743 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi,\nIs there a best practice maximum number of fields in a document ?thanks",
"username": "Normand_Rioux"
},
{
"code": "",
"text": "Welcome to the community @Normand_Rioux!There isn’t a prescriptive maximum number of fields per document, but you should consider the relationships of fields in the same document and ensure that your data model efficiently supports your common use cases and indexing requirements. There are definite anti-patterns like having too many fields with unrelated data or array fields with unbounded growth, but interpreting those usually requires some understanding of your use case and data.If you have a specific example of a data model you are considering (or working with), I suggest starting a new discussion topic on the forum with more details (example documents, common queries, planned indexes, version of MongoDB server, and your specific questions/concerns). It is likely that someone in the community will have useful suggestions.Some worthwhile reads include:There’s also a free online course at MongoDB University if you want some hands-on practice with video lectures & quzzes: M320: Data Modeling.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie,\nThx for your answer. \nBasically the collections are Products, Vendors, Customers etc.\nAnd the documents can have 150-200 fields.\nI was wandering if there would be a performance problem with this amount of fields hence my question?\nI am taking the MongoDB university courses but it mostly deals with 10-15 fields which is not big.\nNorm",
"username": "Normand_Rioux"
},
{
"code": "",
"text": "Hi @Normand_Rioux,As I mentioned, the consequences depend on how you use the data rather than a strict number of fields. The MongoDB University examples tend to be concise for clarity, but modelling more complex data (like products) can easily take more than a handful of fields.Please have a read through the Patterns & Anti-Patterns series I mentioned, as those go into very helpful detail on schema design.For example, in the case of products you will probably want to use the Attribute Pattern to reduce the number of indexes needed (which should generally improve performance). Some anti-patterns to watch out for would be combining many fields that aren’t accessed together into a large document (aka Bloated Documents) and creating Massive Arrays with unbounded growth.If you have a single document with 200 fields but commonly only needed to read 10, the server still has to load the full document into memory (outside of the very special case of a covered query), but an anti-pattern would be separating data that is accessed together.The best approach to model your data is informed by your common queries and use cases. You can certainly have hundreds (or thousands) of fields in a document if that is appropriate. There are also patterns like Subset and Outlier that are helpful to consider when working with portions of related data.Modelling data to support efficient usage (rather than normalising for storage) is a key differentiator for effective use of MongoDB, but is a definite mindset shift from working with tabular data in SQL.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Best practice max fields in a document | 2020-12-29T20:16:56.941Z | Best practice max fields in a document | 4,665 |
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "Hi Team,Can we run java script in windows power shell to connect to MongoDB and check status of the server daily.\nCould you please help meThanks in Advance",
"username": "venkata_reddy"
},
{
"code": "",
"text": "Hi @venkata_reddy,This seems like a possible duplicate of your other question: Health script from Ops manager - #2 by Stennie_X.You could certainly write a script to connect to your deployment, although it would be more typical to configure alerts based on conditions of interest or concern using MongoDB Cloud/Ops Manager or a similar management platform.What status are you looking to check, and what type of deployment do you have (standalone, replica set, or sharded cluster)?Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_X,Thank you for the update.I am trying to execute certain batch of commands in power shell by passing java script as an argument.It would look like this:I created a powershell(test.ps1) script with the following lines in it.cd “C:\\Program Files\\MongoDB\\Server\\4.2\\bin”\n.\\mongod --version\n.\\mongo --authenticationMechanism=SCRAM-SHA-256 --authenticationDatabase=‘admin’ --username=‘sysdb’ --password=mongo 192.33.44.55/admin C:\\Users\\xyz\\Desktop\\test1.jswhere test1.js contains the following:print(’*** MongoDB Uptime in Days: ’ + db.serverStatus().uptime / 86400 + ’ ***’);print(’*** MongoDB Version: ’ + db.version() + ’ ***’);print(’*** replicaset members: ’ + rs.status().members + ’ ***’);rs.status().membersOutput:PS C:\\Program Files\\MongoDB\\Server\\4.0\\bin> C:\\Users\\xyz\\Desktop\\test.ps1\ndb version v4.0.14\ngit version: 1622021384533dade8b3c89ed3ecd80e1142c132\nallocator: tcmalloc\nmodules: enterprise\nbuild environment:\ndistmod: windows-64\ndistarch: x86_64\ntarget_arch: x86_64\nMongoDB shell version v4.0.14\nconnecting to: mongodb://192.33.44.55:27017/admin?authMechanism=SCRAM-SHA-256&authSource=admin&gssapiServiceName=mongodb\nImplicit session: session { “id” : UUID(“8591efaf-b36c-414c-9c4c-e3301ee32cd3”) }\nMongoDB server version: 4.0.14\n*** MongoDB Uptime in Days: 5.000949074074074 ***\n*** MongoDB Version: 4.0.14 ***\n*** replica set members: [object BSON],[object BSON],[object BSON] ***whether it is text file or java script file as argument\nIt is not displaying output of a command if it returns more than a line.\nMoreover it is not executing the commands placed outside the print() function.\nCould you please help me.Thanks in Advance",
"username": "venkata_reddy"
},
{
"code": "mongomongoprint(rs.status().members)print()printjson(rs.status().members)print()printjson()",
"text": "Hi @venkata_reddy,There are some differences in interactive vs scripted mongo shell interaction. For more information see: Write Scripts for the mongo Shell.There’s a particular note which might help with your current approach:In interactive mode, mongo prints the results of operations including the content of all cursors. In scripts, either use the JavaScript print() function or the mongo specific printjson() function which returns formatted JSON.*** replica set members: [object BSON],[object BSON],[object BSON] ***This is the expected output of print(rs.status().members). The print() function does not attempt to convert objects to JSON.Use printjson(rs.status().members) to output in JSON.Moreover it is not executing the commands placed outside the print() function.The commands are executing, but won’t display any output in scripted mode unless you explicitly call print() or printjson() on a command or its result.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_X,Great stuff. Thanks for the clear explanation.Happy New Year in advance !!!",
"username": "venkata_reddy"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Java script to use in windows power shell to connect to MongoDB and check status daily | 2020-12-23T20:21:13.037Z | Java script to use in windows power shell to connect to MongoDB and check status daily | 3,962 |
null | [
"upgrading"
] | [
{
"code": "",
"text": "I have a few files from /data/db - database created with MongoDB1.8 over 5 yrs ago. The format is:\nolddb.1\nolddb.2\nolddb.3\nolddb.4\nolddb.nsAny way to convert it to a new format without recreating the old environment to do proper migration?",
"username": "Tyra_M"
},
{
"code": "dbPathmongodumpmongorestoremongodump",
"text": "Welcome to the MongoDB community @Tyra_M!MongoDB 1.8 was released almost 10 years ago (March, 2011) and there have been significant changes since then. There have been several changes in the on-disk MMAP format since 1.8, and I’m not sure how far you can stretch backward compatibility.What is your desired goal state? For example, do you want your data to end up in the latest version of MongoDB server or are you just trying to extract some data from an old backup?A few approaches to consider (make sure you test with a copy of your data files):Try installing MongoDB 4.0.x (the last version supporting the MMAP storage engine) with a dbPath pointing to a copy of your database files. There may be too many changes for this to succeed, but I think is worth trying.Install an older version of MongoDB server which is closer (by major release version) to your original data files, take a backup using mongodump, and then mongorestore that to a current version of MongoDB server. I think MongoDB 2.2.x would be a reasonable starting point (and it included some useful mongodump improvements like backing up index definitions).If you do end up trying one of these approaches (or a different one), please comment with the outcomes (successful or otherwise).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "{id: int}",
"text": "For fun I created a 1.8 database, 2 collections 40000 docuemts, very simple ({id: int})Running 4.0 with the datafiles did not result in a db with the 1.8 collections present.Stepping back major versions 3.0(also 2.2) seems to load it okay. So @Tyra_M to follow @Stennie_X’s 2nd point you should be able to get your data from 1.8 data files and into a newer vaerion.–\nChris",
"username": "chris"
}
] | Converting files from MongoDB 1.8 | 2020-12-24T23:51:04.360Z | Converting files from MongoDB 1.8 | 1,886 |
null | [
"swift",
"app-services-user-auth"
] | [
{
"code": "RLMApp.login(withCredential: credentials, completion: { })",
"text": "I’m trying to perform user registration on IOS using Swift. I’d like to find some example code.Registration works. I’m collecting a JWT from my own backend. Then I’m creating credentials and calling something like this:RLMApp.login(withCredential: credentials, completion: { })I am actually able to register a user with the same method, as the login method just creates a new user if the username is not taken.The main problem I’d like to solve is that I don’t know when a new user has been created. If a user attempts to login and misspells their username, it will just create a new account. If a user tries to signup with a username that’s already taken, they’ll end up either logging in or getting a bad password notification.Is there a method for synch registration on IOS. Can I get an example of how to call this method?Also, I’d like to trigger a confirmation email. Am I able to do this if I register with a JWT? If so, can someone point me to an example?Thanks,\nRyan",
"username": "Ryan_Goodwin"
},
{
"code": "",
"text": "Ryan,First of all, good question. Based on your post, I understand that you have a functioning backend that correctly generates a JWT token for MongoDB Realm, so you have already done most of the heavy lifting.Second, MongoDB Realm supports JWT authentication, but does not actually provide a JWT authentication provider. That is where your backend comes into play. But as a JWT authentication provider, it is responsible for user registration and user password authentication. Once it has determined that a user is valid, it creates a JWT token for that user. When the JWT token is passed on to Realm through the RLMApp.login(…) function, Realm assumes that the user in question is valid. If the actual user does not exist, Realm will go ahead and create the user record for it. The whole idea behind JWT authentication is that creating and validating user credentials is differed to a third party. MongoDB Realm simply provides the hooks for that to happen. Also, the new version of Realm supports meta-data, which can be very useful for packaging additional information during the signup phase.There are really two things you can do:In my opinion, the second approach is more of a kludge, and could cause problems down the road.I wrote a medium article on JWT authentication and meta-data, maybe this will helpIn June 2020, MongoDB finally released their first beta version of MongoDB Realm — a real time database technology that combined the…\nReading time: 9 min read\nOur company Cosync also provides a commercial JWT service for MongoDB Realm athttps://cosync.ioI hope this was useful.Richard Krueger",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "Super helpful, thanks richard",
"username": "Ryan_Goodwin"
}
] | Looking for an example of user registration on IOS | 2020-12-27T23:54:14.566Z | Looking for an example of user registration on IOS | 2,283 |
null | [
"react-native"
] | [
{
"code": "",
"text": "Hello,\nI’m trying to enable push notifications in a React Native App using Realm ‘Push Notifications’.I created my project in Firebase and connected it to realm → works\nI can send Notifications using the realm UI → works\nI’m getting the notifications on my device → worksNow my issue is that I want to trigger notifications. Therefore I would need to access this service with a function.\nYes, I could use the firebase admin SDK but since realm has a connection to it already my question is if that is possible and if so how?ps. I checked this documentation (push-notifications) but it doesn’t help me at all.Thanks for any help or advice Regards,\nEbrima",
"username": "Ebrima_Ieigh"
},
{
"code": "",
"text": "Hello,Same problem on iOS.Thanks,\nAndrei",
"username": "Rasvan_Andrei_Dumitr"
},
{
"code": " // Construct message\n const message = {\n \"to\": \"/topics/someTopic\",\n \"notification\": {\n \"title\": \"Message Title\",\n \"body\": \"Message Body\",\n },\n };\n \n // Send push notification\n const gcm = context.services.get('gcm');\n const result = gcm.send(message);\n return result;",
"text": "I did a bit of research and figured out that the third-party service you want to call is ‘gcm’ not ‘fcm’ as I expected.This example works for me:",
"username": "Ebrima_Ieigh"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Push Notifications using realm (FCM) | 2020-12-21T20:36:24.460Z | Push Notifications using realm (FCM) | 4,776 |
null | [
"connecting"
] | [
{
"code": "",
"text": "HI guys,\nI am quite new to MOngoDB.\nI installed it on my virtual Linux server and want to connect via MongoDB Compass. But I get a time out and did not know why.\nCan somebody assist a newbie? \nGreets!",
"username": "Florian_Fey"
},
{
"code": "",
"text": "Welcome to MongoDB community.May be firewall issues\nDid you try from another location\nCan you connect by shell\nIs your mongod up and running and on what port\nWhat type of connect string are you using",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Mongod should be up because I’m using Strapi based on this db and it works.I opened the port 27017 with TCP on my server",
"username": "Florian_Fey"
},
{
"code": "",
"text": "Please show how you are connecting\nAre you using SRV string or long form of string or fill individual params option?\nScreenshot of exact error.Just timeout error or timeout with xxx ms?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Oh sorry. After I opened the port everything works ",
"username": "Florian_Fey"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Connecting to MongoDB on vServer not possible | 2020-12-29T10:12:29.795Z | Connecting to MongoDB on vServer not possible | 1,643 |
null | [] | [
{
"code": "",
"text": "Running into below error when trying to create a user using Vault Mongo DB plug-in. Using Free Cluster, not sure if that is the limitation. I have done all the steps to set up vault configuration pointing to my Atlas Cluster - but does not allow me create users.Here are the commands run.\nvault write database/roles/my-role db_name=my-mongodb-database creation_statements=‘{ “db”: “admin”, “roles”: [{ “role”: “readWriteAnyDatabase” }] }’and thenvault read database/creds/my-role ( to provision the user )Vault Mongo DB Steps",
"username": "Kishore_Kumar_Kota"
},
{
"code": "",
"text": "Hi @Kishore_Kumar_Kota,Welcome to MongoDB community.Atlas database users are only allowed to br created from UI or api.You cannot use createUser commands.\nThis is why this plugin fails.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Kishore_Kumar_Kota,Thanks to my colleague @Andrew_Davidson, I was informed that there is a vault api to create the users called “vault secrets”MongoDB Atlas - Secrets Engines | Vault | HashiCorp DeveloperThanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you for pointing me to correct documentation on this. I am seeing a different error now.I did follow the steps - I have installed Vault in my local and set up the config to point to my mongo db atlas account using programmatic access keys.Error reading mongodbatlas/creds/test: Error making API request.URL: GET http://127.0.0.1:8200/v1/mongodbatlas/creds/testCode: 400. Errors:",
"username": "Kishore_Kumar_Kota"
},
{
"code": "",
"text": "Hi @Kishore_Kumar_Kota,It might be that either this vault software is out of date and it has not calling the up to date api end points.It look like it hits a deprecated api whitelist one.Have a look at this blogAutomate secrets management for MongoDB Atlas database users and programmatic API keys with two new secrets engines, available in HashiCorp Vault 1.4.\nThanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error with provisioning users using Vault Plugin | 2020-12-25T05:32:04.267Z | Error with provisioning users using Vault Plugin | 3,289 |
null | [] | [
{
"code": "",
"text": "How to drop (delete, remove) a Mongo Atlas full-text search index on a collection?Thanks",
"username": "Melody_Maker"
},
{
"code": "",
"text": "Please check thishttps://docs.atlas.mongodb.com/data-explorer/indexesFrom command line\ndb.collection.getIndexes()\nIdentify your index name\nThen\ndb.collection.dropIndex(“index_name”)",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "That doesn’t work for Mongo Atlas full-text search indexes. Just regular indexes and text indexes.All I get is…\n[ { “v” : 2, “key” : { “_id” : 1 }, “name” : “id” } ]So I am in a state where I have no idea where these indexes are, how to list them, nor how to drop them.",
"username": "Melody_Maker"
},
{
"code": "",
"text": "So you are able to create FTS index but cannot drop?\nHow did you create? Was it dynamic (default) or static\nFrom where you got above details\nWhat is your cluster type M0,M10…etc\nYou should be having edit index button",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I created the FTS index in the UI.\nThere is apparently no way to create it through an api.\nI did find the delete button in the UI.\nThere is no programmatic way to list or manage these FTS indexes, or know where they are; nor is there a list in the UI, so I had to click through all my collections to see which ones had an FTS index, then take notes in a separate file so I can keep track of what indexes are where since there is no apparent way in the Atlas UI to do that.",
"username": "Melody_Maker"
},
{
"code": "",
"text": "I think you have to use API http get method to get those details",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Melody_Maker,As @Ramachandra_Tummala noted there are Atlas Search API methods that allow you to get details for (and manipulate) Atlas Search indexes. The API includes methods to Create an Atlas Search Index and Delete an Atlas Search Index.You can also perform the same actions through the Atlas Search UI. See Delete an Atlas Search Index for the specific steps.It sounds like the UI could be more intuitive. If you have suggestions for improvement, please create a topic on the MongoDB Feedback Engine and comment here with the link so others can watch & upvote.A related feedback idea that you may be interested in is: Allow managing Atlas Search index via drivers.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "import DigestFetch from \"digest-fetch\";\n\n/**\n@func\ndo an ajax fetch using Digest Authentication\n\n@deps\nnpm i digest-fetch crypto-js node-fetch\n\n@param {string} publicKey\n@param {string} privateKey\n@param {string} url\n@return {Promise<object[]>}\n*/\nconst fetchDigest = async (publicKey, privateKey, url) => {\n const client = new DigestFetch(publicKey, privateKey, {});\n const res = await client.fetch(url, {});\n return await res.json();\n};\n\n\n\n/**\n@func\nmake a request to the Mongo API using Digest Authentication\n\n@param {string} url\n@return {Promise<object[]>}\n*/\nconst mongoApiRequest = async urlSegment => {\n const urlbase = \"https://cloud.mongodb.com/api/atlas/v1.0/\"; \n return await fetchDigest(\n process.env.mongoProgrammaticApiKey_public,\n process.env.mongoProgrammaticApiKey_private,\n urlbase + urlSegment);\n};\n\n\n //@test\n // get the Mongodb full-text index info for a particular collection\n logPromise(fetchFtsIndexInfo(\n `groups/${groupid}/clusters/${clusterid}/fts/indexes/${dbName}/${collName}?pretty=true`));",
"text": "Thanks for the help guys.\nI got it running in node.js\nHere’s the code if anyone’s interested:\n(refactored this way because I have the two funcs in separate files for reuse)",
"username": "Melody_Maker"
}
] | How to drop (delete, remove) a Mongo Atlas full-text search index on a collection? | 2020-12-26T23:14:13.070Z | How to drop (delete, remove) a Mongo Atlas full-text search index on a collection? | 4,377 |
null | [
"aggregation",
"php"
] | [
{
"code": " $res[] = array(\n '$match' => array('WIN_BALANCE' => array('$gte' => 1)),\n '$group' => array('_id' => '$USER_ID', 'total' => array('$sum' => 'WIN_BALANCE')),\n '$sort' => array('total' => -1),\n [$cursor' => ['batchSize' => 0 ]]\n );\n $this->mongo_db->aggregate('user',$res);\n",
"text": "I’m Using Aggreate function with following code in my projectBut getting below error.Aggregation operation failed: localhost:27017: The ‘cursor’ option is required, except for aggregate with the explain argument.I have tired with [‘explain’ => true ] also.I was searching many forums didn’t get any solutions.Can you please me out of this.Thanks in advance",
"username": "siva_r"
},
{
"code": "",
"text": "Hi Everyone,\nDo you have any solution for this reported issue",
"username": "siva_r"
},
{
"code": "",
"text": "Hi @siva_r,Did you find a solution to your issue? If not, what version of the MongoDB PHP driver and MongoDB server are you currently using?MongoDB 3.6 and newer servers always return aggregation results using a cursor. This should be automatically handled for you when using a compatible driver. Please see MongoDB Compatibility: PHP Driver for a table indicating the range of supported PHP driver & server combinations.Updating to a newer driver version should resolve your issue, but as with any upgrade please review the release notes in case there are any API or compatibility changes from the version you are currently using.Regards,\nStennie",
"username": "Stennie_X"
}
] | PHP Codeigniter with Mongo 4.4.2 Aggregate Issue | 2020-11-25T20:28:56.713Z | PHP Codeigniter with Mongo 4.4.2 Aggregate Issue | 4,328 |
null | [
"dot-net",
"atlas-functions"
] | [
{
"code": "",
"text": "Hi TeamI am new to MongoDb Realm.Just to give you a background of my requirement, I want to construct a new collection from another collection in Atlas Mongodb (with some business logic). This new collection will then be used by FrondEnd system.I am playing with Realm and wondering if that can resolve my requirement.So basically when ever there is an insert/Update to original collection, a trigger is fired which will take this collection, do some manipulations and insert/Update into a new collection. I am looking at the Realm Functions. This apparently only works for Javascript ?I am more comfortable with .net. Is there a way to write functions in c#.net ?I also looked at MongoDb Realm .net SDK. I am not sure how would you connect that with Realm functions ?Your help will be deeply appreciated !\nThanks",
"username": "learner_me"
},
{
"code": "user.Functions.CallAsync()",
"text": "Welcome to the community @learner_me!So basically when ever there is an insert/Update to original collection, a trigger is fired which will take this collection, do some manipulations and insert/Update into a new collection. I am looking at the Realm Functions. This apparently only works for Javascript ?Server-side logic for Realm Functions and Realm Triggers currently only supports writing JavaScript functions. However, you can execute Realm Functions from a connected client app using any of the MongoDB Realm SDKs (.NET, iOS, Android, …) and return results for processing in your application.If you have a strong preference for writing all of your functions in C#, an alternative approach would be to set up a persistent application watching for relevant insert and update events using MongoDB Change Streams (which is the server feature that Realm Triggers builds on). However, this would be client-side processing rather than server-side.I also looked at MongoDb Realm .net SDK. I am not sure how would you connect that with Realm functions ?You can execute a server-side Realm Function from a connected C# client app using the user.Functions.CallAsync() method. For more information, see: Call a Function in the Realm .NET SDK documentation.Regards,\nStennie",
"username": "Stennie_X"
}
] | Realm functions for .net | 2020-12-30T01:42:32.370Z | Realm functions for .net | 1,872 |
null | [
"aggregation",
"indexes"
] | [
{
"code": "Data:\nstudent class\ns1 [\"english\"]\ns2 [\"maths\"]\nQuery1:\ndb.students.explain('executionStats').aggregate([\n { $match: { $expr: { $setIsSubset: [ \"$class\", [ \"english\" ] ] } } }\n])\nQuery2:\ndb.students.explain('executionStats').aggregate([\n { $match: { \"class\": \"english\" } }\n])\nclass : [\\\"maths\\\", \\\"maths\\\"]",
"text": "The field class is indexed, and values of it are all in array.,The first one does a COLLSCAN, while the second does a IXSCAN with indexBounds class : [\\\"maths\\\", \\\"maths\\\"].Wondering if there is anyway to achieve the setIsSubset functionality, which can utilize the index efficiently.",
"username": "Peng_Huang"
},
{
"code": "",
"text": "Can someone please answer this question? Thank you",
"username": "Peng_Huang"
},
{
"code": "$expr$match$exprdb.students.aggregate([\n { $match: { \"class\": \"english\" } },\n { $match: { $expr: { $setIsSubset: [ \"$class\", [ \"english\" ] ] } } }\n])\nexplain()$expr parsedQuery: {\n '$and': [\n { class: { '$eq': 'english' } },\n { '$expr': { '$setIsSubset': [ '$class', [Object] ] } }\n ]\n },\n winningPlan: {\n stage: 'FETCH',\n filter: {\n '$expr': { '$setIsSubset': [ '$class', { '$const': [Array] } ] }\n },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { class: 1 },\n indexName: 'class_1',\n isMultiKey: true,\n multiKeyPaths: { class: [ 'class' ] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { class: [ '[\"english\", \"english\"]' ] }\n }\n },\n",
"text": "Hi @Peng_Huang,The $expr operator does not support multikey indexes, so isn’t a good choice for the first step of this aggregation pipeline by itself.However, you can include multiple $match stages in your pipeline. Combining your two stages would result in $expr only being applied to documents matched via the index:If you look at the explain() results, you’ll notice the query planner optimises these two matches into a single query with $expr used as a filter.Regards,\nStennie",
"username": "Stennie_X"
}
] | $setIsSubset does not use index | 2020-12-14T23:42:17.722Z | $setIsSubset does not use index | 2,131 |
null | [
"java",
"change-streams"
] | [
{
"code": "[#object[com.mongodb.client.model.Aggregates$SimplePipelineStage 0x6ed03635 \"Stage{name='$match', value=And Filter{filters=[Operator Filter{fieldName='operationType', operator='$in', value=[insert, update, delete, replace]}]}}\"]]\nClojure(def watch-filter\n (java.util.Arrays/asList\n (into-array\n [(com.mongodb.client.model.Aggregates/match\n (com.mongodb.client.model.Filters/and\n (java.util.Arrays/asList\n (into-array Object\n (into [(com.mongodb.client.model.Filters/in \"operationType\"\n (java.util.Arrays/asList\n (into-array [\"insert\", \"update\", \"delete\", \"replace\"])))])))))])))\nNOTEdefinto-array(-> ^com.mongodb.client.internal.MongoDatabaseImpl db\n (#(.getCollection ^com.mongodb.client.internal.MongoDatabaseImpl %\n ^String \"mongo-coll-name\"))\n (#(.watch ^com.mongodb.client.internal.MongoCollectionImpl %\n ^java.util.List watch-filter))\n (#(.fullDocument ^com.mongodb.client.ChangeStreamIterable %\n com.mongodb.client.model.changestream.FullDocument/UPDATE_LOOKUP))\n (#(.iterator ^com.mongodb.client.internal.Java8ChangeStreamIterableImpl %)))\nNOTE->db.getCollection(\"mongo-coll-name\").watch...org.mongodb/mongo-java-driver\"3.12.7\"",
"text": "I am watching a collection and my watch function runs fine, but fails and hangs for inserts. The value of my filter is as the following:As you can see, I have added a bunch of operation types, but my watch function only picks up updates, and nothing else.Can someone point me in the right direction?I am creating filter for my watch function in the following manner. Pardon me, the code is in the Clojure programming language:NOTE: def is just a macro for creating variables and into-array is a function for creating java arrays\nWhich I am using in my db cursor iterator like this:NOTE: The -> is a macro which translates into java’s db.getCollection(\"mongo-coll-name\").watch... in the above case.\nFYI, the org.mongodb/mongo-java-driver version that I am using is \"3.12.7\"",
"username": "Punit_Naik"
},
{
"code": ".getRemovedFieldsupdateDescription",
"text": "Turns out this was a different issue. I was calling the .getRemovedFields method on the updateDescription event when I can watching for inserts, which threw an error inside my async code.",
"username": "Punit_Naik"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Watch does not work for inserts, only updates | 2020-08-13T20:24:16.894Z | Watch does not work for inserts, only updates | 2,190 |
null | [
"dot-net"
] | [
{
"code": "Command findAndModify failed: Unknown modifier: _id. Expected a valid update modifier or pipeline-style update specified as an array.",
"text": "Hello what is problem ? Google dont know about this problemCommand findAndModify failed: Unknown modifier: _id. Expected a valid update modifier or pipeline-style update specified as an array.",
"username": "alexov_inbox"
},
{
"code": "findAndModify",
"text": "Hi @alexov_inbox,This indicates there is a problem with your findAndModify syntax. Can you comment with an example of your update command as well as the version of driver you are using?Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": " BsonClassMap.RegisterClassMap<X>(c => { \n c.AutoMap(); \n c.MapIdField(f => f.Email);\n});\n\n\n\n\n var m = new X\n {\n Email = email,\n Language = language,\n Password = password,\n EntryDate = DateTime.UtcNow,\n FindCount = 0\n };\n\n var u = \n new UpdateDefinitionBuilder<X>().Combine(\n Builders<X>.Update.Inc(f => f.FindCount, 1), \n new ObjectUpdateDefinition<X>(m));\n \n MailRaws.FindOneAndUpdate<X>(f => f.Email == email, u, new FindOneAndUpdateOptions<X> {IsUpsert = true});",
"text": "",
"username": "alexov_inbox"
}
] | findAndModify failed: Unknown modifier: _id | 2020-12-28T21:19:03.434Z | findAndModify failed: Unknown modifier: _id | 2,240 |
[] | [
{
"code": "",
"text": "Hello,\ncopyDatabase() is deprecated and mongodump + mongorestore are recommanded to copy database.However, does it means that I need to download to local and re-upload the entire database?\nIs there anyway to copy only on the Atlas Cluster?",
"username": "Tak_Kin_Cheng"
},
{
"code": "",
"text": "Hi @Tak_Kin_Cheng,I think copyDatabase worked in a way where it will read all data to your client and write to target host (similar to dump and restore)The suggsted method is to use mognodump and restore. In 4.4 you can use a $out from one collection to another database with same collection name.Doing so for all collections in a database will essentially clone the data but you will need to rebuild all indexes .Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Copy database on remote MongoDB Atlas Cluster after 4.2 | 2020-12-29T10:13:03.809Z | Copy database on remote MongoDB Atlas Cluster after 4.2 | 8,360 |
|
null | [
"transactions"
] | [
{
"code": "forM1060 seconds",
"text": "I’ve transaction which performs several deletes, inserts, updates and have for loops in it.It’s an M10 cluster. The transaction is taking greater than 60 seconds as data is growing.Is there any possible way to achieve this scenario on Atlas other then moving it to self managed / hosted setup?",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "Hi @viraj_thakrar,I would suggest contacting support and request increasing the transaction life time server parameter for your clusters:\nhttps://docs.mongodb.com/manual/reference/parameters/#param.transactionLifetimeLimitSeconds\nHowever this limitation is there for a reason as long transactions are not recommended and might impose risks and overhead on clusters operations and performance.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Transactions that are taking more than 60 seconds on Atlas | 2020-12-29T15:51:48.154Z | Transactions that are taking more than 60 seconds on Atlas | 5,159 |
null | [
"data-modeling",
"mongoose-odm"
] | [
{
"code": "",
"text": "Hey,I’m new to MongoDB need some advice on how to design my database (with mongoose).I want to have multiple documents that each have a user, a date, a type and a number.\nEvery few minutes the db should be check if any date is smaller than Date.now and if yes, my program do some things with type and user. The second thing that should be checked is if for every type, the number is bigger than another external value I am comparing it to, and if yes, it should do the same thing.Since the database could get very big and these checks should run every couple of minutes, I want to make sure this thing is as efficient as possible. To achieve this, I thought about storing documents with different users and numbers as subdocuments with a parent that has a unique combination of type and date. These top-level documents should be sorted by date on saving so that only the first document has to be checked for an expired date every time the checks begin and for every other document, only the first subdocument (sorted by the number) has to be checked.My question is if this is a good idea or if there is any better way to do it. If no, can you explain how I can sort the top-level documents so that I only have to read the first one’s date?Thank you,\nTil",
"username": "Til_Weimann"
},
{
"code": "",
"text": "Hi @Til_Weimann,Welcome to MongoDB community!Based on the very high level description provided here it sounds like your documents could have 2 potential structures:Now to better address your questions I would need:I recommend looking into the following blogs:\nhttps://www.mongodb.com/article/mongodb-schema-design-best-practicesA summary of all the patterns we've looked at in this serieshttps://www.mongodb.com/article/schema-design-anti-pattern-summaryThis is very good material to pick a good starting point for any design.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi,\nThanks a lot for your answer, I will go a bit more into detail on how the database should work.What I am trying to build is a structure that will help me automatically manage events based on two decisive factors.\nThe most common queries will be:\na) Every couple of minutes, look up if any events have expired by finding the event with the lowest date value and comparing it to the current date. If the event is expired, a function that processes user, date, type and some other values will be executed and the event will be removed from the datebase.\nThis check should be repeated either right after the processing has been done (only if the previous event expired, of course), or a couple of minutes later if the previous event hasnt expired yet.\nEvents should be added to the database all around the clock (probably mostly from every couple of minutes to every ten seconds, depending on user activity).\nb) Every couple of minutes, for every type, find the event with the event with the lowest value and check if this value is below an external value. If yes, process the data in a function just like with query a) and remove the event from the database.\nThere should be up to around 50k possible event types, but the vast majority of them will probably never be used, so the database will most likely contain only a couple of thousands of types but a much larger amount of users.\nc) Another, less important query will be to look up what events a specific user has, but this will only happen based on user-input and not be too frequent.Events are created by users with a date (year, month, day, hour) within the next few months. Every user may have up to 10 events at a time, but the average number will be lower. Since the amount of users should get bigger than the amount of combionations between dates popular types, users will most likely share those two values, but not other variables that probably will be more or less unique to them when looking at the type-date combination.While the bucket structure you proposed could help group dates into groups (e. g. by day), this structure wouldn’t be very useful when performing query b).\nThe other structure you proposed would, if I understand this correctly, rather help speed up things when performing query c), but since this one will be a rather unimportant and not very frequent query compared to a) and b), I dont know if this is the best choice.Thanks,\nTil",
"username": "Til_Weimann"
},
{
"code": "{ \"event_id\" : ...,\n \"Event_Name\" : ...,\n \"expireDate\" : ...,\n \"UserData\": { userId : ...}\n \"EventType\" : { typeId : ....}\n...\n}\n",
"text": "Hi @Til_Weimann,Now looking at the details why not to hold a document for each event in an event collection. This collection can be segregated into partitions (generic names for collections based on a monthly or yearly basis like events_202012) each document will be for an event holding user info, type and expireDate. expireDate will be indexed so you could fetch any expire documents and pass their data to the functions.As you can index your type data you will be able to do queries based on type filtering.An example of a high level document:Now here you have the flexibility to hold a document per user and duplicate it for users sharing the event or turn users to arrays and host user ids in them.Additionally , you can think on using a ttl index to eventually expire to object if you just need to delete it or use a remove command at the end of processing for each event.You can use the trick of 0 seconds expire settings so you set the expire field with the current date only after processing the event for deletion and the ttl thread will expire it.One last question is if you have the need to show a user profile and its associated evenrts as this might be a bit more complex with this design. In such case there might be benift in holding a users collection with events array (up to 10) for each user, and each event will hold its expireDate and type. But you will probably still need to have an extended reference collection for extra event details to avoid overflowing the user profile doc.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Need some advice on how to design my database | 2020-12-28T20:19:37.459Z | Need some advice on how to design my database | 2,103 |
null | [
"queries",
"mongoose-odm"
] | [
{
"code": "{\n \"houses\": [\n {\n \"_id\": \"5fe72f0b4fd2c131bcc7dae0\",\n \"name\": \"John\",\n \"district\": \"Texas\",\n \"events\": [\n {\n \"kind\": \"developer\",\n \"group\": \"facebook\"\n }\n ]\n },\n {\n \"_id\": \"5fe72f0b4fd2c131bcc7dadf\",\n \"name\": \"Michael\",\n \"district\": \"Texas\",\n \"events\": [\n {\n \"kind\": \"advertiser\",\n \"group\": \"instagram\"\n }\n ]\n },\n {\n \"_id\": \"5fe72f0b4fd2c131bcc7dade\",\n \"name\": \"Frank\",\n \"district\": \"Washington\",\n \"events\": [\n {\n \"kind\": \"developer\",\n \"group\": \"school\"\n }\n ]\n }\n ]\n}\ndistrict == \"Texas\"{\n \"houses\": [\n {\n \"_id\": \"5fe72f0b4fd2c131bcc7dae0\",\n \"name\": \"John\",\n \"district\": \"Texas\",\n \"events\": [\n {\n \"kind\": \"developer\",\n \"group\": \"facebook\"\n }\n ]\n },\n {\n \"_id\": \"5fe72f0b4fd2c131bcc7dadf\",\n \"name\": \"Michael\",\n \"district\": \"Texas\",\n \"events\": [\n {\n \"kind\": \"advertiser\",\n \"group\": \"instagram\"\n }\n ]\n }\n ]\n}\nkind == \"developer\"{\n \"houses\": [\n {\n \"_id\": \"5fe72f0b4fd2c131bcc7dae0\",\n \"name\": \"John\",\n \"district\": \"Texas\",\n \"events\": [\n {\n \"kind\": \"developer\",\n \"group\": \"facebook\"\n }\n ]\n },\n {\n \"_id\": \"5fe72f0b4fd2c131bcc7dade\",\n \"name\": \"Frank\",\n \"district\": \"Washington\",\n \"events\": [\n {\n \"kind\": \"developer\",\n \"group\": \"school\"\n }\n ]\n }\n ]\n}\ndistrict == \"Texas\" && kind == \"developer\"{\n \"houses\": [\n {\n \"_id\": \"5fe72f0b4fd2c131bcc7dae0\",\n \"name\": \"John\",\n \"district\": \"Texas\",\n \"events\": [\n {\n \"kind\": \"developer\",\n \"group\": \"facebook\"\n }\n ]\n }\n ]\n}\nrouter.get('/report', (req, res) => {\n let params = {}; \n let { district, kind } = req.headers;\n\n if (district) params[\"district\"] = district;\n if (kind) params[\"kind\"] = kind;\n // Here should be the query\n});\n",
"text": "Hello! I don’t know how correctly I formulated the question. I need to execute a query on both the values of the collection and the values of the referenced objects.\nThe original collection looks like this:When executing a query that meets the condition district == \"Texas\" , I need to get the following result:Under this condition: kind == \"developer\" , get the following result:And for a query that satisfies the condition: district == \"Texas\" && kind == \"developer\" , get the result:The query should be executed using mongoose inside the express route, and should be universal, processing a different set of request parameters:I am learning MongoDB and aggregation , but I don’t know so deeply all its functions. Please tell me how to correctly execute such a request in the traditional way? I will be very grateful!I also put the collection on the MongoDB playground in case someone is more comfortable:\nmongoplayground",
"username": "Narus_N_A"
},
{
"code": "",
"text": "Hi @Narus_N_A, It looks like a simple query you would need to apply. The equality query is pretty straightforward and can be solved using a simple $project stage and filtering the “nested” documents in houses array using $filter operator.Do check the examples of $filter operator and you would get an idea of how to accomplish it. Also, do remember to apply a $match stage as first aggregation step, so that you can limit the documents you want.If you unable to solve it using the examples, do reach out again…! ",
"username": "shrey_batra"
},
{
"code": "",
"text": "Thank you! I used the given methods and the problem was solved.",
"username": "Narus_N_A"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to query on collection values and referenced documents values? | 2020-12-29T00:47:14.397Z | How to query on collection values and referenced documents values? | 2,582 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Hello!\nI would like to create the full text search index with dotnet core driver using Atlas API.\nIs there a sample code which I can follow?Thanks,\nSupriya",
"username": "Supriya_Bansal"
},
{
"code": "",
"text": "Hi @Supriya_Bansal,You should be able to use any language with our Rest Api create FTS indexes.The MongoDB drivers cannot create those indexes directly but our REST api can.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you @Pavel_Duchovny. That link was helpful.",
"username": "Supriya_Bansal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Full text search index using API | 2020-12-28T16:18:27.272Z | Full text search index using API | 2,039 |
null | [
"performance"
] | [
{
"code": "",
"text": "Doing text search with indexed fields is very fast, < 1second. Just removing $project and adding $count stage causes timeout. How else can we do this? We want to display the total count to the user.",
"username": "Fred_Kufner"
},
{
"code": "",
"text": "Sounds weird. Are you sure this is just Atlas? (Have you tried it on a local machine?)\nCan you show some code?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "What I suspect is that the pipeline with $project can be optimized and the first set of documents can be returned to the application in the first cursor without processing all documents that matches. With $count, it is a different story and matching documents must be processed to obtain the count.",
"username": "steevej"
},
{
"code": "pipeline = [{\n$search: {\n index: 'atlas_compound',\n text: {\n path: [\n 'name',\n 'description',\n 'keywords'\n ],\n query: 'blue',\n fuzzy: {\n maxEdits: 2,\n prefixLength: 0,\n maxExpansions: 50\n }\n }\n}},\n {$count: 'count'}\n]\n",
"text": "Thanks Jack!,I get the same behavior from compass, mongo shell python using pymongo. All of the fields in path are indexed. My collection is ~225K documents, avg. size 5.8KB.Here is pipeline from compass:",
"username": "Fred_Kufner"
},
{
"code": "",
"text": "Hi Steeve,I have no $project. Just a $search and $count.",
"username": "Fred_Kufner"
},
{
"code": "",
"text": "Do you have a schema?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Sorry, I must have misreadJust removing $project and adding $count stage causes timeout.to mean that you had a fast pipeline with a $project that is now timing out after replacing $project with $count.I often do that when working on a counting or grouping pipeline. I develop with $project to see if I only consider the appropriate documents and I replace $project with $group or $count only at the end.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Jack. Atlas search is not available on local install. I did notice that removing fuzzy options reduce the time from about 15secs to 5! Not a solution for me, but it is curious.",
"username": "Fred_Kufner"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas search is very fast, but getting count times out | 2020-12-28T18:34:03.450Z | Atlas search is very fast, but getting count times out | 3,223
null | [
"node-js",
"connecting",
"serverless"
] | [
{
"code": "const {MongoClient} = require('mongodb');\nconst uri = \"mongodb+srv://MYCLUSTER/test?retryWrites=true&w=majority&authSource=%24external&authMechanism=MONGODB-AWS\";\n\nconst client = new MongoClient(uri,{ useUnifiedTopology: true });\nmodule.exports.handler = async (event, context) => {\n async function listDatabases(client){\n databasesList = await client.db().admin().listDatabases();\n console.log(\"Databases:\");\n databasesList.databases.forEach(db => console.log(` - ${db.name}`));\n };\n try {\n await client.connect();\n await listDatabases(client);\n } catch (e) {\n console.error(e);\n } finally {\n await client.close();\n }\n};\n",
"text": "Hi!I’m trying to connect to MongoDB via AWS Lambda.\nI cannot find enough documentation to set this up.This is what I’ve done.In mongoDB atlas:\nin Authorize AWS IAM RoleNow what do I use in my lambda function ?That video ( Using AWS IAM Authentication with MongoDB 4.4 in Atlas to Build Modern Secure Applications - YouTube ) gives some explanations, but not enough unfortunately.Appreciate any help or any links with more practical documentation.Cheers! Fred",
"username": "Fred_F"
},
{
"code": "",
"text": "Hi @Fred_F,Welcome to MongoDB community.When you use IAM to connect you still need to create a database user on atlas side associated with the ARN . Later you need to specify its key and secret as user and password for lambda conn string.Don’t forget you still need to whitelist the Atlas access list , usually via vpc peering lambda vpc to Atlas.Learn how to leverage AWS Lambda caching capabilities and improve query performance by re-using database connections to MongoDB Atlas in your Lambda function code.Best\nPavelRead this guide as well",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks Pavel! That was really useful. Fred",
"username": "Fred_F"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Access MongoDB via AWS Lambda | 2020-12-24T01:47:06.903Z | Access MongoDB via AWS Lambda | 8,260 |
null | [
"mongoose-odm"
] | [
{
"code": "let updateArr = [];\n\ndata.map((position) => {\n position.quantities.map((quantity) => {\n updateArr.push({\n updateOne: {\n filter: {\n _id: mongoose.Types.ObjectId(position.position),\n \"quantities._id\": mongoose.Types.ObjectId(quantity._id),\n },\n update: {\n $set: {\n \"quantity.$.administratorAimedPrice\":\n quantity.administratorAimedPrice,\n },\n },\n },\n });\n });\n});\n\nconsole.log(updateArr);\n\nawait Position.bulkWrite(updateArr);\n[ \n { updateOne: { filter: [Object], update: [Object] } },\n { updateOne: { filter: [Object], update: [Object] } },\n { updateOne: { filter: [Object], update: [Object] } },\n { updateOne: { filter: [Object], update: [Object] } },\n { updateOne: { filter: [Object], update: [Object] } },\n { updateOne: { filter: [Object], update: [Object] } } \n]\nPOS TypeError: Update document requires atomic operators at OrderedBulkOperation.raw",
"text": "Hi everyone, I’m trying to update multiple documents in collection by programmatically creating an update array for bulkWrite operation with mongoose.This is how the array looks like:What i’m getting back from mongoose:POS TypeError: Update document requires atomic operators at OrderedBulkOperation.rawIf there is another way to update multiple documents, with different values by targeting different IDs, please share.Thank you in advance.",
"username": "semperlabs"
},
{
"code": "",
"text": "Please close, it’s resolved.",
"username": "semperlabs"
},
{
"code": "",
"text": "Hi @semperlabs,Are you able to share the solution to your issue to help others who encounter a similar problem?Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie,\nIt was just a typo, I had a path “quantities” in the model but I was trying to update “quantity”. It was a long coding sprint so I didn’t see it ",
"username": "semperlabs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to create mongodb bulkWrite update array programmatically? | 2020-12-24T10:31:42.327Z | How to create mongodb bulkWrite update array programmatically? | 4,978 |
null | [
"database-tools"
] | [
{
"code": "2020-12-24T03:10:29.233+0000\t714 objects found 2020-12-24T03:18:36.103+0000\t488211 objects found 2020-12-24T03:20:40.273+0000\t532560 objects found 2020-12-24T03:20:40.284+0000\t0 objects found 2020-12-24T03:20:40.310+0000\t0 objects found 2020-12-24T03:20:42.882+0000\t14688 objects found 2020-12-24T03:20:42.898+0000\tunable to dump document 1: error converting BSON to extended JSON: conversion of BSON value 'failed' of type 'bson.Symbol' not supported 2020-12-24T03:20:42.898+0000\t0 objects found 2020-12-24T03:20:42.898+0000\terror converting BSON to extended JSON: conversion of BSON value 'failed' of type 'bson.Symbol' not supported 2020-12-24T03:20:45.972+0000\t33180 objects found 2020-12-24T03:20:48.615+0000\t32150 objects found 2020-12-24T03:20:52.841+0000\t25550 objects found 2020-12-24T03:20:52.854+0000\t22 objects found 2020-12-24T03:20:55.092+0000\t13946 objects found 2020-12-24T03:21:19.045+0000\t22996 objects found 2020-12-24T03:21:19.075+0000\t0 objects found 2020-12-24T03:21:22.594+0000\t153814 objects found 2020-12-24T03:21:40.923+0000\t42233 objects found 2020-12-24T03:21:40.943+0000\t312 objects found 2020-12-24T03:21:41.331+0000\t5912 objects found 2020-12-24T03:21:43.303+0000\t43433 objects found ",
"text": "I have some bson files and use bsondump to convert them into json files. But I met some errors in the converting phase. Is that the bson files broke? Can someone tell me how to solve this problem?2020-12-24T03:10:29.233+0000\t714 objects found 2020-12-24T03:18:36.103+0000\t488211 objects found 2020-12-24T03:20:40.273+0000\t532560 objects found 2020-12-24T03:20:40.284+0000\t0 objects found 2020-12-24T03:20:40.310+0000\t0 objects found 2020-12-24T03:20:42.882+0000\t14688 objects found 2020-12-24T03:20:42.898+0000\tunable to dump document 1: error converting BSON to extended JSON: conversion of BSON value 'failed' of type 'bson.Symbol' not supported 2020-12-24T03:20:42.898+0000\t0 objects found 2020-12-24T03:20:42.898+0000\terror converting BSON to extended JSON: conversion of BSON value 'failed' of type 'bson.Symbol' not supported 2020-12-24T03:20:45.972+0000\t33180 objects found 2020-12-24T03:20:48.615+0000\t32150 objects found 2020-12-24T03:20:52.841+0000\t25550 objects found 2020-12-24T03:20:52.854+0000\t22 objects found 2020-12-24T03:20:55.092+0000\t13946 objects found 2020-12-24T03:21:19.045+0000\t22996 objects found 2020-12-24T03:21:19.075+0000\t0 objects found 2020-12-24T03:21:22.594+0000\t153814 objects found 2020-12-24T03:21:40.923+0000\t42233 objects found 2020-12-24T03:21:40.943+0000\t312 objects found 2020-12-24T03:21:41.331+0000\t5912 objects found 2020-12-24T03:21:43.303+0000\t43433 objects found ",
"username": "Icarus_Wu"
},
{
"code": "bsondump --version",
"text": "Welcome to the MongoDB forum @Icarus_Wu!conversion of BSON value ‘failed’ of type ‘bson.Symbol’ not supportedThe BSON Symbol type has been deprecated since 2011, so you may have trouble finding support in recent versions of tools and drivers.Can you confirm your O/S version and the output of bsondump --version?Do you know what tool or driver is creating the BSON files?Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "bsondump version: built-without-version-string git version: built-without-git-spec Go version: go1.10.1 os: linux arch: amd64 compiler: gc OpenSSL version: OpenSSL 1.1.0g 2 Nov 2017 ",
"text": "@Stennie_X Hello Sir!\nMy boss gave me some bson files but not sure where them from…\nAnd I just use my ubuntu machine ‘apt install mongo-tools’ to install the mongo-tools. Sorry that I haven’t touched the mongodb ever before, so I’m unfamiliar with mongodb. I checked the bsondump version and get this\nbsondump version: built-without-version-string git version: built-without-git-spec Go version: go1.10.1 os: linux arch: amd64 compiler: gc OpenSSL version: OpenSSL 1.1.0g 2 Nov 2017 \nI just wanna check one thing. Will I get the converted json files if I failed in the converting phase? In other words, will mongo tools ignore the converting errors and return me a json file from a bson file?\nThank u sir.",
"username": "Icarus_Wu"
},
{
"code": "bsondump",
"text": "Will I get the converted json files if I failed in the converting phase? In other words, will mongo tools ignore the converting errors and return me a json file from a bson file?Hi @Icarus_Wu,I expect bsondump would skip any documents that cannot be fully decoded rather than doing an incomplete conversion. For more control over the conversion, I would look into BSON support using one of the officially supported drivers.Would you be able to share a small BSON test file without including any confidential data?I would also ask your boss for more information on the origin of the files, as perhaps the tool/driver that is creating these files might provide a hint at how to read them.Regards,\nStennie",
"username": "Stennie_X"
}
] | Convert bson to json failed: type ‘bson.Symbol’ not supported | 2020-12-24T03:38:55.728Z | Convert bson to json failed: type ‘bson.Symbol’ not supported | 4,027 |
null | [
"change-streams"
] | [
{
"code": "",
"text": "Hi All,How do we get document “deltas” (changed data) with replace operation from change data streams on a collection.Thanks in advance",
"username": "Sundar_Koduru"
},
{
"code": "",
"text": "Hi @Sundar_Koduru,You can check the docs of change events here - https://docs.mongodb.com/manual/reference/change-events/According to this, the “fullDocument” contains the changes for an “insert” and “replace” operation. I don’t think you would need to enable the full Document on the change stream for these operations, only for updates. Hence, for replace operations, this is where you will get the changed document by default.If you want the delta of changes from original to newer, you would need to implement this yourself somehow, as I don’t think Mongo can provide the delta of an “replace” operation. It just takes the newer doc fully and replaces it, without seeing what changed. You might want to store the previous document in some cache and check the change urself for replaces.",
"username": "shrey_batra"
}
] | How to get changed data with replace operation in change data streams | 2020-12-28T20:19:44.746Z | How to get changed data with replace operation in change data streams | 3,177 |
null | [
"queries"
] | [
{
"code": "myquery = {\"Colour\": \"Red\"}\nx = collection.delete_many(myquery) \n_id: \"123\"\nColour: \"red\"\nShape: \"square\"\nName: \"xyz123\"\nLine: \"bold\" \n _id: \"456\"\nHeight: \"6\"\nWidth: \"6\"\nName: \"xyz123\"\nArea: \"36\" \n",
"text": "Apologies, lots of similar questions have been answered, but I still can’t resolve my issue from those. I have two collections shapes and geometry. I have successfully queried shapes to remove all red shapesBut I want to use the result of this query on the geometry collection that shares the common field “Name”, so that I delete any document from geometry that has a Name that was found in the original query (red shapes)example from shapesexample from geometryMany thanks",
"username": "Tim_Shurlock"
},
{
"code": "to_delete_shapes = coll.find({\"Colour\": \"red\"}, {\"Name\":1})\n.delete_many({\"Name\": {\"$in\": array_of_names}})\n",
"text": "Hi @Tim_Shurlock, this does not have an easy solution as I don’t think that delete_many supports aggregations. There are 2 approaches coming to my mind, which you can use -First, query on Shapes collection and retrieve the list of names, you “are going to delete”.Then you can run 1 query (per collection) to delete the documents from both the collections using the names list and $in operator. -Do remember that you are fetching the list of names in memory, and if they are too many documents matching the Color query, you might wanna paginate or bucket. (~1000 documents)The other, more performant way is to add extra information in your Geometry collection’s document. You can easily add another key “Colour” to the geometry document also, so that you can run the delete_many query directly on both the collections.Also, just want to ask, if you have a 1-1 relationship with Shape and Geometry documents, why not embedded one into the other? That way, your queries will become much less complicated.Thanks…!",
"username": "shrey_batra"
}
] | Query result on one collection informing 2nd query on different collection | 2020-12-29T00:47:07.071Z | Query result on one collection informing 2nd query on different collection | 2,698 |
null | [
"atlas-triggers"
] | [
{
"code": "",
"text": "I have a trigger set up to fire on insert into the collection. Out of nowhere, everytime I insert into the collection, all of my triggers go into the suspended state.\nThe error message appears for every trigger I have enabled:\n(ChangeStreamHistoryLost) Resume of change stream was not possible, as the resume point may no longer be in the oplog.\nI restarted the triggers and reinserted the data. I also paused and resumed my cluster but still it happens every time now. How can I fix this?Thanks",
"username": "Boss_Man"
},
{
"code": "",
"text": "Did you try to restart it with unchecking restarting with the resume_token? Just note that if you do not use a resume token, the trigger begins listening for new events but will not fire for any events that occurred while it was suspended.",
"username": "Sumedha_Mehta1"
}
] | Realm Trigger is Repeatedly Suspended | 2020-12-22T09:28:21.314Z | Realm Trigger is Repeatedly Suspended | 2,849 |
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "How to validate if user is authenticated at the server side, like PHP?From the front end we could send all data needed, like userId, authId, providerId/providerType, accessToken, refreshToken, but how to check on server side that the user is authenticated?Tried to find an API here but no success Atlas App Services APIA real-world use case: I tried to use Realm Functions to upload large images (10Mb) but given its 30s input timeout Realm functions 30s input timeout I had to resort to a web server to handle the uploads, so now I better validate if the user is authenticated before proceeding with the upload and MongoDb update. Otherwise my API will just allow anyone with a leaked AuthId to upload anytime, anywhere.Thanks!",
"username": "andrefelipe"
},
{
"code": "",
"text": "Hey Andre - this thread will provide more context, but tldr - this feature has been prioritized for near-term work: Verify Access Token server side - #6 by Sumedha_Mehta1",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Thanks. Will keep an eye into that.",
"username": "andrefelipe"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | [Realm Auth] Validate if user is authenticated from server-side (PHP) | 2020-12-23T08:51:06.294Z | [Realm Auth] Validate if user is authenticated from server-side (PHP) | 3,321 |
null | [
"atlas-device-sync",
"app-services-user-auth"
] | [
{
"code": "",
"text": "Hello everyone ,\ni’m trying to work with user authentification for ios mobile app but when i try to log in , i get this error on the REALM UI:there are no mongodb/atlas services with sync enabledcan someone help me please",
"username": "akram_chorfi"
},
{
"code": "",
"text": "Hi Akram - this is most likely due to you trying to open a sync realm without having sync enabled on the Admin UI.If you only want to work with authentication, you can use the following snippets: https://docs.mongodb.com/realm/ios/authenticate#email-password-authentication",
"username": "Sumedha_Mehta1"
}
] | MongoDB realm user authentification | 2020-12-25T19:03:56.768Z | MongoDB realm user authentification | 1,735 |
null | [
"backup",
"field-encryption"
] | [
{
"code": "",
"text": "It’s my understanding from the keynote presentation: Field Level Encryption in MongoDB 4.2 (MongoDB World 2019 Keynote, part 4) - YouTube that it would be possible to create a Data Encryption Key (or DEK) per user. The desire with this was that if the user was ever deleted, we could also simply delete the DEK for that user and render the data inaccessible from backups.My question is, given the fact that the DEK is stored encrypted (via the KEK from the KMS) in the MongoDB database as another collection, wouldn’t backups for that MongoDB Atlas cluster also include the encrypted DEK’s? Deleting a DEK in this case would only render the active copy of the data inaccessible, and one could consider the scenario where a sufficiently motivated DBA could restore to a previous point in time and recover the deleted DEK, therefore revealing the user data again?Are DEK’s backed up differently versus data for Atlas clusters?",
"username": "Wyatt_Johnson"
},
{
"code": "",
"text": "Hi Wyatt. Great question! What’s important to understand, as you point out, is that it’s not the field level encryption (FLE) data encryption keys (DEKs) that reside in the database, it’s the encrypted DEKs that are stored. In fact, at no point do either plaintext field data or raw/plaintext field keys get revealed to the database (and by extension, the DBA, the VM owner, or the infrastructure/cloud provider) for data encrypted using FLE. Which means that even with a backup, a DBA would have to have both the full snapshot of the database containing the deleted key and access to that application user’s specific master key, or more likely, IAM access to make KEK decrypt requests via KMS.If the concern is about DBAs or some other party that does have access to both the decrypted keys and the backups, there are a couple of possible solutions depending on your threat model. If the primary motivation is just to provably ensure that deleted plaintext user records remain deleted no matter what, then it becomes a simple timing and separation of concerns strategy, and the most straight-forward solution is to move the keyvault collection to a different database or cluster completely, configured with a much shorter backup retention; FLE does not assume your encrypted keyvault collection is co-resident with your active cluster or has the same access controls and backup history, just that the client can, when needed, make an authenticated connection to that keyvault database. Important to note though that with a shorter backup cycle, in the event of some catastrophic data corruption (malicious, intentional, or accidental), all keys for that db (and therefore all encrypted data) are only as recoverable to the point in time as the shorter keyvault backup would restore.Note also if you are using KMS, IAM policies can be granted that enforce IP allow-lists (e.g., initially scoped strictly to your production app server VLAN) and even potentially set to require MFA for decrypt operations on a per-IAM human user/role basis, and CloudTrail triggers can be set to alert to non-common events.If the concern is about potential insider attacks from database administrators, then it may make sense to consider segregating responsibilities such that DBAs have no access to production IAM KMS accounts (or, alternatively a secrets manager like Hashicorp Vault if self-managing master keys) and thus no ability to recover any plaintext FLE-protected data.It’s also possible to use multiple master keys, though we wouldn’t recommend this on a per-application user/per-document basis, where more than a small number were used.Lastly, I should point out that there’s nothing specific to Atlas in all of the above - Atlas is oblivious as to whether or not FLE has been enabled, and in fact, short of manually scanning for use of BinData subtype 6 records or the presence of server-side FLE-specific json schema validation, I’m not sure how one could even determine that FLE is running. All of that to say that, no, there’s no baseline difference in how Altas handles FLE-enabled cluster backups. That said, one major advantage of running with Atlas (besides all the other benefits of a fully managed global service) is that you get automatic transparent encryption, e.g., full access to the cryptd package for mongocryptd.I hope that helps. Feel free to reach out any time to me or my colleagues here if you have any questions.-Kennp.s. 
Some other resources that might be useful for you:Atlas Security whitepaper (which covers some of the internals of FLE keys):930.07 KBOfficial FLE docs (updated regularly):The “MedCo” step by step tutorial for FLE, with full examples in 5 languages:A talk I gave at .Live this year on the FLE architecture:Guide to MongoDB Client-Side Field Level Encryption:A (very unofficial) FLE Sandbox quick-start for lots of different platforms & languages; also includes guidance on scoping KMS IAM policies:sample code for the client-side field level encryption project - GitHub - mongodb-labs/field-level-encryption-sandbox: sample code for the client-side field level encryption projectRecent post from DevHub, a short tutorial on using FLE with Go (golang)\nhttps://www.mongodb.com/how-to/field-level-encryption-fle-mongodb-golang",
"username": "Kenneth_White"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Client-Side Field Level Encryption DEK's and Backups | 2020-12-27T22:17:13.093Z | Client-Side Field Level Encryption DEK’s and Backups | 3,853 |
null | [
"app-services-user-auth"
] | [
{
"code": "linkCredentials",
"text": "Assuming I used the linkCredentials method to link credentials to a user, is there any way whatsoever to “unlink” those credentials from the user, so as to be able to re-link them to a different user?",
"username": "Peter_Stakoun"
},
{
"code": "",
"text": "There is not at the moment - the workaround here would be to delete that user and re-create the user (either by asking them to do so, or doing it as an admin), then link the credentials to the other account when they are used to log in.If you feel like this feature would be valuable, feel free to upvote this idea or create a new suggestion.",
"username": "Sumedha_Mehta1"
},
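A minimal sketch of the re-link step described above, using the Realm Web/Node SDK. The app ID, provider types, and credential values are placeholders, not details from this thread:

```javascript
import * as Realm from "realm-web";

const app = new Realm.App({ id: "<your-realm-app-id>" });

// After the old user has been deleted (by the user or an admin), log in with
// one set of credentials and link the other identity to that same account.
const user = await app.logIn(Realm.Credentials.emailPassword("<email>", "<password>"));
await user.linkCredentials(Realm.Credentials.google("<google-auth-code>"));
```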
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unlinking credentials from a sync user | 2020-12-28T01:01:45.803Z | Unlinking credentials from a sync user | 1,499 |
null | [
"ops-manager"
] | [
{
"code": "",
"text": "Backup resync never completesIam trying out to backup resync one of my application shard in production using Ops manager, but it is restarting in the middle with a rename collection exception and the resync never complete from 3 weeks. The collection in the source database is renamed by application process $out aggregation. The Mongo support says the $out should be stopped in order to complete the initial sync. But the app team can’t stop the process at this stage. The initial sync is not completing how to handle this… Any help would be appreciated…",
"username": "Sivaram_Prasad_Chenn"
},
{
"code": "",
"text": "I thought of trying out with blacklisting the namespaces which are getting renamed during this, but there found to be a lot more required to be added to it. Just stopped the $out on the shards and completed this",
"username": "Sivaram_Prasad_Chenn"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Backup initial sync | 2020-11-19T20:01:02.543Z | Backup initial sync | 1,603 |
null | [
"configuration"
] | [
{
"code": "2020-12-16T02:26:11.415+00:00",
"text": "Hello, new to Atlas.At the moment data is written to my databases as 2020-12-16T02:26:11.415+00:00. I don’t understand how to change the default settings so information is written in the DB according to my timezone. That time stamp is showing that it was written 10 hours ahead of me. Even though I create that entry, a few minutes ago.",
"username": "farah"
},
{
"code": "",
"text": "Hi @farahWelcome to MongoDB community.Are you referring to data in the logs of Atlas instances or the data you write to your collections?If you use values like $$NOW it uses the server set timezone which is always UTC for Atlas. The logs are also UTC.If you wish to set your timezone generate the date on application side.Best\nPavel",
"username": "Pavel_Duchovny"
},
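As a small illustration of generating the date on the application side, a Node.js sketch; the field name, locale, and timezone here are just examples:

```javascript
// MongoDB stores Date values as UTC; convert only when displaying,
// not by changing what is stored.
const now = new Date();
await collection.insertOne({ createdAt: now }); // `collection` is an existing driver collection handle

// Render in the user's timezone when displaying:
console.log(now.toLocaleString("en-AU", { timeZone: "Australia/Sydney" }));
```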
{
"code": "",
"text": "Thank you for the response. I understand now. I was looking to change the Atlas display. I’ve updated my code base to write to Atlas in my timezone instead.",
"username": "farah"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Change time zone in MongoAtlas | 2020-12-16T02:37:08.765Z | Change time zone in MongoAtlas | 15,987 |
null | [
"swift",
"atlas-device-sync"
] | [
{
"code": "let config = SyncUser.current?.configuration(realmURL: Constants.REALM_URL, fullSynchronization:true )\nvar realm = try! Realm(configuration: config)\n",
"text": "hello everyone,i’m new with the development of ios app , i’m working with realm database , now i want to work with realm cloud instead of using realm file , can someone help me to do that ?i did use this :but SyncUser is not foundthanks for any help",
"username": "akram_chorfi"
},
{
"code": "",
"text": "Akram, for starters I would go through the Task Tracker App tutorial on MongoDB RealmI also wrote a Medium article that shows how to write a simple chat program on top of Realm for iOS.I have been a Realm Cloud developer since early 2018, shortly after the Realm company introduced its Realm Cloud upgrade to its native…\nReading time: 12 min read\nI hope this was usefulRichard",
"username": "Richard_Krueger"
},
{
"code": " if let user = self.app.currentUser {\n let uid = user.id\n \n // open user realm\n Realm.asyncOpen(configuration: user.configuration(partitionValue: uid),\n callback: { result in\n \n switch result {\n case .success(let realm):\n self.userRealm = realm\n \n case .failure(let error):\n fatalError(\"Failed to open realm: \\(error)\")\n }\n \n })\n \n }\n",
"text": "Akram,Just a cursory observation, but you are opening your realm as if it were a local realm. If you are using realm cloud with a MongoDB Realm app you should probably use asyncOpen instead of the synchronous Realm open try! Realm(configuration: config). It should look something like thisRichard Krueger",
"username": "Richard_Krueger"
}
] | Trying to add sync to ios app but SyncUser is not found | 2020-12-24T12:22:23.233Z | Trying to add sync to ios app but SyncUser is not found | 1,831 |
null | [
"aggregation"
] | [
{
"code": "{\n \"accountID\" : \"01\",\n \"insertedDate\" : ISODate(\"2020-12-23T00:00:00Z\"),\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n}\n\n{\n \"accountID\" : \"01\",\n \"insertedDate\" : ISODate(\"2020-12-22T00:00:00Z\"),\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n}\n\n{\n \"accountID\" : \"01\",\n \"insertedDate\" : ISODate(\"2020-12-21T00:00:00Z\"),\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n}\n\n{\n \"accountID\" : \"01\",\n \"insertedDate\" : ISODate(\"2020-12-20T00:00:00Z\"),\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n}\n\n{\n \"accountID\" : \"01\",\n \"insertedDate\" : ISODate(\"2020-12-19T00:00:00Z\"),\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n}\n\n{\n \"accountID\" : \"01\",\n \"insertedDate\" : ISODate(\"2020-12-18T00:00:00Z\"),\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n}\n\n{\n \"accountID\" : \"01\",\n \"insertedDate\" : ISODate(\"2020-12-17T00:00:00Z\"),\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n}\n\n{\n \"accountID\" : \"01\",\n \"insertedDate\" : ISODate(\"2020-12-16T00:00:00Z\"),\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n}\n\n{\n \"accountID\" : \"02\",\n \"insertedDate\" : ISODate(\"2020-11-19T00:00:00Z\"),\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n}\n\n{\n \"accountID\" : \"02\",\n \"insertedDate\" : ISODate(\"2020-11-18T00:00:00Z\"),\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n}\n\n{\n \"accountID\" : \"02\",\n \"insertedDate\" : ISODate(\"2020-11-10T00:00:00Z\"),\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n}\n\n{\n \"accountID\" : \"02\",\n \"insertedDate\" : ISODate(\"2020-11-02T00:00:00Z\"),\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n}\n\n{\n \"accountID\" : \"02\",\n \"insertedDate\" : ISODate(\"2020-07-19T00:00:00Z\"),\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n}\n",
"text": "**Hello,I have a collection and in that collection I want to get the accountID and it’s count of documents which are inserted in last 30 days. The date column is insertedDate.For Example there is a accountID = 01 and it have 8 document from which 3 from more then 30 days of timeperiod and 5 are in 30 days so it show like that{\naccountID:01,\ntotalCount:5\n}there is another document and from last 30 days only 4 documents were inserted so it’s output should be{\naccountID:02\ntotalCount:4\n}here is the sample documentNote that the Date field is insertedDate.\nThanks in advance.",
"username": "Nabeel_Raza"
},
{
"code": "{ $project: { _id: 0,\n accountID:1,\n insertedDate:1,\n PreviousDate: { $subtract: [ \"$insertedDate\", (1000*60*60*24*30) ] },\n \n }\n },\n {\n $group: {\n _id: {\n accountID : \"$accountID\" },\n FDate: { $first : \"$insertedDate\" },\n LDate: { $first : \"$PreviousDate\" },\n count: { $sum: 1 } \n }\n \n }\n \n ])\n/* 1 */\n{\n \"_id\" : {\n \"accountID\" : \"02\"\n },\n \"FDate\" : ISODate(\"2020-11-19T00:00:00.000Z\"),\n \"LDate\" : ISODate(\"2020-10-20T00:00:00.000Z\"),\n \"count\" : 5.0\n}\n\n/* 2 */\n{\n \"_id\" : {\n \"accountID\" : \"01\"\n },\n \"FDate\" : ISODate(\"2020-12-23T00:00:00.000Z\"),\n \"LDate\" : ISODate(\"2020-11-23T00:00:00.000Z\"),\n \"count\" : 8.0\n}\n",
"text": "Query:\ndb.TT.aggregate( [getting output:how can I achieve the target. Can you help",
"username": "Nabeel_Raza"
},
{
"code": "$cond$sumdb.test.aggregate([\n { \n $addFields: { \n PreviousDate: { $subtract: [ ISODate(), (1000*60*60*24*30) ] } \n } \n },\n { \n $group: { \n _id: \"$accountID\", \n count: { \n $sum: { $cond: [ { $gte: [ \"$insertedDate\", \"$PreviousDate\" ] }, 1, 0 ] } \n } \n } \n }\n])",
"text": "Hi @Nabeel_Raza, you can try this aggregation. Note the usage of the $cond within the group stage’s $sum operator:",
"username": "Prasad_Saya"
},
{
"code": "/* 1 */\n{\n \"_id\" : \"02\",\n \"count\" : 0.0\n}\n\n/* 2 */\n{\n \"_id\" : \"01\",\n \"count\" : 8.0\n}",
"text": "Thanks for the reply @Prasad_Saya But i am getting this output which isn’t a valid output",
"username": "Nabeel_Raza"
},
{
"code": "{\n \"accountID\" : \"01\",\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n\t\"count\": 8\n}\n\n{\n \"accountID\" : \"02\",\n \"remarks\" : \"One\",\n \"typeKey\" : \"A\",\n\t\"count\": 4\n}",
"text": "Expected Output is:",
"username": "Nabeel_Raza"
},
{
"code": "instertedDateaccountID: '01'",
"text": "I want to get the accountID and it’s count of documents which are inserted in last 30 days.I see that the instertedDate for accountID: '01' within the last 30 days includes all 8 documents.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "But for accountID:02 the count for last 30 days is 4",
"username": "Nabeel_Raza"
},
{
"code": "\n_id : { accountID:$accountID,typeKey:$typeKey,remarks:$remarks }\n",
"text": "If you want a count that accountID, remarks and typeKey specific you must add remarks and typeKey into the _id of $group like\n\n_id : { accountID:$accountID,typeKey:$typeKey,remarks:$remarks }\n",
"username": "steevej"
},
{
"code": "",
"text": "Those documents are dated 2020-11-19 or older, more that 30 days.",
"username": "steevej"
},
{
"code": "",
"text": "But in 30 days the count should be shown which is 4",
"username": "Nabeel_Raza"
},
{
"code": "accountID: '02'instertedDate2020-11-18",
"text": "@Nabeel_Raza, for accountID: '02' all the instertedDate values are before 2020-11-18 - and that is more than 30 days - so the count is zero.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Yeah i know but the count for documents in 30 days for account 02 is 4.\nWe should exclude all those document which are more then 30 days from the first document.",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "We should exclude all those document which are more then 30 days from the first document.How do you determine which one is the first document?",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "the first document for each accountID",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "30days=insertedDate-userMinIsertedDate(the oldest)?",
"username": "Takis"
},
{
"code": "accountID",
"text": "What is the criteria to determine which is the first document for each of the accountIDs? How do you know this “first” document?",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "We can also sort the document on insertedDate field for each accountID.",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "No, this is not correct logic @Takis.Date = inserted data(1st one) - 30 days",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "The first entered document for each accountID is the first document for each accountID",
"username": "Nabeel_Raza"
},
{
"code": "{\n \"aggregate\": \"testcollA\",\n \"pipeline\": [\n {\n \"$lookup\": {\n \"from\": \"testcollA\",\n \"let\": {\n \"acid\": \"$accountID\",\n \"d\": \"$insertedDate\"\n },\n \"pipeline\": [\n {\n \"$match\": {\n \"$expr\": {\n \"$eq\": [\n \"$accountID\",\n \"$$acid\"\n ]\n }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$accountID\",\n \"userNewestPostedDate\": {\n \"$max\": \"$insertedDate\"\n }\n }\n },\n {\n \"$addFields\": {\n \"accountID\": \"$_id\"\n }\n },\n {\n \"$project\": {\n \"_id\": 0\n }\n },\n {\n \"$project\": {\n \"userNewestPostedDate\": 1\n }\n }\n ],\n \"as\": \"joined\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$joined\"\n }\n },\n {\n \"$replaceRoot\": {\n \"newRoot\": {\n \"$mergeObjects\": [\n \"$joined\",\n \"$$ROOT\"\n ]\n }\n }\n },\n {\n \"$unset\": [\n \"joined\"\n ]\n },\n {\n \"$group\": {\n \"_id\": \"$accountID\",\n \"sum\": {\n \"$sum\": {\n \"$cond\": [\n {\n \"$lte\": [\n {\n \"$subtract\": [\n \"$userNewestPostedDate\",\n \"$insertedDate\"\n ]\n },\n 2592000000\n ]\n },\n 1,\n 0\n ]\n }\n }\n }\n },\n {\n \"$addFields\": {\n \"accountID\": \"$_id\"\n }\n },\n {\n \"$project\": {\n \"_id\": 0\n }\n }\n ],\n \"cursor\": {},\n \"maxTimeMS\": 1200000\n}\n",
"text": "For the accountID=02\nThe oldest = ISODate(“2020-07-19T00:00:00Z”)\nThe newest= ISODate(“2020-11-19T00:00:00Z”)we want to keep\noldest + max30 days?\nor\nnewest - max30days? (i think you want this but not sure)Yesterday i sendedcurrent_date - inserted_date <= 30 days (first query + merge on other collection)\ninserted_date-olderst_of_user <= 30days (second query + merge on other collection)This one is\nnewest_of_user-insertedDate <=30 days (this query but no merge)Its 4 for user 02 and 8 for user=01 , i think its ok but not sure if this you need",
"username": "Takis"
}
] | How to get previous document of last 30 (custom) days using mongodb query | 2020-12-24T11:01:53.778Z | How to get previous document of last 30 (custom) days using mongodb query | 26,926 |
null | [
"field-encryption"
] | [
{
"code": "",
"text": "Recently I came across an error while trying to implement partial search on an encrypted(Automatic CSFLE) filed.For example, I have an email address “[email protected]” in the database. When the user searches for “samp”, I have to run a query to fetch the “[email protected]” from the database. Also, the Client Side Field Level Encryption is enabled in the email field. When I try to achieve this using $regex, I’m getting the following error“MongoError: Invalid match expression operator on encrypted field ‘name’”\nI have been googling for a while but still not see any good solution. Are there any solutions to this?.",
"username": "master_user"
},
{
"code": "",
"text": "Hi @master_user,Welcome to MongoDB community.Consider the limitations of FLE encryption, there is no server side to achieve this regex ability for encrypted fields:\nhttps://docs.mongodb.com/manual/reference/security-client-side-query-aggregation-support/#supported-query-operatorsI suggest to. Consider another schema design for your searches, for example store the initial part of the email in an uncrypted field (egm sample) and search on it.Best\nPavel",
"username": "Pavel_Duchovny"
},
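A rough sketch of the schema workaround suggested above: keep a low-sensitivity, unencrypted search field (for example a short prefix of the email) next to the encrypted field and run the partial match against that. Field, collection, and value names are illustrative only:

```javascript
// On write (through the CSFLE-enabled client, so `email` is encrypted automatically):
await users.insertOne({
  email: "[email protected]",     // encrypted per the CSFLE JSON schema
  emailPrefix: "sample"           // plaintext helper field, safe to index and search
});

// On search, query only the unencrypted helper field:
const matches = await users.find({ emailPrefix: /^samp/ }).toArray();
```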
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Partial search on an encrypted field (CSFLE) | 2020-12-24T17:09:23.673Z | Partial search on an encrypted field (CSFLE) | 4,631 |
null | [
"queries"
] | [
{
"code": "{\n \"UserId\" : 1,\n \"Books\" : [\n {\n \"Category\" : 1,\n \"BookInfos\" : [{Name:xxx,Auth:xxx}]\n }, \n {\n \"Category\" : 2,\n \"BookInfos\" : [{Name:xxx,Auth:xxx}],\n }\n ]\n}\n",
"text": "I want to update a nested array directly. I want push new object to nested array if Books with field ‘Category’=1 or object of Books not exists. Or if Books with ‘Category’=1 field exists then update BookInfos field in this object.I cant write right syntax, help plz",
"username": "grape_Ye"
},
{
"code": " \"newBook\": {\n \"Name\": \"xxx2\",\n \"Auth\": \"xxx2\",\n \"Category\": 2\n }\n\"q\": {\n \"UserId\": 1\n }\n{\n \"update\": \"testcoll\",\n \"updates\": [\n {\n \"q\": {\n \"UserId\": 1\n },\n \"u\": [\n {\n \"$addFields\": {\n \"Books\": {\n \"$let\": {\n \"vars\": {\n \"newBook\": {\n \"Name\": \"xxx2\",\n \"Auth\": \"xxx2\",\n \"Category\": 2\n }\n },\n \"in\": {\n \"$cond\": [\n {\n \"$not\": [\n {\n \"$ne\": [\n {\n \"$type\": \"$Books\"\n },\n \"missing\"\n ]\n }\n ]\n },\n [\n {\n \"Category\": \"$$newBook.Category\",\n \"BookInfos\": [\n {\n \"Name\": \"$$newBook.Name\",\n \"Auth\": \"$$newBook.Auth\"\n }\n ]\n }\n ],\n {\n \"$arrayElemAt\": [\n {\n \"$reduce\": {\n \"input\": \"$Books\",\n \"initialValue\": [\n [],\n 0,\n false\n ],\n \"in\": {\n \"$let\": {\n \"vars\": {\n \"booksIndexAdded\": \"$$value\",\n \"book\": \"$$this\"\n },\n \"in\": {\n \"$let\": {\n \"vars\": {\n \"books\": {\n \"$arrayElemAt\": [\n \"$$booksIndexAdded\",\n 0\n ]\n },\n \"index\": {\n \"$arrayElemAt\": [\n \"$$booksIndexAdded\",\n 1\n ]\n },\n \"added\": {\n \"$arrayElemAt\": [\n \"$$booksIndexAdded\",\n 2\n ]\n }\n },\n \"in\": {\n \"$cond\": [\n {\n \"$eq\": [\n \"$$book.Category\",\n \"$$newBook.Category\"\n ]\n },\n [\n {\n \"$concatArrays\": [\n \"$$books\",\n [\n {\n \"$mergeObjects\": [\n \"$$book\",\n {\n \"BookInfos\": {\n \"$concatArrays\": [\n \"$$book.BookInfos\",\n [\n {\n \"Name\": \"$$newBook.Name\",\n \"Auth\": \"$$newBook.Auth\"\n }\n ]\n ]\n }\n }\n ]\n }\n ]\n ]\n },\n {\n \"$add\": [\n \"$$index\",\n 1\n ]\n },\n true\n ],\n {\n \"$cond\": [\n {\n \"$and\": [\n {\n \"$eq\": [\n \"$$index\",\n {\n \"$subtract\": [\n {\n \"$size\": \"$Books\"\n },\n 1\n ]\n }\n ]\n },\n {\n \"$not\": [\n \"$$added\"\n ]\n }\n ]\n },\n [\n {\n \"$concatArrays\": [\n \"$$books\",\n [\n \"$$book\",\n {\n \"Category\": \"$$newBook.Category\",\n \"BookInfos\": [\n {\n \"Name\": \"$$newBook.Name\",\n \"Auth\": \"$$newBook.Auth\"\n }\n ]\n }\n ]\n ]\n },\n {\n \"$add\": [\n \"$$index\",\n 1\n ]\n },\n true\n ],\n [\n {\n \"$concatArrays\": [\n \"$$books\",\n [\n \"$$book\"\n ]\n ]\n },\n {\n \"$add\": [\n \"$$index\",\n 1\n ]\n },\n \"$$added\"\n ]\n ]\n }\n ]\n }\n }\n }\n }\n }\n }\n },\n 0\n ]\n }\n ]\n }\n }\n }\n }\n }\n ],\n \"upsert\": true,\n \"multi\": true\n }\n ]\n}\n",
"text": "HelloThe bellow does\n1)Add new book\nif no books array,or new category\n2)else new member on add bookinfo*uses pipeline update so need mongodb >=4.2Change those to the query below,use variables\nnewBook needs to have this structure for code to work,else change the code alsoNewBookUserIDWhat it does\nif books doesnt exists adds books have only the newbooks\nelse\nreduce on the books array,to updated books-array\nif category exists\nadd to bookinfo\nelse if i found the end and i dindt added yet(new category)\nadd new-book to books\nIt does it with one array readBefore update\nNew user\nNew category old user\nOld category old user\n",
"username": "Takis"
},
{
"code": "",
"text": "Hi @Takis and @grape_Ye,Pipeline updates is one of the options. However the provided pipeline us very complex.Why can’t u use array filters with upsert?Best\nPavel",
"username": "Pavel_Duchovny"
},
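For reference, a minimal sketch of the arrayFilters approach being suggested, covering the case where a Books element with the matching Category already exists (adding a brand-new Category, or creating the document, still needs a separate upsert or the pipeline update shown elsewhere in this thread):

```javascript
// Push a new BookInfos entry into the Books element whose Category is 1
db.testcoll.updateOne(
  { UserId: 1, "Books.Category": 1 },
  { $push: { "Books.$[b].BookInfos": { Name: "xxx2", Auth: "xxx2" } } },
  { arrayFilters: [ { "b.Category": 1 } ] }
)
```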
{
"code": "",
"text": "I know,if you can help sent a smaller query,update operators must be enough here.\nI am testing pipeline updates and i will try to make a function to generate the MQL code for nested updates,but its not done yet(like simplicity of arrayfilters but in pipeline)",
"username": "Takis"
},
{
"code": "{\n \"update\": \"testcoll\",\n \"updates\": [\n {\n \"q\": {\n \"UserId\": 1\n },\n \"u\": [\n {\n \"$addFields\": {\n \"Books\": {\n \"$let\": {\n \"vars\": {\n \"newBook\": {\n \"Name\": \"xxx2\",\n \"Auth\": \"xxx2\",\n \"Category\": 3\n }\n },\n \"in\": {\n \"$cond\": [\n {\n \"$not\": [\n {\n \"$ne\": [\n {\n \"$type\": \"$Books\"\n },\n \"missing\"\n ]\n }\n ]\n },\n [\n {\n \"Category\": \"$$newBook.Category\",\n \"BookInfos\": [\n {\n \"Name\": \"$$newBook.Name\",\n \"Auth\": \"$$newBook.Auth\"\n }\n ]\n }\n ],\n {\n \"$let\": {\n \"vars\": {\n \"sameCategoryBook\": {\n \"$arrayElemAt\": [\n {\n \"$filter\": {\n \"input\": \"$Books\",\n \"as\": \"book\",\n \"cond\": {\n \"$eq\": [\n \"$$newBook.Category\",\n \"$$book.Category\"\n ]\n }\n }\n },\n 0\n ]\n },\n \"differentCategoryBooks\": {\n \"$filter\": {\n \"input\": \"$Books\",\n \"as\": \"book\",\n \"cond\": {\n \"$ne\": [\n \"$$newBook.Category\",\n \"$$book.Category\"\n ]\n }\n }\n }\n },\n \"in\": {\n \"$cond\": [\n {\n \"$not\": [\n {\n \"$ne\": [\n {\n \"$type\": \"$$sameCategoryBook\"\n },\n \"missing\"\n ]\n }\n ]\n },\n {\n \"$concatArrays\": [\n \"$Books\",\n [\n {\n \"Category\": \"$$newBook.Category\",\n \"BookInfos\": [\n {\n \"Name\": \"$$newBook.Name\",\n \"Auth\": \"$$newBook.Auth\"\n }\n ]\n }\n ]\n ]\n },\n {\n \"$concatArrays\": [\n \"$$differentCategoryBooks\",\n [\n {\n \"$mergeObjects\": [\n \"$$sameCategoryBook\",\n {\n \"BookInfos\": {\n \"$concatArrays\": [\n \"$$sameCategoryBook.BookInfos\",\n [\n {\n \"Name\": \"$$newBook.Name\",\n \"Auth\": \"$$newBook.Auth\"\n }\n ]\n ]\n }\n }\n ]\n }\n ]\n ]\n }\n ]\n }\n }\n }\n ]\n }\n }\n }\n }\n }\n ],\n \"upsert\": true,\n \"multi\": true\n }\n ]\n}\n",
"text": "This is the same query but with filters,its less code(~1/2) but reads the array 2 times.\nI tested it seems to work also i thinkBooks are filtered 2 times\nbook-same-category (filter …)\nbooks-different category (filter …)If not same category found the new book is added to the books\nIf same category found then the bookInfos is updated and added to books-different-categoryIts also pipeline update",
"username": "Takis"
},
{
"code": "db.sample.update({books: {$elemMatch: {Category: \"<FILTER_FIELD_KEY>\", BookInfos: \"<FILTER_FIELD_VALUE>\"}}}, \n [{$addFields: {input: {$zip: {inputs: [\"$books\", [{Category: \"<TARGET_FIELD_KEY>\", BooksInfos: {...} }]]}}}}, \n {$set: {books: {\"$arrayElemAt\": [\"$input\", 0]}}}, \n {$project: {input : 0}}\n ]\n );\n",
"text": "Hi guys,I used to prepare another approach for a different data model, but try to use something of this sort:The Idea is to zip the array with the nee element and get a sort of upsert to an array.Let me know if you need a more specific examples.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks Pavel. Your method gave me a good idea. I think I can write the right method now.",
"username": "grape_Ye"
},
{
"code": "",
"text": "Your method surprised me. I never thought I could do it before. Now I have a better understanding of Mongo.\nThanks\nTakis.",
"username": "grape_Ye"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Update nested arrays with upsert | 2020-12-25T08:15:54.932Z | Update nested arrays with upsert | 13,752 |
null | [
"queries",
"mongodb-shell"
] | [
{
"code": "",
"text": "HelloI am looking for a online tool or script,to print MQL/JSON closer to the way people write MQL.\nOnline i found only either 1 line or new line for every { or ] and it became un-readable if big query,i also tried the pretty of mongoshell.Thank you",
"username": "Takis"
},
{
"code": "",
"text": "Have a look at jq. It has some nice features. Not sure it will fill your needs but it is a good place to start.",
"username": "steevej"
},
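For example, a quick way to try jq on a query saved as JSON (the file name and document contents are just examples):

```sh
# Pretty-print with jq's default 2-space indentation
jq . query.json

# Or pipe a one-line query straight in
echo '{"$match":{"status":"A"}}' | jq .
```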
{
"code": "jqmongomatch = { \"$match\" : { ... } } ;\nsort = { \"$sort\" : { ... } } ;\nlookup = { \"$lookup\" : { ... } } ;\ngroup = { \"$group\" : { ... } } ;\npipeline = [ match , lookup , sort , group ] ;\ndb.collection.aggregate( pipeline ) ;\n",
"text": "Hi @Takis,Can you share an example of the sort of formatting you are looking for (sample document before & after)? Folks have different preferences for formatting of MQL and JSON queries.I second @steevej’s suggestion of jq for pretty-printing JSON (and it is great for quick filtering and wrangling, too). For quick formatting I use pretty-printing in the mongo shell, or work in a more visual tool like MongoDB Compass.I build more complicated aggregation queries using variables for readability and more straightforward debugging. @steevej shared a great example of this in a recent discussion:One thing I do to make it more readable is to assign each stage a variable and have the pipeline be an array of my variables. For example:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "mongomongoshversion()",
"text": ",i also tried the pretty of mongoshell.Hi @Takis,Can you also confirm whether you were using the classic mongo shell or the new mongosh and the version (via version()).The new MongoDB Shell generally has better formatting including colour syntax highlighting.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I tried it on another query and i cant see the contents of an arrays/objects.\nFormating is very nice its what i wanted,but i cant see all the query.How to print the complete query?\nI got this from JSON.parse(‘myJSON’) on mongoshScreenshot from 2020-12-23 12-21-421119×999 75.2 KB",
"username": "Takis"
},
{
"code": "JSON.parse()mongoshconsole.log(...)doc = {\n \"address\": {\n \"building\": \"8825\",\n \"coord\": [-73.8803827, 40.7643124],\n \"street\": \"Astoria Boulevard\",\n \"zipcode\": \"11369\"\n },\n \"borough\": \"Queens\",\n \"cuisine\": \"American\",\n \"grades\": [ {\n \"date\": new Date(\"2014-11-15T00:00:00.000Z\"),\n \"grade\": \"Z\",\n \"score\": 38\n },\n {\n \"date\": new Date(\"2012-02-10T00:00:00.000Z\"),\n \"grade\": \"A\",\n \"score\": 13\n }],\n \"name\": \"Brunos On The Boulevard\",\n \"restaurant_id\": \"40356151\"\n}\nObjectArrayversion()",
"text": "Hi @Takis,Instead of using JSON.parse(), try using the implicit evaluation in mongosh or console.log(...). This should pretty-print including awareness of MongoDB extended JSON data types.I created a quick test doc with arrays and objects based on the Atlas Sample Restaurants Dataset and all of the values appeared as I expected:If you are still seeing Object and Array, please provide:Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "{\"aggregate\":\"testcollA\",\"pipeline\":[{\"$lookup\":{\"from\":\"testcollA\",\"let\":{\"acid\":\"$accountId\",\"d\":\"$postedDate\"},\"pipeline\":[{\"$match\":{\"$expr\":{\"$eq\":[\"$accountId\",\"$$acid\"]}}},{\"$group\":{\"_id\":\"$accountId\",\"userOldestPostedDate\":{\"$min\":\"$postedDate\"}}},{\"$addFields\":{\"accountId\":\"$_id\"}},{\"$project\":{\"_id\":0}},{\"$project\":{\"userOldestPostedDate\":1}}],\"as\":\"joined\"}},{\"$unwind\":{\"path\":\"$joined\"}},{\"$replaceRoot\":{\"newRoot\":{\"$mergeObjects\":[\"$joined\",\"$$ROOT\"]}}},{\"$unset\":[\"joined\"]},{\"$group\":{\"_id\":\"$accountId\",\"sum\":{\"$sum\":{\"$cond\":[{\"$lte\":[{\"$subtract\":[\"$postedDate\",\"$userOldestPostedDate\"]},2592000000]},1,0]}}}},{\"$addFields\":{\"accountId\":\"$_id\"}},{\"$project\":{\"_id\":0}},{\"$addFields\":{\"target\":{\"$gte\":[\"$sum\",5]}}},{\"$project\":{\"_id\":0,\"accountId\":1,\"target\":1}},{\"$merge\":{\"into\":{\"db\":\"testdb\",\"coll\":\"testcollB\"},\"on\":[\"accountId\"],\"whenMatched\":\"merge\",\"whenNotMatched\":\"discard\"}}],\"cursor\":{},\"maxTimeMS\":1200000}\n\n",
"text": "HelloThank you for the replyI am using mongosh version 0.6.1\nI run it from ubuntu terminalQuery(random aggregate command,that i wanted to send to forum yesterday) =JSON.parse(‘Query_above’) and console.log(Query_above)\nPrints the same,query is very well formated but i dont see some embeded arrays/objects\nHere only objects in previous query i had arrays and objects.Also the below is Javascript Object,it would be nice to have valid JSON with keys as strings.\nSo people can copy paste it.Screenshot from 2020-12-23 14-49-43911×916 76.9 KBSolving this is important for the forum also,so far i am sending hard to read MQL code,because\ni dont know a way to produce compact MQL code.",
"username": "Takis"
},
{
"code": "format-output.ts#L162util.inspect()util.inspect(json,\n {\n colors: true,\n depth: null\n }\n)\n",
"text": "Prints the same,query is very well formated but i dont see some embeded arrays/objects\nHere only objects in previous query i had arrays and objects.Hi @Takis,Thanks for sharing the sample doc, which helped reproduce the issue. I looked a bit further into the shell implementation and tracked this quirk down to max recursion depth which is currently set to 6 by default: format-output.ts#L162.It looks like this default was chosen to provide a balance between performance and visible data.However, you can call util.inspect() directly and provide a depth of “null” (unlimited) to recurse up to the maximum stack size:If you think it might be useful to have a configurable (or larger) default, please raise a suggestion on the MongoDB Feedback Engine and comment here with the link so others can watch and upvote.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "{\"update\" \"testcoll\",\n \"updates\"\n [{\"q\" {\"UserId\" 1},\n \"u\"\n [{\"$addFields\"\n {\"Books\"\n {\"$let\"\n {\"vars\"\n {\"newBook\" {\"Name\" \"xxx2\", \"Auth\" \"xxx2\", \"Category\" 2}},\n \"in\"\n {\"$cond\"\n [{\"$not\" [{\"$ne\" [{\"$type\" \"$Books\"} \"missing\"]}]}\n [{\"Category\" \"$$newBook.Category\",\n \"BookInfos\"\n [{\"Name\" \"$$newBook.Name\", \"Auth\" \"$$newBook.Auth\"}]}]\n {\"$arrayElemAt\"\n [{\"$reduce\"\n {\"input\" \"$Books\",\n \"initialValue\" [[] 0 false],\n \"in\"\n {\"$let\"\n {\"vars\" {\"booksIndexAdded\" \"$$value\", \"book\" \"$$this\"},\n \"in\"\n {\"$let\"\n {\"vars\"\n {\"books\" {\"$arrayElemAt\" [\"$$booksIndexAdded\" 0]},\n \"index\" {\"$arrayElemAt\" [\"$$booksIndexAdded\" 1]},\n \"added\" {\"$arrayElemAt\" [\"$$booksIndexAdded\" 2]}},\n \"in\"\n {\"$cond\"\n [{\"$eq\" [\"$$book.Category\" \"$$newBook.Category\"]}\n [{\"$concatArrays\"\n [\"$$books\"\n [{\"$mergeObjects\"\n [\"$$book\"\n {\"BookInfos\"\n {\"$concatArrays\"\n [\"$$book.BookInfos\"\n [{\"Name\" \"$$newBook.Name\",\n \"Auth\" \"$$newBook.Auth\"}]]}}]}]]}\n {\"$add\" [\"$$index\" 1]}\n true]\n {\"$cond\"\n [{\"$and\"\n [{\"$eq\"\n [\"$$index\"\n {\"$subtract\" [{\"$size\" \"$Books\"} 1]}]}\n {\"$not\" [\"$$added\"]}]}\n [{\"$concatArrays\"\n [\"$$books\"\n [\"$$book\"\n {\"Category\" \"$$newBook.Category\",\n \"BookInfos\"\n [{\"Name\" \"$$newBook.Name\",\n \"Auth\" \"$$newBook.Auth\"}]}]]}\n {\"$add\" [\"$$index\" 1]}\n true]\n [{\"$concatArrays\" [\"$$books\" [\"$$book\"]]}\n {\"$add\" [\"$$index\" 1]}\n \"$$added\"]]}]}}}}}}}\n 0]}]}}}}}],\n \"upsert\" true,\n \"multi\" true}]}\n{\n update: 'testcoll',\n updates: [\n {\n q: { UserId: 1 },\n u: [\n {\n '$addFields': {\n Books: {\n '$let': {\n vars: {\n newBook: { Name: 'xxx2', Auth: 'xxx2', Category: 2 }\n },\n in: {\n '$cond': [\n {\n '$not': [\n { '$ne': [ { '$type': '$Books' }, 'missing' ] }\n ]\n },\n [\n {\n Category: '$$newBook.Category',\n BookInfos: [\n {\n Name: '$$newBook.Name',\n Auth: '$$newBook.Auth'\n }\n ]\n }\n ],\n {\n '$arrayElemAt': [\n {\n '$reduce': {\n input: '$Books',\n initialValue: [ [], 0, false ],\n in: {\n '$let': {\n vars: {\n booksIndexAdded: '$$value',\n book: '$$this'\n },\n in: {\n '$let': {\n vars: {\n books: {\n '$arrayElemAt': [ '$$booksIndexAdded', 0 ]\n },\n index: {\n '$arrayElemAt': [ '$$booksIndexAdded', 1 ]\n },\n added: {\n '$arrayElemAt': [ '$$booksIndexAdded', 2 ]\n }\n },\n in: {\n '$cond': [\n {\n '$eq': [\n '$$book.Category',\n '$$newBook.Category'\n ]\n },\n [\n {\n '$concatArrays': [\n '$$books',\n [\n {\n '$mergeObjects': [\n '$$book',\n {\n BookInfos: {\n '$concatArrays': [\n '$$book.BookInfos',\n [\n {\n Name: '$$newBook.Name',\n Auth: '$$newBook.Auth'\n }\n ]\n ]\n }\n }\n ]\n }\n ]\n ]\n },\n { '$add': [ '$$index', 1 ] },\n true\n ],\n {\n '$cond': [\n {\n '$and': [\n {\n '$eq': [\n '$$index',\n {\n '$subtract': [\n {\n '$size': '$Books'\n },\n 1\n ]\n }\n ]\n },\n {\n '$not': [ '$$added' ]\n }\n ]\n },\n [\n {\n '$concatArrays': [\n '$$books',\n [\n '$$book',\n {\n Category: '$$newBook.Category',\n BookInfos: [\n {\n Name: '$$newBook.Name',\n Auth: '$$newBook.Auth'\n }\n ]\n }\n ]\n ]\n },\n {\n '$add': [ '$$index', 1 ]\n },\n true\n ],\n [\n {\n '$concatArrays': [\n '$$books',\n [ '$$book' ]\n ]\n },\n {\n '$add': [ '$$index', 1 ]\n },\n '$$added'\n ]\n ]\n }\n ]\n }\n }\n }\n }\n }\n }\n },\n 0\n ]\n }\n ]\n }\n }\n }\n }\n }\n ],\n upsert: true,\n multi: true\n }\n ]\n}\n\n",
"text": "Still its not so good the print : (But its better than online tools i found so farOnline tool gave 250 lines,mongosh gave 170 lines,Clojure pprint 63 lines.\nThe query using Clojure pprint as Clojure map is still much better\n(but it doesnt have : and , so its not valid json)Clojure pprintMongosh same query,with : and commas, but some keys are not strings,so still not valid jsonI would like something like Clojure pprint with valid JSON\nif someone finds a tool to do this it would be nice to use it,to send compact JSON.\nIt could help the forum also",
"username": "Takis"
}
] | Pretty print of MQL/JSON code | 2020-12-19T19:16:51.190Z | Pretty print of MQL/JSON code | 7,872 |
[
"connecting",
"security"
] | [
{
"code": "",
"text": "Hello,I followed a guide on how to setup a privatelink with mongodb atlasCustomers want to guarantee private connectivity to MongoDB Atlas running on AWS. All dedicated clusters on MongoDB Atlas are deployed in their own VPC, so customers usually connect to a cluster via VPC peering or public IP access-listing. AWS...I’m unable to connect to my database using mongodb shell and private endpoint inside ec2 terminal, but I managed to connect with standard connection after whitelisting ec2 public ip.Things I checkedWhat else could I check to see where is the problem?",
"username": "Sven_Maikranz"
},
{
"code": "",
"text": "Hi @Sven_MaikranzProbably something is wrong with your specific configuration.Please contact our support for further assistance.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Troubleshoot private endpoint connection | 2020-12-24T10:31:51.561Z | Troubleshoot private endpoint connection | 4,067 |
|
null | [
"replication",
"configuration"
] | [
{
"code": "",
"text": "I’ve read this discussion: Replica Set with Master, Secondary, 3 ArbitersThe scenario I never saw mentioned in that discussion was: PSSAAThe reason I ask is it seems like having the extra two arbiters would allow a split where PS / SAA would allow SAA to stay operational as primary in a case where P,S,S,A,A were instances in a larger pool of machines where you might potentially lose more than one of them at a time.Is that 100% not supported, not recommended, and “doesn’t operate that way”?",
"username": "Nathan_Neulinger"
},
{
"code": "",
"text": "Welcome to the community @Nathan_Neulinger!There is currently (as at MongoDB 4.4) no hard restriction on adding multiple arbiters, but my general advice would be “at most one arbiter, ideally none”.For more elaboration on why, please see my comment on Replica set with 3 DB Nodes and 1 Arbiter - #8 by Stennie_X.The reason I ask is it seems like having the extra two arbiters would allow a split where PS / SAA would allow SAA to stay operational as primary in a case where P,S,S,A,A were instances in a larger pool of machines where you might potentially lose more than one of them at a time.The voting majority situation you are setting up with PS/SAA would be better implemented as P/SS (or ideally P/S/S with members in three data centres).In your PSSAA scenario, unavailability of any data-bearing node also means you lose the ability to acknowledge majority writes despite maintaining a voting majority. This generally has undesirable operational consequences, particularly in modern versions of MongoDB with more features and use cases relying on majority read and write concerns.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "@Nathan_Neulinger, adding to what @Stennie_X said , Also note if you choose data center 1 PS and data center 2 as SAA, there is more chances if there is any network interruption than SAA can become primary and if PS join back SAA than it has to rollback data…if you have only two data centers and when data center 1 goes down, you want to make datacenter 2 read write and keeping oplog to sync when data center come back within oplog window than you can choose\nPS/S(priorty=0)AA- make sure data center 2 S has priorty=0 to avoid it becoming primary during network interruption. and you can manually decide to make primary in case of true disaster.",
"username": "mohammed_naseer"
},
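A short sketch of the priority-0 configuration mentioned above (host names and member indexes are placeholders):

```javascript
// Add the data centre 2 secondary so it can never be elected primary
rs.add({ host: "dc2-host-1:27017", priority: 0 })

// Or lower the priority of an existing member via reconfig
cfg = rs.conf()
cfg.members[2].priority = 0   // index of the data centre 2 secondary
rs.reconfig(cfg)
```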
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Does the statement "never more than one arbiter" ALWAYS apply? | 2020-12-14T20:47:35.549Z | Does the statement “never more than one arbiter” ALWAYS apply? | 2,554 |
null | [
"connecting",
"sharding"
] | [
{
"code": "",
"text": "**Could not find host matching read preference { mode: “primary” } for set **I used sh.status() in mongos.\nI checked the following error in the balancerwhat is the error\nPlease tell me how to fix the error",
"username": "Park_49739"
},
{
"code": "",
"text": "Could you please give some more information about your sharded cluster setup. IE number of shards, and number of servers in each shard replica set?Most of the time when I see this error it is when a shard replica set doesn’t have a primary node so the mongos/configs can’t find the primary node. This causes issues because it can’t route writes to the primary of the shard.I would check all of the shard servers directly to verify they are in a healthy state and each shard has a primary node.",
"username": "tapiocaPENGUIN"
}
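One way to do the check described above is to connect to each shard replica set member directly (not through mongos) and confirm that exactly one member reports PRIMARY; the host name below is a placeholder:

```javascript
// mongo --host shard01-node1:27018
rs.status().members.forEach(m => print(m.name, m.stateStr))
// Expect one PRIMARY per shard; if a shard has none, investigate that replica
// set first (connectivity, votes, priorities), since mongos cannot route writes to it.
```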
] | Could not find host matching read preference { mode: "primary" } for set | 2020-12-24T12:59:10.445Z | Could not find host matching read preference { mode: “primary” } for set | 2,049 |
null | [
"atlas-device-sync",
"app-services-data-access"
] | [
{
"code": "",
"text": "HiWe’re about to start building a web app using Realm. There will be many (hopefully thousands in the future) of small business tenants, most of which will only have a single user, but providing the ability to add multiple users per tenant is a must have feature at an early stage.The Realm rule templates only seem to support access to “own” or “shared” documents, the latter of which is impractical, given each tenant could have tens of thousands of documents, which need to be accessed by all of the tenant’s users, including access to historic documents by newly added users.Is there a standard or recommended way of setting up a rule, so that document access is controlled by a tenant ID, rather than a user ID? Assuming this is possible, does the SDK support user authentication against a tenant too?Thanks in advance!",
"username": "AndyTr"
},
{
"code": "",
"text": "Bump.Does anybody have any pointers or suggestions, please?Thanks!",
"username": "AndyTr"
},
{
"code": "",
"text": "Quick question. So there can be multiple users per tenant. Can there be multiple tenants per user? If not, tenants in a sense are groups of users - right?",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "Yes, in a way. As such, every document would belong to a group, rather than a user.I’ve come from an 8 year background working with MySQL and a couple of years working with GCP, but both MongoDB and Stitch/Realm are new to me.I initially posted this question when I was just starting to play with Realm, but having now started the setup of our app, I’m starting to get to grips with it. Not as convoluted as it initially appeared to me.What I’m now looking at … please correct me if you think there is a better way … is the following:Collection: user (assigned as Realm’s custom user data collection).\nProperty: tenant_id (string: contains reference to _id of tenant document).\nRule: All users are unable to write to tenant_id.Collection: tenant (created when first user registers).\nProperty: _id: ObjectID(xyz)\nRule: All users are able to read the tenant document with _id referenced within their user document.Trigger: On initial user creation - creates ‘tenant’ document, then updates user’s document (customData) with tenant_id reference. Subsequent users are invited to register via emailed URL, which contains a token relating back to the tenant_id, allowing the same ID to be written to their user document.Applied to every other collection…\nSchema: property: { tenant_id: ‘string’, default: %%user.id }\nRule: All users are able to read/write any document with { tenant_id: %%user.customData.tenant_id }Does the above make sense? If so, does this seem like a sensible and logical approach, please?Thank you in advance!",
"username": "AndyTr"
},
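As a very rough illustration of the rule described above, a role on a tenant-scoped collection might look something like the sketch below. The exact rule-document layout and the expansion name for custom user data (%%user.custom_data vs %%user.customData) are assumptions here and should be checked against the current Realm rules documentation:

```json
{
  "roles": [
    {
      "name": "tenantAccess",
      "apply_when": {},
      "read": { "tenant_id": "%%user.custom_data.tenant_id" },
      "write": { "tenant_id": "%%user.custom_data.tenant_id" }
    }
  ]
}
```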
{
"code": "",
"text": "… and sorry, to answer your first question:In the real world, it is certainly possible to have a user who belongs to multiple tenants, but I don’t think I’ll ever want to support this within the app. As such, a user would only ever belong to one tenant (the parent), but a tenant could have multiple users.Thanks",
"username": "AndyTr"
},
{
"code": "",
"text": "Andy,The problem you are describing seems remotely similar to the chat partition problem I described in a previous medium article I wroteI grew up in Paris France and went to French high-school there. There is a lot I loved about the culture, but one of the most frustrating…\nReading time: 9 min read\nInstead of chat partitions in the custom data, you would have tenant_id. This article basically explains how to enforce the read/write rules you describe using the MongoDB Realm Sync permissions as described in their documentation.Otherwise, I think that you are on the right track with your design.Richard Krueger",
"username": "Richard_Krueger"
}
] | Realm rule for tenant ID (multiple users per tenant) | 2020-12-21T10:38:48.842Z | Realm rule for tenant ID (multiple users per tenant) | 5,234 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "Hi there,\nI’m looking into the Realm partitionKey configuration but I could not find where to specify the rules to map users to allowed partionKeys.A node that wants to start a Realm synchronization, needs a user authentication key and a partitionKey. I need to restrict the allowed values for the partitionKey only to certain values. The best would be being able to write a custom function that, provided the realm user Id and partition key, return a boolean to know if the value is allowed.\nAlternatively, I would like to add a list of allowed partitionKey values on user creation, stored in the realm custom user data and allow the sync only if the partitionKey is among those values.Is it possible to have such configuration?",
"username": "Daniele_Malinconi"
},
{
"code": "",
"text": "Daniele,I wrote a Medium article on partitions and permissions, maybe this will help.I grew up in Paris France and went to French high-school there. There is a lot I loved about the culture, but one of the most frustrating…\nReading time: 9 min read\nRichard Krueger",
"username": "Richard_Krueger"
}
] | Realm Sync: Restrict partitionKey values for users | 2020-12-21T20:59:21.843Z | Realm Sync: Restrict partitionKey values for users | 1,982 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Working Dot-net app now gives Bad Changeset error, since starting to sync the analogous Android app .\nDot-net uses realm 10.0.0-beta2, Android 10.2.0.\nNo errors shown in Logs.Order of events:Wiping realm from Android app and deleting both realm files, has no effect.\nI’m stuck with the error on Dot-net app (Android app is OK).I’m not sure if I’ve ever managed to sync a dot-net app with synced data from an android app.\nHas anyone been successful with multi-platform syncing with dot-net realm?",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "The logs on the server should have more details about the bad changeset.",
"username": "nirinchev"
},
{
"code": "",
"text": "How can I access the server logs?\nThere were no errors in the mongo logs on the dashboard.",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "I have found the problem.One of the classes in the Android version had a variable as ‘val’, the dot-net version a ‘var’ type.\nPity that this oversight could not have been reported to the logs.\nThanks for following this post.\nRich",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Dot-net App throws exception Bad Changeset (DOWNLOAD) | 2020-12-21T08:11:54.449Z | Realm Dot-net App throws exception Bad Changeset (DOWNLOAD) | 1,521 |
null | [
"performance",
"devops"
] | [
{
"code": "",
"text": "Hi there,I plan to use the MongoDB Atlas Live-Migration to move a local ReplSet to Atlas. There should be no downtime. So the Live-Migration comes in handy. But I have only a small line of 100MB/s available which is used by ~30% constantly. I fear when I live-migrate that I will have a major impact on current processes which need bandwidth. The DB is > 500GB so even when would use 100% of the line this is > 12h\nSo in case I would impact my current processes for > 12h I would better go for mongorestore, which uses compressed data but this would still be quite a big downtime. This currently not an option. I would like to have the smallest downtime window possible, I do not care how long the Live-Migration takes so even 24h would be ok.TL;DR:\nHow can I limit the traffic of the MongoDB Live-Migration (monogmirror) to max 50% of my available network bandwidth capacity? I did some investigation with tc and trickle but both seem not to be the correct tools, or I misunderstood how to use them.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "tcqdiscpriotricklescptrickle -v -u 50 -s scp Ontario.zip destination:",
"text": "Hi @michael_hoellerOn the mongomirror side adjusting numParallelCollections to a lower number may correspondingly lower bandwidth consumption, but this would be non deterministic.tc won’t allow you to rate limit but you could arrange the qdisc and prio to lower the priority of atlas destined packets.trickle looked like the userspace tool to use, seems to work okay with scp for example:\ntrickle -v -u 50 -s scp Ontario.zip destination:",
"username": "chris"
},
{
"code": "tctrickletricklemongomirrormongomirror",
"text": "Hi @michael_hoeller,If you need to throttle bandwidth or guarantee quality of service for important use cases, I would look at VLANs and QoS settings available via your network or router config. Reducing the number of parallel collection migrations may help, but without explicit traffic shaping you are relying on the cooperative nature of TCP. This will may not be ideal if the 30% of active usage includes latency-sensitive applications or interactive sessions.Something like tc or trickle should also work, but I personally find VLAN & QoS more straightforward to configure if there is a management interface. I noticed that trickle does not support statically linked executables, which may preclude usage with mongomirror.So in case I would impact my current processes for > 12h I would better go for mongorestore, which uses compressed dataMake sure you are using the latest version of mongomirror. It supports compression by default:As of version 0.9.0, mongomirror uses wire compression if it is enabled with either the source or the target. Use the --compressors option to specify which compression libraries to allow.How can I limit the traffic of the MongoDB Live-Migration (monogmirror) to max 50% of my available network bandwidth capacity? I did some investigation with tc and trickle but both seem not to be the correct tools, or I misunderstood how to use them.Can you share more information on what you tried and what didn’t work?You may also be able to get more detailed advice on traffic shaping from a community like Server Fault. Applications generally try to use as much resource as the O/S or network allows, and more nuanced management happens at a layer which has more visibility on overall activity and priorities.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "mongomirror",
"text": "Hello,How can I limit the traffic of the MongoDB Live-Migration (monogmirror) to max 50% of my available network bandwidth capacity? I did some investigation with tc and trickle but both seem not to be the correct tools, or I misunderstood how to use them.Can you share more information on what you tried and what didn’t work?I think it was more confusing than productive to mention mongomirrorThe use case is to utilize the MongoDB Atlas Live-Migation to migrate a local DB to MongoDB Atlas. MongoDB Atlas will use mongomirror. The local DB is uncompressed appr. 500 GB and the line is 100MB/s from which 30% -50% need to be available for daily business. In this scenario I like to make sure that the Live-migation only take max 50% of the line. When the live-migration takes much longer that is no issue, I will take care that I have an oplog big enough to cover a potential resume of the Live-Migation.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "mongomirrormongomirrortctrickletciptablestctc",
"text": "I think it was more confusing than productive to mention mongomirrorHi Michael,I think the mongomirror context is more helpful than confusing, and your general use case of traffic shaping seems clear.However, the elaboration I was looking for was around tc and trickle not working to limit bandwidth as you expected. If you can share the command-line incantations and outcomes that didn’t work, perhaps someone will have suggestions on how to adjust those.I don’t find tc and iptables incantations particularly intuitive to decipher, which is why I usually look for alternatives like VLANs or QoS configured through networking devices rather than fussing with Linux command lines.You might find some useful tc info in the following references:I assumed you were running Linux based on the mention of tc, but what specific distro and version are you using?Regards,\nStennie",
"username": "Stennie_X"
}
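For completeness, the classic token-bucket example from the tc-tbf man page, adapted as a sketch for capping egress on the interface that carries the migration traffic. The interface name and numbers are placeholders, and the cap applies to all traffic leaving that interface, not just mongomirror:

```sh
# Cap egress on eth0 to roughly 50 Mbit/s
tc qdisc add dev eth0 root tbf rate 50mbit burst 32kbit latency 400ms

# Remove the limit once the migration has finished
tc qdisc del dev eth0 root
```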
] | Throttle Live-Migration network traffic | 2020-12-22T06:59:15.417Z | Throttle Live-Migration network traffic | 2,560 |
null | [] | [
{
"code": "",
"text": "G’day folks!Some of the more active users have commented that they haven’t received notifications when someone replies to a topic they posted on, unless they were explicitly mentioned or the reply was directly to their post.Looking into this further, I realised the default notification when you reply to a topic was set to Tracking but should really be Watching so you don’t miss any replies. I’ve updated this setting so you may see some more notifications than before.If you prefer a different notification scheme, you can override the defaults in your user profile. You can also change notification defaults per-topic, if you prefer to follow more (or less) of the conversation.For more information, please see Managing and subscribing to notifications.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Update to default notification settings when you reply to a topic | 2020-12-23T06:58:16.030Z | Update to default notification settings when you reply to a topic | 3,090 |
null | [
"node-js",
"security"
] | [
{
"code": "",
"text": "How can I suppress this warning:\n“Warning: no saslprep library specified. Passwords will not be sanitized”without having to install this third-party npm, “saslprep”Thanks.",
"username": "Melody_Maker"
},
{
"code": "saslprepSCRAM-SHA-256NODE-1663saslpreplib/core/auth/scram.jssaslprep",
"text": "“Warning: no saslprep library specified. Passwords will not be sanitized”Hi @Melody_Maker,A SASLprep implementation (RFC-4013) is a prerequisite for the SCRAM-SHA-256 spec and required for compliance if you are using this auth method. This library prepares strings that contain non-ASCII characters for use in username and password values.saslprep is currently only used for SCRAM-SHA-256 authentication in the Node.js driver. As this library is large, it was made an optional dependency per NODE-1663 in the MongoDB issue tracker. Other authentication mechanisms (such as the earlier default of SCRAM-SHA-1) do not require this library.The warning should only appear when sha256 is used without saslprep available (ref: lib/core/auth/scram.js).If you are using SCRAM-SHA-256, you should install saslprep to remove the warning.Alternatively, you could use SCRAM-SHA-1 to avoid the library requirement. SCRAM-SHA-256 is a more secure standard than SCRAM-SHA-1, but also computationally more expensive.If this warning appears to be incorrect for your deployment, please confirm your version of the Node.js driver. The code snippet I referenced above is from the 3.6.x Node.js driver, so if you are using an older version behaviour may differ (and I would suggest testing the latest version with your application code).Regards,\nStennie",
"username": "Stennie_X"
},
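In practice that means either adding the optional dependency, or explicitly selecting the older mechanism (for example via the authMechanism=SCRAM-SHA-1 connection string option):

```sh
# Install the optional dependency so SCRAM-SHA-256 can sanitize passwords
# and the warning goes away
npm install saslprep --save
```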
{
"code": "",
"text": "Thanks for the info.I’ll just install the saslprep library then.I’m sick of looking at that warning message many dozens of times per day, while developing.",
"username": "Melody_Maker"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Suppress "saslprep" warning | 2020-12-21T20:58:12.988Z | Suppress “saslprep” warning | 6,360 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "is there any possibility to do $lookup between two schema with collection\nEg:-\nschema 1:-\ncollection name (users)schema 2:-\ncollection name (courses)Can you suggest some solution ?",
"username": "Afser_Ali"
},
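If the question is about joining two Mongoose models / collections, a generic $lookup sketch might look like the following. The field names (courseIds, _id) are assumptions, since the original post does not show the document fields:

```javascript
db.users.aggregate([
  {
    $lookup: {
      from: "courses",           // the other collection
      localField: "courseIds",   // assumed array of course ids on users
      foreignField: "_id",       // assumed key on courses
      as: "courses"
    }
  }
])
```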
{
"code": "",
"text": "hi, could you clarify a little more?",
"username": "Leandro_Domingues"
}
] | $lookup between schema | 2020-12-22T07:37:44.520Z | $lookup between schema | 1,353 |
null | [
"aggregation"
] | [
{
"code": "{\n \"accountId\" : \"0310100000041704\",\n \"postedDate\" : ISODate(\"2020-12-22T00:00:00Z\"),\n \"reasonCodeSending\" : null,\n \"reference\" : \"4cb103in41103\",\n \"remarks\" : null,\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n\n{\n \"accountId\" : \"0310100000041704\",\n \"postedDate\" : ISODate(\"2020-12-21T00:00:00Z\"),\n \"reasonCodeSending\" : null,\n \"reference\" : \"4cb103in41103\",\n \"remarks\" : null,\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n\n{\n \"accountId\" : \"0310100000041704\",\n \"postedDate\" : ISODate(\"2020-12-20T00:00:00Z\"),\n \"reasonCodeSending\" : null,\n \"reference\" : \"4cb103in41103\",\n \"remarks\" : null,\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n\n{\n \"accountId\" : \"0310100000041704\",\n \"postedDate\" : ISODate(\"2020-12-19T00:00:00Z\"),\n \"reasonCodeSending\" : null,\n \"reference\" : \"4cb103in41103\",\n \"remarks\" : null,\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n\n{\n \"accountId\" : \"0310100000041704\",\n \"postedDate\" : ISODate(\"2020-12-18T00:00:00Z\"),\n \"reasonCodeSending\" : null,\n \"reference\" : \"4cb103in41103\",\n \"remarks\" : null,\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n\n{\n \"accountId\" : \"0310100000041704\",\n \"postedDate\" : ISODate(\"2020-12-17T00:00:00Z\"),\n \"reasonCodeSending\" : null,\n \"reference\" : \"4cb103in41103\",\n \"remarks\" : null,\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n\n{\n \"accountId\" : \"0310100000041705\",\n \"postedDate\" : ISODate(\"2020-12-19T00:00:00Z\"),\n \"reasonCodeSending\" : null,\n \"reference\" : \"4cb103in41103\",\n \"remarks\" : null,\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n\n{\n \"accountId\" : \"0310100000041705\",\n \"postedDate\" : ISODate(\"2020-12-11T00:00:00Z\"),\n \"reasonCodeSending\" : null,\n \"reference\" : \"4cb103in41103\",\n \"remarks\" : null,\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n\n{\n \"accountId\" : \"0310100000041705\",\n \"postedDate\" : ISODate(\"2020-11-19T00:00:00Z\"),\n \"reasonCodeSending\" : null,\n \"reference\" : \"4cb103in41103\",\n \"remarks\" : null,\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n\n\n{\n \"accountId\" : \"0310100000041706\",\n \"postedDate\" : ISODate(\"2020-06-19T00:00:00Z\"),\n \"reasonCodeSending\" : null,\n \"reference\" : \"4cb103in41103\",\n \"remarks\" : null,\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n\n\n{\n \"accountId\" : \"0310100000041706\",\n \"postedDate\" : ISODate(\"2020-07-19T00:00:00Z\"),\n \"reasonCodeSending\" : null,\n \"reference\" : \"4cb103in41103\",\n \"remarks\" : null,\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n\n\n{\n \"accountId\" : \"0310100000041706\",\n \"postedDate\" : ISODate(\"2020-08-19T00:00:00Z\"),\n \"reasonCodeSending\" : null,\n \"reference\" : \"4cb103in41103\",\n \"remarks\" : null,\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n\n\n{\n \"accountId\" : \"0310100000041706\",\n \"postedDate\" : ISODate(\"2020-09-19T00:00:00Z\"),\n \"reasonCodeSending\" : null,\n \"reference\" : \"4cb103in41103\",\n \"remarks\" : null,\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n\n\n{\n \"accountId\" : \"0310100000041706\",\n 
\"postedDate\" : ISODate(\"2020-10-19T00:00:00Z\"),\n \"reasonCodeSending\" : null,\n \"reference\" : \"4cb103in41103\",\n \"remarks\" : null,\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n{\n \"accountId\" : \"0310100000041704\",\n \"remarks\" : \"B\",\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\"\n}\n\n{\n \"accountId\" : \"0310100000041705\",\n \"remarks\" : \"M\",\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-07-09T00:00:00.000+04:00\"\n}\n\n{\n \"accountId\" : \"0310100000041706\",\n \"remarks\" : \"K\",\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-08-09T00:00:00.000+04:00\"\n}\n{\n \"accountId\" : \"0310100000041704\",\n \"remarks\" : \"B\",\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-06-09T00:00:00.000+04:00\",\n\t\"target\": \"true\"\t\n}\n\n{\n \"accountId\" : \"0310100000041705\",\n \"remarks\" : \"M\",\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-07-09T00:00:00.000+04:00\",\n\t\"target\": \"false\"\n\n}\n\n{\n \"accountId\" : \"0310100000041706\",\n \"remarks\" : \"K\",\n \"typeKey\" : \"C\",\n \"valueDate\" : \"2020-08-09T00:00:00.000+04:00\",\n\t\"target\": \"false\"\n}",
"text": "Hi,\nI hope you all are fine. I came with a problem that I want to calculate the documents of a customer on the bases of date filter(last 30 days from the postedDate field) and then if there are more or equal to 5 documents then add a new field in another collection with a true flag otherwise false ( datatype for this field should be boolean)Here are some samples of documents for your ease.Collection ACollection BExpected Output",
"username": "Nabeel_Raza"
},
{
"code": " { \n \"accountId\" \"0310100000041704\"\n \"target\" true/false\n }\n on: \"accountId\" \n whenMatched: \"merge\"\n whenNotMatched: \"discard\"\n",
"text": "HelloI think you need $merge1)Aggregate the first Collection(group etc) so in pipeline to have something like2)$merge with collection B*merge requires Collection B to have a unique index on acountId",
"username": "Takis"
},
{
"code": "",
"text": "But you have missed couples of thing, how can I get the count of the document of last 30 days( form postedDate field) and then on that bases we have to create a new field in next collection.",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "@Prasad_Saya @scott_molinari\nNeed your answer on this.\nOne more thing that can we do this in mongodb or not that doing some calculation on single document and then add the result(field) in the second collection on the bases of first collection calculation?",
"username": "Nabeel_Raza"
},
{
"code": "date_difference= {$subtract [now_date posted_date]) \n\n(if you use java now_date=new Timestamp(System/currentTimeMillis)) \n\nYou need\n\ndate_difference <= 86400000*30 //1 day = 86400000 millisec\n",
"text": "HelloYes i know i thought the main problem was the merge,maybe the bellow can solve all problems1)Filter\nYou can use subtract to filter those dates\n$subtrack works in dates also,if it takes 2 days returns the difference in milliseconds$subtract2)Then group by accountId,sum the members\n$addField target true if >5 else false3)And then you can do the merge",
"username": "Takis"
},
{
"code": "",
"text": "That looks cool but I am writing mongodb query not a java query.\nSteps are:",
"username": "Nabeel_Raza"
},
{
"code": "{\"$lte\" {$subtract [now_date, \"$postedDate\"]} 86400000*30} \n",
"text": "Hello ,The driver will calculate the now_date ,and the 86400000*30 ,before sending the query.Also $addField target true if >5 else false,you will use $cond ,not driver ifTo be sure that all work i have to run the query , but try it i think it will work",
"username": "Takis"
},
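A brief aside on computing the current date server-side: instead of having the driver interpolate a client-side timestamp, MongoDB 4.2+ exposes the $$NOW aggregation variable. The sketch below is only an illustration of that idea, reusing the collection and field names from this thread (testcollA, postedDate); it is not the poster's query.

```javascript
// Sketch: keep only documents whose postedDate lies within the last 30 days,
// letting the server supply the current date via $$NOW (MongoDB 4.2+).
db.testcollA.aggregate([
  {
    $match: {
      $expr: {
        $lte: [
          { $subtract: ["$$NOW", "$postedDate"] }, // difference in milliseconds
          30 * 24 * 60 * 60 * 1000                 // 30 days
        ]
      }
    }
  }
])
```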
{
"code": "{\n \"aggregate\": \"testcollA\",\n \"pipeline\": [\n {\n \"$group\": {\n \"_id\": \"$accountId\",\n \"sum\": {\n \"$sum\": {\n \"$cond\": [\n {\n \"$lte\": [\n {\n \"$subtract\": [\n \"2020-12-22T15:09:11Z\",\n \"$postedDate\"\n ]\n },\n 2592000000\n ]\n },\n 1,\n 0\n ]\n }\n }\n }\n },\n {\n \"$addFields\": {\n \"accountId\": \"$_id\"\n }\n },\n {\n \"$project\": {\n \"_id\": 0\n }\n },\n {\n \"$addFields\": {\n \"target\": {\n \"$gte\": [\n \"$sum\",\n 5\n ]\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"accountId\": 1,\n \"target\": 1\n }\n },\n {\n \"$merge\": {\n \"into\": {\n \"db\": \"testdb\",\n \"coll\": \"testcollB\"\n },\n \"on\": [\n \"accountId\"\n ],\n \"whenMatched\": \"merge\",\n \"whenNotMatched\": \"discard\"\n }\n }\n ],\n \"cursor\": {},\n \"maxTimeMS\": 1200000\n}\n",
"text": "HelloI wrote the query hopefully does what you needCollA(your data i kept only accountid and postedDate)\nScreenshot from 2020-12-22 17-06-011671×971 84.8 KBCollB (before running the query)\nScreenshot from 2020-12-22 17-06-121674×553 52.4 KBCollB(after running the query)\nScreenshot from 2020-12-22 17-06-591672×591 55.4 KBThe query\n2592000000 = 86400000 * 30\n“2020-12-22T15:09:11Z” = its now_date ,dont use string,use your driver method to take the\nSomething like date(now)\nOne more thing,for this to work, CollB needs a unique index on accountId (merge needs it)",
"username": "Takis"
},
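For completeness, the unique index that $merge needs on the target collection is not shown in the thread; it can be created along these lines (collection and field names are taken from the example above):

```javascript
// $merge with on: "accountId" requires a unique index on that field in the target collection.
db.testcollB.createIndex({ accountId: 1 }, { unique: true })
```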
{
"code": "",
"text": "“$subtract”: [\n“2020-12-22T15:09:11Z”,\n“$postedDate”\n]But the requirement was that we have to count the previous 30 days documents starting from postedDate not from the current date.",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "30days = date1 - date2In my query\n30days = current_date - $postedDate (keep it if $postedDate is in the last month)Which is the the date1 and date2 you need?",
"username": "Takis"
},
{
"code": "",
"text": "Previous 30 days document from postedDate",
"username": "Nabeel_Raza"
},
{
"code": "{\n \"aggregate\": \"testcollA\",\n \"pipeline\": [\n {\n \"$lookup\": {\n \"from\": \"testcollA\",\n \"let\": {\n \"acid\": \"$accountId\",\n \"d\": \"$postedDate\"\n },\n \"pipeline\": [\n {\n \"$match\": {\n \"$expr\": {\n \"$eq\": [\n \"$accountId\",\n \"$$acid\"\n ]\n }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$accountId\",\n \"userOldestPostedDate\": {\n \"$min\": \"$postedDate\"\n }\n }\n },\n {\n \"$addFields\": {\n \"accountId\": \"$_id\"\n }\n },\n {\n \"$project\": {\n \"_id\": 0\n }\n },\n {\n \"$project\": {\n \"userOldestPostedDate\": 1\n }\n }\n ],\n \"as\": \"joined\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$joined\"\n }\n },\n {\n \"$replaceRoot\": {\n \"newRoot\": {\n \"$mergeObjects\": [\n \"$joined\",\n \"$$ROOT\"\n ]\n }\n }\n },\n {\n \"$unset\": [\n \"joined\"\n ]\n },\n {\n \"$group\": {\n \"_id\": \"$accountId\",\n \"sum\": {\n \"$sum\": {\n \"$cond\": [\n {\n \"$lte\": [\n {\n \"$subtract\": [\n \"$postedDate\",\n \"$userOldestPostedDate\"\n ]\n },\n 2592000000\n ]\n },\n 1,\n 0\n ]\n }\n }\n }\n },\n {\n \"$addFields\": {\n \"accountId\": \"$_id\"\n }\n },\n {\n \"$project\": {\n \"_id\": 0\n }\n },\n {\n \"$addFields\": {\n \"target\": {\n \"$gte\": [\n \"$sum\",\n 5\n ]\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"accountId\": 1,\n \"target\": 1\n }\n },\n {\n \"$merge\": {\n \"into\": {\n \"db\": \"testdb\",\n \"coll\": \"testcollB\"\n },\n \"on\": [\n \"accountId\"\n ],\n \"whenMatched\": \"merge\",\n \"whenNotMatched\": \"discard\"\n }\n }\n ],\n \"cursor\": {},\n \"maxTimeMS\": 1200000\n}\n",
"text": "The before was\n30days = current_date- $postedDate\nNow it is\n30days = $postedDate - $userOldestPostedDate\nGivesScreenshot from 2020-12-22 19-03-241674×794 57.4 KB",
"username": "Takis"
}
] | Counting the document and adding the resultant to another collection | 2020-12-22T08:09:44.652Z | Counting the document and adding the resultant to another collection | 3,045 |
[
"replication"
] | [
{
"code": "",
"text": "Machine A (bankiz-db 10.20.106.12)\nMachine B (srv-oba-proxydigit 10.1.0.192)On each of the machines I can connect with the mongo shell. but from machine A i can’t connect to machine B mongod and vice versa. However, the port is open on the firewall, there is no authentication on the mongod deamon and a telnet or a netcat on the port shows that the ports is open.I am at the end of my debugging. in the logs we notice that at each connection attempt of a peer there a socket is opened and immediately closed:\n2020-12-19T11: 04: 08.984 + 0000 I NETWORK [listener] connection accepted from 10.1.0.192:48352 # 5 (1 connection now open)\n2020-12-19T11: 04: 08.985 + 0000 I NETWORK [conn5] end connection 10.1.0.192:48352 (0 connections now open)In the screenshot below I am doing obvious tests to show you my problem.\nCapture d’écran 2020-12-19 à 12.09.551599×653 157 KB\nhow i started mongod instance on each node ? :\nmongod --port 27017 --dbpath /var/lib/mongodb2 --replSet rs1 --bind_ip localhost,the_private_ip_adress",
"username": "shadai_ALI"
},
{
"code": "",
"text": "Are you running with no security?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Yes i’m running with no security as by default. I’m not using a config file. So to show all options i have passed pls check my command below:mongod --port 27017 --dbpath /var/lib/mongodb2 --replSet rs1 --bind_ip localhost,the_private_ip_address_of_my_host",
"username": "shadai_ALI"
},
{
"code": "",
"text": "They are on different subnets.One is on a /16 the other a /24. It is a networking issue.",
"username": "chris"
}
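One quick way to separate a networking problem from a mongod problem is to open a direct connection from the shell on one host to the other; the IP and port below are the ones from the original post:

```javascript
// Run inside a mongo shell on machine A. A connection error here points at routing/firewalling,
// not at replication or authentication.
const conn = new Mongo("10.1.0.192:27017");
printjson(conn.getDB("admin").runCommand({ ping: 1 }));
```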
] | Curious network problem between two mongodb nodes | 2020-12-19T18:55:20.466Z | Curious network problem between two mongodb nodes | 1,961 |
|
[
"aggregation",
"field-encryption"
] | [
{
"code": "",
"text": "I am getting the error “MongoError: Pipeline over an encrypted collection cannot reference additional collections.”\nAs I can see in my code the errors are coming wherever we have used aggregate like $lookup, $unwind, etc.\nSo, how to fix this.\nHere is the screenshot of that\necncryption_error11351×401 55.3 KB",
"username": "Great_Manager_Instit"
},
{
"code": "",
"text": "Hi @Great_Manager_Instit,The produced error you are getting is expected as currently Field Level Encryption fields cannot be use in a foriegn (not-self) collection lookup:Automatic client-side field level encryption supports the $lookup and $graphLookup only if the from collection matches the collection on which the aggregation runs against (i.e. self-lookup operations).$lookup and $graphLookup stages that reference a different from collection return an error.Therefore, the only workaround is to avoid the lookup.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "We have a large codebase, changing/removing lookup will take time and not looks easy task.\nWhat will be the ideal way to encrypt the data if we have a foreign reference in collections?",
"username": "Great_Manager_Instit"
}
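The thread ends without a concrete answer, but one common workaround is to replace the cross-collection $lookup with an application-side join: run one query per collection and combine the results in code. The sketch below uses made-up collection and field names (orders, customers, customerId) purely for illustration; it is not the poster's schema:

```javascript
// Hypothetical example: two separate queries instead of a $lookup,
// since automatic client-side field level encryption cannot join across collections.
const orderId = ObjectId("5fdf16c956d6a07c2880d8a3"); // placeholder _id
const order = db.orders.findOne({ _id: orderId });
const customer = db.customers.findOne({ _id: order.customerId });
// Merge `order` and `customer` in application code.
```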
] | "MongoError: Pipeline over an encrypted collection cannot reference additional collections." | 2020-12-22T11:29:35.960Z | “MongoError: Pipeline over an encrypted collection cannot reference additional collections.” | 2,999 |
|
null | [
"transactions",
"spring-data-odm"
] | [
{
"code": "\norg.springframework.transaction.TransactionSystemException: Could not commit Mongo transaction for session [ClientSessionImpl@61cd23f8 id = {\"id\": {\"$binary\": {\"base64\": \"OLTFzpcTQaqZRecyjnQbSg==\", \"subType\": \"04\"}}}, causallyConsistent = true, txActive = false, txNumber = 1, error = d != java.lang.Boolean].; nested exception is com.mongodb.MongoCommandException: Command failed with error 112 (WriteConflict): 'WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.' on server localhost:27017. The full response is {\"errorLabels\": [\"TransientTransactionError\"], \"operationTime\": {\"$timestamp\": {\"t\": 1608049752, \"i\": 1}}, \"ok\": 0.0, \"errmsg\": \"WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.\", \"code\": 112, \"codeName\": \"WriteConflict\", \"$clusterTime\": {\"clusterTime\": {\"$timestamp\": {\"t\": 1608049757, \"i\": 41045}}, \"signature\": {\"hash\": {\"$binary\": {\"base64\": \"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\", \"subType\": \"00\"}}, \"keyId\": 0}}}\n\tat org.springframework.data.mongodb.MongoTransactionManager.doCommit(MongoTransactionManager.java:203)\n\tat org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:743)\n\tat org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:711)\n\tat org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:654)\n\tat org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:407)\n\tat org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:119)\n\tat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\n\tat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749)\n\tat org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:691)\n\tat com.xxx.RapportBuildServiceImpl$$EnhancerBySpringCGLIB$$8542fe9b.buildSingleReport(<generated>)\n\tat com.xxx.BuildPoller.processQueue(BuildPoller.java:67)\n\tat com.xxx.BuildPoller.lambda$init$0(BuildPoller.java:43)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)\n\tat java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\nCaused by: com.mongodb.MongoCommandException: Command failed with error 112 (WriteConflict): 'WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.' on server localhost:27017. The full response is {\"errorLabels\": [\"TransientTransactionError\"], \"operationTime\": {\"$timestamp\": {\"t\": 1608049752, \"i\": 1}}, \"ok\": 0.0, \"errmsg\": \"WriteConflict error: this operation conflicted with another operation. 
Please retry your operation or multi-document transaction.\", \"code\": 112, \"codeName\": \"WriteConflict\", \"$clusterTime\": {\"clusterTime\": {\"$timestamp\": {\"t\": 1608049757, \"i\": 41045}}, \"signature\": {\"hash\": {\"$binary\": {\"base64\": \"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\", \"subType\": \"00\"}}, \"keyId\": 0}}}\n\tat com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:175)\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:359)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:280)\n\tat com.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:100)\n\tat com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:490)\n\tat com.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:71)\n\tat com.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:255)\n\tat com.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:202)\n\tat com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:118)\n\tat com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:110)\n\tat com.mongodb.internal.operation.CommandOperationHelper$13.call(CommandOperationHelper.java:712)\n\tat com.mongodb.internal.operation.OperationHelper.withReleasableConnection(OperationHelper.java:620)\n\tat com.mongodb.internal.operation.CommandOperationHelper.executeRetryableCommand(CommandOperationHelper.java:705)\n\tat com.mongodb.internal.operation.TransactionOperation.execute(TransactionOperation.java:69)\n\tat com.mongodb.internal.operation.CommitTransactionOperation.execute(CommitTransactionOperation.java:133)\n\tat com.mongodb.internal.operation.CommitTransactionOperation.execute(CommitTransactionOperation.java:54)\n\tat com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:195)\n\tat com.mongodb.client.internal.ClientSessionImpl.commitTransaction(ClientSessionImpl.java:129)\n\tat org.springframework.data.mongodb.MongoTransactionManager$MongoTransactionObject.commitTransaction(MongoTransactionManager.java:469)\n\tat org.springframework.data.mongodb.MongoTransactionManager.doCommit(MongoTransactionManager.java:236)\n\tat org.springframework.data.mongodb.MongoTransactionManager.doCommit(MongoTransactionManager.java:200)\n\t... 17 common frames omitted\n",
"text": "Hello,I have a Spring Boot Application (version 2.4.0 with mongo-driver 4.4.1) with a pretty heavy task which do the following in a transaction :I am running this on a MongoDB 4.2.1 replicaset with one node on my laptop. It works very well.\nThen I upgrade my replicaset to MongoDB 4.4.2 and run same test. It fails with this error :I am running this on my laptop, nothing is using my application in parallel of my test. Is there something chaned in 4.4 which can explain this new behavior ?\nI have activated profiling (level 2) and I can confirm that no other query is running during my test. This is always reproducible.Thanks",
"username": "Olivier_Boudet"
},
{
"code": "",
"text": "Hello,\nit seems to be linked to the number of documents (writes ?) in the transactions.\nThe same test with 11000 deletes & 11000 inserts successfully pass.If I raise the limit to 12000 deletes & 12000 inserts, the WriteConflict error appears.",
"username": "Olivier_Boudet"
},
{
"code": "idstring@RunWith(SpringRunner.class)\n@ActiveProfiles(value = \"test-mongo44\")\n@Import(\n {MongoAutoConfiguration.class, MongoDataAutoConfiguration.class}\n)\n@SpringBootTest()\npublic class Mongo44Test {\n @Autowired\n private MongoTemplate mongoTemplate;\n\n @Autowired\n private MongoTransactionManager mongoTransactionManager;\n\n @Test\n public void test1() {\n\n // init the collection with 20 000 docs with a 1000 chars length string\n Collection<Document> elements = new ArrayList<Document>();\n IntStream.range(0,20000).forEach(i -> {\n Document doc = new Document();\n doc.put(\"id\", \"site\");\n doc.put(\"string\", RandomStringUtils.random(1000, true, true));\n elements.add(doc);\n });\n mongoTemplate.insert(elements, \"mongo44\");\n\n TransactionTemplate transactionTemplate = new TransactionTemplate(mongoTransactionManager);\n transactionTemplate.execute(new TransactionCallbackWithoutResult() {\n\n @Override\n protected void doInTransactionWithoutResult(TransactionStatus status) {\n\n // remove all docs in transaction\n mongoTemplate.remove(Query.query(Criteria.where(\"id\").is(\"site\")), \"mongo44\");\n\n elements.clear();\n // and re-insert 20 000 docs with a 1000 chars length string in the transaction\n IntStream.range(0,20000).forEach(i -> {\n Document doc = new Document();\n doc.put(\"id\", \"site\");\n doc.put(\"string\", RandomStringUtils.random(1000, true, true));\n elements.add(doc);\n });\n mongoTemplate.insert(elements, \"mongo44\");\n };\n });\n\n }\n}\n",
"text": "I wrote a simple unit test which do the following :This test passes with mongo 4.2.1 but fails with 4.4.2.",
"username": "Olivier_Boudet"
},
{
"code": "",
"text": "For those concerned by this issue, I opened a ticket : https://jira.mongodb.org/browse/SERVER-53464",
"username": "Olivier_Boudet"
},
{
"code": "",
"text": "Hi @Olivier_Boudet,Thanks for sharing a unit test and link to the Jira issue for others to follow!Regards,\nStennie",
"username": "Stennie_X"
}
] | Upgrade Mongo 4.2 to Mongo 4.4 : regression on some heavy transaction | 2020-12-15T17:18:04.625Z | Upgrade Mongo 4.2 to Mongo 4.4 : regression on some heavy transaction | 5,663 |
null | [] | [
{
"code": "",
"text": "Hi, right now I ´m using in my project a m-30 cluster. The specs says that this cluster have 3000 max connections. However, I´m using Mongoose driver with a Node.js App, and the poolsize is 5. I want to know if there is a relation between this two concepts.\nThanks in advance!",
"username": "Roberto_Gutierrez"
},
{
"code": "",
"text": "Hi @Roberto_Gutierrez,Welcome to MongoDB community.There is a dependency between the 2 concepts. The Atlas limitations is the total connections the deployment can accept.If your client application has a max pool size of 5 it means to reach 3000 you need 600 client/app instances to exhaust connections. (Little less as each driver holds a monitoring thread to the Atlas members).Hope this helps.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
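For context, the pool size mentioned above is configured on the client side. A rough Node.js/Mongoose sketch is shown below; the URI is a placeholder, and the option name is an assumption that depends on the version (poolSize in Mongoose 5.x with the 3.x driver, maxPoolSize with the 4.x driver):

```javascript
// Each application instance opens at most poolSize connections to the cluster,
// so total Atlas connections are roughly (number of app instances) * poolSize,
// plus a few monitoring connections per driver.
const mongoose = require("mongoose");

mongoose.connect("mongodb+srv://user:pass@cluster0.example.mongodb.net/test", {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  poolSize: 5
});
```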
] | Relationship between Poolsize and Max Connections in Atlas Mongodb | 2020-12-21T14:05:05.814Z | Relationship between Poolsize and Max Connections in Atlas Mongodb | 3,428 |
null | [
"php"
] | [
{
"code": "Error: Class 'MongoDB\\Client' not found in C:\\xampp\\htdocs\\mongo.php on line 6mongodb\n\nMongoDB support => enabled\nMongoDB extension version => 1.9.0\nMongoDB extension stability => stable\nlibbson bundled version => 1.17.2\nlibmongoc bundled version => 1.17.2\nlibmongoc SSL => enabled\nlibmongoc SSL library => OpenSSL\nlibmongoc crypto => enabled\nlibmongoc crypto library => libcrypto\nlibmongoc crypto system profile => disabled\nlibmongoc SASL => enabled\nlibmongoc ICU => disabled\nlibmongoc compression => disabled\nlibmongocrypt bundled version => 1.0.4\nlibmongocrypt crypto => enabled\nlibmongocrypt crypto library => libcrypto\n\nDirective => Local Value => Master Value\nmongodb.debug => no value => no value\nname : mongodb/mongodb\ndescrip. : MongoDB driver library\nkeywords : database, driver, mongodb, persistence\nversions : * 1.8.0\ntype : library\nlicense : Apache License 2.0 (Apache-2.0) (OSI approved) https://spdx.org/licenses/Apache-2.0.html#licenseText\nhomepage : https://jira.mongodb.org/browse/PHPLIB\nsource : [git] https://github.com/mongodb/mongo-php-library.git 953dbc19443aa9314c44b7217a16873347e6840d\ndist : [zip] https://api.github.com/repos/mongodb/mongo-php-library/zipball/953dbc19443aa9314c44b7217a16873347e6840d 953dbc19443aa9314c44b7217a16873347e6840d\npath : C:\\git\\waw\\vendor\\mongodb\\mongodb\nnames : mongodb/mongodb\n\nsupport\nissues : https://github.com/mongodb/mongo-php-library/issues\nsource : https://github.com/mongodb/mongo-php-library/tree/1.8.0\n\nautoload\npsr-4\nMongoDB\\ => src/\nfiles\n\nrequires\next-hash *\next-json *\next-mongodb ^1.8.1\njean85/pretty-package-versions ^1.2\nphp ^7.0 || ^8.0\nsymfony/polyfill-php80 ^1.19\n\nrequires (dev)\nsquizlabs/php_codesniffer ^3.5, <3.5.5\nsymfony/phpunit-bridge 5.x-dev\n",
"text": "Hey everyone, I’ve encountered a bit of a problem following along with the offical docs here.\nError: Class 'MongoDB\\Client' not found in C:\\xampp\\htdocs\\mongo.php on line 6My codeI was unable to get the Mongo extension installed using PECL (tells me that it doesn’t exist) so I had to install it manually. I have confirmed it’s installed correctly:Next I installed the mongodb library via composer and confirmed it’s been added correctly:I’m not sure what the issue is at this point. It’s possible something is wrong with the autoloader, so I dumped it and had composer regenerate it. No dice. Do you guys have any thoughts on what the cause might be?I’m running php 7.4 on Windows 10 via XAMPP.",
"username": "Iain_F"
},
{
"code": "SITE_ROOT",
"text": "where did you define SITE_ROOT ?",
"username": "Jack_Woehr"
},
{
"code": "SITE_ROOT<?php\n\n// Path settings\ndefine(\"SITE_ROOT\", __DIR__);",
"text": "SITE_ROOT is a constant defined in config.php that points to site’s root directory, the file also lives at the root level.",
"username": "Iain_F"
},
{
"code": "",
"text": "Simplify your example.See if that works.",
"username": "Jack_Woehr"
},
{
"code": "C:\\xampp\\htdocs\\index.php:11:\n",
"text": "That appears to work. So that confirms the library is working. So something is probably wrong with the import. I confirmed that SITE_ROOT has the correct value, and the the path to autoload.php is correct.object(MongoDB\\Collection)[7]\nprivate ‘collectionName’ => string ‘sources’ (length=7)\nprivate ‘databaseName’ => string ‘waw’ (length=3)\nprivate ‘manager’ =>\nobject(MongoDB\\Driver\\Manager)[8]\npublic ‘uri’ => string ‘mongodb://127.0.0.1/’ (length=20)\npublic ‘cluster’ =>\narray (size=0)\nempty\nprivate ‘readConcern’ =>\nobject(MongoDB\\Driver\\ReadConcern)[13]\nprivate ‘readPreference’ =>\nobject(MongoDB\\Driver\\ReadPreference)[14]\npublic ‘mode’ => string ‘primary’ (length=7)\nprivate ‘typeMap’ =>\narray (size=3)\n‘array’ => string ‘MongoDB\\Model\\BSONArray’ (length=23)\n‘document’ => string ‘MongoDB\\Model\\BSONDocument’ (length=26)\n‘root’ => string ‘MongoDB\\Model\\BSONDocument’ (length=26)\nprivate ‘writeConcern’ =>\nobject(MongoDB\\Driver\\WriteConcern)[15]",
"username": "Iain_F"
},
{
"code": "",
"text": "I figured it out! So I’ve been using PHP storm to edit my code in C:/git/waw. PHP storm automatically pushes my code to C:/xampp/htdocs. Because of this, it never occurred to me that there might be a difference between those two directories, however it seems that PHP storm doesn’t push files added by composer inside the vendor file automatically. So library was in fact missing on the server, but not in my local repo.TLDR: I’m a moron and should have checked the vendor file on the server, not just the repo.",
"username": "Iain_F"
},
{
"code": "",
"text": "Glad you figured it out, best of luck in your project.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Thanks for the help!",
"username": "Iain_F"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Class 'MongoDB\Client' not found while following offical installation docs | 2020-12-20T20:03:47.732Z | Class ‘MongoDB\Client’ not found while following offical installation docs | 11,938 |
null | [
"swift",
"atlas-device-sync"
] | [
{
"code": "2020-11-11 10:54:45.462465+0100 AppQuote[70213:555903] Sync: Realm sync client ([realm-core-10.1.1], [realm-sync-10.1.1])\n 2020-11-11 10:54:45.462720+0100 AppQuote[70213:555903] Sync: Supported protocol versions: 1-1\n 2020-11-11 10:54:45.462978+0100 AppQuote[70213:555903] Sync: Platform: iOS Darwin 19.6.0 Darwin Kernel Version 19.6.0: Mon Aug 31 22:12:52 PDT 2020; root:xnu-6153.141.2~1/RELEASE_X86_64 x86_64\n 2020-11-11 10:54:45.463215+0100 AppQuote[70213:555903] Sync: Build mode: Release\n 2020-11-11 10:54:45.463446+0100 AppQuote[70213:555903] Sync: Config param: max_open_files = 256\n 2020-11-11 10:54:45.463682+0100 AppQuote[70213:555903] Sync: Config param: one_connection_per_session = 1\n 2020-11-11 10:54:45.463928+0100 AppQuote[70213:555903] Sync: Config param: connect_timeout = 120000 ms\n 2020-11-11 10:54:45.464324+0100 AppQuote[70213:555903] Sync: Config param: connection_linger_time = 30000 ms\n 2020-11-11 10:54:45.464869+0100 AppQuote[70213:555903] Sync: Config param: ping_keepalive_period = 60000 ms\n 2020-11-11 10:54:45.465275+0100 AppQuote[70213:555903] Sync: Config param: pong_keepalive_timeout = 120000 ms\n 2020-11-11 10:54:45.465750+0100 AppQuote[70213:555903] Sync: Config param: fast_reconnect_limit = 60000 ms\n 2020-11-11 10:54:45.466175+0100 AppQuote[70213:555903] Sync: Config param: disable_upload_compaction = 0\n 2020-11-11 10:54:45.466601+0100 AppQuote[70213:555903] Sync: Config param: tcp_no_delay = 0\n 2020-11-11 10:54:45.467075+0100 AppQuote[70213:555903] Sync: Config param: disable_sync_to_disk = 0\n 2020-11-11 10:54:45.470877+0100 AppQuote[70213:555903] Sync: User agent string: 'RealmSync/10.1.1 (iOS Darwin 19.6.0 Darwin Kernel Version 19.6.0: Mon Aug 31 22:12:52 PDT 2020; root:xnu-6153.141.2~1/RELEASE_X86_64 x86_64) RealmObjectiveC/10.1.2 appquote-vyxhe'\n 2020-11-11 10:54:45.473365+0100 AppQuote[70213:556552] Sync: Connection[1]: WebSocket::Websocket()\n 2020-11-11 10:54:45.473870+0100 AppQuote[70213:556552] Sync: Connection[1]: Session[1]: Binding '/Users/xxxxx/Library/Developer/CoreSimulator/Devices/2ADD59A5-C62D-4DD0-8F75-17993EC0C380/data/Containers/Data/Application/F06C7170-7430-4A8C-8D6B-A86FC14E08DE/Documents/mongodb-realm/appquote-vyxhe/5ee8c66c6429e7d1c7cac98b/5ee8c66c6429e7d1c7cac98b%2F%2522customers%2522.realm' to '\"customers\"'\n 2020-11-11 10:54:45.474406+0100 AppQuote[70213:556552] Sync: Connection[1]: Session[1]: Activating\n 2020-11-11 10:54:45.475073+0100 AppQuote[70213:556552] Sync: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, async open = false, client reset = false\n 2020-11-11 10:54:45.475877+0100 AppQuote[70213:556552] Sync: Opening Realm file: /Users/xxxxx/Library/Developer/CoreSimulator/Devices/2ADD59A5-C62D-4DD0-8F75-17993EC0C380/data/Containers/Data/Application/F06C7170-7430-4A8C-8D6B-A86FC14E08DE/Documents/mongodb-realm/appquote-vyxhe/5ee8c66c6429e7d1c7cac98b/5ee8c66c6429e7d1c7cac98b%2F%2522customers%2522.realm\n 2020-11-11 10:54:45.477736+0100 AppQuote[70213:556552] Sync: Connection[1]: Session[1]: client_file_ident = 8, client_file_ident_salt = 7178767744394399870\n 2020-11-11 10:54:45.478968+0100 AppQuote[70213:556552] Sync: Connection[1]: Session[1]: Progress handler called, downloaded = 961, downloadable(total) = 961, uploaded = 904, uploadable = 904, reliable_download_progress = 0, snapshot version = 5\n 2020-11-11 10:54:45.479489+0100 AppQuote[70213:556552] Sync: Connection[1]: Resolving 'ws.realm.mongodb.com:443'\n 2020-11-11 10:54:45.483831+0100 AppQuote[70213:556552] Sync: 
Connection[1]: Connecting to endpoint '52.49.130.120:443' (1/1)\n 2020-11-11 10:54:45.512983+0100 AppQuote[70213:556552] Sync: Connection[1]: Connected to endpoint '52.49.130.120:443' (from '192.168.0.11:52757')\n 2020-11-11 10:54:45.547766+0100 AppQuote[70213:555903] [CustomerTableViewController] Following changes occured on items:\n initial(Results<Customers> <0x618000049880> (\n\n ))\n 2020-11-11 10:54:45.658334+0100 AppQuote[70213:556552] Sync: Connection[1]: WebSocket::initiate_client_handshake()\n 2020-11-11 10:54:46.161267+0100 AppQuote[70213:556552] Sync: Connection[1]: WebSocket::handle_http_response_received()\n 2020-11-11 10:54:46.161787+0100 AppQuote[70213:556552] Sync: Connection[1]: Negotiated protocol version: 1\n 2020-11-11 10:54:46.162345+0100 AppQuote[70213:556552] Sync: Connection[1]: Will emit a ping in 41198 milliseconds\n 2020-11-11 10:54:46.163012+0100 AppQuote[70213:556552] Sync: Connection[1]: Session[1]: Sending: BIND(path='\"customers\"', signed_user_token_size=469, need_client_file_ident=0, is_subserver=0)\n 2020-11-11 10:54:46.163615+0100 AppQuote[70213:556552] Sync: Connection[1]: Session[1]: Sending: IDENT(client_file_ident=8, client_file_ident_salt=7178767744394399870, scan_server_version=7, scan_client_version=2, latest_server_version=7, latest_server_version_salt=6097420258415283507)\n 2020-11-11 10:54:46.164094+0100 AppQuote[70213:556552] Sync: Connection[1]: Session[1]: Sending: MARK(request_ident=1)\n 2020-11-11 10:54:46.980175+0100 AppQuote[70213:556552] Sync: Connection[1]: Session[1]: Received: MARK(request_ident=1)\n 2020-11-11 10:54:46.980696+0100 AppQuote[70213:556552] Sync: Connection[1]: Session[1]: Sending: UPLOAD(progress_client_version=5, progress_server_version=7, locked_server_version=7, num_changesets=0)",
"text": "Hi there!I’m using Realm Sync to develop an iOS app.\nI recently added a new database with some collections and since I’m unable to retrieve any objects of any databases.\nI tried to erase the new database and collections I added but still not working.\nI also tried to restart Sync.According to the log of my app, the connection is correctly established, the realm is well opened but it appears that there is no object on my collections. Here is a example where I’m opening a Realm to access a collection Customers to retrieve objects with the partition value “customers”:",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "Hi @Julien_Chouvet,\nCould you please share the model of Customers object?",
"username": "Pavel_Yakimenko"
},
{
"code": "{\n \"title\": \"Customers\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"_parentId\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"_parentId\": {\n \"bsonType\": \"string\"\n },\n \"lastName\": {\n \"bsonType\": \"string\"\n },\n \"firstName\": {\n \"bsonType\": \"string\"\n },\n \"phoneNumber\": {\n \"bsonType\": \"string\"\n },\n \"email\": {\n \"bsonType\": \"string\"\n },\n \"address\": {\n \"bsonType\": \"string\"\n },\n \"notes\": {\n \"bsonType\": \"string\"\n }\n }\n}",
"text": "Hi @Pavel_YakimenkoHere it is :",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "Can we see the Swift Object model? Did you check the error log in the Realm Console? Any errors?Are you using Developer mode?",
"username": "Jay"
},
{
"code": "",
"text": "The fact is that before I tried to add another db and other collections everything was working fine and I was able to get my customers objects as well as all the other objects of all my other collections. So I don’t think it come from my swift code.class Customers: Object {@objc dynamic var _id: ObjectId = ObjectId.generate()@objc dynamic var _parentId: String = “”@objc dynamic var address: String? = nil@objc dynamic var email: String? = nil@objc dynamic var firstName: String? = nil@objc dynamic var invoices: String? = nil@objc dynamic var lastName: String? = nil@objc dynamic var notes: String? = nil@objc dynamic var phoneNumber: String? = nil@objc dynamic var quotes: String? = niloverride static func primaryKey() -> String? {return “_id”}}I checked the Realm console and there is no error. And I’m not using the developer mode.",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "I tried to create a new Realm app (within the same cluster) and it is working, I’m able to get my customers objects.\nAny ideas why it is not working in the initial Realm app?",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "Maybe … sounds a little like my problem … if you’re making schema changes check they are reflected correctly in the Realm App… i think mine got out of whack and yet i couldn’t always find anything in the logs.",
"username": "Damian_Raffell"
},
{
"code": "",
"text": "Yes I saw your post! Is it working now for you?\nWhat do you mean by :check they are reflected correctly in the Realm AppHow can I check that?",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "I’ve just tried again with my first Realm app (i.e. initial configuration) and now it’s working again…",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "In Data Access / Schema … glad you’re up and running again .",
"username": "Damian_Raffell"
}
] | RealmSwift - Unable to get data after creating new collections | 2020-11-11T10:00:36.204Z | RealmSwift - Unable to get data after creating new collections | 3,010 |
null | [] | [
{
"code": "",
"text": "At the end of the lab we are asked \" Did the logical size of the dataset and the number of operations increase in your cluster view similar to how it did in this image?\"\nThe logical size of the dataset increased after I loaded the sample data; however, the number of operations stayed at 0. Is that normal, or did I make a mistake in the lab?",
"username": "Sam_Reamer"
},
{
"code": "",
"text": "Hi Sam,It is not normal. The number of operations should be increased. Please check the IP Address and modify as 0.0.0.0.0",
"username": "chandrahas_k"
},
{
"code": "",
"text": "Please share more artifacts if the issue persists",
"username": "Sudeep_Banerjee"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Create and Deploy an Atlas Cluster (Number of operations does not change) | 2020-12-16T16:04:06.694Z | Create and Deploy an Atlas Cluster (Number of operations does not change) | 2,335 |
null | [
"cxx"
] | [
{
"code": "Program received signal SIGSEGV, Segmentation fault.\n0x0000555555d6d070 in bson_free ()\n(gdb) where\n#0 0x0000555555d6d070 in bson_free ()\n#1 0x00007ffff3f21b63 in bson_string_free (string=0x555555e4d320, free_segment=free_segment@entry=false)\n at /home/lukasz/qliqsoft/mongo-c-driver/src/libbson/src/bson/bson-string.c:101\n#2 0x00007ffff3f9086b in _set_platform_string (handshake=0x7ffff3ff7780 <gMongocHandshake>)\n at /home/lukasz/qliqsoft/mongo-c-driver/src/libmongoc/src/mongoc/mongoc-handshake.c:378\n#3 _mongoc_handshake_init () at /home/lukasz/qliqsoft/mongo-c-driver/src/libmongoc/src/mongoc/mongoc-handshake.c:442\n#4 0x00007ffff3f8ae33 in _mongoc_do_init () at /home/lukasz/qliqsoft/mongo-c-driver/src/libmongoc/src/mongoc/mongoc-init.c:140\n#5 0x00007ffff456547f in __pthread_once_slow (once_control=0x7ffff3ff7734 <once>, init_routine=0x7ffff3f8adf0 <_mongoc_do_init>)\n at pthread_once.c:116\n#6 0x00007ffff6c6cb7b in std::unique_ptr<mongocxx::v_noabi::instance::impl, std::default_delete<mongocxx::v_noabi::instance::impl> > core::v1::make_unique<mongocxx::v_noabi::instance::impl, void, std::unique_ptr<mongocxx::v_noabi::logger, std::default_delete<mongocxx::v_noabi::logger> > >(std::unique_ptr<mongocxx::v_noabi::logger, std::default_delete<mongocxx::v_noabi::logger> >&&) () from /lib/x86_64-linux-gnu/libmongocxx.so._noabi\n#7 0x00007ffff6c6c8fd in mongocxx::v_noabi::instance::instance(std::unique_ptr<mongocxx::v_noabi::logger, std::default_delete<mongocxx::v_noabi::logger> >) () from /lib/x86_64-linux-gnu/libmongocxx.so._noabi\n#8 0x00007ffff6c6c9d9 in mongocxx::v_noabi::instance::instance() () from /lib/x86_64-linux-gnu/libmongocxx.so._noabi\n#9 0x000055555578b47f in main (argc=1, argv=0x7fffffffde88) at ../qliqdesktop/src/qliqstor/service/main.cpp:37\n(gdb) quit\nA debugging session is active.\n",
"text": "Hello everyone,I work on a Linux application based on C++ & Qt and I try to properly build mongocxx and link to it. Everything works smoothly, but the app crashes on launch. Here is the stack trace:So creating an mongocxx::instance variable is the very first thing that application does, but I get segmentation fault. I think that it is something related to mongocxx dependencies, so how I actually build the library?I needed mongoc, so I cloned the official repository, built it, and installed using official guideline. I used version from r1.17 branch. I could install from Ubuntu repository, but that version is 1.16 and the newest mongocxx requires at least 1.17.So that worked and I cloned mongocxx repository from releases/stable branch, built it and installed. Everything using commands from official guideline.App builds without problems, ldd sees mongocxx and bsoncxx, but I get that segmentation fault. Do you have any ideas how to solve this issue?",
"username": "Lukasz_Kosinski"
},
{
"code": "mongocxx::instancemongocxx::instancemongocxx::instance",
"text": "Hi @Lukasz_Kosinski,Thank you for including the stack trace. From that, I suspect the mongocxx::instance is going out of scope and being destroyed at some point during the lifetime of your application.The mongocxx::instance should be created exactly once for the lifetime of the application (Tutorial for mongocxx)If that is not the issue, can you include the relevant snippet from your application code where the mongocxx::instance is being created?",
"username": "Kevin_Albertson"
},
{
"code": "#include <mongocxx/instance.hpp>\n\nint main(int argc, char *argv[])\n{\n mongocxx::instance mongoInstace;\n\n return 1;\n}\n",
"text": "Thanks @Kevin_Albertson. I think that it is not the case as I’m pretty sure that I create only one instance.The effect is the same if I run the following code:Just wanted to ensure me ",
"username": "Lukasz_Kosinski"
},
{
"code": "ldd ./a.out",
"text": "@Lukasz_Kosinski the example code you provided to does not result in a SIGSEV on my system. Can you provide the CMake commands you used for configuring/installing the C and C++ drivers? The CMake configuration commands along with their output would be most helpful, along with the output of ldd ./a.out from the compiled example?",
"username": "Roberto_Sanchez"
},
{
"code": "",
"text": "Thanks, @Roberto_Sanchez. I had a fresh look today and it looks like I had another, old driver in my include path. After removing that older driver it worked like charm.\nThanks for your proposals.The thread can be closed.",
"username": "Lukasz_Kosinski"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongocxx segmentation fault issue | 2020-12-16T23:01:32.293Z | Mongocxx segmentation fault issue | 3,556 |
null | [
"installation"
] | [
{
"code": "",
"text": "I’m facing a problem which doesn’t allow me to continue my work with mongodb. After installing mongodb I try running mongod but it exits by itself so I’m not able to work with it. Apparently I’m not alone (you can see the output in Stackoverflow)https://stackoverflow.com/questions/64770949/unable-to-run-mongodpls help!! thank you very much!",
"username": "Alexandre_Goebbels"
},
{
"code": "",
"text": "Please show screenshot or few lines from your logfile\nOn which os you ran mongod?Windows or some other\nThe example you gave from stack clearly shows mongod is terminated as the address is in use which means you cannot run two mongods on same portIn your case it could be different\nSo please give more details",
"username": "Ramachandra_Tummala"
}
] | Unable to run mongod | 2020-12-21T04:47:08.937Z | Unable to run mongod | 1,679 |
null | [
"aggregation",
"java"
] | [
{
"code": "public Long datasetSize(String setId) {\n return mongoClient\n .getDatabase(databaseName)\n .getCollection(dataMapName)\n .aggregate(\n Arrays.asList(\n match(Filters.eq(\"datasetId\", datasetId)),\n count()\n )\n )\n .first()\n .getInteger(\"count\")\n .longValue();\n}\n <dependency>\n <groupId>org.mongodb</groupId>\n <artifactId>mongo-java-driver</artifactId>\n <version>3.12.7</version>\n </dependency>\n",
"text": "Hi Folks,I am counting documents returned by a match.This works if the match clause matches one or more documents. But if the match clause matches no documents, the result of first() is null and the invocation of getInteger() fails with a NullPointerException.I was expecting that the count() aggregate would yield an Integer.ZERO result if the match was empty.I have tested this with:Any suggestions?If the null result from first() is indeed expected, is there a better way to express a default result than wrapping the whole pipeline in try / catch and returning 0 in case of NullPointerException? My concern is that an NPE might occur for at various places and for various reasons, and only the first() == null case should give rise to a 0 return.Kind regards, Robin.",
"username": "Robin_Roos"
},
{
"code": "countcount",
"text": "Essentially, the point is not to try to interpret count in the pipeline, instead, return a document containing count in a field and test it for null in plain Java outside the pipeline.",
"username": "Jack_Woehr"
}
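To see the driver-agnostic behaviour behind this, the same pipeline can be run in the mongo shell: a $count stage simply emits no document when its input is empty, which is why first() returns null. The collection and field names below follow the Java snippet above and are assumptions:

```javascript
// With no matching documents the cursor is empty; there is no { count: 0 } document to read.
db.dataMap.aggregate([
  { $match: { datasetId: "does-not-exist" } },
  { $count: "count" }
]).toArray()
// => []
```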
] | Count Aggregate with empty Match results in NPE | 2020-12-16T16:11:58.913Z | Count Aggregate with empty Match results in NPE | 2,793 |
null | [
"php"
] | [
{
"code": "$client = new MongoDB\\Client(\n 'mongodb+srv://USER:PASSWORD@CLUSTER/test?retryWrites=true&w=majority'\n);\n$db = $client->test;\n",
"text": "Hello,I have a PHP v7.4 server with mongo enabled.I just tried the suggested connection and I got a failure.Could do with some help working out what the issue is.I have seen several different versions of mongo usage in php which makes it a little confusing.Also, I couldn’t find a clear link from php 7.4 to mongo driver version. Because I saw a list of other version numbers of the mongo php version. E.g https://docs.mongodb.com/drivers/php under compatibility.Thanks in advance",
"username": "Russell_Smithers"
},
{
"code": "+srv",
"text": "Possibly you could get the help you desire if you posted the error you are encountering?PS I’d drop the +srv from the URI.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Also, I couldn’t find a clear link from php 7.4 to mongo driver version. Because I saw a list of other version numbers of the mongo php version. E.g https://docs.mongodb.com/drivers/php under compatibility.Use the current version. Follow the installation instructions. I’m on PHP 7.4 with the driver + library. Your example should work.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I was coming here to post the error, remembered this morning that I hadn’t shared that important bit of information.Fatal error : Uncaught Error: Class ‘MongoDB\\Client’ not found",
"username": "Russell_Smithers"
},
{
"code": "",
"text": "Use the current version. Follow the installation instructions. I’m on PHP 7.4 with the driver + library. Your example should work.Thanks Jack,This is the version I most recently tried, it’s the one which gives the error I just shared.",
"username": "Russell_Smithers"
},
{
"code": "",
"text": "Just to clarify, the error refers to line 21 in my script.Line 21 is\n$client = new MongoDB\\Client(",
"username": "Russell_Smithers"
},
{
"code": "php.iniextension=mongodb.so\nrequire_once 'vendor/autoload.php';",
"text": "Okay, are you sure you followed the instructions on installation, especiallyFinally, add the following line to your php.ini file:And does your PHP file start with\nrequire_once '/whatever/path/vendor/autoload.php';",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Thanks Josh,What does the autoload.php containe?is vendor meant to be mongo or something? in the requireonce? or just vendor. What sort if area is the whatever/path location meant to be?I will check the php.ini file.Many thanks\nRussell",
"username": "Russell_Smithers"
},
{
"code": "autoload.phpautoload.phpvendornpmnode_modulesmongodb.sophp.ini./vendorrequire_once ./vendor/autoload.php./vendor",
"text": "autoload.php is a conventional name for a file which loads a bunch of other php files.PHP packages typically come with an autoload.php file.Composer, the tool that installs the MongoDB PHP library on top of PHP’s own MongoDB driver, installs all the packages you install via Composer in a directory called vendor. It’s just like npm in Javascript where everything is installed in a project subdirectory called node_modules.So typically for PHP + MongoDB you:Then, in your program code, require_once ./vendor/autoload.php which loads all the modules Composer installed in your ./vendor subdirectory of your project.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Thanks Jack, much appreciated.",
"username": "Russell_Smithers"
},
{
"code": "",
"text": "Holler if you get stuck! ",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "7 posts were merged into an existing topic: Class ‘MongoDB\\Client’ not found while following offical installation docs",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Need help with Mongo and PHP 7.4 | 2020-12-02T22:04:05.533Z | Need help with Mongo and PHP 7.4 | 11,013 |
null | [
"queries"
] | [
{
"code": "query test {\n character(charID=\"a\", enemy:\"c\") {\n charID\n enemy {\n name\n weapons {\n type\n power\n }\n }\n }\n}\n{\n \"data\": {\n \"charEnemy\": [\n {\n \"charID\": \"a\",\n \"enemy\": [\n {\n \"name\": \"q\",\n \"weapons\": [\n { \"type\": \"metal\", \"power\": \"5\" }\n ]\n },\n {\n \"name\": \"X\",\n \"weapons\": [\n { \"type\": \"metal\", \"power\": \"6\" }\n ]\n },\n {\n \"name\": \"c\",\n \"weapons\": [\n { \"type\": \"metal\", \"power\": \"7\" }\n ]\n },\n ]\n }\n ]\n}\nenemy[1]return charEnemy.find({charID: args.charID, enemy : {$elemMatch: {name : \"X\"}}});",
"text": "Hi, so I have the following query set up (with data):Suppose my data takes this shape (when the query is run):How should I go about querying a specific CharID with a specific enemy name? If I query charID=“a” and enemy name “X” for example, I only want to be able to access enemy[1], or rather, everything within the object with that specific name.I currently have this but it returns the entire charEnemy object which has both conditions met. It doesn’t ONLY return the charID, with that specific enemy object, but rather returns that charID with ALL the enemy objects associated with it: return charEnemy.find({charID: args.charID, enemy : {$elemMatch: {name : \"X\"}}}); How do I make it such that I get that entire charEnemy object with that charID and enemy name, but not the other enemy objects within the enemy array?",
"username": "Ajay_Pillay"
},
{
"code": "charEnemy.find({ 'data.charEnemy.enemy.name': 'X' }, { 'data.charEnemy.enemy.name': 1, 'data.charEnemy.enemy.weapons': 1 })",
"text": "Hi Ajay_Pillay, I think you are looking for the dot notation to access embedded documents. To return only specific fields, you have to specify all of them inside your projection. In your case, this query should work:charEnemy.find({ 'data.charEnemy.enemy.name': 'X' }, { 'data.charEnemy.enemy.name': 1, 'data.charEnemy.enemy.weapons': 1 })",
"username": "Mounir_Farhat"
},
{
"code": "{ character, { enemy1, enemy2, enemy3, enemy4 } }{character, {enemy1, enemy2 } }(character, enemy1, enemy2){ charA, { enemyA, {nameA, weaponA}}, { charB, { enemyB, {nameB, weaponA}, { enemyC, {nameC, weaponC}}charEnemy.find({ 'data.charEnemy.enemy.weapon': 'weaponA' }, { 'data.charEnemy.enemy.name': 1, 'data.charEnemy.enemy.weapons': 1 }){ {nameA,weaponA} , {{ nameB,weaponA },{nameC,weaponC}} }",
"text": "I need to match two things - charID and an enemy name. Also take note this isn’t my actual data - the actual data is named differently and has many more dimensions but this is just a simplified form of it (with different names too - so it may seem pretty dumb).Currently my data is stored in this form (if I may use set builder notation for simplicity) { character, { enemy1, enemy2, enemy3, enemy4 } } however when I query, I would like to return {character, {enemy1, enemy2 } } for example, if I request for (character, enemy1, enemy2). So here you can see that my requests take a different form, a character name as the first input (mandatory), and an optional number of enemy names. But the thing is with the matching currently set up, when I query (character, enemy1), it returns the entire character object because enemy2, enemy3, enemy4 etc are all associated with it.When I tried out what you suggested - I was able to obtain specific fields. It only returns the enemy.name and enemy.weapons field, which is great! However, if I have two objects like { charA, { enemyA, {nameA, weaponA}}, { charB, { enemyB, {nameB, weaponA}, { enemyC, {nameC, weaponC}}, and I run the query as suggested, charEnemy.find({ 'data.charEnemy.enemy.weapon': 'weaponA' }, { 'data.charEnemy.enemy.name': 1, 'data.charEnemy.enemy.weapons': 1 }), it returns { {nameA,weaponA} , {{ nameB,weaponA },{nameC,weaponC}} } which is not correct because I only want the enemies with weaponA, but because it was found in charB, I get the entire of charB’s “enemy” object/array, when I only want that specific entry which had the match.Hopefully this makes sense. It’s a little tough to explain it precisely but if it’s still unclear I’d be happy to draw out a diagram of some sort to help visualize this.",
"username": "Ajay_Pillay"
},
{
"code": "charEnemy.find({ 'data.charEnemy.charID': 'charA', 'data.charEnemy.enemy.weapons': 'weaponA' }, { 'data.charEnemy.enemy.name': 1, 'data.charEnemy.enemy.weapons': 1 })",
"text": "If you want to find a specific enemy of a specific character, you can add a field ‘charID’ inside your query:charEnemy.find({ 'data.charEnemy.charID': 'charA', 'data.charEnemy.enemy.weapons': 'weaponA' }, { 'data.charEnemy.enemy.name': 1, 'data.charEnemy.enemy.weapons': 1 })This query should only return the enemy with ‘weaponA’ from the character named ‘charA’.",
"username": "Mounir_Farhat"
},
{
"code": "db.getCollection('test').find({ 'charID': 'a', 'enemy.name': 'enemyA' }, { 'enemy.name': 1, 'enemy.weapon': 1 })'enemy.type':1{\n \"_id\" : ObjectId(\"5fdf16c956d6a07c2880d8a3\"),\n \"charID\" : \"a\",\n \"enemy\" : [ \n {\n \"name\" : \"enemyA\",\n \"weapon\" : \"weaponA\",\n \"type\" : \"metal\"\n }\n ]\n}\n",
"text": "Nope I think the relation that you’re working upon is slightly different. I will include some images here so that it’s illustrated clearer. This is in Robo3T.These are my database entries:\n\n\nWhen I run this command db.getCollection('test').find({ 'charID': 'a', 'enemy.name': 'enemyA' }, { 'enemy.name': 1, 'enemy.weapon': 1 }), the result is:\nBut this is not what I want returned, what I want to get should look like this (I can include the type by extending my query to include 'enemy.type':1 as well but that’s just a little addition that doesn’t change the goal):So I want to return all the fields within that enemy object, but not any other enemy objects.",
"username": "Ajay_Pillay"
},
{
"code": "db.test.aggregate([\n\t{\n\t\t$unwind: '$enemy'\n\t},\n\t{\n\t\t$match: {\n\t\t\tcharID: 'a',\n\t\t\t'enemy.name': 'enemyA'\n\t\t}\n\t}\n])\ndb.test.aggregate([\n\t{\n\t\t$match: {\n\t\t\tcharID: 'a'\t\n\t\t}\n\t},\n\t{\n\t\t$unwind: '$enemy'\n\t},\n\t{\n\t\t$match: {\n\t\t\t'enemy.name': 'enemyA'\n\t\t}\n\t}\n])\n",
"text": "One way to achieve this result is to use an $unwind aggregation pipeline, which will return a document for each entry in the array:I think there are many ways to obtain the same result, maybe playing around with the aggregation pipeline operators $indexOfArray and $arrayElemAt.If your database is pretty big, consider adding another $match stage to your pipeline, preselecting the ‘charID’ your are interested in:This way, your will skip the creation of multiple documents for each ‘charID’. I’m sure there is a neater way to pur things together, but it should do the work.",
"username": "Mounir_Farhat"
},
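In the spirit of the alternatives mentioned above, a $filter inside a $project can also keep only the matching array elements, without duplicating documents the way $unwind does. This is only a sketch, reusing the same collection and field names as the thread:

```javascript
db.test.aggregate([
  // Narrow down to the character first.
  { $match: { charID: "a", "enemy.name": "enemyA" } },
  {
    $project: {
      charID: 1,
      // Keep only the array elements whose name matches.
      enemy: {
        $filter: {
          input: "$enemy",
          as: "e",
          cond: { $eq: ["$$e.name", "enemyA"] }
        }
      }
    }
  }
])
```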
{
"code": "",
"text": "Thank you, this was what I needed. I will definitely look into what the most optimal solution is but I think aggregating works very well so far.",
"username": "Ajay_Pillay"
},
{
"code": "",
"text": "You are welcome, I’m glad it helps.",
"username": "Mounir_Farhat"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Finding an item in a nested object of objects | 2020-12-18T18:39:25.504Z | Finding an item in a nested object of objects | 97,549 |
null | [
"devops"
] | [
{
"code": "",
"text": "I want to create automatic schedule for run compact command because my database size is large.Which I want to know can I write a trigger or call an api to run a compact command in my cluster ?Thank you.",
"username": "Peeraphat_Jearananta"
},
{
"code": "",
"text": "Hi @Peeraphat_Jearananta,Welcome to MongoDB community.Compact command is not a supported command via API or realm functions.You should use an application driver or a shell to trigger it externally.Best\nPavel",
"username": "Pavel_Duchovny"
},
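For reference, the command itself is issued per collection with runCommand, so an external script or scheduler (outside Atlas triggers and Realm functions) could run something like the following; the collection name is a placeholder:

```javascript
// Run from an external mongo shell or driver connection, one collection at a time.
db.runCommand({ compact: "myCollection" })
```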
{
"code": "",
"text": "thank you pavelBest regards\nPeeraphat j.",
"username": "Peeraphat_Jearananta"
}
] | Can I create schedule trigger or call API for run compact command automatic? | 2020-12-19T18:55:32.735Z | Can I create schedule trigger or call API for run compact command automatic? | 2,309 |
null | [
"dot-net"
] | [
{
"code": " _context.Aggregate().AppendStage<dynamic>(pipeline1).AppendStage(pipeline2).ToList();\n [\"data\" : [ { \"Account\":{account_obj}, \"Contact\": {contact_obj} },\n { \"Account\":{account_obj_2}, \"Contact\": {contact_obj_2} }],\n \"count\" : []\n ]\n_context.Aggregate<dynamic>(pipelines).ToList();\nThe value returned turns out to be different like below:\n [\"data\" : [ { \"Account\":{account_obj}},\n { \"Account\":{account_obj_2} }],\n \"count\" : [],\n \"Contact\": null\n ]\n",
"text": "Hi,I am trying to do pagination with c# Mongodb driver. I have a collection called “Account”, from which I am doing lookup to “Contact”. I am having my pipeline stages. when I load the pipeline with appendstage one by one (like below)it returns the value like this (which is Ok as I want it):but when I load all pipelines at once like below:like you see the Contact lookup value is coming out of data.Why is that ? What am I missing here? Any help would be much appreciated. Thanks.",
"username": "Ruban_Joshva"
},
{
"code": "",
"text": "Hi, I have fixed it. Sorry it was my mistake. I was loading the lookup to Contact after facet, now I loaded the lookup stage before facet stage into the pipeline, and it works as expected.",
"username": "Ruban_Joshva"
},
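The fix described above, moving the $lookup ahead of the $facet, corresponds to a pipeline ordered roughly like this; it is shown in shell/JSON form with illustrative field names (contactId) rather than the poster's actual C# pipeline:

```javascript
db.Account.aggregate([
  // 1. Join first, so each document entering the facet already carries its Contact.
  { $lookup: { from: "Contact", localField: "contactId", foreignField: "_id", as: "Contact" } },
  // 2. Then paginate and count inside $facet.
  {
    $facet: {
      data: [{ $skip: 0 }, { $limit: 10 }],
      count: [{ $count: "total" }]
    }
  }
])
```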
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | C# aggregate, pagination lookup data structure changes with different pipeline loading approach | 2020-12-18T13:15:06.624Z | C# aggregate, pagination lookup data structure changes with different pipeline loading approach | 3,386 |
null | [
"installation"
] | [
{
"code": "",
"text": "Hello, I’ve been trying run mongo through brew for a few days now, but I keep getting these error messages:❯ mongo\nMongoDB shell version v4.4.1\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1every time I try running the brew services list, my mongodb-community status is “error”. Not sure how this can be fixed, please help!",
"username": "Athena_Su"
},
{
"code": "",
"text": "Is your mongod up and running on port 27017?Is the (greater than symbol>) your os prompt or are you already connected to mongo prompt and trying mongo command again?",
"username": "Ramachandra_Tummala"
}
] | SocketException: Error connecting to 127.0.0.1:27017 | 2020-12-20T00:03:12.012Z | SocketException: Error connecting to 127.0.0.1:27017 | 9,833 |