image_url: string (length 113-131)
tags: list
discussion: list
title: string (length 8-254)
created_at: string (length 24)
fancy_title: string (length 8-396)
views: int64 (73-422k)
https://www.mongodb.com/…e_2_1024x512.png
[ "replication" ]
[ { "code": "", "text": "Hello,I am facing some issues with my installation on MongoDB Atlas. I have an M10 cluster with mongo 4.4.This cluster needs to be configured under minimum requirements, with an M10 and 10GB of space. Something happened and the cluster storage autoscaled to 14GB (i forgot to deactivate the cluster storage autoscaling).The culprit of the storage consumption was the oplog.rs collection, on the local database. I successfully reduced the size of the oplog collection from the Atlas console (Cluster → Configuration → Additional Settings → More configuration options → set oplog size) and this works, but i’m not able to reclaim the unused space.The documentation says to perform a compact operation on the oplog collection:This works on the two secondary replicas i have but the documentation specifically says that this command must not be executed on the primary replica set member. It advices to execute rs.stepDown to choose another primary but this operation is not supported on Mongodb Atlas: https://www.mongodb.com/docs/atlas/unsupported-commands/#replication.Is there a way to do this on Atlas? I just need to promote a new primary member so i can run compact on the last member of the replicaset.Thank you in advance!", "username": "Monstruo_Delasgalletas" }, { "code": "", "text": "Hi @Monstruo_Delasgalletas - Welcome to the community This works on the two secondary replicas i have but the documentation specifically says that this command must not be executed on the primary replica set member. It advices to execute rs.stepDown to choose another primary but this operation is not supported on Mongodb AtlasIs there a way to do this on Atlas? I just need to promote a new primary member so i can run compact on the last member of the replicaset.Hope this helps.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_TranThank you! It worked perfectly. The Test failover process can take some time but it works and changes the primary.Compacting the primary reduces the size of the oplog as explected. One thing i noticed (if someone finds this thread in the future) is that the size that Atlas counts (“Disk Usage” on the cluster panel) for the storage autoscaler is the primary size. Once changed the primary and compacted i was able to reduce the disk size.Thanks for your help!Regards", "username": "Monstruo_Delasgalletas" }, { "code": "", "text": "Thank you! It worked perfectly. The Test failover process can take some time but it works and changes the primary.No problem and happy to hear that got it sorted ", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Atlas compact oplog.rs
2022-08-01T15:15:49.368Z
MongoDB Atlas compact oplog.rs
2,317
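Editor's note for readers landing on this thread: the resolution described above combines Atlas's Test Failover feature (to move the primary) with a compact run directly against the member that just stepped down. Below is a minimal PyMongo sketch of the compact step only; the host name, credentials, and the assumption that the database user has a role permitting compact on the local database are placeholders, not values from the thread.

```python
# Sketch only: connect directly to the member that is now a secondary
# and ask WiredTiger to release unused oplog space. Host and credentials
# are placeholders; on Atlas the database user needs a (custom) role
# that grants the compact action.
from pymongo import MongoClient

# directConnection=True pins the client to this one member instead of
# following the replica set primary.
client = MongoClient(
    "mongodb://dbadmin:<password>@cluster0-shard-00-01.example.mongodb.net:27017/"
    "?tls=true&authSource=admin&directConnection=true"
)

result = client["local"].command({"compact": "oplog.rs"})
print(result)  # MongoDB 4.4+ reports e.g. {'bytesFreed': ..., 'ok': 1.0}
```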
null
[ "queries", "python" ]
[ { "code": "", "text": "Hello, I’m kind of a novice with MongoDB. I started using 2 years ago and now I’m having some problems with it.State of Art\nI’m using Flask and Flask-Pymongo to connect to my database which runs on a different machine and the connection between the 2 works thanks to a VPN tunnel.The problem\nQueries by themselves work fine, they usually take a bunch of milliseconds (got this result through some logging) but when I start iterating over the cursor given back from the find( ) method I start noticing a serious slowdown. I iterate over the cursor using a simple for loop but the time to start with the first iteration is endless, it takes from 20-25 seconds up while for the successive iterations it works normally.Some testingSystem InfosServer with Flask and Flask-Pymongo (hosted through DigitalOcean):Server with MongoDB (local machine):", "username": "Umberto_Casaburi" }, { "code": "2022-07-31T18:47:53.970+0200 I NETWORK [conn10699] received client metadata from x.x.x.x:yyyyy conn10699: { driver: { name: \"PyMongo\", version: \"3.11.0\" }, os: { type: \"Linux\", name: \"Linux\", architecture: \"x86_64\", version: \"5.4.0-99-generic\" }, platform: \"CPython 3.8.12.final.0\" }\n2022-08-03T18:03:40.995+0200 D COMMAND [conn10699] run command myDB.$cmd { find: \"myColl\", filter: { ... }, lsid: { id: UUID(\"043da910-3a33-49a3-a749-5ba84a20cf53\") }, $db: \"myDB\", $readPreference: { mode: \"primaryPreferred\" } }\n2022-08-03T18:03:40.996+0200 D STORAGE [conn10699] NamespaceUUIDCache: registered namespace myDB.myColl with UUID 4b4e693d-e72b-4dab-a4de-0ca15618a93c\n2022-08-03T18:03:40.996+0200 D QUERY [conn10699] Tagging the match expression according to cache data: \n2022-08-03T18:03:40.996+0200 D QUERY [conn10699] Index 0: _id_\n2022-08-03T18:03:40.996+0200 D QUERY [conn10699] Index 1: timestamp_1\n2022-08-03T18:03:40.996+0200 D QUERY [conn10699] Index 2: index2_1\n2022-08-03T18:03:40.996+0200 D QUERY [conn10699] Tagged tree:\n2022-08-03T18:03:40.996+0200 D QUERY [conn10699] Planner: solution constructed from the cache:\n2022-08-03T18:03:40.996+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184330476\n2022-08-03T18:03:41.041+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184330476\n2022-08-03T18:03:41.042+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184331391\n2022-08-03T18:03:41.044+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184331391\n2022-08-03T18:03:41.044+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184331423\n2022-08-03T18:03:41.079+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184331423\n2022-08-03T18:03:41.080+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184332035\n2022-08-03T18:03:41.125+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184332035\n2022-08-03T18:03:41.125+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184332608\n2022-08-03T18:03:41.157+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184332608\n2022-08-03T18:03:41.157+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184333162\n2022-08-03T18:03:41.231+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184333162\n2022-08-03T18:03:41.231+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184334386\n2022-08-03T18:03:41.399+0200 D STORAGE [conn10699] Slow WT transaction. 
Lifetime of SnapshotId 31184334386 was 168ms\n2022-08-03T18:03:41.399+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184334386\n2022-08-03T18:03:41.399+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184337302\n2022-08-03T18:03:41.429+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184337302\n2022-08-03T18:03:41.429+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184337920\n2022-08-03T18:03:41.561+0200 D STORAGE [conn10699] Slow WT transaction. Lifetime of SnapshotId 31184337920 was 131ms\n2022-08-03T18:03:41.561+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184337920\n2022-08-03T18:03:41.561+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184339906\n2022-08-03T18:03:41.696+0200 D STORAGE [conn10699] Slow WT transaction. Lifetime of SnapshotId 31184339906 was 134ms\n2022-08-03T18:03:41.696+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184339906\n2022-08-03T18:03:41.696+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184342481\n2022-08-03T18:03:41.914+0200 D STORAGE [conn10699] Slow WT transaction. Lifetime of SnapshotId 31184342481 was 218ms\n2022-08-03T18:03:41.914+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184342481\n2022-08-03T18:03:41.915+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184345334\n2022-08-03T18:03:42.110+0200 D STORAGE [conn10699] Slow WT transaction. Lifetime of SnapshotId 31184345334 was 195ms\n2022-08-03T18:03:42.110+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184345334\n2022-08-03T18:03:42.110+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184348869\n2022-08-03T18:03:42.216+0200 D STORAGE [conn10699] Slow WT transaction. Lifetime of SnapshotId 31184348869 was 105ms\n2022-08-03T18:03:42.216+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184348869\n2022-08-03T18:03:42.217+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184350693\n2022-08-03T18:03:42.283+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184350693\n2022-08-03T18:03:42.284+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184351440\n2022-08-03T18:03:42.504+0200 D STORAGE [conn10699] Slow WT transaction. Lifetime of SnapshotId 31184351440 was 219ms\n2022-08-03T18:03:42.504+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184351440\n2022-08-03T18:03:42.505+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184354257\n2022-08-03T18:03:42.559+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184354257\n2022-08-03T18:03:42.560+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184355080\n2022-08-03T18:03:42.575+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184355080\n2022-08-03T18:03:42.575+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184355269\n2022-08-03T18:03:42.608+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184355269\n2022-08-03T18:03:42.609+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184355652\n2022-08-03T18:03:42.623+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184355652\n2022-08-03T18:03:42.623+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184355819\n2022-08-03T18:03:42.730+0200 D STORAGE [conn10699] Slow WT transaction. 
Lifetime of SnapshotId 31184355819 was 106ms\n2022-08-03T18:03:42.730+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184355819\n2022-08-03T18:03:42.731+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184357261\n2022-08-03T18:03:42.870+0200 D STORAGE [conn10699] Slow WT transaction. Lifetime of SnapshotId 31184357261 was 139ms\n2022-08-03T18:03:42.870+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184357261\n2022-08-03T18:03:42.870+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184359828\n2022-08-03T18:03:42.909+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184359828\n2022-08-03T18:03:42.909+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184360596\n2022-08-03T18:03:42.982+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184360596\n2022-08-03T18:03:42.983+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184361835\n2022-08-03T18:03:43.080+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184361835\n2022-08-03T18:03:43.080+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184362969\n2022-08-03T18:03:43.204+0200 D STORAGE [conn10699] Slow WT transaction. Lifetime of SnapshotId 31184362969 was 123ms\n2022-08-03T18:03:43.204+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184362969\n2022-08-03T18:03:43.204+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184364388\n2022-08-03T18:03:43.284+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184364388\n2022-08-03T18:03:43.284+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184365614\n2022-08-03T18:03:43.446+0200 D STORAGE [conn10699] Slow WT transaction. Lifetime of SnapshotId 31184365614 was 161ms\n2022-08-03T18:03:43.446+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184365614\n2022-08-03T18:03:43.446+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184367034\n2022-08-03T18:03:43.626+0200 D STORAGE [conn10699] Slow WT transaction. Lifetime of SnapshotId 31184367034 was 180ms\n2022-08-03T18:03:43.627+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184367034\n2022-08-03T18:03:43.628+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184369200\n\n...\n\n2022-08-03T18:03:50.668+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184453150\n2022-08-03T18:03:50.668+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184453699\n2022-08-03T18:03:50.685+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184453699\n2022-08-03T18:03:50.685+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184453896\n2022-08-03T18:03:50.727+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184453896\n2022-08-03T18:03:50.727+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184454569\n2022-08-03T18:03:50.849+0200 D STORAGE [conn10699] Slow WT transaction. Lifetime of SnapshotId 31184454569 was 121ms\n2022-08-03T18:03:50.849+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184454569\n2022-08-03T18:03:50.849+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184455869\n2022-08-03T18:03:50.955+0200 D STORAGE [conn10699] Slow WT transaction. 
Lifetime of SnapshotId 31184455869 was 106ms\n2022-08-03T18:03:50.955+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184455869\n2022-08-03T18:03:50.956+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184456419\n2022-08-03T18:03:51.254+0200 D STORAGE [conn10699] Slow WT transaction. Lifetime of SnapshotId 31184456419 was 297ms\n2022-08-03T18:03:51.254+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184456419\n2022-08-03T18:03:51.254+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184458830\n2022-08-03T18:03:51.301+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184458830\n2022-08-03T18:03:51.301+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184459071\n2022-08-03T18:03:51.535+0200 D STORAGE [conn10699] Slow WT transaction. Lifetime of SnapshotId 31184459071 was 233ms\n2022-08-03T18:03:51.535+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184459071\n2022-08-03T18:03:51.535+0200 D STORAGE [conn10699] WT begin_transaction for snapshot id 31184461290\n2022-08-03T18:03:51.545+0200 D QUERY [conn10699] score(1.01162) = baseScore(1) + productivity((44 advanced)/(3814 works) = 0.0115364) + tieBreakers(2.62192e-05 noFetchBonus + 2.62192e-05 noSortBonus + 2.62192e-05 noIxisectBonus = 7.86576e-05)\n2022-08-03T18:03:51.547+0200 D STORAGE [conn10699] WT rollback_transaction for snapshot id 31184461290\n2022-08-03T18:03:51.548+0200 I COMMAND [conn10699] command myDB.myColl command: find { find: \"myColl\", filter: { ... }, lsid: { id: UUID(\"043da910-3a33-49a3-a749-5ba84a20cf53\") }, $db: \"myDB\", $readPreference: { mode: \"primaryPreferred\" } } planSummary: IXSCAN { index2: 1 } keysExamined:3813 docsExamined:3813 cursorExhausted:1 numYields:115 nreturned:44 reslen:18967 locks:{ Global: { acquireCount: { r: 232 } }, Database: { acquireCount: { r: 116 } }, Collection: { acquireCount: { r: 116 } } } protocol:op_msg 10552ms\nP.S. I removed some lines of the log but the ones i removed contain the same stuff again and again", "text": "Edit\nA further inspection of the MongoDB log following a certain query affected by the aforementioned problem pointed this out:", "username": "Umberto_Casaburi" } ]
Slow data processing within Flask-Pymongo
2022-08-01T15:32:30.415Z
Slow data processing within Flask-Pymongo
1,900
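A note on the log excerpt above: the "slow first iteration" is the find command itself, since PyMongo only sends the query when the first batch is requested, and here the server examined 3,813 index keys and documents to return 44 results in roughly 10.5 s. The sketch below is a hedged illustration of how one might confirm and narrow that gap; the filter, host, and the compound index on index2 plus timestamp are assumptions (the real filter is redacted in the log), not the poster's actual code.

```python
# Sketch: check where the time goes, then give the planner an index that
# matches the filter so keysExamined is close to nReturned.
from datetime import datetime, timezone
from pymongo import MongoClient, ASCENDING

coll = MongoClient("mongodb://db-host:27017")["myDB"]["myColl"]

query = {  # placeholder filter; the real one was redacted in the log
    "index2": "some-value",
    "timestamp": {"$gte": datetime(2022, 8, 1, tzinfo=timezone.utc)},
}

# explain() runs the query server-side and reports keysExamined /
# docsExamined / nReturned without shipping the documents over the VPN.
plan = coll.find(query).explain()
stats = plan.get("executionStats", {})
print(stats.get("totalKeysExamined"), stats.get("nReturned"))

# A compound index covering the filtered fields lets the scan stop at
# the matching documents instead of walking thousands of entries.
coll.create_index([("index2", ASCENDING), ("timestamp", ASCENDING)])
```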
null
[]
[ { "code": "", "text": "Hii All,I have a use case where I want to access the Atlas database through my AWS. For this, I used Data API with API Gateway but this API is giving 3000ms latency, which is too much. My cluster is hosted in AWS Mumbai Region, I also tried VPC Peering but don’t have any idea how to connect from here.Is there any way to connect Atlas database through API Gateway? My ultimate goal is to read and write data through an API so that I can use this in my application.Thanks in Advance!!", "username": "Hemendra_Chaudhary" }, { "code": "", "text": "Hi Hemendra - what kind of queries are you running? You will be able to get much better latency by setting your data api to be deployed in a single region, since currently your request is probably going from your API gateway → sydney (data api) → mumbai (atlas cluster). Other things to consider are indexing your queries and potentially sizing up your Atlas cluster tier.to do this you can:(Note: you’ll have a different URL)", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Hi Hemendra - what kind of queries are you running? You will be able to get much better latency by setting your data api to be deployed in a single region, since currently your request is probably going from your API gateway → sydney (data api) → mumbai (atlas cluster). Other things to consider are indexing your queries and potentially sizing up your Atlas cluster tier.That is very awesome, and things are start working for me now. I create new app in the same region as cluster. But one thing that we have to consider is that we have to make service name the same as cluster name in Linked Data Sources.to do so you can:", "username": "Hemendra_Chaudhary" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Access Database on Atlas through AWS API Gateway
2022-07-29T06:05:22.625Z
Access Database on Atlas through AWS API Gateway
2,731
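For context on the thread above, a single Data API call proxied behind API Gateway looks roughly like the Python sketch below. The regional URL, app ID, API key, database, collection, and filter are all placeholders (as the reply notes, every deployment gets its own URL), and the data/v1 endpoint path is an assumption based on the commonly documented Data API format.

```python
# Sketch: one Data API findOne round trip. Everything in angle brackets
# is a placeholder from your own Atlas App Services deployment.
import requests

url = ("https://ap-south-1.aws.data.mongodb-api.com/app/<app-id>"
       "/endpoint/data/v1/action/findOne")

payload = {
    "dataSource": "<cluster-name>",  # must match the linked data source name
    "database": "mydb",
    "collection": "items",
    "filter": {"_id": {"$oid": "62e7a4c4f0f0f0f0f0f0f0f0"}},  # hypothetical id
}

resp = requests.post(
    url,
    json=payload,
    headers={"api-key": "<data-api-key>", "Content-Type": "application/json"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json().get("document"))
```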
null
[ "aggregation", "atlas-search" ]
[ { "code": "$search$addFields", "text": "Hello everyone.I need some advice how to proceed with this problem, scenario:\nCollection of users (I prepared pseudo-materialized view with other required properties), they will have quite a few other clauses but the most important ones are date range filters, and now based on that date range I need to calculate users availability (calculation is based on 3 other collections, I can put them into the materialized view as well, not a lot of data).Question:\nIs it possible to calculate some values before executing $search stage itself? For example some $addFields so I can do my calculation in the code? Or maybe there is a better way to achieve that?Thanks in advance for all of the tips!", "username": "Michal_Janocha" }, { "code": "$search$addFields$search$addFields$search", "text": "Hi @Michal_Janocha Question:\nIs it possible to calculate some values before executing $search stage itself? For example some $addFields so I can do my calculation in the code? Or maybe there is a better way to achieve that?The $search stage must be the first stage the pipeline so the exampling adding $addFields prior to that will most likely result in an error. I’m just wondering if you are trying to execute this operation in a singular operation? Have you thought about performing the calculation client side and then providing these values to the $search stage?To help better understand the context of the question, would you be able to provide some details / sample documents and expected output?Regards,\nJason", "username": "Jason_Tran" }, { "code": "$search{\n \"_id\": {\n \"$oid\": \"628b81d159a887d838e37944\"\n },\n \"firstName\": \"User\",\n \"lastName\": \"Example\",\n \"email\": \"[email protected]\",\n \"photoURL\": \"\",\n \"skills\": [{...}],\n \"certificates\": [{...}],\n \"languages\": [{...}],\n \"availability\": [\n {\n allocatedHours: Double,\n day: Date,\n maxHours: Double,\n availableHours: Double,\n availablePercent: Double\n }\n ] <= this array grew to 3k documents because I need daily granularity and I am storing availability up to 10 years\n}\n", "text": "Hello @Jason_Tran,Thank you for the response! 
The part where you are saying about providing calculated values to $search stage looks very promising, how can I achieve that?Anyway I went with pre-calculated values in the search model so it already contains calculated values, the only drawback of this solution is that I have quite large array of objects in each document (~3000), at this moment it doesn’t affect performance but I am not sure if this is a proper solution.", "username": "Michal_Janocha" }, { "code": "$search$search$searchavailability", "text": "The part where you are saying about providing calculated values to $search stage looks very promising, how can I achieve that?Apologies here, what I had meant was pre-calculating the fields (outside of the database) and then providing those to the $search stage which is what you have done currently based off your most recent reply.Anyway I went with pre-calculated values in the search model so it already contains calculated values, the only drawback of this solution is that I have quite large array of objects in each document (~3000), at this moment it doesn’t affect performance but I am not sure if this is a proper solution.Just to clarify, do you mean each individual document output from the $search stage has approximately ~3000 (and/or more) documents inside the availability array field?Regards,\nJason", "username": "Jason_Tran" }, { "code": "$search$search$search$searchavailabilityavailability$filter{\n \"$addFields\" : {\n \"availability\" : {\n \"$filter\" : {\n \"input\" : \"$availability\",\n \"as\" : \"item\",\n \"cond\" : {\n \"$and\" : [\n { \"$lte\" : [ \"$$item.day\", ISODate('2022-11-30') ] },\n { \"$gte\" : [ \"$$item.day\", ISODate('2022-08-01') ] }\n ]\n }\n }\n }\n }\n}\n", "text": "Apologies here, what I had meant was pre-calculating the fields (outside of the database) and then providing those to the $search stage which is what you have done currently based off your most recent reply.I guess my current solution does not work as you expect, at this moment I am not providing anything external to my $search stage, everything comes from materialized view which was built totally out of $search.Just to clarify, do you mean each individual document output from the $search stage has approximately ~3000 (and/or more) documents inside the availability array field?No, I guess it would totally kill networking between db => backend => frontend. But my input documents (the ones in materialized view) indeed contains ~3000 documents in availability array. After searching on them but before returning results I am using projection with $filter to reduce amount of documents inside this array, you can see that solution below.", "username": "Michal_Janocha" } ]
Calculated fields
2022-07-25T07:04:14.136Z
Calculated fields
2,388
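To make the shape of the final pipeline in this thread concrete, here is a hedged PyMongo sketch: $search stays first, and the date filtering of the pre-computed availability array happens afterwards with $addFields/$filter, as in the snippet above. The index name, the use of the Atlas Search range operator on availability.day, the connection string, and the date bounds are assumptions for illustration.

```python
# Sketch: search first (index name and dates are placeholders), then trim
# the large pre-computed availability array to the requested date range.
from datetime import datetime
from pymongo import MongoClient

users = MongoClient("mongodb+srv://<cluster-uri>")["app"]["users_search_view"]
start, end = datetime(2022, 8, 1), datetime(2022, 11, 30)

pipeline = [
    {"$search": {
        "index": "default",
        "range": {"path": "availability.day", "gte": start, "lte": end},
    }},
    {"$addFields": {"availability": {"$filter": {
        "input": "$availability",
        "as": "item",
        "cond": {"$and": [
            {"$gte": ["$$item.day", start]},
            {"$lte": ["$$item.day", end]},
        ]},
    }}}},
]

for doc in users.aggregate(pipeline):
    print(doc["_id"], len(doc["availability"]))
```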
null
[ "security", "field-encryption" ]
[ { "code": "", "text": "We want to use mongo db in our application so our data must be encrypted in mongodb. As per documents we are trying to use Enterprise edition but still unable to configure encryption. Can you please guide us. We are using windows.", "username": "Deepak_Maharana" }, { "code": "", "text": "Welcome to the MongoDB community @Deepak_Maharana !Can you provide more information on your use case:Specific version of MongoDB serverType of deployment (standalone, replica set, or sharded cluster)Type of encryption you are trying to configure (Encryption at Rest, Network Encryption, or Queryable Encryption (MongoDB 6.0+))Any error messages and steps to reproduce your configuration issueThanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hey @Stennie_X thanks for your response. Please find below answers for your queryI am using version 6 enterprise edition.\nI have deployed it in my local setup Amazon ec2.\nI am trying to configure Encryption at Rest using local key file.\nNow i am unable to connect the server using java driver, can you please tell me how to connect the server using local key file.", "username": "Deepak_Maharana" }, { "code": "", "text": "Hi @Deepak_Maharana,My previous comment has links to relevant documentation.Encryption at Rest is configured on the MongoDB server and does not require any client or driver parameters.If you have also configured TLS Network Encryption (always recommended), then you will have to provide appropriate certificates to connect from a client/driver.The MongoDB Security Checklist has a helpful overview of security measures you should implement to protect your MongoDB installation.Now i am unable to connect the server using java driveWhat specific version of the Java driver are you using and what is the error message?Were you able to connect with the Java driver before making server configuration changes?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hey @Stennie_X Thanks for your response, I have doubts regarding Encryption at Rest. If Client dose not required any parameter to connect the Server then any client can connect my db server and fetch my data right? Then how Encryption at Rest will work? I understood Network Encryption but how to make a client which will access the server data if the client has the same key file.", "username": "Deepak_Maharana" }, { "code": "", "text": "Hi @Deepak_Maharana ,If Client dose not required any parameter to connect the Server then any client can connect my db server and fetch my data right?Remote access to a MongoDB deployment is determined by Access Control including authentication and Role-Based access. With access control properly configured, a remote client must present valid credentials in order to remotely view or manipulate data in your deployment.Encryption at rest refers to the underlying data files, not remote connections. If someone had a physical copy of data files (for example, from a backup of your MongoDB deployment) the files would not be decipherable without the private encryption key.Network encryption encrypts data in transit to and from your MongoDB deployment.All of the above security measures are separately configured, but complementary as part of a well secured deployment. The Security Checklist mentions a few other measures including limiting network exposure via firewalls and VPNs.To summarise, you can configure:Regards,\nStennie", "username": "Stennie_X" } ]
How to encrypt MongoDB data?
2022-07-29T09:25:38.201Z
How to encrypt MongoDB data?
3,141
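One practical footnote to the replies above: encryption at rest is invisible to drivers, while TLS network encryption is the part the client has to configure. The thread's client is Java, but the hedged Python sketch below shows the equivalent idea; host, credentials, and certificate paths are placeholders, not values from this deployment.

```python
# Sketch: nothing in the client refers to encryption at rest; only TLS
# (network encryption) and access control appear in the connection.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://appuser:<password>@db01.example.internal:27017/?authSource=admin",
    tls=True,
    tlsCAFile="/etc/ssl/mongodb/ca.pem",  # CA that signed the server certificate
    # tlsCertificateKeyFile="/etc/ssl/mongodb/client.pem",  # only if the server
    #                                                       # requires client certs
)
print(client.admin.command("ping"))
```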
null
[ "replication", "monitoring" ]
[ { "code": "", "text": "We have multiple databases in the replica and want to move one of the databases to another replica set. We want to be able to calculate how much space is occupied by this database in the OPLOG.RS collection in order to correctly size it in new replica set.", "username": "Kamal_Parshotam" }, { "code": "db.collection.stats().storageSize", "text": "Welcome to the MongoDB community @Kamal_Parshotam !The replication oplog only includes a rolling subset of data changes. Once the oplog collection reaches its maximum configured size, older oplog entries will be removed to keep storage usage within expectations.If you want to find the size of a collection on disk, use the db.collection.stats().storageSize value as a guide.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks Stennie. For a more detailed breakdown of the documents in the collection, I managed to write this querydb.oplog.rs.aggregate([\n{$group : {_id:{collection:\"$ns\", operation:\"$op\"}, “combined_object_size”: { $sum: { $bsonSize: “$$ROOT” } } , DocCount:{$sum:1} }}\n])This allowed me to do some slicing and dicing of the result set.", "username": "Kamal_Parshotam" }, { "code": "$groupdb.collection.stats().avgObjSize", "text": "Hi @Kamal_Parshotam,Since oplog entries only include document insertions and changes (and a rolling history), this may not be a useful indication of actual document size. A $group query on the oplog will also be a full collection scan and may have some performance impact on a production system.You could use db.collection.stats().avgObjSize to quickly get the average document size for a collection (similar to your query which is getting the BSON size of oplog entries), however the storage size would likely be much less with compression.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi Stennie,The issue we were facing was that the OPLOG WINDOW had gone up from 10 hours to 5 days. I needed to understand which namespace + operation was occupying the most space in the OPLOG.RS collection, hence the group by. We actually found that a NOOP operation in a particular database / collection was occupying 90% of the oplog window. This was quite revealing.Anyway , thanks once again.", "username": "Kamal_Parshotam" }, { "code": "", "text": "db.collection.stats().avgObjSizeCopieded! Thanks @Stennie_X", "username": "Henrique_Souza" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Analyze oplog.rs collection to calculate space used by each name space
2022-07-29T22:55:14.741Z
Analyze oplog.rs collection to calculate space used by each name space
3,583
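For anyone wanting to reuse the breakdown query from this thread outside mongosh, here is the same $group/$bsonSize aggregation rendered in Python. The connection string is a placeholder; $bsonSize needs MongoDB 4.4+, and as the replies caution, this is a full scan of the oplog, so run it with care on a busy primary.

```python
# Sketch: group oplog entries by namespace + operation and sum their BSON
# size, mirroring the aggregation posted above.
from pymongo import MongoClient

oplog = MongoClient("mongodb://db-host:27017")["local"]["oplog.rs"]

pipeline = [
    {"$group": {
        "_id": {"collection": "$ns", "operation": "$op"},
        "combined_object_size": {"$sum": {"$bsonSize": "$$ROOT"}},
        "DocCount": {"$sum": 1},
    }},
    {"$sort": {"combined_object_size": -1}},  # biggest consumers first
]

for row in oplog.aggregate(pipeline):
    print(row["_id"], row["combined_object_size"], row["DocCount"])
```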
null
[ "replication" ]
[ { "code": "", "text": "Hello team! Is all good with you?Please, i have one Replica Set with PSA Architectury and I need stop all this cluster. I think, read the documentation, only stop the PRIMARY and the others nodes (SECONDARY and ARBITER) stop too. Is this true?–ON PRIMARY\ndb.adminCommand({ “shutdown” : 1, timeoutSecs: 60 })If exists others procedures to do this, please help me!Thanks,\nHenrique.", "username": "Henrique_Souza" }, { "code": "shutdownmongodmongos", "text": "Hi @Henrique_Souza ,The shutdown command only shuts down the mongod or mongos process you are connected to. You need to shut down each member of a replica set individually.If you shut down the primary for a replica set, the remaining members will try to elect a new primary if possible.If you plan to shut down all members of a replica set deployment, I would start with the other nodes (arbiter and secondary) and shut down the primary last in order to avoid any unnecessary election attempts.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks one more time @Stennie_X!", "username": "Henrique_Souza" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Stop All Replica Set Members with PSA Architecture
2022-08-03T19:11:55.790Z
Stop All Replica Set Members with PSA Architecture
1,906
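To spell out the shutdown order recommended above (secondary and arbiter first, primary last), here is a hedged Python sketch. Host names and credentials are placeholders; the try/except exists because the shutdown command closes the connection, which drivers surface as a connection error rather than a normal reply, and in practice an arbiter (which stores no user credentials) is often stopped from the operating system instead.

```python
# Sketch: shut members down one by one, primary last, to avoid needless
# elections. Hosts and credentials are placeholders for a PSA deployment.
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

members = [
    "secondary.example.internal:27017",
    "arbiter.example.internal:27017",   # often stopped via the OS instead
    "primary.example.internal:27017",   # keep the primary last
]

for host in members:
    client = MongoClient(f"mongodb://{host}/?directConnection=true",
                         username="admin", password="<password>")
    try:
        client.admin.command({"shutdown": 1, "timeoutSecs": 60})
    except ConnectionFailure:
        pass  # expected: the server drops the connection while shutting down
    finally:
        client.close()
```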
null
[ "golang" ]
[ { "code": "", "text": "Hi, I’m new with the mongodb driver for mongo but I’m quite experienced with other languages drivers.\nI’m using the struct tags to insert document and I was wondering if there’s a way to use them (or any other way) to write strongly typed queries, projections, sorting documents.I tried to look around but it seems everyone uses just strings over and over, and I hoped there would be a better way. I investigated a bit the bson package with the idea of marshalling the document myself but I cannot cover all cases.Are there proper ways to do this?Thank you", "username": "G_G" }, { "code": "", "text": "I think the design pattern is to cover this on the MongoDB side with schema validation rules.", "username": "Jack_Woehr" }, { "code": "", "text": "Thank you for your answer Jack, but I’m not sure this would solve my problem. I’m looking for a method to write strongly typed code, possibly reusing the struct I already wrote, instead of using strings.Thank you and sorry if the question was not clear.", "username": "G_G" }, { "code": "Bson", "text": "You know your own code best, but out of curiosity, what’s the problem looking for this solution?This is not enough? Again, no argument, just trying to understand what you’re seeing that I’m missing.", "username": "Jack_Woehr" }, { "code": "coll.FindOne(context.TODO(), bson.D{{\"title\", \"The Room\"}})\nvar builder = Builders<Widget>.Filter;\nvar filter = builder.Eq(widget => widget.X, 10) & builder.Lt(widget => widget.Y, 20);\n", "text": "I’m looking for a way to do something similar to what Builders in the dotnet driver enable users to do, strongly typed queries.Instead of writingwhere “title” is a string, I would like to write something like this (from the dotnet driver)And not only for queries, which for most of them I can marshal the document from a struct, but also for projections and sorting.Let me know if it’s still not clear.", "username": "G_G" }, { "code": "", "text": "Okay, I think I understand now.In answer to your question, it seems to me that Golang considers the “Builder” pattern exhibited by the C#/.NET driver to be a liability more than an asset. Vide the Go Proverbs:This is not to say that one cannot implement such a framework in Go, but it would be left to the programmer to do so.", "username": "Jack_Woehr" } ]
Strongly typed queries with the Go driver
2022-08-02T18:30:34.618Z
Strongly typed queries with the Go driver
2,461
null
[ "queries", "dot-net" ]
[ { "code": "BsonRegularExpressionObjectIdISODatevar searchValue = \"3a5f\";\nvar expr = new BsonRegularExpression(new Regex(searchValue, RegexOptions.None));\nvar builder = Builders<BsonDocument>.Filter;\nvar filter = builder.Regex(\"_id\", expr);\nvar result = context.Find(filter).ToList();\n", "text": "I implement match using BsonRegularExpression,\nbut It’s not work for fields that type is ObjectId or ISODate.Below is my code:How can I do?", "username": "thank111222333" }, { "code": "", "text": "This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to filter ObjectId by using regex in MongoDB C#/.NET Driver?
2022-08-03T14:04:22.068Z
How to filter ObjectId by using regex in MongoDB C#/.NET Driver?
3,032
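Since the question above was closed without an answer, one hedged note: a regex cannot be applied directly to ObjectId or date fields because they are not strings, but MongoDB 4.2+ can convert them on the fly with $toString inside an $expr/$regexMatch filter. The sketch below shows that query shape in Python rather than the poster's C# builder syntax; the connection string, database, and collection names are placeholders.

```python
# Sketch: match the hex form of _id (an ObjectId) against a regex by
# converting it to a string server-side. The converted value cannot use
# the _id index, so expect a collection scan on large collections.
from pymongo import MongoClient

coll = MongoClient("mongodb://db-host:27017")["mydb"]["items"]

search_value = "3a5f"
docs = list(coll.find({
    "$expr": {
        "$regexMatch": {
            "input": {"$toString": "$_id"},
            "regex": search_value,
        }
    }
}))
print(len(docs))
```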
null
[ "swift" ]
[ { "code": "@ObservedResultsif let writableItem = item.thaw() {\n try? writableItem.realm?.write {\n writableItem.favorite.toggle()\n }\n}\n@ObservedRealmObject $item.favorite.wrappedValue = true\n@ObservedRealmObjectswitch asyncOpen {\ncase .open(let realm) {\n.\n.\n.\n@Environment@ObservedResults@ObservedRealmObject$group.items.remove(item)\nfor (index, anItem) in group.items.enumerated() where item._id == anItem._id {\n $group.items.remove(at: index)\n}\n", "text": "I’m working with my team on an update to a How-To article about using Realm with SwiftUI, and I have some feedback for the Realm developers.What you have done looks great and works nicely and there are a few things I would love to see added that I’d argue would make it even better.Regarding the @ObservedResults wrapper as explained in the Use Realm Database with SwiftUI — MongoDB Realm quick start, this is very nice. The move, append, and remove (atOffsets: IndexSet) integrate very nicely with the ForEach. However, then when we want to do a write to one of the objects from a child View, it gets pretty awkward.For example, let’s say you want to implement setting the favorite flag to true on the item. The QuickStart uses a Toggle with a $item.favorite binding, but I wanted to use a Button. I tried to take the action to set the favorite attribute to true, and then I ran into the whole series of Realm issues that naive people like me generally hit:Which worked although it looks a mess, but then I realized that @ObservedRealmObject has a wrappedValue so I tried this:Which also worked but looks way better. Is this what you’d recommend that we should do to update an @ObservedRealmObject in general? If not, please correct me…if so, this may be a nice add to the documentation.For all of my own applications, I only ever have one Realm instance, whether I’m using Sync with Atlas, or not. I’ve never had a reason to use multiple Realms. However, since I started using Realm SwiftUI, I am frequently getting an error that I’m using the wrong Realm instance for the object I’m trying to modify. The thing is that with this API the instance I’m using is created under the hood in some cases and I haven’t the foggiest how to reference it. The exception to this is using AsyncOpen because with that you explicitly handleSo then you can inject this into @Environment or wherever you want to keep it.\nConversely, when you don’t use sync and you start with @ObservedResults and @ObservedRealmObject for your data model and then you write something as try? Realm.write { realm.updateThing }, you get an error that the realm instance must match that with which you created the object. But how do I get that instance?I just want a singleton Realm, please.The List swipe-delete thing to delete an Item is fine, but I want a button that deletes the item. Now it gets fun…I looked at how this is done in the RChat when you have the ChatInputBox that updates the list of ChatMessages in the ChatBalloons, and what was done there is to write a function in the parent view to (in this case add) or delete a message from the list of messages, and it passes that closure to the child View. Is that really the easiest way to do this? Do I have to have a binding to the containing list in order to remove an item? 
What about if there was a method on the LinkingObjects that had a signature like linkingObjects.remove(item)?So I write my closure in the parent view, pass it down, call back up from the child view button with my item argument, then I want to call…but I can’t because that method signature isn’t supported. After a bit of deliberation, I land on this:Which works, but looks a mess. Please tell me there is an easier way! If not, please plan an easier way on the roadmap. I’ll file a ticket if you like.", "username": "Mark_Powell1" }, { "code": "", "text": "Hi @Mark_Powell1,Thanks for the great detailed feedback. It’s very appreciated!I started thinking about the “Deleting a Realm object from a list” point. I worked a little on it here: Remove object from a list · Issue #7663 · realm/realm-swift · GitHub.Some other team members will comment on the other points too. Thanks again.", "username": "Eric_Mossman" }, { "code": "", "text": "Hey @Mark_Powell1 - thank you for taking the time to explain the things you’re running into! This is great feedback. I’m one of the folks who work on the documentation for the Swift SDK; I’m glad you found some of the info you need there.I’ll keep an eye on discussion here as I believe there may be some workarounds for some of the things you’ve mentioned. As members of the SDK team chime in with more details, I’ll be watching for things we might add to the docs to provide better info for devs who want to use Realm with SwiftUI.I don’t think it addresses any of the things you’ve raised here, but we’ve also got a Use Realm Database with SwiftUI Guide that explains some of the things a little more explicitly that we go through quickly and may gloss over in the quick start. It’s buried a little deeper in the docs; I’ll put in a ticket next week to add a link to it from the Quick Start so it’s more visible.Again, thank you so much for the detailed feedback, and I’m looking forward to beefing up the docs around some of these things as the SDK team provides details about possible workarounds.", "username": "Dachary_Carey" }, { "code": "$item.favorite = true", "text": "The projected value of our property wrappers allows you to “quick write.” You should just be able to do $item.favorite = true. I would also recommend keeping your view code as small as possible to be able to use our property wrappers and the quick write functionality wherever you can.", "username": "Jason_Flax" }, { "code": "", "text": "Remove object from a list · Issue #7663 · realm/realm-swift · GitHubLooks like a great start to the ideation, thank you!", "username": "Mark_Powell1" }, { "code": "", "text": "Very nice, very nice. Hard to imagine that getting simpler. Thanks for validating my thinking, that is helpful!", "username": "Mark_Powell1" }, { "code": "", "text": "Thanks so much Dachary! These docs have really been improved a lot lately, and I made it a point to mention that to other folks on the MongoDB Realm team recently…in particular, partitioning strategy got some documentation love that really helped us understand it better.Not sure if I just missed it before, but I see that $item.favorite = true quick write technique explicitly called out in the SwiftUI guide you linked to, so that’s there!I think maybe there what I’m after with the realm instance is sort of already there, just maybe I need an example implementation for my use case? 
Specifically, there is .environment(.realm, realm) to inject an opened realm into a view…I think I would just need to make my realm instance at the top view and inject it there into all the children, if that’s possible? That seems do-able for the asyncOpen (gives an opened realm instance back) use case just fine, and if there’s a way to do that for a local offline realm instance as well, I think that might do it.", "username": "Mark_Powell1" }, { "code": "", "text": ".environment(.realm, realm)There is a .environment(.realmConfiguration, …) that you can pass any configuration (local included) into .", "username": "Jason_Flax" }, { "code": "", "text": "Right…the docs say that the ObservedResults, ObservedRealmObject implicitly create a realm instance. If you pass in a realm instance via environment, that takes precedence? If you make a local realm instance at the App level and inject it into the environment there, is that the instance that will be reused by all the child views?", "username": "Mark_Powell1" }, { "code": " Button {\n item.isFavorite.toggle()\n } label: {\n Text(\"Toggle Fave\")\n }\nTerminating app due to uncaught exception 'RLMException', reason: 'Attempting to modify a frozen object - call thaw on the Object instance first.'\n $item.isFavorite.wrappedValue.toggle()\n", "text": "Ok, so, here is my problem with this now.\nI took the Realm SwiftUI quick start verbatim and added only this bit of code here:…to the ItemDetailsView. When you tap the Button, you get this:However, when you call this:It seems to work just great for me. Is this a bug, or is this how we’re supposed to code it?", "username": "Mark_Powell" }, { "code": "Cannot assign through dynamic lookup property: '$item' is immutable\nCannot assign value of type 'Bool' to type 'Binding<Bool>'\nitem", "text": "This gives me a compile error:When item is an @\\ObservedRealmObject as in the Realm SwiftUI quick start ItemDetailsView.", "username": "Mark_Powell" }, { "code": "item.isFavorite.toggle()$set$item.isFavorite.wrappedValue.toggle()", "text": "item.isFavorite.toggle() will not work because you need to call the projected value (using the dollar sign). Using the $ will effectively open a write transaction, enabling the set behaviour previously mentioned.Is $item.isFavorite.wrappedValue.toggle() not compiling?", "username": "Jason_Flax" }, { "code": "$item.isFavorite.wrappedValue.toggle()", "text": "Is $item.isFavorite.wrappedValue.toggle() not compiling?\nYes this compiles and seems to work just fine! Is this way what you’d recommend?I ask because I thought you wrote earlier:$item.favorite = truebut this does not compile for me, and I took it as something of a recommendation so I then wondered whether it’s a bug, etc.", "username": "Mark_Powell1" }, { "code": "", "text": "This is what I’m searching for and was hoping to find but was not answered in this thread:you get an error that the realm instance must match that with which you created the object. But how do I get that instance?I just want a singleton Realm, please.Once a user logs in, I want every interaction with Realm to be the same realm instance with a single _partition defined in the models.I discussed my issues more here, but I would love to just understand if there is a clean way to set one realm and always use it. If there isn’t, then what in the world do we do to actually make sure we’re using the same realm.I feel like I’m taking crazy pills, and thanks @Mark_Powell1 for this thread because it seems like I’m not the only one. 
Any complexity beyond the examples and there seems to be a bunch of unwritten workarounds to get it to work right. At the very least, I just wish they were written somewhere.", "username": "Kurt_Libby1" }, { "code": "", "text": "As beginner on Realm/Swift (but with some background on web frameworks) I must say Realm is great on early stages.\nBut it lacks on trivial operations (present on web frameworks ) when you go a step further.You guys really should consider better DX for trivial operations, reported here and many other issues. It does not need to be or sound like a ORM, but a rich API/SDK means happy developer.", "username": "Robson_Tenorio" }, { "code": "", "text": "Hi @Robson_Tenorio, welcome to the community.The original post on this thread was concerning some SwiftUI issues and since the time of the post, the documentation has been updated to add some clarity to several of the topics. Some new features were also added to the SDK along way too.The Realm team is always open to feature requests and feedback if something is unclear or undocumented.Perhaps you can create a new thread with some of the shortcomings you’re running into so we can explore those a bit. Can you maybe come up with some example code we can take a look at? That would be really beneficial to the development team and will add details to help make the API/SDK richer!", "username": "Jay" } ]
Realm Swift(UI) pain points
2022-02-11T00:55:12.542Z
Realm Swift(UI) pain points
6,621
null
[]
[ { "code": "", "text": "I am trying to create a registration but I want to use API key authentication how can I do that in the client side. I am using JavaScript SDK.", "username": "Aaron_Parducho" }, { "code": "", "text": "@Aaron_Parducho : Thanks for reaching out to the community. I would be great if you add to more description to problem so that anyone can answer to your query.", "username": "Mohit_Sharma" }, { "code": "", "text": "How can I manage registering user on the Client side. I want to use apiKey Authentication not username and password. Every user will have apiKey", "username": "Aaron_Parducho" }, { "code": "", "text": "Hey @Aaron_Parducho - we don’t support API key auth from the SDKs, as API key isn’t typically used for frontend/client apps. You could build an admin app with the Admin API for API Keys and have your client use the admin app to build some logic around API keys.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "we don’t support API key auth from the SDKsI’m confused, doesn’t Realm support using API credentials? https://www.mongodb.com/docs/atlas/app-services/authentication/api-key/#user-api-keysI’m also looking to have a unique API key for each user, but I’m running issues.", "username": "Annie_Sexton" } ]
How to create an API key from the client side and see it on the App user?
2021-07-27T19:48:09.434Z
How to create an API key from the client side and see it on the App user?
3,005
null
[ "production", "golang", "field-encryption" ]
[ { "code": "ClientEncryption.RewrapManyDataKey", "text": "The MongoDB Go Driver Team is pleased to release version 1.10.1 of the MongoDB Go Driver.This release, along with the libmongocrypt v1.5.2 release, fixes a potential encryption key corruption bug in ClientEncryption.RewrapManyDataKey that can lead to encrypted data corruption when rotating encryption keys backed by GCP or Azure key services For more information please see the 1.10.1 release notes.You can obtain the driver source from GitHub under the v1.10.1 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,\nThe Go Driver Team", "username": "Matt_Dale" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Go Driver 1.10.1 Released
2022-08-03T16:36:51.145Z
MongoDB Go Driver 1.10.1 Released
1,928
null
[]
[ { "code": "", "text": "Just upgraded to “/usr/local/Cellar/mongodb-community/6.0.0/bin/mongod” with ‘brew’. Can’t run with MongoDB now. Did you get a solution? TIA.", "username": "Roger_Lee" }, { "code": "", "text": "Can only run: MongoDB mongodb-community with version 5.0.7 on:WildFly 26.1.1.Final (EE 8/Jakarta 9.1), Weld 4.0.2 & CDI 3.0\nJava OpenJDK 18.0.1, Kotlin 1.7.10, Gradle 7.4.2.", "username": "Roger_Lee" }, { "code": "", "text": "Welcome to the MongoDB community @Roger_Lee !Can you provide more information on the issue you encountered, including your O/S version and the error message?WildFly 26.1.1.Final (EE 8/Jakarta 9.1), Weld 4.0.2 & CDI 3.0\nJava OpenJDK 18.0.1, Kotlin 1.7.10, Gradle 7.4.2.This information relates to a Java application server. If you are having trouble connecting to a MongoDB 6.0 deployment from a driver, can you please confirm the driver version used and the specific error message received?Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi Stennie,unfortunately i have the same problem, i can’t install “[email protected]” on my mac.\nI have the version “macOS Monterey 12.5”.This is the message I get through the Termianl: Warning: No available formula with the name “mongosh” (dependency of mongodb/brew/mongodb-community).\n==> Searching for similarly named formulae…\nError: No similarly named formulae found.\nIt was migrated from mongodb/brew to homebrew/core.", "username": "vahab_html" }, { "code": "", "text": "It may have go lost in translation.Using ‘mongodb-community 5.0.7’ I installed using ‘brew’ on macOS Monterey 12.5../mongod --config /usr/local/etc/mongod.confNeither ‘mongod’ or ‘mongo’ will not load/start/run om MongoDB 6.0.", "username": "Roger_Lee" } ]
Can't run with MongoDB 6.0
2022-07-23T11:47:59.987Z
Can&rsquo;t run with MongoDB 6.0
3,133
null
[ "replication", "sharding", "storage" ]
[ { "code": "2022-08-02T11:37:48.690+0300 E STORAGE [repl writer worker 11] WiredTiger error (0) [1659429468:685187][2567:0x7f20df43c700], file:index-25--485386741746989461.wt, WT_CURSOR.insert: read checksum error for 12288B block at offset 18314407936: block header checksum of 604962355 doesn't match expected checksum of 3306781470\n2022-08-02T11:37:48.699+0300 E STORAGE [repl writer worker 11] WiredTiger error (0) [1659429468:690748][2567:0x7f20df43c700], file:index-25--485386741746989461.wt, WT_CURSOR.insert: index-25--485386741746989461.wt: encountered an illegal file format or internal value\n2022-08-02T11:37:48.699+0300 E STORAGE [repl writer worker 11] WiredTiger error (-31804) [1659429468:699213][2567:0x7f20df43c700], file:index-25--485386741746989461.wt, WT_CURSOR.insert: the process must exit and restart: WT_PANIC: WiredTiger library panic\n2022-08-02T11:37:48.699+0300 I - [repl writer worker 11] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 361\n2022-08-02T11:37:48.699+0300 I - [repl writer worker 11]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.702+0300 I - [repl writer worker 3] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:48.702+0300 I - [repl writer worker 4] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:48.702+0300 I - [repl writer worker 3]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.702+0300 I - [repl writer worker 4]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.702+0300 I - [repl writer worker 5] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:48.703+0300 I - [repl writer worker 5]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.703+0300 I - [repl writer worker 2] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:48.703+0300 I - [repl writer worker 9] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:48.704+0300 I - [repl writer worker 2]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.704+0300 I - [repl writer worker 7] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:48.704+0300 I - [repl writer worker 1] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:48.704+0300 I - [repl writer worker 9]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.704+0300 I - [repl writer worker 7]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.704+0300 I - [repl writer worker 12] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:48.704+0300 I - [repl writer worker 1]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.704+0300 I - [repl writer worker 14] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:48.704+0300 I - [repl writer worker 8] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:48.704+0300 I - [repl writer worker 12]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.704+0300 I - [repl writer worker 8]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.704+0300 I - [repl writer worker 14]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.704+0300 I - [repl writer worker 6] Fatal Assertion 
28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:48.704+0300 I - [repl writer worker 6]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.704+0300 E STORAGE [thread2] WiredTiger error (-31804) [1659429468:704562][2567:0x7f20fac73700], eviction-server: cache eviction thread error: WT_PANIC: WiredTiger library panic\n2022-08-02T11:37:48.704+0300 I - [thread2] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 361\n2022-08-02T11:37:48.704+0300 I - [thread2]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.707+0300 I - [repl writer worker 15] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:48.707+0300 I - [repl writer worker 13] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:48.707+0300 I - [repl writer worker 10] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:48.707+0300 I - [repl writer worker 15]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.707+0300 I - [repl writer worker 13]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.707+0300 I - [repl writer worker 10]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.708+0300 E STORAGE [thread3] WiredTiger error (-31804) [1659429468:708427][2567:0x7f20fa472700], eviction-server: cache eviction thread error: WT_PANIC: WiredTiger library panic\n2022-08-02T11:37:48.708+0300 I - [thread3] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 361\n2022-08-02T11:37:48.708+0300 I - [thread3]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:48.708+0300 E STORAGE [thread4] WiredTiger error (-31804) [1659429468:708474][2567:0x7f20f9470700], eviction-server: cache eviction thread error: WT_PANIC: WiredTiger library panic\n2022-08-02T11:37:48.708+0300 I - [thread4] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 361\n2022-08-02T11:37:48.708+0300 I - [thread4]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:49.001+0300 I - [ftdc] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:49.001+0300 I - [ftdc]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:49.348+0300 E STORAGE [thread5] WiredTiger error (-31804) [1659429469:348068][2567:0x7f20fb474700], log-server: log server error: WT_PANIC: WiredTiger library panic\n2022-08-02T11:37:49.348+0300 I - [thread5] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 361\n2022-08-02T11:37:49.348+0300 I - [thread5]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:37:52.761+0300 I - [repl writer worker 0] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64\n2022-08-02T11:37:52.761+0300 I - [repl writer worker 0]\n\n***aborting after fassert() failure\n\n\n2022-08-02T11:38:04.048+0300 F - [repl writer worker 11] Got signal: 6 (Aborted).\n\n 0x7f21019bf6b1 0x7f21019be8c9 0x7f21019bedad 0x7f20ff09e100 0x7f20fed035f7 0x7f20fed04ce8 0x7f2100c63267 0x7f21016cfcb6 0x7f2100c6d576 0x7f2100c6d792 0x7f2100c6d9f4 0x7f21022c9e35 0x7f21022e3bbb 0x7f21022ead93 0x7f210230bcb0 0x7f21022d9b32 0x7f2102328d54 0x7f21016a3a69 0x7f21016a5914 0x7f21010b2a03 0x7f2100e5896a 0x7f2100e58c0d 0x7f2100e58ce3 0x7f2100e30a51 0x7f2100e30cb6 0x7f21013ccd51 0x7f21014ad9a4 0x7f21014a3e92 0x7f21014a6b5e 0x7f21014a75bd 0x7f21014a76cd 0x7f21014ad06e 0x7f21014ad663 0x7f21014a2c7f 
0x7f21014a3d6a 0x7f2101938f1c 0x7f21019399cc 0x7f210193a3b6 0x7f2102436d40 0x7f20ff096dc5 0x7f20fedc41cd\n----- BEGIN BACKTRACE -----\n{\"backtrace\":[{\"b\":\"7F2100447000\",\"o\":\"15786B1\",\"s\":\"_ZN5mongo15printStackTraceERSo\"},{\"b\":\"7F2100447000\",\"o\":\"15778C9\"},{\"b\":\"7F2100447000\",\"o\":\"1577DAD\"},{\"b\":\"7F20FF08F000\",\"o\":\"F100\"},{\"b\":\"7F20FECCE000\",\"o\":\"355F7\",\"s\":\"gsignal\"},{\"b\":\"7F20FECCE000\",\"o\":\"36CE8\",\"s\":\"abort\"},{\"b\":\"7F2100447000\",\"o\":\"81C267\",\"s\":\"_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj\"},{\"b\":\"7F2100447000\",\"o\":\"1288CB6\"},{\"b\":\"7F2100447000\",\"o\":\"826576\",\"s\":\"__wt_eventv\"},{\"b\":\"7F2100447000\",\"o\":\"826792\",\"s\":\"__wt_err\"},{\"b\":\"7F2100447000\",\"o\":\"8269F4\",\"s\":\"__wt_panic\"},{\"b\":\"7F2100447000\",\"o\":\"1E82E35\",\"s\":\"__wt_bm_read\"},{\"b\":\"7F2100447000\",\"o\":\"1E9CBBB\",\"s\":\"__wt_bt_read\"},{\"b\":\"7F2100447000\",\"o\":\"1EA3D93\",\"s\":\"__wt_page_in_func\"},{\"b\":\"7F2100447000\",\"o\":\"1EC4CB0\",\"s\":\"__wt_row_search\"},{\"b\":\"7F2100447000\",\"o\":\"1E92B32\",\"s\":\"__wt_btcur_insert\"},{\"b\":\"7F2100447000\",\"o\":\"1EE1D54\"},{\"b\":\"7F2100447000\",\"o\":\"125CA69\",\"s\":\"_ZN5mongo23WiredTigerIndexStandard7_insertEP11__wt_cursorRKNS_7BSONObjERKNS_8RecordIdEb\"},{\"b\":\"7F2100447000\",\"o\":\"125E914\",\"s\":\"_ZN5mongo15WiredTigerIndex6insertEPNS_16OperationContextERKNS_7BSONObjERKNS_8RecordIdEb\"},{\"b\":\"7F2100447000\",\"o\":\"C6BA03\",\"s\":\"_ZN5mongo17IndexAccessMethod6insertEPNS_16OperationContextERKNS_7BSONObjERKNS_8RecordIdERKNS_19InsertDeleteOptionsEPl\"},{\"b\":\"7F2100447000\",\"o\":\"A1196A\",\"s\":\"_ZN5mongo12IndexCatalog21_indexFilteredRecordsEPNS_16OperationContextEPNS_17IndexCatalogEntryERKSt6vectorINS_10BsonRecordESaIS6_EEPl\"},{\"b\":\"7F2100447000\",\"o\":\"A11C0D\",\"s\":\"_ZN5mongo12IndexCatalog13_indexRecordsEPNS_16OperationContextEPNS_17IndexCatalogEntryERKSt6vectorINS_10BsonRecordESaIS6_EEPl\"},{\"b\":\"7F2100447000\",\"o\":\"A11CE3\",\"s\":\"_ZN5mongo12IndexCatalog12indexRecordsEPNS_16OperationContextERKSt6vectorINS_10BsonRecordESaIS4_EEPl\"},{\"b\":\"7F2100447000\",\"o\":\"9E9A51\",\"s\":\"_ZN5mongo10Collection16_insertDocumentsEPNS_16OperationContextEN9__gnu_cxx17__normal_iteratorIPKNS_7BSONObjESt6vectorIS5_SaIS5_EEEESB_bPNS_7OpDebugE\"},{\"b\":\"7F2100447000\",\"o\":\"9E9CB6\",\"s\":\"_ZN5mongo10Collection15insertDocumentsEPNS_16OperationContextEN9__gnu_cxx17__normal_iteratorIPKNS_7BSONObjESt6vectorIS5_SaIS5_EEEESB_PNS_7OpDebugEbb\"},{\"b\":\"7F2100447000\",\"o\":\"F85D51\",\"s\":\"_ZN5mongo4repl21applyOperation_inlockEPNS_16OperationContextEPNS_8DatabaseERKNS_7BSONObjEbSt8functionIFvvEE\"},{\"b\":\"7F2100447000\",\"o\":\"10669A4\",\"s\":\"_ZNSt17_Function_handlerIFN5mongo6StatusEPNS0_16OperationContextEPNS0_8DatabaseERKNS0_7BSONObjEbSt8functionIFvvEEEPSC_E9_M_invokeERKSt9_Any_dataOS3_OS5_S8_ObOSB_\"},{\"b\":\"7F2100447000\",\"o\":\"105CE92\"},{\"b\":\"7F2100447000\",\"o\":\"105FB5E\",\"s\":\"_ZN5mongo4repl8SyncTail9syncApplyEPNS_16OperationContextERKNS_7BSONObjEbSt8functionIFNS_6StatusES3_PNS_8DatabaseES6_bS7_IFvvEEEES7_IFS8_S3_S6_bEESC_\"},{\"b\":\"7F2100447000\",\"o\":\"10605BD\",\"s\":\"_ZN5mongo4repl8SyncTail9syncApplyEPNS_16OperationContextERKNS_7BSONObjEb\"},{\"b\":\"7F2100447000\",\"o\":\"10606CD\"},{\"b\":\"7F2100447000\",\"o\":\"106606E\",\"s\":\"_ZN5mongo4repl22multiSyncApply_noAbortEPNS_16OperationContextEPSt6vectorIPKNS0_10OplogEntryESaIS6_EESt8functionIFNS_6StatusES2_RKNS_7BSONObjEbEE\"},{\"b
\":\"7F2100447000\",\"o\":\"1066663\",\"s\":\"_ZN5mongo4repl14multiSyncApplyEPSt6vectorIPKNS0_10OplogEntryESaIS4_EEPNS0_8SyncTailE\"},{\"b\":\"7F2100447000\",\"o\":\"105BC7F\"},{\"b\":\"7F2100447000\",\"o\":\"105CD6A\"},{\"b\":\"7F2100447000\",\"o\":\"14F1F1C\",\"s\":\"_ZN5mongo10ThreadPool10_doOneTaskEPSt11unique_lockISt5mutexE\"},{\"b\":\"7F2100447000\",\"o\":\"14F29CC\",\"s\":\"_ZN5mongo10ThreadPool13_consumeTasksEv\"},{\"b\":\"7F2100447000\",\"o\":\"14F33B6\",\"s\":\"_ZN5mongo10ThreadPool17_workerThreadBodyEPS0_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE\"},{\"b\":\"7F2100447000\",\"o\":\"1FEFD40\"},{\"b\":\"7F20FF08F000\",\"o\":\"7DC5\"},{\"b\":\"7F20FECCE000\",\"o\":\"F61CD\",\"s\":\"clone\"}],\"processInfo\":{ \"mongodbVersion\" : \"3.4.10\", \"gitVersion\" : \"078f28920cb24de0dd479b5ea6c66c644f6326e9\", \"compiledModules\" : [], \"uname\" : { \"sysname\" : \"Linux\", \"release\" : \"3.10.0-327.el7.x86_64\", \"version\" : \"#1 SMP Thu Oct 29 17:29:29 EDT 2015\", \"machine\" : \"x86_64\" }, \"somap\" : [ { \"b\" : \"7F2100447000\", \"elfType\" : 3, \"buildId\" : \"94C7FAB092E567C9338D13DB9B68751363D15EFD\" }, { \"b\" : \"7FFF283D9000\", \"elfType\" : 3, \"buildId\" : \"17A121B1F7BBB010F54735FFDE3347B27B33884D\" }, { \"b\" : \"7F20FFFB6000\", \"path\" : \"/lib64/libssl.so.10\", \"elfType\" : 3, \"buildId\" : \"8DB4545998776514159031B754BB67F7F396F83A\" }, { \"b\" : \"7F20FFBCF000\", \"path\" : \"/lib64/libcrypto.so.10\", \"elfType\" : 3, \"buildId\" : \"038F79F7C3F6E60C29184B8E70D0B1E62525D64D\" }, { \"b\" : \"7F20FF9C7000\", \"path\" : \"/lib64/librt.so.1\", \"elfType\" : 3, \"buildId\" : \"A1D9E0B471D827008C36FA72BAB34BE08FE54B33\" }, { \"b\" : \"7F20FF7C3000\", \"path\" : \"/lib64/libdl.so.2\", \"elfType\" : 3, \"buildId\" : \"5958E57738366BCC217783F3CD4C836437F7C45F\" }, { \"b\" : \"7F20FF4C1000\", \"path\" : \"/lib64/libm.so.6\", \"elfType\" : 3, \"buildId\" : \"02C4E38A8145201D9C574499CF75132551835CEB\" }, { \"b\" : \"7F20FF2AB000\", \"path\" : \"/lib64/libgcc_s.so.1\", \"elfType\" : 3, \"buildId\" : \"97D5E2F5739B715C3A0EC9F95F7336E232346CA8\" }, { \"b\" : \"7F20FF08F000\", \"path\" : \"/lib64/libpthread.so.0\", \"elfType\" : 3, \"buildId\" : \"FA15B7D2CA650B34E6A0C9AD999BA6625AEC4068\" }, { \"b\" : \"7F20FECCE000\", \"path\" : \"/lib64/libc.so.6\", \"elfType\" : 3, \"buildId\" : \"B0A1DFA62C6AF7AA62487E3C260DC4B9C24D8BF8\" }, { \"b\" : \"7F2100223000\", \"path\" : \"/lib64/ld-linux-x86-64.so.2\", \"elfType\" : 3, \"buildId\" : \"CEB78DAE1EE5B4C544047DC26F88A8E4586A34D2\" }, { \"b\" : \"7F20FEA82000\", \"path\" : \"/lib64/libgssapi_krb5.so.2\", \"elfType\" : 3, \"buildId\" : \"8AB5682155DE13D0916B984306B4E044E216B2EB\" }, { \"b\" : \"7F20FE79D000\", \"path\" : \"/lib64/libkrb5.so.3\", \"elfType\" : 3, \"buildId\" : \"BE8968836D439581B2816CE3827642FCF4B8BF4A\" }, { \"b\" : \"7F20FE599000\", \"path\" : \"/lib64/libcom_err.so.2\", \"elfType\" : 3, \"buildId\" : \"B25574847B066A26CD593C8101DF6779898FF2C2\" }, { \"b\" : \"7F20FE367000\", \"path\" : \"/lib64/libk5crypto.so.3\", \"elfType\" : 3, \"buildId\" : \"F5784ED7E64118BAFE898DBF178DC9E37CBDA4AA\" }, { \"b\" : \"7F20FE151000\", \"path\" : \"/lib64/libz.so.1\", \"elfType\" : 3, \"buildId\" : \"FC37913FB197B822BCDBF3697D061E248698CEC1\" }, { \"b\" : \"7F20FDF42000\", \"path\" : \"/lib64/libkrb5support.so.0\", \"elfType\" : 3, \"buildId\" : \"4BBED12CFDC9647C8771A4B897E0D5A4F217ED7C\" }, { \"b\" : \"7F20FDD3E000\", \"path\" : \"/lib64/libkeyutils.so.1\", \"elfType\" : 3, \"buildId\" : 
\"8CA73C16CFEB9A8B5660015B9223B09F87041CAD\" }, { \"b\" : \"7F20FDB24000\", \"path\" : \"/lib64/libresolv.so.2\", \"elfType\" : 3, \"buildId\" : \"D08CF135D143704DA93E5F025AE6AE6943838F03\" }, { \"b\" : \"7F20FD8FF000\", \"path\" : \"/lib64/libselinux.so.1\", \"elfType\" : 3, \"buildId\" : \"5062031216B995004A297D555D834C0109F7598C\" }, { \"b\" : \"7F20FD69E000\", \"path\" : \"/lib64/libpcre.so.1\", \"elfType\" : 3, \"buildId\" : \"8E3819A80BF876382A6F0CB2A08F82F1742EE8DB\" }, { \"b\" : \"7F20FD479000\", \"path\" : \"/lib64/liblzma.so.5\", \"elfType\" : 3, \"buildId\" : \"61D7D46225E85F144221E1424B87FBF3CB2B9D3F\" } ] }}\n mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x7f21019bf6b1]\n mongod(+0x15778C9) [0x7f21019be8c9]\n mongod(+0x1577DAD) [0x7f21019bedad]\n libpthread.so.0(+0xF100) [0x7f20ff09e100]\n libc.so.6(gsignal+0x37) [0x7f20fed035f7]\n libc.so.6(abort+0x148) [0x7f20fed04ce8]\n mongod(_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj+0x0) [0x7f2100c63267]\n mongod(+0x1288CB6) [0x7f21016cfcb6]\n mongod(__wt_eventv+0x3D7) [0x7f2100c6d576]\n mongod(__wt_err+0x9D) [0x7f2100c6d792]\n mongod(__wt_panic+0x2E) [0x7f2100c6d9f4]\n mongod(__wt_bm_read+0x135) [0x7f21022c9e35]\n mongod(__wt_bt_read+0x1FB) [0x7f21022e3bbb]\n mongod(__wt_page_in_func+0x1303) [0x7f21022ead93]\n mongod(__wt_row_search+0x660) [0x7f210230bcb0]\n mongod(__wt_btcur_insert+0xB62) [0x7f21022d9b32]\n mongod(+0x1EE1D54) [0x7f2102328d54]\n mongod(_ZN5mongo23WiredTigerIndexStandard7_insertEP11__wt_cursorRKNS_7BSONObjERKNS_8RecordIdEb+0x149) [0x7f21016a3a69]\n mongod(_ZN5mongo15WiredTigerIndex6insertEPNS_16OperationContextERKNS_7BSONObjERKNS_8RecordIdEb+0xF4) [0x7f21016a5914]\n mongod(_ZN5mongo17IndexAccessMethod6insertEPNS_16OperationContextERKNS_7BSONObjERKNS_8RecordIdERKNS_19InsertDeleteOptionsEPl+0x1F3) [0x7f21010b2a03]\n mongod(_ZN5mongo12IndexCatalog21_indexFilteredRecordsEPNS_16OperationContextEPNS_17IndexCatalogEntryERKSt6vectorINS_10BsonRecordESaIS6_EEPl+0xDA) [0x7f2100e5896a]\n mongod(_ZN5mongo12IndexCatalog13_indexRecordsEPNS_16OperationContextEPNS_17IndexCatalogEntryERKSt6vectorINS_10BsonRecordESaIS6_EEPl+0x1FD) [0x7f2100e58c0d]\n mongod(_ZN5mongo12IndexCatalog12indexRecordsEPNS_16OperationContextERKSt6vectorINS_10BsonRecordESaIS4_EEPl+0x73) [0x7f2100e58ce3]\n mongod(_ZN5mongo10Collection16_insertDocumentsEPNS_16OperationContextEN9__gnu_cxx17__normal_iteratorIPKNS_7BSONObjESt6vectorIS5_SaIS5_EEEESB_bPNS_7OpDebugE+0x4F1) [0x7f2100e30a51]\n mongod(_ZN5mongo10Collection15insertDocumentsEPNS_16OperationContextEN9__gnu_cxx17__normal_iteratorIPKNS_7BSONObjESt6vectorIS5_SaIS5_EEEESB_PNS_7OpDebugEbb+0x176) [0x7f2100e30cb6]\n mongod(_ZN5mongo4repl21applyOperation_inlockEPNS_16OperationContextEPNS_8DatabaseERKNS_7BSONObjEbSt8functionIFvvEE+0x17F1) [0x7f21013ccd51]\n mongod(_ZNSt17_Function_handlerIFN5mongo6StatusEPNS0_16OperationContextEPNS0_8DatabaseERKNS0_7BSONObjEbSt8functionIFvvEEEPSC_E9_M_invokeERKSt9_Any_dataOS3_OS5_S8_ObOSB_+0x74) [0x7f21014ad9a4]\n mongod(+0x105CE92) [0x7f21014a3e92]\n mongod(_ZN5mongo4repl8SyncTail9syncApplyEPNS_16OperationContextERKNS_7BSONObjEbSt8functionIFNS_6StatusES3_PNS_8DatabaseES6_bS7_IFvvEEEES7_IFS8_S3_S6_bEESC_+0x54E) [0x7f21014a6b5e]\n mongod(_ZN5mongo4repl8SyncTail9syncApplyEPNS_16OperationContextERKNS_7BSONObjEb+0xFD) [0x7f21014a75bd]\n mongod(+0x10606CD) [0x7f21014a76cd]\n mongod(_ZN5mongo4repl22multiSyncApply_noAbortEPNS_16OperationContextEPSt6vectorIPKNS0_10OplogEntryESaIS6_EESt8functionIFNS_6StatusES2_RKNS_7BSONObjEbEE+0xFEE) [0x7f21014ad06e]\n 
mongod(_ZN5mongo4repl14multiSyncApplyEPSt6vectorIPKNS0_10OplogEntryESaIS4_EEPNS0_8SyncTailE+0x73) [0x7f21014ad663]\n mongod(+0x105BC7F) [0x7f21014a2c7f]\n mongod(+0x105CD6A) [0x7f21014a3d6a]\n mongod(_ZN5mongo10ThreadPool10_doOneTaskEPSt11unique_lockISt5mutexE+0x14C) [0x7f2101938f1c]\n mongod(_ZN5mongo10ThreadPool13_consumeTasksEv+0xBC) [0x7f21019399cc]\n mongod(_ZN5mongo10ThreadPool17_workerThreadBodyEPS0_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0x96) [0x7f210193a3b6]\n mongod(+0x1FEFD40) [0x7f2102436d40]\n libpthread.so.0(+0x7DC5) [0x7f20ff096dc5]\n libc.so.6(clone+0x6D) [0x7f20fedc41cd]\n", "text": "Our secondary MongoDB nodes went down with wiredtiger error. we try to restart the servers and it was not started. I don’t know what is it and how to fix it. This project is very urgent, so I hope someone can help me fix this problem. please find errors below.", "username": "veerendra_pulapa" }, { "code": "", "text": "Hi @veerendra_pulapa,Please provide some more details on your environment:Fatal assertions indicate data issues that cannot be automatically recovered. In these situations the recommended approach is normally to Resync from a healthy member of your replica set.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Stennie_X,MongoDB Version 3.4cat /etc/mongod.conf | grep -v ^#\nsystemLog:\ndestination: file\nlogAppend: true\npath: /var/log/mongodb/mongod.log\nstorage:\ndbPath: /data/mongo\njournal:\nenabled: true\nprocessManagement:\nfork: true # fork and run in background\npidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\nnet:\nport: 27017\nbindIp: 127.0.0.1\nsecurity:\nauthorization: “enabled”\nkeyFile: /etc/mongodb_repl_prd.key\nreplication:\nreplSetName: “prd”for secondary servers it is showing unhealthy.", "username": "veerendra_pulapa" }, { "code": "", "text": "Hi Team,what could be the main reason, does secondary mongodb nodes went down?", "username": "veerendra_pulapa" } ]
MongoDB secondary nodes down with WiredTiger error
2022-08-03T10:50:56.935Z
MongoDB secondary nodes down with WiredTiger error
2,652
null
[ "containers" ]
[ { "code": "", "text": "Hi,\nCan’t find any 6.0.0 tags for the official mongo docker image. Any hints on when it will be available? Or if it already is somewhere?Thx // Torben", "username": "Torben_Norling" }, { "code": "", "text": "So, this is where I look:\nMongo official hub", "username": "Torben_Norling" }, { "code": "mongo", "text": "Welcome to the MongoDB community @Torben_Norling !Docker Official Images are maintained by the Docker team and you can find/report issues in their GitHub repo: Issues · docker-library/mongo · GitHub.There is an open issue and PR in review at the moment: Create MongoDB 6.0 Docker image · Issue #552 · docker-library/mongo · GitHub. It looks like the delay is because they are working out how to handle the removal of the mongo shell in 6.0: Add 6.0 release by yosifkit · Pull Request #553 · docker-library/mongo · GitHub.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Aha,\nThx for the quick reply and pointer to the PR.Regards // Torben", "username": "Torben_Norling" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo 6.0.0 docker image
2022-08-03T09:37:17.657Z
Mongo 6.0.0 docker image
8,433
null
[ "aggregation" ]
[ { "code": " [\n {\n '$unwind': {\n 'path': '$Vehicles'\n }\n },\n {\n '$match': {\n 'Vehicles.Manufacturer': 'FORD'\n }\n },\n {\n '$facet': {\n 'makes': [\n {\n '$group': {\n '_id': '$Vehicles.Manufacturer', \n 'count': {\n '$sum': 1\n }\n }\n }\n ]\n }\n },\n {\n '$project': {\n 'makes': {\n '$sortArray': {\n 'input': '$makes', \n 'sortBy': 1\n }\n }\n }\n }\n ]\n", "text": "I’ve got a MongoDB / Nodes aggregation that looks a little like this (there are other values in there, but this is the basic idea).This works fine. But I would also like to pass an unmatched list through. IE an an array of vehicles whose Manufacturer = FORD and an other list of all Manufacturer.Can’t get it to work. Any ideas please?Thanks in advance.", "username": "Andy_Bryan" }, { "code": "{\n '$match': {\n 'Vehicles.Manufacturer': 'FORD'\n }\n }\n", "text": "If you wantto pass an unmatched list throughyou have to remove", "username": "steevej" }, { "code": "", "text": "Thanks, but no, because I need both the matched and unmatched lists.", "username": "Andy_Bryan" }, { "code": "", "text": "Use two different queries maybe?", "username": "sassbalint" }, { "code": "'Vehicles.Manufacturer': 'FORD''_id': '$Vehicles.Manufacturer',", "text": "It will be very very hard, aka impossible, to have unmatched documents available after your $match. So your $match has to be removed. Only documents with'Vehicles.Manufacturer': 'FORD'are available to your next $facet stage.But you have a $facet with'_id': '$Vehicles.Manufacturer',This actually gives you your matched and unmatched list. The matched one is with _id:FORD and the unmatched is all the others. But to get the unmatched one in the $facet you have to remove the $match.", "username": "steevej" } ]
Mongodb aggregation to pass both a matched array and an unmatched array
2022-08-02T13:56:45.117Z
Mongodb aggregation to pass both a matched array and an unmatched array
2,316
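A minimal mongosh sketch of the approach steevej describes in the thread above: drop the early $match so every vehicle reaches the $facet stage, and build the matched and the overall lists as two facets. The collection name dealers and the exact grouping are assumptions for illustration, not the poster's real schema.

```javascript
// Sketch only: assumes a "dealers" collection whose documents embed a
// "Vehicles" array with a "Manufacturer" field, as in the thread.
db.dealers.aggregate([
  { $unwind: { path: "$Vehicles" } },
  {
    $facet: {
      // counts restricted to the matched manufacturer
      fordOnly: [
        { $match: { "Vehicles.Manufacturer": "FORD" } },
        { $group: { _id: "$Vehicles.Manufacturer", count: { $sum: 1 } } }
      ],
      // counts for every manufacturer, matched or not
      allMakes: [
        { $group: { _id: "$Vehicles.Manufacturer", count: { $sum: 1 } } },
        { $sort: { _id: 1 } }
      ]
    }
  }
]);
```

Each facet runs over the full unwound stream, so a single result document carries both the FORD-only counts and the counts for all manufacturers.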
null
[ "aggregation", "queries", "replication" ]
[ { "code": "", "text": "We are having two environments (local1and local2), both environment we are using Mongodb for storing and retrieving data. We have an availability to use aggression with multiple pipelines in both region but the both resultset provides different values.Data available in the both the region are same and aggression query we used to fetch the data is same, but we got the different values.While we inspect the individual pipeline, we are getting a different resultset from the group pipeline. Could you please help on this issue?", "username": "Prakash_Rajendran" }, { "code": "", "text": "In principal the same queries over the same data provides the same result.Except for 2 cases.Note that if you do not sort or sort and have duplicates, your $limit:n will not provide the same documents.If you do not sort or sort and have duplicates, things like $first for your $group will not provide the same documents.", "username": "steevej" }, { "code": "", "text": "Please double check that your data is really the same in the two cases.", "username": "sassbalint" } ]
Aggregation group pipeline issue
2022-08-01T06:44:18.735Z
Aggregation group pipeline issue
1,325
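To make the non-determinism point above concrete, here is a hedged mongosh sketch with made-up collection and field names: adding an explicit $sort (with _id as a tie-breaker) before $group makes accumulators such as $first return the same document in both environments.

```javascript
// Hypothetical "events" collection; none of these names come from the thread.
db.events.aggregate([
  // Deterministic input order: sort on the grouping key plus a unique tie-breaker.
  { $sort: { userId: 1, createdAt: 1, _id: 1 } },
  {
    $group: {
      _id: "$userId",
      firstSeen: { $first: "$createdAt" },
      total: { $sum: 1 }
    }
  },
  // Stable output order so the two result sets can be compared line by line.
  { $sort: { _id: 1 } }
]);
```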
null
[ "aggregation", "change-streams" ]
[ { "code": "", "text": "Hi!I try to monitor how many changes in the change stream have not been processed yet,\nI can count the total amount by aggregate group+count (filtered by collection),\nbut how do I count the number of tokens after the resume token?Thank you.", "username": "Shay_I" }, { "code": "", "text": "Hi @Shay_I and welcome to the community!!The general recommendation is not to the query the oplog directly. Instead you might be able to do by decoding the timestamp from the resumeToken and then querying the oplog based on the timestamp.The following mongosh ticket might prove to be useful in your scenario.Please let us know if you need any further assistance.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to find the resumeToken position in the Oplog
2022-07-13T18:19:55.566Z
How to find the resumeToken position in the Oplog
2,418
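A hedged mongosh sketch of the timestamp-based idea mentioned in the reply above. It assumes you kept the clusterTime of the last change event you processed; the namespace and the Timestamp value are placeholders. Reading local.oplog.rs directly is generally discouraged, so treat this as a monitoring approximation only.

```javascript
// Placeholder: in practice this would be changeEvent.clusterTime of the
// last event your consumer finished processing.
const lastProcessed = Timestamp({ t: 1659540000, i: 1 });

const pending = db.getSiblingDB("local")
  .getCollection("oplog.rs")
  .countDocuments({
    ts: { $gt: lastProcessed },          // oplog entries newer than that event
    ns: "mydb.mycollection"              // limit to one collection's operations
  });

print(`oplog entries not yet processed (approx.): ${pending}`);
```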
null
[ "java", "atlas-device-sync" ]
[ { "code": "public class UserModel extends RealmObject{\n @PrimaryKey\n String _id;\n RealmList<String> contacts;\n}\n", "text": "Im trying to create a RealmList field for one of my RealmObjects in Java but i get an error in realm logs saying that “sync does not support collections of nullable primitives unless using the mixed type”. All the documentations show that that is doable.What am i doing wrong?", "username": "Ibrahim_Abouelseoud" }, { "code": "", "text": "Hi @Ibrahim_Abouelseoud, welcome to the community forum.Did you find a solution for this?What does your backend Realm schema look like?", "username": "Andrew_Morgan" }, { "code": "import io.realm.annotations.PrimaryKey;\nimport io.realm.annotations.Required;\n@PrimaryKey @Required public ObjectId _id;", "text": "****** FINALLY FIXED ******\nThere are multiple steps you have to do and multiple rules you have to follow in order to make this work. First I will cover the rules in order to correctly set up your Java object model, then I will guide you through the steps to create a schema from them.Rules:)\n2. Every other RealmList of for example another type you define in your app (e.g. RealmList{Article} where Article is another one of your Realm Object Classes) you can not put @Required\n3. The @PrimaryKey attribute has to be @Required, for example @PrimaryKey @Required public ObjectId _id;\n4. If your partition key is a synthetic one like ‘_partition’, it should not be one of the attributes of your Realm model class (just leave it away. Realm automatically adds _partition to your schema then!)\n5. Don’t use @NonNull for anythingSteps to create a schema from your Java classes (also called the Realm Object Model):\n(this guide is for Android Studio)", "username": "SirSwagon_N_A" } ]
RealmObject with RealmList<String>
2021-03-25T11:09:34.213Z
RealmObject with RealmList&lt;String&gt;
3,833
null
[ "backup", "migration" ]
[ { "code": "", "text": "I am in the process of planning for a migration.\nI have about 10TB of data in my Transaction database which I want to migrate to an empty database on a new server.\nI know a backup and restore would take a very long time. I also want to minimise downtime as much as possible.Is there any way this can be done with minimal downtime?", "username": "Remaketse_Maphisa" }, { "code": "", "text": "Hello @Remaketse_Maphisa\nin case you are targeting on MongoDB Atlas the Life Migration might be an option to reduce the downtime to a minimum.\nThe Life Migration runs monogmirror in background, plus a lot of extras which makes the migration super easy. Mongomirror may pay off when you are not focussing on Atlas.\nMongopush an unofficial tool might be interesting. I just checked the side, the docker image is not found as of now, hope this is only temporary the tool is great to have at hand…Regards,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "You can use Robocopy to migrate your data from one server to another and FYI Robocopy is a CLI-based tool. also, there are many alternatives to Robocopy that are GUI-based like Goodsync, Gs Richcopy 360, and SyncThing.\nSearch All", "username": "Sarah_Watson" } ]
Best method to migrate TB data from one server to another
2022-02-15T17:29:01.183Z
Best method to migrate TB data from one server to another
6,305
null
[ "flutter" ]
[ { "code": "@RealmModel()\nclass _FirstModel {\n late int first;\n late String something;\n}\n\n@RealmModel()\nclass _SecondModel extends _FirstModel {\n late int second;\n late String secondthing;\n}\n_FirstModel can't be extended", "text": "In Flutter/Dart can we extend the Realm models? For example;I tried this and got error message _FirstModel can't be extended . How can I solve this?", "username": "Tembo_Nyati" }, { "code": "", "text": "Hi,\nRealm does not support model hierarchy yet, so this is not allowed in the Flutter SDK as well.\nWe have plans on supporting this functionality in the future.cheers.", "username": "Lyubomir_Blagoev" }, { "code": "", "text": "That make sense, thank you.", "username": "Tembo_Nyati" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I extend the RealModel classes?
2022-08-03T06:40:52.180Z
How can I extend the RealModel classes?
2,477
https://www.mongodb.com/…d_2_1024x576.png
[]
[ { "code": "", "text": "This error come when i run mongod\n\nScreenshot (228)1920×1080 38.1 KB\n", "username": "shivanium_N_A" }, { "code": "PerformanceTask Manager", "text": "Welcome to the MongoDB community @shivanium_N_A !The error indicates you are trying to run a version of MongoDB that is not compatible with your hardware or O/S.If you are trying to install MongoDB 5.0 or newer, it is possible that an older CPU may not meet the current microarchitecture requirements.Can you please confirm:version of MongoDB server you downloadedyour CPU model (you can find this info in the Performance tab in Windows Task Manager)Thanks,\nStennie", "username": "Stennie_X" } ]
Windows install problem: This app can't run on your PC
2022-08-03T04:38:45.243Z
Windows install problem: This app can&rsquo;t run on your PC
2,765
null
[ "aggregation" ]
[ { "code": "db.mycollection.aggregate([\n{\"$match\":{\n \"create_date_audit\":{\n $gte: ISODate('2022-07-25T18:27:56.084+00:00'),\n $lte: ISODate('2022-07-26T20:15:50.561+00:00')\n }\n}},\n{\"$sort\":{\n _id: -1\n}},\n{\"$group\":{\n _id: {\n notification_id: '$notifId',\n empId: '$empId',\n date: '$date'\n },\n dups: {\n $push: '$_id'\n },\n creationTimestamp: {\n $push: '$create_date'\n },\n count: {\n $sum: 1\n }\n}},\n{\"$match\":{\n _id: {\n $ne: null\n },\n count: {\n $gt: 1\n }\n}},\n{\"$sort\":{\n create_date: -1\n}},\n], { allowDiskUse: true }).forEach(function(doc) { \n db.mycollection.deleteMany({_id : {doc.dups[0]}); \n})```", "text": "I’m trying to remove duplicates of documents, in the collection the collection have millions of records and duplicates data I assume have 100k records. I did use aggregation to delete those duplicates, but it is slow and is not ideal when we deploy it to prod. Is there a better and fast way to remove duplicates in a collection?What I have tried so far:", "username": "Paul_N_A1" }, { "code": "", "text": "Is there a better and fast way to remove duplicates in a collection?I don’t know better than your aggregation query, if this is a one-time process and you want to do this faster then I would suggest you can do this operation on your local machine in the MongoDB server after taking dump of the collection and you can restore it after the successful operation.", "username": "turivishal" }, { "code": "", "text": "As mentioned by @turivishal, if this is a one-time process vs a regular use-case will most likely be different. So it would be nice to know.For continuously preventing duplicates, you might want to create a unique index on notifId,empId and date.I am not sure about your deleteMany. It looks like you might delete only one of the duplicates, because you are deleting _id:doc.dups[0]. It might not be an issue if you know you only have 1 duplicate. But the use of $push makes me think that you might have many.The final $sort seems useless since you do not have a field named create_date after the $group stage.To speedup a little bit, you might remove the count in your $group and $match on doc.dups[1] : { $exists : true }.You do not seem to use the $group field creationTimestamp so you could remove the $push. Unless this is what you want to $sort as the last stage.", "username": "steevej" } ]
MongoDB remove duplicate with millions of records
2022-08-02T06:29:08.139Z
MongoDB remove duplicate with millions of records
12,359
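A sketch of the clean-up steevej outlines above: group on the assumed business key (notifId, empId, date), keep the first _id in each group, delete the rest with $in, and then add a unique index so new duplicates are rejected up front. Field and collection names follow the thread; as suggested in the replies, try it on a restored copy before touching production.

```javascript
// 1) Remove every duplicate except one per (notifId, empId, date) group.
db.mycollection.aggregate([
  { $group: {
      _id: { notifId: "$notifId", empId: "$empId", date: "$date" },
      dups: { $push: "$_id" }
  } },
  { $match: { "dups.1": { $exists: true } } }   // keep only groups with 2+ docs
], { allowDiskUse: true }).forEach(group => {
  group.dups.shift();                           // retain the first _id
  db.mycollection.deleteMany({ _id: { $in: group.dups } });
});

// 2) Prevent future duplicates (only succeeds once the data is clean).
db.mycollection.createIndex(
  { notifId: 1, empId: 1, date: 1 },
  { unique: true }
);
```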
null
[ "java", "indexes", "spring-data-odm" ]
[ { "code": "", "text": "Hello!!!\nI have an application in java springboot, which is a CRUD application that uses mongodb. The thing is, one of my collections has an index, set by the java code with the tag @Indexed. Now I want to create a second index, but when i tag the second field with @Indexed doesn’t work… I like only the first time the collection is created the indexes too… and then once the collection is already created you can not update the indexed by java code anymore… because when I remove the collection, I set the second index and after the deploy, the second index is properly created…Any suggestion?? i can’t remove the collections, and i want to update the indexes by code…Regards,", "username": "Anna_Barrera_Quintanilla" }, { "code": "MongoTemplate#indexOps", "text": "Hello @Anna_Barrera_Quintanilla, welcome to the MongoDB community forum!Any suggestion?? i can’t remove the collections, and i want to update the indexes by code…You can use the Spring Data MongoDB API method MongoTemplate#indexOps to create an index on a collection field programmatically.", "username": "Prasad_Saya" } ]
Index not updated with springboot
2022-08-02T08:15:38.275Z
Index not updated with springboot
3,435
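The Spring-side fix is the MongoTemplate#indexOps call mentioned in the reply; for completeness, the same index can also be created once, directly against the existing collection, from mongosh. The collection and field names below are placeholders, not the poster's actual model.

```javascript
// One-off index creation on a collection that already exists.
db.myCollection.createIndex({ secondField: 1 }, { name: "secondField_1" });

// Confirm both the original and the new index are present.
db.myCollection.getIndexes();
```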
null
[ "replication" ]
[ { "code": "", "text": "Good night to all, now in Brazil!Please, is true the limit of secundaries instances on Replica Set MongoDB Community Edition?Thanks,\nHenrique.", "username": "Henrique_Souza" }, { "code": "", "text": "Welcome to the MongoDB community @Henrique_Souza !General MongoDB Limits and Thresholds are the same for both Community and Enterprise Editions of the server.MongoDB Enterprise Advanced includes the same core developer feature set with additional security, monitoring, and management features.is true the limit of secundaries instances on Replica Set MongoDB Community Edition?Replica sets can have up to 7 voting members and 50 members in total.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks Stennie by your fast answer!", "username": "Henrique_Souza" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Limits on Replica Set - MongoDB Community Edition
2022-08-03T00:57:37.850Z
Limits on Replica Set - MongoDB Community Edition
2,271
null
[ "aggregation", "node-js", "mongoose-odm" ]
[ { "code": "const mongoose = require('mongoose');\nconst userSchema = mongoose.Schema({\n useremail: {\n type: String,\n required: true\n },\n password: {\n type: String,\n required: true\n },\n phone: {\n type: String,\n required: true\n },\n detail: {\n type: mongoose.SchemaTypes.ObjectId,\n ref: 'Detail'\n }\n});\n\nmodule.exports = mongoose.model('User', userSchema, 'users');\nconst mongoose = require('mongoose');\nconst detailsSchema = mongoose.Schema({\n full_name: {\n type: String,\n required: true\n },\n birthday: {\n type: String,\n required: true\n },\n second_contact: {\n type: Number,\n required: false\n },\n gender: {\n type: String,\n required: true\n },\n user: {\n type: mongoose.SchemaTypes.ObjectId,\n ref: 'User',\n required: true\n }\n});\n\nconst express = require('express');\nconst router = express.Router();\nconst Users = require('../models/user');\nconst Detail = require('../models/details');\n\nrouter.post('/register', async(req, res) => {\n const { useremail, password, phone, full_name, birthday, gender } = req.body;\n try {\n let user = await Users.create({ useremail, password, phone });\n Users.findOne({ useremail }).populate('detail');\n\n } catch (err) {\n res.status(422).send(err);\n console.log(err);\n }\n});\n\nmodule.exports = router;\n", "text": "I’m learning MongoDB/Mongoose by the docs and I’m trying to insert the second collection into the first collection, I know how to do it by using $lookup in mongo terminal, but I kind of confuse how to do it the same in mongoose. I’ve already tried populate() but it didn’t work either.First SchemaSecond SchemaThe routeWhat I want is to create a new user along with its detail", "username": "Shrek_Realista" }, { "code": "populate_idnullsave", "text": "I think your “insert collection into collection” statement is not what you want here, and calling it “insert documents into document” would be a better choice.I am not mongoose expert, but it seems populate changes your client-side document, not on the server side. it seems it needs some already created documents with _id fields or returns null if the id does not match. then if your client side population does fine (you can check by printing logs as examples use) then you can use save method to add new updated document back into database.but if you want to create completely new document, this is not the way how you would do it. first start exporting “Details” model as you did for “User” model. then create the details object as you did for the user object but use this user’s id as it is a required field for details object. now you may just “save” them in their respective collections and later use “$lookup” or “populate” to merge when needed.and more on that, if what you want is to embed details into a user document, then you need serious schema design changes.", "username": "Yilmaz_Durmaz" } ]
How can I insert a collection into another collection using mongoose?
2022-08-02T21:21:44.380Z
How can I insert a collection into another collection using mongoose?
10,852
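A sketch of the flow suggested in the reply above, using the schemas from the question: save a Detail document that back-references the new user, store its _id on the user, and only then call populate(). Error handling and validation are trimmed, and the 201 response shape is an assumption.

```javascript
// Inside the /register handler (sketch, not a drop-in replacement).
const user = await Users.create({ useremail, password, phone });

const detail = await Detail.create({
  full_name,
  birthday,
  gender,
  user: user._id               // required back-reference in the Detail schema
});

user.detail = detail._id;      // link the detail document to the user
await user.save();

// populate() now has a real ObjectId to resolve.
const populated = await Users.findOne({ _id: user._id }).populate('detail');
res.status(201).json(populated);
```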
null
[ "java", "atlas-device-sync", "android" ]
[ { "code": "", "text": "I use the Java SDK in Android Studio with an Android App in development for Realm to connect to my Realm App. I am trying to set up the schemas for my collections but it keeps failing. Now I always get the error “client file not found. The server has forgotten about the client-side file presented by the client. This is likely due to using a synchronized realm after terminating and re-enabling sync. Please wipe the file on the client to resume synchronization”. But I can’t find those files. I feel like I searched the whole internet. Please, can someone finally tell me where this file is I have to wipe to resume synchronization? I have an defaultSyncErrorHandler and all the other stuff already. I just need to know where this file is located so I can delete it.", "username": "SirSwagon_N_A" }, { "code": "https://realm.mongodb.com/groups/5fd280184de1015b9826b7b2/apps/60c2660fabba8acd737d45a8/dashboard\n", "text": "Hi, the next time that you reconnect you should client-reset and everything should just work? Is this not happening for you / can you send a link to your application (the URL in the cloud ui)Looks like this:", "username": "Tyler_Kaye" } ]
BadClientFileIdent Error client file not found. The server has forgotten about the client-side file presented by the client. This is likely due to using a synchronized realm after terminating and re-enabling sync. Please wipe the file on the client
2022-08-02T21:02:33.376Z
BadClientFileIdent Error client file not found. The server has forgotten about the client-side file presented by the client. This is likely due to using a synchronized realm after terminating and re-enabling sync. Please wipe the file on the client
2,054
null
[ "react-native" ]
[ { "code": "const result = useQuery('Player').filtered(\n `'clubId == $0 AND _id != $1', userProfile.clubId, userProfile._id`\n);\nlet queryStr = ('clubId == $0 AND _id != $1', userProfile.clubId, userProfile._id);\nconst result = useQuery('Player').filtered(\n queryStr\n);\n", "text": "I have an issue with creating a dynamic query that can be updated by a search bar. The current issue is that I get an “Invalid Predicate” error when passing the filter/query string as a variable vs hardcoding it in. I can figure out why this works:But this gives the invalid predicat eerror:Any help or insight is greatly appreciated.", "username": "Scott_Alphonso" }, { "code": "", "text": "Found a way around it by having a dynamic results value with hardcoded filter string but still curious as to why the original way wont work if anyone knows.", "username": "Scott_Alphonso" } ]
Realm/React useQuery Invalid Predicate
2022-08-02T20:49:30.687Z
Realm/React useQuery Invalid Predicate
2,280
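For reference, a sketch of why the two snippets in the question behave differently and how a dynamic filter can still be parameterised. In plain JavaScript, ('clubId == $0 AND _id != $1', userProfile.clubId, userProfile._id) is the comma operator, so queryStr ends up holding only userProfile._id, which is not a valid predicate. Realm's filtered() accepts the placeholder arguments separately; the name field and the searchText variable below are assumptions, not part of the original model.

```javascript
// Pass the predicate and its arguments separately...
const teammates = useQuery('Player').filtered(
  'clubId == $0 AND _id != $1',
  userProfile.clubId,
  userProfile._id
);

// ...and keep only the predicate text static, with the values supplied as
// arguments, e.g. driven by a search bar:
const results = useQuery('Player').filtered(
  'clubId == $0 AND name BEGINSWITH[c] $1',
  userProfile.clubId,
  searchText
);
```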
null
[ "aggregation", "node-js", "compass" ]
[ { "code": "{\n _id: new ObjectId(\"62e82d326f664edd9b96f33a\"),\n title: 'Test 0',\n description: '...',\n user_id: new ObjectId(\"62e82d326f664edd9b96f330\"),\n entries: [\n new ObjectId(\"62e82d6d0f5748cbefb0a4a3\"),\n new ObjectId(\"62e82f8c1b832b9714edf574\"),\n new ObjectId(\"62e82f8c1b832b9714edf577\"),\n new ObjectId(\"62e82fcc5c3e92e184d5c733\"),\n new ObjectId(\"62e830535297f48e35d47629\")\n ]\n{\n _id: new ObjectId(\"62e830535297f48e35d47629\"),\n beginTime: 2022-08-01T19:58:11.772Z,\n endTime: 2022-08-01T19:58:11.772Z,\n content: [Object],\n mapView: [Object],\n stories: [Array]\n }\n[\n {\n $match: {\n _id: new ObjectId('62e82d326f664edd9b96f33a')\n }\n },\n {\n $lookup: {\n from: 'entries',\n localField: 'entries',\n foreignField: '_id',\n as: 'entries2'\n }\n }\n ]\n{\n _id: new ObjectId(\"62e82d326f664edd9b96f33a\"),\n title: 'Test 0',\n description: '...',\n user_id: new ObjectId(\"62e82d326f664edd9b96f330\"),\n entries: [\n new ObjectId(\"62e82d6d0f5748cbefb0a4a3\"),\n new ObjectId(\"62e82f8c1b832b9714edf574\"),\n new ObjectId(\"62e82f8c1b832b9714edf577\"),\n new ObjectId(\"62e82fcc5c3e92e184d5c733\"),\n new ObjectId(\"62e830535297f48e35d47629\")\n ],\n entries2: [\n {\n _id: new ObjectId(\"62e830535297f48e35d47629\"),\n beginTime: 2022-08-01T19:58:11.772Z,\n endTime: 2022-08-01T19:58:11.772Z,\n content: [Object],\n mapView: [Object],\n stories: [Array]\n }\n ]\n}\n", "text": "Hi,\nI’m using an aggregation to replace ObjectIds in one collection with entries in another collection. The stories collection has the following structure:The entries array are the _ids of the entries in the entries collection:My aggregation is the following:I would expect, that the result (entries2) contains 5 entries, but instead the array has only one (the last element):I get the same result with compass or running it on nodejs.\nAny ideas?Carsten", "username": "Carsten_Bottcher" }, { "code": "mongosh> db.stories.find()\n{ _id: 1, entries: [ 2, 3, 4 ] }\nmongosh> db.entries.find()\n{ _id: 2, title: 'two' }\n{ _id: 3, title: 'three' }\n{ _id: 4, title: 'four' }\nmongosh> db.stories.aggregate( {\n $lookup: {\n from: 'entries',\n localField: 'entries',\n foreignField: '_id',\n as: 'entries2'\n }\n })\n{ _id: 1,\n entries: [ 2, 3, 4 ],\n entries2: \n [ { _id: 2, title: 'two' },\n { _id: 3, title: 'three' },\n { _id: 4, title: 'four' } ] }\n", "text": "Your $lookup should work.Sometimes, we think the documents are in the $lookup-ed up collection because we forget to check if the data type is the same. It is possible that in one collection you a string and in the other collection you have an ObjectId.This or the entries are really not in the collection.", "username": "steevej" }, { "code": "", "text": "Hi Steeve,\nthank you for your answer. I re-generated my test data and now the aggregation works as expected. Unclear what was wrong, but I discovered that the Aggregation view in Compass had old data shown, so maybe I used old IDs.\nThanks anyway.", "username": "Carsten_Bottcher" } ]
Lookup for array returns only one element
2022-08-02T17:31:02.720Z
Lookup for array returns only one element
2,330
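A small mongosh check in the spirit of the advice above: compare the stored references with what $lookup actually resolved, which surfaces ids that are missing from entries or stored with a different BSON type. Collection names and the ObjectId follow the thread.

```javascript
db.stories.aggregate([
  { $match: { _id: ObjectId("62e82d326f664edd9b96f33a") } },
  { $lookup: {
      from: "entries",
      localField: "entries",
      foreignField: "_id",
      as: "entries2"
  } },
  { $project: {
      referenced: { $size: "$entries" },
      resolved:   { $size: "$entries2" },
      // references listed on the story but not found in the entries collection
      unresolved: { $setDifference: ["$entries", "$entries2._id"] },
      // BSON types of the stored references; anything not "objectId" is suspect
      refTypes:   { $map: { input: "$entries", as: "e", in: { $type: "$$e" } } }
  } }
]);
```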
null
[ "production", "field-encryption", "c-driver" ]
[ { "code": "", "text": "No changes since 1.22.0. Version incremented to match the libmongoc version.Bug fixes:Fix documentation build when using Sphinx 5.0 or newerUpdate patch release of libmongocrypt to 1.5.2: Fix a potential data\ncorruption bug in RewrapManyDataKey when rotating encrypted data encryption\nkeys backed by GCP or Azure key services.The following conditions will trigger this bug:A GCP-backed or Azure-backed data encryption key being rewrapped requires\nfetching an access token for decryption of the data encryption key.The result of this bug is that the key material for all data encryption keys\nbeing rewrapped is replaced by new randomly generated material, destroying\nthe original key material.To mitigate potential data corruption, upgrade to this version or higher\nbefore using RewrapManyDataKey to rotate Azure-backed or GCP-backed data\nencryption keys. A backup of the key vault collection should always be taken\nbefore key rotation.Other:Thanks to everyone who contributed to this release.", "username": "Colby_Pike" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB C Driver 1.22.1 Released
2022-08-02T18:59:33.399Z
MongoDB C Driver 1.22.1 Released
2,528
null
[ "kafka-connector" ]
[ { "code": "", "text": "Hello. I have used mongodb sink connector for writing the message into database. I integrated with Create and update that’s working fine. I wish I want to delete the document completely based on document fields. I have given below curl configuration it’s not working properly. Please, pointout what i have done wrong…\ncurl -X POST -H “Content-Type: application/json” -d '{“name”:“test-testing-delete”,“config”:{“topics”:“movies”,“connector.class”:“com.mongodb.kafka.connect.MongoSinkConnector”,“tasks.max”:“1”,“connection.uri”:“mongodb://localhost:27017”,“database”:“flower”,“collection”:“movies”,“key.converter”:“org.apache.kafka.connect.storage.StringConverter”,“value.converter”:“org.apache.kafka.connect.storage.StringConverter”,“key.converter.schemas.enable”:“false”,“value.converter.schemas.enable”:“false”,“document.id.strategy.overwrite.existing”:true,“document.id.strategy”:“com.mongodb.kafka.connect.sink.processor.id.strategy.PartialKeyStrategy”,“document.id.strategy.partial.key.projection.list”:“id”,“document.id.strategy.partial.key.projection.type”:“ALLOWLIST”,“writemodel.strategy”:“com.mongodb.kafka.connect.sink.writemodel.strategy.DeleteOneBusinessKeyStrategy”}}’ localhost:8083/connectors", "username": "Sherlin_Susanna" }, { "code": " \"document.id.strategy.partial.value.projection.type\": \"AllowList\",\n \"document.id.strategy.partial.value.projection.list\": \"id\",\n", "text": "Here is an example of this exact scenario MongoDB Connector for Apache Kafka 1.5 Available Now | MongoDB BlogIm not sure what your source data in the topic looks like, but it looks like you are using the key project, perhaps try the value", "username": "Robert_Walters" }, { "code": "", "text": "Hi. Actually , My implementation also same which you have shared with me. some times, update and delete working fine sometimes its not working as i expected. 
I have posted my config below, please point out what I did wrong?,\nUpdate config,\ncurl -X POST -H “Content-Type: application/json” -d ‘{“name”:“test-testing-update11”,\n“config”:{“topics”:“employees”,\n“connector.class”:“com.mongodb.kafka.connect.MongoSinkConnector”,\n“tasks.max”:“1”,\n“connection.uri”:“mongodb://localhost:27017”,\n“database”:“flower”,\n“collection”:“employees”,\n“key.converter”:“org.apache.kafka.connect.storage.StringConverter”,\n“value.converter”:“org.apache.kafka.connect.storage.StringConverter”,\n“key.converter.schemas.enable”:“false”,\n“value.converter.schemas.enable”:“false”,\n“document.id.strategy.overwrite.existing”:true,\n“document.id.strategy”:“com.mongodb.kafka.connect.sink.processor.id.strategy.PartialValueStrategy”,\n“document.id.strategy.partial.value.projection.list”:“id”,\n“document.id.strategy.partial.value.projection.type”:“AllowList”,\n“writemodel.strategy”:“com.mongodb.kafka.connect.sink.writemodel.strategy.UpdateOneTimestampsStrategy”\n}}’ localhost:8083/connectorsDelete config,\ncurl -X POST -H “Content-Type: application/json” -d ‘{“name”:“test-testing-delete36”,\n“config”:{“topics”:“subjects”,\n“connector.class”:“com.mongodb.kafka.connect.MongoSinkConnector”,\n“tasks.max”:“1”,\n“connection.uri”:“mongodb://localhost:27017”,\n“database”:“flower”,\n“collection”:“subjects”,\n“key.converter”:“org.apache.kafka.connect.storage.StringConverter”,\n“value.converter”:“org.apache.kafka.connect.storage.StringConverter”,\n“key.converter.schemas.enable”:“false”,\n“value.converter.schemas.enable”:“false”,\n“document.id.strategy”:“com.mongodb.kafka.connect.sink.processor.id.strategy.PartialValueStrategy”,\n“document.id.strategy.partial.value.projection.list”:“class”,\n“document.id.strategy.partial.value.projection.type”:“AllowList”,\n“writemodel.strategy”:“com.mongodb.kafka.connect.sink.writemodel.strategy.DeleteOneBusinessKeyStrategy”\n}}’ localhost:8083/connectors", "username": "Sherlin_Susanna" }, { "code": "class", "text": "If it is sometimes working, my guess is the data that is on the kafka topic doesn’t include the key value pair that contains the class field.", "username": "Robert_Walters" }, { "code": "", "text": "data will be like this {“id”:“ESC001”,“class”:“2”}\nIn distributed.properties file except plugin.path what should I include?", "username": "Sherlin_Susanna" }, { "code": "", "text": "Could you please give the implementation link for update?. because, whenever I am giving curl command its inserting onemore record with new object id and _id field. 
I don’t know what I am doing wrong,\nUpdate Curl command is below,curl -X POST -H “Content-Type: application/json” -d ‘{“name”:“test-testing-update1”,\n“config”:{“topics”:“products”,\n“connector.class”:“com.mongodb.kafka.connect.MongoSinkConnector”,\n“tasks.max”:“1”,\n“connection.uri”:“mongodb://localhost:27017”,\n“database”:“flower”,\n“collection”:“quickstart”,\n“key.converter”:“org.apache.kafka.connect.storage.StringConverter”,\n“value.converter”:“org.apache.kafka.connect.storage.StringConverter”,\n“key.converter.schemas.enable”:“false”,\n“value.converter.schemas.enable”:“false”,\n“document.id.strategy”:“com.mongodb.kafka.connect.sink.processor.id.strategy.PartialValueStrategy”,\n“document.id.strategy.partial.value.projection.type”:“AllowList”,\n“document.id.strategy.partial.value.projection.list”:“p_id”,\n“writemodel.strategy”:“com.mongodb.kafka.connect.sink.writemodel.strategy.UpdateOneTimestampsStrategy”,\n“document.id.strategy.overwrite.existing”:true\n}}’ localhost:8083/connectors", "username": "Sherlin_Susanna" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb connector - delete document by using sink connector
2022-08-01T06:07:21.444Z
Mongodb connector - delete document by using sink connector
4,049
null
[ "node-js", "connecting" ]
[ { "code": "import { MongoClient, MongoClientOptions } from \"mongodb\"\n\n// @TODO should be typed properly, typing HAVE to be improves\nconst uri = process.env.MONGODB_URI\nconst dbName = process.env.MONGODB_DB\n\nlet cachedClient: any = null\nlet cachedDb: any = null\n\nexport async function connectToDatabase() {\n // check the cached.\n if (cachedClient && cachedDb) {\n // load from cache\n return {\n client: cachedClient,\n db: cachedDb,\n }\n }\n\n // set the connection options\n const opts = {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n }\n\n // Connect to cluster\n let client = new MongoClient(uri!, opts as MongoClientOptions)\n await client.connect()\n let db = client.db(dbName)\n\n // set cache\n cachedClient = client\n cachedDb = db\n\n return {\n client: cachedClient,\n db: cachedDb,\n }\n}\nMONGODB_URI=mongodb+srv://admin:[email protected]/?retryWrites=true&w=majority\nMONGODB_DB=dbname\n", "text": "The database was previously stored on Atlas and now it needs to be transferred to a local server, since there is no access to it due to company restrictions. I’m trying to change the code of the node js application to connect to db but so far without success.I created a local mongo database and I can connect to it through mongosh.\nBut I can’t connect node js application to this database.What i must to change or add into code to connect to local mongodb?\nEvery time my web application writes - “there is no connection to the database”.Working for remote DB on Atlas:\nconnection to db: db → connection → mongodb.tsI have db ENVs in file .env.local + in DockerfileAnd now I created a local mongo database on the same host (localhost) and node js web application can’t connect to it.", "username": "Orange_Banana" }, { "code": "console.log(process.env.MONGODB_URI)", "text": "is it still writing to Atlas? Maybe your app is caching the connection? Try restarting the NodeJS app and add a console.log(process.env.MONGODB_URI) to verify its getting the right value.", "username": "Robert_Walters" }, { "code": "MONGODB_URImongodb+srv://admin:[email protected]/?retryWrites=true&w=majority", "text": "If yourMONGODB_URIlooks likemongodb+srv://admin:[email protected]/?retryWrites=true&w=majorityThen you are still trying to connect to Atlas. You are not trying to connect tolocal mongo databaseVisit documentation to see how to connect localhost.", "username": "steevej" }, { "code": "mongodb+srv://admin:[email protected]/?retryWrites=true&w=majoritymongodb://localhostmongodb://admin:pass@localhost", "text": "Then you are still trying to connect to Atlas. You are not trying to connect toSorry I’m not indicate that this is previous connection settings\nmongodb+srv://admin:[email protected]/?retryWrites=true&w=majorityAnd this one is new local connection settings:\n1st variant without login and pass\nmongodb://localhost\n2nd variant with credentials\nmongodb://admin:pass@localhost", "username": "Orange_Banana" }, { "code": "console.log(process.env.MONGODB_URI)", "text": "is it still writing to Atlas? Maybe your app is caching the connection? Try restarting the NodeJS app and add a console.log(process.env.MONGODB_URI) to verify its getting the right value.I run the app and database through docker-compose every time, so there should be any cache.", "username": "Orange_Banana" }, { "code": "", "text": "Is your local mongodb a replica set?", "username": "Robert_Walters" }, { "code": "ss -tlnp\nps -aef | grep mongo\ndocker ps\n", "text": "Please share any error messages you get. 
The message “there is no connection to the database” is one of your application's messages. It would be nice to see the code that prints it. Share the command you use to connect to it through mongosh. Share the output of the commands above. What do you get if you remove the ! from uri! in your call to new MongoClient?", "username": "steevej" }, { "code": "", "text": "The problem is solved.In order for the web application to connect to the local database, I added lines to the Dockerfile with the installation of mongo for Debian 10.I also used everywhere the service name that I specified in docker-compose - “mongo” - instead of localhost.After that, there is a connection and everything works.It remains to configure it so that new data persists after a reboot. But this is another topic.Thanks to everyone who answered!", "username": "Orange_Banana" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to change the code so that the nodejs connects to a local database instead of a remote one on Atlas?
2022-08-01T14:08:26.801Z
How to change the code so that the nodejs connects to a local database instead of a remote one on Atlas?
3,417
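For completeness, a sketch of the change the thread converges on: point the connection string at the docker-compose service name (assumed here to be mongo) instead of the Atlas SRV host, and verify connectivity with a ping. The credentials and authSource shown are assumptions.

```javascript
// .env for the containerised app (service name "mongo" comes from docker-compose):
//   MONGODB_URI=mongodb://admin:pass@mongo:27017/?authSource=admin
//   MONGODB_DB=dbname

const { MongoClient } = require("mongodb");

async function checkConnection() {
  const client = new MongoClient(process.env.MONGODB_URI);
  await client.connect();
  // ping proves the app container can actually reach the mongo service
  const result = await client.db(process.env.MONGODB_DB).command({ ping: 1 });
  console.log("connected:", result.ok === 1);
  await client.close();
}

checkConnection().catch(console.error);
```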
null
[ "database-tools", "backup" ]
[ { "code": "mongorestore --gzip --drop --nsInclude=\"*\" --archive=/./././code/backups/notura-db-1659281356578.gzip -v2022-07-31T23:55:19.574+0200 using write concern: &{majority false 0}\n2022-07-31T23:55:19.606+0200 will listen for SIGTERM, SIGINT, and SIGKILL\n2022-07-31T23:55:19.607+0200 connected to node type: standalone\n2022-07-31T23:55:19.619+0200 archive format version \"0.1\"\n2022-07-31T23:55:19.619+0200 archive server version \"5.0.3\"\n2022-07-31T23:55:19.619+0200 archive tool version \"100.5.1\"\n2022-07-31T23:55:19.620+0200 preparing collections to restore from\n2022-07-31T23:55:19.632+0200 demux finishing (err:<nil>)\n2022-07-31T23:55:19.632+0200 received from namespaceChan\n2022-07-31T23:55:19.632+0200 restoring up to 4 collections in parallel\n2022-07-31T23:55:19.632+0200 building indexes up to 4 collections in parallel\n2022-07-31T23:55:19.632+0200 0 document(s) restored successfully. 0 document(s) failed to restore.\n", "text": "after many hours scouring the internet (including on Stackoverflow), checking the documentation back and forth of the official MongoDB website. I still did not find the solution why my archived file with mongodump is not restored with the following commandsmongorestore --gzip --drop --nsInclude=\"*\" --archive=/./././code/backups/notura-db-1659281356578.gzip -vI get the following output:I do not get any error message, nor does my database getting updated.I am running MongoDB 5.0.3 CommunityAny help is highly appreciated!Aron", "username": "Aron_Bijl" }, { "code": "", "text": "How was the dump created?\nWhat command was used?", "username": "Ramachandra_Tummala" }, { "code": "mongodump --archive=/./././backups/notura-db-1659281356578.gzip --db=notura-db --gzip", "text": "I used the following command to dump the library:mongodump --archive=/./././backups/notura-db-1659281356578.gzip --db=notura-db --gzip", "username": "Aron_Bijl" }, { "code": "", "text": "Is your mongodump and mongorestore of same version?\nIs your source DB version and target DB version same?\nDid you try without gzip option?\nI tested it on my Windows mongodb version 4.0.5.Works fine with both archive and gzip\nNote:I am not using mongodb tools.In old versions the utility tools used to come with mongodb installation", "username": "Ramachandra_Tummala" }, { "code": "mongodump --uri=\"mongodb://localhost:27017\" --db=\"notura-recipies\" --out=/./././code/backups/mongorestore --drop /./././code/backups/notura-recipies -v2022-08-01T20:09:35.949+0200 using write concern: &{majority false 0}\n2022-08-01T20:09:35.969+0200 will listen for SIGTERM, SIGINT, and SIGKILL\n2022-08-01T20:09:35.970+0200 connected to node type: standalone\n2022-08-01T20:09:35.971+0200 mongorestore target is a directory, not a file\n2022-08-01T20:09:35.971+0200 preparing collections to restore from\n2022-08-01T20:09:35.971+0200 don't know what to do with file \"/./././code/backups/notura-recipies/recipes.bson\", skipping...\n2022-08-01T20:09:35.971+0200 don't know what to do with file \"/./././code/backups/notura-recipies/recipes.metadata.json\", skipping...\n2022-08-01T20:09:35.971+0200 don't know what to do with file \"/./././code/backups/notura-recipies/users.bson\", skipping...\n2022-08-01T20:09:35.971+0200 don't know what to do with file \"/./././code/backups/notura-recipies/users.metadata.json\", skipping...\n2022-08-01T20:09:35.971+0200 restoring up to 4 collections in parallel\n2022-08-01T20:09:35.971+0200 building indexes up to 4 collections in parallel\n2022-08-01T20:09:35.971+0200 0 document(s) 
restored successfully. 0 document(s) failed to restore.\n", "text": "Thanks for your help. I have altered a few things to be sure, based on the feedback you have given me.I changed the the dump to:\nmongodump --uri=\"mongodb://localhost:27017\" --db=\"notura-recipies\" --out=/./././code/backups/This works, it dumps the folder with the files in it (2 .bson and 2 .json files), then I altered the restore to the following mongorestore instruction:mongorestore --drop /./././code/backups/notura-recipies -vThe verbose output is a bit different though, but still the same end result:It seems like that the mongorestore is somehow unable to read in the files - sigh -", "username": "Aron_Bijl" }, { "code": "", "text": "Please share the content of the 2 .json files.Please redo the mongodump with --verbose and share the output.Note that you are doing something very risky. You do mongorestore --drop into the same database of the same server for which you did mongodump. I think you might lose your data if something goes wrong. You might want to use --dryrun until you are certain of what you are doing.If the collections are not too big, try mongoexport with --verbose and share the output.Why do you start your path with/././.In the examples I saw, the path specified in mongorestore must match the path specified in mongodump. You seem to add the /notura-recipies in mongorestore compared to mongodump, so you might start mongorestore one directory too deep.Since you mongodump and mongorestore into the same server, you might want to use --nsFrom and --nsTo to restore into a different database.", "username": "steevej" }, { "code": "{\"indexes\":[{\"v\":{\"$numberInt\":\"2\"},\"key\":{\"_id\":{\"$numberInt\":\"1\"}},\"name\":\"_id_\"},{\"v\":{\"$numberInt\":\"2\"},\"key\":{\"title\":{\"$numberInt\":\"1\"}},\"name\":\"title_1\",\"background\":true,\"unique\":true},{\"v\":{\"$numberInt\":\"2\"},\"key\":{\"slug\":{\"$numberInt\":\"1\"}},\"name\":\"slug_1\",\"background\":true,\"unique\":true}],\"uuid\":\"45ee68c614634f2c8b3d7c3abe5343c2\",\"collectionName\":\"recipes\",\"type\":\"collection\"}{\"indexes\":[{\"v\":{\"$numberInt\":\"2\"},\"key\":{\"_id\":{\"$numberInt\":\"1\"}},\"name\":\"_id_\"},{\"v\":{\"$numberInt\":\"2\"},\"key\":{\"email\":{\"$numberInt\":\"1\"}},\"name\":\"email_1\",\"background\":true,\"unique\":true}],\"uuid\":\"ee929337aa1b4da1a2f447775d11726b\",\"collectionName\":\"users\",\"type\":\"collection\"}mongoexport --uri=\"mongodb://localhost:27017\" --collection=\"recipes\" --db=\"notura-recipies\" --out /Users/aron/Desktop/code/backups/recipes.json -v2022-08-02T09:05:24.414+0200 will listen for SIGTERM, SIGINT, and SIGKILL\n2022-08-02T09:05:24.430+0200 connected to: mongodb://localhost:27017\n2022-08-02T09:05:24.437+0200 exported 6 records\n --nsFrom \"dbname.*\" \n --nsTo \"new_dbname.*\" \n", "text": "Thanks, these are good questions! 
the two .json files consist of the following information:Recipes metadata:\n{\"indexes\":[{\"v\":{\"$numberInt\":\"2\"},\"key\":{\"_id\":{\"$numberInt\":\"1\"}},\"name\":\"_id_\"},{\"v\":{\"$numberInt\":\"2\"},\"key\":{\"title\":{\"$numberInt\":\"1\"}},\"name\":\"title_1\",\"background\":true,\"unique\":true},{\"v\":{\"$numberInt\":\"2\"},\"key\":{\"slug\":{\"$numberInt\":\"1\"}},\"name\":\"slug_1\",\"background\":true,\"unique\":true}],\"uuid\":\"45ee68c614634f2c8b3d7c3abe5343c2\",\"collectionName\":\"recipes\",\"type\":\"collection\"}Users metadata:\n{\"indexes\":[{\"v\":{\"$numberInt\":\"2\"},\"key\":{\"_id\":{\"$numberInt\":\"1\"}},\"name\":\"_id_\"},{\"v\":{\"$numberInt\":\"2\"},\"key\":{\"email\":{\"$numberInt\":\"1\"}},\"name\":\"email_1\",\"background\":true,\"unique\":true}],\"uuid\":\"ee929337aa1b4da1a2f447775d11726b\",\"collectionName\":\"users\",\"type\":\"collection\"}Nope, it is a small collection. I used mongoexport:\nmongoexport --uri=\"mongodb://localhost:27017\" --collection=\"recipes\" --db=\"notura-recipies\" --out /Users/aron/Desktop/code/backups/recipes.json -vI have two collections, I exported one here. The other one would be pretty much the same.Output is:I started with /././. to remove the actual path, since that is not important to know and improve readability. On the actual server, this is shorter anyway. So I thought. Sorry if this confused you.The reason why I went one layer deeper is that in backups there will be multiple backups, with nodeJS I select the latest one and will feed that to my database in case of an accident. The other ones I can download with something like FileZilla.But you re right about this, after removing notura-recipies the files got restored! You mean that I should rename the database from one to. new one, with:My idea was to pluck the latest archived backup in the folder and restore that, without any duplications. Especially when these files get larger.How can I best do this? I clearly did not do it right - sigh -Thanks for your help guys, this helps me tremendously!", "username": "Aron_Bijl" }, { "code": " --nsFrom \"dbname.*\" \n --nsTo \"new_dbname.*\" \n", "text": "You mean that I should rename the database from one to. new one, with:That is what I meant but for your use case (restoring from backups) you do not need that as you really want to overwrite what is there.To handle the following:in backups there will be multiple backupsYou need to use https://www.mongodb.com/docs/database-tools/mongorestore/#std-option-mongorestore.--nsInclude", "username": "steevej" }, { "code": "", "text": "Thank you! With the information you have given me, I will be able to correct the issue. Which is making sure that it is being dumped correctly before it can be restored correctly!", "username": "Aron_Bijl" } ]
Mongorestore --gzip --archive failed w/ - 0 document(s) restored successfully. 0 document(s) failed
2022-08-01T06:47:41.082Z
Mongorestore --gzip --archive failed w/ - 0 document(s) restored successfully. 0 document(s) failed
5,881
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "[\n {\n \"day\": \"Monday\",\n \"startTime\": \"09:00\",\n \"endTime\": \"17:00\",\n },\n {... //Other days}\n]\nlet startTime = await Booking.aggregate([\n {\n $match: { _id: new mongoose.Types.ObjectId(req.params.id) },\n },\n {\n $lookup: {\n from: \"parkings\",\n localField: \"parkingId\",\n foreignField: \"_id\",\n as: \"parking\",\n },\n },\n {\n $unwind: \"$parking\",\n },\n { $unwind: \"$parking.availability\" },\n ]);\n[{\n _id: new ObjectId(\"62e8ac3943a6bfaf80de75b5\"),\n parkingId: new ObjectId(\"62e11ab3079daa939290fa07\"),\n user: new ObjectId(\"62c950dfc96c2b690028be88\"),\n date: 2022-07-26T00:00:00.000Z,\n startTime: 2022-07-26T09:30:00.000Z,\n duration: { days: 0, hours: 2, minutes: 0 },\n endTime: 2022-07-26T11:30:00.000Z,\n isFeePaid: false,\n status: 'sent',\n isBookingCancelled: { value: false },\n isStarted: false,\n isEnabled: false,\n parking: {\n _id: new ObjectId(\"62e11ab3079daa939290fa07\"),\n userId: new ObjectId(\"62c950dfc96c2b690028be88\"),\n contactInfo: [Object],\n about: 'Nisi occaecat ipsum',\n parkingImage: [],\n location: [Array],\n price: 5,\n availability: [Object],\n parkingType: 'residence',\n parkingInfo: [Array],\n totalSpots: 10,\n coordinates: [Object],\n status: 'active',\n isFeePaid: false,\n }\n },\n]\n", "text": "I have 2 tables booking and parking. In the booking table there is parkingId which refers to parking model. In the parking model there is a field named availability which is following:I want to unwind the availability array.I have tried this:But it logs the result 6 times because there are 6 objects in the availability array like following.I want only availability of the requested day only. How can I get this?", "username": "Mitul_Kheni" }, { "code": "", "text": "You need a $set stage that uses $filter on the array.", "username": "steevej" } ]
Unwind array of array in mongoDB
2022-08-02T11:19:58.480Z
Unwind array of array in mongoDB
1,128
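The accepted suggestion in the thread above is a single sentence, so here is a minimal sketch of what a $set stage with $filter could look like for the sample documents shown. The collection name, the hard-coded booking _id and the literal day "Monday" are illustrative (the thread itself goes through a Mongoose Booking model):

```javascript
// Minimal sketch: keep only the availability entry for the requested day,
// instead of unwinding all six entries of parking.availability.
db.bookings.aggregate([
  { $match: { _id: ObjectId("62e8ac3943a6bfaf80de75b5") } },
  {
    $lookup: {
      from: "parkings",
      localField: "parkingId",
      foreignField: "_id",
      as: "parking",
    },
  },
  { $unwind: "$parking" },
  {
    $set: {
      "parking.availability": {
        $filter: {
          input: "$parking.availability",
          as: "slot",
          cond: { $eq: ["$$slot.day", "Monday"] },
        },
      },
    },
  },
]);
```

The $filter condition could equally compare against a day derived from the booking's date field rather than a literal string.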
https://www.mongodb.com/…b_2_1024x473.png
[ "atlas-cluster", "database-tools" ]
[ { "code": "mongoexport --uri=\"mongodb+srv://preprod-shippingservice.xxxxx.mongodb.net/\" --username=chase --db=iHerb_GLS --collection=Gls_Pickup_Location --query=\"{\"CarrierName\":\"Gaash\"}\" --type=json --out=B:\\MongoExport\\Itsd261730_BackupPreprod.json\n", "text": "For the life of me I cannot get MongoExport with --query to work. Ive tried many variations of single and double quotes and it just wont work, not sure whats wrong.Here is the commandAnd then examples of trying a mix of queries and variations of double and single quotes\n\nimage1362×630 174 KB\n", "username": "Chase_Russell" }, { "code": "--query='{\"CarrierName\" : \"Gaash\" }' should work.\n", "text": "The version withI would try with version latest 100.5.4 version.", "username": "steevej" } ]
MongoExport Error
2022-07-29T16:55:44.273Z
MongoExport Error
1,537
null
[]
[ { "code": "exports = async() => {\n const stripe = require('stripe')(context.values.get('StripeSecretKey'))\n \n const res = await stripe.customers.list()\n console.log(JSON.stringify(res))\n \n const res2 = await stripe.customers.list()\n console.log(JSON.stringify(res2))\n // this second call would just hang forever and fail, it does work now\n}\n// I'm using latest package stripe 8.216.0\nexports = () => {\n const { S3Client } = require('@aws-sdk/client-s3')\n // just by requiring the client, it fails\n}\n// using latest @aws-sdk/client-s3 3.67.0\n> ran at 1649539308006\n> took \n> error: \nfailed to execute source for 'node_modules/@aws-sdk/client-s3/dist-cjs/index.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/client-s3/dist-cjs/S3.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/client-s3/dist-cjs/S3Client.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/client-s3/dist-cjs/runtimeConfig.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/client-sts/dist-cjs/index.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/client-sts/dist-cjs/STS.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/client-sts/dist-cjs/STSClient.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/client-sts/dist-cjs/runtimeConfig.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/credential-provider-node/dist-cjs/index.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/credential-provider-node/dist-cjs/defaultProvider.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/credential-provider-ini/dist-cjs/index.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/credential-provider-ini/dist-cjs/fromIni.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/util-credentials/dist-cjs/index.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/util-credentials/dist-cjs/parse-known-profiles.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/shared-ini-file-loader/dist-cjs/index.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/shared-ini-file-loader/dist-cjs/loadSharedConfigFiles.js': FunctionError: failed to execute source for 'node_modules/@aws-sdk/shared-ini-file-loader/dist-cjs/slurpFile.js': TypeError: Cannot access member 'readFile' of undefined\n\tat node_modules/@aws-sdk/shared-ini-file-loader/dist-cjs/slurpFile.js:18:16(34)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/shared-ini-file-loader/dist-cjs/loadSharedConfigFiles.js:24:27(52)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/shared-ini-file-loader/dist-cjs/index.js:19:30(41)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/util-credentials/dist-cjs/parse-known-profiles.js:22:40(34)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/util-credentials/dist-cjs/index.js:19:30(41)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/credential-provider-ini/dist-cjs/fromIni.js:16:34(28)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/credential-provider-ini/dist-cjs/index.js:17:30(32)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/credential-provider-node/dist-cjs/defaultProvider.js:36:41(51)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/credential-provider-node/dist-cjs/index.js:17:30(32)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/client-sts/dist-cjs/runtimeConfig.js:30:42(58)\n\n\tat require 
(native)\n\tat node_modules/@aws-sdk/client-sts/dist-cjs/STSClient.js:58:31(94)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/client-sts/dist-cjs/STS.js:54:27(90)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/client-sts/dist-cjs/index.js:18:30(35)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/client-s3/dist-cjs/runtimeConfig.js:26:28(48)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/client-s3/dist-cjs/S3Client.js:66:31(114)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/client-s3/dist-cjs/S3.js:224:26(515)\n\n\tat require (native)\n\tat node_modules/@aws-sdk/client-s3/dist-cjs/index.js:18:30(35)\n// to create the stripe event with need the raw body\n// the body.text() does not work\nexports = async function({ headers, body }, response) {\n\n const stripe = require('stripe')(context.values.get('StripeSecretKey'))\n\n let event\n\n try {\n event = stripe.webhooks.constructEvent(\n body.text(), // HERE IS THE ISSUE, WE CAN'T USE THE TEXT\n headers['Stripe-Signature'][0],\n context.values.get('StripeWebhookSecret'),\n )\n }\n catch (err) {\n // ...\n }\n}\n“Webhook Error: No signatures found matching the expected signature for payload. Are you passing the raw request body you received from Stripe? https://github.com/stripe/stripe-node#webhook-signing ”\n", "text": "Hello @Drew_DiPalma thanks for the updates.I had 3 problems with Realm Functions, I just tested them now, please allow me to share:1— Stripe, couldn’t make 2 calls in a row using its client:\nIT IS WORKING NOW! Thanks!2— AWS SDK v3, S3 Client, it fails just by requiring the client:\nIT IS NOT WORKING YET, STILL FAILINGHere is the full error:My goal is to use AWS SDK v3, to replace the third-party services that will be deprecated later the year, so would be good if you could look into.3— Must have access to the raw request body for Stripe Webhook:\nNOT YET, THERE IS NO WAY TO ACCESS THE RAW REQUEST BODY ON WEBHOOKSHere is the error that the Stripe library emits:For the Stripe Webhook I had to use a PHP server for that.Wish I didn’t, I have the entire Stripe implementation runing successfully on Realm Functions, only the Webhook I have to do somewhere else.Thank you very much.", "username": "andrefelipe" }, { "code": "text()aws-sdk/client-s3", "text": "G’Day @andrefelipe, I have extracted this post to a new one as there are lot of different issues getting mixed up with old post and it would help to separate each issue to a new post.About the issue on Stripe Webhooks, could you share more detail if text() isn’t populating for you, or isn’t in the format you expect?On testing, text() is working fine on POST endpoints, but not GET endpoints.On your aws-sdk/client-s3 question, could you please try using v2 client and share your observations? The engineering team is working on improvements for s3.I look forward to your response.Cheers, ", "username": "henna.s" }, { "code": "text()<Buffer 7b 0a 20 20 22 63 72 65 61 74 ...", "text": "Thanks @henna.s for the organization.I will follow the same numbers:2— AWS SDK v3, S3 Client\nThe v2 client works fine. I am using it through the Realm’s third-party services.3— Must have access to the raw request body for Stripe Webhook:\nI am indeed using POST. 
And the text() is populating correctly, it just isn’t in the format expected by Stripe, which is something like <Buffer 7b 0a 20 20 22 63 72 65 61 74 ...\nFor cross reference the issue is discussed here too Setup Stripe webhook - #3 by rouuugeThanks a lot, have a good day too.", "username": "andrefelipe" }, { "code": "", "text": "Thanks, @andrefelipe for the updates.There are two posts shared with me, could you check if this helps you with your use-caseI will update you on feedback from the engineering team.Cheers, ", "username": "henna.s" }, { "code": "", "text": "@henna.s Any updates?", "username": "Ebrima_Ieigh" }, { "code": "/***\n * This function manually validates the stripe signature for the webhook event\n *\n * Returns: boolean => true if valid request\n *\n * stripeSignature: string => Stripe-Signature received from request header i.e. request.headers[\"Stripe-Signature\"][0]\n * requestBody:string => request body from header i.e. request.body.text()\n */\nexports = function (stripeSignature, requestBody) {\n // e.g stripeSignature = \"t=2345345,v1=sdfas234sfasd, v0=akhjgfska1234234123\"\n const { getUnixTime } = require(\"date-fns\");\n\n //0. tolerance of 5 mins\n const TOLERANCE = 300;\n\n //1. get webhook endpoint secret from the values\n const STRIPE_WEBHOOK_SECRET = context.values.get(\"STRIPE_WEBHOOK_SECRET\");\n console.log(\"STRIPE_WEBHOOK_SECRET: \", STRIPE_WEBHOOK_SECRET);\n\n // 2. split the signature using \",\" to get a list of signature elements\n const signatureComponents = stripeSignature.split(\",\");\n\n // 3. Create a dictionary with with prefix as key for the splitted item\n const signComponentsDict = {};\n signatureComponents.forEach((item) => {\n const [prefix, value] = item.split(\"=\");\n if ([\"v1\", \"t\"].includes(prefix)) {\n signComponentsDict[prefix] = value;\n }\n });\n\n // 4. crete a signed payload by concatenting \"t\" element and the request body\n const signedPayload = `${signComponentsDict[\"t\"]}.${requestBody}`;\n\n //5. the \"v1\" is actual signature from stripe\n const incomingSignature = signComponentsDict[\"v1\"];\n\n //6. compute your own signature using the signed payload created in step 4\n const expectedSignature = utils.crypto.hmac(\n input = signedPayload,\n secret = STRIPE_WEBHOOK_SECRET,\n hash_function = \"sha256\",\n output_format = \"hex\"\n );\n\n //7. Get current time in unix format and also get the time from stripeSignature\n const incomingTimestamp = signComponentsDict[\"t\"];\n const currTimestamp = getUnixTime(new Date());\n const timestampDiff = currTimestamp - incomingTimestamp;\n\n //8. Comparision the signature respecting tolerance\n if (timestampDiff <= TOLERANCE) {\n if (incomingSignature === expectedSignature) {\n return true;\n }\n }\n\n return false;\n};\n\n", "text": "I am not sure but feels like the realm is transforming the request body before passing it to the attached function. 
Therefore “stripe.webhooks.constructEvent” is not getting the data in the desired format.We solved the issue by manually verifying the signature with a custom function.", "username": "Surender_Kumar" }, { "code": "", "text": "G’ Day @Ebrima_Ieigh , @Surender_Kumar ,Thank you @Surender_Kumar for sharing the code snippet, appreciate it.@Ebrima_Ieigh , unfortunately, “rawBody” is not supported by the platform and perhaps text() is something you are not looking for?Could you please help me know if it’s the same issue as what Andre is experiencing or if any of the above links or the snippet provided by Surender has helped you?I look forward to your response.Best,\nHenna", "username": "henna.s" }, { "code": "exports = async function(req, response) {\n const stripe = require('stripe')(context.values.get('STRIPE_API_KEY'));\n const { headers, body} = req;\n const endpointSecret = context.values.get('STRIPE_WEBHOOK_SECRET')\n\n // Get the signature sent by Stripe\n const signature = headers['Stripe-Signature'][0]\n\n try {\n event = stripe.webhooks.constructEvent(\n body.text(),\n signature,\n endpointSecret\n );\n console.log('body-text worked')\n } catch (err) {\n console.log(`️body-text Webhook signature verification failed.`, err.message);\n console.error(err)\n\n response.setStatusCode(400);\n return response\n }\n \n response.setStatusCode(200);\n return response;\n};\n\n", "text": "The stripe issue seems to be fixed now as the following is working for me", "username": "clueless_dev" }, { "code": "", "text": "Thanks for the confirmation @clueless_dev.I will close this topic now. If help is needed for a related issue, please create a new topic and link this topic with that.Happy Coding!Cheers, ", "username": "henna.s" }, { "code": "", "text": "", "username": "henna.s" } ]
Stripe Webhooks and AWS-S3 function call issue
2022-04-09T21:36:23.221Z
Stripe Webhooks and AWS-S3 function call issue
5,138
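On the still-open aws-sdk v3 point in the thread above: since the v2 client is reported to work, a rough sketch of an S3 call with aws-sdk v2 from inside an App Services/Realm function is shown below. It assumes the aws-sdk v2 package has been added as a dependency and that the credentials are stored as values/secrets; the bucket, region and value names are placeholders, not the poster's actual setup:

```javascript
// Sketch only: list a few objects with the aws-sdk v2 S3 client as a
// connectivity check. All names below are illustrative placeholders.
exports = async function () {
  const AWS = require("aws-sdk");

  const s3 = new AWS.S3({
    region: "us-east-1",
    accessKeyId: context.values.get("AwsAccessKeyId"),
    secretAccessKey: context.values.get("AwsSecretAccessKey"),
  });

  const result = await s3
    .listObjectsV2({ Bucket: "my-bucket", MaxKeys: 10 })
    .promise();

  return (result.Contents || []).map((obj) => obj.Key);
};
```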
null
[ "queries", "node-js", "crud" ]
[ { "code": "socket.on('saveHike', () => { \n\n // save game stats\n \n async function saveHike() {\n\n console.log ('saving the Hike data');\n\n await Game.updateOne({},{ $set: { hikeEnd: Date.now() } });\n\n await Game.updateOne({},{ $set: { townStart: Date.now() } });\n\n var gameFind = await Game.find( { teamName: thisTeam } ); // grab this game's record\n\n console.log ('gameFind: ' + gameFind);\n\n console.log ('gameFind.teamName: ' + gameFind.teamName);\n\n console.log ('gameFind.hikeEnd: ' + gameFind.hikeEnd);\n\n }\n\n saveHike();\n\n });\n", "text": "In my Node.js server code, I retrieve the object from the collection, log it, but get ‘undefined’ when I try to log a field from the collection.\nHere’s the code:Console output:gameFind: {\n_id: new ObjectId(“62e1b6130b725fd445edd7b0”),\nteamName: ‘a’,\ncaptain: ‘s’,\nscore: 0,\nstartTime: 2022-07-27T22:02:59.022Z,\nleavetakingsEnd: 2022-07-27T22:02:59.038Z,\nhikeStart: 2022-07-27T22:02:59.042Z,\nhikeVotes: 1,\nhikeEnd: 2022-07-27T22:03:18.449Z,\ntownStart: 2022-07-27T22:03:18.453Z,\ntownEnd: 1970-01-01T00:00:00.000Z,\n__v: 0\n}gameFind.teamName: undefined\ngameFind.hikeEnd: undefinedI’m not getting why the object is clearly logged but calling the dot fields returns them as undefined. Any ideas would be greatly appreciated. Thanks in advance!", "username": "SleepyWakes" }, { "code": "await Game.updateOne({},{ $set: { hikeEnd: Date.now() } });\n\nawait Game.updateOne({},{ $set: { townStart: Date.now() } })\n", "text": "You should do the above in a single database access. You might be updating 2 completely different documents and with 2 different values of Date.now().Since you do Game.find() with teamName:thisTeam, I suspect that you want to Game.updateOne() with the same query.Finally, Game.find() returns a cursor but you seem to expect a document, I think you should be using Game.findOne().", "username": "steevej" }, { "code": "", "text": "Brilliant, that did the trick. Thanks much, Steeve!", "username": "SleepyWakes" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Difficulty pulling fields from collection with Node.js
2022-08-01T22:35:12.079Z
Difficulty pulling fields from collection with Node.js
1,463
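Putting the accepted advice from the thread above in one place, the handler might look roughly like this. It is a sketch that reuses the thread's Game model and thisTeam variable and assumes the same schema:

```javascript
// One filtered update instead of two unfiltered ones, and findOne (not find)
// so gameFind is a single document rather than a query result array.
async function saveHike() {
  const now = Date.now();

  await Game.updateOne(
    { teamName: thisTeam },                    // update this game's record only
    { $set: { hikeEnd: now, townStart: now } } // one write, one timestamp
  );

  const gameFind = await Game.findOne({ teamName: thisTeam });

  console.log('gameFind.teamName: ' + gameFind.teamName);
  console.log('gameFind.hikeEnd: ' + gameFind.hikeEnd);
}
```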
null
[ "replication", "time-series" ]
[ { "code": "", "text": "Hello!\nI’m using Mogo community edition on 4.4 version in replicaset and classical collection, but i start reaching max performances on this clusters.\nSo i plan to switch for Timeseries on Shard in 6.0.0 (4x conf server replica + 4x RS1 + 4x RS2 + 2 mogos) usng DEV scaleway servers ( 2GB ram, 2 Cpu, 300Mb network, SSD), architecture is deployed, but, my performances seems bad.Because of the hardware impact, I started by doing a comparaison with:mongo are on the exact same machine configuration, fresh install (ubuntu) , and indexes are similar ( I added same indexes on both)Doing bluck writes (500 , 5000, 10000 items) i discovered that writes are slower than regular collection:I was expecting a better ingestion rate on timeseries (than regular collection), Im i missing something?many thanks\nbest regards,", "username": "Olivier_Guille" }, { "code": "", "text": "Hi,\nI was checking time inside my python script, using pymongo, and reversing ingestion order (regular collection prior to TS), changed all the results.\nI achieved similar ingestion Rate, but a lot of improuvement in time queries and storage size.best regards,", "username": "Olivier_Guille" } ]
Time series vs regular collection performance
2022-08-01T19:25:58.515Z
Time series vs regular collection performance
2,093
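For anyone trying to reproduce this kind of comparison, a rough Node.js sketch of one side of the test is below (the original benchmark used pymongo; the collection name, batch size and document shape are made up). As the follow-up post notes, run order and cache warm-up can dominate the numbers, so each variant is best timed in a fresh process:

```javascript
// Rough ingestion benchmark sketch for a time series collection.
const { MongoClient } = require("mongodb");

async function run() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  const db = client.db("bench");

  await db.createCollection("ts_metrics", {
    timeseries: { timeField: "timestamp", metaField: "symbol", granularity: "seconds" },
  });

  const docs = Array.from({ length: 5000 }, (_, i) => ({
    timestamp: new Date(Date.now() + i * 1000),
    symbol: "SNX",
    value: Math.random(),
  }));

  console.time("timeseries insertMany 5000");
  await db.collection("ts_metrics").insertMany(docs, { ordered: false });
  console.timeEnd("timeseries insertMany 5000");

  await client.close();
}

run().catch(console.error);
```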
null
[ "server", "installation" ]
[ { "code": "", "text": "After installing homebrew and adding the MongoDB tap with “brew tap mongodb/brew”, the installation command “brew install [email protected]” returns the following error:Warning: No available formula with the name “[email protected]”.Instructions taken from this link: https://www.mongodb.com/docs/upcoming/tutorial/install-mongodb-on-os-x/. I know it’s only been a few days since 6.0 was released, but is the community server available to install on macOS with homebrew?", "username": "Alex_O_Brien" }, { "code": "", "text": "What is your os version?\nIt is supported on 10.14 or later\nDid you try brew update?\nAlso check this llink", "username": "Ramachandra_Tummala" }, { "code": "", "text": "2 posts were split to a new topic: Can’t run with MongoDB 6.0", "username": "Stennie_X" }, { "code": "", "text": "As of today it appears that mongodb-community has been upgraded to 6.0. I was able to successfully install the latest version with homebrew.", "username": "Alex_O_Brien" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't install MongoDB 6.0 on macOS with Homebrew
2022-07-21T17:20:41.606Z
Can't install MongoDB 6.0 on macOS with Homebrew
8,656
null
[]
[ { "code": "", "text": "I cant register the exam with my github student developer account. [email protected]. I send email with this account but not replied yet.\nCan you please help me?", "username": "Orhan_Veli_GOGEBAKAN" }, { "code": "", "text": "\nVerification", "username": "Orhan_Veli_GOGEBAKAN" }, { "code": "", "text": "Hi @Orhan_Veli_GOGEBAKANThank you for reaching out and welcome to the forums!According to our data you’ve successfully registered for our MongoDB Student Pack offer. If you want to access the MongoDB University courses, you can click on ‘Start Learning with MongoDB University’ on your profile page to create an account. It looks like you don’t have a MongoDB University account at the moment.Please not that you’ll have to finish one of the two learning paths in order to receive the free certification.Good luck with the courses!Lieke", "username": "Lieke_Boon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I can't log in with my GitHub student account after many tries
2022-08-01T15:15:01.942Z
I can't log in with my GitHub student account after many tries
4,378
null
[ "queries", "mongodb-shell" ]
[ { "code": "", "text": "Hi,\nIs there a way to save the history of command run to a file or redirect to a file (like in unix “fc” or “> filename” )", "username": "Yuri_52502" }, { "code": "", "text": "The mongo shell seems to keep an history by default, at least on my Arch Linux. Using the cursor up brings previous commands and let you edit them.", "username": "steevej" }, { "code": "", "text": "that is true. arrows up and down put previous commands under the cursor. But I want to save the whole history to a file just in case I want to review the labs or exercises. Copy-paste is a solution , should it be a better way.", "username": "Yuri_52502" }, { "code": "\nvar countries_USA = { \"countries\" .... }\ncountries_USA\nvar sort = { ‘$sort’ : … }\nsort\nvar pipeline = {\n{ '$match : [ … , countries_USA , … ] } ,\nsort\n}\n", "text": "If I may add. When I do things like trying to get the pipeline working and I know I will have to retype often more or less the same thing I always do it in a script and use stdin redirection to execute it. In my script I put trace that help me find my errors. For example for one of the lab I writecountries_USA\nvar sort = { ‘$sort’ : … }\nsort\nvar pipeline = {\n{ '$match : [ … , countries_USA , … ] } ,\nsort\n}\nThe shell will output all my intermediate variables and if a syntax error occurs, and it does often 8-(, I can pinpoint the error faster.", "username": "steevej" }, { "code": "", "text": "that is true. arrows up and down put previous commands under the cursor. But I want to save the whole history to a file just in case I want to review the labs or exercises. Copy-paste is a solution , should it be a better way.I usually open up a text editor and input all my ‘statements’ there before copy paste to mongo shell.", "username": "Benjamin_93799" }, { "code": "", "text": "Hi Yuri_52502! It’s been a minute, but just in case anyone else has this question, I believe that MongoDB takes care of this for you. Check out documentation on shell logs here: https://www.mongodb.com/docs/mongodb-shell/logs/", "username": "Russell_Gerhard" }, { "code": "", "text": "Hi @Yuri_52502,\nYou can utilize MongoDB for VS Code’s playgrounds to run a set of commands together and:If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "", "username": "SourabhBagrecha" } ]
Unix type operators in MongoDB shell
2018-11-14T21:09:02.888Z
Unix type operators in MongoDB shell
2,578
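As a concrete version of the script-plus-trace approach described in the thread above, the statements can live in a plain .js file and be replayed with load() from an open shell session, so they are kept on disk for later review. The file name, collection and pipeline below are purely illustrative:

```javascript
// lab.js -- run with load("lab.js") from the shell (or pass the file to the shell).
var countries_USA = { countries: "USA" };
printjson(countries_USA);                  // trace intermediate values

var sortStage = { $sort: { count: -1 } };
printjson(sortStage);

var pipeline = [ { $match: countries_USA }, sortStage ];
printjson(db.movies.aggregate(pipeline).toArray());
```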
https://www.mongodb.com/…cb1bf88e902b.png
[ "aggregation", "data-modeling" ]
[ { "code": "\n[\n {\n \"matches\": [\n {\n \"player\": \"Hary\", \"challenger\": \"Jon\", \"rank\": 1\n },\n {\n \"player\": \"Jon\", \"challenger\": \"Hary\", \"rank\": 2\n },\n {\n \"player\": \"Birt\", \"challenger\": \"null\", \"rank\": 3\n }\n ]\n }\n ]\n", "text": "Via the aggregation pipeline on 2 collections (users and rankings) I have 3 object arrays: players, user_players and user_challengers. User_player and challengers are arrays of objects with 2 fields: name and ObjectId:\nMerge3_consolidated_Screenshot from 2022-07-27 14-21-10556×592 51.1 KB\nPlayers array matches the players to the correct challengers by ObjectId.You can see from the screenshots that an additional problem may be that the first object in the challengers array (correctly) has name: “Hary”. Hary is a challenger, but, of course, he is Jon’s challenger and Jon is the second object in the challengers array (so there needs to be a matching of the correct id’s, not just e.g. a merging of 2 arrays).I want a single array of corresponding names and a rank. In other words, I need to match the ObjectIds in Players to the corresponding ObjectIds and names in the other 2 arrays to give me json output that should look something like:There is likely a more efficient way to do this, but for now this was the only way I could find to connect the player and challenger ids to their names in the ‘users’ collection as I’m not an experienced db user.How can I obtain the json output I’m looking for from the 3 arrays and/or is there a better, more efficient, way to achieve the same result? thanks…", "username": "freeross" }, { "code": "", "text": "Could you please publish sample documents in JSON so that we can cut-n-paste into our system? This way we could experiment. We can’t with images from Compass or other GUI. Well we could, but it would be too long to type and create these documents.", "username": "steevej" }, { "code": " \"players\": [\n {\n \"playerId\": {\n \"$oid\": \"62c2b79d966b72973fe52316\"\n },\n \"challengerId\": {\n \"$oid\": \"62c2b79d966b72973fe52317\"\n },\n \"rank\": 1\n },\n {\n \"playerId\": {\n \"$oid\": \"62c2b79d966b72973fe52317\"\n },\n \"challengerId\": {\n \"$oid\": \"62c2b79d966b72973fe52316\"\n },\n \"rank\": 2\n },\n {\n \"playerId\": {\n \"$oid\": \"62df7cec5fe09351fb906997\"\n },\n \"challengerId\": {\n \"$oid\": \"62df80935488fc6b309dcf32\"\n },\n \"rank\": 3\n }\n ]\n\"user_players\": [\n {\n \"_id\": {\n \"$oid\": \"62c2b79d966b72973fe52316\"\n },\n \"name\": \"Hary\"\n },\n {\n \"_id\": {\n \"$oid\": \"62c2b79d966b72973fe52317\"\n },\n \"name\": \"Jon\"\n },\n {\n \"_id\": {\n \"$oid\": \"62df7cec5fe09351fb906997\"\n },\n \"name\": \"Birt\"\n }\n ]\n\"user_challengers\": [\n {\n \"_id\": {\n \"$oid\": \"62c2b79d966b72973fe52316\"\n },\n \"name\": \"Hary\"\n },\n {\n \"_id\": {\n \"$oid\": \"62c2b79d966b72973fe52317\"\n },\n \"name\": \"Jon\"\n }\n ]\n", "text": "Sure. No problem:‘Birt’ currently isn’t challenging or being challenged by anyone. Please let me know if there are any other input/output json formats I can provide that might assist. 
Thanks for having a look at it for me …", "username": "freeross" }, { "code": "[\n {\n \"_id\": {\n \"$oid\": \"62c2b5f0966b72973fe52314\"\n },\n \"active\": true,\n \"name\": \"Mary\",\n \"age\": 28,\n \"gender\": \"Female\",\n \"description\": {\n \"level\": \"C grade\",\n \"comment\": \"Looking for good games with women only\"\n },\n \"ownerOf\": [\n \"Anglia Ladies Ladder\",\n \"Marys Buddies Ladder\"\n ],\n \"memberOf\": [\n \"Janes Buddies\",\n \"Anglia C graders\"\n ]\n },\n {\n \"_id\": {\n \"$oid\": \"62c2b79d966b72973fe52316\"\n }\n \"active\": true,\n \"name\": \"Hary\",\n \"age\": 29,\n \"gender\": \"Male\",\n \"description\": {\n \"level\": \"B grade\",\n \"comment\": \"Looking for good games with men only\"\n },\n \"ownerOf\": [\n \"Anglia Mens Ladder\",\n \"Harys Buddies Ladder\"\n ],\n \"memberOf\": [\n \"Bobs Buddies\",\n \"Anglia B graders\"\n ]\n },\n {\n \"_id\": {\n \"$oid\": \"62c2b79d966b72973fe52317\"\n },\n \"active\": true,\n \"name\": \"Jon\",\n \"age\": 27,\n \"gender\": \"Male\",\n \"description\": {\n \"level\": \"A grade\",\n \"comment\": \"Looking for good games with men only\"\n },\n \"ownerOf\": [\n \"Jons Buddies Ladder\"\n ],\n \"memberOf\": [\n \"Harys Buddies Ladder\",\n \"Anglia A graders\"\n ]\n },\n {\n \"_id\": {\n \"$oid\": \"62df7cec5fe09351fb906997\"\n },\n \"active\": true,\n \"name\": \"Birt\",\n \"age\": 24,\n \"gender\": \"Male\",\n \"description\": {\n \"level\": \"D grade\",\n \"comment\": \"lkjljlj\"\n },\n \"ownerOf\": [],\n \"memberOf\": [\n \"62c66dc612296752b7c82cde\"\n ]\n }\n]\n\n[\n {\n \"_id\": \"62c66dc612296752b7c82cde\",\n \"active\": true,\n \"name\": \"Jons Buddies Ladder\",\n \"owner\": \"Jon\",\n \"number\": 2,\n \"base address\": {\n \"street\": \"99 George Street\",\n \"city\": \"Super City\"\n },\n \"players\": [\n {\n \"playerId\": \"62c2b79d966b72973fe52316\",\n \"challengerId\": \"62c2b79d966b72973fe52317\",\n \"rank\": 1\n },\n {\n \"playerId\": \"62c2b79d966b72973fe52317\",\n \"challengerId\": \"62c2b79d966b72973fe52316\",\n \"rank\": 2\n },\n {\n \"playerId\": \"62df7cec5fe09351fb906997\",\n \"challengerId\": \"62df80935488fc6b309dcf32\",\n \"rank\": 3\n }\n ],\n \"owner_id\": \"62c2b79d966b72973fe52317\",\n \"lastModified\": \"2022-07-21T02:51:41.842Z\"\n },\n {\n \"_id\": \"62c66dc612296752b7c82cdd\",\n \"active\": true,\n \"name\": \"Harys Buddies Ladder\",\n \"owner\": \"Hary\",\n \"number\": 2,\n \"base address\": {\n \"street\": \"99 George Street\",\n \"city\": \"Super City\"\n },\n \"players\": [\n {\n \"playerId\": \"62c2b79d966b72973fe52317\",\n \"challengerId\": \"62c2b79d966b72973fe52316\"\n },\n {\n \"playerId\": \"62c2b79d966b72973fe52316\",\n \"challengerId\": \"62c2b79d966b72973fe52317\"\n }\n ],\n \"owner_id\": \"62c2b79d966b72973fe52316\"\n },\n {\n \"_id\": \"62df827ed09d7e792a2531e5\",\n \"active\": true,\n \"name\": \"Birt's Ladder\",\n \"owner\": \"62df7cec5fe09351fb906997\",\n \"base address\": {\n \"street\": \"99 George Street\",\n \"city\": \"Super City\"\n },\n \"players\": [\n {\n \"playerId\": \"62df7cec5fe09351fb906997\",\n \"challengerId\": null,\n \"rank\": 1\n }\n ]\n }\n]\n\"_id\": \"62c66dc612296752b7c82cde\",\"_id\": {\n \"$oid\": \"62c66dc612296752b7c82cde\"\n },\n", "text": "Here also are the collections for reference:\nUsers:Rankings:Please note that I have noticed that VSCode renders all ‘View Documents’ with e.g.\nrankings _id as:\n\"_id\": \"62c66dc612296752b7c82cde\",\nbut individual documents as:perhaps this behavior is well known to more experienced users(?). 
To clarify all references to _id in the collections on the mongodb server side use the ObjectId (not e.g. _id:) …", "username": "freeross" }, { "code": "_id: string_value", "text": "(not e.g. _id:)meant to say “(not e.g. simply _id: string_value)”(I don’t know how to edit my responses so I have to use a reply instead …", "username": "freeross" }, { "code": "", "text": "Another way of expressing the problem in Playground. I can match player/challenger to name, but not at the same time and in a way that distinguishes between players and their challengers.\nIn this Playground attempt I distinguish between players and challengers, but I do not see how to match ids to names.", "username": "freeross" }, { "code": "[{$match: {\n _id: ObjectId('62c66dc612296752b7c82cde')\n}}, {$unwind: {\n path: '$players'\n}}, {$lookup: {\n from: 'users',\n localField: 'players.playerId',\n foreignField: '_id',\n as: 'players.player'\n}}, {$lookup: {\n from: 'users',\n localField: 'players.challengerId',\n foreignField: '_id',\n as: 'players.challenger'\n}}, {$unwind: {\n path: '$players.player',\n preserveNullAndEmptyArrays: true\n}}, {$unwind: {\n path: '$players.challenger',\n preserveNullAndEmptyArrays: true\n}}, {$project: {\n name: 1,\n players: {\n player: {\n name: 1\n },\n challenger: {\n name: 1\n },\n rank: 1\n }\n}}, {$group: {\n _id: '$_id',\n ranking: {\n $push: '$players'\n }\n}}, {$lookup: {\n from: 'rankings',\n localField: '_id',\n foreignField: '_id',\n as: 'rankingDetails'\n}}, {$unwind: {\n path: '$rankingDetails',\n preserveNullAndEmptyArrays: true\n}}, {$addFields: {\n 'rankingDetails.ranking': '$ranking'\n}}, {$replaceRoot: {\n newRoot: '$rankingDetails'\n}}, {$sort: {\n 'rankingDetails.ranking.rank': -1\n}}, {$project: {\n active: 1, owner: 1, 'base address': 1, name: 1, ranking:1\n}}]\n", "text": "I managed to solve this:with the assistance of this excellent article.\nThank you to the author, Krishna Bose and to anyone else who took a look at this.", "username": "freeross" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Matching values and merging 3 object arrays into 1
2022-07-27T06:24:30.102Z
Matching values and merging 3 object arrays into 1
2,138
null
[ "aggregation" ]
[ { "code": "$match\n{\n \"communities.fullReviewScore\":{$gt:8}\n}\n\n$project\n{\n \"communities.recommendedScoreV3\":{\n $eq:{\"communities.recommendedScoreV3\",\n $sum:{\"communities.fullReviewScore\",100 }\n \n }\n }\n}\n\n$count\n{'results'}\n", "text": "Hi guys, I am new to mongo and trying to right query with a sum operator but with no luck.\nI need to check this logic:If fullReviewScore > zip_score_thresholdElseSort the listing in Group 1What I did:could you help me please!", "username": "Natalia_Merkulova" }, { "code": "</>", "text": "Hi @Natalia_Merkulova, it would be nice if you provided along with your description of what you’re trying to do, some sample documents (remove any unnecessary fields to keep the size small) and the results/errors after running the command. This makes it easier for other members to help you determine what’s going on, and makes it more likely that you get a reply.It’s also easier to read if you put your sample code in a code block. You can do that by going to a blank line and then clicking on the Preformatted text icon ( </>). This makes it easier to read the code, and more importantly copy/paste the code (regular text on the forum uses fancy quotes which which cause problems if one were to copy/paste the text).I can say that the code as you have it above will cause errors as it’s not properly written Mongo query/JSON. I don’t know if that’s what you’re having problems with and need help with or if it’s something else.", "username": "Doug_Duncan" }, { "code": "{\n \"_id\": {\n \"$oid\": \"627ca4981fc9c169c6a0ea8e\"\n },\n \"careType\": \"ALZHEIMERS_CARE\",\n \"relativeUrl\": \"florida/miami\",\n \"communities\": [\n {\n \"_id\": {\n \"distance\": 9.273651757596374,\n \"id\": 1436709,\n \"imageCount\": 21,\n \"isCustomer\": true,\n \"name\": \"Mirabelle\",\n \"orgName\": \"The Arbor Company\",\n \"priceAverage\": 7422.857142857143,\n \"ratingAverage\": 4.8,\n \"ratingScore\": 65000,\n \"reviewCount\": 6,\n \"reviewScore\": {\n \"reviewScoreDisplay\": 9.4,\n \"reviewScoreFull\": 9.433,\n \n },\n \"recommendedScoreV3\": 109.43299999999999,\n \"fullReviewScore\": 9.433\n}]\n", "text": "Hi @Doug_Duncan , this is an example of doc:", "username": "Natalia_Merkulova" }, { "code": "", "text": "@Doug_Duncan\nI am trying to check:\n1.", "username": "Natalia_Merkulova" }, { "code": "$unwindfullReviewScoredb.test.aggregate(\n [\n {\n \"$unwind\": { \"path\": \"$communities\" }\n },\n {\n \"$addFields\": {\n \"recommendationScorev3\": {\n \"$cond\": {\n \"if\": { \"$gt\": [\"$communities._id.fullReviewScore\", 8] },\n \"then\": { \"$add\": [\"$communities._id.fullReviewScore\", 100] },\n \"else\": \"$communities._id.fullReviewScore\"\n }\n }\n }\n },\n {\n \"$sort\": {\"recommendationScoreV3\": -1, \"distance\": 1}\n }\n ]\n)\n$addFields$project$project$match$addFields$cond$match", "text": "Thanks for supplying the sample document.Looking at the given document, we can see that the field that will be used for determining if we add 100 or not is nested in a sub-document in an array. The distance is also in this same array sub-document. Based on this you will want to use $unwind to break each array element into its own document. This allows for multiple communities to be dealt with properly. After that you add a field that does the addition logic based on the fullReviewScore field and then do your sort.The following should do what you’re looking for:In the above you can see that is used $addFields instead of $project like you had in your original attempt. 
I did this since I wanted to keep the entire original document. Had I used $project I would have had to put all the top level fields in for projection.Also notice that your $match stage criteria has been moved into the $addFields stage as a $cond condentional operator. If I would have used the $match stage, then you wouldn’t have gotten the complete list of documents in your resulting output.Hopefully this helps you with what you’re trying to do.", "username": "Doug_Duncan" }, { "code": "", "text": "@Doug_Duncan Thank you so much!!! You helped a lot and thank you for all your notes!!! I am really veery grateful!!!Have an amazing day!", "username": "Natalia_Merkulova" } ]
Query to get sum of a field values plus 100
2022-07-31T22:19:52.130Z
Query to get sum of a field values plus 100
2,614
null
[ "queries", "crud" ]
[ { "code": " _id: ObjectId(\"62e32a75a984c054fb986fd0\"),\n id: '19',\n product_id: '3',\n body: 'Why is this product cheaper here than other sites?',\n date_written: '1590413602314',\n asker_name: 'jbilas',\n asker_email: '[email protected]',\n reported: '0',\n helpful: '6',\n answers: {}```\n\nI am trying to update 'helpful' field to be an integer on every document in my collection. And am running this command:\n\nAnd get this error message: \n**MongoServerError:** Failed to parse number 'helpful' in $convert with no onError value: Did not consume whole string.\n\nWhen I try to update one individually like this, It works successfully. Any tips on what I am doing wrong?\n", "text": "I currently have a collection ‘question’ with a document schema as follows:db.question.updateMany({ }, [{ $set: { helpful: {$toInt: “$helpful”}}}])db.question.updateMany({id: “16” }, [{ $set: { helpful: {$toInt: “$helpful”}}}])", "username": "Sam_B1" }, { "code": "When I try to update one individually like this, It works successfully..$toInt$toIntonErroronNulldb.question.updateMany(\n {}, \n [{ \n $set: { \n helpful: {\n $convert: {\n input: “$helpful”,\n to: \"int\",\n onError: null, // Optional.\n onNull: null // Optional.\n }\n }\n }\n }]\n)\n", "text": "Hello @Sam_B1, Welcome to MongoDB Community Forum,When I try to update one individually like this, It works successfully.Because some of the documents have alphabets or any special symbols like dot ., You can check the examples in below documentation, when $toInt throws an error,type conversion, convert to integer, integer conversion, aggregation, convert to intYou can the $convert operator instead of $toInt, because it handles errors and null values, ex:", "username": "turivishal" }, { "code": "", "text": "Wow, thank you! If I have my onError property set to null, does that mean it will skip over that instance?", "username": "Sam_B1" }, { "code": "null", "text": "If I have my onError property set to null, does that mean it will skip over that instance?No, it will update that value to null value.", "username": "turivishal" } ]
Unable to Parse Integers in updateMany Command
2022-08-01T15:23:44.848Z
Unable to Parse Integers in updateMany Command
3,874
null
[ "node-js", "data-modeling", "mongoose-odm" ]
[ { "code": "const post = req.body;\nawait Post.create(post);\n image: '',\n title: 'hi',\n subtitle: '',\n category: 'Jobs',\n tags: '',\n text: '',\n contactperson: '',\n contact: '',\n author: 'Felicia',\n expires: '2022-08-06'\n}\n", "text": "Hi everyone,i would like to store a post as a document in mongodb. I’m using mongoose for modelling and the content is created by a user using a form. The content of the form is append to FormData and sending to server. This works so far. The only issue is, that empty fields, that are appended as empty strings in the req.body will be stored in the document. The minimalize-property of my dataschema is already set true …req.body looks like:\n[Object: null prototype] {Thanks so much for your help!", "username": "Felicia_Debye" }, { "code": "req.bodyimport mongoose from 'mongoose';\nconst { Schema } = mongoose;\n\nmongoose.connect('mongodb+srv://USER:[email protected]/mongoose?retryWrites=true&w=majority', () => {\n console.log(\"db connected successfully..\")\n})\n\nconst bodySchema = new Schema({\n\timage: { type: String, required: false, set: a => a === '' ? undefined : a},\n\ttitle: { type: String, required: false, set: b => b === '' ? undefined : b},\n\tsubtitle: { type: String, required: false, set: c => c === '' ? undefined : c},\n\tcategory: { type: String, required: false, set: d => d === '' ? undefined : d},\n\ttags: { type: String, required: false, set: e => e === '' ? undefined : e},\n\ttext: { type: String, required: false, set: f => f === '' ? undefined : f},\n\tcontactperson: { type: String, required: false, set: g => g === '' ? undefined : g},\n\tcontact: { type: String, required: false, set: h => h === '' ? undefined : h},\n\tauthor: { type: String, required: false, set: i => i === '' ? undefined : i},\n\texpires: { type: String, required: false, set: j => j === '' ? undefined : j}\n},{ minimize: true })\n\nconst Body = mongoose.model('Body', bodySchema);\n\nawait Body.create({\n image: '',\n title: 'hi',\n subtitle: '',\n category: 'Jobs',\n tags: '',\n text: '',\n contactperson: '',\n contact: '',\n author: 'Felicia',\n expires: '2022-08-06'\n})\n\nconsole.log(\"doc inserted\")\n{\n\"_id\":{\"$oid\":\"62e89ea6eb8a35c11e33b220\"},\n\"title\":\"hi\",\n\"category\":\"Jobs\",\n\"author\":\"Felicia\",\n\"expires\":\"2022-08-06\",\n\"__v\":{\"$numberInt\":\"0\"}\n}\nreq.body", "text": "Hi @Felicia_Debye,My interpretation is that you do not want to store fields that have empty string values in the database. Please correct me if I am wrong here.In saying so, I have created a small test snippet which performs the above based off the Schema using customer setter. This is just one alternate method that may suit your use case. Please feel free to check any mongoose plugins if you wish to do more research.Example test code using the req.body value you have provided:The resulting document that is inserted:(In Data Explorer):\n\nimage796×445 26 KB\nWith any of these snippets, it is highly recommended you test thoroughly in a test environment to see if it suits your use case(s) and meets all your requirements. I have only tested this on a singular document based off the req.body value you had provided.Hope this helps.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Don't store empty strings
2022-07-06T08:59:08.811Z
Don't store empty strings
8,359
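The per-field setters in the accepted answer above work, but repeat the same lambda for every path. One possible generic alternative, offered only as a sketch to be tested against your Mongoose version, is a single pre-validate hook that clears empty-string fields on newly created documents:

```javascript
// Register before compiling the model. For new documents, paths set to
// undefined are simply omitted, so empty strings are never stored.
bodySchema.pre('validate', function (next) {
  for (const [key, value] of Object.entries(this.toObject())) {
    if (value === '') {
      this[key] = undefined;
    }
  }
  next();
});
```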
null
[ "xamarin" ]
[ { "code": "", "text": "I worked on the dotnet team 2015-17 so am very familiar with how things worked back then.I haven’t been able to find any decent examples of best practices with Xamarin Forms especially with the Atlas-backed Sync model. I have a standalone app I want to migrate to sync which is going to need partial sync as users share some appointments with friends.The official MongoDB Uni sample is extremely simple and doesn’t show editing with bindings. It has a trivial model of copying data from the Realm object and saving it back on pressing Save.Is that the recommended approach for all apps? How does this interact with sync and changes being propagated. Have I missed something obvious that shows more use of sync?The more sophisticated sync samples I’ve found do NOT include dotnet:", "username": "Andy_Dent" }, { "code": "", "text": "This might help, MongoDB World 2022 presentation, How I Made a Mobile App for Tracking Blood Pressure Using Atlas Device Sync, Realm, & Xamarin.Forms - YouTube", "username": "Paul_Leury1" } ]
Xamarin Forms best practices with Sync?
2022-05-13T08:28:59.599Z
Xamarin Forms best practices with Sync?
2,758
null
[ "serverless" ]
[ { "code": "", "text": "when try to use $function mongo atlas return error “$function not allowed in this atlas tier” on payed serverless cluster", "username": "Max_Virchenko" }, { "code": "", "text": "Hi @Max_Virchenko - Welcome to the community.“$function not allowed in this atlas tier”This is noted within the Actions section of the Serverless Instances Limitations documentation.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$function not allowed in this atlas tier serverless
2022-08-01T16:48:10.444Z
$function not allowed in this atlas tier serverless
2,389
null
[ "aggregation", "queries", "serverless" ]
[ { "code": "nameawait self._async_collection.update_one(\n self.find_query,\n {\n \"$push\": {\n \"macros\": {\n \"$each\": [],\n \"$sort\": {\"name\": 1},\n }\n }\n },\n collation={\"locale\": \"en\"},\n)\n$set", "text": "I’m evaluating an Atlas serverless setup, but unfortunately, it doesn’t support collation (has it been stated whether adding support is on the roadmap?).I’ve got an array of subdocuments. If a user edits one of the subdocuments’ name field, I want it to be re-sorted. I’m using the following routine to do so:I know that I could sort the array in application code and use $set to replace the entire array, but before I resort to that, I want to know if there is another method I should try first. I’m using collation because I need a case-insensitive sort.", "username": "Jared_Lindsay1" }, { "code": "$toLower$sort$merge$toLower", "text": "Hi @Jared_Lindsay1,Depending on the use case and If the content of the array both before & after the update is known to the application end, it may be more efficient from a performance perspective to push the sorted array from the application side instead of sorting in the database.In saying so, you can possibly consider $toLower , $sort and $merge usage but this would be more complex and generally more resource intensive than the application side sorting. Additionally, this would lead to changes in the string values of the field you want to sort (in which $toLower would lower case the characters) so this is another factor to consider. Although I would also note the serverless operational limitations and considerations as well.If further assistance is required, please provide some sample documents and expected/desired output.Please let me know your thoughts.Regards,\nJason", "username": "Jason_Tran" }, { "code": "$setbisect$pushlist.insert()", "text": "Thanks for the reply. I wound up sorting in application code and pushing the whole new array via $set. In another place where I was inserting to the array, I used Python’s bisect module to find the insertion index, then $pushed to that index (as well as using list.insert() in the app code).", "username": "Jared_Lindsay1" }, { "code": "", "text": "Thanks for updating the post with your solution Jared! Glad to hear you decided on a method you’ve found suitable.", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How should I do this without collation?
2022-07-25T17:13:38.000Z
How should I do this without collation?
1,852
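For completeness, the "sort in application code, then write the whole array back with $set" route described above, expressed in Node.js terms (the thread's own solution was Python). The collection handle and filter are assumed to come from the surrounding code, mirroring self._async_collection and self.find_query:

```javascript
// Case-insensitive sort client-side, then replace the array in one update.
async function resortMacros(collection, findQuery) {
  const doc = await collection.findOne(findQuery, { projection: { macros: 1 } });

  const sorted = [...doc.macros].sort((a, b) =>
    a.name.localeCompare(b.name, 'en', { sensitivity: 'base' })
  );

  await collection.updateOne(findQuery, { $set: { macros: sorted } });
}
```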
null
[ "aggregation", "queries" ]
[ { "code": "sponsored: trueaddFields[...]\n\nconst aggregationResult = await Resource.aggregate()\n .search({\n compound: {\n must: [\n [...]\n ],\n should: [\n [...]\n ]\n }\n })\n [...]\n //only do this for the first sponsored result\n .addFields({\n selectedForSponsoredSlot: { $cond: [{ $eq: ['$sponsored', true] }, true, false] }\n })\n .sort(\n {\n selectedForSponsoredSlot: -1,\n _id: 1\n }\n )\n .facet({\n results: [\n { $match: matchFilter },\n { $skip: (page - 1) * pageSize },\n { $limit: pageSize },\n ],\n totalResultCount: [\n { $match: matchFilter },\n { $group: { _id: null, count: { $sum: 1 } } }\n ],\n [...]\n })\n .exec();\n\n[...]\n", "text": "My app can search through a database of resources using MongoDB’s aggregation pipeline. Some of these documents have the property sponsored: true.I want to move exactly one of these sponsored entries to the top of the search results, but keep natural ordering up for the remaining ones (no matter if sponsored or not).Below is my code. My idea was to make use of addFields but change the logic so that it only applies to the first element that meets the condition. Is this possible?", "username": "Florian_Walther" }, { "code": "$search$searchsponsoredtruesponsored:true", "text": "Hi @Florian_Walther,I want to move exactly one of these sponsored entries to the top of the search results, but keep natural ordering up for the remaining ones ( no matter if sponsored or not ).From what it sounds like here, it doesn’t really sound like a sort per se. It sounds like you are wanting to “re-arrange” the result set. Is this correct?In saying so, could you provide the following details so I can try work out a possible method to see if the desired output can be achieved?:Please redact any personal or sensitive information before posting it hereRegards,\nJason", "username": "Jason_Tran" }, { "code": " const queryString = req.query.q;\n [...]\n const queryWords = queryString.split(\" \");\n [...]\n const matchFilters: { $match: QueryOptions }[] = [{ $match: { approved: true } }];\n\n if (typeFilter) matchFilters.push({ $match: { type: { $in: typeFilter } } });\n if (fromYearFilter) {\n if (fromUnknownYearFilter === 'exclude') {\n matchFilters.push({ $match: { year: { $gte: fromYearFilter } } });\n } else {\n matchFilters.push({ $match: { $or: [{ year: { $gte: fromYearFilter } }, { year: '' }] } });\n }\n }\n if (languageFilter) matchFilters.push({ $match: { language: { $in: languageFilter } } });\n\n const SPONSORED_RESULT_SLOTS = 1;\n\n [...]\n\n let aggregation = Resource.aggregate()\n .search({\n compound: {\n must: [{\n wildcard: {\n query: queryMainWordsWithSynonymsAndWildcards.flat(),\n path: ['title', 'topics', 'link', 'creatorName'],\n allowAnalyzedField: true,\n score: { constant: { value: 1 } }\n }\n }\n ],\n }\n })\n [...]\n .facet({\n sponsoredResult: [\n ...matchFilters,\n { $match: { sponsored: true } },\n { $limit: SPONSORED_RESULT_SLOTS },\n {\n $lookup: {\n from: \"resourcevotes\",\n localField: \"_id\",\n foreignField: \"resourceId\",\n as: \"votes\",\n }\n }\n ],\n results: [\n ...matchFilters,\n { $skip: (page - 1) * pageSize },\n { $limit: pageSize },\n {\n $lookup: {\n from: \"resourcevotes\",\n localField: \"_id\",\n foreignField: \"resourceId\",\n as: \"votes\",\n }\n }\n ],\n totalResultCount: [\n ...matchFilters,\n { $group: { _id: null, count: { $sum: 1 } } }\n ],\n [...]\n })\n .append({\n $set: {\n results: {\n $filter: {\n input: \"$results\",\n cond: { $not: { $in: [\"$$this._id\", \"$sponsoredResult._id\"] } }\n }\n }\n }\n 
});\n\n if (page === 1) {\n aggregation = aggregation\n .append({\n $set: {\n results: {\n $concatArrays: [\"$sponsoredResult\", \"$results\"]\n },\n }\n });\n }\n\nconst resourceSchema = new mongoose.Schema<Resource>({\n title: { type: String },\n topics: [{ type: String, required: true }],\n description: { type: String },\n type: { type: String, required: true },\n year: { type: String },\n language: { type: String, required: true },\n sponsored: { type: Boolean },\n});\n", "text": "Thank you for answering. Here is my aggregate with my current approach. It works, but the problem is that the reordering makes it show 11 results (instead of 10) on the first page (where the sponsored entry moved) and 9 results on the page where the sponsored entry would’ve usually been.The document schema:", "username": "Florian_Walther" }, { "code": "1. \"sponsored\" : true /// The \"one out of many\" sponsored:true document you have taken to the top\n2 - 11. \"sponsored\" : ... /// I presume the values here don't matter. Correct me if I am wrong here.\n12 - 20. \"`sponsored\" : ... /// I presume the values here don't matter. Correct me if I am wrong here.\n$searchsponsoredtruesponsored:true", "text": "It works, but the problem is that the reordering makes it show 11 results (instead of 10) on the first page (where the sponsored entry moved) and 9 results on the page where the sponsored entry would’ve usually been.I’m having trouble visualising this but is the structure of the page items for page 1 and page 2 somewhat like the below high level examples:(I have just grouped up the page items for example purposes to help clarify)Page 1 (11 documents):Page 2 (9 documents):I am wondering if the application logic could be altered to achieve what you are after. However, I would need more information for this. Can you provide the following information:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Sort element with property: true to the top, but only one out of many
2022-07-19T10:27:01.924Z
Sort element with property: true to the top, but only one out of many
2,011
null
[ "indexes" ]
[ { "code": "", "text": "Hi, I am a newbie for MongoDB, I just wonder if it is possible to set TTL to certain documents in a collection, rather than having to create a TTL index that will apply the expiration timestamp for all document in a collection?", "username": "Bo_Yuan" }, { "code": "", "text": "Welcome to the MongoDB Community @Bo_Yuan!TTL expiry is based on matching an indexed Date field in a collection. Documents without the TTL date field (or a valid BSON Date value for this field) will not expire.Another option would be configuring your TTL index to Expire Documents at a Specific Clock Time, with a far future date for the documents you do not want to delete.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Set TTL for some (not all) documents in a collection
2022-08-01T17:57:48.900Z
Set TTL for some (not all) documents in a collection
2,551
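A small sketch of the first option from the accepted answer above: only documents carrying the indexed date field ever expire, so per-document TTL falls out naturally. Collection and field names are illustrative:

```javascript
// Documents with an expireAt date are removed once that time passes;
// documents without the field are never touched by the TTL monitor.
db.events.createIndex({ expireAt: 1 }, { expireAfterSeconds: 0 });

// Expires roughly one day from now:
db.events.insertOne({
  type: "temporary",
  expireAt: new Date(Date.now() + 24 * 60 * 60 * 1000),
});

// No expireAt field, so it is kept indefinitely:
db.events.insertOne({ type: "permanent" });
```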
https://www.mongodb.com/…0_2_1024x741.png
[ "aggregation", "node-js", "compass", "serverless" ]
[ { "code": "[\n {\n '$geoNear': {\n 'near': {\n 'type': 'Point', \n 'coordinates': [\n -0.0000, 51.00000\n ]\n }, \n 'distanceField': 'string', \n 'maxDistance': 500, \n 'query': {}, \n 'includeLocs': 'dist.location', \n 'spherical': true\n }\n }, {\n '$lookup': {\n 'from': 'blockedusers', \n 'localField': 'user', \n 'foreignField': 'blockedUserId', \n 'as': 'relationship'\n }\n }, {\n '$match': {\n 'relationship.userId': {\n '$ne': 'eudwvdeiwuedbwiuebRandomUserId'\n }\n }\n }\n]\n", "text": "\nScreenshot 2022-07-20 at 11.13.42 pm1161×841 124 KB\nIn my aggregation pipeline, running $lookup after $geocode causes a connection error as shown in the screenshot. The error only occurs when using atlas serverless because I’ve tried it on the community server and it works fine.\nThe aggregation code looks like this:I’ve tried running this on compass and hosted the node js api on elastic beanstalk and still ran into the problem of connection 36 to 52.215.2.251:27017 closed", "username": "David_Oyedeji" }, { "code": "", "text": "Hi @David_Oyedeji - Welcome to the community.Thanks for raising this one.I’d just like to clarify one thing here regarding the following:The error only occurs when using atlas serverless because I’ve tried it on the community server and it works fine.When you ran it on the community server, could you confirm what version of MongoDB was in use?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_Tran - thanks for the reply.\nThe MongoDB version used was 5.0.9Best,\nDavid", "username": "David_Oyedeji" }, { "code": "", "text": "Thanks for confirming @David_Oyedeji,I believe this may be related to SERVER-68062 noted in the 6.0 Release Notes. If you’re still receiving this error in version 6.0.1, please let me know. As per the current status of the time of this messge:Unresolved. This issue will be fixed in a future 6.0 minor release.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
[Serverless] Connection 27017 closed
2022-07-20T22:22:37.727Z
[Serverless] Connection 27017 closed
2,469
null
[]
[ { "code": "", "text": "Hi, I’m LeeI wonder maximum size of database.also, I want to ask you that I can increase maximum size (storage space).Thank you.", "username": "Lee_Changwon" }, { "code": "db.stats()", "text": "HeyIf you want to check your database size you can use db.stats() command\nthe command returns storage statistics for a given database.\nThere is no maximum size for an individual MongoDB database, you can find MongoDB limits and thresholds\nhere", "username": "Arkadiusz_Borucki" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I check total database size?
2022-08-01T15:59:08.275Z
How can I check total database size?
8,874
null
[ "connector-for-bi" ]
[ { "code": "", "text": "I am using MongoDB ODBC Driver 1.4.2. I am using this driver to connect Microsoft Power BI report to mongosqld (MongoDB connector for BI).It works all fine except for one part : when i try to implement a Power Query custom filter, i get the following error :“Failed to save modifications to the server. Error returned: 'OLE DB or ODBC error: [DataSource.Error] ODBC: ERROR [HY000] [MySQL][ODBC 1.4(a) Driver][mysqld-5.7.12 mongosqld v2.14.0]This command is not supported in the prepared statement protocol yet. '.”It looks like this feature (query folding) is not implemented on the driver’s side (or maybe on mongosqld’s side ? i’m not sure).My questions being :Thanks", "username": "Mathias_B" }, { "code": "", "text": "Hi Mathias,I have the same issue here, did you find a solution?Best Regards,Ciro", "username": "Ciro_Correa" }, { "code": "", "text": "Hi Ciro,There is no solution for this with this approach. I see there was a release for mongodb odbc driver 1.4.3 three weeks ago but i doubt this fixes anything regarding power bi incremental refresh and query folding.Another approach is to use CDATA mongodb odbc driver.Best Regards,\nMathias", "username": "Mathias_B" } ]
MongoDB ODBC Driver - Power BI's Power query filter
2021-11-30T11:39:39.798Z
MongoDB ODBC Driver - Power BI&rsquo;s Power query filter
4,402
null
[ "time-series" ]
[ { "code": "db.createCollection(\n \"timeseries_5\",\n {\n timeseries: {\n timeField: \"timestamp\",\n metaField: \"symbol\",\n granularity: \"seconds\"\n }\n }\n)\ndb.timeseries_5.createIndex(\n{ \n \"timestamp\": 1,\n \"symbol\": 1\n \n},\n {\n name: \"symbol_timestamp\",\n sparse: true\n }\n);\n{\n\"message\" : \"Unique indexes are not supported on collections clustered by _id\",\n\"ok\" : 0,\n\"code\" : 72,\n\"codeName\" : \"InvalidOptions\"\n}\n\ndb.timeseries_5.insert([{\n \n timestamp: ISODate('2019-01-01T13:15:00.000Z'),\n symbol: 'SNX',\n\n metadata:{\n open: 0.040613,\n high: 0.040613,\n volume: 3286.08613337,\n close: 0.040123,\n exchange: 'kucoin',\n low: 0.039847,\n }\n }])\n", "text": "hey guys i am new to mongo db\nwe are using mongo db for recording candlestick information with timeseries collectionbut we want to avoid duplicate data\ni created a time series collection:i created my indexbut i can pass unique:truehow i avoid duplicate data in mongo db timeseries collection???\ni need unique 2 field (symbol, timestamp)this is how i insert my data", "username": "Ali_moradi_N_A" }, { "code": "", "text": "Hi,\nUnique Index is not possible in Timeseries for now:Time Series, IOT\n\"\nThese index types aren’t supported:These index properties aren’t supported:I hope it’s will be changed in next released.best regards,", "username": "Olivier_Guille" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to have 2 unique fields together in a timeseries collection
2022-07-31T15:18:49.981Z
Unable to have 2 unique fields together in a timeseries collection
2,276
null
[ "aggregation", "atlas-search" ]
[ { "code": "[\n {\n \"$search\": {\n \"compound\": {\n \"should\": [\n {\n \"phrase\": {\n \"query\": \"test phrase\",\n \"path\": {\n \"wildcard\": \"*\"\n }\n }\n }\n ],\n \"filter\": [\n {\n \"text\": {\n \"path\": \"owner\",\n \"query\": \"owner_id_redacted\"\n }\n }\n ]\n }\n }\n },\n {\n \"$project\": {\n \"score\": {\n \"$meta\": \"searchScore\"\n },\n \"desc\": 1\n }\n }\n]\n10.5570.0", "text": "I have the following query:I’m expecting 1 result in my test. I have several results. The first one has a search score of 10.557… The rest of the results (which are many) all have search scores of 0.0. Why are these being included and how can I filter them out of the results?", "username": "djedi" }, { "code": "minimumShouldMatch", "text": "Nevermind. I found the minimumShouldMatch option in the docs. Working better now.", "username": "djedi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Search Score of 0.0
2022-08-01T18:08:47.349Z
Search Score of 0.0
1,789
null
[]
[ { "code": "", "text": "Hello,Registering and logging in as an email/password user for the most part works fine in my SwiftUI app that uses Flexible Sync, but something goes wrong if I try to create and log in as a new user right after deleting a different one. Specifically, if I don’t manually close and reopen my app before creating a new user after having deleted one, I get this error when loading the new user:terminating with uncaught exception of type realm::RealmFileException: Directory at path ‘/var/mobile/Containers/Data/Application/7377FD0A-6C34-4BD2-901F-E7681014E17A/Documents/mongodb-realm/app-id/62e21a1622edb7644cb9d1ff/flx_sync_default.realm.lock’ does not exist.If on the other hand I relaunch my app manually before trying to create the new user after deleting one, there’s no issue, so for some reason things don’t seem to be resetting correctly on their own without an app refresh.I use app.currentUser?.delete() to delete a user, and I can confirm that it gets removed from the user list in my app’s dashboard online. Also, I’m not sure if this is useful to know, but there’s no similar issue if I simply log out as a user and log in as a different one. It only happens with user deletions.Any help on interpreting the error would be much appreciated!Thank you,\nJack", "username": "Jack_Edmonds" }, { "code": "", "text": "Can you include the complete code you’re using to delete the user from the app?", "username": "Jay" }, { "code": "func deleteAccount() async {\n do {\n try await app.currentUser?.delete()\n print(\"Account deleted\")\n } catch {\n print(\"Unable to delete account: \\(error)\")\n }\n}\nflx_sync_default.realm.lock", "text": "Hi Jay,I call this function to delete a user’s account:Should the flx_sync_default.realm.lock file be getting automatically deleted when I delete a user? Based on the path in the error message it would make sense that it’s no longer needed (62e21a1622edb7644cb9d1ff was the ID of the deleted user), but the app doesn’t seem to agree haha.", "username": "Jack_Edmonds" }, { "code": "TaskTracker@IBAction func testButton1Action(_ sender: Any) {\n print(#function)\n\n Task {\n await self.handleDeleteUserAsync()\n }\n}\nfunc handleDeleteUserAsync() async {\n do {\n try await gTaskApp.currentUser?.delete()\n print(\"Account deleted\")\n } catch {\n print(\"Unable to delete account: \\(error)\")\n }\n}\nAccount deleted", "text": "Here’s some datapoints after testing so let me set this upI have (macOS) app TaskTracker that with button press, can log in anonymously (or with user). When that happens a new folder is created on the local drive\nCreated920×398 53.6 KB\nthen, I have a button action that calls a functionto delete the user using the same code you’re using (note that gTaskApp is a “global” var that points to my app)When that happens, it deletes the folder 62e6…\nDeleted800×226 26.3 KB\nand logs the user out and prints Account deleted in console - the .lock file is deleted because everything, including the parent folder is deleted.So in other words, I can’t duplicate the issue. This behavior appears to be identical between a partition based and flex app as it involves authentication only so the sync type doesn’t seem to have an impact. I could be incorrect on that assumption.", "username": "Jay" }, { "code": "if let user = app.currentUser {\n ... // Flexible Sync config setup\n OpenSyncedRealmView()\n .environment(\\.realmConfiguration, config)\n} else {\n LoginView()\n}\n", "text": "Thank you for testing this out. 
The issue with my app seems to arise after the process you describe though. After you delete a user and it returns you to your login screen, does it let you log in with a new user without any issue? It’s when I attempt to log in with a user right after deleting a different one that my app crashes.The .lock file seems to be getting deleted like in your case, it prints “Account deleted,” and it takes me back to my login screen because my app variable is monitored to allow the app to recognize that there is no longer a user logged in by checkingIt’s just that my app then tries to find the deleted .lock file for the old user instead of moving on to deal solely with a newly logged in user if I don’t first close and reopen my app before logging in with a new one. It’s like the app isn’t refreshing correctly on its own. Is there some call I need to make to refresh the realm app manually after deleting a user? I’m using AsyncOpen in my OpenSyncedRealmView to open a realm in a way that’s almost identical to the SwiftUI Quickstart tutorial btw (https://www.mongodb.com/docs/realm/sdk/swift/swiftui-tutorial/), and it’s when it gets back to that point with a new user that the crash seems to occur.", "username": "Jack_Edmonds" }, { "code": "", "text": "After you delete a user and it returns you to your login screen, does it let you log in with a new user without any issue?Yes. I can create new users or anonymous and login, delete the user and immediately create and log in with a new user or anonymous account without exiting the app.We have a class var that stores the task results so we nil that after deleting a user and then refresh the task tableview to clear it.Perhaps you’re holding a strong ref the user somewhere?", "username": "Jay" } ]
Error with next login after deleting a user
2022-07-28T05:35:32.334Z
Error with next login after deleting a user
2,480
null
[ "spark-connector" ]
[ { "code": "", "text": "We want to read and write from spark streaming to global mongo db. We tried to use option shardkey in v10 spark mongo db connector but it is not working. We tried with v3.0 spark mongo db connector it is working but it doesn’t support streaming as streaming is supported in v10 spark mongo db connector. Can any one please help me on that ?How we can read write from spark to global mongo db. It must read write from particular zone.", "username": "Deepak_gusain" }, { "code": "", "text": "By not work what exactly was the error ? Can you provide any example code that you tried ? Are you trying to write to a sharded collection ?", "username": "Robert_Walters" }, { "code": "", "text": "we are trying to insert data into shard mongo db and we are able to insert data using format “mongo” (spark mongo db connector version - 3.0) for batch job, but while inserting using spark streaming job we are using format “Mongodb” version -10.0 , which is not working.Below is the code syntax which we are using and it is not working:-df.writeStream.format(“mongodb”) \n.queryName(“some query”) \n.outputMode(“append”) \n.option(“spark.mongodb.connection.uri”, uri) \n.option(“spark.mongodb.database”, database) \n.option(“spark.mongodb.collection”, collection) \n.option(“replaceDocument”, False) \n.option(“shardkey” , ‘{“first field”:”hashed” , “second field”:”hashed\"}’) \n.trigger(processingTime=some value) \n.option(“checkpointLocation”,checkpoint_folder_path) \n.start()getting below error :-\ncom.mongodb.MongoBulkWriteException: Bulk write operation error on server “mongo server name”:27016. Write errors: [BulkWriteError{index=0, code=61, message=‘Failed to target upsert by query :: could not extract exact shard key’, details={}}].how we can write into shard mongo db from spark streaming job (global mongo db ) using shard key (as in mongo db doc it was mentioned that we must use shadkey option while insert our data into specific shard )?Can we use spark mongo db connector version 3.0 for both streaming and batch job to write data into shard mongo db ( global mongo db )?Is is possible to use spark mongo db connector version 10 for both streaming and batch for write data into global mongo db (shard mongo db ).", "username": "Deepak_gusain" }, { "code": "idFieldList", "text": "@Deepak_gusain There is no shardkey write option, https://www.mongodb.com/docs/spark-connector/current/configuration/write/you will need to use the idFieldList and specify the fields that are used to identify the documentV10 handles batch and structured stream, V3.x just batch. That said, we are working to make V10 at parity with V3.x, still need to add RDD support and some other things.", "username": "Robert_Walters" }, { "code": "", "text": "As per mongo db document :- Without shardkey document will be not distribute, and we are using global mongo db. Data must go into particular zone.Will this “idfieldlist” distribute inserted data in global mongo db ?", "username": "Deepak_gusain" }, { "code": "", "text": "When you define the shard itself you define zones. Based upon the shard key value in the document that determines which shard it will be written to. Thus, there isn’t anything on the Spark side to configure it more to do with how you configured the shard collection itself.", "username": "Robert_Walters" }, { "code": "", "text": "Hi,I am trying to update a field ( which is not a index field ) in existing data using below code but it is not working and giving below error. 
Can you please tell us if we are doing anything wrong?df.write.format(“mongodb”).mode(“append”).option(“spark.mongodb.connection.uri”,_uri) \n.option(“database”,_database) \n.option(“collection”, _collection) \n.option(“replaceDocument”,False) \n.option(“operationType” , “update”) \n.option(“idFieldList” , [ “field_1”, “field_2” , “field_3”]).save()field_1 = default “_id”\nfield_2 = compound index - field 1\nfields_3 = compound index - field 2ERROR :- com.mongodb.MongoBulkWriteException: Bulk write operation error on server “mongo server name”:27016. Write errors: [BulkWriteError{index=0, code=11000, message=‘E11000 duplicate key error collection: db_name.collection_name index: id dup key: { _id: “101” }’, details={}}].Please note → in mongo spark connector version - 3 , it was working but in version -10 we are not able to update data.", "username": "Deepak_gusain" }, { "code": "", "text": "@Robert_Walters , can you please help us on above error? We are not able to update data in global mongo db. Above I have pasted code.", "username": "Deepak_gusain" } ]
Spark mongo connector - spark streaming into global mongo db , read and write operation in particular zone
2022-07-29T04:55:12.926Z
Spark mongo connector - spark streaming into global mongo db , read and write operation in particular zone
3,913
https://www.mongodb.com/…0_2_1024x534.png
[ "atlas-cluster" ]
[ { "code": "", "text": "I have checked database and old database is disappeared.I received as bellowing transfer mail.Could you recovery my cluster for me, please ?This cluster’s database is very important.\n\nScreen Shot 2022-08-01 at 21.17.241612×842 133 KB\n", "username": "CHIEN_PHAM_NGOC" }, { "code": "", "text": "Your M0 free tier cluster, RealmCluster, has been idle for more than 60 days.MongoDB Atlas will automatically pause this cluster in 7 days at 8:12 AM EDT on 2022/08/08 due to prolonged inactivity.To prevent this cluster from being paused, initiate a connection to your cluster before 2022/08/08. View our documentation for instructions on how to connect to your cluster.", "username": "CHIEN_PHAM_NGOC" }, { "code": "", "text": "Above message is a headsup that your cluster will go to pause mode due to inactivity.This is normal\nAre you on the right organization/project?\nWhat does all clusters at top right show\nCluster cannot dissappear unless it is dropped", "username": "Ramachandra_Tummala" } ]
Please help me to recover this cluster!
2022-08-01T14:37:18.312Z
Please help me to recover this cluster!
1,583
null
[ "python" ]
[ { "code": "TypeError: Object of type ObjectId is not JSON serializable", "text": "I am currently storing all documents’ ids and references as ObjectId rather than string.To send documents from my Flask app to my frontend, I have to convert all ObjectIds to string first. This is due to the error TypeError: Object of type ObjectId is not JSON serializableWhen my server receives documents from my frontend, I have to convert these strings back to ObjectIds before performing an update operation on the document.I am contemplating storing all id as strings in order to avoid performing these two additional steps. However, I understand that ObjectId is more efficient which is why I have been resisting this change.Would really appreciate some advice on how should I handle this situation.", "username": "W_N_A" }, { "code": "from bson import json_util, ObjectId\nimport json\n\n#Dump loaded BSON to valid JSON string and reload it as dict\npage_sanitized = json.loads(json_util.dumps(page))\nreturn page_sanitized\n", "text": "This is usually how I handle it,", "username": "tapiocaPENGUIN" }, { "code": "_id: Object(...)_id: {\"$oid\": hex_representation}", "text": "json_util.dumps(page) would convert _id: Object(...) to _id: {\"$oid\": hex_representation} which makes it unintuitive for the frontend", "username": "W_N_A" } ]
How should I handle ObjectId with Flask?
2022-08-01T14:58:22.189Z
How should I handle ObjectId with Flask?
4,146
null
[ "java", "mongodb-shell", "atlas-cluster", "kafka-connector" ]
[ { "code": "[Worker-00cbe7dbb4d2fb6dc] [2022-04-27 13:30:21,449] INFO Exception in monitor thread while connecting to server [mydb].mongodb.net:27017 (org.mongodb.driver.cluster:76)\n[Worker-00cbe7dbb4d2fb6dc] com.mongodb.MongoSocketOpenException: Exception opening socket\n[Worker-00cbe7dbb4d2fb6dc] \tat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70)\n", "text": "I am attempting to use the official MongoDB kafka connector https://www.mongodb.com/docs/kafka-connector/current/sink-connector/ as a SINK to move data from an AWS hosted MSK cluster into an ATLAS hosted mongodb.I’ve configured the client in a way that I THINK should work but I’m getting issuesI’m wondering if anyone out there has successfully done this and if there is a trick somewhere that I’m missing.This is a development level proof of concept at the moment, so I’ve got a fully-open kafka solution in AWS, the mongodb in atlas has a 0.0.0.0/0 setting so it should be open.My connection.uri uses a mongodb+srv linkI am able to successfully connect to the mongo instance if I attempt to connect via mongosh from within an EC2 instance on the same AWS cluster, but the MSK Connect workers in AWS are failing.Would love to hear any insights from others who have successfully done this in the past.", "username": "Alex_Chesser" }, { "code": "", "text": "Hey\nI’ve just run into the exact same problem, where you able to resolve it in the end?", "username": "Martin_Wyett" }, { "code": "", "text": "Just in case someone else stumbles across this I’ll share the answerYou have to create a Private Endpoint in Atlas then a VPC endpoint in AWS.There is a section in this guide that explains how to set up the private endpoint: Integrating MongoDB with Amazon Managed Streaming for Apache Kafka (MSK) | MongoDBFor me even though we had a peering connection set up between AWS and Atlas with all of the correct routes etc, we still had to go via this private endpoint option, it’s a limitation of MSK I believe", "username": "Martin_Wyett" }, { "code": "", "text": "For VPC Peering to work, you’ll need NAT GW configured. It doesn’t work with IGW.", "username": "igor_alekseev" } ]
Unable to connect to Atlas-Hosted MongoDB via Confluent connector hosted in MSK
2022-04-27T15:41:18.397Z
Unable to connect to Atlas-Hosted MongoDB via Confluent connector hosted in MSK
3,820
https://www.mongodb.com/…22cd17e7de41.png
[ "python", "compass", "indexes" ]
[ { "code": " -177.32885073866842,\n 69.26482080715196],\n [\n -172.4949740088106,\n 73.33141551639744\n ],[\n -171.4150558672426,\n 75.10461444020666\n ],[\n -171.41354695103954,\n 75.11130642199625\n ],[\n -169.36300659762358,\n 87.20295230697523\n ],[\n -169.13419616263977,\n 90\n ],[\n -180,\n 90\n ],[\n -180,\n 70.39749169346709]]]", "text": "Hi!\nI hope that I chose right section for my issue. I’m trying to create 2dsphere index using mongoDB compass (but the same issue occurs when using pymongo) for polygons and I’m getting error.\n\nMy setup:\nI’m using database on M1 mac running macOS 12 montarey in localhost. My mongoDB version is 5.0.7.\nWhat I try to do:\nI have divided world map into 15000 polygons, which don’t interfere. I have uploaded all of them in DB, and now I need to index them. I tried to just remove this polygon which raises error, but there are at least 10 more.\nInteresting that polygon which raises error, is right on edge of world map. Could it somehow be the reason of error? No other software I tried this polygon in didn’t have any problems.Full GeoJSON object coordinates:\n“coordinates”: [\n[ [\n-180,\n70.39749169346709 ],[", "username": "Armands_Vagalis" }, { "code": "", "text": "There is this polygon on map:\n\nUntitled (48)1062×1170 196 KB\n", "username": "Armands_Vagalis" } ]
Unknown error while creating mongoDB geospatial 2dsphere index
2022-08-01T14:51:04.451Z
Unknown error while creating mongoDB geospatial 2dsphere index
1,688
null
[ "dot-net", "atlas-device-sync" ]
[ { "code": "", "text": "Hi, our SYNC server is down because have sync trouble between Realm and Atlas.\nHow can i detect this in .Net SDK ?\nI don’t want execute functions like WaitForDownloadAsync() because they enter in endless loop.\nI can’t use Session State because they return Active.\nthanks", "username": "Sergio_Carbonete" }, { "code": "Session.ConnectionStateDisconnected", "text": "There’s no mechanics in the SDK to detect the server being down. There should be server-side alerts that notify you whenever that happens. If the server is completely unresponsive, Session.ConnectionState should be Disconnected. But if it’s able to establish connections, just not syncing for whatever reason, then that won’t be a reliable indicator.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to detect Sync Server is down
2022-07-07T17:18:59.695Z
How to detect Sync Server is down
1,863
null
[ "atlas-functions", "app-services-data-access" ]
[ { "code": "", "text": "How to configure the Realm Sync permissions so that users can write from within (specific) functions, e.g. a function “increaseCounter” that will check if certain conditions are met and then increase a value in a document by 1.The users should not be allowed to change this value themselves (which they would be able to if i gave them basic “write” access, so this is where my question comes from. They should only be allowed to call this function).How do I define that in the Sync permissions section?", "username": "SirSwagon_N_A" }, { "code": "", "text": "Okay this is super simple. Just do not give write permission in the Sync “Define Permissions” section and configure the function your clients should call to change data to use “System” authentication. Warning: when testing, this will be overwritten whenever you test a function directly from the Realm UI on realm.mongodb.com under “Functions” if you press “Change User” below a function’s code in the console area and select a user from the list. If you want to test the function with the correct context.user, it is mandatory to call the function actually from a (test) client SDK.", "username": "SirSwagon_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm sync permissions rules allow write only from functions
2022-06-12T12:45:19.125Z
Realm sync permissions rules allow write only from functions
2,660
null
[ "dot-net", "atlas-search", "atlas", "field-encryption" ]
[ { "code": "", "text": "Hi,I’m hoping someone could point me in the right direction here.We are working with highly sensitive data and have subsequently starting working on implement CSFLE on sensitive data in sensitive collections. We have got CSFLE working using an Azure KMS and all is well.However, it seems that when using an encryption enabled MongoClient, any unsupported operations are blocked on all collections regardless of whether they have encrypted fields or not. Is this the desired behaviour of the driver and if so, what is a suitable workaround?We are using the v2.16 of the C# driver for reference.The only way I can see us working around this is by registering 2 clients:Is this the recommended approach? My concern is the number of connections to the database will increase as from my understanding the connections are handled by the MongoClient and 2 clients would result in 2 collection pools.In summary, my questions are:Thanks,\nLuke", "username": "Luke_Warren" }, { "code": "", "text": "Hey, can you provide a command you run that fails with csfle?", "username": "Dmitry_Lukyanov" }, { "code": "", "text": "Hi Demitry,I ended up logging a support ticket. It seems that this is a known bug with the driver: https://jira.mongodb.org/browse/SERVER-68371Please note to anyone is future that you cannot run your search pipelines with an encryption enabled Mongo Client.Commands run via the Mongocryptd process which does not recognise $search stages in aggregations. You will also get errors with lookup aggregations as they are not supported.The workaround is to register 2 mongo clients - one for dealing with encrypted collections and one for running aggregations. You will then need to specifically omit encrypted fields to avoid serialisation issues.This is not ideal as you will use more DB connections because Mongo cannot pool across 2 different clients with different configurations.", "username": "Luke_Warren" } ]
Client-side Field-Level Encryption
2022-07-13T17:39:22.406Z
Client-side Field-Level Encryption
2,438
null
[]
[ { "code": "", "text": "Data Explorer operation for request ID’s [62e37ed36f1de2088fa09817] timed out after 45 secs Check your query and try again.\nI am also facing the same problem. It just happened suddenly. Can you suggest some solution for this???", "username": "Neaz_Morshed" }, { "code": "", "text": "Hi @Neaz_Morshed - Can you advise the following information:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Data Explorer operation for request ID error
2022-07-29T06:37:41.279Z
Data Explorer operation for request ID error
1,533
null
[]
[ { "code": "", "text": "Hi There,We are using Charts v1.12.0 and suddenly encountered an issue:{“error”:“app is at max permitted number of services”,“error_code”:“FunctionExecutionError”,“link”:“http://localhost:8080/groups/5cd322d1cc1889001381febe/apps/5cd322d2cc1889001381fece/logs?co_id=62e409a4cc1889013b17008f”}@tomhollander Can u pls help with this.?", "username": "Apurv_Tyagi" }, { "code": "", "text": "Hi @Apurv_Tyagi -The on-prem version of Charts is out of support now. The solution to this will be to find the collection for Stitch Services and delete those which aren’t in use by any data sources, but I don’t recall what the schema looks like.Tom", "username": "tomhollander" } ]
Mongo Charts Error | Max permitted number of services
2022-07-29T16:33:28.070Z
Mongo Charts Error | Max permitted number of services
1,838
null
[]
[ { "code": "", "text": "Mongodb\\Driver\\manager not found", "username": "Tanmay_Ghosh" }, { "code": "", "text": "", "username": "Jack_Woehr" } ]
Mongodb\Driver\manager not found
2022-07-31T10:41:46.225Z
Mongodb\Driver\manager not found
1,023
null
[ "queries" ]
[ { "code": "", "text": "Hi,We are having 2.7TB of data on primary and due to some reason secondary went down and it is now completely out of sync. So now whenever it is getting added into cluster my application starts timing out. Also since slave sync from the beginning it takes almost 2 to 3 days to fully in sync with master.For application we tried to check if there is issue with write concern but it shows w:1.Is there is any way we can add Replica without application timeout. Because all operations on master starts timing out as soon as we add Replica. Once replica is in sync we don’t see such issues.MongoDB Version is 4.2 community version", "username": "Prasad_Wani" }, { "code": "", "text": "We have a similar problem with our cluster.We’re trying to sync up a replica using a seeded member set to hidden. When the optimeDate caught up, we brought it into service by setting hidden to false, and immediately everything that connected to this replica for reads was timing out. Looking at rs.status(), the optimeDate just froze. We seem to be missing optimeDurableDate and the optimeDurable: {} from rs.status(), and I can’t find anything to force mongodb to sync up.", "username": "DB_CA" } ]
Adding a new Slave to an existing Mongo causes Timeouts in the Application
2022-05-22T05:08:48.389Z
Adding a new Slave to an existing Mongo causes Timeouts in the Application
1,726
null
[]
[ { "code": "{geopositions:[{lat:41.1,lon:-87.8},{lat:41.5,lon:-87.3}]}\n", "text": "I have a mongo collection populated with geoposition data. Structure of document is like:Is there a way like some tool which I can use to work with mongo collection and display geopositions on a map? Or a way of doing it programmatically?", "username": "Manish_Ghildiyal" }, { "code": "", "text": "You can visualize data on map in MongoDB Charts", "username": "Katya" }, { "code": "", "text": "Is it possible to display this chart map in a front end as an image?", "username": "An_De" } ]
Displaying geospatial data on google map
2020-10-28T16:51:34.913Z
Displaying geospatial data on google map
2,628
null
[ "java", "production" ]
[ { "code": "RewrapManyDataKey", "text": "Release Notes:\nThis is a patch release that fixes a potential data corruption bug in RewrapManyDataKey when rotating encrypted data encryption keys backed by GCP or Azure key services.The following conditions will trigger this bug:The result of this bug is that the key material for all data encryption keys being rewrapped is replaced by new randomly generated material, destroying the original key material.To mitigate potential data corruption, upgrade to this version or higher before using RewrapManyDataKey to rotate Azure-backed or GCP-backed data encryption keys. A backup of the key vault collection should always be taken before key rotation.The list of JIRA tickets resolved in this release is available on Java driver Jira project.Documentation on the MongoDB Java driver will be updated in due course and can be found here:http://mongodb.github.io/mongo-java-driver/Upgrading\nThere are no known backwards breaking changes in this release.", "username": "Ross_Lawley" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
The 4.7.1 MongoDB Java & JVM Drivers have been released
2022-07-30T15:22:41.059Z
The 4.7.1 MongoDB Java &amp; JVM Drivers have been released
3,468
https://www.mongodb.com/…83d4e38bb3dd.png
[ "connecting" ]
[ { "code": "{\"t\":{\"$date\":\"2022-07-28T15:31:25.696+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4712102, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Host failed in replica set\",\"attr\":{\"replicaSet\":\"rs\",\"host\":\"52.x.x.x:1996\",\"error\":{\"code\":202,\"codeName\":\"NetworkInterfaceExceededTimeLimit\",\"errmsg\":\"Couldn't get a connection within the time limit of 500ms\"},\"action\":{\"dropConnections\":false,\"requestImmediateCheck\":false,\"outcome\":{\"host\":\":1996\",\"success\":false}}}}\n{\"t\":{\"$date\":\"2022-07-28T15:31:36.195+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4333222, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM received failed isMaster\",\"attr\":{\"host\":\"52.x.x.x:1996\",\"error\":\"NetworkInterfaceExceededTimeLimit: Couldn't get a connection within the time limit of 500ms\",\"replicaSet\":\"rs\",\"isMasterReply\":\"{}\"}}\n{\"t\":{\"$date\":\"2022-07-28T15:31:36.196+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4712102, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Host failed in replica set\",\"attr\":{\"replicaSet\":\"rs\",\"host\":\"52.x.x.x:1996\",\"error\":{\"code\":202,\"codeName\":\"NetworkInterfaceExceededTimeLimit\",\"errmsg\":\"Couldn't get a connection within the time limit of 500ms\"},\"action\":{\"dropConnections\":false,\"requestImmediateCheck\":false,\"outcome\":{\"host\":\":1996\",\"success\":false}}}}\n\ndirectconnection", "text": "i am trying to test connectivity to my local mongodb through mongo compass, but i still reach out same error, however the service was running properly before.\n\nmongo785×429 46 KB\nalso, this is the most recent logs for mongodb:i only have access to db if i used directconnection option when connecting to db, but we use replicaset and not available for us to use.", "username": "mahmoud_abdelfatah" }, { "code": "", "text": "Are you sure the port is right? it’s non-default", "username": "Andrew_Davidson" }, { "code": "netstat -na | grep 1996\ntelnet localhost 1996\n", "text": "What happens if you run the following commands?andThese will help you determine if you have an application running on the port and help you to troubleshoot if connectivity is an issue. Note that these are Linux commands so if your local machine is Windows you would have to look for equivalent commands (I can’t remember if these are standard/available on Windows machines or not).", "username": "Doug_Duncan" }, { "code": "directconnection/etc/hosts", "text": "Welcome to the MongoDB Community @mahmoud_abdelfatah!i only have access to db if i used directconnection option when connecting to db, but we use replicaset and not available for us to use.If a direct connection works but a replica set connection does not, I expect your replica set configuration has hostnames that cannot be resolved locally.Per the MongoDB Server Discovery and Monitoring (SDAM) spec, Clients use the hostnames listed in the replica set config.A client using a replication set connection must be able to connect to members of the replica set using the configured hostnames/IPs and ports. 
A workaround might be adding hostnames to local DNS resolution via /etc/hosts or an equivalent for your O/S.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Right , I am not using the default port, so I have changed it to 1996 in mongod.cfg file", "username": "mahmoud_abdelfatah" }, { "code": "", "text": "Hi stennie,The thing is, I have just captured an image “actually AMI because I am using AWS” and the machine which I captured image from, it was working properly with no issues and no local DNS reolution as you have suggested.\n\nmongo21148×592 59.3 KB\nIn the old machine i had not used replicaset members as they should be used, I just enabled replicaset in the .cfg file and added a name for it, so later I can add members when I decide to do that.so, for new machine, I just spined up another mongoDB machine from the image and applied the machine NEW elastic ip in the application code which already in another machine.All things should work properly as before but unfortunately no.", "username": "mahmoud_abdelfatah" }, { "code": "", "text": "Hi doug,\nHope all is well.this is the output of two testing commands, i also tested the same commands in our live mongo which is the machine that working properly, and have the same result in two machines.\nnetstat818×399 104 KB\n", "username": "mahmoud_abdelfatah" }, { "code": "replicaSet", "text": "Thanks for showing the output of the commands @mahmoud_abdelfatah. In playing around, I was able to get a similar timeout message while trying to connect to a local replica set using Compass if my replica set name was incorrect. Have you validated that you are using the correct name?You can check the name of the replica set by connecting directly to one member of the set and looking to see what information is provided in the upper left box in Atlas:If that name (foo) in the example above matches the value you’re typing in for the replicaSet parameter, then I’m not sure why you’re still timing out.", "username": "Doug_Duncan" }, { "code": "", "text": "issue solved, but the problem was little bit different for our casei have created a new mongodb machine from an image of old machine, so the replicaset host ip address still the same for the old machine, so i should change it to the new machine ip", "username": "mahmoud_abdelfatah" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
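A quick way to confirm what the replica set actually advertises (and therefore which host names or IPs Compass must be able to resolve) is to ask a single member directly. A small pymongo sketch, using the non-default port from this thread:

```python
from pymongo import MongoClient

# Connect to one member only, bypassing replica-set discovery.
member = MongoClient("localhost", 1996, directConnection=True)

hello = member.admin.command("hello")
print("replica set:", hello.get("setName"))
print("advertised members:", hello.get("hosts"))
# Every entry in `hosts` must be reachable from the client machine, otherwise a
# replicaSet connection will time out even though a direct connection works.
```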
Can not connect to mongodb locally: Server selection timed out after 30000 ms
2022-07-28T16:50:57.725Z
Can not connect to mongodb locally: Server selection timed out after 30000 ms
6,389
null
[ "atlas-search" ]
[ { "code": "", "text": "Hello everyone!I noticed in the docs for Dynamic Mappings there is no mention to the amount of fields that can be indexed, which leads me to believe that it is unlimited. Here is the text I am referring to:Atlas Search automatically indexes the fields of supported types in each document. … Use dynamic mappings if your schema changes regularly or is unknownI’m not doing anything fancy regarding types as all searchable fields will be text in my case, however we are allowing our users to create arbitrary properties such as firstName while another user might have first_name in their account, so manually creating indexes on each property would get out of hand and dynamic mappings seems to be the solution.I noticed elastic search has a default of 1,000 and they recommend flattening if the number of fields is unknown - I can’t link the docs them because I’m a new user We’re in the process of migrating off of DynamoDB to Mongo, and would obviously prefer to have our search on Atlas instead of Elastic since we’re already here.For more context, we are building an applicant management system GitHub - plutomi/plutomi: Applicant management at any scale. Currently undergoing maintenance! :D\nand each organization can create questions for applicants to answer, and I would like to provide search functionality for any (text) field on their applicants.", "username": "joswayski" }, { "code": "", "text": "@joswayski First of all, I want to welcome you to the MongoDB community. We are so excited to have you, and I am particularly interested in the amazing application you have built. Please DM me on Twitter if you’d like to chat more about it.Secondarily, we do not limit the number of fields in an index, though when you are in the range of thousands of fields you could see performance issues unless you scale your boxes. You have a few options.To avoid linear scaling of boxes. You could spin up a new dynamic index for each tenant. I don’t expect any single tenant to create more than a few thousand fields.You can keep the approach you have today for simplicity and keep an eye on the number of fields because we don’t limit the number in our dedicated tier. I have seen customers go to 180,000 fields, though this is obviously an anti-pattern. Atlas Search may still work. I’m not sure what the absolute largest number would be for a mapping explosion, but I know the subsystem will choke at 2.147 Billion fields. Your page cache will probably pop long before that number. I’d bet in the millions of unique fields on very hefty machines.Please note: Dynamic fields do not index boolean fields by default today. Those need to be set for now, but that will change.", "username": "Marcus" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is the number of Atlas Search Dynamic Mappings unlimited?
2022-07-30T19:32:32.070Z
Is the number of Atlas Search Dynamic Mappings unlimited?
1,701
null
[ "dot-net", "field-encryption" ]
[ { "code": "ClientEncryption", "text": "Hello all,I’ve been researching the requirement to rotate the encryption keys manually. From the KMS (GCP) side, we managed to rotate the master key with the data key still working with the old version of the key.However, we would like to reencrypt the same data key with the new master key version instead, as from what I saw in this answer, the datakey remained encrypted with the old master key.I’ve tried to look into the code, and the ClientEncryption class in C# only provides the ability to create new data keys. There is no option to decrypt and reencrypt the CSFLE data key to make sure it’s encrypted with the latest key rotation.", "username": "PBG" }, { "code": "ClientEncryption", "text": "Hey @PBG , see a coming 2.17 release where we’ve introduced a new rewrapManyDataKey method for this goal into ClientEncryption. It should be released in the next few days", "username": "Dmitry_Lukyanov" }, { "code": "", "text": "Oh wow that’s amazing and quite fortunate! Looking forward to it.", "username": "PBG" }, { "code": "", "text": "it’s already released Please, check 2.17 release", "username": "Dmitry_Lukyanov" }, { "code": "Encryption related exception: HMAC validation failure. var vaultDatabase = vaultDbClient.GetDatabase(configuration.DatabaseName);\n var keyVaultCollection = vaultDatabase.GetCollection<BsonDocument>(KeyVaultCollectionName);\n var kmsProvider = configuration.KmsProvider ?? throw new InvalidOperationException($\"'{nameof(configuration.KmsProvider)}' is expected.\");\n var dataKeyDocument = keyVaultCollection.Find($\"{{\\\"masterKey.provider\\\": \\\"{kmsProvider}\\\"}}\").SingleOrDefault();\n var dataKey = dataKeyDocument?.GetValue(\"_id\")?.AsNullableGuid;\n FilterDefinition<BsonDocument> filter = dataKeyDocument;\n var dataKeyOptions = GetDataKeyOptions(configuration);\n var rewrapManyDataKeyOptions = new RewrapManyDataKeyOptions(kmsProvider, dataKeyOptions.MasterKey);\n RewrapManyDataKeyResult result = clientEncryption.RewrapManyDataKey(filter, rewrapManyDataKeyOptions);", "text": "@Dmitry_Lukyanov Thank you for the quick release! We’ve ran the rewrap on our test env, and it managed to reencrypt the key, but now the data is inaccessible.\nIt’s giving the following error:\nEncryption related exception: HMAC validation failure.Please find below the code we used, in case we did something wrong:", "username": "PBG" }, { "code": "rewrapManyDataKeyHMAC validation failure", "text": "Hey, I wasn’t able to reproduce this issue. The gist with code snippet I used: Rewrap · GitHub. 
Can you please check whether this gist code works for you?\nAlso, can you please provide the following information:", "username": "Dmitry_Lukyanov" }, { "code": " \"masterKey\" : {\n \"provider\" : \"gcp\",\n \"projectId\" : \"project-example\",\n \"location\" : \"location-test\",\n \"keyRing\" : \"example-location-keyring\",\n \"keyName\" : \"gke-master-key-test\"\n }\n ---> MongoDB.Driver.Encryption.MongoEncryptionException: Encryption related exception: HMAC validation failure.\n ---> MongoDB.Libmongocrypt.CryptException: HMAC validation failure\n at MongoDB.Libmongocrypt.Status.ThrowExceptionIfNeeded()\n at MongoDB.Libmongocrypt.CryptContext.FinalizeForEncryption()\n at MongoDB.Driver.Encryption.LibMongoCryptControllerBase.ProcessStates(CryptContext context, String databaseName, CancellationToken cancellationToken)\n at MongoDB.Driver.Encryption.ExplicitEncryptionLibMongoCryptController.DecryptField(BsonBinaryData encryptedValue, CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Encryption.ExplicitEncryptionLibMongoCryptController.DecryptField(BsonBinaryData encryptedValue, CancellationToken cancellationToken)\n at PTest.Shared.Database.Mongo.Files.MongoEncryptedFileRepository`1.DecryptStream(ClientEncryption clientEncryption, Stream encryptedStream, CancellationToken cancellationToken) in /home/vsts/work/1/s/shared/dotnet/Database.Mongo/Files/MongoEncryptedFileRepository.cs:line 84\n at PTest.Shared.Database.Mongo.Files.MongoEncryptedFileRepository`1.ReadFileAsync(String fileId, CancellationToken cancellationToken) in /home/vsts/work/1/s/shared/dotnet/Database.Mongo/Files/MongoEncryptedFileRepository.cs:line 43\n at PTest.Shared.Database.Mongo.Files.MongoEncryptedFileRepository`1.ReadFileAsync(String fileId, CancellationToken cancellationToken) in /home/vsts/work/1/s/shared/dotnet/Database.Mongo/Files/MongoEncryptedFileRepository.cs:line 43\n at PTest.Services.Impl.PartIndexing.PartIndexingService.GetPTImageAsync(GetPTImage request, CancellationToken cancellationToken) in /home/vsts/work/1/s/PTest-api/Services/PTest.Services.Impl/PTIndex/PTIndexService.cs:line 478\n --- End of inner exception stack trace ---\n at PTest.Services.Impl.PartIndexing.PartIndexingService.GetPTImageAsync(GetPTImage request, CancellationToken cancellationToken) in /home/vsts/work/1/s/PTest-api/Services/PTest.Services.Impl/PartIndexing/PartIndexingService.cs:line 487\n at PTest.BackendApi.WebApi.Controllers.Public.FilesController.PTImageAsync(Guid pId) in /home/vsts/work/1/s/PTest-api/WebApi/Controllers/Public/FilesController.cs:line 379\n at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.TaskOfIActionResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeActionMethodAsync>g__Awaited|12_0(ControllerActionInvoker invoker, ValueTask`1 actionResultValueTask)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeNextActionFilterAsync>g__Awaited|10_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)\n at 
Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync()\n --- End of stack trace from previous location ---\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|19_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)\n at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)\n at Microsoft.AspNetCore.Authorization.Policy.AuthorizationMiddlewareResultHandler.HandleAsync(RequestDelegate next, HttpContext context, AuthorizationPolicy policy, PolicyAuthorizationResult authorizeResult)\n at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)\n at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)\n at PTest.BackendApi.WebApi.GlobalExceptionHandler.Invoke(HttpContext context) in /home/vsts/work/1/s/PTest-api/WebApi/GlobalExceptionHandler.cs:line 55\n", "text": "Thank you for the link. From a first inspection the code seems similar, but I’ll do a deep dive to try and implement the snippet relevant code instead of the one we have.Regarding your questions:For the full stacktrace:", "username": "PBG" }, { "code": "//rewrapAutoEncryptMongoFileRepositoryGridFS public static Stream EncryptStream(ClientEncryption clientEncryption, Guid dataKeyId, Stream stream, CancellationToken cancellationToken)\n {\n var encryptOptions = new EncryptOptions(\n algorithm: EncryptionAlgorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Deterministic.ToString(),\n keyId: dataKeyId);\n\n var reader = new BinaryReader(stream);\n var data = new BsonBinaryData(reader.ReadBytes((int)stream.Length));\n var encryptedData = clientEncryption.Encrypt(data, encryptOptions, cancellationToken);\n var encryptedStream = new MemoryStream(encryptedData.Bytes);\n return encryptedStream;\n }\n\n public static Stream DecryptStream(ClientEncryption clientEncryption, Stream encryptedStream, CancellationToken cancellationToken)\n {\n using var reader = new BinaryReader(encryptedStream);\n var encryptedData = new BsonBinaryData(reader.ReadBytes((int)encryptedStream.Length));\n var data = clientEncryption.Decrypt(encryptedData, cancellationToken);\n var stream = new MemoryStream(data.AsBsonBinaryData.Bytes);\n return stream;\n }\n", "text": "Alright sorry to be bothering you with this. I’ve implement the //rewrap section of the code, and the issue remains. After looking more closely at the snippet, there is one differences. We’re not using AutoEncrypt. 
We’re using the MongoFileRepository as a base class to encrypt and decrypt the files stored using GridFS.", "username": "PBG" }, { "code": "", "text": "Can you please provide a full simple self contained repo with grid fs you use?", "username": "Dmitry_Lukyanov" }, { "code": " if (shouldTriggerFail)\n {\n Console.WriteLine(\"Initializing a new client encryption that won't be used to encrypt/decrypt before rewrap\");\n clientEncryption = EncryptionProvider.GetClientEncryption(databaseEncryptionConfiguration, vaultClient);\n }\nvar clientEncryption = new ClientEncryption(clientEncryptionOptions);\nclientEncryption.Decrypt(encryptedData1, CancellationToken.None);\nencryptionProvider.RewrapKeys(databaseEncryptionConfiguration, vaultClient, clientEncryption);\nclientEncryption.Decrypt(encryptedData2, CancellationToken.None);\nvar clientEncryption = new ClientEncryption(clientEncryptionOptions);\nencryptionProvider.RewrapKeys(databaseEncryptionConfiguration, vaultClient, clientEncryption);\nclientEncryption.Decrypt(encryptedData1, CancellationToken.None);\nclientEncryption.Decrypt(encryptedData2, CancellationToken.None);\n", "text": "@Dmitry_Lukyanov Hello again. I’ve debugged for a few days now, and I managed to pin down what was going wrong. The HMAC bug is happening in a very particular case. There needs to be 3 conditions for it to happen.This is the repo as you requested.As you can see, the only difference between a working rewrap and a non-working one is simply:If we use the original clientEncryption object that was initialized for the encryption or decryption, it works perfectly fine. But as soon as we run the rewrap with a yet unused new clientEncryption, the rewrap fails for GridFS.\nSo for example this works:But this will fail:", "username": "PBG" }, { "code": "", "text": "Thanks @PBG , Thanks for your repo, I can confirm that I see the same behavior, we will look at it and come back to you.", "username": "Dmitry_Lukyanov" }, { "code": "", "text": "@PBG , it’s a bug, we’re working on fixing it. You can track related work here", "username": "Dmitry_Lukyanov" }, { "code": "", "text": "@Dmitry_Lukyanov Thank you for the confirmation. Best of luck! ", "username": "PBG" }, { "code": "", "text": "@PBG , please check release 2.17.1 that contains solution for this bug", "username": "Dmitry_Lukyanov" }, { "code": "", "text": "@Dmitry_Lukyanov That works! Thank you! ", "username": "PBG" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to reencrypt the CSFLE Data Key with the newly rotated Master Key?
2022-07-18T18:57:12.685Z
How to reencrypt the CSFLE Data Key with the newly rotated Master Key?
4,214
null
[ "node-js" ]
[ { "code": "", "text": "I want to install Node.js driver whitout using npm, because on this install i don’t have web access. Is this possible with downloaded package node-mongodb-native-main from GitHub?", "username": "Kossak" }, { "code": "npm installnode_modules", "text": "Hi @Kossak and welcome to the forums.While this might be possible to do, you would also need to figure out all the dependencies that the driver relied on and get them, and then all the dependencies for those modules and on and on until you’ve gotten everything. That’s a lot of work, and prone to error.What you might be able to do (I have not tried this so not sure if it would work) is to run npm install on a machine that does have internet connectivity and then copy the node_modules folder over to the other machine. This would do all the dependency resolution for you. You would probably want to do this on a machine that had no modules installed previously just to keep things clean.Good luck if you choose to go down this route.", "username": "Doug_Duncan" } ]
How to install Node.js driver without using npm
2022-07-29T10:57:04.544Z
How to install Node.js driver without using npm
1,483
null
[ "java", "production" ]
[ { "code": "", "text": "The 4.7.0 MongoDB Java & JVM Drivers has been released.The documentation hub includes extensive documentation of the 4.7 driver.You can find a full list of bug fixes here .You can find a full list of improvements here .You can find a full list of new features here .", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Note: The 4.7.1 MongoDB Java & JVM Drivers have now been released", "username": "Ross_Lawley" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Java Driver 4.7.0 Released
2022-07-20T01:50:24.972Z
MongoDB Java Driver 4.7.0 Released
4,468
null
[ "java" ]
[ { "code": "", "text": "How can I change mongoDB filters in Java drivers to mongoShell format, is there any functionality in Java Driver code base that is actually doing that.I want to send these queries to a node server which will connect to mongoDB and then fetch the data and give it back to my java application.", "username": "Divyansh_N_A" }, { "code": "", "text": "Hi,\nI have similar requirement. Did u get any solution to this???", "username": "Mohib_ur_Rehman" }, { "code": "", "text": "I am not familiar with the builder classes since I avoid using this extra layer and that I usually develops the queries and aggregations in mongosh.I would try to simply print the result of toString(). I imagine that it will output the JSON equivalent, which is the mongosh and nodejs format.", "username": "steevej" }, { "code": "", "text": "yes, u can do that, just use Codecs, and it’s very easy to do", "username": "Divyansh_N_A" }, { "code": " private CodecRegistry codecRegistry = com.mongodb.MongoClientSettings.getDefaultCodecRegistry();\n private UuidRepresentation uuidRepresentation = UuidRepresentation.JAVA_LEGACY;\n\n BsonDocument filterDoc = filter.toBsonDocument(BsonDocument.class, createRegistry(codecRegistry, uuidRepresentation));\n", "text": "", "username": "Divyansh_N_A" } ]
Convert MongoDB Java filters to mongoShell or Node type filter
2022-07-19T10:20:46.052Z
Convert MongoDB Java filters to mongoShell or Node type filter
2,086
null
[ "queries", "java", "crud", "atlas-functions", "android" ]
[ { "code": "exports = async function(arg){\n let collection = context.services.get(\"mongodb-atlas\").db(<Database>).collection(<Collection>);\n let res = await collection.find({_id:BSON.ObjectId(<ID that exists>}).toArray();\n res = \"found: \" + res;\n if( context.user ){\n res += \"\\nUsername is \" + context.user.toString();\n }\n res += \"\\nRunning as system user: \" + context.runningAsSystem() + \".\";\n return res;\n};\nexports = async function(arg){\n let collection = context.services.get(\"mongodb-atlas\").db(<DB>).collection(<Collection>);\n \n let erg = await collection.updateOne({_id:BSON.ObjectId(<some id that exists>)}, {name:\"New Name\"});\n res = \"Changed #: \" + erg.modifiedCount;\n return res;\n \n};\n", "text": "My functions just end with unexpected error every time and do not show anything in the Realm logs (I turned on logs in each of the function’s settings). I tried user authentication functions, system functions, runAsSystemUser=true, etc.My setup is Atlas DB, Realm Web App, connect to Realm from Android Java SDK. I have users set up and log in with email and password from Java SDK before running a function. I initialize Realm and the MongodbApp. then call the function with functions.callFunctionAsync(name, parameters, callback). Function codes are very simple and don’t produce errors when I run them directly from Realm UI.Example functions I tried:and", "username": "SirSwagon_N_A" }, { "code": "exports = function() {...exports = async function() {...})functions.callFunctionAsync('functionName', new ArrayList<String>(), ReturnType.class, new App.Callback<ReturnType>() {...});", "text": "*** FINALLY FIXED ***\nIt’s about how you call the function. You have to put a List-object in the ‘parameters’ parameter (which is the second in the java sdk) of the callFunction[Async] function. Even if your Realm function is defined without parameters (e.g. just exports = function() {...} or exports = async function() {...}) you can NOT put null as the ‘parameters’. The simplest thing to put is new ArrayList(), like this:functions.callFunctionAsync('functionName', new ArrayList<String>(), ReturnType.class, new App.Callback<ReturnType>() {...});No hate but I think it’s so sad that simple stuff like this is not stated in the documentations.", "username": "SirSwagon_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb Realm every function produces unexpected error, nothing shows in logs
2022-07-28T16:29:59.956Z
Mongodb Realm every function produces unexpected error, nothing shows in logs
2,503
https://www.mongodb.com/…_2_1024x324.jpeg
[]
[ { "code": "", "text": "\nimage1920×609 153 KB\nHey Everyone,We are organizing a live session on MongoDB on 2nd Aug at 8 PM (IST, GMT+5.30). Please join in if you get the time. Here is the link to join the session, around 6 thousand people have already registered - MongoDB Bootcamp | Scaler The old-school relational databases have proven to be reliable and efficient but at the cost of scalability. That is why MongoDB has become one of the most popular ways to store petabytes of data and run millions of queries per second!The ease of scalability helps companies work freely and use data to make better decisions. But what is MongoDB and how does a NoSQL database program make your work with data easier?Join Akhil Sharma’s Free Masterclass, LIVE on 2nd August, at 8 PM and learn how MongoDB can help you quickly scale up your skill set!Meet Akhil Sharma (LinkedIn)Learn the Pre-requisites of this MasterclassCertificates - All attendees get certificates from Anshuman Singh and Scaler Academy! Please be careful while entering your details while registering since they will go on your CertificatesRegister Here: MongoDB Bootcamp | Scaler Look forward to seeing you all!\nAkhil", "username": "Akhil_Sharma" }, { "code": "", "text": "Here is the link to join the session, around 6 thousand people have already registered - MongoDB Bootcamp | Scaler", "username": "Akhil_Sharma" } ]
Get Started with MongoDB - Live Session
2022-07-30T07:56:30.028Z
Get Started with MongoDB - Live Session
3,046
null
[]
[ { "code": "", "text": "HI, I have created 2 oracle VM machines and want to exercise replication mechanism before we do it at the production level. I have completed up to a certain level of configuration as per this, (https://hevodata.com/learn/mongodb-replica-set-config/) and would like to have some technical assistance remotely. Are there any one who can support to complete this successfully?", "username": "Rajitha_Hewabandula" }, { "code": "", "text": "Hi,You should have at least 3 VM’s for the production use case. - Replica sets should always have an odd number of members. This ensures that elections will proceed smoothly. There is a good MongoDB tutorial that describes how to create a three-member replica set with access control.\nYou can also think about x.509 Certificates", "username": "Arkadiusz_Borucki" }, { "code": "", "text": "if you haven’t done so, you may also consider taking this course:Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "hi,\nWhen I do this,\nuncomment replication line 36, and set the replication name to,\nreplication:\nreplSetName: myreplica01\ntherafter, mongod service is unable to start.\ni am referring to this,In this tutorial, I will guide you step-by-step build a replica set in MongoDB. We will use 3 server nodes with CentOS 7 installed on them and then in...", "username": "Rajitha_Hewabandula" }, { "code": "sudo systemctl status mongod/var/log/mongodb/mongod.logsystemLog.path", "text": "it is possible you have removed too many spaces there. mongodb config files are YAML in essence and indentation matters. please check this first.if that does not solve, try sudo systemctl status mongod to see if error is written there and check /var/log/mongodb/mongod.log (or the file set systemLog.path in config file) to see you can see the error.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "its perfectly working in one node. then I copied the same to other replicate 2 nodes. (bindIP is set to 0.0.0.0).\nwhen i comment replcation mongod service is working. when uncomment its not working??\nreplication:\nreplSetName: myreplica01", "username": "Rajitha_Hewabandula" }, { "code": "/etc/mongo.confreplication:\n replSetName: myreplica01\n0.0.0.0# bindIP: 127.0.0.1", "text": "that does not seem an issue from database server, but rather you missed some settings while creating other servers.from that link you gave, please make sure you followed every step from 1 through 4. as your edit on /etc/mongo.conf file requires an edit only on 2 lines, you could do that without copying. 
when all servers are identical, they should all work fine without a problem.also make sure that lines has correct indentation (2 spaces it seems):and instead of using 0.0.0.0, just comment out that line with a hash sign # bindIP: 127.0.0.1", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "mongod service is unable to start.This is insufficient for us to help you.You have to share the error message you when starting the service and you also need to share the errors you get in the logs.", "username": "steevej" }, { "code": "", "text": "hi,\nAny particular reason for mongod service is running but when type mongo, generate an error as below,\n\nimage802×134 2.67 KB\n", "username": "Rajitha_Hewabandula" }, { "code": "", "text": "Any particular reason for mongod service is running but when type mongo, generate an error as below,anytime you forgot to “run a server” to “listen at a port” on a specific IP, then the connection will be refused by that IP’s system, or you have the wrong connection string.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "i am working on oracle linux v8.5, not on windows.\nmongod service is running but cant get in to mongo shell?", "username": "Rajitha_Hewabandula" }, { "code": "", "text": "look, you are not helping us to help you. you are not giving us any configuration details nor any error logs we pointed out.any operating system will refuse you if you try to connect to a port that is not open for listening. and you are trying to connect to such a port. not to mention the IP you are trying to connect might be wrong.why it happens? clearly, you are doing it wrong in one of the steps. is your mongod really running on all servers? are you using the correct port numbers to connect? did you set all firewall rules so OS allows you to connect? are you using the correct IP of your servers to connect? are those IP addresses set to allow remote connections by the router they are on.so far, you haven’t tell any of these details. All I have is a link to a blog post you say you followed, and its content seems to work when followed correctly.more on that, “connection refused” is not somehing related to “mongod” processes.Now please, take a breathe first and kindly check all the steps you are supposed to do, make sure all are good, then show us error logs other than “connection refused” error.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "mongod.log (499.3 KB)\nhi , thanks for supporting. herewith I have attached the mongod.log file", "username": "Rajitha_Hewabandula" }, { "code": "", "text": "I am having a CloudFront 403 error on the file. It might be a temporary thing or something else. can you check if it uploaded correctly?", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "yes. theres an CloudFront error. 
Any other way that i can share this with you?Access Google Drive with a Google account (for personal use) or Google Workspace account (for business use).", "username": "Rajitha_Hewabandula" }, { "code": "", "text": "google drive is a good idea, but you need to change that document’s sharing options to public read so other forum users can see and try helping.\nI send an access request, so please check your e-mail and/or drive for a notification.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "access is granted for the doc", "username": "Rajitha_Hewabandula" }, { "code": "mongodb buildInfo.version:4.4.15\nsystem os.name:Oracle Linux Server release 8.5\nconfig net.bindIp:192.168.56.106\n/etc/hosts", "text": "before moving on, I have created a cluster of “vagrant” servers, with an oracle 8 image. Followed all steps in the blog, and other than “sudo” issues and few typos I had, all the steps are working perfectly fine.Now, you seem to have changed many things from that blog post.the first two should not affect, but changing IP may need many other changes outside the database server.servers do not seem to crash from the log. I cannot see (or failed to see) any clear reason other than the above two for not being able to connect to the server.and again, because I had no problem at the end with the steps of that blog post, I suggest you to backtrack your steps carefuly. If your servers are VMs in some cloud provider (and since you had problems starting them ever I assume you have no product data) you can delete them and start fresh. sometimes starting over is better than trying to fix it.PS: In case you know how to work with Vagrant, here is my settings to start 3 VMs.\nVagrantfile.txt (1.8 KB)", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "\"thank you doing it for me. I am redoing whole thing again. Can you share the mongod.conf file if you dont mind?\nSo that I can follow the correct syntax", "username": "Rajitha_Hewabandula" }, { "code": "", "text": "HI Durmaz,’\nI did everything again and moment i enter,\nreplication:\nreplSetName: myreplica01\nsave and restart the mongod service, it does not start?? ", "username": "Rajitha_Hewabandula" } ]
Technical support on replication
2022-07-11T18:27:29.565Z
Technical support on replication
5,705
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "production" ]
[ { "code": "RewrapManyDataKeyRewrapManyDataKey", "text": "This is a patch release that fixes a potential data corruption bug in RewrapManyDataKey when rotating encrypted data encryption keys backed by GCP or Azure key services.The following conditions will trigger this bug:The result of this bug is that the key material for all data encryption keys being rewrapped is replaced by new randomly generated material, destroying the original key material.To mitigate potential data corruption, upgrade to this version or higher before using RewrapManyDataKey to rotate Azure-backed or GCP-backed data encryption keys. A backup of the key vault collection should always be taken before key rotation.The list of JIRA tickets resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.17.1%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:There are no known backwards breaking changes in this release.", "username": "James_Kovacs" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
.NET Driver 2.17.1 Released
2022-07-30T01:16:54.317Z
.NET Driver 2.17.1 Released
1,877
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 5.0.10 is out and is ready for production deployment. This release contains only fixes since 5.0.9, and is a recommended upgrade for all 5.0 users.Fixed in this release:5.0 Release Notes | All Issues | All Downloads As always, please let us know of any issues.– The MongoDB Team", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 5.0.10 is Released
2022-07-29T22:56:47.560Z
MongoDB 5.0.10 is Released
2,412
null
[ "aggregation", "queries" ]
[ { "code": "$elemMatch$regex{\n ...others arrays,\n\t\"discounts\": [\n\t\t{\n\t\t\t\"_id\": \"628fd4cd7176540e9e1ad6c2\",\n\t\t\t\"name\": \"asd < % $\",\n\t\t\t\"code\": \"EGX2-VA2Q-H9LV\",\n\t\t\t\"createdAt\": \"2022-05-26T19:28:13.168Z\",\n\t\t\t\"updatedAt\": \"2022-05-26T19:28:13.168Z\"\n\t\t},\n\t\t{\n\t\t\t\"_id\": \"62e28bf56b91f508bbd8b703\",\n\t\t\t\"name\": \"test edit\",\n\t\t\t\"code\": \"51AD-QBQG-TRNJ\",\n\t\t\t\"createdAt\": \"2022-07-28T13:15:33.140Z\",\n\t\t\t\"updatedAt\": \"2022-07-28T13:15:33.140Z\"\n\t\t}\n\t],\n ...others fields\n}\ndb.Store.find(\n $or: [\n {\n discounts: {\n $elemMatch: { name: { $regex: searchTerm, $options: 'i' } }\n }\n },\n {\n discounts: {\n $elemMatch: { code: { $regex: searchTerm, $options: 'i' } }\n }\n }\n ])\n{\n discounts: {\n name: 'test edit'\n }\n}\n", "text": "I’m trying to implement a LIKE operator with $elemMatch and $regex to find any document that match the search term I’m passing but I can’t figure out how to do that.The collection I have:What I’m trying to do:The result of this always return all documents of discounts array. If I hardcoded the search term, like:I get the same result, all documents.What Am I doing wrong here? Thanks!", "username": "Rafael_Rodrigues" }, { "code": "\"discounts\": [\n\t\t{\n\t\t\t\"_id\": \"62e28bf56b91f508bbd8b703\",\n\t\t\t\"name\": \"test edit\",\n\t\t\t\"code\": \"51AD-QBQG-TRNJ\",\n\t\t\t\"createdAt\": \"2022-07-28T13:15:33.140Z\",\n\t\t\t\"updatedAt\": \"2022-07-28T13:15:33.140Z\"\n\t\t}\n\t]\n", "text": "The expected result for this will be, e.g. the search term is ‘test edit’:", "username": "Rafael_Rodrigues" }, { "code": "db.Store.find( \n {\"discounts.name\": \"test edit\"}, \n {_id: 0, discounts: {$elemMatch: {name: \"test edit\"}}}\n);\ncode", "text": "I found this solution that works fine:but the goal here is to implement a $OR operator too, must find a way to search by the code field as well.", "username": "Rafael_Rodrigues" }, { "code": "$regex$regex{\n \"discounts\": {\n \"$filter\": {\n \"input\": \"$discounts\",\n \"cond\": {\n \"$or\": [\n {\n \"$regexMatch\": {\n \"input\": \"$$this.name\",\n \"regex\": \"test edit\",\n \"options\": \"i\"\n }\n },\n {\n \"$regexMatch\": {\n \"input\": \"$$this.code\",\n \"regex\": \"test edit\",\n \"options\": \"i\"\n }\n }\n ]\n }\n }\n }\n}\n", "text": "Hello @Rafael_Rodrigues, Welcome to MongoDB Community Forum,The result of this always return all documents of discounts array.The query part does not filter the nested array elements in the result, It requires to do filter in projection.You can use $filter operator to filter elements from an array and $regexMatch operator to check expression string similar to $regex operator, because $regex operator is only supported in the query part,\nProjection:Playground", "username": "turivishal" }, { "code": "await db.Store.find(\n{\n \"discounts\": {\n \"$filter\": {\n \"input\": \"$discounts\",\n \"cond\": {\n \"$or\": [\n {\n \"$regexMatch\": {\n \"input\": \"$$this.name\",\n \"regex\": \"test edit\",\n \"options\": \"i\"\n }\n },\n {\n \"$regexMatch\": {\n \"input\": \"$$this.code\",\n \"regex\": \"test edit\",\n \"options\": \"i\"\n }\n }\n ]\n }\n }\n }\n}\n)\n{\"error\":{\"message\":\"An error occurred while retrieving the discounts and their information.\",\"stack\":\"Error: Can't use $filter with Array\"^5.9.2\"", "text": "Hello @turivishal, thanks for helping me!I must put your filter inside the find() and that is? 
like this:Because if so, I got this error: {\"error\":{\"message\":\"An error occurred while retrieving the discounts and their information.\",\"stack\":\"Error: Can't use $filter with ArrayI’m using the version \"^5.9.2\" of MongoDB.", "username": "Rafael_Rodrigues" }, { "code": ".find()db.Store.find({\n $or: [\n { \"discounts.name\": { $regex: searchTerm, $options: 'i' } },\n { \"discounts.code\": { $regex: searchTerm, $options: 'i' } }\n ]\n},\n{\n \"discounts\": {\n \"$filter\": {\n \"input\": \"$discounts\",\n \"cond\": {\n \"$or\": [\n {\n \"$regexMatch\": {\n \"input\": \"$$this.name\",\n \"regex\": searchTerm,\n \"options\": \"i\"\n }\n },\n {\n \"$regexMatch\": {\n \"input\": \"$$this.code\",\n \"regex\": searchTerm,\n \"options\": \"i\"\n }\n }\n ]\n }\n }\n }\n})\n", "text": "The query part does not filter the nested array elements in the result, It requires to do filter in projection.I already said, “The query part does not filter the nested array elements in the result, It requires to do filter in projection.”You need to add that in the projection part of .find() method,", "username": "turivishal" }, { "code": "", "text": "@turivishal I undesrtand now, thank you!", "username": "Rafael_Rodrigues" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
LIKE operator into arrays with $elemMatch not working properly
2022-07-29T12:46:59.068Z
LIKE operator into arrays with $elemMatch not working properly
2,973
null
[ "queries", "time-series" ]
[ { "code": "db.sensorDataTS.drop();\n\ndb.createCollection(\"sensorDataTS\", {\n // 2 cases: 1) timeseries is enabled, 2) timeseries is commented out\n timeseries: {\n timeField: \"timestamp\",\n metaField: \"metadata\",\n granularity: \"hours\"\n }\n});\n\ndb.sensorDataTS.insertOne({\n \"metadata\": {\n \"sensorId\": 5578,\n \"location\": {\n type: \"Point\",\n coordinates: [-77.40711, 39.03335]\n }\n },\n \"timestamp\": ISODate(\"2022-01-15T00:00:00.000Z\"),\n \"currentConditions\": {\n \"windDirecton\": 127.0,\n \"tempF\": 71.0,\n \"windSpeed\": 2.0,\n \"cloudCover\": null,\n \"precip\": 0.1,\n \"humidity\": 94.0,\n }\n})\n// 2dsphere index is created correctly in both ts and non-ts collections\ndb.sensorDataTS.createIndex({ \"metadata.location\": \"2dsphere\" })\n\n// this query works when timeseries commented out above\n// else I get \"MongoServerError: $geoNear, $near, and $nearSphere are not allowed in this context\"\ndb.sensorDataTS.find(\n {\n \"metadata.location\": {\n \"$near\": {\n \"$geometry\": {\n \"type\": \"Point\", \"coordinates\": [\n -77.4,\n 39.03\n ]\n }, \"$minDistance\": 0, \"$maxDistance\": 10000\n }\n }\n })\n", "text": "Hi, I am trying to add a 2dsphere index to a timeseries collection as documented here\nRunning the find/$near query as below with timeseries enabled generates “MongoServerError: $geoNear, $near, and $nearSphere are not allowed in this context”\nIf I comment out the timeseries option, the find/$near works fine.", "username": "Tsondru_Tarchin" }, { "code": "db.sensorDataTS.aggregate([\n {\n $geoNear: {\n near: { type: \"Point\", coordinates: [-77.40, 39.03] },\n key: \"metadata.location\",\n distanceField: \"dist.calculated\",\n maxDistance: 10000,\n includeLocs: \"dist.location\",\n spherical: true\n }\n }\n]);\n", "text": "This works:", "username": "Tsondru_Tarchin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb 6 timeseries w/ 2dsphere index not working
2022-07-26T22:01:07.371Z
Mongodb 6 timeseries w/ 2dsphere index not working
1,831
null
[ "aggregation", "atlas-search" ]
[ { "code": " const searchStep = {\n $search: {\n index: 'autocomplete', // optional, defaults to \"default\"\n compound: {\n minimumShouldMatch: 0, // if one clause fails, still get documents back\n should: [\n {\n autocomplete: {\n query: args.searchText,\n path: 'email',\n },\n },\n {\n autocomplete: {\n query: args.searchText,\n path: 'firstName',\n },\n },\n {\n autocomplete: {\n query: args.searchText,\n path: 'lastName',\n },\n },\n ],\n },\n },\n }\n {\n $match: {\n clinicId: args.clinicId,\n },\n },\n", "text": "I am running into the same problem with this alert: Query Targeting: Scanned Objects / Returned has gone above 1000 - #2 by Pavel_DuchovnyMy search looks like this:Is there anything I can do?After the $search I add a $match like", "username": "Anthony_C" }, { "code": "", "text": "Could you consider using filter inside the compound?", "username": "Elle_Shwer" }, { "code": "", "text": "Strangely when I add either a filter or a $match, it ignores the $search, it’s not filtering the result of the $earch step it’s just running the $mattch…So with a filter or match, I get 2300 results. Without it I get 394.", "username": "Anthony_C" }, { "code": " const searchStep = {\n $search: {\n index: 'autocomplete', // optional, defaults to \"default\"\n compound: {\n minimumShouldMatch: 0, // if one clause fails, still get documents back\n filter: [{ // replaces the need to have the $match stage\n text: {\n query: args.clinicId,\n path: 'clinicId'\n }\n }],\n should: [\n {\n autocomplete: {\n query: args.searchText,\n path: 'email',\n },\n },\n {\n autocomplete: {\n query: args.searchText,\n path: 'firstName',\n },\n },\n {\n autocomplete: {\n query: args.searchText,\n path: 'lastName',\n },\n },\n ],\n },\n },\n }\n", "text": "Similarly to Elle’s feedback about filter inside compound I made the following update to your query. It should eliminate this query targeting issue.", "username": "Marcus" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas search and "Scanned Objects / Returned has gone above 1000" alert
2022-07-28T19:13:51.389Z
Atlas search and “Scanned Objects / Returned has gone above 1000” alert
2,723
null
[ "queries" ]
[ { "code": "", "text": "Good evening. I would like to know the way how to connect mongodb connector and apache kafka without using confluent in my local. I have referred some google platform could be seen only using confluent.\nI have referred (https://www.mongodb.com/docs/kafka-connector/current/introduction/connect/) this link also I don’t get proper idea to implement. Could you please guide me? If you have any implementing document also please share it to here?Looking forward for favourable reply,\nVijayalakshmi M", "username": "Sherlin_Susanna" }, { "code": "", "text": "What do you mean bywithout using confluent in my local.Are you referring to using the Confluent Platform ?Here is a self-contained mongodb, kafka, kafka connect deployment in Docker https://www.mongodb.com/docs/kafka-connector/current/tutorials/tutorial-setup/#std-label-kafka-tutorials-docker-setup", "username": "Robert_Walters" }, { "code": "", "text": "Thank you sir for your response. I have connected mongodb sink connector with post api, I have configured like below,\ncurl -X POST -H “Content-Type: application/json” -d ‘{“name”:“test-sherlin”,\n“config”:{“topics”:“sunflower”,\n“connector.class”:“com.mongodb.kafka.connect.MongoSinkConnector”,\n“tasks.max”:“1”,\n“connection.uri”:“mongodb://localhost:27017”,\n“database”:“flower”,\n“collection”:“sunflower-collection”,\n“key.converter”:“org.apache.kafka.connect.storage.StringConverter”,\n“value.converter”:“org.apache.kafka.connect.storage.StringConverter”,\n“key.converter.schemas.enable”:“false”,\n“value.converter.schemas.enable”:“false”}}’ localhost:8083/connectorsI would like to know the way to handle PUT and DELETE configuration. how can I proceed?", "username": "Sherlin_Susanna" }, { "code": "", "text": "Are you asking how to delete the connector via a curl statement ?", "username": "Robert_Walters" }, { "code": "", "text": "Yes, I am asking about both update and delete the data which is already saved in mongodb via connector curl.", "username": "Sherlin_Susanna" }, { "code": "", "text": "It sounds like you are looking to replicate mongodb data ? What is your use case ?", "username": "Robert_Walters" }, { "code": "", "text": "Actually, I have done based on my requirements. Thanks for your response", "username": "Sherlin_Susanna" } ]
How to connect mongodb connector and apache kafka?
2022-07-26T12:01:41.512Z
How to connect mongodb connector and apache kafka?
2,502
https://www.mongodb.com/…fe1c5cd0e224.png
[ "queries", "dot-net" ]
[ { "code": "", "text": "Hey @James_Kovacs,I have a similar case to C# MongoDB filter a value from the following structure.I have this structure:\nHow can I filter entries based on the FieldValue.Alias and FieldValue.Value?So, the idea is to return all the entries that have “age” = 32.I tried this: “{ $and: [{ “FieldValues.Alias”: “age”, “FieldValues.Value”: 32}]}” but it returns also the entries that have “age” = whatever, and another field (for example “no”) = 32.", "username": "Arbnor_Zeqiri" }, { "code": "DB> db.data.find()\n[\n {\n _id: ObjectId(\"62e20e051bb9515b8fbd4fed\"),\n arrayField: [\n { value: 32, alias: 'age', type: 7 },\n { value: 2, alias: 'length', type: 8 }\n ]\n }\n]\nDB> y\n{\n '$addFields': {\n filteredEntries: {\n '$filter': {\n input: '$arrayField',\n as: 'element',\n cond: {\n '$and': [\n { '$eq': [ '$$element.value', 32 ] },\n { '$eq': [ '$$element.alias', 'age' ] }\n ]\n }\n }\n }\n }\n}\n\n\nDB> db.data.aggregate(y)\n/// Output:\n[\n {\n _id: ObjectId(\"62e20e051bb9515b8fbd4fed\"),\n arrayField: [\n { value: 32, alias: 'age', type: 7 },\n { value: 2, alias: 'length', type: 8 }\n ],\n filteredEntries: [ { value: 32, alias: 'age', type: 7 } ] /// <--- additional filtered array field\n }\n]\n$addFields$filter", "text": "Hi @Arbnor_Zeqiri - Welcome to the community In future, I would recommend pasting sample documents as text rather than images as it makes reproducing, testing or troubleshooting for other community users easier Check out Formatting code and log snippets in posts for more details on this.I’m not sure what you’re expecting as the output, but I’ve created a example aggregation pipeline which may help you identify elements as you have specified. Please see the example aggregation operation and output below:For your reference, i’ve used the following aggregation stages / operators:Additionally, you can perhaps play around with MongoDB Compass and check out the export aggregation pipeline to language feature which may help with regards to the C# mentioned in your post link.Please test this thoroughly if you believe it may help and ensure it works for all your use case(s) and meets all your requirements.If you require further assistance, please advise the following information:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I filter entries based on the FieldValue.Alias and FieldValue.Value?
2022-07-02T14:18:16.786Z
How can I filter entries based on the FieldValue.Alias and FieldValue.Value?
3,802
null
[ "aggregation" ]
[ { "code": "", "text": "I have a scenario where I will be displaying widgets in my app which exist as a 1 to 1 user enrolment. These widgets are linked 1 to 1 with learning material. If I reference the material with a lookup A sharding becomes impossible and B the searches on these widgets will be slow (an alternative would be populate but that seems less efficient with larger sets).Alternatively I can embed the learning cover image / description and control details into the widget documents which will speed up the search but slow down the generation of the doucment and result in lots of redundancy and the need to keep this nested data from being stale. This also seems poor. Are these really the only two routes to go or am I missing something?Thanks", "username": "Ben_Gibbons" }, { "code": "", "text": "Hi @Ben_GibbonsAre these really the only two routes to go or am I missing something?It’s hard to say without some examples documents, and how you’re expecting them to grow over time.There are advantages & disadvantages for both approaches, which are discussed in these links:Alternatively I can embed the learning cover image / description and control details into the widget documents which will speed up the search but slow down the generation of the doucment and result in lots of redundancyPerhaps a question to answer is: which operation is more common compared to the other? If you’re expecting search to be >90% of the operation, then there’s definite advantage in embedding, with the tradeoff of more housekeeping. About housekeeping: are you expecting to process a lot of documents (i.e. millions) to keep them up to date? If it’s not too many, it might be ok.Thus I would say that there’s not really one correct answer here. You might find that embedding works well for your use case, but I can have an almost identical schema design, but find that my day-to-day workload doesn’t support embedding so well. I believe no one can answer this for you definitively, since it largely depends on your specific situation and your workload projection.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "The issue is the data will be expecting heavy search usage AND semi regular editing. Additionally the embedding causes a large upload latency to the point where I am having to use service workers to handle a part of the endpoint. This seems like such a weird constraint why not just have decent lookup functionality.", "username": "Ben_Gibbons" } ]
Unsure of scenario's efficacy for either $lookup or embedding
2022-07-25T09:20:17.682Z
Unsure of scenario's efficacy for either $lookup or embedding
1,422
null
[ "atlas-device-sync", "react-native", "android", "flexible-sync" ]
[ { "code": "", "text": "Hello friends,\nI am experiencing a problem. After setting up all the objects, subscriptions to the sync I can see that in the web realm application all the schemas auto fill as it suppose to. But the problem comes when I try to create a new object with a click of a button the application just close down without an error (I am using android emulator with expo and react native and realm). But after some time a 10-30 min it adds the new objects and retrieves from them without closing the application. What causes this issues and is there a way to overcome it ?thank you for your help,\nkind regards Lukas", "username": "Lukas_Vainikevicius" }, { "code": "Realm.App.Sync.setLogLevel(app, 'trace')", "text": "Hi @Lukas_Vainikevicius,It’s difficult to point out what may be wrong without looking at your code, can you please post a snippet that clarifies what you’re trying to do? Also, having a look at the logs that the app outputs would help, you can use Realm.App.Sync.setLogLevel(app, 'trace') to retrieve more information about what happens in the background.after some time a 10-30 min it adds the new objects and retrieves from them without closing the application.That’s the way it’s supposed to work: apparently the objects were added to your local DB, and Sync picked them up when it had a chance, sending them to the backend.", "username": "Paolo_Manna" } ]
Question about synchronization
2022-07-29T05:47:12.383Z
Question about synchronization
2,087
null
[ "dot-net" ]
[ { "code": "public bool Authenticate(string username, string password)\n {\n var cmd = new BsonDocument(\"authenticate\", new BsonDocument\n {\n {\"username\",username },\n {\"password\",password },\n {\"mechanism\", \"SCRAM-SHA-1\"}\n });\n\n var queryResult = _context.Database.RunCommand<BsonDocument>(cmd);\n\n return true;\n }\n", "text": "Exception occurs when executing this method “Command authenticate failed: BSON field ‘authenticate.mechanism’ is missing but a required field”How to fix?", "username": "Valeriy_Filippenkov" }, { "code": "", "text": "This topic was automatically closed after 180 days. New replies are no longer allowed.", "username": "system" } ]
Authenticate RunCommand. Command authenticate failed: BSON field 'authenticate.mechanism' is missing but a required field
2022-07-29T05:04:31.749Z
Authenticate RunCommand. Command authenticate failed: BSON field ‘authenticate.mechanism’ is missing but a required field
1,250
null
[ "crud" ]
[ { "code": "public class ProductCarted\n {\n [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public string? Id { get; set; }\n\n [BsonElement(\"id\")]\n public int SecondID { get; set; }\n [BsonElement(\"title\")]\n public string Title { get; set; } = null!;\n public string Description { get; set; } = null!;\n public int Price { get; set; }\n public double DiscountPercentage { get; set; }\n public double Rating { get; set; }\n public string Brand { get; set; } = null!;\n public string Category { get; set; } = null!;\n public string Thumbnail { get; set; } = null!;\n public string[] Images { get; set; } = null!;\n }\npublic class Cart\n {\n [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public string? SessionId { get; set; }\n public List<ProductCarted> CartItems { get; set; } = new List<ProductCarted>();\n }\npublic async Task<UpdateResult> RemoveOneProduct(string sessionId, ProductCarted p)\n {\n var filter = Builders<Cart>.Filter.Eq(c => c.SessionId, sessionId);\n var update = Builders<Cart>.Update.PullFilter(c => c.CartItems, ci => ci.Id == p.Id);\n var res = await _cartsCollection.UpdateOneAsync(filter, update);\n return res;\n }\n", "text": "The CartItems field has products of the same type I want to remove just one entry of the product but my code removes all the entries as the filter is satisfied by all the products.", "username": "Syed_Uzair" }, { "code": "", "text": "Hello and welcome to the community, @Syed_UzairCould you kindly assist me in understanding the question, if you are attempting to remove an item from the cart and the above code snippet causes you to remove all items from the cart?If so, could you perhaps supply a sample document with the cart’s elements and a self-contained code that might aid in recreating the problem?Could you also give the MongoDB and Driver versions for the above issue?Thanks\nAasawari", "username": "Aasawari" } ]
How to remove one element from an array when there can be multiple entries of the same element?
2022-07-14T17:12:50.758Z
How to remove one element from an array when there can be multiple entries of the same element?
1,748
null
[ "atlas-device-sync", "react-native", "flexible-sync" ]
[ { "code": "", "text": "Hello friends. As I am currently working on a app and using realm/react context within react native. Everything is set it up user information is synced with atlas user documents. but some how I can’t write any data I am receiving this error in particular “Error: Cannot write to class Task when no flexible sync subscription has been created.”. There are no limitations set in the “Device sync” section on “realm.mongodb” and I am receiving the pre-built schema from my locally defined objects. But some how it is saying that the subscription has not been created. By the way I am using flex sync.Thank you for your time.", "username": "Lukas_Vainikevicius" }, { "code": "", "text": "Hi, I think that the error is pretty descriptive but we can work on that perhaps. The error is telling you that you are trying to make modifications but you have no open subscriptions. Because this is “sync” we dont let you write things that you are not currently subscribed to, otherwise if you were to write that object we wouldnt know if you want updates made to that object. I would suggest looking at the following links:https://www.mongodb.com/docs/realm/sdk/react-native/examples/flexible-sync/#bootstrap-the-realm-with-initial-subscriptions\nhttps://www.mongodb.com/docs/realm/sdk/react-native/examples/flexible-sync/#add-a-query-to-the-list-of-subscriptions", "username": "Tyler_Kaye" }, { "code": "", "text": "Thank you that helped a lot", "username": "Lukas_Vainikevicius" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error: Cannot write to class Task when no flexible sync subscription has been created
2022-07-27T19:47:52.334Z
Error: Cannot write to class Task when no flexible sync subscription has been created
4,813
null
[]
[ { "code": "", "text": "I have a list of User IDs similar to\nlet userIds = [ID1, ID2, ID3, ID4…]Some of the fields within the users documents arefirstName and lastName. I want to first get the ids in the userIds list and then perform a partial text search/autocomplete search on the remaining documents. How do I go about doing that?", "username": "drdre" }, { "code": "filteruserIDfirstNamelastNameshould", "text": "The compound operator with the first clause as a filter clause on the userID field. The firstName and lastName can be autocomplete queries in a should clause. Keep in mind, we can filter best", "username": "Marcus" }, { "code": "", "text": "e first clauseI have the same issue however the search is not giving result.\nQuestion: Do I have to create index for the filter field i.e. userid?\nimage548×555 13.2 KB\n\n{\n“mappings”: {\n“dynamic”: false,\n“fields”: {\n“_id”: {\n“type”: “string”\n},\n“city”: {\n“type”: “autocomplete”\n},\n“contactfirstname”: {\n“type”: “autocomplete”\n},\n“contactlastname”: {\n“type”: “autocomplete”\n},\n“name”: {\n“type”: “autocomplete”\n}\n}\n}\n}", "username": "Manoranjan_Bhol" } ]
Autocomplete with filtered data
2021-06-29T18:49:35.671Z
Autocomplete with filtered data
2,756
null
[ "aggregation", "data-modeling", "python", "performance" ]
[ { "code": "Query_1:\n\ncollection.find_one_and_update(\n{\n \"$or\": [\n {\n \"field1\": value,\n \"field2\": {\"gte\": constant},\n \"field3\": constant \n } for value in acceptable_values_for_field1\n ]\n},\n{some set operations}\n)\nQuery_2:\n\ncollection.find_one_and_update(\n{\n \"field1\": {\"$in\": acceptable_values_for_field1},\n \"field2\": {\"gte\": constant},\n \"field3\": constant \n},\n{some set operations}\n)\n", "text": "Hi I am using python with mongodb and came across this question when developing.\nHere are two querries and I want to know which one will do better.as againstNOTE", "username": "Bhavesh_Navandar" }, { "code": "$or$in$or$in{ field3: 1, field1: 1, field2: 1}$in$in$or$or$or$in$in$or", "text": "Hi Bhavesh,With a small amount of $or or $in items, you may not notice a difference in performance between the two queriesWith a large mount of items in $or or $in (say 10000), and if an index with keys { field3: 1, field1: 1, field2: 1} exists for the $in query, then the $in query will likely outperform $or.$or will use as many IXSCAN plans (you can check by using .explain()) as there are $or conditions; while the $in query will only have 1 IXSCAN plan.I know the docs say that “$in” will be better here but compound index scan vs individual index scans might tip the scales in favor of “or”Compound index scan will likely outperform multiple individual index scans.currently the “acceptable_values_for_field1” have only one element in it, and will probably be the case for a long long time. Do you think “$in” is still better?You’ll likely not notice a difference here with only one element in “acceptable_values_for_field1”. But $in should be considered over $or when possible.Best regards,Raymond", "username": "Raymond_Hu1" }, { "code": "acceptable_values_for_field1$or", "text": "I am doing this in production. As I scale up for more values in acceptable_values_for_field1 will short circuits be better used for for getting faster return on the search? Remember I am just looking for one document back as soon as possible for processing it. The way I have written the $or query could potentially use that. 
I get that with just one value it wont make much difference but the number of times I am doing this is very high in production so the minor bumps will also addup a lot for me.", "username": "Bhavesh_Navandar" }, { "code": "mongoshstep$in$or(function() {\n \"use strict\";\n \n let n = 0;\n let step = 1000;\n let iters = 20;\n \n function setup() {\n db.c.drop();\n db.c.insertOne({_id: 0, a: 1, b: 2})\n db.c.createIndex({a: 1, b: 1, _id: 1})\n }\n \n function remove() {\n let terms = [];\n for (var i = 0; i < n; i++) {\n terms.push({_id: i})\n }\n db.c.findOne({ _id:{$in: terms}, a: 1, b: 2})\n }\n \n for (let i = 0; i < iters; i++) {\n n += step;\n setup();\n let startTime = new Date();\n remove();\n let endTime = new Date();\n let elapsedTime = endTime - startTime;\n print(n + \"\\t\" + elapsedTime + \"ms\");\n }\n}());\n\n(function() {\n \"use strict\";\n \n let n = 0;\n let step = 1000;\n let iters = 20;\n \n function setup() {\n db.c.drop();\n db.c.insertOne({_id: 0})\n }\n \n function remove() {\n let terms = [];\n for (var i = 0; i < n; i++) {\n terms.push({_id: i})\n }\n db.c.findOne({$or: terms})\n }\n \n for (let i = 0; i < iters; i++) {\n n += step;\n setup();\n let startTime = new Date();\n remove();\n let endTime = new Date();\n let elapsedTime = endTime - startTime;\n print(n + \"\\t\" + elapsedTime + \"ms\");\n }\n}());\n1000 52ms\n2000 165ms\n3000 305ms\n4000 473ms\n5000 685ms\n6000 953ms\n7000 1299ms\n8000 1680ms\n9000 2076ms\n10000 2577ms\n11000 3166ms\n12000 3708ms\n13000 4366ms\n14000 5081ms\n15000 6902ms\n16000 7832ms\n17000 8033ms\n18000 9338ms\n19000 10082ms\n20000 11684ms\n1000 45ms\n2000 60ms\n3000 70ms\n4000 62ms\n5000 90ms\n6000 85ms\n7000 88ms\n8000 115ms\n9000 144ms\n10000 133ms\n11000 175ms\n12000 171ms\n13000 169ms\n14000 180ms\n15000 216ms\n16000 198ms\n17000 216ms\n18000 212ms\n19000 257ms\n20000 236ms\n$or$in$or", "text": "Hi Bhavesh,You can use the following code in mongosh to compare the performance difference in a test environment.\nFeel free to tweak the variable step for the number of items in $in or $orResult:$or$inConclusion:", "username": "Raymond_Hu1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Performance of $or vs $in
2022-07-18T14:58:41.490Z
Performance of $or vs $in
5,393
null
[ "sharding" ]
[ { "code": "", "text": "This may be a dumb question but this is something I have been asking myself a lot. I understand that choosing a bad shard key would result in an uneven distribution on the sharded cluster, and if you get to a point where you get stuck and cannot add more data, you would need to increase the chunk size and manually split the jumbo chunks, however this is only a provisional solution that would fail in the long term since the shard key is bad anyways. Is there an actual way to give this a permanent fix within the same collection, or you would need to make a copy of the whole collection in another cluster and make sure you shard correctly this time?", "username": "mikemachr" }, { "code": "", "text": "I realize it’s been a while, but for those reading this question today, according to the documentation you can now change the shard key as long as you’re on at least MongoDB 5.0.", "username": "Tom_Boutell" }, { "code": "", "text": "Hi @Tom_Boutell,Thanks for adding the extra information! Starting in MongoDB 5.0, there is the option to Reshard a Collection if a suboptimal shard key was chosen.MongoDB 4.4 also added the option to Refine a shard key by adding new field(s) as suffixes to the existing key.If you happen to be using an older version than the above, your only approach would have been to copy into a new collection with the desired shard key. However, since the distribution of shard key values would be known for an existing collection you could speed up the process by pre-splitting chunks in an empty sharded collection before copying the data from an existing collection.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How exactly would you fix a sharded cluster with bad distribution in a production environment
2021-06-20T23:07:54.715Z
How exactly would you fix a sharded cluster with bad distribution in a production environment
2,901
null
[ "queries", "replication", "compass", "sharding" ]
[ { "code": "", "text": "Hello Everyone,I have a concern or doubt, I have a replica set configuration (PSSH) in my environment as below.Production Replica SetNow we want to convert our replica-set to sharded cluster, I am proposing the below structure for sharded cluster.Production Sharded Cluster :Question 1 : We have a user named aditya.sharma with some secret password. This user have tool like mongo shell or robo3t or compass to connect to the database. How this aditya.sharma user will connect to mongodb sharded cluster via mongos to run below query on Secondary-Hidden node only?use test\ndb.testColl.find({ name : “Aditya Kumar Sharma” })Note : Sharding is already enabled on test database and the testColl collection is also sharded using _id field as shard key and the data is also distributed on both the shards evenly.Please help me to understand, if anyone have any Idea.Thank you so much in advance.Regards,\nAditya Sharma", "username": "Aditya_Sharma3" }, { "code": "mongosmongos", "text": "In the hidden member documentation there is the following lines:Clients will not distribute reads with the appropriate read preference to hidden members. As a result, these members receive no traffic other than basic replication. Use hidden members for dedicated tasks such as reporting and backups.This means that you would need to connect to the hidden member directly to read from it as I understand things.Shortly after that is the following line:In a sharded cluster, mongos do not interact with hidden members.Based on this, I don’t think that you can connect to hidden members via a connection to mongos.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query secondaries via mongos in self hosted sharded cluster configuration on AWS EC2?
2022-07-28T16:51:33.740Z
Query secondaries via mongos in self hosted sharded cluster configuration on AWS EC2?
1,671
null
[ "python", "atlas-cluster", "serverless", "spark-connector" ]
[ { "code": "pyspark.sql.utils.IllegalArgumentException: requirement failed: Invalid uri: 'mongodb+srv://vani:<password>@versa-serverless.w9yss.mongodb.net/versa?retryWrites=true&w=majority'\nfrom pyspark.sql import SparkSession\n\nspark = SparkSession.builder.appName(\"MongoDB operations\").getOrCreate()\nprint(\" spark \", spark)\n\n\n# cluster0 - is the free version, and i'm able to connect to this\n# mongoConnUri = \"mongodb+srv://vani:[email protected]/?retryWrites=true&w=majority\"\n\nmongoConnUri = \"mongodb+srv://vani:[email protected]/?retryWrites=true&w=majority\"\n\nmongoDB = \"versa\"\ncollection = \"name_map_unique_ip\"\n\n\ndf = spark.read\\\n .format(\"mongo\") \\\n .option(\"uri\", mongoConnUri) \\\n .option(\"database\", mongoDB) \\\n .option(\"collection\", collection) \\\n .load()\n\n22/07/26 12:25:36 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir.\n22/07/26 12:25:36 INFO SharedState: Warehouse path is 'file:/Users/karanalang/PycharmProjects/Versa-composer-mongo/composer_dags/spark-warehouse'.\n spark <pyspark.sql.session.SparkSession object at 0x7fa1d8b9d5e0>\nTraceback (most recent call last):\n File \"/Users/karanalang/PycharmProjects/Kafka/python_mongo/StructuredStream_readFromMongoServerless.py\", line 30, in <module>\n df = spark.read\\\n File \"/Users/karanalang/Documents/Technology/spark-3.2.0-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/sql/readwriter.py\", line 164, in load\n File \"/Users/karanalang/Documents/Technology/spark-3.2.0-bin-hadoop3.2/python/lib/py4j-0.10.9.2-src.zip/py4j/java_gateway.py\", line 1309, in __call__\n File \"/Users/karanalang/Documents/Technology/spark-3.2.0-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/sql/utils.py\", line 117, in deco\npyspark.sql.utils.IllegalArgumentException: requirement failed: Invalid uri: 'mongodb+srv://vani:[email protected]/?retryWrites=true&w=majority'\n22/07/26 12:25:36 INFO SparkContext: Invoking stop() from shutdown hook\n22/07/26 12:25:36 INFO SparkUI: Stopped Spark web UI at http://10.42.28.205:4040\n\nfrom pymongo import MongoClient\n\nclient = MongoClient(\"mongodb+srv://vani:[email protected]/vani?retryWrites=true&w=majority\")\nprint(client)\n\nall_dbs = client.list_database_names()\nprint(f\"all_dbs : {all_dbs}\")\nspark-submit --packages org.mongodb.spark:mongo-spark-connector:10.0.2 ~/PycharmProjects/Kafka/python_mongo/StructuredStream_readFromMongoServerless.py\n", "text": "I’ve been using Apache Spark(pyspark) to read from MongoDB Atlas, I’ve a shared(free) cluster - which has a limit of 512 MB storage I’m trying to migrate to serverless, but somehow unable to connect to the serverless instance - errorPls note : I’m able to connect to the instance using pymongo, but not using pyspark.Here is the pyspark code (Not Working):Error :pymongo code (am able to connect using the same uri):Spark-submit command :any ideas how to debug/fix this ?tia!", "username": "Karan_Alang1" }, { "code": "from pyspark.sql import SparkSession\n\nspark = SparkSession.\\\nbuilder.\\\nappName(\"pyspark-notebook2\").\\\nconfig(\"spark.executor.memory\", \"1g\").\\\nconfig(\"spark.mongodb.read.connection.uri\",\"mongodb+srv://sparkuser:[email protected]/?retryWrites=true&w=majority\").\\\nconfig(\"spark.mongodb.write.connection.uri\",\"mongodb+srv://sparkuser:[email protected]/?retryWrites=true&w=majority\").\\\nconfig(\"spark.jars.packages\", \"org.mongodb.spark:mongo-spark-connector:10.0.3\").\\\ngetOrCreate()\n\ndf = spark.read.format(\"mongodb\").option('database', 
'MyDB').option('collection', 'MyCollection').load()\n\ndf.show()\n", "text": "This is working code with an Atlas Serverless instance. I am using version 10.0 of the Spark Connector. Note that the conf parameter names are different than 3.x of the connector as well.You can use both versions of the connector side by side, the 3.x version is “mongo” and the 10.x uses “mongodb”", "username": "Robert_Walters" } ]
Pyspark spark-submit unable to read from Mongo Atlas serverless (can read from free version)
2022-07-26T19:35:54.483Z
Pyspark spark-submit unable to read from Mongo Atlas serverless (can read from free version)
3,517
null
[ "node-js", "connecting" ]
[ { "code": "", "text": "Hi, I have check the official document and source code of MongoDB Node JS driver. I observed that only default values of “reconnectTries=30” and \"reconnectInterval=1000 \"are given, but there is no minimum and maximum values for “reconnectTries” and “reconnectInterval” have provided.Below are the reference link to official document and source code which I have referred:\n1.Connection Settings\n2. node-mongodb-native/pool.js at v3.6.12 · mongodb/node-mongodb-native · GitHub", "username": "GAURAV_PANDEY" }, { "code": "serverSelectionTimeoutMSconnectTimeoutMS", "text": "Hi Gaurav,Wanted to let you know that the docs in question are older docs. The newest docs for Mongo’s connection strings are here: https://www.mongodb.com/docs/manual/reference/connection-string/ Newer versions of the mongo drivers no longer need to configure their reconnect options. Instead, you specify per-command what your tolerance is to wait for a resolution. The initial connection is now captured under serverSelectionTimeoutMS for replica sets and connectTimeoutMS.From a related JIRA ticketWe are radically dialing back what we consider to be an incomplete story for retryability in the driver, focusing first on providing first-class support for the specified retryable writes, and soon to be specified retryable reads. With the new SDAM layer there no longer is a required concept of “connecting”, rather you just create a client and begin to run operations, specifying how long you are willing to wait for a timeout and expecting the driver to automatically retry operations within that window.", "username": "Jakob_Heuser" }, { "code": "", "text": "This topic was automatically closed after 180 days. New replies are no longer allowed.", "username": "system" } ]
What is the Minimum and Maximum values of "reconnectTries" and "reconnectInterval" of MongoDB Node JS driver?
2022-04-04T10:05:23.240Z
What is the Minimum and Maximum values of “reconnectTries” and “reconnectInterval” of MongoDB Node JS driver?
2,912
null
[ "kafka-connector" ]
[ { "code": "", "text": "Write errors: [BulkWriteError{index=0, code=61, message=‘Expected replacement document to include all shard key fields, but the following were omitted: { missingShardKeyFields: [ “region”, “clientProfileId”, “fileUuid” ] }’, details={}}]. \\n\\tat", "username": "karthik_amavasai" }, { "code": "", "text": "which version of MongoDB are you using?", "username": "Robert_Walters" } ]
Sink MongoKafkaConnector issue when trying to insert into MongoDB: BulkWriteError
2022-07-28T08:09:33.964Z
Sink MongoKafkaConnector issue when trying to insert into MongoDB: BulkWriteError
1,792
null
[ "queries", "dot-net" ]
[ { "code": "var collection = _client.GetDatabase(_settings.MongoDatabaseName).GetCollection<BsonDocument>(mongoCollection);\nvar documents = collection.Find(doc => true).Limit(batchSize).ToList();\n", "text": "// I am able to pass the batch size but not sure how to pass the custom dates . this line should be work with multiple collections", "username": "daya_vaddempudi" }, { "code": "", "text": "Could you please provide simple input data, and the corresponding desired output?", "username": "sassbalint" } ]
How to read the data dynamically from mongoDb by passing custom dates and batch size
2022-07-19T05:40:56.784Z
How to read the data dynamically from mongoDb by passing custom dates and batch size
1,533
https://www.mongodb.com/…8_2_1024x573.png
[ "aggregation", "queries", "schema-validation", "views" ]
[ { "code": "{\n \"name\": \"find\",\n \"arguments\": [\n {\n \"database\": \"sias\",\n \"collection\": \"siteInfo\",\n \"query\": {}\n }\n ],\n \"service\": \"mongodb-atlas\"\n}\n", "text": "Hi everybody,I have a strange issue on my atlas realm app:I have partition sync enabled and a view that runs several aggregation stages to combine data.There are two mutually exclusive errors that are thrown:when I define a schema for the view, I get TranslatorFatalError with the following message:\nrecoverable event subscription error encountered: failed to configure namespaces for sync: failed to configure namespace ns=‘sias.siteInfo’ for sync: error issuing collMod command for ns=‘sias.siteInfo’: (InvalidOptions) option not supported on a view: recordPreImageswhen I remove the schema for the view, i get NoMatchingRuleFound with the following message:\nAction on service ‘mongodb-atlas’ forbidden: arguments to ‘find’ don’t match rule criteria\nScreenshot_2022_07_28_0011650×924 82.2 KB\nDoes someone encountered the same issue?\nHow can I disable recordPreImages for the view? (is not showed in “linked data source” advanced config options…)thanks", "username": "Armando_Marra" }, { "code": "", "text": "Hi, Sync is not supported on views currently for a multitude of reasons, the chief one being that they are read-only. Perhaps we can surface this in a nicer way, but the recordPreImages check is another thing that Sync needs and views do not support. I will follow up with the Docs team to make sure this is documented somewhere.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Hi @Tyler_Kaye , thank you for your reply.I’ve found a workaround for this:this fixed all the errors for me.I hope this will help other users with the same issue.\nimage996×723 62.9 KB\n", "username": "Armando_Marra" }, { "code": "", "text": "As a warning, if there is no schema defined then the collection is not being synced.", "username": "Tyler_Kaye" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Sync error when Schema is defined for a view - find() error when schema is not defined for the same view
2022-07-28T10:53:23.056Z
Realm Sync error when Schema is defined for a view - find() error when schema is not defined for the same view
2,959