Dataset columns: image_url (string, 113-131 chars), tags (sequence), discussion (list), title (string, 8-254 chars), created_at (string, 24 chars), fancy_title (string, 8-396 chars), views (int64, 73-422k)
null
[]
[ { "code": "mongosh \"mongodb://sandbox-shard-00-00.u8drk.mongodb.net:27017/sample_airbnb\" --username m001-studentCurrent sessionID: b35d7728b64c5619a480c2b2\nEnter password: m001-mongodb-basics\nConnecting to: mongodb://sandbox-shard-00-00.u8drk.mongodb.net:27017/sample_airbnb\nMongoServerSelectionError: connection <monitor> to 23.23.26.96:27017 closed\n", "text": "Hello, I’m on Windows 7 and I can’t use the mongodb shell that the class is suggesting. I spoke with your Customer Support team (Clevy) and they suggested I downloaded the shell that supports Windows 7+.https://www.mongodb.com/try/download/shellSo, I installed it and tried to connect to my cluster. I was able to log in with the password using the command:mongosh \"mongodb://sandbox-shard-00-00.u8drk.mongodb.net:27017/sample_airbnb\" --username m001-studentbut I got this response back:Any ideas on how to make this work? Thank you so much.", "username": "_N_A4_1" }, { "code": "", "text": "The mongo shell, the command mongo and the new mongosh are not the same product. This course is geared toward mongo. You will not be able to follow the course with mongosh. The course is using an IDE integrated into the web browser. You have nothing to install.", "username": "steevej" }, { "code": "", "text": "That’s great to know. Thank you.", "username": "_N_A4_1" }, { "code": "", "text": "Hi @_N_A4,I hope you found @steevej-1495’s response helpful.Let us know if you have any other questions.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "Where is the link for the integrated IDE in the web browser to connect to Atlas cluster", "username": "Biplab_Dash" }, { "code": "", "text": "Make sure that the IP Address is 0.0.0.0 i.e. access from anywhere. Because that will also give a connection error.", "username": "ankita_sinha" }, { "code": "", "text": "Hi @Biplab_Dash,Where is the link for the integrated IDE in the web browser to connect to Atlas clusterPlease take a look at this post.Let us know if you have any other doubts.", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Connecting to an Atlas Cluster from mongosh on Windows 7
2020-11-10T18:25:18.080Z
Connecting to an Atlas Cluster from mongosh on Windows 7
2,775
null
[]
[ { "code": "", "text": "I am trying to connect to Mongo shell in windows machine. I had set the class path and launching the Mongo shell, which just open for 1-2 second and then crashes out. I tried connecting via Compass as well where request got timed out.\nAny leads would be helpful. Thanks in advance!!", "username": "Vijay_Kumar1_1" }, { "code": "", "text": "Post a screenshot of what you are doing that shows the error you are having.", "username": "steevej" }, { "code": "", "text": "Mongo shell just pops up and get crashed within a seconds.", "username": "Vijay_Kumar1_1" }, { "code": "", "text": "C:\\Users\\C44293>ping cluster0-shard-00-00-jxeqq.mongodb.netPinging ec2-34-195-121-130.compute-1.amazonaws.com [34.195.121.130] with 32 bytes of data:\nRequest timed out.\nRequest timed out.\nRequest timed out.\nRequest timed out.Ping statistics for 34.195.121.130:\nPackets: Sent = 4, Received = 0, Lost = 4 (100% loss),", "username": "Vijay_Kumar1_1" }, { "code": "", "text": "You should be using the IDE supplied by the course.You should be connecting to your own cluster, not to jxeqq.I can ping the shared cluster jxeqq. If you can’t then you have a firewall or VPN issue that prevents you from going to this address.You should be posting screenshot as requested.", "username": "steevej" }, { "code": "", "text": "Hi @Vijay_Kumar1,In addition to @steevej-1495,Mongo shell just pops up and get crashed within a seconds.Looks like you are trying to run the executable file directly.Open command prompt and type this command and share are output with us as well :mongo --nodb~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Mongo shell connection issue
2020-11-15T16:19:29.091Z
Mongo shell connection issue
2,341
null
[]
[ { "code": "", "text": "The guide is asking you to use the command line but this won’t work since mongo isn’t installed locally.Please advise.", "username": "Ken_Mathieu_Beaudin" }, { "code": "mongo", "text": "Hi @Ken_Mathieu_Beaudin !at the end of the chapter there is an Interactive Developer Environment (IDE). This is bash shell with mongo and other bins already installed for you.To run it locally head over to the installation chapter in the docs.", "username": "santimir" }, { "code": "", "text": "I am not able to connect to my cluster using IDE environment, can any one help me?", "username": "Muzaffar_ali_53011" }, { "code": "", "text": "What issue you are facing\nPlease show us the screenshot or error details", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi,i am not able to connect to mogodb using shell .\nfollowing the same procedure as informed.\nthe IDE show error.\nattaching the screenshot .\nconnection string :mongo “mongodb+srv://sandbox.wr6ht.mongodb.net/” --username m001-student\nScreenshot (20)1920×1080 91.7 KB", "username": "aditya_rana" }, { "code": "", "text": "What error are you getting?\nI don’t see any error in your snapshot.It shows test result failed\nDid you run the command in correct area of IDE\nDid you hit enter after typing/pasting the connect string?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I am not able to connect to Atlas cluster. I am getting following erros.\nconnecting to: mongodb://sandbox-shard-00-02.ftsyv.mongodb.net.:27017,sandbox-shard-00-00.ftsyv.mongodb.net.:27017,sandbox-shard-00-01.ftsyv.mongodb.net.:27017/%3Cdbname%3E?authSource=admin&gssapiServiceName=mongodb&replicaSet=atlas-exfvdg-shard-0&ssl=true\n2020-10-27T10:02:45.134+0000 I NETWORK [js] Starting new replica set monitor for atlas-exfvdg-shard-0/sandbox-shard-00-02.ftsyv.mongodb.net.:27017,sandbox-shard-00-00.ftsyv.mongodb.net.:27017,sandbox-shard-00-01.ftsyv.mongodb.net.:27017\n2020-10-27T10:02:45.771+0000 I NETWORK [js] Successfully connected to sandbox-shard-00-01.ftsyv.mongodb.net.:27017 (1 connections now open to sandbox-shard-00-01.ftsyv.mongodb.net.:27017 with a 5 second timeout)\n2020-10-27T10:02:45.771+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to sandbox-shard-00-00.ftsyv.mongodb.net.:27017 (1 connections now open to sandbox-shard-00-00.ftsyv.mongodb.net.:27017 with a 5 second timeout)\n2020-10-27T10:02:45.959+0000 I NETWORK [js] changing hosts to atlas-exfvdg-shard-0/sandbox-shard-00-00.ftsyv.mongodb.net:27017,sandbox-shard-00-01.ftsyv.mongodb.net:27017,sandbox-shard-00-02.ftsyv.mongodb.net:27017 from atlas-exfvdg-shard-0/sandbox-shard-00-00.ftsyv.mongodb.net.:27017,sandbox-shard-00-01.ftsyv.mongodb.net.:27017,sandbox-shard-00-02.ftsyv.mongodb.net.:27017\n2020-10-27T10:02:46.535+0000 I NETWORK [js] Successfully connected to sandbox-shard-00-01.ftsyv.mongodb.net:27017 (1 connections now open to sandbox-shard-00-01.ftsyv.mongodb.net:27017 with a 5 second timeout)\n2020-10-27T10:02:46.538+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to sandbox-shard-00-00.ftsyv.mongodb.net:27017 (1 connections now open to sandbox-shard-00-00.ftsyv.mongodb.net:27017 with a 5 second timeout)\n2020-10-27T10:02:47.363+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to sandbox-shard-00-02.ftsyv.mongodb.net:27017 (1 connections now open to sandbox-shard-00-02.ftsyv.mongodb.net:27017 with a 5 second timeout)\n2020-10-27T10:02:47.728+0000 I NETWORK [js] Marking host 
sandbox-shard-00-01.ftsyv.mongodb.net:27017 as failed :: caused by :: Location40659: can’t connect to new replica set master [sandbox-shard-00-01.ftsyv.mongodb.net:27017], err: Location8000: bad auth : Authentication failed.\n2020-10-27T10:02:48.104+0000 I NETWORK [js] Marking host sandbox-shard-00-02.ftsyv.mongodb.net:27017 as failed :: caused by :: SocketException: can’t authenticate against replica set node sandbox-shard-00-02.ftsyv.mongodb.net:27017 :: caused by :: socket exception [CONNECT_ERROR] server [sandbox-shard-00-02.ftsyv.mongodb.net:27017] connection pool error: network error while attempting to run command ‘isMaster’ on host ‘sandbox-shard-00-02.ftsyv.mongodb.net:27017’\n2020-10-27T10:02:48.478+0000 I NETWORK [js] Marking host sandbox-shard-00-00.ftsyv.mongodb.net:27017 as failed :: caused by :: SocketException: can’t authenticate against replica set node sandbox-shard-00-00.ftsyv.mongodb.net:27017 :: caused by :: socket exception [CONNECT_ERROR] server [sandbox-shard-00-00.ftsyv.mongodb.net:27017] connection pool error: network error while attempting to run command ‘isMaster’ on host ‘sandbox-shard-00-00.ftsyv.mongodb.net:27017’\n2020-10-27T10:02:49.612+0000 I NETWORK [js] Marking host sandbox-shard-00-01.ftsyv.mongodb.net:27017 as failed :: caused by :: Location40659: can’t connect to new replica set master [sandbox-shard-00-01.ftsyv.mongodb.net:27017], err: Location8000: bad auth : Authentication failed.\n2020-10-27T10:02:49.613+0000 E QUERY [js] Error: can’t authenticate against replica set node sandbox-shard-00-01.ftsyv.mongodb.net:27017 :: caused by :: can’t connect to new replica set master [sandbox-shard-00-01.ftsyv.mongodb.net:27017], err: Location8000: bad auth : Authentication failed. :\nconnect@src/mongo/shell/mongo.js:328:13\n@(connect):1:6\nexception: connect failed", "username": "Sandeep_41860" }, { "code": "", "text": "Hi follow the step to sucess the exercice mongo university step1398×703 57.4 KB", "username": "Jean-Claude_ADIBA" }, { "code": "", "text": "bad authentication means wrong combination of userid/pwd\nWhat did you give as password?\nMay be some invalid character or space got introduced while pasting the password at the time of creating your sandbox cluster", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Muzaffar_ali_53011,I am not able to connect to my cluster using IDE environment, can any one help me?Please share the information requested by @Ramachandra_37567 if you are still facing any issue.What issue you are facing\nPlease show us the screenshot or error details", "username": "Shubham_Ranjan" }, { "code": "", "text": "Hi\nI am facing issue while connection to mongoshell.\nError which I facing,I pasted below. 
please let me know where I need to improve the command.bash-4.4# mongo \"mongodb+srv://sandbox.ofcuo.mongodb.net/\nMongoDB shell version v4.0.5\nEnter password:\nconnecting to: mongodb://sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017/%3Cdbname%3E?authSource=admin&gssapiServiceName=mongodb&replicaSet=atlas-2y4u8j-shard-0&ssl=true\n2020-11-01T05:41:19.699+0000 I NETWORK [js] Starting new replica set monitor for atlas-2y4u8j-shard-0/sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017\n2020-11-01T05:41:19.865+0000 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:19.865+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.\n2020-11-01T05:41:20.406+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:20.406+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 2 checks in a row.\n2020-11-01T05:41:20.947+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:20.947+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 3 checks in a row.\n2020-11-01T05:41:21.489+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:21.489+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 4 checks in a row.\n2020-11-01T05:41:22.031+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:22.031+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 5 checks in a row.\n2020-11-01T05:41:22.572+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:22.572+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 6 checks in a row.\n2020-11-01T05:41:23.112+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:23.112+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 7 checks in a row.\n2020-11-01T05:41:23.653+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:23.653+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 8 checks in a row.\n2020-11-01T05:41:24.195+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:24.195+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. 
This has happened for 9 checks in a row.\n2020-11-01T05:41:24.737+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:24.737+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 10 checks in a row.\n2020-11-01T05:41:25.277+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:25.278+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 11 checks in a row.\n2020-11-01T05:41:25.819+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:26.359+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:26.900+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:27.448+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:27.989+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:28.530+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:29.071+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:29.612+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:30.153+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:30.696+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:30.696+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 21 checks in a row.\n2020-11-01T05:41:31.238+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:31.778+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:32.319+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:32.860+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:33.400+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:33.941+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:34.481+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:34.482+0000 E QUERY [js] Error: connect failed to replica set atlas-2y4u8j-shard-0/sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017 :\nconnect@src/mongo/shell/mongo.js:328:13\n@(connect):1:6\nexception: connect failed\nbash-4.4# mongo \"mongodb+srv://sandbox.ofcuo.mongodb.net/\nMongoDB shell version v4.0.5\nEnter password:\nconnecting to: mongodb://sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017/%3Cdbname%3E?authSource=admin&gssapiServiceName=mongodb&replicaSet=atlas-2y4u8j-shard-0&ssl=true\n2020-11-01T05:43:40.571+0000 I NETWORK [js] Starting new replica set monitor for atlas-2y4u8j-shard-0/sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017\n2020-11-01T05:43:40.755+0000 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set 
atlas-2y4u8j-shard-0\n2020-11-01T05:43:40.755+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.\n2020-11-01T05:43:41.296+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:41.296+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 2 checks in a row.\n2020-11-01T05:43:41.836+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:41.836+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 3 checks in a row.\n2020-11-01T05:43:42.385+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:42.385+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 4 checks in a row.\n2020-11-01T05:43:42.927+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:42.927+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 5 checks in a row.\n2020-11-01T05:43:43.468+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:43.468+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 6 checks in a row.\n2020-11-01T05:43:44.010+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:44.010+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 7 checks in a row.\n2020-11-01T05:43:44.551+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:44.551+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 8 checks in a row.\n2020-11-01T05:43:45.091+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:45.091+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 9 checks in a row.\n2020-11-01T05:43:45.633+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:45.633+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 10 checks in a row.\n2020-11-01T05:43:46.173+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:46.173+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. 
This has happened for 11 checks in a row.\n2020-11-01T05:43:46.720+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:47.264+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:47.805+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:48.346+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:48.886+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:49.427+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:49.968+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:50.508+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:51.049+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:51.590+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:51.590+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 21 checks in a row.\n2020-11-01T05:43:52.131+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:52.671+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:53.212+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:53.753+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:54.293+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:54.834+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:55.375+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:55.375+0000 E QUERY [js] Error: connect failed to replica set atlas-2y4u8j-shard-0/sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017 :\nconnect@src/mongo/shell/mongo.js:328:13\n@(connect):1:6\nexception: connect failed", "username": "jay_bhosale" }, { "code": "", "text": "Is your cluster up and running?\nPlease check status in Atlas.Any errors?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Most likely you forgot to whitelist your IP address.", "username": "steevej" }, { "code": "", "text": "Yess,Its up & running.\nI didn’t see any wrong with configuration.", "username": "jay_bhosale" }, { "code": "", "text": "I added IP (My-Machine) in network access.\nIts seem successfully added without any error.but while i trigger connection command trough console its throwing me error.", "username": "jay_bhosale" }, { "code": "", "text": "3 posts were split to a new topic: Not able to connect to cluster through IDE", "username": "Shubham_Ranjan" }, { "code": "whitelistIPs0.0.0.0", "text": "Hi @jay_bhosale,Can you try to whitelist all the IPs by selecting 0.0.0.0 option?Please take a look at this post for more information.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "I can’t paste the command to terminal. the terminal is not responding. what should I do?", "username": "Binti_Solihah" }, { "code": "", "text": "Please show us the screenshot\nMay be you pasted it in wrong area?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Press the + button.", "username": "santimir" } ]
Lab: Connect to your Atlas Cluster
2020-11-13T20:27:25.243Z
Lab: Connect to your Atlas Cluster
1,917
null
[ "swift", "atlas-device-sync" ]
[ { "code": "", "text": "Hi, I am trying to use the latest v5.3.5 RealmSwift with an existing project but get compiler errors indicating the sync components are missing:SyncUser\nSyncSession\nSyncManager\nSyncCredentialsHowever these are typealiased to the RMLSyncUser in the source files.Should I be able to build the full sync client using the Package Manager installed in Xcode ? If so any idea why these are not being found ?", "username": "Duncan_Groenewald" }, { "code": "", "text": "I’m using 10.1.2 and I also have the same problems when using Package Manager in XCode 12. Hope to hear any update on this issue.", "username": "Sujeevan_Kuganesan" }, { "code": "", "text": "Sync is not supported through Swift Package Manager yet, however it will be in the near future.", "username": "Jason_Flax" }, { "code": "", "text": "@Ian_Ward - I raised a request in GitHub to get a RealmSwift binary build for Xcode 12 and you responded saying that I should download the repo and build it myself. As I understand it Sync is not supported if you do that (see above response) so that would not help since we are in production with Realm Cloud.Wouldn’t it be pretty quick for you to build RealmSwift v4.4.1 with Xcode 12 (Swift 5.3 I believe) - rather than us having to wait for Sync to be included in Package Manager.V5.5.x still has issues so we are not confident to use it in production while we still see random crashes - but presumably we can’t build RealmSwift with Sync for v5.5.0 either anyway.Did I misunderstand something or is there some other way for us to build a Sync version of RealmSwift v4.4.1 for Xcode 12 - and ideally for both x86 and Apple Silicon (universal binary ?).Thanks", "username": "Duncan_Groenewald" }, { "code": "", "text": "hey @Duncan_Groenewald - I appreciate the post.Wouldn’t it be pretty quick for you to build RealmSwift v4.4.1 with Xcode 12 (Swift 5.3 I believe) - rather than us having to wait for Sync to be included in Package Manager.Unfortunately, it would be a lot of work for us to make a build with a pretty old version of RealmSwift 4.4.1 that supports Apple Silicon. We’ve had to add a significant amount of build steps in order to support the new silicon. I’d encourage you to upgrade to a 5.x version of RealmSwift that works for your build requirements - if you are experiencing issues with the 5.x version of RealmSwift; let’s work through them. We are here to help.", "username": "Ian_Ward" }, { "code": "", "text": "There are two issues we are still seeing with 5.5.0", "username": "Duncan_Groenewald" }, { "code": "", "text": "@Duncan_Groenewald For RealmSwift 5.5.0 Unable to open 4.4.1 Realm file - ERROR: Key Already Used · Issue #6884 · realm/realm-swift · GitHubAny information you can provide to our Cocoa team will help us narrow down the issue and fix this for you. Jason asked for a stack trace, if the app is crashing, then a stacktrace should be uploaded to Fabric or similar crash reporting service. If you are able to reproduce and can provide a sample app - we fix this immediately.", "username": "Ian_Ward" }, { "code": "", "text": "I have offered to provide a realm file for 2) which you can open with Realm Studio and you get the same error (there is no crash). I need an email address and I can share a dropbox folder - the file is around 50MB so I can’t email a copy.", "username": "Duncan_Groenewald" }, { "code": "", "text": "Sure you can email me at [email protected]", "username": "Ian_Ward" } ]
RealmSwift with Package Manager cannot find "SyncUser" in scope
2020-08-22T03:59:01.487Z
RealmSwift with Package Manager cannot find “SyncUser” in scope
4,473
null
[ "queries" ]
[ { "code": "", "text": "I have a collection called Reviews and a collection called Users. Each Review has an ID. Users have a field called “reviewLikes,” which is populated by review IDs. I want to locate Review IDs that match any of the ID numbers in Users.reviewLikes. How in the world can I achieve this. My brain is frying right now. Thank you!", "username": "Christopher_Clark" }, { "code": "", "text": "Using the aggregation framework\nand in particular the $lookup stage", "username": "steevej" } ]
How do I search one collection for information contained in a second collection?
2020-11-16T21:49:15.167Z
How do I search one collection for information contained in a second collection?
1,269
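
A minimal sketch of the $lookup approach pointed to in the thread above, written for the mongo shell. The collection and field names (users, reviews, reviewLikes) are taken from the question's description and may need adjusting to the real schema.

    // For each user, pull in the full review documents whose _id appears
    // in that user's reviewLikes array.
    db.users.aggregate([
      { $lookup: {
          from: "reviews",            // collection being joined
          localField: "reviewLikes",  // array of review _id values on the user
          foreignField: "_id",        // matched against each review's _id
          as: "likedReviews"          // joined review documents land in this array
      } }
    ])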
null
[ "connecting", "next-js", "developer-hub" ]
[ { "code": "connectToDatabaseimport { MongoClient } from 'mongodb'\n\nlet uri = process.env.MONGO_URI\nlet dbName = process.env.MONGO_DB\n\nlet promise = null\nlet cached = null\n\nif (!uri) throw new Error('Missing environment variable MONGO_URI')\nif (!dbName) throw new Error('Missing environment variable MONGO_DB')\n\nexport async function connectToDatabase() {\n if (cached) return cached\n if (!promise) {\n promise = MongoClient.connect(uri, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n })\n }\n const client = await promise\n const db = await client.db(dbName)\n cached = {\n client,\n db,\n }\n return cached\n}\n", "text": "I was following this post How to Integrate MongoDB Into Your Next.js App by @ado and noticed that if you make a bunch of requests through your Next.js Api Routes in dev mode (eg yarn dev) and then stop dev mode (eg ctrl+c) - you can see in the MongoDB logs that it releases a tonne of connections. Somewhere along the way the connections aren’t closing after each request and are just staying active until you stop Next.js@ado did you notice anything like this yourself?PS: I also noticed that your connectToDatabase function waits for the promise to complete before caching, which means that if you call this rapidly multiple times you will end up with more than one connection. Here is my revisted version:", "username": "Ash_Connell" }, { "code": "", "text": "It looks like these guys are having the same issues: proper use of mongo db in Next.js · vercel/next.js · Discussion #12229 · GitHub", "username": "Ash_Connell" }, { "code": "", "text": "Hi Ash,Thank you for reaching out. I have not experienced the issue personally, but looking at the linked discussion it seems others are experiencing it as well. Let me do a little bit of digging to see if I can figure out what is going on and maybe get some guidance from the Vercel guys on how to best handle this.In your revised code, are you seeing the results you’d expect or is it still creating too many connections?Thanks,\nAdo", "username": "ado" }, { "code": "connectToDatabase()", "text": "Hi Ado,The revised code prevents a situation where if you call connectToDatabase() a second time (or more) before the first one has cached, it results in another MongoClient being created and overwritten as the cached version.That said, the connection issue still exists, i just wanted to rule that out.", "username": "Ash_Connell" }, { "code": "", "text": "Hi Ash,With the PR you made to the official example, this isn’t an issue any longer right? If it is, I’m happy to dedicate some time to it to investigate further, but I’ve been following the GH PR and it looks like after some back and forth, you provided a solid solution.", "username": "ado" } ]
Next.js issue with multiple unclosed connections
2020-10-06T03:35:38.102Z
Next.js issue with multiple unclosed connections
6,490
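
For reference, a stripped-down version of the pattern being discussed: caching the connection promise itself so that rapid concurrent calls share one MongoClient. This is a sketch based on the snippet in the first post, not Vercel's official example, and the environment variable names are carried over from that snippet.

    import { MongoClient } from 'mongodb'

    const uri = process.env.MONGO_URI
    const dbName = process.env.MONGO_DB

    // Cache the promise, not the resolved client: concurrent callers that
    // arrive before the first connection resolves all await the same
    // MongoClient instead of each opening their own connection pool.
    let clientPromise = null

    export async function connectToDatabase() {
      if (!clientPromise) {
        clientPromise = MongoClient.connect(uri, {
          useNewUrlParser: true,
          useUnifiedTopology: true,
        })
      }
      const client = await clientPromise
      return { client, db: client.db(dbName) }
    }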
null
[]
[ { "code": "", "text": "Hello everyone,\ncan anyone told me with architectural patterns do MongoDB use ? I dont mean schema design patterns, but patterns which are used in implementation of MongoDB. I do research for my school project about MongoDB architecture. Can anyone help ?", "username": "Lukas_Machata" }, { "code": "", "text": "Hello @Lukas_Machata welcome to the MongoDB community!I am not absolute sure if I under stood your goal correct. You can start reading this Mongodb Architecture Guide.In case questions come up while reading the guide or other resources, please feel free to come back with your questions at any time.Cheers\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Hello @Lukas_Machata,\nAnother resource that may be of help to you are the “Server Internals” documentation pages that can be found directly in the codebase. We currently have sections on Sharding Internals, Replication Internals, Security, Catalog and Storage Execution Layer Internals, and the Storage Engine API. We are continuing to add sections on other components over time. You can see the current list of docs linked from the github mongo wiki in the “Server Internals” side bar.", "username": "Daniel_Pasette" } ]
MongoDB Architectural Patterns
2020-11-16T12:19:54.414Z
MongoDB Architectural Patterns
2,003
null
[ "python", "connecting" ]
[ { "code": "python -c \"import ssl; print(getattr(ssl, 'HAS_SNI', False))\"\nTrue\nopenssl version\nOpenSSL 1.1.0h 27 Mar 2018\npython -c \"import requests; print(requests.get('https://www.howsmyssl.com/a/check', verify=False).json()['tls_version'])\"\nconnectionpool.py:979: InsecureRequestWarning: Unverified HTTPS request is being made to host 'www.howsmyssl.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\nwarnings.warn(TLS 1.3)\n ATLAS_CONNECT = r\"mongodb+srv://m001student:[email protected]/\" \\\n r\"sample_airbnb?retryWrites=true&w=majority\"\n\n client = pymongo.MongoClient(ATLAS_CONNECT)\n db = client.get_database(\"sample_airbnb\")\n collection = db.get_collection(\"listingsAndReviews\")\n collection.count_documents({})\n print(collection)\nFile \"test_mongo_connect.py\", line 35, in main\ncollection.count_documents({})\nFile \"...pymongo\\collection.py\", line 1785, in count_documents\nreturn self.__database.client._retryable_read(\nFile \"...pymongo\\mongo_client.py\", line 1460, in _retryable_read\nserver = self._select_server(\nFile \"...pymongo\\mongo_client.py\", line 1278, in _select_server\nserver = topology.select_server(server_selector)\nFile \"...pymongo\\topology.py\", line 241, in select_server\nreturn random.choice(self.select_servers(selector,\nFile \"...pymongo\\topology.py\", line 199, in select_servers\nserver_descriptions = self._select_servers_loop(\nFile \"...pymongo\\topology.py\", line 215, in _select_servers_loop\nraise ServerSelectionTimeoutError(\n\npymongo.errors.ServerSelectionTimeoutError: connection closed,connection closed,connection closed, Timeout: 30s, Topology Description: <TopologyDescription id: 5faf922aa1fe2651f43d4e91, topology_type: ReplicaSetNoPrimary,\nservers: [<ServerDescription ('sandbox-shard-00-00.v16f0.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('connection closed')>,\n<ServerDescription ('sandbox-shard-00-01.v16f0.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('connection closed')>,\n<ServerDescription ('sandbox-shard-00-02.v16f0.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('connection closed')>]>\n", "text": "Good day everyone,I am working on my first Mongo online course, M001: MongoDB BasicsThe course assumes creating Atlas Free Tier database called Sandbox, and populate it with about 300Mb of sample data. 
Even though I did all exercises related to CLI or Compass, my future work is going to be mostly on developing client applications based on PyMongo, so I tried also connect to Sandbox using the Python script.First I checked admin credentials and opened cluster for all IPs, adding 0.0.0.0/32 to whitelist.My client instance specs: Windows 7, Python 3.8.4, PyMongo 3.11.0As PyMongo documentation says, if I use Free Tier, I have to check my client app on SNI support, OpenSSL version and TLS version.SNI checkOpenSSL versionTLS versionThere’s a code snippet I’m trying to executeExecuting results the following exceptionI was able to reproduce the exception on other system (macOS), so I don’t think it’s some local misconfiguration.", "username": "Atatatko" }, { "code": "", "text": "Is that MongoDB URI really correct?", "username": "Jack_Woehr" }, { "code": "0.0.0.0/320.0.0.0/0", "text": "I solved it.\nThis was probably my stupidest error.\n0.0.0.0/32 notation for allowing all IPs is incorrect.\nCorrect form is 0.0.0.0/0\nWould’ve been nice to have a clue like “access denied”.", "username": "Atatatko" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
PyMongo: error connecting to Sandbox Atlas database from M001 course
2020-11-14T09:30:05.545Z
PyMongo: error connecting to Sandbox Atlas database from M001 course
7,357
null
[ "replication" ]
[ { "code": "", "text": "HelloI have a configuration with 4 MongoDB databases version 3.4.6 on Redhat 6.10•\tNode3 and Node4 are running one site\n•\tNode1 and Node2 are running on another remote site\n•\tNode1 is the PRIMARY node, Node2 is SECONDARY and I have to resync Node3 and Node4.\n•\tI’m trying first to resync Node3 with the primary Node1\n•\tI have launched the standard resync procedure to resync first Node3 as follows:1-\tConnect to primary Node1 and remove Node3 from replica configuration with this command:\n•\trs.remove(“Node3”);2 - Stop mongo service on node Node3 remove data folder and restart mongo service3 - ReAdd the node to replica\n•\trs.add( { host: “Node3” } );•\tDuring the resync Node3 was normally in STARTUP2 status.\n•\tI could see that the Collections and the Indexes were created. It took a lot of time to recreate the Indexes by the way…\n•\tFinally at the end the Node3 to be resynced switched to RECOVERING status without being able to reach SECONDARY statusPlease find below ouput of rs.status() | egrep “name|stateStr” commandname\" : “NODE3”,\n“stateStr” : “RECOVERING”,\n“name” : “NODE1”\n“stateStr” : “PRIMARY”,\n“name” : “NODE2”,\n“stateStr” : “SECONDARY”,Any idea why Node3 is staying in RECOVERING status ? and not be to be fully resynced properly? What can I check?Many thanks for the suggestions.", "username": "MARCELIN_Richard" }, { "code": "", "text": "Hi @MARCELIN_Richard,It is probably due to resync failures when the resync attempts are exhausted the resync will fail in recovery mode.There might be several issues:Check the logs of the syncing secondary to see the issue failure.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Not able to resync my MongoDB database
2020-11-14T23:27:26.121Z
Not able to resync my MongoDB database
1,747
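
A condensed sketch of the resync sequence described in the thread above, run from the mongo shell on the primary. The hostname Node3 follows the thread; the port suffix and exact member names should match what rs.conf() reports for your replica set.

    // 1. On the primary, remove the stale member (use the exact name from rs.conf()).
    rs.remove("Node3:27017")

    // 2. On Node3: stop mongod, empty its dbPath, start mongod again.

    // 3. Back on the primary, re-add the member to trigger an initial sync.
    rs.add({ host: "Node3:27017" })

    // 4. Watch the member move STARTUP2 -> SECONDARY; if it lands in RECOVERING,
    //    the syncing node's own mongod log should show why the initial sync failed.
    rs.status().members.forEach(function (m) { print(m.name + " : " + m.stateStr) })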
null
[ "performance" ]
[ { "code": "db.test.find().forEach(function(doc){db.test.update({_id:doc._id}, {$set:{counter: 400}})})\ndb.test.find().forEach(function(doc){db.test.update({_id:doc._id}, {$set:{counter: 400}})})\ndb.test.find().forEach(function(doc){db.test.update({_id:doc._id}, {$set:{counter: 400}})})\ndb.test.find().forEach(function(doc){db.test.update({_id:doc._id}, {$set:{counter: 500}})})\ndb.test.find().forEach(function(doc){db.test.update({_id:doc._id}, {$set:{counter: 500}})})\n", "text": "The update query performance varies significantly when the field values are set to the same value. MongoDB version: 4.2.6Run 1:no. of documents: 1 milliontime taken to update 1M documents: 13m31.227sRun 2:no. of documents: 1 milliontime taken to update 1M documents: 7m41.080sRun 3: After mongod restart and cleaning buff/cache manuallyno. of documents: 1 milliontime taken to update 1M documents: 7m41.080sRun 4: Setting counter to new valueno. of documents: 1 milliontime taken to update 1M documents: 13m44.284sRun 5:no. of documents: 1 milliontime taken to update 1M documents: 7m42.356sDoes mongodb perform additional checks while setting the value of a field? What can cause such performance difference for same update operation?", "username": "astro" }, { "code": "", "text": "Hi @astro,Thats a very interesting observation.I assume that since the Wired Tiger storage document does not actually need to be changed it won’t need to write it to disk eventually , easing on overall ack duration which drives client faster.Having said that I haven’t analysed the code to verify so its just a smart guess…Let me know if that makes sense.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks, @Pavel_Duchovny.This has helped. I am observing a lot of update calls making it to logs(above 100ms) in the fresh run. But there are only a few in the logs when the field value is already set.Shouldn’t there be no calls making it to logs when writing to disk is skipped in the second run?", "username": "astro" }, { "code": "", "text": "@astro,The log writes commands that exceed 100ms execution time regardless of their behaviour with the storage engine.Therefore, the likelihood of those commands crossing 100ms is lower than with a new value but it still exists and therefore you see some in the logs. You will probably see that the document was modified as the query layer did updated it from its standpoint .Best\nPavel", "username": "Pavel_Duchovny" } ]
MongoDB update operation performance
2020-11-15T19:54:04.487Z
MongoDB update operation performance
6,535
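
One way to observe this from the client side (a sketch, not taken from the thread; it uses updateMany rather than the per-document forEach loop): the update result reports matched and modified documents separately, and a $set to an unchanged value counts as matched but not modified.

    // First pass: counter actually changes on every document.
    db.test.updateMany({}, { $set: { counter: 400 } })
    // => matchedCount: 1000000, modifiedCount: 1000000

    // Second pass with the same value: every document matches, but nothing
    // needs to be rewritten, so modifiedCount is 0.
    db.test.updateMany({}, { $set: { counter: 400 } })
    // => matchedCount: 1000000, modifiedCount: 0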
https://www.mongodb.com/…57f0ea1a3fb.jpeg
[ "compass" ]
[ { "code": "Error creating SSH Tunnel: connect EADDRINUSE <ip>:22 - Local (0.0.0.0:29418)", "text": "Hey, I am unable to connect to my mongodb through SSH. I mostly get EADDRINUSE. I would be happy to at least understand the error – what does it mean?Error creating SSH Tunnel: connect EADDRINUSE <ip>:22 - Local (0.0.0.0:29418)I just tried the same setup with Studio3T/Robo3T (alternative GUI client) and the connection is working on the first try.Sometimes in Compass, without any changes, I even get AUTHENTICATION FAILED. Does that mean mongodb auth or ssh auth? Sometimes, the SSH Port field is highlighted red, sometimes not. So weird.Any ideas why Compass refuses to connect? I’d prefer using the official software but it just doesn’t seem to work. I have zero idea where to start debugging, what my configuration is missing, or if there are good and free alternatives to Compass. Help?I tried with a rsa and a ppk file. I saw one post about the identify file being malformed – What kind of identify file is Compass really expecting here?I have the latest Compass Version (1.23.0).\nI tried from two different computers and two different networks.Thank you for reading.\ncompass574×984 60 KB\n", "username": "thusman" }, { "code": "", "text": "If you’re connecting to localhost use 127.0.0.1 if you need an ip for the Hostname field.\nYou probably don’t have MongoDB configured to listen on the external interface.\nI think basically you have set up more than you need to for localhost.", "username": "Jack_Woehr" }, { "code": "", "text": "I should’ve made it clear that I try to connect to a remote server. On that server mongodb does not accept external interfaces, true. So hostname is my remote server that I ssh into, and then try to connect to its localhost.", "username": "thusman" }, { "code": "ssh -Lssh -L 27017:localhost:27017 me@myhost-Llocalhostssh -L127.0.0.1", "text": "I do this daily, doing it now.Did you redirect the ports properly using ssh -L flag, e.g, something like:ssh -L 27017:localhost:27017 me@myhost(assuming you don’t have mongod running on the local machine, in which case you would need to use another port, e.g., 27117, on the right side of the -L argument and add the port number to your mongodb uri)?Also if you name the interface (localhost) to ssh -L as I did above, that’s the interface, i.e., 127.0.0.1, not the ethernet interface address of your computer.", "username": "Jack_Woehr" }, { "code": "ssh -L 27017:localhost:27017 me@myhost", "text": "ssh -L 27017:localhost:27017 me@myhostHey Jack, thank you, that is super interesting. I completely skip the “More Options” and SSH Login of Compass now and do the following:That’s a great solution, I didn’t know something like this was possible. I still don’t really understand why it doesn’t work with the SSH Options inside Compass, but this looks like a super future proof solution. Thanks!", "username": "thusman" }, { "code": "", "text": "Ha! you got me, @thusman … I’m not much of a tool-using animal I always use the lowest-level approach. SSH is “closer to ground” so I set up any necessary redirections there and then use whatever tool without it having to know about things. Wasn’t even aware Compass handles port redirections in any fashion.", "username": "Jack_Woehr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Compass Error creating SSH Tunnel: EADDRINUSE
2020-11-14T18:37:11.483Z
Compass Error creating SSH Tunnel: EADDRINUSE
9,190
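
Spelling out the workaround from the thread above for anyone landing here: forward a local port over SSH first, then point Compass or the mongo shell at that local port. The port 27117, host names and user name are placeholders; pick a port other than 27017 if a local mongod is already running.

    # Forward local port 27117 to the remote server's mongod on its localhost:27017
    ssh -L 27117:localhost:27017 me@myhost

    # In another terminal (or in Compass), connect through the tunnel
    mongo "mongodb://127.0.0.1:27117/test" --username myUser --password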
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 3.6.21 is out and is ready for production deployment. This release contains only fixes since 3.6.20, and is a recommended upgrade for all 3.6 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 3.6.21 is released
2020-11-16T14:28:47.352Z
MongoDB 3.6.21 is released
1,734
null
[ "data-modeling" ]
[ { "code": "", "text": "I have to decide about how to best save Data Objects for different Workspaces in an application in a database in the most performant way. For the sake of this example, let us assume that there are 1000 DataObjects, and 10 of them are in a Workspace. It is a given that I have a DataObject collection containing all DataObjects.I can think of two different approaches:In the first scenario, I would first look up the WorkspaceObject by its WorkspaceId , and then look up the DataObjects contained within their IDs. That would mean 11 database calls, each of which target an object by its _id.In the second scenario, I would look up the DataObjects using a filter for the WorkspaceId. That would be only one database call, but without any _id.So here’s the question: Which one of these two options is faster? I know that generally fewer database calls are better, so I lean towards the second option, but I don’t know if using filters makes it slower again, and how those two scenarios compare.", "username": "Kira_Resari" }, { "code": "$lookup_id{\n _id: 12, \n workspaceName: \"string\", \n dataObjects: [ \n { _id: 34, name: \"string\", anotherFld: \"string\" },\n {...},\n ...\n ]\n } \n\"dataObjects.name\"_id_id_idmongo", "text": "Hello @Kira_Resari,You have two entities workspace and dataobject, with One-to-Many relationship - that is a workspace has many data objects.In the first scenario, the workspace has the datobject references embedded within it. The query would be a $lookup aggregation which is a singe call but accessess two collectons on the server. The query retrieves the information from the workspace and details from the related data objects.You can also consider storing some information related to the data objects within the workspace - instead of just the _id. You can include additional info which doesn’t change often and is queried along with the workspace often. This way, you can get workspace and related dataobject information with a simple query - access one collection only. For example, the workspace document can be like this:Note that there is some data duplication, in this case - the \"dataObjects.name\" is stored in the dataobject collection also.In the second scenario, having the workspace _id reference in the dataobject lets you query with the workspace’s _id. This is also a single call to the server, but accesses the dataobject collection only. An index on the workspace’s _id within the dataobject collection will help the query perform faster.When you make a query from a client application like mongo shell, Compass or a program using your favorite language (like, Java, Python, etc.), the query is sent to the MongoDB Server as a single call. The query gets executed on the server. When the query is executed, it may access one or more collections - once or multiple times to access its documents on the server itself.In general, accessing one collection instead of two collections with a single query, performs better. Also, the query will be simpler.", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Which one is faster? Pointer List Object VS Filters
2020-11-16T10:18:48.124Z
Which one is faster? Pointer List Object VS Filters
2,073
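
To make the two scenarios in the thread above concrete, here is a shell sketch; the field names workspaceId and dataObjectIds are hypothetical stand-ins for however the references are actually stored.

    var wsId = ObjectId("5f43a1b2c3d4e5f678901234")  // some workspace _id

    // Scenario 2: each dataObject stores its workspace's _id. Index the
    // reference, then a single filtered query on one collection does the job.
    db.dataObjects.createIndex({ workspaceId: 1 })
    db.dataObjects.find({ workspaceId: wsId })

    // Scenario 1 collapsed into one server-side call (instead of 1 + 10
    // lookups by _id): join the referenced dataObjects from the workspace.
    db.workspaces.aggregate([
      { $match: { _id: wsId } },
      { $lookup: {
          from: "dataObjects",
          localField: "dataObjectIds",  // array of dataObject _ids on the workspace
          foreignField: "_id",
          as: "dataObjects"
      } }
    ])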
null
[]
[ { "code": "", "text": "Curious to know what software makes this discussion forum and what programming language it is in.", "username": "Jack_WAUGH" }, { "code": "", "text": "Hello @Jack_WAUGHthis forum is build upon Discoursecheers\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "OK. Looks like a typical MVC approach in Ruby. Here’s some model code, for example. discourse/category.rb at main · discourse/discourse · GitHub", "username": "Jack_WAUGH" }, { "code": "", "text": "Hello @Jack_WAUGH\nI assume that your last statement is an informational add on. To help others you can mark your question as solved.\nCheers\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "So that if anyone else wants to know about the tech, and searches here, they will see that it is Ruby. Yes, I marked your first response as the solution. Does marking a response as the solution have the effect of marking the question solved?", "username": "Jack_WAUGH" }, { "code": "status:solved ruby", "text": "Hi,\nsorry I was not precise. Marking a response as solution will “mark” the question as solved.You can click on search and options, to get advanced filters. You can try status:solved ruby in the search bar and you will find your question,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Does marking a response as the solution have the effect of marking the question solved?Hi @Jack_WAUGH,Marking a response as a solution identifies the most helpful post in a discussion topic (which will be featured below the question) and allows the topic to automatically close after discussion quiesces.As @michael_hoeller noted, the “solved” status of topics can also be used for searching and filtering.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Technique Underlying The Present Forum
2020-11-12T11:38:59.103Z
Technique Underlying The Present Forum
4,042
null
[ "queries" ]
[ { "code": "db.callers.find({\"fullName\" : \"John Doe\"})\n{\n \"fullName\" : \"John Doe\",\n \"phoneNo\" : \"07123456789\"\n}\ndb.callers.find({\"phoneNo\": \"07123456789\"})\n", "text": "When I query my database (stored in Atlas) using a user’s full name, it returns the expected result, but when I query using phone number, it returns nothing. This is only occurring for one (out of ~half a dozen) user.\nFor example,returnsbutreturns nothing.\nAm I missing something? The data definitely exists (as it is being returned by the first query). Could it be to do with the user’s specific phone number? If so, why?\nThanks", "username": "Ewan_Spence" }, { "code": "phoneNodb.callers.aggregate([\n { $match: { fullName: \"John Doe\" } },\n { $addFields: { phoneDataType: { $type: \"$phoneNo\" } } }\n])\nphoneDataType: \"string\"", "text": "Hello @Ewan_Spence, welcome to the MongoDB community forum!By looking at what you had posted it is difficult to say what the issue is. But, you can try to find the data type of the phoneNo field. The following aggregation query prints the data type of the field:As per your posted information it should return phoneDataType: \"string\". Let me know.", "username": "Prasad_Saya" }, { "code": "", "text": "Since the issue only happens for some, I would also verify the spelling of phoneNo for the documents that do not work. Being, schemaless, some documents might have phoneno or PhoneNo or any variation.", "username": "steevej" }, { "code": "", "text": "Yes, it returns the full document for John Doe, with “phoneDataType: “string”” at the end", "username": "Ewan_Spence" }, { "code": "", "text": "All documents have been inserted the same way - using an API I built in JavaScript.\nMy current working theory is that something in the user’s phone number (it goes without saying that their real phone number is not ‘07123456789’, and that I cannot share their actual number) is triggering some mongo escape sequence that I don’t know about. That’s the only thing I can think that could be causing this.", "username": "Ewan_Spence" }, { "code": "{ $addFields: { phoneDataType: { $type: \"$phoneNo\" } } }", "text": "One thing to check is for leading or trailing white spaces which sometimes are carried with cut-n-paste without being obvious.Quotes are also problematic with cut-n-paste depending from where you cut and where you paste. If you look at the post in this thread sometimes the quotes are the modern matching quotes as inwith “phoneDataType: “string”” at the endor are the traditional quotes as in{ $addFields: { phoneDataType: { $type: \"$phoneNo\" } } }I have also seen issues hard to visualize with uppercase o (O) and zero (0). Some fonts also make one (1) hard to distinguish from lowercase l.One last thing I would check is the presence of an index with a specific collation. Partial indexes can also exhibit funny behaviour but since you find with phoneNo only, that should not be the case.", "username": "steevej" }, { "code": "", "text": "The first two issues I’m confident are not what is causing the problem - there are no white-spaces, nor copy-pastes, nor quotations in my queries.\nThe final point I’m not sure I understand, how would I go about checking that?", "username": "Ewan_Spence" }, { "code": "", "text": "You can get the indexes of your collection with https://docs.mongodb.com/manual/reference/method/db.collection.getIndexes/.Any indexes with particularities that might affects result will stand out compared to the others. 
I am not sure how they would look like but usually the output is verbose enough.", "username": "steevej" }, { "code": "[\n\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"_id\" : 1\n\t\t},\n\t\t\"name\" : \"_id_\",\n\t\t\"ns\" : \"lktdb.callers\"\n\t}\n]\n", "text": "The only output I get from getIndexes isWhich I assume is normal.", "username": "Ewan_Spence" }, { "code": "", "text": "I query using phone number, it returns nothing. This is only occurring for one (out of ~half a dozen) userPlease post a document which you could query without the problem.", "username": "Prasad_Saya" }, { "code": "", "text": "The issue with this is that I’m querying phone numbers, and I don’t want to post that sensitive information online. The example I’m using in the question (John Doe, 07123456789) is not the real document, but I do have a document in the collection under the name John Doe (made for testing purposes) that I can query successfully.", "username": "Ewan_Spence" }, { "code": "", "text": "Which I assume is normal.Totally normal. You may exclude my index theory.", "username": "steevej" }, { "code": "", "text": "Still don’t have an answer for this - very perplexed as to what’s going on. The app is continuing to work for all other users, but the phone number in question still returns nothing.", "username": "Ewan_Spence" }, { "code": "", "text": "The only last thing I could think of is that there is some non-visible characters like a backspace inside the phone number that are not copied with cut-n-paste. To see any non-visible characters you have to print the phone number in hexadecimal. The following should work:", "username": "steevej" }, { "code": "", "text": "When I query my database (stored in Atlas) using a user’s full name, it returns the expected result, but when I query using phone number, it returns nothing. This is only occurring for one (out of ~half a dozen) user.If the issue is with some documents only, check what is the source of the document creation. Are there multiple sources of creating documents? What are these processes or applications?Are these trouble documents still being created with new data in the collection? If not, you can think about how to fix the existing “bad” data.", "username": "Prasad_Saya" } ]
Document in Collection doesn't show searching for Phone Number
2020-11-03T10:49:02.220Z
Document in Collection doesn’t show searching for Phone Number
5,471
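
The last reply in the thread above stops short of the actual snippet, so here is one hedged way to do the hexadecimal check in the mongo shell; the field and name are the placeholders used earlier in the thread.

    // Print each character of the stored phoneNo as a hex code point so that
    // invisible characters (zero-width spaces, NBSP, backspace, etc.) stand out.
    db.callers.find({ fullName: "John Doe" }).forEach(function (doc) {
      var hex = doc.phoneNo.split("").map(function (c) {
        return c.charCodeAt(0).toString(16);
      }).join(" ");
      print(doc.phoneNo + " (" + doc.phoneNo.length + " chars) -> " + hex);
    })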
null
[ "devops" ]
[ { "code": "", "text": "Hi all, I have everything setup, i have a no-ip account, setup a windows mongodb server, which works well in my local network.Now i want to allow remote access to my local mongodb server using a no-ip account …i have configured port-forwarding on my router to my local windows ip address.the windows server has ip binding to 0.0.0.0, and i allowed port 27017 through the windows firewall.then in my connection string i am specifying the domain name provided by no-ip something like mydomain:27017i expect that outside connections to this mydomain will be routed to my internal windows ip, and then the windows mongodb server should be able to respond back.To reiterate, the server is working fine on the local network, I just can’t seem to get it to work remotely with a DDNS domain name despite port forwarding and all…Hopefully someone could help me out…", "username": "Shawn_Law" }, { "code": "bindIPdb.serverCmdLineOpts().parsedmongo", "text": "Welcome to the community @Shawn_Law!Before exposing your MongoDB deployment to the public internet using port forwarding, I strongly recommend reviewing the MongoDB Security Checklist and implementing essential security measures including enabling access control, configuring authentication, and enabling TLS/SSL network encryption.then in my connection string i am specifying the domain name provided by no-ip something like mydomain:27017i expect that outside connections to this mydomain will be routed to my internal windows ip, and then the windows mongodb server should be able to respond back.The setup you’ve described should work as long as:The external DNS you’ve configured with no-ip is pointing at the public IP of your router. Check with what's my ip at DuckDuckGo.Your ISP or network provider isn’t blocking inbound connections to the port you plan to use.Your router configuration is set to forward the external port to the correct internal IP and port.The MongoDB deployment on your internal IP is listening on the specified port and IP address. Check that your custom bindIP configuration appears in the output of db.serverCmdLineOpts().parsed in the mongo shell.The computer hosting your MongoDB allows connections from the router’s internal IP (or any IP).You are connecting to your router from an external host.I suspect the last requirement may be the issue: try connecting from an external host to confirm. Hosts on the internal network should use an internal host name.If all of these prerequisites seem fine and you’re still having trouble, please confirm the driver/tool version you are trying to connect with, and the specific error message returned.If you’re just looking to set up a deployment for development or learning purposes, I would also consider using MongoDB Atlas’ free tier as a faster path for setting up a secure public cluster.Regards,\nStennie", "username": "Stennie_X" } ]
Help with Self-hosting mongodb + DDNS remote access
2020-11-11T02:29:59.692Z
Help with Self-hosting mongodb + DDNS remote access
3,594
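For the DDNS thread above, the fourth prerequisite (mongod actually listening on the forwarded address) can be checked from the mongo shell on the host itself; the 0.0.0.0 value is what the poster said they configured:

    // the parsed options reflect whatever was set via the config file or command line
    db.serverCmdLineOpts().parsed.net
    // expected to include something like: { "bindIp" : "0.0.0.0", "port" : 27017 }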
null
[ "monitoring" ]
[ { "code": "", "text": "Hello,[Background]\nWe have recently started using MongoDB 4.4.1 after using MongoDB 2.4.6 for a while. One of the changes of MongoDB 4.4.1 is the log format being changed to JSON format. This is a problem for us because we have a log monitoring tool that we used when we were using MongoDB 2.4.6 and it does not work for MongoDB 4.4.1 because the log format has changed.[Question]\nIs there a way to change the new log format of MongoDB 4.4.1 to the old log format by changing the log settings?", "username": "Hiroki_Harigai" }, { "code": "jqjq", "text": "Welcome to the community @Hiroki_Harigai!Is there a way to change the new log format of MongoDB 4.4.1 to the old log format by changing the log settings?The only supported server logging format in MongoDB 4.4 deployments is the new JSON structured logging format.However, a benefit of this format is that you can use standard JSON libraries and utilities like jq (a command-line JSON processor) to parse and manipulate log output. The MongoDB documentation includes some examples of jq usage.Unfortunately older tools will require updates to support this format, or some transformation of relevant log lines or information.What log monitoring tool are you using?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hello Stennie,Thank you for your reply.We are using a perl script that we made ourselves to monitor logs.\nSo we need to fix the perl script to support the new format.Or do you have any OSS monitoring tools that support the new JSON log format you could recommend?Thanks", "username": "Hiroki_Harigai" }, { "code": "serverStatus", "text": "Or do you have any OSS monitoring tools that support the new JSON log format you could recommend?Hi,I don’t have any top of mind recommendations for OSS log-based alerts, but the new JSON format makes integration much easier than the previous bespoke logging. Common performance problems and alerts are usually identified via aggregate metrics from serverStatus and similar commands (see: Cloud Manager: Database Commands Used by Monitoring). I use log-based analysis for diagnostic investigations rather than real-time alerts.Since you already have a custom script to monitor conditions of interest, I suspect the fastest path would be to update your checks with modern equivalents.What sort of log message conditions are you alerting on?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hello,Thank you for your reply.We send an alert if an log output consists of a certain string that might relate to an error.\nIt seems like the main reason why our monitoring script is not working anymore is because the ctime timestamp format is no longer supported.\nWe will fix the script so it supports JSON format and the new timestamp format.Thanks for your support Thanks", "username": "Hiroki_Harigai" } ]
Changing new log format of MongoDB 4.4.1 to old log format
2020-11-12T04:42:55.533Z
Changing new log format of MongoDB 4.4.1 to old log format
4,473
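A small illustration of the jq approach Stennie mentions in the log-format thread above; the field names t, s, c, and msg are part of the 4.4 structured-log schema, while the log path is an assumption:

    # keep only warning and error entries, flattened to one compact line each
    jq -c 'select(.s == "W" or .s == "E") | {t: .t["$date"], sev: .s, comp: .c, msg: .msg}' /var/log/mongodb/mongod.log

A monitoring script that previously matched ctime-prefixed lines can often be reduced to a jq filter like this feeding the existing alerting logic.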
null
[ "node-js", "connecting" ]
[ { "code": "2020-11-12T14:52:31.769-0500 I NETWORK [conn181] received client metadata from 127.0.0.1:49487 conn181: { driver: { name: \"nodejs\", version: \"3.5.7\" }, os: { type: \"Darwin\", name: \"darwin\", architecture: \"x64\", version: \"19.6.0\" }, platform: \"'Node.js v11.0.0, LE (unified)\" }\n2020-11-12T14:52:31.770-0500 I ACCESS [conn181] SASL SCRAM-SHA-1 authentication failed for badUser on test_db from client 127.0.0.1:49487 ; UserNotFound: Could not find user badUser@test_db\n2020-11-12T14:52:31.770-0500 I NETWORK [conn181] end connection 127.0.0.1:49487 (0 connections now open)\nAuthentication failed MongoServerSelectionError: Authentication failed.var auth = { user:encodeURIComponent(req.body.user), password:encodeURIComponent(req.body.pass) }\nvar clientOptions = { authSource:dbName, auth:auth, authMechanism:'DEFAULT', useUnifiedTopology:true}\nclient = new MongoClient(process.env.MONGO_SERVER_URI, clientOptions)\n\nclient.connect((err, client) => {\n if (err) {\n res.status(401).send(JSON.stringify({err:`Authentication failed`}))\n return\n } \n}\n", "text": "I’m using an express API in node for mongodb access. My auth route works fine when the user/pass is correct. But when the user/pass is incorrect, the mongod instance issues this message 60 times:…then my node/express API finally returnsAuthentication failed MongoServerSelectionError: Authentication failed.This is correct, but takes about 20 seconds to complete. Therefore the user’s time is wasted waiting for the ‘incorrect user/pass’ message.Here is the connect code in node:Why is mongodb (or MongoClient) taking so long to reject the login?", "username": "Geoff_Marshall" }, { "code": "", "text": "http://mongodb.github.io/node-mongodb-native/3.6/api/MongoClient.html#.connectreconnectTries \tnumber \t30 \toptional Server attempt to reconnect #times", "username": "Jack_Woehr" }, { "code": "reconnectTriesuseUnifiedTopology:trueuseUnifiedTopology:true", "text": "reconnectTries not allowed when useUnifiedTopology:trueso I removed useUnifiedTopology:true and bad logins are rejected immediately. Any idea what is going on?", "username": "Geoff_Marshall" }, { "code": "serverSelectionTimeoutMS", "text": "Looks like Unified Topology is still a work in progress.The only hopeful suggestion I’ve found in 1/2 hour of Googling is serverSelectionTimeoutMS (which is allowed with UT) which may improve your UX.", "username": "Jack_Woehr" } ]
Incorrect user/pass rejection takes a long time from node/express/mongodb
2020-11-15T19:12:04.690Z
Incorrect user/pass rejection takes a long time from node/express/mongodb
2,456
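A hedged sketch of the serverSelectionTimeoutMS suggestion from the thread above, applied to the original connect code; the 3000 ms figure is an arbitrary example (the driver default is 30000 ms):

    var clientOptions = {
        authSource: dbName,
        auth: auth,
        useUnifiedTopology: true,
        serverSelectionTimeoutMS: 3000   // fail fast instead of retrying for ~30 s
    };
    client = new MongoClient(process.env.MONGO_SERVER_URI, clientOptions);
    client.connect((err, client) => {
        if (err) {
            res.status(401).send(JSON.stringify({ err: "Authentication failed" }));
            return;
        }
    });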
null
[ "data-modeling" ]
[ { "code": "", "text": "We currently receive 7 billion documents for a month and we haven’t compressed the keys in json document we can come up with shortened key for document. Before doing that we wanted to see if there is a list of best practices to follow in this scenario.", "username": "Rajesh_Devabhaktuni" }, { "code": "", "text": "Keep in mind that wiredtiger is performing block compression for collections and prefix compression for indexes, so the end result may not be as bad as you think.Mongo 4.2 introduces zstd compression which is better in terms of compression and cpu usage.Have a read over https://docs.mongodb.com/manual/core/data-model-operations/index.html#storage-optimization-for-small-documents.", "username": "chris" } ]
Any best practice documentation for saving storage space by compression and modifying json document keys in mongoDB
2020-11-13T18:34:22.050Z
Any best practice documentation for saving storage space by compression and modifying json document keys in mongoDB
3,528
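Related to the compression thread above, a sketch of opting a new collection into the zstd block compressor that chris mentions (MongoDB 4.2+); the collection name is a placeholder and existing collections are not rewritten retroactively:

    db.createCollection("cdr_events", {
        storageEngine: { wiredTiger: { configString: "block_compressor=zstd" } }
    });
    // or set it deployment-wide for new collections in mongod.conf:
    //   storage.wiredTiger.collectionConfig.blockCompressor: zstd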
null
[ "app-services-user-auth" ]
[ { "code": "app.currentUser.linkCredentials", "text": "I’m calling app.currentUser.linkCredentials with Google credentials that are already hooked up to an existing user. I’m getting the error message “a user already exists with the specified provider” (which makes sense because it does). My question is how do I go about linking that existing user to the active user (I then expect to handle all the logic of merging the two realms in my application)? As an alternative, is there a way to check if there is already a user hooked up to that Google account, or do I have to wait for the error to be thrown?", "username": "Peter_Stakoun" }, { "code": "", "text": "You can only link identities, not users. This means that if a user has already authenticated with the Google identity, this identity can no longer be linked to a different user.There’s no way to check if a user with a specified credential exists, so your only option currently is to handle the error.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
linkCredentials with an existing Google user
2020-11-15T03:18:24.882Z
linkCredentials with an existing Google user
3,032
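A rough sketch of handling the error from the linkCredentials thread above, assuming the Realm JS/Web SDK; the exact shape of the Google credential argument and of the thrown error differs between SDK versions, so treat these names as assumptions:

    const credentials = Realm.Credentials.google(googleAuthCode);   // googleAuthCode is hypothetical
    try {
        await app.currentUser.linkCredentials(credentials);
    } catch (err) {
        // "a user already exists with the specified provider"
        // fall back to logging that user in; merging the data of the two users stays application logic
        const existingUser = await app.logIn(credentials);
    }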
null
[]
[ { "code": " { \n city\n state\n cell_site\n calls : [ \n { id \n calling_nbr \n called_nbr \n start_date \n address\n },\n {...},\n ]\n }\n {\n full_name\n phone_number\n }\n", "text": "Hi everyone, I’m new to mongodb and I’m trying to upload a big CSV file stored fictitious call records.\nThe some fields of this CSV file are:ID FULL_NAME CALLING_NBR CALLED_NBR START_DATE CITY ADDRESS STATE CELL_SITEI want to create 2 collection in the database: City and PersonThe relationships are:Precisely, I want a document like this inside the City collection:Where attribute city is the index of the document.\nAnd a document for Person collection like:Where the phone_number is the index of the document.How can I do that ?\nThank you!", "username": "GIANLUCA_CARBONE" }, { "code": "csvpymongo", "text": "I’d do it in Python.", "username": "Jack_Woehr" }, { "code": "db.staging.aggregate([{ $group : { _id : { city : \"$city\",\n state : \"$state\",\n cell_site: \"$call_site\"},\n calls : { $push : { ... }}\n },\n {$project : { ... }},\n {$out : ... }]);\n", "text": "Hi @GIANLUCA_CARBONE,I would recommend you to do this in 2 stages:So for city aggregation I would go with something like :Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Import data from csv and create relationships
2020-11-13T18:36:00.019Z
Import data from csv and create relationships
4,344
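Fleshing out Pavel's two-stage suggestion from the CSV thread above: after loading the CSV into a staging collection (for example with mongoimport --type=csv --headerline --collection staging), the grouping pass could look roughly like this; the uppercase field names mirror the header quoted in the question:

    db.staging.aggregate([
        { $group: {
            _id: { city: "$CITY", state: "$STATE", cell_site: "$CELL_SITE" },
            calls: { $push: {
                id: "$ID", calling_nbr: "$CALLING_NBR", called_nbr: "$CALLED_NBR",
                start_date: "$START_DATE", address: "$ADDRESS"
            } }
        } },
        { $project: { _id: 0, city: "$_id.city", state: "$_id.state",
                      cell_site: "$_id.cell_site", calls: 1 } },
        { $out: "city" }
    ]);

A second, similar pipeline grouping on the phone number would produce the Person collection.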
null
[ "node-js", "connecting" ]
[ { "code": "app.db_uri = \"mongodb://127.0.0.1:27017\";\n\ntypeof (app.MongoClient = require(\"mongodb\").MongoClient);\n\ntypeof ( app.mongo_client = new app.MongoClient(\n app.db_uri, {useUnifiedTopology: true}\n) );\n\n( async function () { with (app) {\n try {\n await mongo_client.connect({\n 'auth.user': \"archive\",\n 'auth.password': db_pw,\n authSource: \"archive\"\n });\n console.log(\"Connected.\");\n } catch (err) {\n console.error(err)\n }\n}})();\n\nwith (app) typeof (self.db = mongo_client.db(\"archive\"));\n\n( async function () { with (app) {\n try {\n self.tst_coll = await db.collection(\"tst\");\n console.log(\"Have handle to collection.\");\n } catch (err) {\n console.error(err)\n }\n}})();\n\n( async function () { with (app) {\n try {\n await tst_coll.bulkWrite(\n [\n { insertOne: { document: {\n black: 'lives',\n matter: ['you', 'bet']\n }}}\n ]\n );\n console.log(\"Wrote.\");\n } catch (err) {\n console.error(err)\n }\n}})();\n", "text": "I’m using the driver in Nodejs without Mongoose or any such extra layers. In the following procedures, everything comes from the manual, and it looks as though I am specifying the right credentials (user, password, and database), but nevertheless, an error message comes back “MongoError: command insert requires authentication”.The same server has another database with its own user, and those are working with software that I installed, but didn’t write. The data about the users in the “admin” database looks parallel between the two of them.Here I specify the username and password in the arguments to the “connect” operation rather than in the connection URI as I have seen in some examples. Does sending them this way simply not work? What else is there to investigate to try to explain why I am getting an error message that mentions authentication?MongoError: command insert requires authentication.", "username": "Jack_WAUGH" }, { "code": "archive", "text": "Are you sure the user was created in the archive namespace? Often the authSouce is admin.I would assume an error would show with your try…catch though vs “Connected.”", "username": "chris" }, { "code": "", "text": "I attempted administration both via the nodejs driver and via the mongo shell. In the latter:", "username": "Jack_WAUGH" }, { "code": "> with (tmp) self.r = archive.getUser( \"archive\", {\n... showCredentials: true,\n... showPrivileges: true,\n... showAuthenticationRestrictions: true\n... /* no filter */\n... 
} )\n{\n\t\"_id\" : \"archive.archive\",\n\t\"userId\" : UUID(\"774f51d9-1a44-4505-8459-5254c767591d\"),\n\t\"user\" : \"archive\",\n\t\"db\" : \"archive\",\n\t\"mechanisms\" : [\n\t\t\"SCRAM-SHA-1\",\n\t\t\"SCRAM-SHA-256\"\n\t],\n\t\"credentials\" : {\n\t\t\"SCRAM-SHA-1\" : {\n\t\t\t\"iterationCount\" : 10000,\n\t\t\t\"salt\" : \"ofZDQzZpxlNuIaLZBCgZcA==\",\n\t\t\t\"storedKey\" : \"bnKjAub+8gK8abOTk2pv1xZCp4c=\",\n\t\t\t\"serverKey\" : \"2iYaWdHyR5oqpkb8oGxhRcXWgAA=\"\n\t\t},\n\t\t\"SCRAM-SHA-256\" : {\n\t\t\t\"iterationCount\" : 15000,\n\t\t\t\"salt\" : \"tQaoqzDL+7z7Lwm9ueAuqDqMWrMEuoq0n8+KKg==\",\n\t\t\t\"storedKey\" : \"2E+qP66fRuIdluy+5AV4p+cvo7HhhjMR5H3DASL+258=\",\n\t\t\t\"serverKey\" : \"cw3hd4SpOJw7QbGYBAWIbB6bKWvP0V/ArSxKXJfd7oY=\"\n\t\t}\n\t},\n\t\"customData\" : {\n\t\t\n\t},\n\t\"roles\" : [\n\t\t{\n\t\t\t\"role\" : \"readWrite\",\n\t\t\t\"db\" : \"archive\"\n\t\t}\n\t],\n\t\"inheritedRoles\" : [\n\t\t{\n\t\t\t\"role\" : \"readWrite\",\n\t\t\t\"db\" : \"archive\"\n\t\t}\n\t],\n\t\"inheritedPrivileges\" : [\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"archive\",\n\t\t\t\t\"collection\" : \"\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"changeStream\",\n\t\t\t\t\"collStats\",\n\t\t\t\t\"convertToCapped\",\n\t\t\t\t\"createCollection\",\n\t\t\t\t\"createIndex\",\n\t\t\t\t\"dbHash\",\n\t\t\t\t\"dbStats\",\n\t\t\t\t\"dropCollection\",\n\t\t\t\t\"dropIndex\",\n\t\t\t\t\"emptycapped\",\n\t\t\t\t\"find\",\n\t\t\t\t\"insert\",\n\t\t\t\t\"killCursors\",\n\t\t\t\t\"listCollections\",\n\t\t\t\t\"listIndexes\",\n\t\t\t\t\"planCacheRead\",\n\t\t\t\t\"remove\",\n\t\t\t\t\"renameCollectionSameDB\",\n\t\t\t\t\"update\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"archive\",\n\t\t\t\t\"collection\" : \"system.indexes\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"changeStream\",\n\t\t\t\t\"collStats\",\n\t\t\t\t\"dbHash\",\n\t\t\t\t\"dbStats\",\n\t\t\t\t\"find\",\n\t\t\t\t\"killCursors\",\n\t\t\t\t\"listCollections\",\n\t\t\t\t\"listIndexes\",\n\t\t\t\t\"planCacheRead\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"archive\",\n\t\t\t\t\"collection\" : \"system.js\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"changeStream\",\n\t\t\t\t\"collStats\",\n\t\t\t\t\"convertToCapped\",\n\t\t\t\t\"createCollection\",\n\t\t\t\t\"createIndex\",\n\t\t\t\t\"dbHash\",\n\t\t\t\t\"dbStats\",\n\t\t\t\t\"dropCollection\",\n\t\t\t\t\"dropIndex\",\n\t\t\t\t\"emptycapped\",\n\t\t\t\t\"find\",\n\t\t\t\t\"insert\",\n\t\t\t\t\"killCursors\",\n\t\t\t\t\"listCollections\",\n\t\t\t\t\"listIndexes\",\n\t\t\t\t\"planCacheRead\",\n\t\t\t\t\"remove\",\n\t\t\t\t\"renameCollectionSameDB\",\n\t\t\t\t\"update\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"resource\" : {\n\t\t\t\t\"db\" : \"archive\",\n\t\t\t\t\"collection\" : \"system.namespaces\"\n\t\t\t},\n\t\t\t\"actions\" : [\n\t\t\t\t\"changeStream\",\n\t\t\t\t\"collStats\",\n\t\t\t\t\"dbHash\",\n\t\t\t\t\"dbStats\",\n\t\t\t\t\"find\",\n\t\t\t\t\"killCursors\",\n\t\t\t\t\"listCollections\",\n\t\t\t\t\"listIndexes\",\n\t\t\t\t\"planCacheRead\"\n\t\t\t]\n\t\t}\n\t],\n\t\"inheritedAuthenticationRestrictions\" : [ ],\n\t\"authenticationRestrictions\" : [ ]\n}\n", "text": "In the mongo shell:", "username": "Jack_WAUGH" }, { "code": "{\n \"auth\":{\n \"user\":\"archive\",\n \"password\":\"whatever it is\"\n },\n \"authSource\": \"archive\"\n}\n", "text": "Try passing the auth options as a document vs dot notation, worked for me experimenting with node REPL:", "username": "chris" }, { "code": "", "text": "I tried the breakdown you suggested, 
but got the same result as before.", "username": "Jack_WAUGH" }, { "code": "$ mongod --version\ndb version v4.0.21\ngit version: 3f68a848c68e993769589dc18e657728921d8367\nOpenSSL version: OpenSSL 1.1.1 11 Sep 2018\nallocator: tcmalloc\nmodules: none\nbuild environment:\n distmod: ubuntu1804\n distarch: x86_64\n target_arch: x86_64\n\n", "text": "I have mongodb driver 3.6.3, nodejs12.18.3.", "username": "Jack_WAUGH" }, { "code": "", "text": "Update: I changed the password so it would not need percent encoding to be included in the URI. Then I moved the username, password, and default database for authentication into the URI. That way, it worked. Therefore, there was nothing wrong with the administrative setup.", "username": "Jack_WAUGH" }, { "code": "", "text": "worked for me experimenting with node REPLWhat node, what mongodb, and what mongod?", "username": "Jack_WAUGH" }, { "code": "", "text": "node v15.2.0 , mongodb driver 3.6.3, mongo 4.4I am a bit busy now, but I might have time later to replicate and then try with a setup similar to yours.", "username": "chris" }, { "code": "", "text": "Thanks; I think you need not imitate my setup; I will either try yours or if I see that your versions are later than mine, I may just assume someone solved it in them.Comparing, I see that we have the same driver, but you have a later DBMS. I doubt whether the nodejs matters.I actually control two VMs, one for the app I am working on and another for my personal use. I can easily experiment with different DBMS versions on the latter and it won’t put any risk on the app.", "username": "Jack_WAUGH" }, { "code": "", "text": "@Jack_WAUGH\nSo the problem was presumably for the password not being ascii encoded, right?May I ask what was the character and how where you encoding it?", "username": "santimir" }, { "code": "", "text": "No, I’m thinking that the problem is linked to my passing the username and password as arguments rather than in the URI. Note that in my original example, the URI only specified the host and port.", "username": "Jack_WAUGH" }, { "code": "use archive; db.auth(username, password)", "text": "I see, thanks. So if you tried use archive; db.auth(username, password) it did work? (sorry if I missed smth).", "username": "santimir" }, { "code": "use archivemongomongo_client.connect(...)", "text": "use archive looks like a mongo shell thing. I am doing all the user operations from nodejs (with just the straight driver, no Mongoose).What worked was encoding the username and password into the connection URI instead of passing them with the mongo_client.connect(...) call. [edit] I assume that when they are passed this way, no encoding is necessary. In any event, the original password I tried had a right curly\nbrace in it (\"}\") and otherwise only digits and Latin letters.", "username": "Jack_WAUGH" } ]
Why Can't I Authenticate?
2020-11-11T23:21:58.413Z
Why Can’t I Authenticate?
14,725
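The resolution of the authentication thread above, spelled out as a small sketch: carry the credentials and authSource in the connection URI and percent-encode anything reserved (variable names follow the original snippet):

    const user = encodeURIComponent("archive");
    const pass = encodeURIComponent(db_pw);   // db_pw as in the original code
    const uri  = `mongodb://${user}:${pass}@127.0.0.1:27017/?authSource=archive`;
    const client = new MongoClient(uri, { useUnifiedTopology: true });
    await client.connect();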
null
[ "containers", "configuration" ]
[ { "code": "", "text": "I am getting this error on the NFSv3 filesystem as wellsudo docker run -it -v /mnt/mongo:/data/db --name mongodb e6fa3383f923\n2020-11-12T09:09:15.814+0000 I STORAGE [initandlisten] exception in initAndListen: Location28596: Unable to determine status of lock file in the data directory /data/db: boost::filesystem::status: Permission denied: “/data/db/mongod.lock”, terminating[root@init78a-36 ~]# mount | grep /mnt/mongo\n10.20.20.3:/mongo on /mnt/mongo type nfs (rw,noatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.20.20.3,mountvers=3,mountport=2049,mountproto=udp,local_lock=all,addr=10.20.20.3)", "username": "dawood_MUNNA" }, { "code": "", "text": "@chris\n[root@init78a-36 ~]# ls -dZ /mnt/mongodb\ndrwxr-xr-x. mongod mongod system_u:object_r:nfs_t:s0 /mnt/mongodbChanged the filesystem to be writeable by mongod but still, I get the below error{“t”:{\"$date\":“2020-11-12T17:35:40.193+00:00”},“s”:“E”, “c”:“STORAGE”, “id”:20557, “ctx”:“initandlisten”,“msg”:“DBException in initAndListen, terminating”,“attr”:{“error”:“Location28596: Unable to determine status of lock file in the data directory /data/db: boost::filesystem::status: Permission denied: “/data/db/mongod.lock””}}", "username": "dawood_MUNNA" }, { "code": "dbPath", "text": "Hi @dawood_MUNNA,There are two things to consider in your deployment:Please also take notice of this warning: https://docs.mongodb.com/manual/administration/production-checklist-operations/#filesystemAvoid using NFS drives for your dbPath . Using NFS drives can result in degraded and unstable performance. See: Remote Filesystems for more information.", "username": "chris" }, { "code": "", "text": "@chris would this be applicable for docker containers as well?\nMy deployment is MongoDB on - Docker - NFS storage.[root@init78a-36 ~]# mount | grep /mnt/mong\n10.20.20.3:/mong on /mnt/mong type nfs (rw,noatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.20.20.3,mountvers=3,mountport=2049,mountproto=udp,local_lock=all,addr=10.20.20.3)sudo docker run -it -v /mnt/mong/:/data/db --name mongonew1 ba0c2ff8d362\n{“t”:{\"$date\":“2020-11-13T12:05:24.738+00:00”},“s”:“E”, “c”:“STORAGE”, “id”:20557, “ctx”:“initandlisten”,“msg”:“DBException in initAndListen, terminating”,“attr”:{“error”:“Location28596: Unable to determine status of lock file in the data directory /data/db: boost::filesystem::status: Permission denied: “/data/db/mongod.lock””}}", "username": "dawood_MUNNA" }, { "code": "", "text": "Sure. It might be that the required security context is docker instead of mongodb. But all the rest of the remote filesystem and nfs verbiage apply.", "username": "chris" } ]
Error using NFSv3 mount point: Unable to determine status of lock file in the data directory
2020-11-12T09:37:45.264Z
Error using NFSv3 mount point: Unable to determine status of lock file in the data directory
6,212
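For the lock-file thread above, the permission fix chris describes would look roughly like this, assuming the NFS export actually allows the ownership change (root_squash settings on the server can prevent it); the uid/gid of 999 is what the official Docker image runs mongod as:

    sudo chown -R 999:999 /mnt/mongo
    sudo docker run -it -v /mnt/mongo:/data/db --name mongodb mongo:4.4

Even with permissions sorted out, the production checklist quoted above still recommends against NFS for the dbPath.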
null
[ "upgrading" ]
[ { "code": "", "text": "Hi All !!\nHope somebody can give me a clue about the better way to solve a problem.\nCentOS 7, standalone mongoDB server, initially 3.6.1.\nI’m trying to migrate that into the latest and the procedure suggest to proceed gradually.\nI did update without problems from 3.6.1 to 3.6.20 and now I’m trying to migrate from 3.6.20 to 4.0.20.\nI did install the 4.0.20 binaries and pointed it to the same area where the data were stored.\nWhen I try to start it I end up with :ERROR: child process failed, exited with “error number 62”Looking here (https://docs.mongodb.com/manual/reference/exit-codes/) says :62 Returned by mongod if the datafiles in --dbpath are incompatible with the version of mongod currently running.There are some procedure to follow to fix the problem ?\nI’m searching but so far I didn’t find a clear answer or a procedure I can follow to fix this problem or also how to have more details about the nature of the incompatibility.Thanks for any help !\nSTeve", "username": "Stefano_Bodini" }, { "code": "", "text": "Have you checked what was the previous Compatibility version on 3.6.1 & 3.6.20? before the upgrade.\nThis this setup running on 3.6 or migrated from lower version?db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )", "username": "Nagarajan_Palaniappa" }, { "code": "", "text": "It seems I did find the problem.\nI did re-run all the migrating procedure, the compatibility as before was set to 3.6 then I did restart the server.\nI noticed that the old 3.6 was still running somehow, I did remove it but using an automatic procedure.\nI did that manually this time, i.e. removing the 3.6.20, verified that only the 4.0.20 was present on the server and the 4.0.20 finally started without complaining !!Thanks !", "username": "Stefano_Bodini" }, { "code": "", "text": "Hi !!!\nThe compatibility version is set for 3.6 as per instructions.\nI don’t know what version of DB was used when everything was created.\nI started from the 3.6.1, migrated to the 3.6.20 (still everything working) and then I started the migration for the 4.0.20.\nThe first time I checked the compatibility was set to 3.4 so, per instructions, I did set that to 3.6 while still running the 3.6.20.\nThen removed the 3.6.20 and installed the 4.0.20 and I ended up with the 62.\nThanks", "username": "Stefano_Bodini" } ]
Migrating from 3.6.20 to 4.0.20
2020-11-12T22:38:19.526Z
Migrating from 3.6.20 to 4.0.20
2,490
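The compatibility-version commands relevant to the migration thread above, run in the mongo shell against the deployment being upgraded:

    // while still on 3.6.x, before swapping in the 4.0 binaries
    db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
    db.adminCommand({ setFeatureCompatibilityVersion: "3.6" })
    // once 4.0.20 has been running cleanly for a while
    db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })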
https://www.mongodb.com/…4_2_1024x512.png
[]
[ { "code": "", "text": "Hi there,Apologies for such a simple question: does Mongo allow write concern with in-memory storage for sharded replica sets; for example “write concern: majority”?I think the answer is “yes” since “writeConcernMajorityJournalDefault = false” must be set, in which case, “MongoDB acknowledges the write operation after a majority of the voting members have applied the operation in memory.”Additionally journaling doesn’t apply, so the application must either not include “j” (journaling) in its write operation, or if it does specify, it must be set to “j=false”:Thank you!", "username": "JEC" }, { "code": "", "text": "Hi, any answers or thoughts or other things to consider?I did some brief experimentation with an in-memory 3-member replica set and it accepted “w: majority” without complaint. But hopefully it isn’t ignoring the request. ", "username": "JEC" }, { "code": "writeConcernMajorityJournalDefaultfalsemajority", "text": "Hi @JEC welcome to the community!I believe your questions are answered in the Durability Section of the In-Memory Storage Engine page.As you mentioned (and also mentioned in the linked page above), if any voting member of a replica set uses the in-memory storage engine, you must set writeConcernMajorityJournalDefault to false.The write concern setting will still be honoured by the replica set. However, majority write operations could possibly roll back in the event of a transient loss (e.g. crash and restart) of a majority of nodes in a given replica set. In a sense, since nothing is durable in a replica set using mostly in-memory storage engine, majority write is sort of meaningless. This is because the goal of majority-acknowledged write is to prevent rollbacks.For more examples and use cases, please see the linked page above.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks so much Kevin. Yes, understood about potential roll back implications.", "username": "JEC" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Write concern behavior with in-memory storage?
2020-11-09T17:56:07.517Z
Write concern behavior with in-memory storage?
2,017
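A short sketch of the settings discussed in the in-memory thread above; collection and field names are placeholders:

    // required when any voting member runs the in-memory storage engine
    cfg = rs.conf();
    cfg.writeConcernMajorityJournalDefault = false;
    rs.reconfig(cfg);

    // majority acknowledgement still works; j must be false or omitted since there is no journal
    db.events.insertOne(
        { ts: new Date(), value: 42 },
        { writeConcern: { w: "majority", j: false } }
    );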
null
[ "atlas-device-sync" ]
[ { "code": "const config = {\n schema: [MySchema],\n sync: {\n user: user,\n partitionValue: partitionValue,\n },\n };\n", "text": "Hi AllI am working on a react native application with Realm Sync. I have a scenario where the partitionkey value in the collection is all different. 61 documents and 61 values of partition key. I only have to sync 5-10 documents in client application based on partitionkey value . As per the docs, only 1 partition key value can set in config to sync that specific data:Can someone help me in understanding how to send multiple partitionValue. If it is not possible what is the other way of doing such sync.", "username": "Saba_Siddiqui" }, { "code": "", "text": "I have the exact same question!", "username": "Aurelio_Petrone" }, { "code": "", "text": "@Saba_Siddiqui You can only open a realm with a single partitionKey value - however, you can open multiple realms in your client side app - each with a different partitionKey value", "username": "Ian_Ward" }, { "code": " {\n _id:\"order1\",\n _partition:\"merchant=MER1\",\n merchantId: \"MER1\",\n customer: {name:\"CUST1\", phone:1234},\n price:100,\n items:[......]\n } \n", "text": "@Ian_Ward: I have an “order” collection where document is like :I want this document to available on the customer device(CUST1) and merchant device (MER1).The current partition value creates a realm for all Orders belonging to a merchant, but I still don’t understand how to keep the order placed by a customer in sync on their devices. Please suggest.", "username": "Surender_Kumar" }, { "code": "", "text": "I want this document to available on the customer device(CUST1) and merchant device (MER1).I have a similar question but I think we need more clarity on the use case. For this specific question one obvious solution is to open a realm on both the client and merchant devices using the partition “merchant=MER1”. However, the downside is that every client (I assume there’s more than one) has access to all other clients orders.Can we assume that each client should only be able to see their own orders and the merchant should be able to all clients orders?", "username": "Jay" }, { "code": "", "text": "@Jay\nCan we assume that each client should only be able to see their own orders and the merchant should be able to all client’s orders?So the requirement is that all the customers should see the orders placed by them and all the merchants should have the orders received by them from different customers.If I follow the current approach of partitioning “merchant=MER1”, then the realm on the customer device would also get the other orders not placed by him.", "username": "Surender_Kumar" }, { "code": "", "text": "I just wrote an article on Medium about how to implement multiple partition key values for a chat program on MongoDB Realm. All of the code is open source; feel free to download and experiment. We are in the process of writing a React-Native version of this as well.I grew up in Paris France and went to French high-school there. There is a lot I loved about the culture, but one of the most frustrating…\nReading time: 9 min read\n", "username": "Richard_Krueger" }, { "code": "", "text": "That ‘chat use-case’ is only fulfilling the 1 to 1 scenario. 
But when, for example, a merchant wants to sync ALL his orders and a customer should ONLY see his orders, it won’t work, or am I wrong?\n(And we don’t want to open 1 realm for each order)\nLet’s consider the partition keys would be like \"merchantID_customerID\":\nThat would be kind of like using wildcards in partition values when opening a realm… is that possible?\nOr maybe there is another way to solve this? ", "username": "Ebrima_Ieigh" }, { "code": "", "text": "Ebrima,\nSo first off, there is no wildcard capability in Realm partition values. You basically open a Realm with a specific partition key value, and read/write to it. But your instincts are correct, you probably want to organize your partitions as merchantID_customerID - where both a merchant and a customer have read/write permissions on this partition key value. Use the technique involving custom user data and sync partitions that I outlined in the paper to accomplish this. The downside with this architecture is that a merchant client will have to open all the realms for its customers, and a customer client will have to open all the realms for its merchants. This list of merchants and customers can probably be kept in the user realms for each merchant and customer respectively.\nI hope this was useful.", "username": "Richard_Krueger" } ]
Multiple values of partitionKey
2020-10-06T08:48:11.559Z
Multiple values of partitionKey
4,238
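For the partition-key thread above, Ian's point about opening several realms could look roughly like this in the React Native / JS SDK; the partitionValues array and the schema name are assumptions:

    const partitionValues = ["merchant=MER1", "customer=CUST1"];   // whatever this device needs
    const realms = await Promise.all(
        partitionValues.map((p) =>
            Realm.open({
                schema: [MySchema],
                sync: { user: app.currentUser, partitionValue: p },
            })
        )
    );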
null
[ "installation" ]
[ { "code": "*{\"t\":{\"$date\":\"2020-11-12T17:10:10.293+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}*\n\n*{\"t\":{\"$date\":\"2020-11-12T17:10:10.298+01:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}*\n\n*{\"t\":{\"$date\":\"2020-11-12T17:10:10.299+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}*\n\n*{\"t\":{\"$date\":\"2020-11-12T17:10:10.299+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":30094,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"Cristianos-MacBook-Air.local\"}}*\n\n*{\"t\":{\"$date\":\"2020-11-12T17:10:10.299+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.1\",\"gitVersion\":\"ad91a93a5a31e175f5cbf8c69561e788bbc55ce1\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}*\n\n*{\"t\":{\"$date\":\"2020-11-12T17:10:10.299+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"19.6.0\"}}}*\n\n*{\"t\":{\"$date\":\"2020-11-12T17:10:10.299+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}*\n\n*{\"t\":{\"$date\":\"2020-11-12T17:10:10.300+01:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Permission denied\"}}*\n\n*{\"t\":{\"$date\":\"2020-11-12T17:10:10.300+01:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":919}}*\n\n*{\"t\":{\"$date\":\"2020-11-12T17:10:10.300+01:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}*\n", "text": "Hello, I’m starting to use MongoDB Community server with python…I try to install it on macOS Catalina, but I met some problems… I try to install it various times, I didn’t succeed.his is the error that comes back to me when I type “mongod” in the terminal:After modifying the dpath with the path, “/ System / Volumes / Data / data / db”, obviously having created the folder, “/ data / db”.could you help me, I also accept alternative installations.Thank you.", "username": "Cristiano_battin_i" }, { "code": "msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Permission denied\"}}*", "text": "msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Permission denied\"}}*It is failing with sock file issue.Seeems to be permission issues\nmsg\":“Failed to unlink socket file”,“attr”:{“path”:\"/tmp/mongodb-27017.sock\",“error”:“Permission denied”}}*\nAs which user you started mongod?\nTry to remove the file and start\nMay be it was created with a different user in previous run", "username": "Ramachandra_Tummala" }, { "code": "sudo mongod", "text": "sudo mongod perhaps", 
"username": "Jack_Woehr" }, { "code": "brewbrew servicessudo mongodmongodsudorootmongoddbPathmongod", "text": "Welcome to the community @Cristiano_battin_i!How did you install MongoDB? I recommend following the official guide to Installing MongoDB on macOS via brew.This will set up default directory paths with the expected permissions as well as a brew services definition that will work fine with Catalina.For more information, see related discussion on Installation issues for MongoDB on Mac OS Catalina - #7 by Stennie_X.sudo mongod perhapsThere is no need to run mongod with sudo (or as the root user). Per the general best practice of Principle of least privilege, the mongod process only needs privileges to write to relevant directories including the configured dbPath and log path. Typically there is a specific user & group ID used by the mongod process (and created for you using the official install package).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks, @Stennie , for clarifying.", "username": "Jack_Woehr" } ]
Installation in macOS Catalina problem
2020-11-12T16:56:05.372Z
Installation in macOS Catalina problem
8,570
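The brew route recommended in the Catalina thread above boils down to the documented tap commands; pick the release series you need:

    brew tap mongodb/brew
    brew install mongodb-community@4.4
    brew services start mongodb-community@4.4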
null
[ "sharding", "performance" ]
[ { "code": "", "text": "Performance Issue and uneven sharding.\nHello Team,We are recently experiencing a performance issue on the the mongo cluster, where any query takes up 100% CPU on the cluster and the output is rendered after multiple minutes.\nI have also noted the following error on the mongos instance.e are still 1 deletes from previous migration\" }, ok: 0.0, errmsg: “moveChunk failed to engage TO-shard in the data transfer: can’t accept new chunks because there are still 1 deletes from previous migration” }\n2020-09-20T06:39:22.315+0000 I SHARDING [Balancer] balancer move failed: { cause: { ok: 0.0, errmsg: “can’t accept new chunks because there are still 1 deletes from previous migration” }, ok: 0.0, errmsg: “moveChunk failed to engage TO-shard in the data transfer: can’t accept new chunks because there are still 1 deletes from previous migration” } from: shard0001 to: shard0000 chunk: min: { files_id: ObjectId(‘55fbabfbe4b00ec933a81aa9’) } max: { files_id: ObjectId(‘55fbac2ee4b00ec933a81b4a’) }Environment: Mongo 3.0.4 With 1 Mongos and 3 Mongod instances. AWS Ec2", "username": "Abhinav_Santi1" }, { "code": "", "text": "Welcome to the community @Abhinav_Santi1!MongoDB 3.0.4 is more than 5 years old and the 3.0.x release series reached End of Life (EOL) in February 2018. There have been significant improvements in product architecture and performance since then.I would start by upgrading your deployment to the final 3.0.15 release, as this includes critical fixes and does not introduce any compatibility issues or backward-breaking behaviour changes (see: Release Notes for MongoDB 3.0). I strongly recommend planning and testing the upgrade path to a supported release series (currently 3.6+).moveChunk failed to engage TO-shard in the data transfer: can’t accept new chunks because there are still 1 deletes from previous migration”This log message just indicates a requested destination shard is busy catching up on deletes from a previous migration and not ready to accept a new migration task yet.You’ll have to look into more of your system metrics and activity timeline to understand if (or why) your query performance might be impacted by balancing activity. One possibility is that your deployment has insufficient resources to keep up with your current workload.Have you tried explaining your slow queries to confirm efficient index usage? Queries that are poorly supported by indexes or doing additional processing (regular expressions, JavaScript) are common offenders.Regards,\nStennie", "username": "Stennie_X" } ]
MongoDB Performance Issues
2020-11-12T18:23:45.907Z
MongoDB Performance Issues
2,288
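For the performance thread above, Stennie's suggestion to explain slow queries might start with something like this; the fs.chunks collection name is only a guess based on the files_id shard key in the log, and someFilesId is a hypothetical variable:

    // look for IXSCAN vs COLLSCAN and compare totalDocsExamined with nReturned
    db.fs.chunks.find({ files_id: someFilesId }).explain("executionStats")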
null
[ "containers", "installation", "devops" ]
[ { "code": "", "text": "Hello,I am trying a simple FUSE passthrough filesystem with docker-compose. I start a docker-compose application where one of the application is MongoDB. I point mongo to the mount point of my FUSE filesystem which basically takes all request to my mount point and executes it on another folder. The error that I get is that permission denied regarding the lock file.media-mongodb_1 | {“t”:{\"$date\":“2020-11-09T18:05:47.046+00:00”},“s”:“E”, “c”:“STORAGE”, “id”:20557, “ctx”:“initandlisten”,“msg”:“DBException in initAndListen, terminating”,“attr”:{“error”:“Location28596: Unable to determine status of lock file in the data directory /data/db: boost::filesystem::status: Permission denied: “/data/db/mongod.lock””}}", "username": "Pranav_Bhandari" }, { "code": "", "text": "If you are not using a supported filesystem expect errors or non-optimal performance.In this instance the error looks like a permission error. Changing the filesystem to be writeable by mongod will likely remove this error. The uid and gid for mongod is 999.", "username": "chris" }, { "code": "", "text": "3 posts were split to a new topic: Error using NFSv3 mount point: Unable to determine status of lock file in the data directory", "username": "Stennie_X" } ]
FUSE and MongoDB in docker-compose
2020-11-09T19:26:32.039Z
FUSE and MongoDB in docker-compose
2,958
null
[ "app-services-user-auth", "next-js", "react-js" ]
[ { "code": "", "text": "Hi,I’m a Frontend Developer building a new web application with MongoDB Atlas, MongoDB Realm, Next.js, and React.\nI am reading the documentation for Authentication in MongoDB Realm, and I’m trying to understand whether I need to use Node.js SDK or Web SDK?I’m guessing Web SDK is more suitable for me, but it says: “The Web SDK does not support JavaScript or TypeScript applications written for the Node.js environment”. Can someone clarify how I am going to have Node.js in the backend? I’m confused on the backend part (I’m not a backend developer, as I mentioned above).If I go with Web SDK, I will have to use GraphQL or the MongoDB query language.\nI was planning to use Node.js and Express on the backend. With MongoDB Realm, I can implement Authentication and other features, which makes me question whether I even need Node.js and Express on the backend.Does it mean I don’t have to use Node.js in the backend?\nHow do I talk to the Database then (using Node drivers?).If anyone could explain to me how to best set up my application (especially the backend part) with the tools I mentioned above, that would be helpful. Thank you!", "username": "My_Lokaal" }, { "code": "", "text": "Hi! MongoDB Realm is a backend-as-a-service, so we’ll take care of the backend and deploying your app. You just need to configure parts of your backend using the MongoDB Realm Admin. MongoDB Realm lets you define permissions, handles authentication, and provides SDKs to more easily use these services and access data.I’m guessing Web SDK is more suitable for me, but it says: “The Web SDK does not support JavaScript or TypeScript applications written for the Node.js environmentCorrect, you will need to use the Web SDK here if your application is running on the browser. If you felt like that language is confusing, then we can make sure to clear that up a little more. Next.JS has also provided an example using Realm-webIf I go with Web SDK, I will have to use GraphQL or the MongoDB query language.\nI was planning to use Node.js and Express on the backend.‘’If you use the Web SDK you have 2 options:\na. Use our generated GraphQL API - Once you’ve defined a schema based on what your documents look like, we’ll automatically create a GraphQL API for you that you can use to read and write data to your Atlas cluster\nb. Use the remote Mongo Client - The SDK comes with built in methods to help you read and write data to your Atlas clusterBoth require authentication and will obey the permissions you have set for your application.which makes me question whether I even need Node.js and Express on the backend.You do not need the backend at all.How do I talk to the Database then (using Node drivers?).See above explanation around web SDK and links. Realm gets rid of the need to build your backend/use drivers to read and write to your Database. A small interactive quick-start for our Web SDK might be useful to get up and running here.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Thank you so much for your reply, Sumedha! It makes sense now!", "username": "My_Lokaal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Help picking the right MongoDB tools
2020-11-10T18:58:49.032Z
Help picking the right MongoDB tools
4,333
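A minimal realm-web sketch matching the Web SDK option discussed in the thread above; the app id, database, and collection names are placeholders, and mongodb-atlas is the default name of the linked cluster:

    import * as Realm from "realm-web";

    const app = new Realm.App({ id: "<your-realm-app-id>" });
    const user = await app.logIn(Realm.Credentials.anonymous());
    const mongo = user.mongoClient("mongodb-atlas");
    const items = await mongo.db("mydb").collection("items").find({});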
null
[ "queries" ]
[ { "code": "", "text": "I have 1 million documents each with 256 bytes of bit flags (so 2k bits) used as a custom search filter in a binary field. Running a search for all records with certain bits set (essentially a bitwise operation on all records) is slow, taking tens of seconds compared to a fraction of a second with a relational database. I am wondering if there is a good way of implemeting this kind of data and query in Mongo?The key operation is in algebraic form is as follows - a check that the field in the document has at least all the bits set that are in the query (and it may also have many more set).query & = queryI tried adding an index on the bitwise field, as that should allow this to be an index scan but mongodb doesn’t use the index for bitwise operations - at least it didn’t last time I tried, I haven’t rerun my tests with the latest versions.Some background for those wondering why one would implement something like this. The documents contain complex scientific graph data which needs to be sub-graph matched, which is an NP problem. The bits each represent complex composite properties of the nodes and edges within the graph data, addressing common cases of the underlying NP problem giving a speed up of several orders of magnitude. Filtering on the bits is used as a pre-filter for fetching documents to avoid fetching the entire database.\nFetching everything will also be a not uncommon use case which I assume would be better addressed by adding caching but that is a separate issue. The result is similar to a bloom filter.", "username": "Matthew_Towler" }, { "code": "", "text": "Hi @Matthew_Towler,Bitwise operations cannot utelize indexes well but other parts of the query can.Maybe consider implement a document sagregation in a way that you can fast filter the needed bitmaps to be queried based on other indicate fields and run bitwise operators on the filter data:For example store the bits in array rather than single tiny document sBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "\"$and\": [\n { \"screensWords.0\": { \"$bitsAllSet\": [14, 15, 16, 29, 30] } },\n { \"screensWords.1\": { \"$bitsAllSet\": [0, 3, 4, 7, 11, 12, 13, 26, 27, 28, 29, 30, 31] } },\n { \"screensWords.2\": { \"$bitsAllSet\": [0, 1, 19, 20, 21, 23, 24, 25, 29, 30, 31] } },\n { \"screensWords.3\": { \"$bitsAllSet\": [2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 18] } },\n { \"screensWords.4\": { \"$bitsAllSet\": [5, 8, 14, 16, 17, 25, 26, 27, 28, 29, 31] } },\n { \"screensWords.5\": { \"$bitsAllSet\": [0, 17, 20, 22, 23, 25, 26, 27] } },\n { \"screensWords.6\": { \"$bitsAllSet\": [15, 18, 20, 21, 28] } },\n { \"screensWords.7\": { \"$bitsAllSet\": [1, 3, 9] } },\n { \"screensWords.9\": { \"$bitsAllSet\": [0, 3, 4, 5, 6, 7, 11, 18, 19, 20, 21, 22, 23, 30, 31] } } ]", "text": "Thanks of the suggestions. In this case the use of the bitscreens is the entire query, so pre-filtering additional fields won’t really help in this case.The bits are already in an array of ten integers, the bits being checked using a condition of the following form (apologies if the quotes are not exactly correct, this is a partial copy from some python client code).", "username": "Matthew_Towler" } ]
Efficient implementation of bit screening or a bloom filter
2020-11-12T11:31:57.250Z
Efficient implementation of bit screening or a bloom filter
2,645
null
[ "data-modeling" ]
[ { "code": "", "text": "Suppose a use case would lead to creation of hundreds of thousands of objects having so few as two fields, with some of the fields pointing to other tiny objects. Of course a naive implementation would just map each of these objects to a single MongoDB “document” (I had rather say “record”), and use the MongoDB _id field as the form of reference from object to object. I suspect this would run like molasses in January. Has anyone built and tested an application that groups tiny objects into records? Then how does referencing between them work, and caching?", "username": "Jack_WAUGH" }, { "code": "", "text": "I would start by looking at https://docs.mongodb.com/manual/core/data-model-design/.I think the bucket pattern at Building with Patterns: The Bucket Pattern | MongoDB Blog might be applicable for your use-case.", "username": "steevej" } ]
Many Tiny Objects
2020-11-12T11:23:34.864Z
Many Tiny Objects
1,408
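To make the bucket-pattern link in the thread above concrete, a grouped document might look roughly like this instead of thousands of two-field documents; all names are illustrative only:

    // one "bucket" holding many tiny related objects, bounded in size
    {
        _id: "graph42:chunk0007",
        items: [
            { k: "nodeA", ref: "nodeB" },
            { k: "nodeB", ref: "nodeC" }
        ]
    }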
null
[ "java" ]
[ { "code": "", "text": "Hi all,\nIs the Mongodb JAVA driver based on JDBC ? or is it calling some a Mongodb rest api internally ?[1] - MongoDB JVM DriversThank you,\nDushan", "username": "dushan_Silva" }, { "code": "", "text": "I think the answer is “neither”. Looking at the code it makes pooled socket connections and carries out MongoDB-specific protocol exchanges", "username": "Jack_Woehr" }, { "code": "", "text": "Thank you Jack! that was helpful", "username": "dushan_Silva" }, { "code": "", "text": "Is the Mongodb JAVA driver based on JDBC ? or is it calling some a Mongodb rest api internally ?Hi @dushan_Silva,MongoDB drivers communicate directly with a deployment using the MongoDB Wire Protocol. This is a socket-based, request-response style TCP/IP protocol using MongoDB-specific Database Commands. Driver API and behaviour is based on a set of standard MongoDB Specifications documented on GitHub.JDBC is a higher-level API that was originally designed for tabular databases using SQL. In theory JDBC is database agnostic, with JDBC drivers providing translation between the JDBC API and the destination database protocol. Some of the API assumptions aren’t a great match for MongoDB, but there are several third party JDBC drivers available that try to bridge the gaps.If you’re just getting started with MongoDB, I recommend using the official MongoDB Java Driver directly rather than adding abstraction layers and overhead.MongoDB also has a Connector for BI which provides read-only access to query MongoDB using SQL, but the intended use case is for BI & reporting tools rather than application backends.Can you provide some more context on your question: are you looking for a JDBC driver, just curious about how drivers communicate with the server, or something else?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is the mongodb java driver based on JDBC?
2020-11-11T10:16:29.463Z
Is the mongodb java driver based on JDBC?
4,524
null
[]
[ { "code": "", "text": "Is there a convenient way to reduce the size of a payload by only updating with the diff of two objects?In Javascript I can get the before and after states of an object, I’d like to diff them and only send the update query of that diff.", "username": "Zack_Beyer" }, { "code": "", "text": "Hi @Zack_Beyer welcome to the community!If I don’t misunderstand, you mean to update a document by sending the difference between the old document and the new one instead of having to send the new document as a whole to the server. Is this correct?You can update part of a document using the update method. See the linked page for extensive behaviour documentation and examples. This update method is also available in all officially supported drivers.If this is not what you mean, could you provide some examples?Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "This may not be what you are looking for, but there are some libraries out there at apply the JSON Patch protocol to MongoDB. For example: GitHub - eBay/bsonpatch: A BSON implementation of RFC 6902 to compute the difference between two BSON documentsI cannot vouch for this library, I use a proprietary implementation at work, but the layout appears good.", "username": "Jai_Hirsch" } ]
Update by diff?
2020-11-05T19:12:48.069Z
Update by diff?
3,844
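For the update-by-diff thread above, a naive shallow diff on the client could be turned into a partial update roughly like this (illustration only; nested documents and arrays need a real diff library such as the JSON Patch implementation Jai mentions):

    function diffToUpdate(before, after) {
        const set = {}, unset = {};
        for (const k of Object.keys(after)) {
            if (JSON.stringify(after[k]) !== JSON.stringify(before[k])) set[k] = after[k];
        }
        for (const k of Object.keys(before)) {
            if (!(k in after)) unset[k] = "";
        }
        const update = {};
        if (Object.keys(set).length) update.$set = set;
        if (Object.keys(unset).length) update.$unset = unset;
        return update;
    }

    const update = diffToUpdate(before, after);
    if (Object.keys(update).length) {
        await collection.updateOne({ _id: doc._id }, update);
    }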
null
[ "compass" ]
[ { "code": "", "text": "Hello everyone,I would like to install MongoDB Compass on an Amazon Linux 2 instance but I am having some difficulty.Here are the steps that I took so far and the error message that I am gettingDownloaded the Red Hat Enterprise Linux version of MongoDb Compass (https://docs.mongodb.com/compass/master/install)Ran the command sudo yum install mongodb-compass-1.23.0.x86_64.rpmI am able to get the installer to run but I get the following error message. Public key for mongoldb-compass-1.23.0.x86_64.rpm is not installedIs MongoDb Compass (Red Hat Enterprise Linux) compatible with Amazon Linux 2?If so, how do I download a Public Key for MongoDb Compass and where?Any guidance or assistance on this issue is greatly appreciated. Thank you.", "username": "Gregory_Wiley" }, { "code": "sudo yum install mongodb-compass-1.23.0.x86_64.rpm --nogpgcheck\n", "text": "I don’t believe we sign the Compass RPMs, that’s probably why Amazon Linux is complaining.Can you tryif the RPM policy allows it, that will skip verifying the public key.", "username": "Massimiliano_Marcon" }, { "code": "--nogpgcheck", "text": "As a side note, I just tried to install Compass inside the Amazon Linux 2 Docker image and Compass installed with no issues even without the --nogpgcheck flag. I wonder if you have some stricter security policies.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Hi Massimiliano,Thank you very much for your help on this. I appreciate your quick response. I was able to successfully install MongoDB Compass on my Amazon Linux 2 Workspace. The MongoDB community and customer support has always been extremely knowledgeable, helping a new web developer like me to build cool things with MongoDb.", "username": "Gregory_Wiley" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Installing MongoDb Compass on Amazon Linux 2
2020-11-08T22:52:57.691Z
Installing MongoDb Compass on Amazon Linux 2
4,420
null
[]
[ { "code": "", "text": "I just spent 10 minutes trying to sign back into my account and struggling to get past the captcha. I have a valid and relatively active account. I went through five pages of “find the X” before giving up, refreshing, failing the audio prompt and then finally somehow making it past another round of the picture prompts.Just wanted to suggest turning the captcha setting down a little bit.", "username": "Travis_N_A" }, { "code": "", "text": "Hi @Travis_N_A,Welcome to the MongoDB community! Apologies if the CAPTCHA was a hassle – I will share your feedback with the team that manages our MongoDB Account login & registration services.Our current CAPTCHAs are provided by the Google reCAPTCHA service and I believe we are using a configuration variation which tries to avoid prompting for user interaction where possible.However, reCAPTCHA tends to be more challenging if you are using ad blockers. Can you confirm your browser version and any ad/privacy blocking plugins used?Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi, Brave 1.16.68 on OSX. Thanks!", "username": "Travis_N_A" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Please adjust captcha difficulty
2020-11-01T20:15:09.706Z
Please adjust captcha difficulty
3,639
null
[ "data-modeling", "capacity-planning" ]
[ { "code": "", "text": "Hi,We are building a multi-tenanted system and planning to have a collection per tenant.Is there a maximum number of collections that can exist in a database? we are planning on using MongoDB atlas so therefore I want to know whether this limit applies to that mainly.Thank you,\nDushan", "username": "dushan_Silva" }, { "code": "tenant", "text": "Hey @dushan_Silva,I don’t think it’s a good idea. Why not use just one collection and a field tenant with an index on it to support your queries?One collection = at least one index (_id) and 2 files. One for the index and one for the collection (with the WiredTiger engine). More than 10K collection is usually not a good news for your file system.Having a lot of collections is an anti-pattern:https://www.mongodb.com/article/schema-design-anti-pattern-massive-number-collectionsJust like an array in a document should be bounded, the number of collections should also be limited. 10 collections with 10 documents each isn’t the same as 100 documents in 1 collection. The later is more optimal.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @MaBeuLux88,\nThanks for the reply,\nYes, I do understand the stress on the file system, while having a field called tenant is a valid option I did not think about texting that filed so thanks for that :), I was looking at the [1], which mentions about database per tenant or collection per tenant as a valid option if we are building large scale multi-tenant application. Are the ideas presented in this presentation not applicable in the new engine?[1] - Cloud Architecture: A Comprehensive Guide | MongoDBThanks\nDushan", "username": "dushan_Silva" }, { "code": "", "text": "Hey @dushan_Silva,I just listened to this presentation and even if it’s from 2017, everything in it still applies I think. I didn’t hear anything that was clearly outdated. All the pros and cons Justin mentions for each strategy are still valid.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @dushan_Silva,Each multi-tenant strategy has pros and cons. Targeting large scale multi-tenant use cases might mean having a small number of tenants that are very large, or a large number of tenants that are very small, or anything in-between. My recommendation is to consider growth over time. It’s difficult to assess but things like customer onboarding, operational overhead, and cost per customer are all important considerations.Imagine having a collection per tenant. Onboarding customers becomes cumbersome because collections and indexes must be created appropriately and (ideally) automated. Any changes to indexes would need application to every existing collection per tenant. And there are more considerations. But the additional overhead doesn’t matter much for a small number of tenants.Most organizations want a single collection where sharding can provide scale. I don’t know if that applies here, but most customers succeed with this approach.I hope this helps!Justin", "username": "Justin" }, { "code": "", "text": "Thank you justin your input is much appreciated", "username": "dushan_Silva" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Maximum number of Collections
2020-10-20T10:49:07.012Z
Maximum number of Collections
11,312
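Not from the thread itself: a minimal mongo-shell sketch of the single-collection, tenant-field approach described in the replies above. The collection name (orders), field names (tenantId, createdAt) and tenant value are illustrative assumptions.

    // Every document carries its tenant, and a compound index keeps tenant-scoped queries selective.
    db.orders.createIndex({ tenantId: 1, createdAt: -1 });

    db.orders.insertOne({ tenantId: "acme", item: "widget", createdAt: new Date() });

    // All reads filter on the tenant first, so one customer's queries never touch another's documents.
    db.orders.find({ tenantId: "acme" }).sort({ createdAt: -1 });

One index change then applies to every tenant at once, which is the onboarding and operational advantage mentioned in the replies.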
null
[ "node-js" ]
[ { "code": "", "text": "Usually the examples and advice about administration say to use the mongo shell. Is it possible to do all necessary admin operations via the nodejs driver instead (no Mongoose)? I thought I wanted to use that route because I am going to be programming in that environment anyway and could cut down on the count of environments I have to learn by avoiding the shell.I have mongodb installed on my VM and it works for a \"nodebb\" database with a \"nodebb\" user. I created an additional database \"archive\" with a user \"archive\". While authenticated as a database server administrator, I submit against the database named \"archive\", a command to update the password of the user named \"archive\". A message comes back saying user archive@admin is not found. But \"admin\" is the wrong database for the server to be looking for the user in, because I submitted the command against database \"archive\".MongoError: User archive@admin not found", "username": "Jack_WAUGH" }, { "code": "", "text": "Update: I did changeUserPassword through the mongo shell and it ran without error (addressing the same user and database).", "username": "Jack_WAUGH" }, { "code": "", "text": "Yes, you can do all the admin stuff using node. That’s what the mongo shell is, a node REPL interpreter with some very clever predefines.", "username": "Jack_Woehr" }, { "code": "mongomongors.add<enter>mongors.add()mongomongosh", "text": "Usually the examples and advice about administration say to use the mongo shell. Is it possible to do all necessary admin operations via the nodejs driver instead (no Mongoose)?Hi @Jack_WAUGH,The general answer is that all actions that can be performed via the mongo shell can also be actioned via a driver, since ultimately the shell is just sending database commands to your MongoDB deployment.However, the shell provides helper functions with convenience methods and validation for administrative commands that may not have helpers in the standard driver API. Many administrative commands are run rarely during the lifetime of a deployment (for example, a replica set is only initialised once) or are typically invoked by an administrator rather than a developer or application.You can write equivalent functions in your preferred driver language, but I would recommend using an admin shell or tool for convenient (and confident) updates.For example, creating or updating replica set configurations involves manipulating a configuration document. You can view the implementation for mongo shell helpers such as Replication Methods by invoking a helper function without the parentheses. For example: try running rs.add<enter> in the mongo shell. You can also find this source on GitHub if you’re curious: rs.add().The classic mongo shell is a custom REPL which embeds a JavaScript interpreter, but is not based on Node.There’s also a new MongoDB Shell (aka mongosh) which was introduced in June of this year. This one is a Node REPL, as mentioned by @Jack_Woehr. The new shell (currently available as a Beta release) is designed to be embeddable, extensible, and more developer friendly than the classic shell. 
This shell has already been embedded into MongoDB Compass, MongoDB for VS Code, and other tools including JetBrains’ IDEs for a more streamlined development experience.If you want to automate administrative actions for deployments, tooling like MongoDB Cloud/Ops Manager provides more convenient APis and enables integrations like the MongoDB Enterprise Kubernetes Operator.I thought I wanted to use that route because I am going to be programming in that environment anyway and could cut down on the count of environments I have to learn by avoiding the shell.I think the best approach for your “one environment” goal would be to use an IDE with MongoDB Shell integration (such as VS Code or one of the Jetbrains IDEs). However, if learning shell commands seems daunting or tedious, you may find a GUI like MongoDB Compass a better fit for your admin needs.Regards,\nStennie", "username": "Stennie_X" }, { "code": "updateUser", "text": "I seem to have gotten around my immediate problem of not being able to double check the password for this user, but it’s still puzzling that the updateUser didn’t work.", "username": "Jack_WAUGH" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Newb -- Administration via nodejs
2020-11-11T02:01:37.979Z
Newb &ndash; Administration via nodejs
2,217
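A hedged Node.js sketch, not taken from the thread, of issuing the same administrative command through the driver. The point from the exchange above is that the command has to be run against the database that owns the user ("archive", not "admin"). The connection string, credentials and new password are placeholders.

    const { MongoClient } = require("mongodb");

    async function changeArchivePassword() {
      const client = new MongoClient("mongodb://dbAdmin:secret@localhost:27017");
      await client.connect();
      try {
        // Driver equivalent of running `use archive` and then
        // db.changeUserPassword("archive", "newPassword") in the shell.
        await client.db("archive").command({ updateUser: "archive", pwd: "newPassword" });
      } finally {
        await client.close();
      }
    }

    changeArchivePassword().catch(console.error);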
null
[]
[ { "code": "", "text": "Can i do this one transaction ? Currently i have this\ndb().collection(‘Users’).updateOne({ email: data.email },\n{ $set: insertData },\n{ upsert: true })that works well if the data does not exisit, but if it does exist, then i need to $push : {item : arrayItem }\ninstead of $set insertData", "username": "Rishi_uttam" }, { "code": "db().collection(‘Users’).updateOne({ email: data.email },\n{ $push: { item : arrayItem },\n$setOnInsert : { insertData} },\n{ upsert: true })\n", "text": "Hi @Rishi_uttam,Welcome to MongoDB community!What you probably looking for can be achieved with separating the insert logic to $setOnInsert operator:\nhttps://docs.mongodb.com/manual/reference/operator/update/setOnInsert/This allows you to do $push or $addToSet clause for updates and a different logic for $setOnInsert.Example:Thanks\nPavel", "username": "Pavel_Duchovny" } ]
UpdateOne and Upsert: if the document does not exist, insert different data.
2020-11-10T18:57:19.010Z
UpdateOne and Upsert: if the document does not exist, insert different data.
19,695
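A short sketch that fills in the outline above with concrete placeholder fields (createdAt and status are assumptions, not part of the original insertData): $push runs on every matching update, while $setOnInsert only applies when the upsert actually inserts a new document. On an insert the email from the filter is added automatically, so it does not need to be repeated, and $setOnInsert must not touch the same path as $push.

    db().collection('Users').updateOne(
      { email: data.email },
      {
        $push: { item: arrayItem },
        $setOnInsert: { createdAt: new Date(), status: 'new' }
      },
      { upsert: true }
    );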
null
[ "containers" ]
[ { "code": "# mongo\nMongoDB shell version v4.4.1\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\n\n# mongod\n{\"t\":{\"$date\":\"2020-11-09T15:12:48.994+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2020-11-09T15:12:49.014+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2020-11-09T15:12:49.015+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2020-11-09T15:12:49.017+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":6,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"runc\"}}\n{\"t\":{\"$date\":\"2020-11-09T15:12:49.017+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.1\",\"gitVersion\":\"ad91a93a5a31e175f5cbf8c69561e788bbc55ce1\",\"openSSLVersion\":\"OpenSSL 1.1.1 11 Sep 2018\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu1804\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2020-11-09T15:12:49.017+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"18.04\"}}}\n{\"t\":{\"$date\":\"2020-11-09T15:12:49.017+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2020-11-09T15:12:49.022+00:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Read-only file system\"}}\n{\"t\":{\"$date\":\"2020-11-09T15:12:49.022+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":919}}\n{\"t\":{\"$date\":\"2020-11-09T15:12:49.022+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n\n", "text": "I am trying to run mongo on a runc container. I am facing the following problems. I ran the following commands inside the container. Please help. Thanks.", "username": "Rohit_Das" }, { "code": "", "text": "Your mongo command failed as mongod is not up and running on port 27017\nYou tried to start mongod but it failed with read only permissions,aborting fassert() messageMay be file permission issues or mongod not able to write to /tmp\nAs which user you are trying to start mongod?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks for your reply. runc usually runs as root. 
I actually used a mongo docker image to create the container.", "username": "Rohit_Das" } ]
Unable to run MongoDB on runc container
2020-11-10T18:57:03.128Z
Unable to run MongoDB on runc container
4,822
null
[ "realm-studio" ]
[ { "code": "", "text": "I run a database of a MongoDB Realm Cluster and can edit it using the “Collections” section on the website. How can I edit the database using Realm Studio? Or how do you normally get data where the classes have multiple relations to each other into the cluster?", "username": "Martin122" }, { "code": "", "text": "MongoDB Realm Studio doesn’t support opening synced Realms at this point in time, unfortunately. To inspect the documents in a collection you might be able to use Compass?To add data into the cluster, you might also be successful in writing a small Node.js script that perform the import, using the Realm JS SDK.", "username": "kraenhansen" }, { "code": "", "text": "Yeah, you don’t need to worry about Realm Studio - which is really only used to inspect local realms. Compass is great, it is significantly more featured than the old Realm Studio, but you can also just use the Collections tab in the Atlas view of the portal. I usually find this faster than Compass.", "username": "Richard_Krueger" } ]
MongoDB Realm Studio
2020-11-03T19:33:59.811Z
MongoDB Realm Studio
3,788
null
[ "data-modeling", "atlas-device-sync" ]
[ { "code": "", "text": "Our customers are independent businesses, each with their own Realm Cloud database. This provides them with data isolation.It appears from the naming conventions used in MongoDB Realm that databases are shards, or partitions, not separate databases. Is that the case?", "username": "Nosl_O_Cinnhoj" }, { "code": "", "text": "@Nosl_O_Cinnhoj Yes - partitions are analogous to realms on the legacy realm object server/cloud. They are logical partitions of a MongoDB Database on MongoDB Atlas. An interesting new feature of the new MongoDB Realm - is that you can now use MongoDB drivers to do cross partition queries and writes, which was a limitation in the legacy realm.", "username": "Ian_Ward" }, { "code": "", "text": "So we have the option of each of our customers having their own MongoDB database using MongoDB Realm?", "username": "Nosl_O_Cinnhoj" }, { "code": "", "text": "You have the capability to sync to multiple logical databases within a single MongoDB Atlas cluster. Currently a single MongoDB Realm cloud app syncs with a single cluster - we will look to remove this restriction in the future.", "username": "Ian_Ward" }, { "code": "", "text": "we will look to remove this restriction in the futureWe might have to wait until that restriction is removed. Our customers are skittish about sharing a database with their competitors. It was hard enough convincing them to move from on-prem to the cloud. A shared database will freak many of them.", "username": "Nosl_O_Cinnhoj" }, { "code": "", "text": "Will this restriction be removed before Realm Cloud reaches end-of-life on 22 November 2021?We require timely advice on this matter. If data isolation is not available before that date we need to consider alternatives to Realm.", "username": "Nosl_O_Cinnhoj" }, { "code": "", "text": "It is not currently in our scheduled work so I cannot guarantee when it will land", "username": "Ian_Ward" }, { "code": "", "text": "Thank you for being frank @Ian_Ward. Many of our customers would walk if they knew they were sharing a database with their competitors. We’d best start looking elsewhere.", "username": "Nosl_O_Cinnhoj" }, { "code": "", "text": "Hi @Nosl_O_Cinnhoj,Our customers are skittish about sharing a database with their competitors.Alternatively you could create a separate database per customer, and a separate Realm application per customer as well. This is to make sure that each application is to be configured to sync on the database for that particular customer.You can have multiple applications sync against the same cluster, as long as they are not linked to the same set of databases for the underlying data in the rules config. See also Realm Sync Rules.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "a separate Realm application per customerThanks @wan but that would not work for us. Customers download, install and setup our app without intervention from us. We don’t have the ability (nor the desire) to configure a new app for each customer.", "username": "Nosl_O_Cinnhoj" }, { "code": "", "text": "I am a Realm developer, so let me chime in with my two cents. You could give each customer a separate Realm application, so effectively each customer would have a separate MongoDB Atlas attached to that application. This does not mean that you need a separate mobile app for each customer, rather each customer simply needs a separate MongoDB Realm app id - that’s it. You would deploy one app for all your customers with separate realm apps for each one. 
Then comes the tricky part, so you would need the same universe of users for all these apps. The solution there is to go with a JWT authentication system for Realm - that keeps around one set users for all these apps. This is totally doable within the MongoDB Realm system. The JWT authentication system would keep around the customer’s realm app id and return it as part of the metadata. Upon signing in the customer would then open that realm app and use it to sync with. Each customer would be totally isolated from the other customers. Lastly each customer could even maintain administrative control over their Realm app and matching Atlas Cluster.I would even go so far as to say that this is trivial to the most casual of observers. By the way, good luck doing this in Firebase - talk about the great monolithic system from the megalithic. Disclosure, I was a Firebase developer for three years.", "username": "Richard_Krueger" } ]
Data isolation in MongoDB Realm
2020-10-19T00:21:26.194Z
Data isolation in MongoDB Realm
3,250
https://www.mongodb.com/…f1d16a95a347.png
[ "php", "release-candidate" ]
[ { "code": "mongodbcomposer require mongodb/mongodb^1.8.0-RC1\nmongodb", "text": "The PHP team is happy to announce that the first release candidate of version 1.8.0 of the MongoDB PHP library is now available. This library is a high-level abstraction for the mongodb extension.Release HighlightsThis release makes the library compatible with PHP 8.With this release, errors that occur while copying GridFS stream contents will now cause an exception instead of relying on a PHP Warning being emitted.A complete list of resolved issues in this release may be found at:\nhttps://jira.mongodb.org/secure/ReleaseNote.jspa?projectId=12483&version=30259DocumentationDocumentation for this library may be found at:FeedbackIf you encounter any bugs or issues with this library, please report them via this form:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12483&issuetype=1InstallationThis library may be installed or upgraded with:Installation instructions for the mongodb extension may be found in the PHP.net documentation.", "username": "Andreas_Braun" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Library 1.8.0-RC1 Released
2020-11-10T21:30:49.969Z
MongoDB PHP Library 1.8.0-RC1 Released
2,471
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.2.11-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.10. The next stable release 4.2.11 will be a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.2.11-rc0 is released
2020-11-10T20:54:48.251Z
MongoDB 4.2.11-rc0 is released
2,340
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.0.21 is out and is ready for production deployment. This release contains only fixes since 4.0.20, and is a recommended upgrade for all 4.0 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.0.21 is released
2020-11-10T20:52:32.192Z
MongoDB 4.0.21 is released
1,687
null
[ "php", "release-candidate" ]
[ { "code": "MongoDB\\Driver\\Exception\\InvalidArgumentExceptionpecl install mongodb-1.9.0RC1\npecl upgrade mongodb-1.9.0RC1\n", "text": "The PHP team is happy to announce that the first release candidate for version 1.9.0 of the mongodb PHP extension is now available on PECL.Release HighlightsThis release makes the extension compatible with PHP 8.This release also ensures that all argument errors are correctly exposed as MongoDB\\Driver\\Exception\\InvalidArgumentException on PHP 7. PHP 8 handles argument checks differently, so extra arguments or wrong types will trigger built-in errors.A complete list of resolved issues in this release may be found at: Release Notes - MongoDB JiraDocumentationDocumentation is available on PHP.net:\nPHP: MongoDB - ManualFeedbackWe would appreciate any feedback you might have on the project:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12484&issuetype=6InstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are available on PECL:\nhttp://pecl.php.net/package/mongodb", "username": "Andreas_Braun" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Extension 1.9.0-RC1 Released
2020-11-10T20:06:52.980Z
MongoDB PHP Extension 1.9.0-RC1 Released
2,423
https://www.mongodb.com/…d783b57c8603.png
[ "production", "php" ]
[ { "code": "mongodbcreateIndexescomposer require mongodb/mongodb^1.7.2\nmongodb", "text": "The PHP team is happy to announce that version 1.7.2 of the MongoDB PHP library is now available. This library is a high-level abstraction for the mongodb extension.Release HighlightsThis release removes an unnecessary field from createIndexes commands.A complete list of resolved issues in this release may be found at:\nConfigure Release Notes - MongoDB Jira 30122DocumentationDocumentation for this library may be found at:FeedbackIf you encounter any bugs or issues with this library, please report them via this form:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12483&issuetype=1InstallationThis library may be installed or upgraded with:Installation instructions for the mongodb extension may be found in the PHP.net documentation.", "username": "Andreas_Braun" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Library 1.7.2 Released
2020-11-10T19:39:54.422Z
MongoDB PHP Library 1.7.2 Released
2,263
null
[ "production", "php" ]
[ { "code": "mongodb+srv://pecl install mongodb-1.8.2\npecl upgrade mongodb-1.8.2\n", "text": "The PHP team is happy to announce that version 1.8.2 of the mongodb PHP extension is now available on PECL.Release HighlightsThis release fixes compilation issues on AIX platforms. It updates the bundled libmongoc library to 1.17.2, which includes fixes for DNS lookups when using mongodb+srv:// connection strings.A complete list of resolved issues in this release may be found at: Release Notes - MongoDB JiraDocumentationDocumentation is available on PHP.net:\nPHP: MongoDB - ManualFeedbackWe would appreciate any feedback you might have on the project:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12484&issuetype=6InstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are available on PECL:\nhttp://pecl.php.net/package/mongodb", "username": "Andreas_Braun" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Extension 1.8.2 released
2020-11-10T19:39:13.849Z
MongoDB PHP Extension 1.8.2 released
2,081
null
[ "mongoose-odm" ]
[ { "code": "[\n {\n “_id”: “5f830078a046f42267ffe5c3\",\n “categoryName”: “Eggs, Meat & Fish”,\n “subcategory”: [\n {\n “_id”: “5f860e3797c86845eab73f53”,\n “subcategoryName”: “Country Eggs”,\n “productCode”: “EC01”,\n “productName”: “Kadaknath Country Eggs”,\n “description”: “Mpm Eggs - Kadaknath Chicken, 6 pcs Carton”,\n “price”: 123,\n “quantity”: 13,\n “updated”: “2020-10-13T22:51:49.389Z”,\n “created”: “2020-10-13T20:29:43.581Z”\n },\n {\n “_id”: “5f860e3797c86845eab73f54”,\n “subcategoryName”: “Country Eggs”,\n “productCode”: “EC02”,\n “productName”: “Country Chicken Eggs”,\n “description”: “Mpm Eggs - Country Chicken, 6 pcs Carton”,\n “price”: 60,\n “quantity”: 10,\n “updated”: “2020-10-13T20:29:43.589Z”,\n “created”: “2020-10-13T20:29:43.589Z”\n },\n {\n “_id”: “5f860e3797c86845eab73f55”,\n “subcategoryName”: “Country Eggs”,\n “productCode”: “EC03”,\n “productName”: “Country Duck Eggs”,\n “description”: “Mpm Eggs - Country Duck, 6 pcs Carton”,\n “price”: 90,\n “quantity”: 10,\n “updated”: “2020-10-13T20:29:43.591Z”,\n “created”: “2020-10-13T20:29:43.591Z”\n },\n {\n “_id”: “5f860e3797c86845eab73f56”,\n “subcategoryName”: “Farm Eggs”,\n “productCode”: “EF01”,\n “productName”: “Speciality Eggs”,\n “description”: “Suguna Speciality Eggs with Omega 3 - Heart, 6 pcs Carton”,\n “price”: 70,\n “quantity”: 10,\n “updated”: “2020-10-13T20:29:43.592Z”,\n “created”: “2020-10-13T20:29:43.592Z”\n },\n {\n “_id”: “5f860e3797c86845eab73f57”,\n “subcategoryName”: “Farm Eggs”,\n “productCode”: “EF02”,\n “productName”: “Farm Made Eggs”,\n “description”: “Farm Made Eggs - Free Range, 6 pcs”,\n “price”: 120,\n “quantity”: 10,\n “updated”: “2020-10-13T20:29:43.594Z”,\n “created”: “2020-10-13T20:29:43.594Z”\n },\n {\n “_id”: “5f860e3797c86845eab73f58”,\n “subcategoryName”: “Protein Eggs”,\n “productCode”: “EP01”,\n “productName”: “Suguna Specialty Eggs with Vitamin D”,\n “description”: “Suguna Specialty Eggs - With Vitamin D, 6 pcs”,\n “price”: 80,\n “quantity”: 10,\n “updated”: “2020-10-13T20:29:43.596Z”,\n “created”: “2020-10-13T20:29:43.596Z”\n },\n {\n “_id”: “5f860e3797c86845eab73f59”,\n “subcategoryName”: “Protein Eggs”,\n “productCode”: “EP02”,\n “productName”: “Best Diabetes Eggs”,\n “description”: “Best Diaabet Egg - Selenium Enriched Good for Diabetes, 6 pcs Pouch”,\n “price”: 50,\n “quantity”: 10,\n “updated”: “2020-10-13T20:29:43.597Z”,\n “created”: “2020-10-13T20:29:43.597Z”\n },\n {\n “_id”: “5f860e3797c86845eab73f5a”,\n “subcategoryName”: “Protein Eggs”,\n “productCode”: “EP03”,\n “productName”: “Double Yolk”,\n “description”: “Mpm Eggs - Double Yolk, 6 nos”,\n “price”: 50,\n “quantity”: 10,\n “updated”: “2020-10-13T20:29:43.599Z”,\n “created”: “2020-10-13T20:29:43.599Z”\n },\n {\n “_id”: “5f860e3797c86845eab73f5b”,\n “subcategoryName”: “Other Eggs”,\n “productCode”: “EO01”,\n “productName”: “Quail Eggs”,\n “description”: “Mpm Quail Eggs, 12 pcs”,\n “price”: 65,\n “quantity”: 10,\n “updated”: “2020-10-13T20:29:43.608Z”,\n “created”: “2020-10-13T20:29:43.608Z”\n },\n {\n “_id”: “5f860e3797c86845eab73f5c”,\n “subcategoryName”: “Other Eggs”,\n “productCode”: “EO02”,\n “productName”: “Panchakavya Eggs”,\n “description”: “Mpm Eggs - Panchakavya, 300 g”,\n “price”: 70,\n “quantity”: 10,\n “updated”: “2020-10-13T20:29:43.609Z”,\n “created”: “2020-10-13T20:29:43.609Z”\n }\n ],\n “__v”: 0\n }\n]\nexports.getAllProduct = (req, res, next) => {\ncategories.find({$or:[{“categoryName”:{ $regex: req.params.id, $options: ‘i’ }},\n{“subcategory.$.productName”:{ $regex: req.params.id, $options: ‘i’ 
}}]},\n(err, products) => {\n console.log(“Products:“+JSON.stringify(products));\n if (err) {\n return res.status(400).json({\n error: “No Products found”+err\n });\n }\n res.status(200).json(products)\n });\n};\nconst categorySchema = new mongoose.Schema({\n categoryName: {\n type: String,\n trim: true\n },\n subcategory: [ {\n subcategoryName: {\n type: String,\n trim:true\n },\n productCode: {\n type: String,\n trim: true\n }]});\n", "text": "this is the sample data i’m testing.{“subcategory.$.productName”:{ $regex: req.params.id, $options: ‘i’ }}]},When i removed the $ sign from field it retrieves records but the filter is not workingEven if i select the subcategory name as farm or other or country or protein instead of fetching that particular subdocument it is fetching the entire document. i’m getting all subdocuments instead of getting subdocuments matching the name farm or country or protein…all records are getting retrieved. only in the parent level filter workswhy? any fix required from mongoose side or mongoDB side??http://localhost:3000/category/getAllProduct/fruits", "username": "Gayathri_S" }, { "code": "$$elemMatch", "text": "{“subcategory.$.productName”:{ $regex: req.params.id, $options: ‘i’ }}The positional $ is a projection operator used with arrays. The usage is within a query’s projection (not in a filter as you had tried).Note that the array projection operators $ and $elemMatch return only the first matching element. You can use an aggregation query with $filter array operator to filter all the matching sub-documents.", "username": "Prasad_Saya" }, { "code": "$filter: {\n input: req.params.id\n as: \"input\",\n cond: { $or: [\n {“categoryName”:{ $regex: input, $options: ‘i’ }},\n {“subcategory.productName”:{ $regex: input, $options: ‘i’ }}\n ] }\n }\n}\n", "text": "Is the above code $filter with $or will work??? Is it right??", "username": "Gayathri_S" }, { "code": "$oras: 'input'\"$$input\"$cond", "text": "Is the above code $filter with $or will work???Yes, the filter’s condition can have aggregation $or operator. Also, note that you may have to use the aggregation operator $regexMatch.Also, note that the variable defined as: 'input' should be referred as \"$$input\" as used within the $cond.", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks a lot, will try this and let you know if any doubts.", "username": "Gayathri_S" } ]
Filtering/Search not working at subdocument level for $or
2020-11-10T00:16:20.837Z
Filtering/Search not working at subdocument level for $or
4,370
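A hypothetical pipeline along the lines suggested in the replies above, with searchTerm standing in for req.params.id and categories assumed to be the same Mongoose model used in the question. $match keeps documents whose category or any product matches, and $filter with $regexMatch (available from MongoDB 4.2) trims the subcategory array down to just the matching sub-documents.

    const searchTerm = req.params.id;

    categories.aggregate([
      { $match: { $or: [
          { categoryName: { $regex: searchTerm, $options: 'i' } },
          { 'subcategory.productName': { $regex: searchTerm, $options: 'i' } }
      ] } },
      { $addFields: {
          subcategory: {
            $filter: {
              input: { $ifNull: ['$subcategory', []] },
              as: 'item',
              cond: { $regexMatch: { input: '$$item.productName', regex: searchTerm, options: 'i' } }
            }
          }
      } }
    ]).exec((err, products) => {
      if (err) return res.status(400).json({ error: 'No Products found' + err });
      res.status(200).json(products);
    });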
null
[]
[ { "code": "", "text": "Have an M10 Cluster with 1 DB. OSX Clients using sync.\nTwice now, it appears the connection between Atlas and the Realm platform has stopped replicating data.\nI am running in Dev mode … so no rules or filters interfering.\nOSX clients can connect to Realm Servers without issue. Run Functions and they even seem to persist newly Client generated data at the Realm layer (as i can delete the App along with the local Realm DB and this new data will replicate down to the Client). Any historic data residing in Atlas will not replicate down and new Client Schema or data will not replicate to Atlas.\nI have tried clearing Realm Schema definitions and terminating Sync and restarting (several times) without joy. I am at a loss. I Have logged with Support … but they are not very responsive … with the ‘it’s Beta’ response … anybody else experiencing anything or have advice ? thanks", "username": "Damian_Raffell" }, { "code": "", "text": "@Damian_Raffell A good way to troubleshoot problems like this, is to go to the logs and filter for errors only, you will see that you have schema errors there. Fix the schema errors, terminate sync, re-enable, and wipe your simulator/emulator. Rinse and repeat", "username": "Ian_Ward" }, { "code": "", "text": "Hi Ian … thanks for coming back… i’m struggling with this one.\nI have checked my server logs … I only have 8 errors registered today and none show a schema error ?\nI see them yesterday tho.\nAm i looking in the wrong place?\nI have stopped and flushed the simulator, terminated and restarted Sync … to no avail\nI have run validation on all the schema rules at Realm with no issues.i have another one of these tho ?\nthere are no mongodb/atlas services with sync enabled", "username": "Damian_Raffell" }, { "code": "", "text": "@Damian_Raffell I think when you restart sync it clears out the logs - it probably makes sense to just create a completely new app and start fresh to clear out any lingering erroneous state.", "username": "Ian_Ward" }, { "code": "", "text": "The logs don’t seem to be disappearing on restart … ?\nWish it was that easy …but i have Functions and Hosting on this Realm App.\nOne thing i have noticed is that even after terminating Sync there is still some Schema definitions listed. I would have thought it should flush this info if it doesn’t know which DB you’re ultimately going to connect to ?\nI will try some more things … cheers", "username": "Damian_Raffell" }, { "code": "", "text": "@Damian_Raffellc Yeah terminating sync does not clear out the schemas - you will need to delete those from the schema page.", "username": "Ian_Ward" } ]
Connection between Atlas Cluster and Realm Servers broken?
2020-11-10T13:34:36.341Z
Connection between Atlas Cluster and Realm Servers broken?
3,364
null
[ "realm-web" ]
[ { "code": "", "text": "I’m trying to use Realm’s user authentication via the Web SDK, but am having trouble understanding how to check if a user is currently logged in.This example in the docs suggests that each use object should have a ‘isLoggedIn’ property. But for me this property is undefined.Is there another way to check if a user is currently logged in?", "username": "Luke_Thorburn" }, { "code": "user.state === UserState.Active\nisLoggedIn", "text": "Hi @Luke_Thorburn - welcome to the forum!This is odd, what version of Realm Web are you using? Are you sure you’re using the latest? (1.0.0)The user object also has a state property and you can check if it’s logged in withThis is what isLoggedIn does under the hood.", "username": "kraenhansen" }, { "code": "", "text": "Hi @kraenhansen, thanks for the quick response.It was the version that was the issue, I was using 0.8.0 (which is currently used in the CDN example in the docs). May be worth updating.Switched to 1.0.0 and it works fine.Thank you!", "username": "Luke_Thorburn" } ]
user.isLoggedIn property doesn't exist
2020-11-09T23:43:24.095Z
user.isLoggedIn property doesn&rsquo;t exist
2,333
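A minimal Realm Web (1.x) sketch of the check discussed above; the app id is a placeholder.

    import * as Realm from 'realm-web';

    const app = new Realm.App({ id: 'my-realm-app-id' });
    const user = app.currentUser;   // null until someone has authenticated on this device

    if (user && user.isLoggedIn) {
      console.log(`Already logged in as ${user.id}`);
    } else {
      // fall through to your login flow, e.g. app.logIn(Realm.Credentials.anonymous())
    }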
null
[ "queries" ]
[ { "code": "", "text": "Hi,I followed MongoDB documentation and took backup of MongoDB server using filesystem snapshot using lvcreate.\nPost which, when I tried a restore in one of the database, for 2 collections I am seeing a mismatch in the document count. It is less than the previous value.\nI used db.collection.count() to get the count value.\nI am seeing a difference of almost 1k documents.\nI am using MongoDB version 4.2 enterprise version.\nIs it expected? What is causing the data loss here?Thanks,\nAkshaya Srinivasan", "username": "Akshaya_Srinivasan" }, { "code": "", "text": "Try db.collection.countDocuments({})\ndb.collection.count() uses metadata and may not give correct count", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Tried with this too but same results.\nI exported all the documents to csv file before and after backup. I see a clear difference in count in that too.\nThis looks like a dataloss issue\nAlso I tried the case of full backup using lvcreate and then took a dump of oplog collection from the timestamp snapshot was taken. When I restored the snapshot and dump too I am facing a mismatch in document count. Is it specific to this version?", "username": "Akshaya_Srinivasan" }, { "code": "", "text": "Hi,I tried this in MongoDB 4.2 community version and I dont see this issue. Seeing this only on MongoDB 4.2 enterprise version. Is this possibly a bug?Thanks,\nAkshaya Srinivasan", "username": "Akshaya_Srinivasan" }, { "code": "mongo", "text": "Hi @Akshaya_SrinivasanWe need more details here:I would also like more details here:I tried this in MongoDB 4.2 community version and I dont see this issue. Seeing this only on MongoDB 4.2 enterprise version.Did you perform the same procedure on both servers? E.g. are you following the same backup procedure, both databases are not in use during the snapshot, etc.?Is this possibly a bug?Not likely. The core database code is exactly the same between the two versions. Enterprise versions adds features not related to low-level database operations, like LDAP.Best regards,\nKevin", "username": "kevinadi" }, { "code": "mongo", "text": "hi Kevin,[root@mongoclient /]# mongodump --version\nmongodump version: r4.2.9\ngit version: 06402114114ffc5146fd4b55402c96f1dc9ec4b5\nGo version: go1.12.17\nos: linux\narch: amd64\ncompiler: gc[root@mongoclient /]# mongorestore --version\nmongorestore version: r4.2.9\ngit version: 06402114114ffc5146fd4b55402c96f1dc9ec4b5\nGo version: go1.12.17\nos: linux\narch: amd64\ncompiler: gcDid you perform the same procedure on both servers? E.g. are you following the same backup procedure, both databases are not in use during the snapshot, etc.? --> YesThanks,\nAkshaya Srinivasan", "username": "Akshaya_Srinivasan" }, { "code": "", "text": "Hi @Akshaya_SrinivasanSince your deployment is a sharded cluster, did you follow the procedure in Back Up a Sharded Cluster with File System Snapshots? Note that due to its nature, a sharded cluster backup needs to be taken at the same time across all part of the cluster (individual shards, config servers), otherwise you’ll see inconsistencies.If the cluster is not locked down during the backup process (e.g. balancer still active, writes still going in, etc.) then the backup will not be a point in time snapshot.There are more resources in Backup and Restore Sharded Clusters, including restore procedures.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi @kevinadi,Thank you so much. Identified the issue. 
It was due to oplog collection roll over.\nFree space in the disk was less when compared to the documents inserted and the oplog collection was rolled over. This caused document mismatch issue during restore.\nWhen I resized oplog collection, there was no document mismatch issue.Thanks,\nAkshaya Srinivasan", "username": "Akshaya_Srinivasan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Document count mismatch in a collection
2020-10-30T09:14:53.190Z
Document count mismatch in a collection
4,760
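A small mongo-shell sketch of the follow-up described in the resolution: check how much history the oplog currently covers and, if the window is too short for the snapshot-plus-oplog-dump procedure, resize it. The 16384 MB value is only an illustrative assumption; run this against the relevant replica set members.

    rs.printReplicationInfo();   // configured oplog size and the time window it currently holds

    // MongoDB 3.6+ can resize the oplog online; size is given in megabytes (16384 = 16 GB here).
    db.adminCommand({ replSetResizeOplog: 1, size: 16384 });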
null
[ "backup" ]
[ { "code": "", "text": "Hello,\nI’m new to MongoDB and I have a question regarding the Point in Time Recovery feature in Atlas.\nWhat timestamp I need to provide to the fields “Timestamp” and “Increment”?My understanding is that I need to provide the timestamp of the operation, until which I want to restore, excluding the operation itself.For example, I am executing several updates in a collection, followed by a delete. During the point in time restore, I am providing the timestamp of the delete operation. I expect data to be restored until the delete operation, excluding the delete itself. However the restore replays also the delete operation.According to the documentation, the oplog replay will include all operations occurring up to the critical timestamp, but will not include the operation with the critical timestamp. However in my case the replay includes the bad transaction.In the Atlas web interface I think the field Oplog timestamp -> Timestamp & Increment is the parameter --oplogLimit ? Seems this is not the case.\n\"–oplogLimit: Prevents mongorestore from applying oplog entries with timestamp newer than or equal to \"", "username": "vvpe" }, { "code": "", "text": "Hi… The timestamp we need for the purposes of PIT restore is documented here → https://docs.mongodb.com/manual/reference/bson-types/#timestampsLet me know if this makes.", "username": "bencefalo" }, { "code": "/* 1 */\n{\n \"op\" : \"i\",\n \"ns\" : \"students.from\",\n \"ui\" : UUID(\"f69dcfbc-90a0-4cd2-b83d-55716b8e6742\"),\n \"o\" : {\n \"_id\" : ObjectId(\"5fa3d7a1a757767a99114d71\"),\n \"DBAName\" : \"StudentBE-EU\",\n \"MongoDBDateCourse\" : ISODate(\"2020-11-05T10:44:50.107Z\")\n },\n \"ts\" : Timestamp(1604573089, 1),\n \"t\" : NumberLong(12),\n \"wall\" : ISODate(\"2020-11-05T10:44:49.998Z\"),\n \"v\" : NumberLong(2)\n}\n \n/* 2 */\n{\n \"op\" : \"i\",\n \"ns\" : \"students.from\",\n \"ui\" : UUID(\"f69dcfbc-90a0-4cd2-b83d-55716b8e6742\"),\n \"o\" : {\n \"_id\" : ObjectId(\"5fa3d7a2a757767a99114d7c\"),\n \"DBAName\" : \"StudentNL-EU\",\n \"MongoDBDateCourse\" : ISODate(\"2020-11-05T10:44:50.131Z\")\n },\n \"ts\" : Timestamp(1604573090, 1),\n \"t\" : NumberLong(12),\n \"wall\" : ISODate(\"2020-11-05T10:44:50.022Z\"),\n \"v\" : NumberLong(2)\n}\n \n/* 3 */\n{\n \"op\" : \"i\",\n \"ns\" : \"students.from\",\n \"ui\" : UUID(\"f69dcfbc-90a0-4cd2-b83d-55716b8e6742\"),\n \"o\" : {\n \"_id\" : ObjectId(\"5fa3d7a2a757767a99114d87\"),\n \"DBAName\" : \"StudentLU-EU\",\n \"MongoDBDateCourse\" : ISODate(\"2020-11-05T10:44:50.154Z\")\n },\n \"ts\" : Timestamp(1604573090, 2),\n \"t\" : NumberLong(12),\n \"wall\" : ISODate(\"2020-11-05T10:44:50.045Z\"),\n \"v\" : NumberLong(2)\n}\n \n/* 4 */\n{\n \"op\" : \"i\",\n \"ns\" : \"students.from\",\n \"ui\" : UUID(\"f69dcfbc-90a0-4cd2-b83d-55716b8e6742\"),\n \"o\" : {\n \"_id\" : ObjectId(\"5fa3d7a2a757767a99114d93\"),\n \"DBAName\" : \"StudentUS-NA\",\n \"MongoDBDateCourse\" : ISODate(\"2020-11-05T10:44:50.178Z\")\n },\n \"ts\" : Timestamp(1604573090, 3),\n \"t\" : NumberLong(12),\n \"wall\" : ISODate(\"2020-11-05T10:44:50.069Z\"),\n \"v\" : NumberLong(2)\n}\n \n/* 5 */\n{\n \"op\" : \"i\",\n \"ns\" : \"students.from\",\n \"ui\" : UUID(\"f69dcfbc-90a0-4cd2-b83d-55716b8e6742\"),\n \"o\" : {\n \"_id\" : ObjectId(\"5fa3d7a2a757767a99114d9e\"),\n \"DBAName\" : \"StudentCA-NA\",\n \"MongoDBDateCourse\" : ISODate(\"2020-11-05T10:44:50.202Z\")\n },\n \"ts\" : Timestamp(1604573090, 4),\n \"t\" : NumberLong(12),\n \"wall\" : ISODate(\"2020-11-05T10:44:50.093Z\"),\n \"v\" : 
NumberLong(2)\n}\n \n/* 6 */\n{\n \"op\" : \"i\",\n \"ns\" : \"students.from\",\n \"ui\" : UUID(\"f69dcfbc-90a0-4cd2-b83d-55716b8e6742\"),\n \"o\" : {\n \"_id\" : ObjectId(\"5fa3d7a2a757767a99114da9\"),\n \"DBAName\" : \"StudentMX-LA\",\n \"MongoDBDateCourse\" : ISODate(\"2020-11-05T10:44:50.230Z\")\n },\n \"ts\" : Timestamp(1604573090, 5),\n \"t\" : NumberLong(12),\n \"wall\" : ISODate(\"2020-11-05T10:44:50.122Z\"),\n \"v\" : NumberLong(2)\n}\n \n/* 7 */\n{\n \"op\" : \"i\",\n \"ns\" : \"students.from\",\n \"ui\" : UUID(\"f69dcfbc-90a0-4cd2-b83d-55716b8e6742\"),\n \"o\" : {\n \"_id\" : ObjectId(\"5fa3d7a2a757767a99114db4\"),\n \"DBAName\" : \"StudentBR-LA\",\n \"MongoDBDateCourse\" : ISODate(\"2020-11-05T10:44:50.256Z\")\n },\n \"ts\" : Timestamp(1604573090, 6),\n \"t\" : NumberLong(12),\n \"wall\" : ISODate(\"2020-11-05T10:44:50.148Z\"),\n \"v\" : NumberLong(2)\n}\n \n/* 8 */\n{\n \"op\" : \"i\",\n \"ns\" : \"students.from\",\n \"ui\" : UUID(\"f69dcfbc-90a0-4cd2-b83d-55716b8e6742\"),\n \"o\" : {\n \"_id\" : ObjectId(\"5fa3d7a2a757767a99114dbf\"),\n \"DBAName\" : \"StudentAR-LA\",\n \"MongoDBDateCourse\" : ISODate(\"2020-11-05T10:44:50.282Z\")\n },\n \"ts\" : Timestamp(1604573090, 7),\n \"t\" : NumberLong(12),\n \"wall\" : ISODate(\"2020-11-05T10:44:50.174Z\"),\n \"v\" : NumberLong(2)\n}\n \n/* 9 */\n{\n \"op\" : \"d\",\n \"ns\" : \"students.from\",\n \"ui\" : UUID(\"f69dcfbc-90a0-4cd2-b83d-55716b8e6742\"),\n \"o\" : {\n \"_id\" : ObjectId(\"5fa3d7a2a757767a99114d93\")\n },\n \"ts\" : Timestamp(1604574293, 1),\n \"t\" : NumberLong(12),\n \"wall\" : ISODate(\"2020-11-05T11:04:53.510Z\"),\n \"v\" : NumberLong(2)\n}\n \n/* 10 */\n{\n \"op\" : \"i\",\n \"ns\" : \"students.from\",\n \"ui\" : UUID(\"f69dcfbc-90a0-4cd2-b83d-55716b8e6742\"),\n \"o\" : {\n \"_id\" : ObjectId(\"5fa3dc7ea757767a99118a1c\"),\n \"DBAName\" : \"StudentBH-ME\",\n \"MongoDBDateCourse\" : ISODate(\"2020-11-05T11:05:34.534Z\")\n },\n \"ts\" : Timestamp(1604574334, 1),\n \"t\" : NumberLong(12),\n \"wall\" : ISODate(\"2020-11-05T11:05:34.423Z\"),\n \"v\" : NumberLong(2)\n}\n \n/* 11 */\n{\n \"op\" : \"i\",\n \"ns\" : \"students.from\",\n \"ui\" : UUID(\"f69dcfbc-90a0-4cd2-b83d-55716b8e6742\"),\n \"o\" : {\n \"_id\" : ObjectId(\"5fa3dc7ea757767a99118a28\"),\n \"DBAName\" : \"StudentSA-ME\",\n \"MongoDBDateCourse\" : ISODate(\"2020-11-05T11:05:34.561Z\")\n },\n \"ts\" : Timestamp(1604574334, 2),\n \"t\" : NumberLong(12),\n \"wall\" : ISODate(\"2020-11-05T11:05:34.449Z\"),\n \"v\" : NumberLong(2)\n}\n \n/* 12 */\n{\n \"op\" : \"i\",\n \"ns\" : \"students.from\",\n \"ui\" : UUID(\"f69dcfbc-90a0-4cd2-b83d-55716b8e6742\"),\n \"o\" : {\n \"_id\" : ObjectId(\"5fa3dc7ea757767a99118a34\"),\n \"DBAName\" : \"StudentOM-ME\",\n \"MongoDBDateCourse\" : ISODate(\"2020-11-05T11:05:34.588Z\")\n },\n \"ts\" : Timestamp(1604574334, 3),\n \"t\" : NumberLong(12),\n \"wall\" : ISODate(\"2020-11-05T11:05:34.477Z\"),\n \"v\" : NumberLong(2)\n}\n/* 9 */\n{\n \"op\" : \"d\",\n \"ns\" : \"students.from\",\n \"ui\" : UUID(\"f69dcfbc-90a0-4cd2-b83d-55716b8e6742\"),\n \"o\" : {\n \"_id\" : ObjectId(\"5fa3d7a2a757767a99114d93\")\n },\n \"ts\" : Timestamp(1604574293, 1),\n \"t\" : NumberLong(12),\n \"wall\" : ISODate(\"2020-11-05T11:04:53.510Z\"),\n \"v\" : NumberLong(2)\n}\n", "text": "Hi Benjamin, thank you for the answer.Well, I still have the same question - see one of my scenarios attached. 
I still don’t understand if the PITR timestamp has to be the last known good transaction timestamp or the critical(bad) transaction timestamp.I think the critical transaction should not be replayed during a PITR.I will try to illustrate:My oplog looks like:Critical bad transaction - the delete -Restore:Option 1 - provide a date - RPO 1 minute -> 11:03 – works as expected“wall” : ISODate(“2020-11-05T11:04:53.510Z”)Result - good - as expected\n************ 8 documents, without the 3 last updatesOption 2 - provide the critical timestamp – doesn’t work as expected“ts” : Timestamp(1604574293, 1)Result - not good\n************ only 7 documents – the delete operation is replayed as well.I’m investigating why the PITR timestamp has to be the last know good and not the last critical (bad) transaction timestamp.", "username": "vvpe" }, { "code": "", "text": "You would want to use the timestamp before the delete operation… not the timestamp of the delete operation, as it is expected to apply the timestamp your restoring too", "username": "bencefalo" }, { "code": "", "text": "Thank you, Benjamin, for the confirmation. So PITR requires the last known good timestamp.\nWe can close the topic as resolved.", "username": "vvpe" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Point in time restore in Atlas
2020-11-04T19:06:47.996Z
Point in time restore in Atlas
4,970
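A hedged mongo-shell sketch, not from the thread, for locating the "last known good" value once the bad operation's timestamp is known: take the newest oplog entry strictly before the offending ts and use that entry's timestamp and increment in the restore form. This assumes the oplog can be read directly (for example on a replica set member you manage); the namespace and timestamp below come from the example above.

    var badOpTs = Timestamp(1604574293, 1);   // the delete that must not be replayed

    db.getSiblingDB('local').getCollection('oplog.rs')
      .find({ ts: { $lt: badOpTs } }, { ts: 1, op: 1, ns: 1, wall: 1 })
      .sort({ ts: -1 })
      .limit(1);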
null
[ "data-modeling", "mongoose-odm" ]
[ { "code": " const mongoose = require('mongoose');\n\n const CustomerSchema = new mongoose.Schema({\n name: {\n type: String\n },\n account: {\n type: String\n },\n purchase: {\n type: Number\n },\n purchase_date: {\n type: Date\n }\n });\n\n module.exports = Customer = mongoose.model('customer', CustomerSchema);\n [\n {\n \"account\": \"ABC123\",\n \"name\": \"John Doe\",\n \"purchase-history\": [\n {\n \"month1\": \"January\",\n \"transitions\": [\n {\n \"date\": \"01/01/20\",\n \"purchase\": 120\n },\n {\n \"date\": \"01/03/20\",\n \"purchase\": 195\n },\n {\n \"date\": \"01/15/20\",\n \"purchase\": 49\n },\n {\n \"date\": \"01/23/20\",\n \"purchase\": 63\n }\n ]\n },\n {\n \"month2\": \"February\",\n \"transitions\": [\n {\n \"date\": \"02/01/20\",\n \"purchase\": 120\n },\n {\n \"date\": \"02/03/20\",\n \"purchase\": 85\n },\n {\n \"date\": \"02/15/20\",\n \"purchase\": 300\n },\n {\n \"date\": \"02/23/20\",\n \"purchase\": 51\n }\n ]\n },\n {\n \"month3\": \"March\",\n \"transitions\": [\n {\n \"date\": \"03/01/20\",\n \"purchase\": 120\n },\n {\n \"date\": \"03/03/20\",\n \"purchase\": 225\n },\n {\n \"date\": \"03/15/20\",\n \"purchase\": 9\n },\n {\n \"date\": \"03/23/20\",\n \"purchase\": 72\n }\n ]\n },\n {\n \"month4\": \"April\",\n \"transitions\": [\n {\n \"date\": \"04/03/20\",\n \"purchase\": 195\n },\n {\n \"date\": \"04/23/20\",\n \"purchase\": 63\n }\n ]\n },\n {\n \"month5\": \"May\",\n \"transitions\": [\n {\n \"date\": \"05/01/20\",\n \"purchase\": 161\n },\n {\n \"date\": \"05/03/20\",\n \"purchase\": 155\n },\n {\n \"date\": \"05/15/20\",\n \"purchase\": 52\n },\n {\n \"date\": \"05/23/20\",\n \"purchase\": 84\n }\n ]\n }\n ]\n },\n {\n \"account\": \"ACD134\",\n \"name\": \"Nancy Drew\",\n \"purchase-history\": [\n {\n \"month1\": \"January\",\n \"transitions\": [\n {\n \"date\": \"01/01/20\",\n \"purchase\": 130\n },\n {\n \"date\": \"01/03/20\",\n \"purchase\": 185\n },\n {\n \"date\": \"01/15/20\",\n \"purchase\": 59\n },\n {\n \"date\": \"01/23/20\",\n \"purchase\": 33\n }\n ]\n },\n {\n \"month2\": \"February\",\n \"transitions\": [\n {\n \"date\": \"02/01/20\",\n \"purchase\": 110\n },\n {\n \"date\": \"02/03/20\",\n \"purchase\": 25\n },\n {\n \"date\": \"02/15/20\",\n \"purchase\": 340\n },\n {\n \"date\": \"02/23/20\",\n \"purchase\": 71\n }\n ]\n },\n {\n \"month3\": \"March\",\n \"transitions\": [\n {\n \"date\": \"03/01/20\",\n \"purchase\": 150\n },\n {\n \"date\": \"03/03/20\",\n \"purchase\": 215\n },\n {\n \"date\": \"03/15/20\",\n \"purchase\": 49\n },\n {\n \"date\": \"03/23/20\",\n \"purchase\": 82\n }\n ]\n },\n {\n \"month4\": \"April\",\n \"transitions\": [\n {\n \"date\": \"04/03/20\",\n \"purchase\": 125\n },\n {\n \"date\": \"04/23/20\",\n \"purchase\": 83\n }\n ]\n },\n {\n \"month5\": \"May\",\n \"transitions\": [\n {\n \"date\": \"05/01/20\",\n \"purchase\": 181\n },\n {\n \"date\": \"05/03/20\",\n \"purchase\": 115\n },\n {\n \"date\": \"05/15/20\",\n \"purchase\": 62\n },\n {\n \"date\": \"05/23/20\",\n \"purchase\": 14\n }\n ]\n }\n ]\n }\n", "text": "I hope this is the right place to ask about modeling the data. I put together a simple data and am trying to figure out what I need to do for model.In my MERN app which is a reward program, I just want to display the list of customers, and once you click on the customer, it takes you to the customer’s recorded purchases per month. 
I will be using React to calculate how many reward points the customer earn for each purchase in a month, and the total reward points per month, and the total of 3 months’ reward points.So basically, the database would be just the customers and its purchases and I will be using React to calculate the reward points.So to put together the model for this, do I need one model like Customer.js or two models: Customer.js and Purchases.js?Here’s what I put together so far for Customer.js:Here’s a sample of my data:", "username": "Kristina_Bressler" }, { "code": "{\n \"account\": \"ABC123\",\n \"name\": \"John Doe\",\n \"purchase-history\" : {\n month: \"January\",\n monthDate: ISODate(\"2020-01-01\"),\n \"transitions\" : [ ... ]}\npurchase_2020\n", "text": "Hi @Kristina_Bressler,The problem with the described data model is that in purchases collection there is a possibility to have unbonded arrays which is a. Big no no and antipattern for MongoDB.Since those arrays will constantly update it will introduce update overhead as well as complexity.I would suggest going the following alternative:I would suggest reading our anti pattern blog for more useful schema design known no nos\nhttps://www.mongodb.com/article/schema-design-anti-pattern-summary/Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "month01month02month03", "text": "In my project, I’m just going to limit it to just 3 months only. I’ll probably edit my data to change it to month01, month02, and month03. So basically, just a list of customers with 3 months of purchases each… Would this be fine? I’m not going to code the app to add more purchases every month…", "username": "Kristina_Bressler" }, { "code": "", "text": "Hi @Kristina_Bressler,Can the transitions array grow unboundly?", "username": "Pavel_Duchovny" }, { "code": "", "text": "I don’t think so. I’m just creating the data so that I can display the list of customers, click on the customer to display their purchase history, and use React to calculate how much reward points they received each purchase, and the total reward points each month.", "username": "Kristina_Bressler" } ]
Data Modeling for Reward Program
2020-11-10T03:35:48.585Z
Data Modeling for Reward Program
3,722
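A hypothetical Mongoose sketch of the bucketed shape suggested in the replies: one document per customer per month, so the transitions array stays bounded. Field names are illustrative and intentionally differ from the original Customer.js.

    const mongoose = require('mongoose');

    const MonthlyPurchasesSchema = new mongoose.Schema({
      account: { type: String, index: true },
      name: { type: String },
      month: { type: Date },            // e.g. 2020-01-01 stands for January 2020
      transitions: [
        {
          date: { type: Date },
          purchase: { type: Number }
        }
      ]
    });

    module.exports = mongoose.model('monthlyPurchases', MonthlyPurchasesSchema);

Reading the three months for one customer is then a single query on { account }, and the React side can sum reward points per month from each document's transitions.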
null
[]
[ { "code": "", "text": "Hi I’m unable to locate a big green button as displayed in the image of the lab, “Create an Organization”. There’s an \" Organization Access\" under the “Access Manager” tab that I selected and I see an Organization that I must’ve created some time ago. Am I in the right place? If so, is there anyway for me to delete this Organization or rename as “MDBU” and then make sure that my cloud service is “Atlas”?", "username": "Monique_Bennett-Lowe" }, { "code": "", "text": "Please post a screenshot of what you are doing that illustrate the issues you are having.", "username": "steevej" }, { "code": "", "text": "Hi, this is the part I’m up to in the lab. It says click on Create an Org.image1397×633 78.3 KBWhen I log in I’m not seeing that option. I do see a tab named “Organization Access” and under that “Organization Access”.image1920×632 66 KBI’m wondering if I’ve already created an Org in the past? If so, am I able to rename this one somehow to “MDBU”? If not, what are my options?Thanks!", "username": "Monique_Bennett-Lowe" }, { "code": "", "text": "I do not think the name of the organization is important for the lab. Mine is named PERSONNAL and I have no issue so far.", "username": "steevej" }, { "code": "", "text": "Thanks. I really appreciate it. . I wasn’t sure how the naming of the Org would or could impact a later lesson.", "username": "Monique_Bennett-Lowe" }, { "code": "", "text": "Hi @Monique_Bennett-Lowe,If you face any issue in the later lessons then let us know and we will look into it.", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Lab: Create and Deploy Atlas Cluster - Create an Organization
2020-11-06T23:23:58.820Z
Lab: Create and Deploy Atlas Cluster - Create an Organization
2,288
null
[ "atlas-functions" ]
[ { "code": "", "text": "I’m trying to implement pagination with alphabetical sort in a Realm application using Application functions, on a collection with more than 100,000 documents.Unfortunately it seems we can’t use “collation” in Realm application functions, so I can’t sort my records properly. It’s also quite challenging to handle the pagination part because of other limitations like “skip” not being available.Has anyone figured this out? Is this eventually going to change?Thanks.", "username": "Benoit_Werner" }, { "code": "", "text": "Hi @Benoit_Werner,Welcome to MongoDB community!I believe you will be able to use collation and $skip as an aggregation stage, have you encounter issues through .aggregate command?Collation in queries is available when using a SYSTEM context function at the moment:Skip is not a suggested way to do paging for large collections.What I would suggest is doing the following alternative:Let me know if you have any questions.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": " return collection.aggregate(\n [\n {\n $match: {\n columnKey: \"xxx\"\n },\n },\n { $sort: { bodyTxt: 1 } },\n { $limit: 100 },\n {\n $project: {\n bodyTxt: 1,\n },\n },\n ],\n { collation: { \"locale\": \"fr\", strength: 2 }}\n );", "text": "Hi Pavel,Thanks for your help, I’ve manage to make pagination work.\nBut it seems like the collation index I created is not being used.\nBecause I always get upper cased letters first A-Z then lower cased…\nI’ve tried also with strength 1, 2 and 3 in both index collation options and function call, but I always get the same results. What am I missing?This is my code:", "username": "Benoit_Werner" }, { "code": "", "text": "I’ve tested my code in MongoDB Compass to see if the index was applied. Not only is it applied, but I get the sorting that I expected.\nSo I’m guessing that it’s just within Realm that the collation is not applied. (confirmed by the documentation here: https://docs.mongodb.com/realm/mongodb/crud-and-aggregation-apis/#database-command-availability).\nIs this ever going to change, is collation ever going to be supported?\nI really love MongoDB Realm and its potential and I wish I could use it for my project.", "username": "Benoit_Werner" }, { "code": "", "text": "Hi @Benoit_Werner,Potentially you can use views created directly on Atlas but it will come with performance panalty.Let me know if you have any questions.CC: @Drew_DiPalmaBest\nPavel", "username": "Pavel_Duchovny" } ]
Pagination with alphabetical sort
2020-11-01T20:15:27.831Z
Pagination with alphabetical sort
5,460
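A sketch of the range-based ("keyset") paging suggested above, shown in mongo-shell style with the thread's field names; the collection name items is an assumption, and collation is left to the index or server-side query, so this only illustrates the paging shape. lastBodyTxt and lastId are taken from the final document of the previous page.

    // First page
    db.items.find({ columnKey: 'xxx' }, { bodyTxt: 1 })
            .sort({ bodyTxt: 1, _id: 1 })
            .limit(100);

    // Next page: resume strictly after the last { bodyTxt, _id } already returned,
    // instead of skipping over everything that came before.
    db.items.find({
              columnKey: 'xxx',
              $or: [
                { bodyTxt: { $gt: lastBodyTxt } },
                { bodyTxt: lastBodyTxt, _id: { $gt: lastId } }
              ]
            }, { bodyTxt: 1 })
            .sort({ bodyTxt: 1, _id: 1 })
            .limit(100);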
null
[ "aggregation", "java" ]
[ { "code": "seasonsseasons{\n totalWins: 12,\n seasons: {\n 1: {\n wins: 10\n },\n 2: {\n wins: 2\n }\n }\n}\nwinsseasonstotalWinsseasonstotalWinsdb.test.aggregate([\n { \n $project: { \n seasons: { \"$objectToArray\": \"$seasons\" } \n } \n },\n { \n $project: { \n totalWins: { \n $reduce: { \n input: { $ifNull: [ \"$seasons\", [] ] }, \n initialValue: 0, \n in: { $add: [ \"$$value\", \"$$this.v.wins\" ] }\n }\n }\n }\n }\n])\ntotalWinstotalWinsupdateMany_idtotalWinstotalWins", "text": "This is a continuation of this thread but it got locked.I marked the solution that works with seasons arrays. However, I also got a solution from @Prasad_Saya that works with seasons objects (which is the desired solution)My collection has objects that are like this:I want to sum the wins from the seasons objects and put that sum into the field totalWins . However, it is possible that the seasons field might not exist, or it has no objects in itself, in which case totalWins should be 0.The aggregation I got is:This returns a collection with totalWins in the objects. However, I want the totalWins to be updated/inserted to the original collection so I did updateMany along with the aggregation. The problem is that doing so causes the original documents to be replaced by the documents returned by the aggregation. As a result, I lose all of the other fields that were in the original documents before, and now my documents only have _id and totalWins. So my question is how do I run the aggregation, take the field totalWins and append it to the existing documents without overwriting them?", "username": "Diamond_Block" }, { "code": "$project$set: { totalWins: {}}$set", "text": "You don’t need to $project anything.Just use a $set: { totalWins: {expression}} and your aggregated document will emerge from the $set stage with an extra, added field.", "username": "Jack_Woehr" }, { "code": "totalWinstotalWinsupdateMany_idtotalWinstotalWins$set$addFields$project", "text": "This returns a collection with totalWins in the objects. However, I want the totalWins to be updated/inserted to the original collection so I did updateMany along with the aggregation. The problem is that doing so causes the original documents to be replaced by the documents returned by the aggregation. As a result, I lose all of the other fields that were in the original documents before, and now my documents only have _id and totalWins . So my question is how do I run the aggregation, take the field totalWins and append it to the existing documents without overwriting them?As @Jack_Woehr had mentioned, for the update do use $set. For the query\nuse $addFields instead of $project in the aggregation - this will ensure that the existing fields will remain in addition to the updated/created fields.", "username": "Prasad_Saya" }, { "code": "db.test.updateMany({}, [\n { \n $project: { \n seasons: { \"$objectToArray\": \"$seasons\" } \n } \n },\n { \n $set: { \n totalWins: { \n $reduce: { \n input: { $ifNull: [ \"$seasons\", [] ] }, \n initialValue: 0, \n in: { $add: [ \"$$value\", \"$$this.v.wins\" ] }\n }\n }\n }\n }\n])\n_idseasonstotalWins", "text": "Do you mean like this? If so this didn’t work. Fields in the original documents were gone, now the documents only have _id, seasons as an array instead of an object, and totalWins. 
Am I misunderstanding something here?", "username": "Diamond_Block" }, { "code": "$project$addFields", "text": "Just replace the $project with $addFields in the first stage of the aggregation.", "username": "Prasad_Saya" }, { "code": "seasons", "text": "Doing so mutates the seasons field, but the other fields are fine. Instead of ‘seasons’ originally being an object, now it’s an array. I want to preserve to original ‘seasons’ object as well.", "username": "Diamond_Block" }, { "code": "seasonsseasons_array: { \"$objectToArray\": \"$seasons\" }input: { $ifNull: [ \"$seasons_array\", [] ] }", "text": "This will preserve the seasons:seasons_array: { \"$objectToArray\": \"$seasons\" }Then, use that in the following reduce operation:input: { $ifNull: [ \"$seasons_array\", [] ] }", "username": "Prasad_Saya" }, { "code": "seasonsseasons_array$unset: seasons_array", "text": "Doing so results in both seasons and seasons_array in each document. So I ended up adding $unset: seasons_array to remove the field.Now my question is: is this the best way to approach this? Or is there like another way to sort of declare a local variable to store the temporary array so that it doesn’t get added to the document in the end.", "username": "Diamond_Block" }, { "code": "seasonsseasons_array$unset: seasons_array", "text": "Doing so results in both seasons and seasons_array in each document. So I ended up adding $unset: seasons_array to remove the field.Yes, thats the correct way of doing it.", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Upsert field with aggregation
2020-11-04T20:53:35.915Z
Upsert field with aggregation
3,429
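Pulling the pieces of the accepted approach together, a sketch of the full pipeline-style update discussed above, using the same `test` collection and `seasons`/`totalWins` fields as the thread:

```javascript
db.test.updateMany({}, [
  // Build a temporary array view of the seasons object.
  { $addFields: { seasons_array: { $objectToArray: "$seasons" } } },
  // Sum the wins; documents without seasons fall back to 0.
  { $set: {
      totalWins: {
        $reduce: {
          input: { $ifNull: ["$seasons_array", []] },
          initialValue: 0,
          in: { $add: ["$$value", "$$this.v.wins"] }
        }
      }
  } },
  // Drop the temporary field so the original document shape is preserved.
  { $unset: "seasons_array" }
]);
```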
null
[ "aggregation" ]
[ { "code": "[\n {\n \"_id\": {\n \"$oid\": \"5fa80c13752bc1ad5a4abe71\"\n },\n \"Requirements\": {\n \"Experience\": \"3 - years\",\n \"Languages\": \";javascript;SQL;C#;Rust\",\n \"Education\": \";Bachelor's Degree in computer science\"\n },\n \"Date_posted\": {\n \"$date\": {\n \"$numberLong\": \"1604337459493\"\n }\n },\n \"Job_title\": \"Junior Programmer\",\n \"Search_indeed\": \"programmer - South Africa\",\n \"company_name\": \"Growthpoint Properties\",\n \"Spoken_Languages\": \"\",\n \"location\": \"Sandton, Gauteng\",\n \"Site_link\": \"https://za.indeed.com/jobs?q=programmer&l=South Africa&/rc/clk?jk=e8e8f3aecdfaa26e&fccid=d67f4db96a6da922&vjs=3\",\n \"payment_type\": {\n \"type\": \"\",\n \"amount\": {\n \"$numberInt\": \"0\"\n }\n }\n },\n {\n \"_id\": {\n \"$oid\": \"5fa80c0b752bc1ad5a4abe66\"\n },\n \"Requirements\": {\n \"Experience\": \"3 - year\",\n \"Languages\": \";java;SQL;XML;C#\",\n \"Education\": \";Bachelor's Degree\"\n },\n \"Date_posted\": {\n \"$date\": {\n \"$numberLong\": \"1600535851513\"\n }\n },\n \"Job_title\": \"C# Analyst Programmer (intergration)\",\n \"Search_indeed\": \"programmer - South Africa\",\n \"company_name\": \"definitive recruitment\",\n \"Spoken_Languages\": \"\",\n \"location\": \"Stellenbosch, Western Cape\",\n \"Site_link\": \"https://za.indeed.com/jobs?q=programmer&l=South Africa&/rc/clk?jk=ce457ee55b1c2ba7&fccid=e7b09f8169c7519f&vjs=3\",\n \"payment_type\": {\n \"type\": \"\",\n \"amount\": {\n \"$numberInt\": \"0\"\n }\n }\n }\n", "text": "I have a collection that looks like this.I’m trying to count all the unique Languages in Requirements.Languages\nbut I have no idea how I would go about doing this.How would I go about doing this, I’ve looked at $split, but I’m definitely a little lost with syntax.", "username": "Pieter_van_zyl" }, { "code": "{ $split: ['$Requirements.Languages', ';'] }\n[\n { $addFields: { language: { $split: ['$Requirements.Languages', ';'] } } }\n]\n", "text": "Hi @Pieter_van_zyl -You’re on the right track. You should add a Calculated Field with an expression like this:Or alternatively you could do it in the query bar:HTH\nTom", "username": "tomhollander" }, { "code": "{ $split: ['$Requirements.Languages', ';'] },{ $match : { calculatedfieldname: /[A-Z]/ } },\n", "text": "That works wonderfully !I do have one problem now, sometimes Requirements.Languages is “” but is still being added to the count.When go to the chart filters and attempt to filter “EMPTY STRING”\nThe chart stops displaying.I cant figure out how to use a match with this\nYou’re on the right track. You should add a Calculated Field with an expression like this:Is that even possible in a calculated field ?", "username": "Pieter_van_zyl" }, { "code": "[\n { $addFields: { language: { $split: ['$Requirements.Languages', ';'] } } },\n { $unwind: '$language' }, \n { $match: { language: { $ne: '' } } }\n]\n", "text": "Ah - the problem is that the filtering is happening before the array is being unwound. Since every document probably contains the unwanted empty string, it is resulting in all documents being filtered out.You can address this by manually doing the unwind in the query, e.g:", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Aggregation for MongoDB Charts with ; separated data
2020-11-08T19:46:21.670Z
Aggregation for MongoDB Charts with ; separated data
1,608
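For reference outside of Charts, the same pipeline can finish the count with a `$group` stage in the shell. A sketch, assuming the job postings live in a `jobs` collection (the collection name is not given in the thread):

```javascript
db.jobs.aggregate([
  { $addFields: { language: { $split: ["$Requirements.Languages", ";"] } } },
  { $unwind: "$language" },
  { $match: { language: { $ne: "" } } },
  { $group: { _id: "$language", count: { $sum: 1 } } },
  { $sort: { count: -1 } }
]);
```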
null
[ "replication" ]
[ { "code": "", "text": "Hi, all. I want to do some tweaks. What I encounter is that I haven’t figured out how to differentiate slaves’ getMore command from users’ getMore command. Is there any native way to do this such as some kind of flag or something ?", "username": "Lewis_Chan" }, { "code": "getmoredb.currentOp(){\n\t\t\t\"type\" : \"op\",\n\t\t\t\"host\" : \"covid-19-shard-00-02-hip2i.mongodb.net:27017\",\n\t\t\t\"desc\" : \"conn128308\",\n\t\t\t\"connectionId\" : 128308,\n\t\t\t\"client\" : \"192.168.248.179:49888\",\n\t\t\t\"clientMetadata\" : {\n\t\t\t\t\"driver\" : {\n\t\t\t\t\t\"name\" : \"NetworkInterfaceTL\",\n\t\t\t\t\t\"version\" : \"4.2.10\"\n\t\t\t\t},\n\t\t\t\t\"os\" : {\n\t\t\t\t\t\"type\" : \"Linux\",\n\t\t\t\t\t\"name\" : \"CentOS Linux release 7.8.2003 (Core)\",\n\t\t\t\t\t\"architecture\" : \"x86_64\",\n\t\t\t\t\t\"version\" : \"Kernel 3.10.0-1127.13.1.el7.x86_64\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"active\" : true,\n\t\t\t\"currentOpTime\" : \"2020-10-12T15:12:19.339+0000\",\n\t\t\t\"effectiveUsers\" : [\n\t\t\t\t{\n\t\t\t\t\t\"user\" : \"__system\",\n\t\t\t\t\t\"db\" : \"local\"\n\t\t\t\t}\n\t\t\t],\n\t\t\t\"opid\" : 10078976,\n\t\t\t\"secs_running\" : NumberLong(1),\n\t\t\t\"microsecs_running\" : NumberLong(1052351),\n\t\t\t\"op\" : \"getmore\",\n\t\t\t\"ns\" : \"local.oplog.rs\",\n\t\t\t\"command\" : {\n\t\t\t\t\"getMore\" : NumberLong(\"324745879443392226\"),\n\t\t\t\t\"collection\" : \"oplog.rs\",\n\t\t\t\t\"batchSize\" : 13981010,\n\t\t\t\t\"maxTimeMS\" : NumberLong(2500),\n\t\t\t\t\"term\" : NumberLong(56),\n\t\t\t\t\"lastKnownCommittedOpTime\" : {\n\t\t\t\t\t\"ts\" : Timestamp(1602515538, 1),\n\t\t\t\t\t\"t\" : NumberLong(56)\n\t\t\t\t},\n\t\t\t\t\"$replData\" : 1,\n\t\t\t\t\"$oplogQueryData\" : 1,\n\t\t\t\t\"$readPreference\" : {\n\t\t\t\t\t\"mode\" : \"secondaryPreferred\"\n\t\t\t\t},\n\t\t\t\t\"$clusterTime\" : {\n\t\t\t\t\t\"clusterTime\" : Timestamp(1602515538, 1),\n\t\t\t\t\t\"signature\" : {\n\t\t\t\t\t\t\"hash\" : BinData(0,\"79XMfmD01mPxPuIxfr/BS/97GAM=\"),\n\t\t\t\t\t\t\"keyId\" : NumberLong(\"6843755954945654786\")\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"$db\" : \"local\"\n\t\t\t},\n\t\t\t\"planSummary\" : \"COLLSCAN\",\n\t\t\t\"cursor\" : {\n\t\t\t\t\"cursorId\" : NumberLong(\"324745879443392226\"),\n\t\t\t\t\"createdDate\" : ISODate(\"2020-10-08T15:09:57.900Z\"),\n\t\t\t\t\"lastAccessDate\" : ISODate(\"2020-10-12T15:12:18.286Z\"),\n\t\t\t\t\"nDocsReturned\" : NumberLong(259965967),\n\t\t\t\t\"nBatchesReturned\" : NumberLong(677767),\n\t\t\t\t\"noCursorTimeout\" : false,\n\t\t\t\t\"tailable\" : true,\n\t\t\t\t\"awaitData\" : true,\n\t\t\t\t\"originatingCommand\" : {\n\t\t\t\t\t\"find\" : \"oplog.rs\",\n\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\"ts\" : {\n\t\t\t\t\t\t\t\"$gte\" : Timestamp(1602169755, 78240)\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"tailable\" : true,\n\t\t\t\t\t\"oplogReplay\" : true,\n\t\t\t\t\t\"awaitData\" : true,\n\t\t\t\t\t\"maxTimeMS\" : NumberLong(60000),\n\t\t\t\t\t\"batchSize\" : 13981010,\n\t\t\t\t\t\"term\" : NumberLong(56),\n\t\t\t\t\t\"readConcern\" : {\n\t\t\t\t\t\t\"afterClusterTime\" : Timestamp(0, 1)\n\t\t\t\t\t},\n\t\t\t\t\t\"$replData\" : 1,\n\t\t\t\t\t\"$oplogQueryData\" : 1,\n\t\t\t\t\t\"$readPreference\" : {\n\t\t\t\t\t\t\"mode\" : \"secondaryPreferred\"\n\t\t\t\t\t},\n\t\t\t\t\t\"$clusterTime\" : {\n\t\t\t\t\t\t\"clusterTime\" : Timestamp(1602169796, 1),\n\t\t\t\t\t\t\"signature\" : {\n\t\t\t\t\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\t\t\t\t\"keyId\" : 
NumberLong(0)\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"$db\" : \"local\"\n\t\t\t\t},\n\t\t\t\t\"operationUsingCursorId\" : NumberLong(10078976)\n\t\t\t},\n\t\t\t\"numYields\" : 2,\n\t\t\t\"locks\" : {\n\t\t\t\t\n\t\t\t},\n\t\t\t\"waitingForLock\" : false,\n\t\t\t\"lockStats\" : {\n\t\t\t\t\"ReplicationStateTransition\" : {\n\t\t\t\t\t\"acquireCount\" : {\n\t\t\t\t\t\t\"w\" : NumberLong(2)\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"Global\" : {\n\t\t\t\t\t\"acquireCount\" : {\n\t\t\t\t\t\t\"r\" : NumberLong(2)\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"Database\" : {\n\t\t\t\t\t\"acquireCount\" : {\n\t\t\t\t\t\t\"r\" : NumberLong(2)\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"Mutex\" : {\n\t\t\t\t\t\"acquireCount\" : {\n\t\t\t\t\t\t\"r\" : NumberLong(1)\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"oplog\" : {\n\t\t\t\t\t\"acquireCount\" : {\n\t\t\t\t\t\t\"r\" : NumberLong(2)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"waitingForFlowControl\" : false,\n\t\t\t\"flowControlStats\" : {\n\t\t\t\t\n\t\t\t}\n\t\t}\n\"effectiveUsers\" : [\n\t\t\t\t{\n\t\t\t\t\t\"user\" : \"__system\",\n\t\t\t\t\t\"db\" : \"local\"\n\t\t\t\t}\n\t\t\t]\n__systemgetmore", "text": "Hi @Lewis_Chan,Here is a getmore operation that I retrieved from the COVID-19 Open Data cluster using the db.currentOp() command.I think this part is what you are looking for:The replication is handled by the system, so it’s using the internal special __system user. Any other getmore operation issued by a user would use a “normal” user account.I hope it helps .\nI’m not sure what you are trying to do with this information though but good luck !Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Many thanks for your tip. What I actually need is a kind of programmable way in 4.0.", "username": "Lewis_Chan" }, { "code": "", "text": "Maybe you are looking for this then?", "username": "MaBeuLux88" }, { "code": "", "text": "Hi,If you are planning to implement your tweaks in the core database, you could use the TagMask stated by the client’s session to check if the client corresponds to an internal connection or not .", "username": "Zikker" } ]
How to differentiate slave's getMore from user's getMore?
2020-10-12T03:18:01.421Z
How to differentiate slave's getMore from user's getMore?
2,721
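Building on the `db.currentOp()` output shown above, a shell sketch that separates replication getMores from client getMores by checking the effective user. This is a heuristic based on the example document in the thread, not an official flag, and `effectiveUsers` is only populated when access control is enabled:

```javascript
const ops = db.currentOp({ op: "getmore" }).inprog;

function isReplicationGetmore(op) {
  // Secondaries tail local.oplog.rs as the internal __system user.
  const users = op.effectiveUsers || [];
  return users.some(u => u.user === "__system");
}

const replication = ops.filter(isReplicationGetmore);
const clients = ops.filter(op => !isReplicationGetmore(op));

print("replication getmores: " + replication.length);
print("client getmores:      " + clients.length);
```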
null
[ "server", "configuration" ]
[ { "code": "", "text": "Hi there,I may be missing something and the manual seems clear that there has been no change between 4.0 and 4.2 (CE).However, in my 4.0 implementations the slowms is set to 100 and on 4.2 to 0. II did not do the initial setup but the config file has no parameter set , so I assumed that the default would be 100.Anybody with any ideas what I may be missing?RegardsJohn", "username": "John_Clark" }, { "code": "slowms100slowmsslowmsprofile", "text": "Hello @John_Clark,slowms is the slow operation time threshold, in milliseconds. Its default value is 100. Operations that run for longer than this threshold are considered slow. slowms is associated with the Database Profiler. The database profiler collects detailed information about Database Commands executed against a running mongod instance; the profiler is off by default.You can view the slowms value (from the shell). This is what I found on my MongoDB v4.2 CE:db.getProfilingStatus()\n{ “was” : 0, “slowms” : 100, “sampleRate” : 1 }You can change the slow operation threshold value in one of the following ways:", "username": "Prasad_Saya" }, { "code": "", "text": "Hi Prasad, thank you so much for coming back to me. I am leaning towards a glitch in the deployment which has made the slowms 0 rather than 100. Today, I did three upgrades (stepwise) from 3.6 to 4.2 and all OK. I then created a brand new Centos 8 box and did a vanilla install, again all OK The above was all on my home lab but when I deployed a new instance via the company front end with me doing nothing in between; slowms came up as 0.A bit weird but will let you know the outcome.RegardsJohn", "username": "John_Clark" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Slowms default 4.0 and 4.2
2020-11-08T19:45:11.960Z
Slowms default 4.0 and 4.2
2,955
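A quick way to check and adjust the threshold from the shell, matching what is described above:

```javascript
// Current profiler level and slow-operation threshold.
db.getProfilingStatus();
// e.g. { "was" : 0, "slowms" : 100, "sampleRate" : 1 }

// Keep the profiler off (level 0) but raise the slowms threshold to 200 ms
// for this mongod instance.
db.setProfilingLevel(0, { slowms: 200 });
```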
null
[ "kafka-connector" ]
[ { "code": "", "text": "Hi, We are using mongo source connector in our environment. Some of our data are complex. Example: Document having an array say for example students which have 100k-150K ids and the source connectors when it processes multiple documents in a short time frame gets out of memory error. We were able to reproduce the error by following the steps below.This results in below error:[2020-09-01 13:46:05,201] INFO WorkerSourceTask{id=mongo-source-1-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)\n[2020-09-01 13:46:05,201] INFO WorkerSourceTask{id=mongo-source-1-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask)\n[2020-09-01 13:46:05,201] DEBUG WorkerSourceTask{id=mongo-source-1-0} Finished offset commitOffsets successfully in 0 ms (org.apache.kafka.connect.runtime.WorkerSourceTask)\n[2020-09-01 13:46:05,203] ERROR WorkerSourceTask{id=mongo-source-1-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)\njava.lang.OutOfMemoryError: Java heap space\nat java.util.Arrays.copyOf(Arrays.java:3332)\nat java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)\nat java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)\nat java.lang.StringBuffer.append(StringBuffer.java:270)\nat java.io.StringWriter.write(StringWriter.java:101)\nat org.bson.json.StrictCharacterStreamJsonWriter.write(StrictCharacterStreamJsonWriter.java:368)\nat org.bson.json.StrictCharacterStreamJsonWriter.preWriteValue(StrictCharacterStreamJsonWriter.java:288)\nat org.bson.json.StrictCharacterStreamJsonWriter.writeStartObject(StrictCharacterStreamJsonWriter.java:203)\nat org.bson.json.ExtendedJsonInt64Converter.convert(ExtendedJsonInt64Converter.java:22)\nat org.bson.json.ExtendedJsonInt64Converter.convert(ExtendedJsonInt64Converter.java:19)\nat org.bson.json.JsonWriter.doWriteInt64(JsonWriter.java:174)\nat org.bson.AbstractBsonWriter.writeInt64(AbstractBsonWriter.java:447)\nat org.bson.codecs.BsonInt64Codec.encode(BsonInt64Codec.java:36)\nat org.bson.codecs.BsonInt64Codec.encode(BsonInt64Codec.java:28)\nat org.bson.codecs.EncoderContext.encodeWithChildContext(EncoderContext.java:91)\nat org.bson.codecs.BsonArrayCodec.encode(BsonArrayCodec.java:82)\nat org.bson.codecs.BsonArrayCodec.encode(BsonArrayCodec.java:37)\nat org.bson.codecs.EncoderContext.encodeWithChildContext(EncoderContext.java:91)\nat org.bson.codecs.BsonDocumentCodec.writeValue(BsonDocumentCodec.java:136)\nat org.bson.codecs.BsonDocumentCodec.encode(BsonDocumentCodec.java:115)\nat org.bson.codecs.BsonDocumentCodec.encode(BsonDocumentCodec.java:41)\nat org.bson.internal.LazyCodec.encode(LazyCodec.java:38)\nat org.bson.codecs.EncoderContext.encodeWithChildContext(EncoderContext.java:91)\nat org.bson.codecs.BsonDocumentCodec.writeValue(BsonDocumentCodec.java:136)\nat org.bson.codecs.BsonDocumentCodec.encode(BsonDocumentCodec.java:115)\nat org.bson.codecs.BsonDocumentCodec.encode(BsonDocumentCodec.java:41)\nat org.bson.codecs.EncoderContext.encodeWithChildContext(EncoderContext.java:91)\nat org.bson.codecs.BsonDocumentCodec.writeValue(BsonDocumentCodec.java:136)\nat org.bson.codecs.BsonDocumentCodec.encode(BsonDocumentCodec.java:115)\nat org.bson.BsonDocument.toJson(BsonDocument.java:835)\nat org.bson.BsonDocument.toJson(BsonDocument.java:825)\nat com.mongodb.kafka.connect.source.MongoSourceTask.poll(MongoSourceTask.java:192)We did create a ticket with mongo source connector but they 
advised us to follow up with support. This is the ticket we created with mongo source connector: https://jira.mongodb.org/browse/KAFKA-151. We did try increasing the heap size and tuning the \"poll.max.batch.size \" configuration but we still hit the issue. Please let us know for additional information. Thanks in advance.", "username": "Sabari_Gandhi1" }, { "code": "", "text": "", "username": "Jack_Woehr" }, { "code": "poll.max.batch.size", "text": "Also to add from the ticket:With regards to the OOM error - the line in question converts the Change stream document into a raw json string. The polling mechanism in Source connectors batch up changes before publishing them to the topic. This can be configured by setting poll.max.batch.size which by default will try to batch 1,000 source records and publish them to the topic. Reducing this max batch size should prevent OOM errors.With out error logs, configuration examples and JVM configuration I can’t provide more insight here.What are your JVM settings? How much memory have you allocated to the heap? It sounds like you dont have enough. Lowering the poll.max.batch.size will reduce the size cached before polling. What did you try?", "username": "Ross_Lawley" }, { "code": "docker exec -i mongo1 sh -c 'mongoimport -c investigate1 -d test' < sample_data.json", "text": "Thanks for your comments. Please see below the steps to reproduce the issue. These are not the exact data but I was able to reproduce the issue following the stepsRegarding JVM Setting. I allocated 4G by adding this in docker-compose using the below config KAFKA_HEAP_OPTS: “-Xmx4G”. As mentioned I see the issue in prod environment where we use containers with 6G and allocated heap size of 5G. We did try lowering the batch size as low as 300 but still end up hitting the issue in some scenario.Please let me know for additional information or questions regarding steps.", "username": "Sabari_Gandhi1" }, { "code": "", "text": "Files mentioned in above comments are added to this ticket: https://jira.mongodb.org/browse/KAFKA-151. I was not able to add files to this topic.", "username": "Sabari_Gandhi1" } ]
Out of Memory Issue with source connector in certain scenario
2020-11-05T19:31:12.177Z
Out of Memory Issue with source connector in certain scenario
4,724
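For context, `poll.max.batch.size` is set in the source connector's configuration. A hedged example of where it goes — the connection details, database/collection names and values below are placeholders, and as this thread shows, a smaller batch size alone may not be enough when individual change events are very large:

```json
{
  "name": "mongo-source-1",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb://mongo1:27017",
    "database": "test",
    "collection": "investigate1",
    "poll.max.batch.size": "100",
    "poll.await.time.ms": "5000"
  }
}
```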
null
[ "configuration" ]
[ { "code": "", "text": "Hello !I changed the IP whitelist expecting to see a reconfiguration of the replica set and track this on the Connection (I think) panel in Atlas UI.AFAIK to do it locally we should stepDown the primary, and so forth. But I couldn’t notice any change for the servers in Atlas.I’ve checked the oplog, waiting to see something there, but there is nothing like that neither.Probably I got confused. Any ideas here? Shouldn’t this trigger an election?PS: the UI is absolutely amazing", "username": "santimir" }, { "code": "", "text": "I really really do not know how Atlas handles the IP white list issue. But, with my experience, I would think that this is handled by at a lower level than the application level, mongod in this case. I would assume it is more a firewall issue and that mongod is not even aware when a non white-listed connection is attempted.", "username": "steevej" }, { "code": "rs.config()", "text": "Oops, that makes sense. Thanks @steevej I should’ve run rs.config() but many of those commands are not allowed, so I didn’t even try.i’LL wait in case anyone wants too add up.", "username": "santimir" }, { "code": "", "text": "Atlas manages the IP Access List firewall at the level of the compute instance in the target VPC/VNet, e.g. above the MongoDB database itself", "username": "Andrew_Davidson" }, { "code": "", "text": "Thanks for the clarification.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Change IP whitelist in Atlas
2020-11-07T22:25:25.291Z
Change IP whitelist in Atlas
2,057
null
[ "swift" ]
[ { "code": "let b = realm.objects(Business.self).first!\n\n//make an unmanaged copy of a business\nlet someBusiness = Business(value: b)\nsomeBusiness.name = \"New Business Name\"\n\ntry! realm.write {\n realm.add(someBusiness, update: .modified)\n}\n", "text": "I am not able to update an existing object (as an in memory object) that contains an embedded object. Maybe in-memory copies are not compatible with embedded objects or I’m overlooking something.Using the classes from the documentation here, I have an existing business with a List of embedded addresses. When making an in memory copy of the business, changing the name property and then attempting to save it, Realm throws an error:Cannot add an existing managed embedded object to a List.Which is not at all what’s being done. Here’s the codeIf I remove the embedded List property from the Business class, it works correctly.macOS 10.15\nRealm 10.1.1\nXCode 12\nNo sync - local only", "username": "Jay" }, { "code": "// You have the object already so you don't need a copy\nlet b = realm.objects(Business.self).first! \ntry! realm.write {\n b.name = \"New Business Name\"\n}\n", "text": "Hi @Jay,\nIn your code you’re trying to add a new Business in the realm.\nInstead you could try this snippet:Please let me know if it helps.", "username": "Pavel_Yakimenko" }, { "code": "", "text": "Thanks for the reply @Pavel_YakimenkoWe’re not trying to add a new business, we are modifying the embedded object that is part of an existing business (it’s already saved and managed by Realm)For our use case, we need a copy. It’s a macOS app with a sheet that pulls down to allow editing of the object and we need an in-memory copy so we can change it around before the user saves it. e.g. the user can click cancel on the sheet and the object is deallocated and no changes are saved.", "username": "Jay" }, { "code": " let someBusiness = Business(value: b)\n let addr = Address(value: b.addresses.first!)\n someBusiness.name = \"New Business Name\"\n someBusiness.addresses.replace(index: 0, object: addr)\n try! realm.write {\n realm.add(someBusiness, update: .modified)\n }\n", "text": "In this case I’d recommend to create a copy of EmbeddedObject too.\nBusiness(value: b) creates a shallow copy so you need to copy Address models.", "username": "Pavel_Yakimenko" }, { "code": "", "text": "@Jay, as soon as your original code is correct, there is the ticket created to fix this issue.Crash during upsert of an existing object with a list of EmbeddedObject inside.\n…\n\n## Goals / Expected Results\nI can upsert a record.\n\n## Actual Results\nException: ** Terminating app due to uncaught exception 'RLMException', reason: 'Cannot add an existing managed embedded object to a List.'**\n\n## Steps for others to Reproduce\nCreate unmanaged copy of existing object with the property - list of EmbeddedObject's children objects.\nTry to add it with `UpdatePolicy.update`\n\n## Workaround\nCreate a deep copy of a target object.\n\n## Code Sample\n```\nclass Address: EmbeddedObject {\n @objc dynamic var street: String? = nil\n @objc dynamic var city: String? = nil\n @objc dynamic var country: String? = nil\n @objc dynamic var postalCode: String? = nil\n}\n\n// Define an object with an array of embedded objects\nclass Business: Object {\n @objc dynamic var _id = ObjectId.generate()\n @objc dynamic var name = \"\"\n let addresses = List<Address>() // Embed an array of objects\n \n override static func primaryKey() -> String? 
{\n return \"_id\"\n }\n \n convenience init(name: String, addresses: [Address]) {\n self.init()\n self.name = name\n self.addresses.append(objectsIn: addresses)\n }\n}\n\n let b = realm.objects(Business.self).first!\n \n //make an unmanaged copy of a business\n let someBusiness = Business(value: b)\n someBusiness.name = \"New Business Name\"\n\n try! realm.write {\n realm.add(someBusiness, update: .modified)\n }\n```\n## Stack trace\n```\n*** First throw call stack:\n(\n\t0 CoreFoundation 0x00007fff2043a126 __exceptionPreprocess + 242\n\t1 libobjc.A.dylib 0x00007fff20177f78 objc_exception_throw + 48\n\t2 delcrash 0x00000001036918c5 _ZN18RLMAccessorContext12createObjectEP11objc_objectN5realm12CreatePolicyEbNS2_6ObjKeyE + 3125\n\t3 delcrash 0x00000001036ea20d RLMAddObjectToRealm + 285\n\t4 delcrash 0x00000001038876c4 $s10RealmSwift0A0V3add_6updateySo0aB6ObjectC_AC12UpdatePolicyOtF + 1252\n\t5 delcrash 0x00000001034fc24b $s8delcrash14ViewControllerC9addActionyyFyyXEfU_ + 251\n\t6 delcrash 0x00000001034fb84f $ss5Error_pIgzo_ytsAA_pIegrzo_TR + 15\n\t7 delcrash 0x00000001034fc2a4 $ss5Error_pIgzo_ytsAA_pIegrzo_TRTA.1 + 20\n\t8 delcrash 0x00000001038866cb $s10RealmSwift0A0V5write16withoutNotifying_xSaySo20RLMNotificationTokenCG_xyKXEtKlF + 299\n\t9 delcrash 0x00000001034fbfb8 $s8delcrash14ViewControllerC9addActionyyF + 1112\n\t10 delcrash 0x00000001034fbace $s8delcrash14ViewControllerC6runAddyyF + 46\n\t11 delcrash 0x00000001034fb0d3 $s8delcrash14ViewControllerC11viewDidLoadyyF + 723\n\t12 delcrash 0x00000001034fba8b $s8delcrash14ViewControllerC11viewDidLoadyyFTo + 43\n\t13 UIKitCore 0x00007fff23f37de3 -[UIViewController _sendViewDidLoadWithAppearanceProxyObjectTaggingEnabled] + 88\n\t14 UIKitCore 0x00007fff23f3c6ca -[UIViewController loadViewIfRequired] + 1084\n\t15 UIKitCore 0x00007fff23f3cab4 -[UIViewController view] + 27\n\t16 UIKitCore 0x00007fff246ac28b -[UIWindow addRootViewControllerViewIfPossible] + 313\n\t17 UIKitCore 0x00007fff246ab978 -[UIWindow _updateLayerOrderingAndSetLayerHidden:actionBlock:] + 219\n\t18 UIKitCore 0x00007fff246ac93d -[UIWindow _setHidden:forced:] + 362\n\t19 UIKitCore 0x00007fff246bf950 -[UIWindow _mainQueue_makeKeyAndVisible] + 42\n\t20 UIKitCore 0x00007fff248fa524 -[UIWindowScene _makeKeyAndVisibleIfNeeded] + 202\n\t21 UIKitCore 0x00007fff23ace736 +[UIScene _sceneForFBSScene:create:withSession:connectionOptions:] + 1671\n\t22 UIKitCore 0x00007fff2466ed47 -[UIApplication _connectUISceneFromFBSScene:transitionContext:] + 1114\n\t23 UIKitCore 0x00007fff2466f076 -[UIApplication workspace:didCreateScene:withTransitionContext:completion:] + 289\n\t24 UIKitCore 0x00007fff2415dbaf -[UIApplicationSceneClientAgent scene:didInitializeWithEvent:completion:] + 358\n\t25 FrontBoardServices 0x00007fff25a6a136 -[FBSScene _callOutQueue_agent_didCreateWithTransitionContext:completion:] + 391\n\t26 FrontBoardServices 0x00007fff25a92bfd __94-[FBSWorkspaceScenesClient createWithSceneID:groupID:parameters:transitionContext:completion:]_block_invoke.176 + 102\n\t27 FrontBoardServices 0x00007fff25a77b91 -[FBSWorkspace _calloutQueue_executeCalloutFromSource:withBlock:] + 209\n\t28 FrontBoardServices 0x00007fff25a928cb __94-[FBSWorkspaceScenesClient createWithSceneID:groupID:parameters:transitionContext:completion:]_block_invoke + 352\n\t29 libdispatch.dylib 0x00000001054dfa88 _dispatch_client_callout + 8\n\t30 libdispatch.dylib 0x00000001054e29d0 _dispatch_block_invoke_direct + 295\n\t31 FrontBoardServices 0x00007fff25ab88f1 __FBSSERIALQUEUE_IS_CALLING_OUT_TO_A_BLOCK__ + 30\n\t32 
FrontBoardServices 0x00007fff25ab85d7 -[FBSSerialQueue _targetQueue_performNextIfPossible] + 433\n\t33 FrontBoardServices 0x00007fff25ab8a9c -[FBSSerialQueue _performNextFromRunLoopSource] + 22\n\t34 CoreFoundation 0x00007fff203a8845 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 17\n\t35 CoreFoundation 0x00007fff203a873d __CFRunLoopDoSource0 + 180\n\t36 CoreFoundation 0x00007fff203a7c81 __CFRunLoopDoSources0 + 346\n\t37 CoreFoundation 0x00007fff203a23f7 __CFRunLoopRun + 878\n\t38 CoreFoundation 0x00007fff203a1b9e CFRunLoopRunSpecific + 567\n\t39 GraphicsServices 0x00007fff2b793db3 GSEventRunModal + 139\n\t40 UIKitCore 0x00007fff2466d40f -[UIApplication _run] + 912\n\t41 UIKitCore 0x00007fff24672320 UIApplicationMain + 101\n\t42 libswiftUIKit.dylib 0x00007fff53c487b2 $s5UIKit17UIApplicationMainys5Int32VAD_SpySpys4Int8VGGSgSSSgAJtF + 98\n\t43 delcrash 0x000000010350100a $sSo21UIApplicationDelegateP5UIKitE4mainyyFZ + 122\n\t44 delcrash 0x0000000103500f7e $s8delcrash11AppDelegateC5$mainyyFZ + 46\n\t45 delcrash 0x0000000103501059 main + 41\n\t46 libdyld.dylib 0x00007fff20257409 start + 1\n\t47 ??? 0x0000000000000001 0x0 + 1\n)\nlibc++abi.dylib: terminating with uncaught exception of type NSException\n*** Terminating app due to uncaught exception 'RLMException', reason: 'Cannot add an existing managed embedded object to a List.'\nterminating with uncaught exception of type NSException\nCoreSimulator 732.18 - Device: iPhone 8 (A8CA0A6C-C943-4C70-8EC4-EF9FC5E0F5F5) - Runtime: iOS 14.1 (18A8394) - DeviceType: iPhone 8\n```\n\n## Version of Realm and Tooling\nRealm framework version: 10.1.2\nXcode version: 12\niOS/OSX version: 14\nDependency manager + version: SPM\nThank you for the information provided.", "username": "Pavel_Yakimenko" } ]
Unmanaged Object with Embedded Object
2020-11-08T14:28:15.282Z
Unmanaged Object with Embedded Object
3,453
null
[ "vscode", "podcast" ]
[ { "code": "", "text": "You may be asking that very question… and I’m here to help. We’re going to host a live-stream chat on the VSCode extension this Thursday, November 12th at 12noon ET. on Twitch@JoeKarlsson, and @Massimiliano_Marcon will join the podcast live stream and we’ll explain all the details you could ever want to know about the VSCode extension. Be there live to have your questions answered - or respond to this thread with questions and we’ll answer them live on Thursday.See you there!", "username": "Michael_Lynn" }, { "code": "", "text": "I’m soooo excited to be a part of this podcast - I’m a huge fan of MongoDB and VS Code ", "username": "JoeKarlsson" } ]
What the heck does VSCode have to do with MongoDB?
2020-11-09T14:31:14.667Z
What the heck does VSCode have to do with MongoDB?
3,091
null
[ "queries" ]
[ { "code": "", "text": "It is not quite clear to me if running an explain actually runs the query or just samples to output the winning plan/index.which of the following explain options run the actual query and which ones dont\nqueryPlanner - does not run ?\nexecutionStats - runs ?\nallPlansExecution - runs ?Im concerned that testing a query over a huge collection will produce query targetting alerts in atlas while testing.What about findAndModify queries also using the above options?\ndo they actually get run or just sampled to output the winning plan/index?", "username": "Mark_Emerson" }, { "code": "allPlansExecution", "text": "Optional choice. See https://docs.mongodb.com/manual/reference/method/cursor.explain/#allplansexecution-modeallPlansExecution Mode MongoDB runs the query optimizer to choose the winning plan and executes the winning plan to completion.", "username": "Jack_Woehr" } ]
Using explain to test queries
2020-11-09T10:32:07.876Z
Using explain to test queries
1,534
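The three verbosity modes are passed as an argument to `explain()`. A small sketch against a hypothetical `orders` collection; note that explaining a write operation describes the plan without applying the modification:

```javascript
// queryPlanner (the default): plan selection only; the query is not executed.
db.orders.find({ status: "A" }).explain("queryPlanner");

// executionStats: executes the winning plan and reports docs/keys examined and time.
db.orders.find({ status: "A" }).explain("executionStats");

// allPlansExecution: as above, plus partial execution stats for the rejected plans.
db.orders.find({ status: "A" }).explain("allPlansExecution");

// Write operations can be explained too; the update is not applied.
db.orders.explain("executionStats").findAndModify({
  query: { status: "A" },
  update: { $set: { status: "B" } }
});
```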
null
[ "configuration" ]
[ { "code": "2020-08-28T12:14:20.570+0000 W NETWORK [listener] Error accepting new connection TooManyFilesOpen: error in creating eventfd: Too many open files\n2020-08-28T12:14:20.578+0000 W NETWORK [listener] Error accepting new connection TooManyFilesOpen: error in creating eventfd: Too many open files\n2020-08-28T12:14:20.612+0000 I SHARDING [conn54569] Marking collection *****.componenttypes as collection version: <unsharded>\n2020-08-28T12:14:20.616+0000 W NETWORK [listener] Error accepting new connection TooManyFilesOpen: error in creating eventfd: Too many open files\n2020-08-28T12:14:20.623+0000 W NETWORK [listener] Error accepting new connection TooManyFilesOpen: error in creating eventfd: Too many open files\n2020-08-28T12:14:20.623+0000 W NETWORK [listener] Error accepting new connection TooManyFilesOpen: error in creating eventfd: Too many open files\n2020-08-28T12:14:20.625+0000 W NETWORK [listener] Error accepting new connection TooManyFilesOpen: error in creating eventfd: Too many open files\n2020-08-28T12:14:20.629+0000 W NETWORK [listener] Error accepting new connection TooManyFilesOpen: error in creating eventfd: Too many open files\n2020-08-28T12:14:20.630+0000 W NETWORK [listener] Error accepting new connection TooManyFilesOpen: error in creating eventfd: Too many open files\n2020-08-28T12:14:20.644+0000 I NETWORK [listener] Error accepting new connection on 0.0.0.0:27017: Too many open files\n2020-08-28T12:14:20.644+0000 I NETWORK [listener] Error accepting new connection on 0.0.0.0:27017: Too many open files\n2020-08-28T12:14:20.644+0000 I NETWORK [listener] Error accepting new connection on 0.0.0.0:27017: Too many open files\n2020-08-28T12:14:20.644+0000 I NETWORK [listener] Error accepting new connection on 0.0.0.0:27017: Too many open files\n[Unit]\nDescription=An object/document-oriented database\nDocumentation=man:mongod(1)\n\n[Service]\n# Other directives omitted\n# (file size)\nLimitFSIZE=infinity\n# (cpu time)\nLimitCPU=infinity\n# (virtual memory size)\nLimitAS=infinity\n# (open files)\nLimitNOFILE=64000\n# (processes/threads)\nLimitNPROC=64000\nUser=mongodb\nExecStart=/usr/bin/mongod --config /etc/mongod.conf\n\n[Install]\nWantedBy=multi-user.target\n Limit Soft Limit Hard Limit Units\n Max cpu time unlimited unlimited seconds\n Max file size unlimited unlimited bytes\n Max data size unlimited unlimited bytes\n Max stack size 8388608 unlimited bytes\n Max core file size 0 unlimited bytes\n Max resident set unlimited unlimited bytes\n Max processes 64000 64000 processes\n Max open files 64000 64000 files\n Max locked memory 65536 65536 bytes\n Max address space unlimited unlimited bytes\n Max file locks unlimited unlimited locks\n Max pending signals 64122 64122 signals\n Max msgqueue size 819200 819200 bytes\n Max nice priority 0 0\n Max realtime priority 0 0\n Max realtime timeout unlimited unlimited us \n", "text": "Hi Folks,We have a MongoDB replica set configured - primary, secondary & arbiter. In the past few weeks one of the instances has crashed multiple times. The logs show the following :The /lib/systemd/system/mongod.service file looks like this:The proc/pid/limits file is :Any ideas what could be causing this?Mongo version 4.2.8\nUbuntu 16.04Thanks in advance.", "username": "Niamh_Gibbons" }, { "code": "", "text": "You are hitting the operating system limit for the number of open file descriptors which is non unusual on database servers. 
Please refer to your operating system documentation for how to increase this limit.", "username": "Joe_Drumgoole" }, { "code": "", "text": "you can check it, from unix doing an ulimit\n$ ulimit -a ( and set the appropiate limit )", "username": "Willy_Latorre" }, { "code": "", "text": "@Willy_Latorre Setting with the ulimit command only effects the shell session. This parameter need to be set in the appropriate script(or init system) that starts mongod.ulimit is referenced in the Operations Checklist\nSpecifically in UNIX ulimit Settings", "username": "chris" }, { "code": "fs.file-max = 100000", "text": "Hi Guys,Thanks for all the responses. I had already set the Mongo recommended ulimit settings in the service file and the proc limits file showed that the 64000 was set. However when I ran ulimit -u as the Mongo user, it was returning 1024. I increased this limit to 100000, but the instance has crashed again.Steps to increase limit to 100000'* soft nofile 100000\n'* hard nofile 100000\nmongodb soft nofile 100000\nmongodb hard nofile 100000\nroot soft nofile 100000\nroot hard nofile 100000Any other ideas?Cheers,\nNiamh", "username": "Niamh_Gibbons" }, { "code": "", "text": "Any other ideas?You missed the critical piece:This parameter need to be set in the appropriate script(or init system) that starts mongod.This means systemd, upstart or init script. There are examples toward the end of UNIX ulimit Settings for upstart and systemd.", "username": "chris" }, { "code": "systemctl daemon-reload", "text": "After modifying the service script, did you run systemctl daemon-reload?", "username": "ken.chen" }, { "code": "", "text": "Hey Chris,Thanks for checking again, but I already have that in place in my /lib/systemd/system/mongod.service file…see original post.I must be missing something else?Thanks,\nNiamh", "username": "Niamh_Gibbons" }, { "code": "", "text": "Yep. See @ken.chen comment.This might need to be updated in the documentation too.", "username": "chris" }, { "code": "", "text": "Hey Ken / Chris,The config in the service file has been in place for a long time, I inherited the system like this Although I recently upgraded this Mongo instance and issued systemctl daemon-reload at the Disable Transparent Huge Pages (THP) step, where I created the /etc/systemd/system/disable-transparent-huge-pages.service file.Any other suggestions as to what I could be missing? 
I’m new to Mongo and your help is really appreciated.Thanks,\nNiamh", "username": "Niamh_Gibbons" }, { "code": "**2020-09-30T11:05:40.172+0000 I ACCESS [conn94647] Successfully authenticated as principal DbUser on admin from client 13.0.1.189:34528\n2020-09-30T11:05:40.172+0000 I ACCESS [conn94559] Successfully authenticated as principal DbUser on admin from client 13.0.1.189:34356\n2020-09-30T11:05:40.180+0000 I ACCESS [conn94566] Successfully authenticated as principal DbUser on admin from client 13.0.1.189:34358\n2020-09-30T11:05:40.181+0000 E - [conn95032] cannot open /dev/urandom Too many open files\n2020-09-30T11:05:40.181+0000 F - [conn95032] Fatal Assertion 28839 at src/mongo/platform/random.cpp 159\n2020-09-30T11:05:40.183+0000 F - [conn95032]\n\n***aborting after fassert() failure\n\n\n2020-09-30T11:05:40.183+0000 E - [conn95028] cannot open /dev/urandom Too many open files\n2020-09-30T11:05:40.183+0000 F - [conn95028] Fatal Assertion 28839 at src/mongo/platform/random.cpp 159\n2020-09-30T11:05:40.183+0000 F - [conn95028]\n\n***aborting after fassert() failure\n\n\n2020-09-30T11:05:40.183+0000 E - [conn95031] cannot open /dev/urandom Too many open files\n2020-09-30T11:05:40.183+0000 F - [conn95031] Fatal Assertion 28839 at src/mongo/platform/random.cpp 159\n2020-09-30T11:05:40.183+0000 F - [conn95031]\n\n***aborting after fassert() failure\n\n\n2020-09-30T11:05:40.183+0000 E - [conn95025] cannot open /dev/urandom Too many open files\n2020-09-30T11:05:40.183+0000 F - [conn95025] Fatal Assertion 28839 at src/mongo/platform/random.cpp 159\n2020-09-30T11:05:40.183+0000 F - [conn95025]\n\n***aborting after fassert() failure**\n", "text": "Hi FolksThis mongo instance has crashed a few more times since, with the below in the logs the latest time. Any other troubleshooting advice?Cheers,\nNiamh", "username": "Niamh_Gibbons" }, { "code": "", "text": "Increase nofile limit further.You should inspect how many connections you have open too.Poorly written clients will consume open files due to an open socket == file handle.", "username": "chris" }, { "code": "", "text": "Thanks Chris. There seems to be thousands of open connections coming from the app. I looked into it further and can see the mongoose driver is outdated so getting that updated and hope it helps. Thanks for the help with this", "username": "Niamh_Gibbons" }, { "code": "", "text": "Hi folks - updating the mongoose driver seems to have done the trick! Thanks all for the help/advice", "username": "Niamh_Gibbons" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Too many open files - Mongo 4.2
2020-09-01T12:01:56.824Z
Too many open files - Mongo 4.2
26,824
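Since each client socket consumes a file descriptor, the shell can show how many connections the server is holding and roughly where they come from. A sketch:

```javascript
// Current / available connection counts for this mongod.
db.serverStatus().connections;

// Rough breakdown of open connections by client host (skips internal ops).
const counts = {};
db.currentOp(true).inprog.forEach(op => {
  if (op.client) {
    const host = op.client.split(":")[0];
    counts[host] = (counts[host] || 0) + 1;
  }
});
printjson(counts);
```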
null
[ "aggregation" ]
[ { "code": "", "text": "when we have to extract data from nested arrays in aggregate pipeline we can use $unwind or $concatArrays( may be grouped) as far as I know. Is $unwind is better than $concatArrays for this kind of task? Is there any standard to follow?", "username": "reshad_hasan" }, { "code": "$unwind$concatArrays", "text": "Hello @reshad_hasan, welcome to the MongoDB community forum.Aggregation query can use $unwind and $concatArrays to work with array fields (scalar or nested objects).They have different purposes, but are used for working with arrays. First of all, the two are not comparable as to say which is better - they have their use cases. The provided links also have examples showing the usage.", "username": "Prasad_Saya" } ]
Working with nested array in aggregate. $unwind VS $concatArrays
2020-11-09T10:32:25.391Z
Working with nested array in aggregate. $unwind VS $concatArrays
4,799
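A small illustration of the different roles of the two operators, using a made-up document:

```javascript
db.sales.insertOne({
  _id: 1,
  q1: [ { item: "a", qty: 2 } ],
  q2: [ { item: "b", qty: 5 } ]
});

// $concatArrays is an expression: it merges arrays into a single array field.
db.sales.aggregate([
  { $project: { all: { $concatArrays: ["$q1", "$q2"] } } }
]);
// { _id: 1, all: [ { item: "a", qty: 2 }, { item: "b", qty: 5 } ] }

// $unwind is a pipeline stage: it outputs one document per array element,
// which is usually the first step when extracting nested array data.
db.sales.aggregate([
  { $project: { all: { $concatArrays: ["$q1", "$q2"] } } },
  { $unwind: "$all" }
]);
// { _id: 1, all: { item: "a", qty: 2 } }
// { _id: 1, all: { item: "b", qty: 5 } }
```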
https://www.mongodb.com/…_2_1024x1003.png
[ "compass" ]
[ { "code": "<dbname><dbname>", "text": "I’m about to start a MERN app but my skills were a bit rusty so I looked up several tutorials on how to start the project. However, I realized that I was stuck at this point when I logged in my MongoDB account which is on Atlas. There, I created a new project and then a cluster. I followed the steps like this tutorial (Learn the MERN stack by building an exercise tracker — MERN Tutorial | by Beau Carnes | Medium) said to get the connection string. However, the point that I’m stuck on is the <dbname> in the string which wasn’t in the example picture in the tutorial. So I’m confused. Do I need to create a database as well or what? What do I do for the <dbname> in the string?Is there any other way to connect like using MongoDB Compass? I tried looking for tutorials that use Compass but I can’t find it…In the tutorial:\n\n1390×1362 95.5 KB\n\nMy code screenshot:\n\nScreen Shot 2020-11-09 at 1.23.02 AM1346×1230 107 KB\n", "username": "Kristina_Bressler" }, { "code": "test<dbname><tab>", "text": "Hello @Kristina_Bressler, welcome to the MongoDB community forum.You can specify test as the <dbname>; this is the database to connect to. Within the application / Compass you can specify (or switch to) the database name you intend work with - the database will be switched to that database, then.Also, see:", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
<dbname> in connection string
2020-11-09T08:29:34.430Z
<dbname> in connection string
4,446
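A short Node.js sketch of what the placeholder ends up meaning in practice. The cluster address and credentials below are placeholders, and the snippet assumes driver 3.x:

```javascript
const { MongoClient } = require("mongodb");

// "test" here fills the <dbname> slot: it is simply the database the driver
// opens by default when client.db() is called with no argument.
const uri =
  "mongodb+srv://user:<password>@cluster0.example.mongodb.net/test?retryWrites=true&w=majority";

async function run() {
  const client = await MongoClient.connect(uri, {
    useNewUrlParser: true,
    useUnifiedTopology: true
  });
  const db = client.db();               // -> "test", taken from the URI
  const other = client.db("exercises"); // or switch to any database by name
  console.log(db.databaseName, other.databaseName);
  await client.close();
}

run().catch(console.error);
```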
null
[ "aggregation" ]
[ { "code": "[\n {\n \"id\": 1,\n \"data\": [\n {\n \"val1\": \"xyz\",\n \"val2\": \"abc\"\n },\n {\n \"val1\": \"a\",\n \"val2\": \"b\"\n }\n ]\n },\n {\n \"id\": 2,\n \"data\": [\n {\n \"val1\": \"d\",\n \"val2\": \"e\"\n },\n {\n \"val1\": \"f\",\n \"val2\": \"f\"\n }\n ]\n },\n \n]\ndata.0.val1=\"xyz\"data.0.val2=\"abc\"db.collection.find({\n \"$and\": [\n {\n \"data.0.val1\": {\n $eq: \"xyz\"\n }\n },\n {\n \"data.0.val2\": {\n $eq: \"abc\"\n }\n }\n ]\n})\ndb.collection.find({\n $expr: {\n $and: [\n {\n $eq: [\n \"$data.0.val1\",\n \"xyz\"\n ]\n },\n {\n $eq: [\n \"$data.0.val2\",\n \"abc\"\n ]\n }\n ]\n }\n})\n", "text": "My MongoDB document schema looks like this:I need to find documents with data.0.val1=\"xyz\" and data.0.val2=\"abc\" and following is my query which works fine:PlaygroundBut when I tried using $eq aggregation operator along with array dot notation, it is not returning expected results:PlaygroundDoes $eq aggregation operator works with array dot notation in find() ? What am I doing wrong?", "username": "Tiya_Jose" }, { "code": " db.collection.find( { \"data.0.val1\" : \"xyz\" , \"data.0.val2\" : \"abc\" } ) ­­ db.collection.find( { \"data.0\" : { \"val1\" : \"xyz\" , \"val2\" : \"abc\" } ) ­­", "text": "First, $and is redundant because there is an implicit and in the query.Second, $eq is also redundant because FieldName : Value implicitly mean equality.I would try with simply­­ db.collection.find( { \"data.0.val1\" : \"xyz\" , \"data.0.val2\" : \"abc\" } ) ­­or (which I find easier to read)­­ db.collection.find( { \"data.0\" : { \"val1\" : \"xyz\" , \"val2\" : \"abc\" } ) ­­", "username": "steevej" }, { "code": "", "text": "Thank you for the answer. But I was wondering How to use $eq aggregation operator along with array dot notation?", "username": "Tiya_Jose" }, { "code": "\"$data.0.val1\"dataval1$arrayElemAtdb.test.aggregate([ { $project: { first_ele: { $arrayElemAt: [ \"$data\", 0 ] } } } ]){ \"_id\" : 1, \"first_ele\" : { \"val1\" : \"xyz\", \"val2\" : \"abc\" } }\n{ \"_id\" : 2, \"first_ele\" : { \"val1\" : \"d\", \"val2\" : \"e\" } }\nfirst_ele{ $match: { \"first_ele.val1\": \"xyz\" } }_id: 1db.test.aggregate([\n { $addFields: { first_ele: { $arrayElemAt: [ \"$data\", 0 ] } } },\n { $match: { \"first_ele.val1\": \"xyz\" } }\n])\n$match$exprdb.test.aggregate([\n { $match: { \n $expr: {\n $eq: [ { \n $reduce: { \n input: [ { $arrayElemAt: [ \"$data\", 0 ] } ], \n initialValue: false,\n in: { $eq: [ \"$$this.val1\", \"xyz\" ] }\n } }, \n true \n ]\n }\n } }\n]).pretty()\ndb.test.find( { \n $expr: { \n $eq: [ { \n $reduce: { \n input: [ { $arrayElemAt: [ \"$data\", 0 ] } ], \n initialValue: false,\n in: { $eq: [ \"$$this.val1\", \"xyz\" ] }\n } }, \n true \n ]\n } \n}).pretty()", "text": "But I was wondering How to use $eq aggregation operator along with array dot notation?I think, \"$data.0.val1\" is not a correct expression in aggregation, when used with array fields. To get the data array’s first element’s val1 value, you need something like this using the aggregation operator $arrayElemAt. 
For example,db.test.aggregate([ { $project: { first_ele: { $arrayElemAt: [ \"$data\", 0 ] } } } ])returns two documents with first element of the array:Then, you can match with a specific field within the first_ele:\n{ $match: { \"first_ele.val1\": \"xyz\" } }This returns the document with _id: 1, the expected result.So, the aggregation with both stages would be:", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does "$eq" aggregation operator works with array dot notation?
2020-11-07T08:55:36.972Z
Does the "$eq" aggregation operator work with array dot notation?
6,351
null
[]
[ { "code": "", "text": "I want to update two additional fields (counter and date) only if another field has changed.\nA straightforward solution is to query this field and to update data only if it differs. I sure should be a more effective way to do it. Any help? \nAlso, I want to support upsert, i.e. the document existence is not guaranteed before the update)", "username": "Orlovsky_Alexander" }, { "code": "", "text": "Hi @Orlovsky_Alexander,If you need to update the fields based on another specific field or operation you better use a change stream logic:\nhttps://docs.mongodb.com/manual/changeStreams/#modify-change-stream-outputYou can set a pipeline to look for an existance of the field you watch in updateDescription change event field and perform the other 2 updates based on _id of the changed document.For more details see the change event description:Since the logic triggered can be any MongoDB code, you can run an upsert and it will trigger the event.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks! It’s a clever solution. But looks not very practical. Because I need to start a change stream on every update/upsert call and guarantee concurrency safety to avoid the double counter updates. Documents are (only) updated concurrently by external API calls.", "username": "Orlovsky_Alexander" }, { "code": "", "text": "Hi @Orlovsky_Alexander,Not sure what you mean by every update call… You can have a change stream running on the whole database and capturing events as they come in:\nhttps://docs.mongodb.com/manual/reference/method/db.watch/#exampleHowever collection level watch will also continually trigger in up coming updates. Moreover, you can resume the change stream so it is actually practical and used in many MongoDB components like Atlas triggers and mongodb connectors. Theoretical option is to use a kafka connector to sync collections but it sounds like an overkill to me.Each event will trigger it once and with retrayble writes you can make sure that it is retried currectly.Another way to do this is by calculating the needed values as part of every update statement if the update is done on the same collection, possibly by doing pipeline updates:The following page provides examples of updates with aggregation pipelines.Best\nPavel", "username": "Pavel_Duchovny" } ]
Update fields only on other field change
2020-11-08T19:44:50.623Z
Update fields only on other field change
6,195
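A shell sketch of the change-stream approach described above. The watched field name (`status`) and the `orders` collection are assumptions; the handler only fires when that field is part of the update:

```javascript
const pipeline = [
  { $match: {
      operationType: "update",
      "updateDescription.updatedFields.status": { $exists: true }
  } }
];

const cursor = db.orders.watch(pipeline);
while (cursor.hasNext()) {
  const change = cursor.next();
  // Bump the counter and timestamp only for documents whose "status" changed.
  db.orders.updateOne(
    { _id: change.documentKey._id },
    { $inc: { statusChangeCount: 1 }, $currentDate: { statusChangedAt: true } }
  );
}
```

The follow-up update touches only the counter and date fields, so it does not re-trigger the `$match` above; insert/upsert events can be handled by matching `operationType: "insert"` in the same pipeline if the initial write should also count.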
null
[]
[ { "code": "", "text": "Hello! One of our Realm databases has been failing since yesterday (Sat) morning - our others are OK. It attempts to restart, then fails again. We do not know what to do! There has been no response from the MongoDB/Realm support teams to our queries. This is affecting our live users.I know that this is more of a developer forum but has anyone ever seen this happen before? Or can anyone help or suggest a way to get the attention of someone who can help?", "username": "Keith_Sohl" }, { "code": "", "text": "The realm.io support portal has been deprecated - you need to raise an issue with MongoDB’s support portal with realm details and we will take a look.", "username": "Ian_Ward" }, { "code": "", "text": "Hi Ian, Thanks for getting back to me - much appreciated. We had already tried that but without any reply. Thankfully someone at your “info@” email address pick up our query and was able to fix our failing instance. Cheers.", "username": "Keith_Sohl" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Our Realm database is down
2020-11-08T19:44:33.152Z
Our Realm database is down
1,936
null
[ "java" ]
[ { "code": "String connectionString = \"mongodb+srv://....\";\nMongoClient mongoClient = MongoClients.create(connectionString);\nMongoDatabase database = mongoClient.getDatabase(\"test_db\");\nSystem.out.println(database.getName()); \nMongoCollection<Document> booksCollection = database.getCollection(\"books\");\nDocument doc = booksCollection.find(new Document(\"id\", 1)).first();\nreturn doc.toJson();\nmongoClient.close();\ncom.mongodb.MongoSocketWriteException: Exception sending message\n\tat com.mongodb.internal.connection.InternalStreamConnection.translateWriteException(InternalStreamConnection.java:551)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:433)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendCommandMessage(InternalStreamConnection.java:273)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:257)\n\tat com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)\n\tat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)\n\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:105)\n\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:62)\n\tat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:129)\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\n\tat java.lang.Thread.run(Thread.java:748)\nCaused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target\n\tat sun.security.ssl.Alerts.getSSLException(Alerts.java:192)\n\tat sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1946)\n\tat sun.security.ssl.Handshaker.fatalSE(Handshaker.java:316)\n\tat sun.security.ssl.Handshaker.fatalSE(Handshaker.java:310)\n\tat sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1688)\n\tat sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)\n\tat sun.security.ssl.Handshaker.processLoop(Handshaker.java:1038)\n\tat sun.security.ssl.Handshaker.process_record(Handshaker.java:966)\n\tat sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1064)\n\tat sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)\n\tat sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:750)\n\tat sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123)\n\tat com.mongodb.internal.connection.SocketStream.write(SocketStream.java:99)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:430)\n\t... 
9 more\nCaused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target\n\tat sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:397)\n\tat sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:302)\n\tat sun.security.validator.Validator.validate(Validator.java:262)\n\tat sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:330)\n\tat sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:237)\n\tat sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:132)\n\tat sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1670)\n\t... 18 more\nCaused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target\n\tat sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)\n\tat sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)\n\tat java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)\n\tat sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:392)\n\t... 24 more\nDocument doc = booksCollection.find(new Document(\"id\", 1)).first();\n", "text": "Hi all,I am new to MongoDB so I use the Atlas free tier to build a simple project. I built a simple java program (using java 8 and mongo-java-driver version 3.11.2) :This works fine in a main function but when I try to put it in a web service and call it I get the following exception:I must note that the exception happens in line :Any ideas?Thanks!Leonidas", "username": "Leonidas_Sioutis" }, { "code": "", "text": "Hi @Leonidas_Sioutis ,Please look at my answer hereBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "\tcom.mongodb.MongoException: java.lang.NoClassDefFoundError: Could not initialize class sun.security.ssl.SSLExtension\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:138)\nat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\nat java.lang.Thread.run(Thread.java:748)\nCaused by: java.lang.NoClassDefFoundError: Could not initialize class sun.security.ssl.SSLExtension\nat sun.security.ssl.SSLConfiguration.getEnabledExtensions(SSLConfiguration.java:380)\nat sun.security.ssl.ClientHello$ClientHelloKickstartProducer.produce(ClientHello.java:562)\nat sun.security.ssl.SSLHandshake.kickstart(SSLHandshake.java:509)\nat sun.security.ssl.ClientHandshakeContext.kickstart(ClientHandshakeContext.java:110)\nat sun.security.ssl.TransportContext.kickstart(TransportContext.java:233)\nat sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:394)\nat sun.security.ssl.SSLSocketImpl.ensureNegotiated(SSLSocketImpl.java:808)\nat sun.security.ssl.SSLSocketImpl.access$200(SSLSocketImpl.java:75)\nat sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1093)\nat com.mongodb.internal.connection.SocketStream.write(SocketStream.java:99)\nat com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:430)\nat com.mongodb.internal.connection.InternalStreamConnection.sendCommandMessage(InternalStreamConnection.java:273)\nat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:257)\nat 
com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)\nat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)\nat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:105)\nat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:62)\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:129)\n... 2 more\n|#]\ncom.mongodb.MongoException: java.lang.NoClassDefFoundError: sun/security/ssl/HelloExtension\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:138)\nat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\nat java.lang.Thread.run(Thread.java:748)\nCaused by: java.lang.NoClassDefFoundError: sun/security/ssl/HelloExtension\nat sun.security.ssl.SSLExtension.<clinit>(SSLExtension.java:225)\nat sun.security.ssl.SSLConfiguration.getEnabledExtensions(SSLConfiguration.java:380)\nat sun.security.ssl.ClientHello$ClientHelloKickstartProducer.produce(ClientHello.java:562)\nat sun.security.ssl.SSLHandshake.kickstart(SSLHandshake.java:509)\nat sun.security.ssl.ClientHandshakeContext.kickstart(ClientHandshakeContext.java:110)\nat sun.security.ssl.TransportContext.kickstart(TransportContext.java:233)\nat sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:394)\nat sun.security.ssl.SSLSocketImpl.ensureNegotiated(SSLSocketImpl.java:808)\nat sun.security.ssl.SSLSocketImpl.access$200(SSLSocketImpl.java:75)\nat sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1093)\nat com.mongodb.internal.connection.SocketStream.write(SocketStream.java:99)\nat com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:430)\nat com.mongodb.internal.connection.InternalStreamConnection.sendCommandMessage(InternalStreamConnection.java:273)\nat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:257)\nat com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)\nat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)\nat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:105)\nat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:62)\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:129)\n", "text": "Hi @Pavel_Duchovny ,thank you for your help! So I upgraded to jdk 8u271 and removed any previous jdks. The exception didn’t happen again, but now there is a new one:I run my test using glassfish 5.0.1 through netbeans 12.1Thank you again,Leonidas", "username": "Leonidas_Sioutis" }, { "code": "", "text": "Hi @Leonidas_Sioutis,I would recommend testing with a plain latest java driver as s this error seems to be something related to the local driver files.Best\nPavel", "username": "Pavel_Duchovny" } ]
Connection problem in JAVA webservice (MongoDB Atlas)
2020-11-06T16:23:52.435Z
Connection problem in JAVA webservice (MongoDB Atlas)
6,422
null
[ "replication", "performance" ]
[ { "code": "", "text": "Hi Team,I have a test cluster with 1shard(3node Rep set), 3config and 1router. I have created ~7gb of total data(including indexes) on primary. Now i have tried to added a new node to the replicaset. I observed that the initial data sync took 25mins to complete. I have tested this with and without enabling compression multiple times and the time taken is similar. There is plenty of ram configured.I feel that 25mins is too long for 7GB of data. if i have 50GB/100GB of data, the initial sync time would ~3hrs/6hrs respectively. Please suggest.Thanks,\nSaran", "username": "Sharan_Kumar" }, { "code": "", "text": "Hello @Sharan_Kumar, welcome to the MongoDB Community forum.You can try copying the existing member’s data to the new member being added (before adding the member to the replica-set) - this can reduce the amount of time for the new member to sync. This is explained in Add Members to a Replica Set - Prepare the Data Directory.See the the second bullet point:Before adding a new member to an existing replica set, prepare the new member’s data directory using one of the following strategies:", "username": "Prasad_Saya" } ]
Time taken for initial data sync in a sharded environment
2020-11-06T16:22:00.471Z
Time taken for initial data sync in a sharded environment
3,153
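A minimal mongosh sketch of the approach suggested in the thread above — seeding the new member from an existing member's data files (or a recent backup) before adding it, so it only replays the remaining oplog instead of performing a full initial sync. Host names, priorities, and the shell helpers used are illustrative placeholders, not values from the thread.

```js
// 1. Copy a recent data directory (or restore a snapshot) from an existing member
//    into the new node's dbPath, then start mongod with the same --replSet name.

// 2. From the primary, add the seeded member (low priority/votes while it catches up):
rs.add({ host: "newnode.example.net:27017", priority: 0, votes: 0 })

// 3. Watch replication lag shrink as it replays only the missing oplog entries:
rs.printSecondaryReplicationInfo()

// 4. Once it reports SECONDARY with near-zero lag, restore normal priority and votes:
cfg = rs.conf()
cfg.members[cfg.members.length - 1].priority = 1
cfg.members[cfg.members.length - 1].votes = 1
rs.reconfig(cfg)
```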
null
[ "dot-net" ]
[ { "code": " {\n \"Suppression\": {\n \"Id\": 0,\n \"Name\": \"string\",\n \"Description\": \"string\",\n \"AssetKey\": \"string\",\n \"Prod\": true,\n \"StartTime\": \"2020-11-05T22:16:32.452Z\",\n \"EndTime\": \"2020-11-05T22:16:32.452Z\",\n \"EnteredBy\": \"string\",\n \"EnteredTime\": \"2020-11-05T22:16:32.452Z\",\n \"Status\": true,\n \"PlannedMaintenance\": true\n },\n \"Filters\": [\n {\n \"Type\": \"string\",\n \"Value\": \"string\"\n }\n ]\n }\n \"Suppression\": [\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ]\n ],\n \"Filters\": [\n [\n [\n []\n ],\n [\n []\n ]\n ]\n ]\n", "text": "I am working on creating a C# .NET application that will be storing data in Mongo. This application will be storing documents that have extra elements that are complex objects, and I have added [BsonExtraElements] to the Dictionary that contains the extra elements. However, what is actually being stored is not what I am expecting. I can see the top level fields of the extra data but then there are just a lot of empty arrays.The extra data being passed in:Compared with how these are stored in the database", "username": "Brady_Shober" }, { "code": "", "text": "If you make a simple example that fails you as above (and describe also what you expected to see!) surely some C#.NET expert will come along and correct your code.", "username": "Jack_Woehr" }, { "code": " public class Schedule\n {\n public string Id { get; set; }\n [JsonProperty(Required = Required.Always)]\n public string Asset { get; set; }\n public string NamedEnvironment { get; set; }\n [JsonProperty(Required = Required.Always)]\n public string Name { get; set; }\n [JsonProperty(Required = Required.Always)]\n public ScheduleType Type { get; set; }\n public string Description { get; set; }\n [JsonProperty(Required = Required.Always)]\n public DateTime StartTime { get; set; }\n [JsonProperty(Required = Required.Always)]\n public DateTime EndTime { get; set; }\n [BsonExtraElements]\n public IDictionary<string, object> Data { get; set; }\n }\n public Schedule AddSchedule(Schedule s)\n {\n _scheduleCollection.InsertOne(s);\n return s;\n }\n {\n \"asset\": \"Example\",\n \"name\": \"Example\",\n \"type\": \"Suppression\",\n \"description\": \"Example\",\n \"startTime\": \"2020-11-06T14:49:36.779Z\",\n \"endTime\": \"2020-11-06T14:49:36.779Z\",\n \"data\": {\n \"Suppression\": {\n \"Id\": 0,\n \"Name\": \"string\",\n \"Description\": \"string\",\n \"AssetKey\": \"string\",\n \"Prod\": true,\n \"StartTime\": \"2020-11-06T14:50:09.494Z\",\n \"EndTime\": \"2020-11-06T14:50:09.494Z\",\n \"EnteredBy\": \"string\",\n \"EnteredTime\": \"2020-11-06T14:50:09.494Z\",\n \"Status\": true,\n \"PlannedMaintenance\": true\n },\n \"Filters\": [\n {\n \"Type\": \"string\",\n \"Value\": \"string\"\n }\n ]\n }\n }\n {\n \"id\": \"5fa5631d5b2a8f5d2f9cdecd\",\n \"asset\": \"Example\",\n \"namedEnvironment\": null,\n \"name\": \"Example\",\n \"type\": \"Suppression\",\n \"description\": \"Example\",\n \"startTime\": \"2020-11-06T14:49:36.779Z\",\n \"endTime\": \"2020-11-06T14:49:36.779Z\",\n \"data\": {\n \"suppression\": [\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ],\n [\n []\n ]\n ],\n \"filters\": [\n [\n [\n []\n ],\n [\n []\n ]\n ]\n ]\n }\n }\n", "text": "Here is some simplified code of what is in the application.This is the class that I am trying to store in the 
database:The Data dictionary exists to hold additional dynamic properties.\nThe code to add this to the database is very simpleThen here is the full object being passed in, and what I am expecting to be stored in the databaseBut when I try to retrieve this what I get back is", "username": "Brady_Shober" }, { "code": "Suppressionsuppression", "text": "Suppressionis not suppression … case sensitivity.", "username": "Jack_Woehr" }, { "code": "", "text": "It is not, we have an API object which has the JSON property names set to lowercase, but this will map to the uppercase C# property names.I was able to come up with a solution today. The API object will accept a dictionary as it already is, however in the Schedule object I have changed the Data property to a BsonDocument. During the mapping of the API object to a Schedule, I convert the Dictionary to a JSON string, then parse this JSON to a BsonDocument. This stores the data correctly, then we just reverse the process when retrieving the data.", "username": "Brady_Shober" }, { "code": "", "text": "Yes, I encounter the same sort of thing in PHP where sometimes I want to preserve the BSON objects into the PHP objects and sometimes I want to convert in and out. Good work.", "username": "Jack_Woehr" } ]
Extra Elements Stored as Empty Arrays
2020-11-05T22:48:45.680Z
Extra Elements Stored as Empty Arrays
3,819
null
[ "node-js", "production" ]
[ { "code": "MongoError: not mastercreateIndexcreateIndexheartbeatFrequencyMSkerberossetImmediateprocess.nextTicksetImmediatepackage.json", "text": "The MongoDB Node.js team is pleased to announce version 3.6.3 of the driverA regression introduced in v3.6.2 meant that createIndex operations would not be executed with a fixed\nprimary read preference. This resulted in the driver selecting any server for the operation, which would\nfail if a non-primary was selected.The driver periodically monitors members of the replicaset for changes in the topology, but ensures that\nthe “monitoring thread” is never woken sooner than 500ms. Measuring this elapsed time depends on a\nstable clock, which is not available to us in some virtualized environments like AWS Lambda. The result\nwas that periodically operations would think there were no available servers, and the driver would force\na wait of heartbeatFrequencyMS (10s by default) before reaching out to servers again for a new\nmonitoring check. The internal async interval timer has been improved to account for these environmentsA regression introduced in v3.6.0 forced the driver to reuse a single kerberos client for all\nauthentication attempts. This would result in incomplete authentication flows, and occaisionally even\na crash in the kerberos module. The driver has been reverted to creating a kerberos client per\nauthentication attempt.A change introduced in v3.6.1 switched all our usage of process.nextTick in the connection pool with\nsetImmediate per Node.js core recommendation. This was observed to introduce noticeable latency when the event loop\nwas experiencing pressure, so the change was reverted for this release pending further investigation.Reference: MongoDB Node.js Driver\nAPI: Index\nChangelog: node-mongodb-native/HISTORY.md at 3.6 · mongodb/node-mongodb-native · GitHubWe invite you to try the driver immediately, and report any issues to the NODE project.Thanks very much to all the community members who contributed to this release!", "username": "mbroadst" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Node.js Driver 3.6.3 Released
2020-11-06T13:50:23.469Z
MongoDB Node.js Driver 3.6.3 Released
2,347
null
[ "graphql" ]
[ { "code": "", "text": "Hi,we use graphql with realm and don’t understand how fields are ordered.Is it with the schema?", "username": "Nabil_Ben" }, { "code": "", "text": "Yes - the order of how fields appear in your GraphQL schema is a result of how they are ordered in your Realm Schema", "username": "Sumedha_Mehta1" }, { "code": "", "text": "It’s seems that when i save / deploy a schema, it’s ordered by alphabetic order.\nAnd it doesn’t even order them this way when i save a document.", "username": "Nabil_Ben" }, { "code": "", "text": "Ah you’re right - when I confirmed, my schema happened to be in alphabetical order already.Is there a reason why you want to choose the order of your fields?", "username": "Sumedha_Mehta1" }, { "code": "", "text": "son why you want to choose the order of your fields?For readability, specially in developement mode, we end up with different order for each document.", "username": "Nabil_Ben" }, { "code": "", "text": "Can you elaborate on what you mean by development mode? Are you referring to developing or the “Development Mode” related to our Sync offering?I’m also not quite sure what you mean by different order for each document, do you mind providing an example?", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Thank you Sumedha for your quick replies.I meant “when not in production mode”, we edit fields manually sometimes. To have them not in the same order is kind frustating.Without editing the schema or the front we have this:Plus we have an issue with date we don’t understand, from the front we send a new date(). But it retrieve 2 hours in atlas. We use graphql.", "username": "Nabil_Ben" }, { "code": "", "text": "Without editing the schema or the front we have thisWhere are you getting these screenshots from? Is this from postman/a client call? or are you getting this somewhere from Atlas?Plus we have an issue with date we don’t understand, from the front we send a new date(). But it retrieve 2 hours in atlas. We use graphql.Do you mind going more into detail about your issue with date? Is it adding 2 hours to the date added? And do you mind pasting your code that’s inserting the date?", "username": "Sumedha_Mehta1" }, { "code": "", "text": "From mongoDB Atlas.For the date, it compare with local date of the server in the triggers. So we have +2h.\nWe are in france.\nCan we setup the GMT in the mongodb realm app ? To have same hours.", "username": "Nabil_Ben" }, { "code": "", "text": "Any idea, will it be fixed?", "username": "Nabil_Ben" }, { "code": "", "text": "Hi @Nabil_Ben - the question about order on Atlas is for the Atlas team. I suggest filing a request for them to respect order here - https://feedback.mongodb.com/forums/924145-atlasOn the other hand, respecting the schema order on GraphQL is something that we are considering, but would still be at least a few months out if implemented.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "@Nabil_Ben Wanted to follow up on my original answer here, apologies for not understanding your question the first time around -This is a limitation of the GraphQL API at the moment due to some pre-processing we do on the GraphQL API before we perform the MongoDB insert due to the un-ordered nature of JS objects.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How fields order are chosen
2020-10-14T08:04:47.311Z
How fields order are chosen
2,658
null
[ "mongoose-odm" ]
[ { "code": " request_rec:[\n {\n type: mongoose.Schema.Types.ObjectId,\n ref: 'Trip',\n }\n ],\n request_send:[\n {\n type: mongoose.Schema.Types.ObjectId,\n ref: 'Trip',\n }\n ],\n", "text": "Hi all in my user model i am storing the following dataI am keeping the requests_rec, request_send arrays limited to 3 recent items by using $push $slice. So that i can show 3 items under request_send, request_received if he needs to view more then i am planning to fetch from trip model.I feels i can easily handle addition of new request_send, request_received. But i am feeling confused regarding how i shall keep this list updated when i remove items from request_send,request_received after accepting a request. Please guide me regarding how i shall keep the request_send,request_received array filled after removing an item(ie. if i removed an item from request_received in user model then if there are more requests pending how shall copy it to request_received in usermodel).", "username": "Jose_Kj" }, { "code": "", "text": "Hi @Jose_Kj,Have you considered using change stream yo track removal and act upon it? When a request is removed you can index the users having this request and update them by pulling it.If you need strong consistency you can potentially use transactions to commit a delete after both delete and update are performed.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "const tripSchema = new mongoose.Schema(\n {\n location:{\n type: String,\n required: true,\n index: true\n },\n desc:{\n type: String,\n required: true\n },\n members:[{\n type: mongoose.Schema.Types.ObjectId,\n ref: 'User',\n }],\n pending:[{\n type: mongoose.Schema.Types.ObjectId,\n ref: 'User',\n }],\n },\n {\n timestamps: true\n });\n tripSchema.index({ admin: 1, location: 1, date: 1}, { unique: true });\n module.exports = mongoose.model('Trip', tripSchema);\n", "text": "my trip schemaok thanks @Pavel_Duchovny, i am sorry if i was not clear. My main concern is like if there are say more than 5 requests received(pending) and we removed one from it. so what shall be the optimum criteria to select one request_received so as to copy to requests_received in user model(here in user model i am keeping copy of last 3 items).if i removed one request , present in request_received in user model and there are more requests. then i will need to copy one from trip model to user model.is my use model design optimum? should i remove the subset pattern request_send, request_received and directly access request_send,request_received from trip model? i will be displaying latest 3 request_send ,request_received in a single page.\nthanks", "username": "Jose_Kj" }, { "code": "", "text": "Hi @Jose_Kj ,If received and sent requests are shown as part of main user context it makes sense to keep them together as long as this array is not unboundedly growing. Since you mentioned 3 last elemnts will be stored I suppose you should have a timeDate per element and when you do the $puah with $slice sort based on the time.Since as you mentioned only last 3 will be stored inside the user document, whenever you pull a user from the trip document you will need to check if this operation does not also required to be reflected in user document.There are several ways to do so:I recommend reading the following article which has some great tips on how not follow known anti patterns. 
Relationships are one with relevant consideration for you.\nhttps://www.mongodb.com/article/schema-design-anti-pattern-summaryThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to keep my capped array updated
2020-11-05T17:44:46.820Z
How to keep my capped array updated
2,223
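A small sketch of the $push with $each/$sort/$slice update mentioned in the capped-array thread above, assuming each cached request element carries a createdAt timestamp; the field and variable names are illustrative rather than taken from the poster's schema.

```js
// Keep only the 3 most recent received requests embedded on the user document.
db.users.updateOne(
  { _id: userId },
  {
    $push: {
      request_rec: {
        $each: [{ trip: tripId, createdAt: new Date() }],
        $sort: { createdAt: -1 },   // newest first
        $slice: 3                   // cap the embedded cache at 3 entries
      }
    }
  }
)

// After accepting/removing a request, the cache can be rebuilt from the Trip
// collection (the source of truth) — e.g. the 3 newest trips still pending for
// this user — and written back with a $set on request_rec.
```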
null
[]
[ { "code": "", "text": "Hey, everyone. So I got some mongodb atlas credits via Github Education, but I don’t have a credit card. I was wondering if there is a way for me to use these credits without a credit card.Thanks.", "username": "Allen_He" }, { "code": "", "text": "Hi Allen,The other option is to use PayPal. Unfortunately we do not offer a way to consume paid-tier Atlas clusters without a payment method on file, even if you’re using the promo credits you’ve received.However, please note that you can always use our free forever M0 free tier.Cheers\n-Andrew", "username": "Andrew_Davidson" } ]
Using MongoDb credit without credit card
2020-11-03T09:33:36.774Z
Using MongoDb credit without credit card
3,470
null
[ "queries" ]
[ { "code": "", "text": "I have a very big MongoDB 140 GB and I need to reduce it drastically before doing a resync. For that I would like to remove all collections older than 2 days ? Could you please help me to find the right command to do that ?MAny thanks in advance\nREgards\nRichard", "username": "MARCELIN_Richard" }, { "code": "birthOfCollection", "text": "Add a document with a single datetimestamp field called birthOfCollection or whatever and search each collection for it once a day and drop the collection if the age is over the limit.", "username": "Jack_Woehr" }, { "code": "", "text": "Hi Jack\nThanks! I’m kind of beginner in Mongo… Could you please tell me what would be the Mongo query to do that ?", "username": "MARCELIN_Richard" }, { "code": "", "text": "What language are you using?", "username": "Jack_Woehr" }, { "code": "", "text": "NOt sure I got it…What do you mean which language ? I have been using Mongo 3.4.6 on Red Hat 6.1…Or maybe just tell me how to find out the language I’m using on my Mongo VM ?Thanks Jack", "username": "MARCELIN_Richard" }, { "code": "mongomongosh", "text": "@MARCELIN_Richard, I mean, you’re writing programs using MongoDB, right? What language are you writing them in? Python? Node.js? PHP? etc.Or are these commands you are executing entirely in the mongo or mongosh shell?", "username": "Jack_Woehr" }, { "code": "", "text": "Hi JackThanks for your feedback\nI would like to execute this command directly in Mongo or using a Mongo shell script", "username": "MARCELIN_Richard" }, { "code": "creationDateload", "text": "Again, I don’t know if there’s a magic command that deletes collections by date. Doubt it.Pretty easy to do in Python, or you could write a mongo script and load it into a mongo shell.", "username": "Jack_Woehr" }, { "code": "tempdata_20201105tempdata_idmongodb.getCollectionNames().forEach(function(coll) {\n // Find document with smallest key\n let doc = db[coll].find({}).sort({_id:1}).limit(1).next();\n if (doc) {\n if (typeof doc._id.getTimestamp === 'function') {\n // Print collection name, ObjectID, timestamp \n print(\n coll,\n doc._id,\n doc._id.getTimestamp()\n )\n } else {\n print(\n coll,\n doc._id,\n '(not an ObjectID)'\n )\n }\n }\n})\nmongo", "text": "Welcome to the community forum @MARCELIN_Richard!The MongoDB server does not track collection creation dates.As @Jack_Woehr suggested, one approach would be to add a document explicitly tracking the creation date. I think a more straightforward alternative would be to create your collections using a date-based naming convention, eg: tempdata_20201105 for a tempdata collection created on the 5th of Nov.Another option for automatically removing old data would be to create a Time-To-Live (TTL) Index on a date field. Depending on your index definition, a TTL index will automatically remove documents from a collection after a certain amount of time or at a specific clock time. This would be less efficient than using a naming convention to identify collections to drop, as the TTL index still needs to remove individual documents.However, it sounds like you already have collection data and would like to try to determine when each was created. If you are using default ObjectIDs for _id (primary key), you can infer the collection creation data based on the leading timestamp in the ObjectID.For example, in the mongo shell:Instead of printing timestamps, you could compare with the current time and drop the collection if it is older than your desired expiry. 
I would recommend writing this sort of script using a full MongoDB driver (eg Python or Node) rather than the mongo shell. Language drivers have more robust error handling and debugging features, which will be especially useful for destructive actions like dropping collections.Definitely take a backup before running any custom scripts that might delete or modify data, and test in a representative staging or development environment before running in production. There’s no proper “undo” for deleting data aside from restoring from a recent backup.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to remove all Collections older than 2 days?
2020-11-04T18:53:23.058Z
How to remove all Collections older than 2 days?
15,771
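One way to finish the shell sketch from the thread above — compare the timestamp of each collection's oldest ObjectId against a two-day cutoff and drop anything older. The drop itself is left commented out; this assumes default ObjectId _id values and should only be run after a backup and a dry run.

```js
// mongosh / legacy mongo shell: list (and optionally drop) collections whose
// oldest document is more than 2 days old.
const cutoff = new Date(Date.now() - 2 * 24 * 60 * 60 * 1000);

db.getCollectionNames().forEach(function (coll) {
  const cursor = db[coll].find({}).sort({ _id: 1 }).limit(1);
  if (!cursor.hasNext()) return;                            // skip empty collections
  const doc = cursor.next();
  if (typeof doc._id.getTimestamp !== 'function') return;   // skip non-ObjectId keys
  const born = doc._id.getTimestamp();
  if (born < cutoff) {
    print('Would drop ' + coll + ' (oldest document: ' + born + ')');
    // db[coll].drop();   // uncomment only after verifying the printed list
  }
});
```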
null
[ "aggregation" ]
[ { "code": "[\n {\n $match: {\n $and: [\n { $or: [{ make: null }, { make: \"BMW\" }] },\n { $or: [{ model: null }, { model: \"320\" }] },\n { $or: [{ price: null }, { price: { $gte: 1000 } }] },\n { $or: [{ price: null }, { price: { $lte: 80000 } }] },\n ],\n },\n },\n { $project: { _id: 0, make: 1, model: 1, price: 1 } },\n];\n[\n {\n $match: {\n $and: [\n { $or: [{ make: null }, { make: \"asds\" }] },\n { $or: [{ model: null }, { model: \"320\" }] },\n { $or: [{ price: null }, { price: { $gte: 1000 } }] },\n { $or: [{ price: null }, { price: { $lte: 80000 } }] },\n ],\n },\n },\n { $project: { _id: 0, make: 1, model: 1, price: 1 } },\n];\n", "text": "i tried to convert into mongodb aggregation famework and every thing runs as expectedbut when not matching a “make” it returns nothingi want to filtter cars by make, model, price\nand when not match make, go to search for rest of query model, price\nwhat should i do, or any other idea to make this filter", "username": "Mustafa_Wael" }, { "code": "make : nullmake : { $exists : false}", "text": "Hi @Mustafa_Wael,Welcome to MongoDB community!I am not sure I understand your end goal here.Since the “make” field conditions are part of AND once your documents does have documents with field “make” populated with null values or with “asds” values the query is expected to give no results.Please note that make : null is different to make : { $exists : false} so if your docs actually don’t have this field this will yield results.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "\"\nFROM car\n WHERE (@make is NULL OR @make = make)\n AND (@model is NULL OR @model = model)\n AND (@minPrice is NULL OR @minPrice <= price)\n AND (@maxPrice is NULL OR @maxPrice >= price)\n\"\n", "text": "The end goal is\nIf the “make” field not match the condition go to the “model” field if not go to “mode” fieldLike what this sqlite query do", "username": "Mustafa_Wael" }, { "code": "[\n {make: 'Audi', model: 'A2'},\n {make: 'BMW', model: '116'},\n {make: 'BMW', model: 'X1'},\n {make: 'BMW', model: '320'},\n {make: 'Ford', model: 'Fiesta'},\n {make: 'Mazda', model: '6'},\n {make: 'Merces-Benz', model: '200'},\n {make: 'Mazda', model: '6'},\n {make: 'Mazda', model: 'e250'},\n]\n[\n {\n $match: {\n $and: [\n { $or: [{ make: null }, { make: \"BMW\" }] },\n { $or: [{ model: null }, { model: \"320\" }] },\n { $or: [{ price: null }, { price: { $gte: 1000 } }] },\n { $or: [{ price: null }, { price: { $lte: 80000 } }] },\n ],\n },\n },\n { $project: { _id: 0, make: 1, model: 1, price: 1 } },\n];\n { \"make\" : \"BMW\", \"model\" : \"320\", \"price\" : 18999 }[\n {\n $match: {\n $and: [\n { $or: [{ make: null }, { make: \"asds\" }] },\n { $or: [{ model: null }, { model: \"320\" }] },\n { $or: [{ price: null }, { price: { $gte: 1000 } }] },\n { $or: [{ price: null }, { price: { $lte: 80000 } }] },\n ],\n },\n },\n { $project: { _id: 0, make: 1, model: 1, price: 1 } },\n];\n { \"make\" : \"BMW\", \"model\" : \"320\", \"price\" : 18999 }", "text": "i have this car collectionand this aggregation that works as expectedthis aggregation will return this document\n { \"make\" : \"BMW\", \"model\" : \"320\", \"price\" : 18999 }but when i use the below aggregation returns nothingi know that this aggregation should return nothing because “make” field conditions are part of ANDso i want the correct aggregation to return this document => { \"make\" : \"BMW\", \"model\" : \"320\", \"price\" : 18999 }the main idea is if “make” field is not match the condition will go to the next condition 
and if it not match will go to the next condition and so asthanks for reply ", "username": "Mustafa_Wael" }, { "code": "", "text": "Hi @Mustafa_Wael,What you describe is an $or condition and not $and which has a short circuit mechanism, means first expression false everything is false.However, it sounds that you should implement this search in stages , meaning query for “make” if no results query for model etc.Index each field to speed up search.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "{field: null}{field: {$exists: <false> || <true> } }$match: {\n $and: [\n { $or: [{ make: { $exists: false } }, { make: \"BMW\" }] },\n { $or: [{ model: { $exists: true } }, { model: \"asd\" }] },\n { $or: [{ price: { $exists: false } }, { price: { $gte: 1000 } }] },\n { $or: [{ price: { $exists: false } }, { price: { $lte: 50000 } }] },\n ],\n },\n", "text": "solved this by replace {field: null} to {field: {$exists: <false> || <true> } }will check if “make” field is exist or not.if not exist ($exists: false) will continue to the next condition that is in “$or” operator and if is true will continue to the next “$or” condition in ‘$and’ operator,if exist ($exists: true) will only continue to the next “$or” condition in ‘$and’ operator and so on.Thanks for helping", "username": "Mustafa_Wael" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Getting specified data with mongodb aggregation framework
2020-10-31T19:36:18.986Z
Getting specified data with mongodb aggregation framework
2,131
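A tiny illustration of the distinction raised in the thread above: {field: null} matches both a literal null and a missing field, while {field: {$exists: false}} matches only the missing case. The collection and values below are made up.

```js
db.demo.insertMany([
  { make: "BMW", model: "320" },   // field present
  { make: null,  model: "320" },   // field explicitly null
  {              model: "320" }    // field missing entirely
])

db.demo.countDocuments({ make: null })                 // 2  (null + missing)
db.demo.countDocuments({ make: { $exists: false } })   // 1  (missing only)
db.demo.countDocuments({ make: { $ne: null } })        // 1  (only the "BMW" document)
```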
null
[]
[ { "code": "", "text": "Hi everybodyI’d like to run a mongo query and put the output of this query within a linux file.\nFor instance, I’d like to put the output in a file so-called /tmp/MongoQuery…How to do that please?Many thanks for your suggestions\nRegards\nRichard", "username": "MARCELIN_Richard" }, { "code": "", "text": "One way would be to use –query option of mongoexport for the query and –out to specify /tmp/MongoQuery.See https://docs.mongodb.com/database-tools/mongoexport/", "username": "steevej" } ]
How to put the output of a mongo query in a linux file?
2020-11-05T19:12:44.369Z
How to put the output of a mongo query in a linux file?
1,750
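Hedged examples of the two approaches from the thread above — mongoexport with --query/--out, or evaluating a query in the shell and redirecting stdout to a file. The database, collection, and filter are placeholders.

```sh
# Export matching documents straight to a file
mongoexport --uri="mongodb://localhost:27017/mydb" \
            --collection=mycoll \
            --query='{ "status": "active" }' \
            --out=/tmp/MongoQuery

# Or run an ad-hoc query in the shell and redirect its output
mongo mydb --quiet --eval 'printjson(db.mycoll.find({ status: "active" }).toArray())' > /tmp/MongoQuery
```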
null
[ "dot-net" ]
[ { "code": " if (BsonClassMap.IsClassMapRegistered(typeof(MessageContainer)))\n {\n BsonClassMap.RegisterClassMap<MessageBase>(cm =>\n {\n cm.AutoMap();\n cm.SetIsRootClass(true);\n });\n }\n[BsonIgnoreExtraElements(true)]\npublic class MessageContainer\n{\n [BsonId]\n public ObjectId Id { get; set; }\n\n [BsonDateTimeOptions(Kind = DateTimeKind.Utc)]\n public DateTime TimeStamp { get; set; }\n\n [BsonElement]\n public string MessageType { get; set; }\n\n public MessageBase Message { get; set; }\n\n [BsonConstructor]\n public MessageContainer()\n {\n\n }\n\n [BsonConstructor]\n public MessageContainer(MessageBase message)\n {\n Message = message ?? throw new ArgumentNullException(nameof(message));\n TimeStamp = DateTime.UtcNow;\n MessageType = message.GetType().Name;\n }\n\n [BsonConstructor]\n public MessageContainer(DateTime timeStamp, string messageType, MessageBase message)\n {\n TimeStamp = timeStamp;\n MessageType = messageType ?? throw new ArgumentNullException(nameof(messageType));\n Message = message ?? throw new ArgumentNullException(nameof(message));\n } \n}\npublic abstract class MessageBase\n {\n protected MessageBase();\n\n public MessageBase CreateCopy();\n }\npublic bool Write(MessageContainer message)\n{\n if (message != null && _mongoCollection != null)\n {\n try\n {\n\n if (!BsonClassMap.IsClassMapRegistered(typeof(MessageContainer)))\n {\n BsonClassMap.RegisterClassMap<MessageContainer>();\n BsonClassMap.RegisterClassMap<MessageBase>(cm =>\n {\n cm.AutoMap();\n cm.SetIsRootClass(true);\n });\n }\n\n _mongoCollection.InsertOne(message);\n \n return true; \n }\n catch (Exception Ex)\n {\n Console.WriteLine(Ex.Message);\n }\n }\n\t\n return false;\n}\npublic bool GetFirstAndLastMessageTime(out DateTime firstMessageTime, out DateTime lastMessageTime)\n{\n if (BsonClassMap.IsClassMapRegistered(typeof(MessageContainer)))\n {\n BsonClassMap.RegisterClassMap<MessageBase>(cm =>\n {\n cm.AutoMap();\n cm.SetIsRootClass(true);\n });\n }\n \n var filter = Builders<MessageContainer>.Filter.Empty; \n var first = _mongoCollection.Find(filter).Sort(Builders<MessageContainer>.Sort.Ascending(\"TimeStamp\")).Limit(5).ToList().First();\n var last = _mongoCollection.Find(filter).Sort(Builders<MessageContainer>.Sort.Descending(\"TimeStamp\")).Limit(5).ToList().First(); \n firstMessageTime = first.TimeStamp; \n lastMessageTime = last.TimeStamp;\n return true;\n}\n", "text": "I have two functions one for write and one for read when i try to use the read function first i get this error:An error occurred while deserializing the Message property of class DDSRecorder.MessageContainer: Instances of abstract classes cannot be createdHere is what i dont get, if i use the write first at least once then the read works fine. i dont understand what happens in the background that makes it ok to initialize abstract class if we used it once to write.Adding the map for it didn’t resolve the problem:Here is the class i am using for the mongo collection.And the abstract class inside:Example of write method:Example of read method:What am i missing for it to be able to initialize the abstract class without the need of writing first?", "username": "Arkady_Levin" }, { "code": "", "text": "I’ve never written a line of .NET but the answer looks pretty obvious:Apparently in .NET abstract classes don’t get registered unless an implementing class has already been instanced.I think you are in a life-and-death struggle with the language model, @Arkady_Levin ", "username": "Jack_Woehr" } ]
Instances of abstract class cannot be created unless I write to database first
2020-11-05T17:44:10.848Z
Instances of abstract class cannot be created unless I write to database first
5,229
null
[]
[ { "code": "", "text": "Did you know that each MongoDB.live event is preceded by a special, fun, virtual event just for the community? Hosted by the local user group for each region, these events are open to anyone who wants to attend! Meet other MongoDB users, your local Champions, and our Developer Advocacy team. Play games and participate in fun activities (with prizes!). Learn some cool MongoDB stuff with presentations from our experts.The pre-game shows start NEXT MONDAY (9 November), so don’t wait to RSVP! Check out @Celina_Zamora’s post here for more info:.live Pre-Game User Group November events are happening SOON!", "username": "Jamie" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Register today for MongoDB .live User Group Pre-Game Events (Free!)
2020-11-05T18:38:27.869Z
Register today for MongoDB .live User Group Pre-Game Events (Free!)
3,023
null
[]
[ { "code": "const toSave = new Model({key: value});\ntoSave.save()\n{\n \"message\": {\n \"operationTime\": \"6890830329813139457\",\n \"ok\": 0,\n \"code\": 13,\n \"codeName\": \"Unauthorized\",\n \"$clusterTime\": {\n \"clusterTime\": \"6890830329813139457\",\n },\n \"name\": \"MongoError\"\n }\n}\n", "text": "I followed the Mlab migration guide to migrate a Mlab Sandbox database to an Atlas. When I use the provided connection string in my application, I am unable to write to the database. I can only read from it.I do something like belowI can read from the databaseI can read and write to the database from the terminal, I can’t write with my applicationUbuntu 16 LTSNetwork access is at allow all 0.0.0.0/0 (includes your current IP address)User access: readWriteAnyDatabase@adminMongoose version: v5.10MongoDB server version v4.2Type: Replica setCluster Tier: M0 Sandbox (General)Connection string: mongodb+srv://username:[email protected]/dbname?retryWrites=true&w=majority", "username": "fibre_dev" }, { "code": "atlasAdmin", "text": "Just a wild guess, there’s something in your connection string that isn’t being passed by your application correctly (you didn’t say what language) e.g., a metachar.In any case, if you just want to brute-force debug it, try giving the user atlasAdmin privileges once and see if it works.If it doesn’t work, I suppose it’s telling you the truth: you’re genuinely not authorized. In which case it’s back for looking for the metachar in your password that doesn’t work passed in a string in the language you are using.Again, just a guess.", "username": "Jack_Woehr" }, { "code": "", "text": "Hello,Can you email [email protected]? We’ll be happy to help troubleshoot.", "username": "Adam_Harrison" } ]
Unable to write to migrated Mongo Atlas Database
2020-11-03T10:43:17.791Z
Unable to write to migrated Mongo Atlas Database
2,120
null
[ "dot-net", "field-encryption" ]
[ { "code": "", "text": "Are there any C# Driver examples showing how to use Field Level Encryption? Do the models define the encrypted fields as byte arrays or does the driver convert the string values to the bindata subtype 6?", "username": "Eddie_Conner" }, { "code": "/* Copyright 2019-present MongoDB Inc.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n* http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*/\n\nusing MongoDB.Driver.Core.Misc;\nusing MongoDB.Driver.Core.TestHelpers.XunitExtensions;\nusing System;\nusing System.Collections.Generic;\nusing System.Threading;\n", "text": "For those who might be interested, I found the following example.", "username": "Eddie_Conner" }, { "code": "", "text": "Hi @Eddie_Conner,\nglad that you found the example on the driver repository. Another example .NET/C# client encryption can be found atmaster/c-sharpsample code for the client-side field level encryption project - field-level-encryption-sandbox/c-sharp at master · mongodb-labs/field-level-encryption-sandboxAlso, please see Client-Side Field Level Encryption Guide for more information.", "username": "wan" }, { "code": "", "text": "@Eddie_Conner Did you successfully implemented this feature? I am trying to do it for almost 3 days, still not successful. Can you please send me the steps, what and how you did this, and where you running this code in Linux containers?", "username": "Ramesh_M" } ]
C# Field Level Encryption Example
2020-03-13T03:22:09.653Z
C# Field Level Encryption Example
2,872
null
[ "react-native" ]
[ { "code": "", "text": "I am trying to encode a PDF with BSON, pass the buffer into MongoDB via Mongoose, and then fetch the data from the database, deserialize it, and render the PDF using react-pdf. However, I am currently getting the following error:Error: Invalid parameter object: need either .data, .range or .urlBased on my troubleshooting, it would seem that my problem is coming from the process of encoding/decoding. Before I encode, this is my PDF file:You’ll notice the File prototype and also that it is labelled “File” at the start (I’m not sure what that label is called).\nNow, after I have decoded it, this is the file:As you can see, it is now just a generic object.Is there a way to encode/decode a PDF file (or any file) while maintaining this prototype information? Am I perhaps doing something else wrong? React-PDF should be working regardless, as I’m pretty sure the object still has a .data parameter… Though I will double check that. I know the file must because it renders fine before I encode it.Any and all help is very much appreciated!", "username": "Donovan_Thacker" }, { "code": "", "text": "Sorry I don’t really like to bump things but I could really use help with this.", "username": "Donovan_Thacker" } ]
Encoding/Decoding PDF for react-pdf
2020-10-29T16:52:45.639Z
Encoding/Decoding PDF for react-pdf
3,256
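The react-pdf thread above was never resolved; the following is only a sketch of one common pattern, not necessarily what the original poster needs: store the PDF bytes as BSON Binary through the Node.js driver, return the raw bytes over HTTP, and give react-pdf a { data: Uint8Array } object instead of the deserialized plain object. The routes, database names, and client snippet are invented.

```js
// server.js — assumes Express and the official MongoDB Node.js driver
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
const client = new MongoClient('mongodb://localhost:27017');

app.get('/pdf/:name', async (req, res) => {
  const doc = await client.db('files').collection('pdfs').findOne({ name: req.params.name });
  if (!doc) return res.sendStatus(404);
  // doc.pdf was inserted as a Node.js Buffer and comes back as BSON Binary;
  // its .buffer property exposes the raw bytes again.
  res.type('application/pdf').send(doc.pdf.buffer);
});

client.connect().then(() => app.listen(3000));

// On the React side, fetch the bytes and hand react-pdf a Uint8Array, which
// satisfies its "need either .data, .range or .url" requirement:
//   const bytes = new Uint8Array(await (await fetch('/pdf/mydoc')).arrayBuffer());
//   <Document file={{ data: bytes }} />
```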
null
[ "indexes" ]
[ { "code": "", "text": "As the topic said, We find some feature for hot-warm-cold-delete in mongodb . some like Elasticsearch 6.0 ILM", "username": "hm_li" }, { "code": "", "text": "Can you help me understand what you want here? Maybe if you can describe your use case for this feature?", "username": "Daniel_Pasette" }, { "code": "", "text": "I am finding some way to do hot, warn and cold data storage on mongodb. Do you have any idear?", "username": "hm_li" }, { "code": "", "text": "Hi @hm_li,I believe we usually do it with sharding :Is that related?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "I think the new Online Archive service would be a more simple solution to data tiering. Take a look and see if this would meet your needs. Let us know!", "username": "Daniel_Pasette" }, { "code": "", "text": "Thanks for your tips, As the article said, It uses the feature of sharding zone to split the database.collection which data would be stored diff zone such as named hot zone, warm zone, etc.There are some ideas about Hot, warm, and cold data storage requirement as below:The hot place just has the last data or the frequent read of data. It’not all data in this place.It will transfer the hot data which doesn’t use during the period to a warm place and will be in a cold place finally and The hot place will erase the old data.The data of cold place will store the hot place if the data have operation frequently", "username": "hm_li" } ]
Does MongoDB plan to support Index Lifecycle Management?
2020-11-04T18:52:30.117Z
Does MongoDB plan to support Index Lifecycle Management?
3,116
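A minimal mongosh sketch of the zoned-sharding tiering approach referenced in the thread above, with made-up shard, database, and field names: tag one shard "hot" and one "cold", split a date-based shard key range between them, and move the boundary over time so data ages from hot to cold hardware.

```js
// Assumes mydb.events is sharded on { insertDay: 1 }
sh.addShardToZone("shardFastSSD", "hot")
sh.addShardToZone("shardCheapHDD", "cold")

// Older data lives on the "cold" shard...
sh.updateZoneKeyRange("mydb.events",
  { insertDay: MinKey }, { insertDay: ISODate("2020-09-01") }, "cold")

// ...recent data lives on the "hot" shard.
sh.updateZoneKeyRange("mydb.events",
  { insertDay: ISODate("2020-09-01") }, { insertDay: MaxKey }, "hot")

// Periodically remove and re-add these ranges with a newer boundary date to
// "age out" documents; the balancer migrates the affected chunks automatically.
```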
null
[]
[ { "code": "", "text": "What is invalid URI when i am working on mflix data???", "username": "Ravi_Ranjan_Sahay" }, { "code": "", "text": "Please post a screenshot of the issue.I suspect you are taking one of the 220 MongoDB University course. If this is the case, it is more appropriate to use the course specific forum of MongoDB University.", "username": "steevej" } ]
What is this error???
2020-11-05T11:06:57.640Z
What is this error???
1,196
null
[ "dot-net" ]
[ { "code": "", "text": "Hi @nirinchev,I’m contemplating using Realm’s GetMongoClient for a server component to directly access the documents in Atlas. But I can’t seem to find any transaction handling exposed. Is this something we will see in the near future for the .NET Realm API ?Regards,\nTim", "username": "timah" }, { "code": "", "text": "Unfortunately, transactions are only available when using the MongoDB driver (which is what the linked function example does). All Realm SDKs (the .NET one included) don’t use the wire protocol to communicate with MongoDB, but rather go via an HTTP API exposed by MongoDB Realm. Unfortunately, there’s no HTTP API exposed to allow interacting with transactions. I’ll ask the cloud team to see if that’s something on their TODO list, but in the meantime, feel free to suggest it on our feedback portal.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
GetMongoClient and Transactions?
2020-11-05T11:46:30.300Z
GetMongoClient and Transactions?
1,510
https://www.mongodb.com/…d40c69b8cf3.jpeg
[ "sharding" ]
[ { "code": "", "text": "Hi,According to MongoDB memory utilization in cluster environment.\nIn our environment has 3 config nodes and 3 shard nodes.\nWe found Node4 in Shard nodes consume memory higher than aside nodes (Node5 and Node6 but Database Active on Node5).\n\nNode4828×338 29.4 KB\n\n\nNode5830×337 29.9 KB\n\n\nNode6824×334 27.3 KB\nCould you please help us to check which command to review which thing drain memory from non active node?Best regards,\nJeans", "username": "Jumphol_Buntuek" }, { "code": "", "text": "Hi @Jumphol_Buntuek,Welcome to MongoDB community!Best way to clear memory of a secondary is to reboot it.However, to understand what consume memory you will need profiling tools (ops manager/mongotop) and best is to have a support engineer from Mongo to analyse it.However, this is done via a support subscription and not through this community forum.Best\nPavel", "username": "Pavel_Duchovny" } ]
One shard consumes higher memory
2020-11-05T04:53:23.581Z
One shard consumes higher memory
1,875
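A few mongosh commands that help answer "what is holding memory on that member" before reaching for Ops Manager or mongotop, as suggested in the thread above — resident size, WiredTiger cache usage, and connection count (each open connection adds its own overhead). The fields are standard serverStatus output; the host name is a placeholder.

```js
// Run while connected directly to the member in question (e.g. Node4)
const ss = db.serverStatus()

print("resident MB:        " + ss.mem.resident)
print("WT cache bytes:     " + ss.wiredTiger.cache["bytes currently in the cache"])
print("WT cache max bytes: " + ss.wiredTiger.cache["maximum bytes configured"])
print("connections:        " + ss.connections.current)

// Per-collection read/write activity on that member:
//   mongotop --host node4.example.net:27017
```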
null
[ "aggregation" ]
[ { "code": "{\n \"_id\" : \"BRS1736451\",\n \"BookingNum\" : \"BRS1736451\",\n \"AccountId\" : \"0010Q00000xMDB6QAO\",\n \"BookingLines\" : [ \n {\n \"BookingLines\" : {\n \"BookingLine_id\" : 11289009,\n \"ParentLineId\" : null,\n \"MarketKey\" : \"Brazil\"\n }\n }, \n {\n \"BookingLines\" : {\n \"BookingLine_id\" : 11289014,\n \"ParentLineId\" : null,\n \"MarketKey\" : \"Brazil\"\n }\n }\n\t]\n}\n{\n \"_id\" : ObjectId(\"5f9ff6744bcc4ae435f10760\"),\n \"bookingNumber\" : \"BRS1736451\",\n \"accountId\" : \"0010Q00000xMDB6QAO\",\n \"bookingLines\" : [ \n {\n\t\t\t\"bookingLineId\" : 11289009,\n\t\t\t\"lineId\" : null,\n\t\t\t\"marketKey\" : \"Brazil\"\n }, \n {\n\t\t\t\"bookingLineId\" : 11289014,\n\t\t\t\"lineId\" : null,\n\t\t\t\"marketKey\" : \"Brazil\"\n\n }\n\t]\n}\nexports = function(fullDocument){\n\nconsole.log('BookingNumber: ' + fullDocument.BookingNum);\n\nconst pipeline = [\n {\n $project: {\n _id: 0,\n bookingNumber: fullDocument.BookingNum,\n accountId: fullDocument.AccountId,\n bookingLines: fullDocument.BookingLines,\n }\n },\n {\n $addFields: {\n bookingLines: {\n $map: {\n input: \"$bookingLines\",\n as: \"bookingLine\",\n in: {\n\t\t\t bookingLineId: \"$$bookingLine.BookingLine_id\",\n\t\t\t\tparentBookingLineId: \"$$bookingLine.ParentBookingLine_id\",\n\t\t\t\t marketKey: \"$$bookingLine.MarketKey\"\n}}}\n}},\n {\n $merge: {\n into: 'B', \n on: 'bookingNumber',\n whenMatched: 'replace', \n whenNotMatched: 'insert'\n }\n }\n];\n\nconst collectionA = context.services.get(\"Cluster0\").db(\"XXXX\").collection(\"A\");\n\nreturn collectionA.aggregate(pipeline).toArray().then(() => {\n console.log(`Successfully moved ${fullDocument.bookingNumber} data to B collection.`);\n })\n .catch(err => console.error(`Failed to move ${fullDocument.bookingNumber} data to B collection: ${err}`));\n};\n", "text": "Hi,I have a document in the below structure in collection A and I am using Database triggers with aggregation to transform the data and push the transformed data to collection B.Document in Collection A:In the collection B, I need below changesTransformed Data needed in Collection B will look like below:I am trying to use the below code which is working fine for first 2 requirements but for the 3rd requirement i.e. booking Lines are not getting transformed according to my expectation.Code snippet:Please help me on this.Thanks,\nVinay", "username": "Vinay_Gangaraj" }, { "code": "$addFieldsBooking Lines{\n $addFields: {\n bookingLines: {\n $map: {\n input: \"$bookingLines\",\n in: {\n bookingLineId: \"$$this.BookingLines.BookingLine_id\",\n lineId: \"$$this.BookingLines.ParentLineId\",\n marketKey: \"$$this.BookingLines.MarketKey\"\n }\n }\n }\n }\n}", "text": "Hello @Vinay_Gangaraj.I am trying to use the below code which is working fine for first 2 requirements but for the 3rd requirement i.e. booking Lines are not getting transformed according to my expectation.Try this $addFields stage. Replace with this one in the aggregation to transform the 3rd requirement about the Booking Lines:", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Data transformation using triggers with aggregation
2020-11-04T19:06:33.945Z
Data transformation using triggers with aggregation
3,314
null
[ "security" ]
[ { "code": "", "text": "Hi!I’m running CI on Github Actions. The integration tests include a connection to a test database within my Atlas cluster. Github Actions keep rotating through a large amount of IPs, so I can’t whitelist them very easily… As you understand, I don’t value the IP guard to my test database that much (it gets wiped after every test suite). Is there anyway to disable the IP check on this very database within my cluster, while keeping the IP check on all other dbs in the cluster?If that’s not allowed I guess I can setup another cluster without the IP guard, in which I only keep my test db, but this approach seems a bit more cumbersome than the one described above.Thanks in advance!", "username": "petas" }, { "code": "0.0.0.0", "text": "Hi @petas,Atlas IP Access whitelist is done on a project level , so I would recommend having a separate project if you intend to whitelist 0.0.0.0 address.Now I saw github actions have a list of ips updated weekly : About GitHub-hosted runners - GitHub DocsPerhaps you can use Atlas API and dynamically whitelist when changed.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks a lot, Pavel! ", "username": "petas" }, { "code": "", "text": "Hey @Pavel_Duchovny, I just ran into problems again with this…I have now created a separate project (2) with a test cluster and my test db in it. However, I don’t see how I will be able access data from that test project (2) from my original app (1). According to the docs (https://docs.mongodb.com/realm/mongodb/link-a-data-source/): \" You can use MongoDB Realm to work with a data source – either a MongoDB Atlas cluster or Data Lake associated with the same Atlas project as your Realm app.\"If you only look at Github Workflow, this approach should probably work. However, my integration tests run the deployed functions on my main app (1). Could they connect to a separate project’s (2) cluster in some way?Cheers,", "username": "petas" }, { "code": "", "text": "Hi @petas,What I meant is that the cluster your github will communicate to will also need to move to that dedicated project.Not sure how realm is used here? Do you wish to use Realm as your data access? For realm connections there is no need to whitelist origin and authentication is done with one of realms providers…You can though specify github.com as a domain origin.You can also use realm wire connection to perform standard crud:Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Disable IP security on database level
2020-11-02T11:17:00.663Z
Disable IP security on database level
6,611
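A hedged sketch of the "dynamically whitelist via the Atlas API" idea from the thread above — a single call to the project IP access list endpoint using programmatic API keys with digest authentication. The CIDR block, keys, and project ID are placeholders; in a GitHub Actions setup this could run on a schedule against GitHub's published runner ranges.

```sh
curl --user "${ATLAS_PUBLIC_KEY}:${ATLAS_PRIVATE_KEY}" --digest \
     --header "Content-Type: application/json" \
     --request POST \
     "https://cloud.mongodb.com/api/atlas/v1.0/groups/${ATLAS_PROJECT_ID}/accessList" \
     --data '[{ "cidrBlock": "192.30.252.0/22", "comment": "GitHub Actions runner range" }]'
```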
null
[ "aggregation" ]
[ { "code": "_id : 5fa2bf16987ad021094dbf6f,\ndate : 2020-11-05T08:18:00.000+00:00,\nidTwo : 5f9925c095b8a429083fb45d,\nprest : 2\n_id : 5f9925c095b8a429083fb45d,\nidAgenda : [\n\t{\t\n\t\tid : 1,\n\t\tprice: 122,\n\t\tenable : true,\n\t\tlabel : \"label 1\"\n\t},{\n\t\tid : 2,\n\t\tprice: 25,\n\t\tenable : true,\n\t\tlabel : \"label 2\"\n\t},{\n\t\tid : 3,\n\t\tprice: 149,\n\t\tenable : true,\n\t\tlabel : \"label 3\"\n\t}],\nSurname: \"Surname\",\nName: \"Name\n{\n from: 'Collection2',\n localField: 'idTwo',\n foreignField: '_id',\n as: 'string'\n}\n", "text": "I have this two example of collection:\nCollection 1Collection 2from _id of collection 1, I would like to merge collection 1 with collection 2\nwhere idTwo of collection1 is = _id of collection2\nand idAgenda of collection 2 is = prest of collection 1I suppose that I need pipeline (for $unwind for example idAgenda), but I don’ t understand how use it.\nI have try with this $lookup:But in **string ** I have an array with all collection2…I need only the Collection2.idAgenda can match with Collection1.prest.\nDo you have suggestion for me?", "username": "Francesco_Di_Battist" }, { "code": "_id", "text": "Could be nice to include the expected output.Btw, collection2 is missing the _id i guess that’s just a typo.", "username": "santimir" }, { "code": "[\n\t{\n\t\t\"$lookup\" : {\n\t\t\t\"from\" : \"Collection2\",\n\t\t\t\"localField\" : \"idTwo\",\n\t\t\t\"foreignField\" : \"_id\",\n\t\t\t\"as\" : \"lookupRes\"\n\t\t}\n\t},\n\t{\n\t\t\"$addFields\" : {\n\t\t\t\"prest\" : \"$lookupRes.idAgenda\",\n\t\t\t\"idTwo\" : \"$lookupRes._id\"\n\t\t}\n\t},\n\t{\n\t\t\"$unwind\" : \"$prest\"\n\t},\n\t{\n\t\t\"$unwind\" : \"$idTwo\"\n\t},\n\t{\n\t\t\"$unset\" : \"lookupRes\"\n\t}\n]\n", "text": "Think now I got what you mean,This pipeline", "username": "santimir" }, { "code": "_id : 5fa2bf16987ad021094dbf6f,\ndate : 2020-11-05T08:18:00.000+00:00,\nidTwo : 5f9925c095b8a429083fb45d,\nprest : 2\n_id : 5fa2bf16987ad021094dbf6f,\ndate : 2020-11-05T08:18:00.000+00:00,\nidTwo : 5f9925c095b8a429083fb45d,\nprest : 2,\nidAgenda: {\n\t\tid : 2,\n\t\tprice: 25,\n\t\tenable : true,\n\t\tlabel : \"label 2\"\n\t},\nSurname: \"Surname\",\nName: \"Name", "text": "Sorry, id in Collection2 is _id. 
I modify it.\nMy excepted output is:", "username": "Francesco_Di_Battist" }, { "code": "", "text": "Hi @santimir,\nno in result I need only one “json”: Collection2.idAgenda.id = Collection1.prest", "username": "Francesco_Di_Battist" }, { "code": "[\n\t{\n\t\t\"$lookup\" : {\n\t\t\t\"from\" : \"Collection2\",\n\t\t\t\"localField\" : \"idTwo\",\n\t\t\t\"foreignField\" : \"_id\",\n\t\t\t\"as\" : \"lookupRes\"\n\t\t}\n\t},\n\t{\n\t\t\"$unwind\" : \"$lookupRes\"\n\t},\n\t{\n\t\t\"$addFields\" : {\n\t\t\t\"Surname\" : \"$lookupRes.Surname\",\n\t\t\t\"Name\" : \"$lookupRes.Name\",\n\t\t\t\"idAgenda\" : {\n\t\t\t\t\"$filter\" : {\n\t\t\t\t\t\"input\" : \"$lookupRes.idAgenda\",\n\t\t\t\t\t\"as\" : \"obj\",\n\t\t\t\t\t\"cond\" : {\n\t\t\t\t\t\t\"$eq\" : [\n\t\t\t\t\t\t\t\"$$obj.id\",\n\t\t\t\t\t\t\t\"$prest\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t},\n\t{\n\t\t\"$unwind\" : \"$idAgenda\"\n\t},\n\t{\n\t\t\"$unset\" : \"lookupRes\"\n\t}\n]\n{\n\t\"_id\" : \"5fa2bf16987ad021094dbf6f\",\n\t\"date\" : \"2020-11-05T08:18:00.000+00:00\",\n\t\"idTwo\" : \"5f9925c095b8a429083fb45d\",\n\t\"prest\" : 2,\n\t\"Surname\" : \"Surname\",\n\t\"Name\" : \"Name\",\n\t\"idAgenda\" : \n\t\t{\n\t\t\t\"id\" : 2,\n\t\t\t\"price\" : 25,\n\t\t\t\"enable\" : true,\n\t\t\t\"label\" : \"label 2\"\n\t\t}\n\t\n}\n$project", "text": "Maybe this one?@Francesco_Di_BattistOtherwise I hope someone else help OutputAnother option may be to use $project, I didn’t try that.", "username": "santimir" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Help with lookup and pipeline
2020-11-04T20:47:42.355Z
Help with lookup and pipeline
1,826
null
[ "aggregation" ]
[ { "code": "{\n rootField: \"value\",\n anotherRootField: \"value1\",\n products:[\n {\n productName: \"p1_name\",\n currencies: [\n {\n currencyName: \"p1c1_name\"\n },\n {\n currencyName: \"p1c2_name\" \n }\n ]\n },\n {\n productName: \"p2_name\",\n currencies: [\n {\n currencyName: \"p2c1_name\"\n },\n {\n currencyName: \"p2c2_name\"\n },\n ]\n },\n ],\n ships: [\n {\n shipDate: \"shipDate\"\n },\n ...\n ]\n}\n{\n productName: \"p1_name\",\n}\n{\n rootField: \"value\",\n anotherRootField: \"value1\",\n products:[\n {\n productName: \"p1_name\",\n currencies: [\n {\n currencyName: \"p1c1_name\"\n },\n {\n currencyName: \"p1c2_name\" \n }\n ]\n }\n ]\n}\n{\n currencyName: \"p2c1_name\",\n}\n{\n rootField: \"value\",\n anotherRootField: \"value1\",\n products:[\n {\n productName: \"p2_name\",\n currencies: [\n {\n currencyName: \"p2c1_name\"\n },\n ]\n },\n ]\n}\nawait Proforma.aggregate([\n { $match: { rootField: \"value\" } },// sample query\n { $unwind: \"$products\" },\n { $match: { \"products.productName\": \"p1_name\" } }, // sample query\n { $unwind: \"$products.currencies\" },\n { $match: { \"products.currencies.currencyName\": \"p1c1_name\"}}, // sample query\n { // back to array\n $group: {\n _id: { _id: \"$_id\", product__id: \"$products._id\" },\n currencies: { $push: \"$products.currencies\" },\n },\n },\n { // back to array\n $group: {\n _id: \"$_id._id\",\n products: {\n $push: {\n _id: \"$id.product__id\",\n currencies: \"$currencies\",\n },\n },\n },\n },\n ]);\n\nMongoDB aggregate error: expression must have exactly one field\n", "text": "Hi My database structure is like this:For some reason users can send different queries which means I don’t know what exactly they want and also they can search for each level of my document.My goal is to process the query and return the result as a microsoft excel file. for that I need to remove products/currencies/ships if they don’t match the query. for example for this query:I need something like this:or if search for currency:I need this:I have a hard time to understanding the aggregation framework and this is what I working on:As I said my query could be very complex and contains multiple fields from root, products, currencies and ships so I don’t know is this works for my situations or not but my current problem is my rootFields get ignored from the result. I just want to filter nested fields and have rootFields untouched.and also I don’t know how to add ships to this queryalso filter does not work for me because sometimes my conditions contain a lot of nested $or and $and and i get this error:", "username": "m_z" }, { "code": "", "text": "Much simpler if you make separate queries for each type of query.\nHow is user indicating query? A web form? In what environment?\nMaybe you can identify the type of query in your web language and have an array of 3 different queries to suit the 3 different cases.", "username": "Jack_Woehr" } ]
Delete each element of nested array of documents if they don't match my query
2020-11-04T10:56:27.022Z
Delete each element of nested array of documents if they don’t match my query
3,294
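A sketch of the suggestion in the thread above — detect which level the user is searching on and dispatch to one of a few prepared pipelines, using $filter to trim the embedded array without the $unwind/$group round-trip. The field names come from the thread; the dispatch helper itself is hypothetical.

```js
// Product-level search: keep the root fields, keep only the matching products
const byProduct = (name) => [
  { $match: { "products.productName": name } },
  { $addFields: {
      products: {
        $filter: {
          input: "$products",
          as: "p",
          cond: { $eq: ["$$p.productName", name] }
        }
      }
  } }
];

// A currency-level search would be a similar pipeline that also filters
// products.currencies; a root-level search is just a plain $match.
function buildPipeline(q) {
  if (q.productName) return byProduct(q.productName);
  if (q.rootField)   return [{ $match: { rootField: q.rootField } }];
  return [];
}

// await Proforma.aggregate(buildPipeline(userQuery));
```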
null
[ "aggregation" ]
[ { "code": "{\n\t\"_id\":{\"$oid\":\"5e27ea5707f2a21cecb0dc77\"},\n\t\"LanguageId\":{\"$oid\":\"5a6304ffc3c3f119fc0e60c9\"},\n\t\"FormId\":{\"$oid\":\"5da7f7dfa6334912bc30a3f5\"},\n\t\"SubscriberId\":{\"$oid\":\"5db1278dc6f6570e9121edad\"},\n\t\"FormVersionId\":{\"$oid\":\"5da7f7dfa6334912bc30a3f6\"},\n\t\"FolderName\":\"7ff60e99-ab67-75d8-d2e7-e2c93f26bcfd\",\n\t\"IsSync\":false,\n\t\"UpdatedBy\":0,\n\t\"IsCompleted\":false,\n\t\"ProjectId\":{\"$oid\":\"5e25966d07f2a21cecb0cd27\"},\n\t\"ClientId\":{\"$oid\":\"5e258f9807f2a21cecb0cd23\"},\n\t\"LocalInLineId\":null,\n\t\"ParentSubmissionId\":null,\n\t\"IsDeleted\":true,\"UpdatedOn\":{\"$date\":\"2020-02-03T07:40:42.626Z\"},\n\t\"SubmittedOn\":{\"$date\":\"2020-01-22T06:23:19.237Z\"},\n\t\"CreatedOn\":{\"$date\":\"2020-01-22T06:23:19.237Z\"},\n\t\"EmployeeId\":{\"$oid\":\"5db12946c6f6570e9121edb2\"},\n\t\"Longitude\":\"72.5298174\",\n\t\"Latitude\":\"23.0172479\",\n\t\"DeviceId\":\"b896c36665a93cb6\",\n\t\"AppInfoId\":null,\n\t\"__v\":0\n}\n\n{\n\t\"_id\":{\"$oid\":\"5e340be407f2a21cecb2663e\"},\n\t\"LanguageId\":{\"$oid\":\"5a6304ffc3c3f119fc0e60c9\"},\n\t\"FormId\":{\"$oid\":\"5da7f7dfa6334912bc30a3f5\"},\n\t\"SubscriberId\":{\"$oid\":\"5db1278dc6f6570e9121edad\"},\n\t\"FormVersionId\":{\"$oid\":\"5e27f4a807f2a21cecb0dcf5\"},\n\t\"FolderName\":\"6450638c-0d3f-f406-1a2e-6ca2762a4a40\",\n\t\"IsSync\":false,\n\t\"UpdatedBy\":0,\n\t\"IsCompleted\":false,\n\t\"ProjectId\":{\"$oid\":\"5e25966d07f2a21cecb0cd27\"},\n\t\"ClientId\":{\"$oid\":\"5e258f9807f2a21cecb0cd23\"},\n\t\"LocalInLineId\":null,\n\t\"ParentSubmissionId\":null,\n\t\"IsDeleted\":true,\n\t\"UpdatedOn\":{\"$date\":\"2020-02-03T07:40:55.157Z\"},\n\t\"SubmittedOn\":{\"$date\":\"2020-01-31T11:13:40.410Z\"},\n\t\"CreatedOn\":{\"$date\":\"2020-01-31T11:13:40.410Z\"},\n\t\"EmployeeId\":{\"$oid\":\"5e2fd5a607f2a21cecb15bba\"},\n\t\"Longitude\":\"72.5298021\",\n\t\"Latitude\":\"23.0172461\",\n\t\"DeviceId\":\"b4e958698555a31d\",\n\t\"AppInfoId\":null,\n\t\"__v\":0\n}\n{\n\t\"_id\":{\"$oid\":\"5e27ea5707f2a21cecb0dcc1\"},\n\t\"FormId\":{\"$oid\":\"5da7f7dfa6334912bc30a3f5\"},\n\t\"FormVersionId\":{\"$oid\":\"5da7f7dfa6334912bc30a3f6\"},\n\t\"FormSubmissionId\":{\"$oid\":\"5e27ea5707f2a21cecb0dc77\"},\n\t\"LanguageId\":{\"$oid\":\"5a6304ffc3c3f119fc0e60c9\"},\n\t\"Answers\":[{\n\t\t\t\"_id\":{\"$oid\":\"5e27ea5707f2a21cecb0dc78\"},\n\t\t\t\"FormQuestionId\":{\"$oid\":\"5e27e1b607f2a21cecb0daf5\"},\n\t\t\t\"ElementType\":8,\n\t\t\t\"Value\":\"ABC\"\n\t\t},\n\t\t{\n\t\t\t\"_id\":{\"$oid\":\"5e27ea5707f2a21cecb0dc79\"},\n\t\t \"FormQuestionId\":{\"$oid\":\"5e27e1b607f2a21cecb0daf6\"},\n\t\t\t\"ElementType\":2,\n\t\t\t\"Value\":\"3\"\n\t },\n\t {\n\t\t\t\"_id\":{\"$oid\":\"5e27ea5707f2a21cecb0dc7a\"},\n\t\t\t\"FormQuestionId\":{\"$oid\":\"5e27e1b607f2a21cecb0daf7\"},\n\t\t\t\"ElementType\":45,\n\t\t\t\"Value\":\"22012020/114920\"\n\t },],\n\t\"__v\":0\n}\ndb.FormSubmissions.aggregate([\n {\n $match: {\n $and: [{\n FormId: ObjectId('5da7f7dfa6334912bc30a3f5')\n }, \n\t\t {\n IsDeleted: false\n\t\t }]\n },\n },\n {\n $group: {\n _id: null,\n submissionIds: {\n $addToSet: '$_id',\n },\n },\n },\n {\n $lookup: {\n from: 'FormAnswers',\n localField: 'submissionIds',\n foreignField: 'FormSubmissionId',\n as: 'FormAnswers',\n },\n },\n {\n $project: {\n _id: null,\n TotalSubmissions: {\n $size: '$FormAnswers',\n },\n },\n },\n])\n", "text": "I have 2 collections. 
One for FormSubmissions and the other for FormAnswers. I want to find out the total number of submissions for a given FormId.\n1. FormSubmissions (number of documents = 17,318)\n2. FormAnswers (number of documents = 17,191)\nSample data for FormSubmissions and FormAnswers is shown above, followed by the query. When I execute the query it fails with this error:\nTotal size of documents in FormAnswers matching pipeline’s $lookup stage exceeds 104857600 bytes\nPlease suggest the best way to rework this query. Thanks.", "username": "hiren_parejiya" }, { "code": "{\n$graphLookup : {\nfrom:\"FormSubmissions\",\nstartWith:\"$FormSubmissionId\",\nconnectToField:\"_id\",\nconnectFromField:\"FormSubmissionId\",\nas:\"find\",\nmaxDepth:0,\nrestrictSearchWithMatch : { \nFormId:ObjectId('5da7f7dfa6334912bc30a3f5'), \nIsDeleted:false \n}\n}}\n", "text": "What about running a similar pipeline from the other collection, @hiren_parejiya? It seems that the block you’ll move over with a $lookup will be smaller, because the average document size seems smaller and the number of documents is quite similar. Take all documents in the FormAnswers collection and do a graph lookup like this. It’s not complete yet; some grouping is probably needed afterwards.", "username": "santimir" } ]
Total size of documents matching pipeline’s $lookup stage exceeds 104857600 bytes
2020-09-04T06:55:15.930Z
Total size of documents matching pipeline’s $lookup stage exceeds 104857600 bytes
10,257
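For the $lookup size error in the thread above: the pipeline first groups every matching submission _id into a single document and then joins all of their FormAnswers into one array, so that single $lookup result exceeds the 100 MB stage limit. One untested way around it, sketched below, is to drop the $group and let each submission look up only a count of its answers, so no full FormAnswers documents are ever materialized. Collection and field names come from the thread; the rest is an assumption to verify against your data.

```javascript
db.FormSubmissions.aggregate([
  { $match: { FormId: ObjectId("5da7f7dfa6334912bc30a3f5"), IsDeleted: false } },
  {
    $lookup: {
      from: "FormAnswers",
      let: { sid: "$_id" },
      pipeline: [
        { $match: { $expr: { $eq: ["$FormSubmissionId", "$$sid"] } } },
        { $count: "n" }                      // each submission gets back at most one tiny { n: ... } doc
      ],
      as: "answerCounts"
    }
  },
  {
    $group: {
      _id: null,
      totalSubmissions: { $sum: 1 },
      totalAnswers: { $sum: { $ifNull: [ { $arrayElemAt: ["$answerCounts.n", 0] }, 0 ] } }
    }
  }
]);
```

An index on { FormSubmissionId: 1 } in FormAnswers keeps the per-submission sub-pipeline cheap.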
null
[ "swift", "atlas-device-sync" ]
[ { "code": " app.login(credentials: Credentials.anonymous) {(result) in\n // Remember to dispatch back to the main thread in completion handlers\n // if you want to do anything on the UI.\n DispatchQueue.main.async {\n switch result {\n case .failure(let error):\n print(\"Login failed: \\(error)\")\n case .success(let user):\n print(\"Login as \\(user) succeeded!\")\n // Continue below\n self.OnLogin()\n default:\n self.OnLogin()\n }\n }\n }\n", "text": "I have enabled Realm Sync,\nWhen I tried Realm Sync client Swift in Xcode 12 after adding realm sync framework and pod configuration, the following exception is blocking me to proceed in Future, Promise code.The code where the problem isand the exception is\nContextual closure type ‘(RLMUser?, Error?) → Void’ expects 2 arguments, but 1 was used in closure body.Can any expert help me as soon as possible", "username": "ALALASUNDARAM_SARAVA" }, { "code": "{ xxxx, error in{ result in pod 'RealmSwift', '=10.0.0'pod updatesudo gem install -n /usr/local/bin cocoapods", "text": "You’re using an older version of the Pod/SDK. With 10.0.0 all Realm closures changed from{ xxxx, error into{ result inUpdate your podfile to use this pod 'RealmSwift', '=10.0.0'and thenpod updateYou may also need to update cocoapods in general withsudo gem install -n /usr/local/bin cocoapods", "username": "Jay" }, { "code": "pod --version\n", "text": "Got stuck on this too. Make sure you have cocoa pods version 1.10.0to check type", "username": "Richard_Krueger" }, { "code": "", "text": "Thanks for the response and Solution,\nI have made some changes in the Ream Database by mistake, Since then I could not open the Realm Object in the Client.\nThe Error message is “The Server has forgotten about this client to resume synchronization”\nAnd the server side Error message is\nclient file not found { sessionIdent: 1, clientFileIdent: 4 } (ProtocolErrorCode=208)\nPlease help, Tried re create new project still does not help.", "username": "ALALASUNDARAM_SARAVA" }, { "code": "", "text": "I have tried work around, like creating new Realm app in the Realm (Mongodb Atlas) and try to run the client.\nBecause of this I need to change the app id in the client.\nBut this time , the login is failed ( unanimous login ) and the error message is “Malformed JSON”\nWhen I revert back the old app id it is logging in successfully.", "username": "ALALASUNDARAM_SARAVA" }, { "code": "", "text": "The Server has forgotten about this client to resume synchronizationYou’ll probably need to go into the console, Terminate Sync and re-set it up. Ensure you select Developer Mode tab before doing so.Also, you may need to completely delete all of the local files - that will cause it to re-sync the data from the server to the client (ensure you have a backup in case there’s local data you need).", "username": "Jay" }, { "code": "", "text": "Thanks Jason,\nThe deletion all at server only? (Mongodb atlas , realm app).\nPlease help me to understand which local files to be deleted if so.", "username": "ALALASUNDARAM_SARAVA" }, { "code": "", "text": "But still I have to cross the new issue I have posted ( Malformed Json while logging)", "username": "ALALASUNDARAM_SARAVA" }, { "code": "let anonymousCredentials = Credentials.anonymoususer/library/Application Support/com.your_company.app_name", "text": "But this time , the login is failed ( unanimous login )I think you mean anonymous login. You need to ensure anonymous login is on. 
In the console, go into the app, then left column -> Authentication, then the Authentication Providers tab, and ensure it’s turned on. Don’t forget to review and deploy changes. Second, there were recent changes to how users authenticate. See Anonymous Authentication and ensure your code matches let anonymousCredentials = Credentials.anonymous. Then, as far as local files go: if you’re doing iOS you need to reset the simulator; if you’re doing macOS you need to delete the files in user/library/Application Support/com.your_company.app_name. Again though, deleting those files will reset the client and cause all of the data on the server to re-sync, so don’t do it unless you want local data erased.", "username": "Jay" }, { "code": "your_app.login(credentials: creds) { result in\n    switch result {\n    case .failure(let error):\n        print(error.localizedDescription)\n    case .success(let user):\n        print(\"anon login success\")\n        print(user.id) //just to show the id\n    }\n}\n", "text": "That’s a little unclear. Based on your updated code, it should look like this. Does the code in the .failure case run, or does the code in .success run but printing user.id prints nil?", "username": "Jay" }, { "code": "", "text": "Yes exactly, I am using the same code; it never reaches the switch statement and skips it.\nThat means it is neither success nor failure.\nAnd I find in debug that the RLMUser object is nil, so I cannot use this user object to open the Realm.", "username": "ALALASUNDARAM_SARAVA" }, { "code": "", "text": "One more clue: I find it returns an RLMTranslator error in one of the functions before coming back to the app.login method.", "username": "ALALASUNDARAM_SARAVA" }, { "code": "", "text": "@ALALASUNDARAM_SARAVA you mention “it never reaches the switch statement and skips it”; are you waiting for the result to complete? This is an asynchronous call.", "username": "Lee_Maguire" }, { "code": "", "text": "Dear Lee_Maguire, thanks for the response. Yes, it was my fault; after correcting the code it is back to the “Malformed JSON” error.", "username": "ALALASUNDARAM_SARAVA" }, { "code": "", "text": "@ALALASUNDARAM_SARAVA do you have any logs in the MongoDB Realm UI that can help narrow down the issue?", "username": "Lee_Maguire" }, { "code": "", "text": "I would like to see the exact code in question and how any vars are assigned.", "username": "Jay" } ]
Realm Sync With Swift
2020-10-28T08:48:07.546Z
Realm Sync With Swift
3,867
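For the Realm thread above, here is a minimal Swift sketch of the RealmSwift 10.x flow the replies describe: a Result-based anonymous login followed by opening a synced realm. The app ID and partition value are placeholders, and this assumes anonymous authentication is enabled for the app.

```swift
import Foundation
import RealmSwift

let app = App(id: "your-realm-app-id")   // placeholder: use the app ID shown in the Realm UI

app.login(credentials: Credentials.anonymous) { result in
    DispatchQueue.main.async {
        switch result {
        case .failure(let error):
            print("Login failed: \(error.localizedDescription)")
        case .success(let user):
            print("Logged in as \(user.id)")
            // "myPartition" is a placeholder partition value for this sketch.
            let config = user.configuration(partitionValue: "myPartition")
            do {
                let realm = try Realm(configuration: config)
                print("Synced realm opened: \(realm.configuration.fileURL?.path ?? "in-memory")")
            } catch {
                print("Failed to open the synced realm: \(error.localizedDescription)")
            }
        }
    }
}
```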
null
[ "indexes" ]
[ { "code": "", "text": "Hello Everyone, I am preparing for some mongodb interview questions and I am confused as to understand can I create an index on an array field in MongoDB? If yes, what happens in this case? Please elaborate to me.", "username": "shivam_bhatele" }, { "code": "colors: [ \"red\", \"blue\" , \"black\" ]db.collection.createIndex( { colors: 1 } )contacts: [\n { name: \"John\", email: \"[email protected]\" }, \n { name: \"Pete\", email: \"[email protected]\" }\n ]\nnamedb.collection.createIndex( { \"contacts.name\": 1 } )", "text": "Hello @shivam_bhatele, welcome to the MongoDB community forum.You can create an index on a field of type array. For example, consider an array field:colors: [ \"red\", \"blue\" , \"black\" ]You can create the index on the array field with the normal index creation syntax:db.collection.createIndex( { colors: 1 } )In the above created index there will be three keys.You can also create an index on a field of an array of objects. Another example array field:Create an index on the name field:db.collection.createIndex( { \"contacts.name\": 1 } )", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Array Field in MongoDB
2020-11-04T08:50:24.437Z
Array Field in MongoDB
2,554
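To make the multikey behaviour in the answer above concrete, here is a small mongosh sketch; the collection name and documents are invented for illustration.

```javascript
db.products.insertMany([
  { sku: "A1", colors: [ "red", "blue", "black" ] },
  { sku: "B2", colors: [ "green" ] }
]);

// Because "colors" holds arrays, this becomes a multikey index with one key per array element.
db.products.createIndex({ colors: 1 });

// An equality match on any single element can use the index; in the explain output the
// winning IXSCAN stage reports "isMultiKey": true.
db.products.find({ colors: "blue" }).explain("executionStats");
```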
null
[ "replication", "configuration" ]
[ { "code": "", "text": "Good morning!I am mounting a Replica Set environment with mongo 4.4. I already have a replica set environment with 3 members working correctly. Now what I want to do is add two more members to the replica set environment.I have the configuration of Replica Set with internal authentication through a keyfile file as well as external authentication to have administrator users who can access and manage the databases.The two members that are going to be part of the Replica Set that already exists, I already have MongoDB installed in them, which I would need to know how to add to the replica set that I already have running.I have read the documentation and there are steps that are not clear to me. I would like you to tell me how to solve it step by step.Thank you very much in advance!", "username": "Eduardo_HM" }, { "code": "security:\n authorization: enabled\n keyFile: /path/your-replica-keyfile\nreplication:\n replSetName: your-repl-set-name\n", "text": "Hello @Eduardo_HM,This is the main documentation you need to follow for adding new members to the existing replica-set: Add Members to a Replica Set.I have highlighted some sub-headings from the documentation for you to go through and understand. See these topics:Additional Info About Configuring the New Members:The new member configuration file should include these in addition to the other configuration parameters like - storage, net, systemLog, etc., depending upon your setup. For example:You will be using the same key file which was used for the existing members of the replica-set.", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you very much for the reply,Then following the documentation the steps would be the following:Right?Thanks in advanced!!!", "username": "Eduardo_HM" }, { "code": "", "text": "Then following the documentation the steps would be the following: …Yes, in summary. Please do follow the detailed steps from the provided links from my post.", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you very much for the answer.What is still not clear to me is what difference there is between the first point “Data Files” and the point of “Prepare the data dictionary”.\nIn the first point “Data Files” tells me that what we have to do is copy the data that is in the path of one of the members of the Replicaset. At the point of preparing the data dictionary, according to the documentation, there are two ways to do it, check that there is nothing in the data directory, or manually copy data from one of the members to the new member. To copy the data, I understand that it will be enough to do a scp from a member node of the replicaset to the new member, right?Thank you very much in advance", "username": "Eduardo_HM" }, { "code": "", "text": "What is still not clear to me is what difference there is between the first point “Data Files” and the point of “Prepare the data dictionary”.In the first point “Data Files” tells me that what we have to do is copy the data that is in the path of one of the members of the Replica-set.Data Files:This is not a step in the process, but is a note about data files and making a copy of them.In general, in a replica-set’s each member has is own copy of data (replicated data). The new members being added initially will not have any data. To get the replicated data initially into a new member you can have the existing member’s data copied.The documentation says:If you have a backup or snapshot of an existing member, you can move the data files (e.g. 
the dbPath directory) to a new system and use them to quickly initiate a new member.\nIMPORTANT: Always use filesystem snapshots to create a copy of a member of the existing replica set. See Back Up and Restore with Filesystem Snapshots.\nPrepare the Data Directory:\nFor “Prepare the Data Directory”, according to the documentation, there are two ways to do it: check that there is nothing in the data directory, or manually copy data from one of the members to the new member. To copy the data, I understand that it will be enough to do an scp from a member node of the replica set to the new member, right?\nSee the link above on Back Up and Restore.", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Adding new member for existing Replica Set
2020-11-03T09:01:35.756Z
Adding new member for existing Replica Set
5,522
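As a concrete follow-up to the replica-set thread above, this is a hedged mongosh sketch of adding the two new members from the current primary, once their mongod processes are running with the shared keyfile and the same replSetName. Host names are placeholders, and the member array positions must be checked against rs.conf() before the reconfig.

```javascript
// Add the new members as non-voting, priority-0 first so they cannot trigger elections
// while their initial sync runs.
rs.add({ host: "mongo4.example.net:27017", priority: 0, votes: 0 });
rs.add({ host: "mongo5.example.net:27017", priority: 0, votes: 0 });

// After rs.status() shows both new members in SECONDARY state, restore their
// priority and votes. The indexes 3 and 4 assume the new members were appended
// at the end of the members array; verify with rs.conf() first.
cfg = rs.conf();
cfg.members[3].priority = 1;
cfg.members[3].votes = 1;
cfg.members[4].priority = 1;
cfg.members[4].votes = 1;
rs.reconfig(cfg);
```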
null
[ "data-modeling" ]
[ { "code": "", "text": "Hello Everybody,My database will contain documents with document values in which the value contains both text and numbers. For example: M42 or 3rd power.So my question is: Can I define the data type as string, or are strings not allowed to contain numbers?Thanks in advance for your feedback.Carel.", "username": "Carel_Schilp" }, { "code": "", "text": "Hi Carel,Welcome to the community!BSON strings are UTF-8 Strings (see BSON (Binary JSON): Specification). These strings can contain numbers.An easy way to test this out is to create a free cluster on Atlas and insert a document. For each field in the document, you can select what type the field should be.\nScreen Shot 2020-11-03 at 3.19.19 PM749×375 18.8 KB\n", "username": "Lauren_Schaefer" }, { "code": "", "text": "Hi Lauren,Thanks for the quick reply. I’m not a developer, but I’m in the process of preparing my dataand need to work out the fields and field types for the different documents in my database.So it’s a relief to hear that strings can also contain numbers. I’m just starting out in the MongoDB worldand I like what I see, because I could not make it work in a relational database structure.The possibilities of MongoDB for my situation give me the reassurance that I can continue and accommodate the differentdocument structures I need and still be able to retrieve what I want and how I want it.Regards, thanks and greeting from Holland,Carel.", "username": "Carel_Schilp" } ]
Question about data types
2020-11-03T18:36:36.761Z
Question about data types
1,803
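A tiny mongosh check of the point made in the thread above; the collection and field names are examples only.

```javascript
db.parts.insertOne({ code: "M42", note: "3rd power" });

// Both values are stored as ordinary UTF-8 strings and can be matched exactly or by type.
db.parts.find({ code: "M42" });
db.parts.find({ note: { $type: "string" } });
```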
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "production" ]
[ { "code": "", "text": "This is a patch release that addresses some issues reported since 2.11.3 was released.The list of JIRA tickets resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.11.4%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:There are no known backwards breaking changes in this release.", "username": "James_Kovacs" }, { "code": "", "text": "", "username": "system" } ]
.NET Driver 2.11.4 Released
2020-11-04T01:42:04.089Z
.NET Driver 2.11.4 Released
1,687