Dataset columns: image_url — string (113–131 chars); tags — sequence; discussion — list; title — string (8–254 chars); created_at — string (24 chars); fancy_title — string (8–396 chars); views — int64 (73–422k)
null
[]
[ { "code": "", "text": "The guide is asking you to use the command line but this won’t work since mongo isn’t installed locally.Please advise.", "username": "Ken_Mathieu_Beaudin" }, { "code": "mongo", "text": "Hi @Ken_Mathieu_Beaudin !at the end of the chapter there is an Interactive Developer Environment (IDE). This is bash shell with mongo and other bins already installed for you.To run it locally head over to the installation chapter in the docs.", "username": "santimir" }, { "code": "", "text": "I am not able to connect to my cluster using IDE environment, can any one help me?", "username": "Muzaffar_ali_53011" }, { "code": "", "text": "What issue you are facing\nPlease show us the screenshot or error details", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi,i am not able to connect to mogodb using shell .\nfollowing the same procedure as informed.\nthe IDE show error.\nattaching the screenshot .\nconnection string :mongo “mongodb+srv://sandbox.wr6ht.mongodb.net/” --username m001-student\nScreenshot (20)1920×1080 91.7 KB", "username": "aditya_rana" }, { "code": "", "text": "What error are you getting?\nI don’t see any error in your snapshot.It shows test result failed\nDid you run the command in correct area of IDE\nDid you hit enter after typing/pasting the connect string?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I am not able to connect to Atlas cluster. I am getting following erros.\nconnecting to: mongodb://sandbox-shard-00-02.ftsyv.mongodb.net.:27017,sandbox-shard-00-00.ftsyv.mongodb.net.:27017,sandbox-shard-00-01.ftsyv.mongodb.net.:27017/%3Cdbname%3E?authSource=admin&gssapiServiceName=mongodb&replicaSet=atlas-exfvdg-shard-0&ssl=true\n2020-10-27T10:02:45.134+0000 I NETWORK [js] Starting new replica set monitor for atlas-exfvdg-shard-0/sandbox-shard-00-02.ftsyv.mongodb.net.:27017,sandbox-shard-00-00.ftsyv.mongodb.net.:27017,sandbox-shard-00-01.ftsyv.mongodb.net.:27017\n2020-10-27T10:02:45.771+0000 I NETWORK [js] Successfully connected to sandbox-shard-00-01.ftsyv.mongodb.net.:27017 (1 connections now open to sandbox-shard-00-01.ftsyv.mongodb.net.:27017 with a 5 second timeout)\n2020-10-27T10:02:45.771+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to sandbox-shard-00-00.ftsyv.mongodb.net.:27017 (1 connections now open to sandbox-shard-00-00.ftsyv.mongodb.net.:27017 with a 5 second timeout)\n2020-10-27T10:02:45.959+0000 I NETWORK [js] changing hosts to atlas-exfvdg-shard-0/sandbox-shard-00-00.ftsyv.mongodb.net:27017,sandbox-shard-00-01.ftsyv.mongodb.net:27017,sandbox-shard-00-02.ftsyv.mongodb.net:27017 from atlas-exfvdg-shard-0/sandbox-shard-00-00.ftsyv.mongodb.net.:27017,sandbox-shard-00-01.ftsyv.mongodb.net.:27017,sandbox-shard-00-02.ftsyv.mongodb.net.:27017\n2020-10-27T10:02:46.535+0000 I NETWORK [js] Successfully connected to sandbox-shard-00-01.ftsyv.mongodb.net:27017 (1 connections now open to sandbox-shard-00-01.ftsyv.mongodb.net:27017 with a 5 second timeout)\n2020-10-27T10:02:46.538+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to sandbox-shard-00-00.ftsyv.mongodb.net:27017 (1 connections now open to sandbox-shard-00-00.ftsyv.mongodb.net:27017 with a 5 second timeout)\n2020-10-27T10:02:47.363+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to sandbox-shard-00-02.ftsyv.mongodb.net:27017 (1 connections now open to sandbox-shard-00-02.ftsyv.mongodb.net:27017 with a 5 second timeout)\n2020-10-27T10:02:47.728+0000 I NETWORK [js] Marking host 
sandbox-shard-00-01.ftsyv.mongodb.net:27017 as failed :: caused by :: Location40659: can’t connect to new replica set master [sandbox-shard-00-01.ftsyv.mongodb.net:27017], err: Location8000: bad auth : Authentication failed.\n2020-10-27T10:02:48.104+0000 I NETWORK [js] Marking host sandbox-shard-00-02.ftsyv.mongodb.net:27017 as failed :: caused by :: SocketException: can’t authenticate against replica set node sandbox-shard-00-02.ftsyv.mongodb.net:27017 :: caused by :: socket exception [CONNECT_ERROR] server [sandbox-shard-00-02.ftsyv.mongodb.net:27017] connection pool error: network error while attempting to run command ‘isMaster’ on host ‘sandbox-shard-00-02.ftsyv.mongodb.net:27017’\n2020-10-27T10:02:48.478+0000 I NETWORK [js] Marking host sandbox-shard-00-00.ftsyv.mongodb.net:27017 as failed :: caused by :: SocketException: can’t authenticate against replica set node sandbox-shard-00-00.ftsyv.mongodb.net:27017 :: caused by :: socket exception [CONNECT_ERROR] server [sandbox-shard-00-00.ftsyv.mongodb.net:27017] connection pool error: network error while attempting to run command ‘isMaster’ on host ‘sandbox-shard-00-00.ftsyv.mongodb.net:27017’\n2020-10-27T10:02:49.612+0000 I NETWORK [js] Marking host sandbox-shard-00-01.ftsyv.mongodb.net:27017 as failed :: caused by :: Location40659: can’t connect to new replica set master [sandbox-shard-00-01.ftsyv.mongodb.net:27017], err: Location8000: bad auth : Authentication failed.\n2020-10-27T10:02:49.613+0000 E QUERY [js] Error: can’t authenticate against replica set node sandbox-shard-00-01.ftsyv.mongodb.net:27017 :: caused by :: can’t connect to new replica set master [sandbox-shard-00-01.ftsyv.mongodb.net:27017], err: Location8000: bad auth : Authentication failed. :\nconnect@src/mongo/shell/mongo.js:328:13\n@(connect):1:6\nexception: connect failed", "username": "Sandeep_41860" }, { "code": "", "text": "Hi follow the step to sucess the exercice mongo university step1398×703 57.4 KB", "username": "Jean-Claude_ADIBA" }, { "code": "", "text": "bad authentication means wrong combination of userid/pwd\nWhat did you give as password?\nMay be some invalid character or space got introduced while pasting the password at the time of creating your sandbox cluster", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Muzaffar_ali_53011,I am not able to connect to my cluster using IDE environment, can any one help me?Please share the information requested by @Ramachandra_37567 if you are still facing any issue.What issue you are facing\nPlease show us the screenshot or error details", "username": "Shubham_Ranjan" }, { "code": "", "text": "Hi\nI am facing issue while connection to mongoshell.\nError which I facing,I pasted below. 
please let me know where I need to improve the command.bash-4.4# mongo \"mongodb+srv://sandbox.ofcuo.mongodb.net/\nMongoDB shell version v4.0.5\nEnter password:\nconnecting to: mongodb://sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017/%3Cdbname%3E?authSource=admin&gssapiServiceName=mongodb&replicaSet=atlas-2y4u8j-shard-0&ssl=true\n2020-11-01T05:41:19.699+0000 I NETWORK [js] Starting new replica set monitor for atlas-2y4u8j-shard-0/sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017\n2020-11-01T05:41:19.865+0000 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:19.865+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.\n2020-11-01T05:41:20.406+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:20.406+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 2 checks in a row.\n2020-11-01T05:41:20.947+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:20.947+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 3 checks in a row.\n2020-11-01T05:41:21.489+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:21.489+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 4 checks in a row.\n2020-11-01T05:41:22.031+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:22.031+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 5 checks in a row.\n2020-11-01T05:41:22.572+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:22.572+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 6 checks in a row.\n2020-11-01T05:41:23.112+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:23.112+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 7 checks in a row.\n2020-11-01T05:41:23.653+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:23.653+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 8 checks in a row.\n2020-11-01T05:41:24.195+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:24.195+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. 
This has happened for 9 checks in a row.\n2020-11-01T05:41:24.737+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:24.737+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 10 checks in a row.\n2020-11-01T05:41:25.277+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:25.278+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 11 checks in a row.\n2020-11-01T05:41:25.819+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:26.359+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:26.900+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:27.448+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:27.989+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:28.530+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:29.071+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:29.612+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:30.153+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:30.696+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:30.696+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 21 checks in a row.\n2020-11-01T05:41:31.238+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:31.778+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:32.319+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:32.860+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:33.400+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:33.941+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:34.481+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:41:34.482+0000 E QUERY [js] Error: connect failed to replica set atlas-2y4u8j-shard-0/sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017 :\nconnect@src/mongo/shell/mongo.js:328:13\n@(connect):1:6\nexception: connect failed\nbash-4.4# mongo \"mongodb+srv://sandbox.ofcuo.mongodb.net/\nMongoDB shell version v4.0.5\nEnter password:\nconnecting to: mongodb://sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017/%3Cdbname%3E?authSource=admin&gssapiServiceName=mongodb&replicaSet=atlas-2y4u8j-shard-0&ssl=true\n2020-11-01T05:43:40.571+0000 I NETWORK [js] Starting new replica set monitor for atlas-2y4u8j-shard-0/sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017\n2020-11-01T05:43:40.755+0000 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set 
atlas-2y4u8j-shard-0\n2020-11-01T05:43:40.755+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.\n2020-11-01T05:43:41.296+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:41.296+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 2 checks in a row.\n2020-11-01T05:43:41.836+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:41.836+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 3 checks in a row.\n2020-11-01T05:43:42.385+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:42.385+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 4 checks in a row.\n2020-11-01T05:43:42.927+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:42.927+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 5 checks in a row.\n2020-11-01T05:43:43.468+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:43.468+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 6 checks in a row.\n2020-11-01T05:43:44.010+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:44.010+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 7 checks in a row.\n2020-11-01T05:43:44.551+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:44.551+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 8 checks in a row.\n2020-11-01T05:43:45.091+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:45.091+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 9 checks in a row.\n2020-11-01T05:43:45.633+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:45.633+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 10 checks in a row.\n2020-11-01T05:43:46.173+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:46.173+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. 
This has happened for 11 checks in a row.\n2020-11-01T05:43:46.720+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:47.264+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:47.805+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:48.346+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:48.886+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:49.427+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:49.968+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:50.508+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:51.049+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:51.590+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:51.590+0000 I NETWORK [js] Cannot reach any nodes for set atlas-2y4u8j-shard-0. Please check network connectivity and the status of the set. This has happened for 21 checks in a row.\n2020-11-01T05:43:52.131+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:52.671+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:53.212+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:53.753+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:54.293+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:54.834+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:55.375+0000 W NETWORK [js] Unable to reach primary for set atlas-2y4u8j-shard-0\n2020-11-01T05:43:55.375+0000 E QUERY [js] Error: connect failed to replica set atlas-2y4u8j-shard-0/sandbox-shard-00-02.ofcuo.mongodb.net.:27017,sandbox-shard-00-00.ofcuo.mongodb.net.:27017,sandbox-shard-00-01.ofcuo.mongodb.net.:27017 :\nconnect@src/mongo/shell/mongo.js:328:13\n@(connect):1:6\nexception: connect failed", "username": "jay_bhosale" }, { "code": "", "text": "Is your cluster up and running?\nPlease check status in Atlas.Any errors?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Most likely you forgot to whitelist your IP address.", "username": "steevej" }, { "code": "", "text": "Yess,Its up & running.\nI didn’t see any wrong with configuration.", "username": "jay_bhosale" }, { "code": "", "text": "I added IP (My-Machine) in network access.\nIts seem successfully added without any error.but while i trigger connection command trough console its throwing me error.", "username": "jay_bhosale" }, { "code": "", "text": "3 posts were split to a new topic: Not able to connect to cluster through IDE", "username": "Shubham_Ranjan" }, { "code": "whitelistIPs0.0.0.0", "text": "Hi @jay_bhosale,Can you try to whitelist all the IPs by selecting 0.0.0.0 option?Please take a look at this post for more information.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "I can’t paste the command to terminal. the terminal is not responding. what should I do?", "username": "Binti_Solihah" }, { "code": "", "text": "Please show us the screenshot\nMay be you pasted it in wrong area?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Press the + button.", "username": "santimir" } ]
Lab: Connect to your Atlas Cluster
2020-10-03T19:39:21.117Z
Lab: Connect to your Atlas Cluster
5,329
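For reference, the failures in this thread reduce to two distinct errors. A minimal shell sketch of the checks, assuming a hypothetical sandbox host sandbox.xxxxx.mongodb.net and the course user m001-student:

```sh
# Connect with the legacy mongo shell: quote the SRV URI, press Enter,
# then type the *database user* password (not the Atlas account password).
mongo "mongodb+srv://sandbox.xxxxx.mongodb.net/test" --username m001-student

# "bad auth : Authentication failed"  -> wrong user/password pair; recreate
#   the database user under Database Access in the Atlas UI.
# "Cannot reach any nodes for set..." -> network problem; under Network
#   Access, whitelist your client IP (or 0.0.0.0/0 for a course sandbox).
```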
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.0.21-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.0.20. The next stable release 4.0.21 will be a recommended upgrade for all 4.0 users.\nFixed in this release:4.0 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.0.21-rc0 is released
2020-10-07T13:59:36.331Z
MongoDB 4.0.21-rc0 is released
2,226
null
[ "sharding" ]
[ { "code": "...\nsplitting for 3599\n{\n \"ok\" : 0,\n \"errmsg\" : \"split failed :: caused by :: chunk operation commit failed: version 1|2||5f77ae0f5f721c6a0fa5412d doesn't exist in namespace: cdrarch.cdr_af_20200913. Unable to save chunk ops. Command: { applyOps: [ { op: \\\"u\\\", b: true, ns: \\\"config.chunks\\\", o: { _id: \\\"cdrarch.cdr_af_20200913-SHARD_MINSEC_MinKey\\\", lastmod: Timestamp(1, 1), lastmodEpoch: ObjectId('5f77ae0f5f721c6a0fa5412d'), ns: \\\"cdrarch.cdr_af_20200913\\\", min: { SHARD_MINSEC: MinKey }, max: { SHARD_MINSEC: 3599.0 }, shard: \\\"db_rs002\\\", history: [ { validAfter: Timestamp(1601678863, 5), shard: \\\"db_rs002\\\" } ] }, o2: { _id: \\\"cdrarch.cdr_af_20200913-SHARD_MINSEC_MinKey\\\" } }, { op: \\\"u\\\", b: true, ns: \\\"config.chunks\\\", o: { _id: \\\"cdrarch.cdr_af_20200913-SHARD_MINSEC_3599.0\\\", lastmod: Timestamp(1, 2), lastmodEpoch: ObjectId('5f77ae0f5f721c6a0fa5412d'), ns: \\\"cdrarch.cdr_af_20200913\\\", min: { SHARD_MINSEC: 3599.0 }, max: { SHARD_MINSEC: MaxKey }, shard: \\\"db_rs002\\\", history: [ { validAfter: Timestamp(1601678863, 5), shard: \\\"db_rs002\\\" } ] }, o2: { _id: \\\"cdrarch.cdr_af_20200913-SHARD_MINSEC_3599.0\\\" } } ], preCondition: [ { ns: \\\"config.chunks\\\", q: { query: { ns: \\\"cdrarch.cdr_af_20200913\\\", min: { SHARD_MINSEC: MinKey }, max: { SHARD_MINSEC: MaxKey } }, orderby: { lastmod: -1 } }, res: { lastmodEpoch: ObjectId('5f77ae0f5f721c6a0fa5412d'), shard: \\\"db_rs002\\\" } } ], writeConcern: { w: 1, wtimeout: 0 } }. Result: { applied: 1, code: 11000, codeName: \\\"DuplicateKey\\\", errmsg: \\\"E11000 duplicate key error collection: config.chunks index: ns_1_min_1 dup key: { ns: \\\"cdrarch.cdr_af_20200913\\\", min: { SHARD_MINSEC: MinKey } }\\\", results: [ false ], ok: 0.0, keyPattern: { ns: 1, min: 1 }, keyValue: { ns: \\\"cdrarch.cdr_af_20200913\\\", min: { SHARD_MINSEC: MinKey } }, $gleStats: { lastOpTime: { ts: Timestamp(1601679212, 23), t: 12 }, electionId: ObjectId('7fffffff000000000000000c') }, lastCommittedOpTime: Timestamp(1601679212, 23), $clusterTime: { clusterTime: Timestamp(1601679212, 23), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1601679212, 23) } :: caused by :: E11000 duplicate key error collection: config.chunks index: ns_1_min_1 dup key: { ns: \\\"cdrarch.cdr_af_20200913\\\", min: { SHARD_MINSEC: MinKey } }\",\n \"code\" : 11000,\n \"codeName\" : \"DuplicateKey\",\n \"keyPattern\" : {\n \"ns\" : 1,\n \"min\" : 1\n },\n \"keyValue\" : {\n \"ns\" : \"cdrarch.cdr_af_20200913\",\n \"min\" : {\n \"SHARD_MINSEC\" : { \"$minKey\" : 1 }\n }\n },\n \"operationTime\" : Timestamp(1601679209, 31),\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1601679212, 24),\n \"signature\" : {\n \"hash\" : BinData(0,\"xr0fqbLl5LvtpBTMfrb1vi4DUu0=\"),\n \"keyId\" : NumberLong(\"6821045628173287455\")\n }\n }\n}\n", "text": "hi all,I just created another sharded collection, which per default make a single chunk for it in my default shard “db_rs002”. 
I then deliberately run myself a sequence of “sh.splitAt()” commands on this collection to split up that initial chunk into 3600 new chunks.This process went fine for several weeks, but now it fails on the last split, for a new collection “cdr_af_20200913” with the following error:What can cause this “E11000 duplicate key error collection…” situation, and how to fix it?", "username": "Rob_De_Langhe" }, { "code": "", "text": "Seems related to https://jira.mongodb.org/browse/SERVER-40061 : that issue is closed, but we hit this similar (or same?) problem with our MongoDB version 4.4.1 (we recently upgraded from 4.2.0, and now we hit this issue)", "username": "Rob_De_Langhe" }, { "code": " ...\n splitting for 3599\n{\n \"ok\" : 0,\n \"errmsg\" : \"split failed :: caused by :: chunk operation commit failed: version 1|2||5f7d9b275f721c6a0f3fa945 doesn't exist in namespace: cdrarch.cdr_mobi_20200927. Unable to save chunk ops. Command: { applyOps: [ { op: \\\"u\\\", b: true, ns: \\\"config.chunks\\\", o: { _id: \\\"cdrarch.cdr_mobi_20200927-SHARD_MINSEC_MinKey\\\", lastmod: Timestamp(1, 1), lastmodEpoch: ObjectId('5f7d9b275f721c6a0f3fa945'), ns: \\\"cdrarch.cdr_mobi_20200927\\\", min: { SHARD_MINSEC: MinKey }, max: { SHARD_MINSEC: 3599.0 }, shard: \\\"db_rs002\\\", history: [ { validAfter: Timestamp(1602067239, 5), shard: \\\"db_rs002\\\" } ] }, o2: { _id: \\\"cdrarch.cdr_mobi_20200927-SHARD_MINSEC_MinKey\\\" } }, { op: \\\"u\\\", b: true, ns: \\\"config.chunks\\\", o: { _id: \\\"cdrarch.cdr_mobi_20200927-SHARD_MINSEC_3599.0\\\", lastmod: Timestamp(1, 2), lastmodEpoch: ObjectId('5f7d9b275f721c6a0f3fa945'), ns: \\\"cdrarch.cdr_mobi_20200927\\\", min: { SHARD_MINSEC: 3599.0 }, max: { SHARD_MINSEC: MaxKey }, shard: \\\"db_rs002\\\", history: [ { validAfter: Timestamp(1602067239, 5), shard: \\\"db_rs002\\\" } ] }, o2: { _id: \\\"cdrarch.cdr_mobi_20200927-SHARD_MINSEC_3599.0\\\" } } ], preCondition: [ { ns: \\\"config.chunks\\\", q: { query: { ns: \\\"cdrarch.cdr_mobi_20200927\\\", min: { SHARD_MINSEC: MinKey }, max: { SHARD_MINSEC: MaxKey } }, orderby: { lastmod: -1 } }, res: { lastmodEpoch: ObjectId('5f7d9b275f721c6a0f3fa945'), shard: \\\"db_rs002\\\" } } ], writeConcern: { w: 1, wtimeout: 0 } }. 
Result: { applied: 1, code: 11000, codeName: \\\"DuplicateKey\\\", errmsg: \\\"E11000 duplicate key error collection: config.chunks index: ns_1_min_1 dup key: { ns: \\\"cdrarch.cdr_mobi_20200927\\\", min: { SHARD_MINSEC: MinKey } }\\\", results: [ false ], ok: 0.0, keyPattern: { ns: 1, min: 1 }, keyValue: { ns: \\\"cdrarch.cdr_mobi_20200927\\\", min: { SHARD_MINSEC: MinKey } }, $gleStats: { lastOpTime: { ts: Timestamp(1602067429, 63), t: 12 }, electionId: ObjectId('7fffffff000000000000000c') }, lastCommittedOpTime: Timestamp(1602067429, 63), $clusterTime: { clusterTime: Timestamp(1602067429, 63), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1602067429, 63) } :: caused by :: E11000 duplicate key error collection: config.chunks index: ns_1_min_1 dup key: { ns: \\\"cdrarch.cdr_mobi_20200927\\\", min: { SHARD_MINSEC: MinKey } }\",\n \"code\" : 11000,\n \"codeName\" : \"DuplicateKey\",\n \"keyPattern\" : {\n \"ns\" : 1,\n \"min\" : 1\n },\n \"keyValue\" : {\n \"ns\" : \"cdrarch.cdr_mobi_20200927\",\n \"min\" : {\n \"SHARD_MINSEC\" : { \"$minKey\" : 1 }\n }\n },\n \"operationTime\" : Timestamp(1602067422, 33),\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1602067429, 65),\n \"signature\" : {\n \"hash\" : BinData(0,\"E6blKGQrwEdQmky1zS1LY0txJGc=\"),\n \"keyId\" : NumberLong(\"6821045628173287455\")\n }\n }\n}\n", "text": "ok, by lack of any feedbacks, I continue my own desparate attempts to get MongoDB usable again:The error message after the last chunk was being split:-> getting more desparate now…", "username": "Rob_De_Langhe" } ]
Split of chunks fails with "unable to save chunk ops"
2020-10-02T23:05:29.645Z
Split of chunks fails with “unable to save chunk ops”
2,315
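A sketch of the pre-split routine the poster describes (not the original script), written for the mongo shell using the collection and shard key named in the error:

```js
// Enable sharding, shard the collection, then split the single initial
// chunk at each second-of-the-hour boundary: splits at 1..3599 yield
// the 3600 chunks described in the thread.
sh.enableSharding("cdrarch");
sh.shardCollection("cdrarch.cdr_af_20200913", { SHARD_MINSEC: 1 });
for (let sec = 1; sec <= 3599; sec++) {
    sh.splitAt("cdrarch.cdr_af_20200913", { SHARD_MINSEC: sec });
}

// The E11000 in the thread comes from config.chunks, so inspect the
// chunk metadata that the duplicate-key error points at:
db.getSiblingDB("config").chunks
  .find({ ns: "cdrarch.cdr_af_20200913" })
  .sort({ min: 1 });
```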
null
[]
[ { "code": "", "text": "hi everyone! so I need your proposals to face my problem I throw myself into the water , let’s go!\ni have a remote central database that must contain all records coming from using a application by many clients! the application save data in local firsteble then save data in remote database ! we’re good until now… the problem persist when we go in offline mode (no internet) so lets suppose that we have a remote db central 2 clients : client A and client B => local database A + local database B (the two databases are independent) . we are offline ! i have create a record rec1 in the DB -A and record rec2 in the DB -B . opppa\nonline mode is active so the db A must push the rec1 in DB central also the db B must push the rec2 in DB central :\n____________________Central DB\n_______________________ rec 1\n_______________________ rec 2\nDB A --------------------------| |----------------------------------DB B\nrec1 _______________________________________________ rec 2when im using replica set data will be inserted everywhere and that is my problem !!!\nprimary(db master) will synchronize with secondary databases(db slaves)\ni don’t want that the DB-A data to be inserted into the DB-B and vice versa !", "username": "spoke996" }, { "code": "", "text": "Hi @spoke996,I actually tried to do this in MongoDB World Hackathon 2019. You could do automatic sync between your local databases and your central database, even after your device goes offline.I did it with MongoDB Mobile, which was a part of legacy MongoDB Stitch but currently I guess MongoDB Realm provides this sync, and sync configurations. Maybe @Drew_DiPalma might be able to help you with this more.Documentation - MongoDB Realm SyncDocumentation of Legacy MongoDB Mobile", "username": "shrey_batra" }, { "code": "", "text": "@shrey_batra thank you for your return ! @Drew_DiPalma can you help me about this problem?", "username": "spoke996" }, { "code": "", "text": "Hi – It might help to get a little more information about your use case, but this seems like a place where you would use Sync and create a Partition Key that separated your data by user (so that rec A / rec B can live within the same backend database but are not shared by users). If you need to duplicate/move data between rec A/rec B you may also want to look at Triggers for this use case.", "username": "Drew_DiPalma" } ]
Synchronize multiple local database with a remote central database
2020-10-01T14:55:40.398Z
Synchronize multiple local database with a remote central database
6,767
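Drew's suggestion (a partition key that separates data by user) looks roughly like this in the Realm JavaScript SDK v10. A sketch under assumptions: RecordSchema and the app id are placeholders, not anything from the thread.

```js
const Realm = require("realm");
const app = new Realm.App({ id: "<your-realm-app-id>" });

async function openUserRealm(user) {
  // Each client opens a realm whose partition is its own user id, so
  // client A's records are never synced into client B's local database;
  // the central (Atlas) database still holds every partition.
  const config = {
    schema: [RecordSchema], // placeholder schema
    sync: { user: user, partitionValue: user.id },
  };
  return Realm.open(config);
}
```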
null
[]
[ { "code": "", "text": "Hello all,I am looking for a MongoDB trainer to teach a course online / remotely.If you’re interested, please let me know. You can reply here under this post or write to me: [email protected] you for reading. ", "username": "Poppy_Huang" }, { "code": "", "text": "Hi Poppy, I have forwarded your request to over professional services team. Someone should be in touch. If not ping me again here or at [email protected].", "username": "Joe_Drumgoole" }, { "code": "", "text": "Hi Joe,Thank you so much, and sorry for my late reply.As of now, no one has contacted me for the training project.Hopefully I’ll start receiving messages very soon. Thanks again!Poppy", "username": "Poppy_Huang" }, { "code": "", "text": "We don’t have any japanese speakers who can teach MongoDB. Could you use an English speaking trainer?", "username": "Joe_Drumgoole" }, { "code": "", "text": "Hi Joe,Thank you for checking and getting back to me.The client is Japanese and they ask for a Japanese-speaking trainer.I’ll check if an English-speaking would work for them and let you know.Thank you again for your help! Poppy", "username": "Poppy_Huang" }, { "code": "", "text": "Hi Poppy,I work in Japan supporting MongoDB and open source databases. Can I be of help to you?こんにちは。日本でMongoDBの技術サポートの仕事をしています。何か役に立てることはあるでしょうか?[email protected]", "username": "Satoshi_OKANO" }, { "code": "", "text": "Hi Satoshi san,Thank you very much for your message.Yes, I’m still looking for a Japanese-speaking trainer to teach a course on MongoDB online.I just sent the details to your email. Please check it when you get a chance.Thank you and looking forward to hearing from you!Best,\nPoppy", "username": "Poppy_Huang" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Looking for MongoDB trainer to teach a course online in Japanese
2020-09-14T06:44:00.561Z
Looking for MongoDB trainer to teach a course online in Japanese
1,937
null
[ "atlas-device-sync", "flexible-sync" ]
[ { "code": "", "text": "Hi switching from realm.io to MongoDB Realm I’m missing the query based sync. For example I’m having a “User”-object that has a private partition-key. This user has a “Classroom”-object as a linking object. The problem is, that the classroom should be linked to multiple user and must have another partition-key.I could add the classroom partition-key to my user and open the classroom with another realm. But this is getting really complicated with many objects from different partitions. Also this way makes it hard to pass the change event to the object that is linking.I was very happy to find Ian_Ward’s statement that you are already working on this with high priority. Can you roughly say when this feature will come? Will it take weeks, months or even longer?I hope this helps. We understand that managing different realms can become a bit of an implementation detail and we are endeavoring to fix this. It is our top priority right now to implement a flexible syncing model - a la query-based sync in the legacy realm cloud which would make this schema design much easier.Thanks and best regards!Marcel", "username": "Marcel_Breska" }, { "code": "", "text": "@Marcel_Breska Definitely months I am sorry to say. It is a complex technical problem; and that takes time unfortunately. I will say that we are working on it every week and hope to deliver an alpha in the short term, with follow-on iterations after that - beta, GA, etc. I know this may not be the timeline you are hoping for but in full candor, if you need something right now then it may be best to look elsewhere.", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Flexible syncing model - Roadmap
2020-10-02T11:03:59.325Z
Flexible syncing model - Roadmap
3,117
null
[]
[ { "code": "let credentials = Credentials.google(serverAuthCode: \"<token>\")", "text": "Hi all,I’ve recently been having issues similar to the ones found in this thread, except I’m using iOS 14/Xcode 12 and Realm Beta 10.0.0-beta.5.Specifically, both the FB and Google OAuth process lead to this error code:‘error exchanging access code with OAuth2 provider’, code: 47}I double-checked that I was using the right Client ID/Secret as well as the proper Redirect URI, and all seems ok on the backend for both Facebook and Google. Now I’ve tried using the updated serverAuthCode method found in the docs here, but the Credentials shoot me an error:let credentials = Credentials.google(serverAuthCode: \"<token>\")Error: Error: Type ‘Credentials’ (aka ‘RLMCredentials’) has no member ‘google’Is there an update I’m missing or am I missing something in the Credentials setup? Thanks!", "username": "Aabesh_De" }, { "code": "", "text": "Hey Aabesh - can you upgrade to beta.6 and see if this issue persists? This syntax changed in the newest version.Note: Google OAuth Credentials takes the Auth Code as a parameter and Facebook OAuth credentials take the access token as a parameter, as both orgs provide different OAuth implementation recommendations.", "username": "Sumedha_Mehta1" }, { "code": "let credentials = Credentials.init(googleAuthCode: \"<token>\")\napp.login(credentials: credentials) { (user, error) in\nDispatchQueue.main.sync {\n...\n}\n", "text": "let credentials = Credentials.google(serverAuthCode: “”)Hey there, thanks for the reply - unfortunately still seeing the same issue with RLMCredentials having no member ‘google’ and not seeing any instance of serverAuthCode working successfully. Here’s what the pod looks like, and I confirmed that the target was correct and all dependencies were updated to 10.0.0.6:pod ‘RealmSwift’, ‘=10.0.0-beta.6’Let me know if there’s any specifics I can provide as well.UPDATE: So the syntax is showing successfully as googleAuthCode instead of googleToken, but I’m still getting Error Code = 47 “error exchanging access code with OAuth2 provider”. Here’s a sample of what I’m doing, note that I can’t do Credentials.google as that’s not being recognized:I’ve also made sure to setup a Web OAuth Client ID and an iOS OAuth Client ID, and I’m using the Web application creds in the Realm configuration.", "username": "Aabesh_De" }, { "code": " serverClientId clientIDGID.SharedInstance()user.serverAuthCodeAppDelegate.swift pod 'GoogleSignIn', '= 4.4.0''RealmSwift', '=10.0.0-beta.5'@UIApplicationMain\nclass AppDelegate: UIResponder, UIApplicationDelegate, GIDSignInDelegate {\n\nvar window: UIWindow?\n\nfunc application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {\n \n // google iOS client ID\n GIDSignIn.sharedInstance()?.clientID = Constants.GOOGLE_CLIENT_ID\n // google web client ID\n GIDSignIn.sharedInstance()?.serverClientID = Constants.GOOGLE_SERVER_CLIENT_ID\n GIDSignIn.sharedInstance()?.delegate = self\n\n window = UIWindow(frame: UIScreen.main.bounds)\n window?.makeKeyAndVisible()\n window?.rootViewController = UINavigationController(rootViewController: WelcomeViewController())\n return true\n}\n\n// added for google sign-in\nfunc application(_ app: UIApplication, open url: URL, options: [UIApplication.OpenURLOptionsKey : Any] = [:]) -> Bool {\n return GIDSignIn.sharedInstance().handle(url as URL?,\n sourceApplication: options[UIApplication.OpenURLOptionsKey.sourceApplication] as? 
String,\n annotation: options[UIApplication.OpenURLOptionsKey.annotation])\n}\n\nfunc sign(_ signIn: GIDSignIn!, didSignInFor user: GIDGoogleUser!, withError error: Error!) {\n if let error = error {\n print(\"error received when logging in with Google: \\(error.localizedDescription)\")\n } else {\n switch user.serverAuthCode {\n case .some:\n \n // Fetch Google token via the Google SDK\n app.login(credentials: Credentials(googleToken: user.serverAuthCode)) { (user, error) in\n DispatchQueue.main.sync {\n guard error == nil else {\n print(\"Login failed: \\(error!)\")\n return\n }\n // Now logged in, do something with user\n }\n }\n\n case .none:\n print(\"serverAuthCode not retreived\")\n GIDSignIn.sharedInstance()?.signOut()\n }\n \n }\n}\n", "text": "Hey Aabesh,Ah yeah - I realized there might have been an additional typo for the first error. For the second, I need a bit more information on how you’ve set up your Google Sign in. You do need a Web Oauth Client ID and a iOS OAuth Client Id.Then you need to take the following steps:\nimage2398×1032 176 KB\nAdd the GoogleSignIn SDK to your iOS project and add your OAuth iOS Keychain to your Xcode projectset the serverClientId (Web Client Id) and the clientID(iOS client Id) properties on your GID.SharedInstance() object (legacy docs show a small snippet for iOS here)Use user.serverAuthCode on Google login to create your Realm credentialfull code example from AppDelegate.swift that I wrote (I’m using pod 'GoogleSignIn', '= 4.4.0', pod 'RealmSwift', '=10.0.0-beta.5')", "username": "Sumedha_Mehta1" }, { "code": "Button(action: {\nself.GoogleSignIn(GIDSignIn?, didSignInFor: GIDGoogleUser?, withError: Error?)\n})\n", "text": "Ah I think my mistake was missing the GoogleSignIn SDK entirely and not setting the serverClientID/Client ID correctly. Thanks for the detailed walkthrough! Was I missing this somewhere in the MongoDB iOS/Google OAuth docs? Might just have totally spaced there but I’m not sure where the steps for the GoogleSignIn SDK were referenced?I’ve got most of it setup, and now I just need to call the last function for signing in programmatically through a button action. 
Any tips on how to reference the last function and call the arguments correctly?I tried this but it’s a no-go given I’m just referencing the GID types and not the actual arguments themselves:", "username": "Aabesh_De" }, { "code": "GIDSignInDelegatesignGIDSignInUIDelegateclass WelcomeViewController: UIViewController, GIDSignInUIDelegate {\n override func viewDidLoad() {\n super.viewDidLoad()\n \n title = \"Welcome\"\n\n // Do any additional setup after loading the view.\n GIDSignIn.sharedInstance()?.uiDelegate = self\n \n // this notification is called when signIn fails or succeeds\n // NotificationCenter.default.addObserver(self, selector: #selector(signInAndPushVC), name: NSNotification.Name(\"OAUTH_SIGN_IN\"), object: nil)\n\n \n if //user is logged in {\n self.navigationController?.pushViewController(TodoTableViewController(), animated: true)\n } else {\n \n let googleButton = GIDSignInButton(frame: CGRect(x: self.view.frame.width / 2 - 150, y: 200, width: 300, height: 50))\n \n self.view.addSubview(googleButton)\n \n }\n }", "text": "No, they’re not in our docs yet but we’re working on documenting these steps in the near future.I tried this but it’s a no-go given I’m just referencing the GID types and not the actual arguments themselves:As for the way I did it, I made sure my AppDelegate conformed to GIDSignInDelegate\nand logged into Realm in the sign method.I would highly suggest you also check other tutorials/examples as my example may not be best practice.For my button implementation, the GoogleSignIn Button lives in the first ViewController that is shown first and that ViewController conforms to GIDSignInUIDelegate. I believe once you set the delegate to self, you can also set a notification + handler that will push a new view controller on login.View Controller + Button Snippet", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Thanks so much @Sumedha_Mehta1! Appreciate the guidance. Looks like GIDSignIn can also be directly called in SwiftUI with GIDSignIn.sharedInstance().signIn().I found some good examples with SwiftUI in this Medium article, and for those interested, Google has examples on reference classes and the GIDSignIn button here.", "username": "Aabesh_De" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Facebook + Google OAuth Issues?
2020-10-06T14:36:11.363Z
Facebook + Google OAuth Issues?
4,353
null
[ "queries" ]
[ { "code": "", "text": "I know query plan is stored in cache.But if a better query plan appears, I wonder if the previous plan will be erased or both.", "username": "Kim_Hakseon" }, { "code": "", "text": "I hope you will find the following useful.", "username": "steevej" }, { "code": "", "text": "I saw that and asked the question.", "username": "Kim_Hakseon" }, { "code": "works", "text": "Hi @Kim_Hakseon,Per the documentation that @steevej referenced, a query plan that is Active can move to an Inactive state in the cache if there is a better performing plan:The planner also evaluates the entry’s performance and if its works value no longer meets the selection criterion, it will transition to Inactive state.The query cache is also affected by catalog operations like index changes and collection drops (see: Plan Cache Flushes).There have been some changes to query plan caching in successive MongoDB versions, so if you are using an older release series of MongoDB please select the relevant version of the manual from the selection drop-down near the top left of the Query Plans page. The description above is applicable to MongoDB 4.2 and 4.4 (which added states for plan cache entries), but all modern versions of MongoDB have a query replanning mechanism.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "So, if a better plan appears in version 4.2, is it okay to think that the previous plan will be erased from the cache?", "username": "Kim_Hakseon" }, { "code": "", "text": "Hi @Kim_Hakseon,In general you should expect that a cached plan will automatically be replaced if a more efficient plan is available.The plan cache in MongoDB 4.2+ associates a state for each cache entry to provide more nuanced replanning behaviour than the outright eviction of prior versions. With this version of the plan cache, inactive cache entries (with their associated cost) are removed from the plan cache on a Least Recently Used (LRU) basis.For comparison, see the query plan flow chart for MongoDB 4.0: Query Plans — MongoDB Manual.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you~👍It helped me a lot.", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
About query plan caching
2020-10-06T11:27:21.114Z
About query plan caching
3,753
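The entry states described above can be observed directly. A mongo shell sketch for 4.2+, run against a hypothetical orders collection:

```js
// List cached plans with their state; "works" is the cost measure the
// planner compares when deciding whether an entry stays Active.
db.orders.aggregate([
  { $planCacheStats: {} },
  { $project: { createdFromQuery: 1, isActive: 1, works: 1 } }
]);

// Force replanning from scratch by clearing the collection's plan cache:
db.orders.getPlanCache().clear();
```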
null
[ "o-fish" ]
[ { "code": "{\n \"error\":\"insert not permitted\",\n \"error_code\":\"NoMatchingRuleFound\",\n\"link\":\"https://realm.mongodb.com/groups/5f733d3366d46e349c2c49de/apps/5f735f71a3478af2476f85cf/logs?co_id=5f7cb685eeff56d1d2903d33\"\n}\n{\"name\":\"insertOne\",\"service\":\"mongodb-atlas\",\"arguments\":[{\"collection\":\"User\",\"database\":\"wildaid\",\"document\":{\"email\":\"[email protected]\",\"name\":{\"first\":\"asdf\",\"last\":\"lkj\"},\"active\":true,\"userGroup\":\"User Group\",\"createdOn\":{\"$date\":{\"$numberLong\":\"1602007707341\"}},\"agency\":{\"name\":\"joquendo\",\"admin\":false},\"global\":{\"admin\":false},\"_id\":{\"$oid\":\"5f7cb29bd86f74790d9a72d3\"}}}]}\n", "text": "I created an account in the sandbox and I tried to create a new “field officer” user role, but I’m currently getting a 403 on the ‘call’ api.Sandbox: https://wildaidsandbox-mxgfy.mongodbstitch.com/\nMy admin user: joquendoWhen I try to add a new user, the register API completes , but the call api Returns a 403 with the following response:Do I need to change some settings in order to create a new user?I am attempting to create new users to look into Field officers should see all agency users on the users page (web) · Issue #168 · WildAid/o-fish-realm · GitHubPayload:Thanks,James", "username": "James_Oquendo" }, { "code": "", "text": "Hi James,Thanks for alerting us to this - we were able to replicate this. We are also getting similar errors when logging in as a user and trying to add a boarding record in the app. We are looking into the problem - it’s not anything you did or can change, it’s affecting all users of the instance.Hopefully we’ll have this fixed soon, I’ll let you know when we do!-Sheeri", "username": "Sheeri_Cabral" }, { "code": "", "text": "Hi @James_Oquendo - @Andrew_Morgan has fixed this issue - you should be able to create a user now (I was able to, when I wasn’t before!) .", "username": "Sheeri_Cabral" }, { "code": "", "text": "This is working for me now too. Thank you.", "username": "James_Oquendo" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
O-Fish Sandbox creating a user with a field officer role
2020-10-06T19:06:17.903Z
O-Fish Sandbox creating a user with a field officer role
3,787
null
[ "production", "c-driver" ]
[ { "code": "", "text": "It is my pleasure to announce the MongoDB C Driver 1.17.1.libmongocBug fixes:libbsonNo changes since 1.17.0; release to keep pace with libmongoc’s version.", "username": "Kevin_Albertson" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C driver 1.17.1 released
2020-10-06T22:14:17.567Z
MongoDB C driver 1.17.1 released
2,137
null
[]
[ { "code": "2020-10-05T09:25:52.839-0500 Read timestamp from bookmark file: {1598788862 3}\n2020-10-05T09:25:52.839-0500 Proceeding to tail oplog.\n2020-10-05T09:25:52.841-0500 Current lag from source: 866h24m47s\n2020-10-05T09:25:52.841-0500 NewOplogReader start time is greater than the final buffered timestamp\n2020-10-05T09:25:52.841-0500 Tailing the oplog on the source cluster starting at timestamp: {1598788862 3}\n2020-10-05T09:25:52.877-0500 Oplog tailer has shut down. Oplog applier will exit.\n2020-10-05T09:25:52.877-0500 Waiting for new oplog entries to apply.\n2020-10-05T09:25:52.877-0500 Fatal error while tailing the oplog on the source: Checkpoint not available in oplog! expected: {1598788862 3}; got: {1601235\n193 1}\n2020-10-05T09:25:52.877-0500 Timestamp file written to /var/lib/mongo/mongomirror-linux-x86_64-rhel70-0.9.1/bin/mongomirror.timestamp.\n2020-10-05T09:25:52.877-0500 Failed: error while tailing the oplog on the source: Checkpoint not available in oplog! expected: {1598788862 3}; got: {16012\n35193 1}\nrs.printReplicationInfo()\nconfigured oplog size: 77824MB\nlog length start to end: 677656secs (188.24hrs)\noplog first event time: Sun Sep 27 2020 14:33:13 GMT-0500 (CDT)\noplog last event time: Mon Oct 05 2020 10:47:29 GMT-0500 (CDT)\nnow: Mon Oct 05 2020 10:47:37 GMT-0500 (CDT)\n", "text": "Hello,I’ve been testing mongomirror on a 3.4.2 linux environment (3 node rs,not sharded) to do a data migration to Atlas. It worked initially, but took 4 days to move 400 Gb so we made some revisions to the destination config and are trying again, but can never get it started as there are oplog errors. I’ve resized the oplog across the cluster, but still cannot start mongomirror.Here’s my errors:Is there some way to reset the oplog so that it does not cause this ? Is the mongomirror for one-time only use ?Here’s the oplog info :", "username": "Pierre_Evans" }, { "code": "", "text": "It is too far behind to pickup from the oplog 866h behind. You’ll have to restart.Monogomirror is resumable after initial sync and catchup, but you have to remain in the oplog window.MongoDB support were a fantastic help when I went through this.", "username": "chris" }, { "code": "", "text": "Thanks Chris, I figured I’d have to restart , just still a little unsure on how the oplog side of things works. Back in my relational db days, the equivalent of the oplog would basically cycle out based on a commit point, so I was thinking that rerunning mongomirror would not try to continue from the endpoint.", "username": "Pierre_Evans" }, { "code": "", "text": "So, If i resized my oplog in the intervening period, shouldn’t it have ‘reset’ the oplog ? As part of the steps i dropped the oplog on each of the 3 nodes, so ithought that would’ve cleared/reset the oplog window. Is that the case ?", "username": "Pierre_Evans" }, { "code": "--drop ", "text": "You’ll have to remove the bookmark file from mongomirror. It is trying to restart from that point.You’ll want to drop all the databases in the target cluster manually or with thte --drop option.The oplog will remove the oldest items when it hits the configured limit (size in GB or time(4.4_)). Not based on any commit.", "username": "chris" }, { "code": "", "text": "Chris , you gave me the exact hint that I needed, I’m rerunning mongomirror as we speak, having renamed the bookmark file and specifying the forceDump option\nThanks !", "username": "Pierre_Evans" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
Mongomirror & oplog issue
2020-10-05T19:52:50.917Z
Mongomirror & oplog issue
3,828
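The resolution in this thread, condensed into a hedged shell sketch: the host names are placeholders and the mongomirror flags are as the posters describe them, so check mongomirror --help for the exact spelling in your version.

```sh
# 1. In the mongo shell on the source, confirm the oplog window still
#    covers mongomirror's resume point:   rs.printReplicationInfo()
#    If the bookmark is older than "oplog first event time", the resume
#    point has rolled off and a fresh initial sync is required.

# 2. Move the stale bookmark aside so mongomirror cannot resume from it:
mv mongomirror.timestamp mongomirror.timestamp.bak

# 3. Re-run with a forced dump (placeholder hosts):
./mongomirror --host db_rs0/source-host:27017 \
              --destination atlas-shard-0/target-host:27017 \
              --forceDump
```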
https://www.mongodb.com/…7_2_1024x373.png
[ "atlas-search" ]
[ { "code": "", "text": "Hi,I dont understand why i have network when my search index was indexing and updating ?\nLast month i was charged 4 TO of Network Out because of this.image1664×607 49.1 KB", "username": "Jonathan_Gautier" }, { "code": "", "text": "Hi @Jonathan_Gautier,I think the best way to handle atlas cluster specific billing issues is with our Support through your atlas project.If you don’t have a support subscription please send a message in the Atlas chat.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny\nNo problem i have made an issue.But community need to know you charged Network Out when Search Index, Indexing or Updating…\nAnd for me is not normal for something work localy. Is not my fault i didn’t want this network out.And i think you need to fix this issue because is not just billing issue, is also cluster architecture or communication management between mongodb and mongot.I have M40 with three server and each have data in there disk, also why data need to network out ?Thanks !", "username": "Jonathan_Gautier" }, { "code": "", "text": "Hi @Jonathan_Gautier,I think that Atlas members are deployed in different availability zones therefore the cloud provider charge costs for Data Transfer between zones/regions.Replicating indexes might cause this traffic to increase.I think there is no way to avoid it , as per my opinion but I don’t know the specific of your case.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Atlas Search has no impact on network costs and Atlas Search data never leaves or enters a node externally.Each Atlas Search node only connects to the local mongod via collection scan during initial sync, and changestreams during steady state, which is how indexes are created and maintained.", "username": "Doug_Tarr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas Search Network Out Billing?
2020-10-05T03:36:32.729Z
Atlas Search Network Out Billing?
2,139
https://www.mongodb.com/…d783b57c8603.png
[ "production", "php" ]
[ { "code": "mongodbcomposer require mongodb/mongodb^1.7.1\nmongodb", "text": "The PHP team is happy to announce that version 1.7.1 of the MongoDB PHP library is now available. This library is a high-level abstraction for the mongodb extension.Release HighlightsThis release fixes errors during PHP shutdown if GridFS streams were left open in a dirty state.A complete list of resolved issues in this release may be found at:\nhttps://jira.mongodb.org/secure/ReleaseNote.jspa?projectId=12483&version=29618DocumentationDocumentation for this library may be found at:FeedbackIf you encounter any bugs or issues with this library, please report them via this form:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12483&issuetype=1InstallationThis library may be installed or upgraded with:Installation instructions for the mongodb extension may be found in the PHP.net documentation.", "username": "Andreas_Braun" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Library 1.7.1 Released
2020-10-06T11:28:04.583Z
MongoDB PHP Library 1.7.1 Released
1,539
null
[ "production", "php" ]
[ { "code": "pecl install mongodb-1.8.1\npecl upgrade mongodb-1.8.1\n", "text": "The PHP team is happy to announce that version 1.8.1 of the mongodb PHP extension is now available on PECL.Release HighlightsThis release fixes a memory leak when committing transactions. It also fixes a bug where error labels attached to write concern errors were not exposed in exceptions.A complete list of resolved issues in this release may be found at: Release Notes - MongoDB JiraDocumentationDocumentation is available on PHP.net:\nPHP: MongoDB - ManualFeedbackWe would appreciate any feedback you might have on the project:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12484&issuetype=6InstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are available on PECL:\nhttp://pecl.php.net/package/mongodb", "username": "Andreas_Braun" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Extension 1.8.1 Released
2020-10-06T11:25:25.864Z
MongoDB PHP Extension 1.8.1 Released
5,211
null
[]
[ { "code": "", "text": "We have run into several cases where URLs generated by Realm do not follow the standard URL schema. Specifically Realm appends query strings to the end of URLs even when a URL fragment (#) is present. This is causing issues with several frontend routing solutions that expect standards compliant URLs.URL Structure: URL - Wikipedia", "username": "veysi_yalcin" }, { "code": "", "text": "Email confirmation and reset password links are the specific cases we ran into.This is the URL generated by Realm:\n“…site.com/#auth/confirm-user?token=12345”This would be the correct URL structure:\n“…site.com?token=12345#auth/confirm-user”", "username": "veysi_yalcin" }, { "code": "", "text": "Can we expect this to be fixed anytime soon or should we work around this on our end? Thanks!", "username": "veysi_yalcin" } ]
Some URLs generated by Realm do not follow the standard URL schema
2020-09-29T17:48:33.661Z
Some URLs generated by Realm do not follow the standard URL schema
2,249
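Until this is fixed server-side, a frontend can parse the token out of the fragment itself. A small JavaScript sketch of that workaround, using the URL shape quoted above:

```js
// Realm emits ".../#auth/confirm-user?token=12345": the query string sits
// inside the fragment, so location.search is empty and standards-based
// routers never see the token. Parse it out of the hash instead.
function paramsFromHash(url) {
  const hash = new URL(url).hash;        // "#auth/confirm-user?token=12345"
  const queryStart = hash.indexOf("?");
  if (queryStart === -1) return new URLSearchParams();
  return new URLSearchParams(hash.slice(queryStart + 1));
}

paramsFromHash("https://site.com/#auth/confirm-user?token=12345").get("token");
// -> "12345"
```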
null
[]
[ { "code": "", "text": "While I want to install Mongo on my PC I get such an error (executing mongo --nodb):\n“mongo” cannot be opened because the developer cannot be verified.\nmacOS cannot verify that this app is free from malware.Do you happen to know what should I do?", "username": "Agnieszka_Morzywolek_63138" }, { "code": "", "text": "This link from should help:macOS includes a technology called Gatekeeper, that's designed to ensure that only trusted software runs on your Mac.", "username": "007_jb" }, { "code": "", "text": "Hi @Agnieszka_Morzywolek_63138,I hope you found @007_jb’s response helpful. Please let us know if you are still having any issue.Thanks,\nShubham Ranjan\nCurriculum Services Engineer", "username": "Shubham_Ranjan" }, { "code": "", "text": "I have the same issue. I used the information from the link but still same error.", "username": "Liliane_Top" }, { "code": "", "text": "When I ran mongo --nodb command in terminal, I see the below error as command not found\nimage1034×180 12 KB", "username": "Krishna_K" }, { "code": "", "text": "Responded to you on other threadCorrect your mongodb path\nIt should work", "username": "Ramachandra_Tummala" }, { "code": "", "text": "What do you mean by correct your mongodb path?\nIs the following not correct?\nbash-3.2$ /Users/lilianetop/Desktop/mongodb-macos-x86_64-4.4.0/bin", "username": "Liliane_Top" }, { "code": "", "text": "Have you been able to solve it?", "username": "Liliane_Top" }, { "code": "", "text": "I solved it as I downloaded the wrong package. The confusion comes from the screens shown are outdated and incomplete.", "username": "Liliane_Top" }, { "code": "", "text": "Yes, I resolved it and agree with you as some of the instructions are confusing and outdated.", "username": "Krishna_K" }, { "code": "", "text": "Hi @Liliane_Top,I’m glad your issue got resolved and thanks for the feedback. We will shortly update the instructions.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "Hi! I’m facing the same issue with installing Mongo on macOS.\nI’ve set the path and when trying to run mongo I receive this error:\nScreenshot 2020-09-21 at 12.38.51986×186 12.8 KB", "username": "Anastasia_Alexandrova" }, { "code": "", "text": "follow the instruction to update your path or specify the full path name of the command", "username": "steevej" }, { "code": "", "text": "I followed the instruction. 
The .tar file is mongodb-macos-x86_64-enterprise-4.4.1.tgz\nThe path specified in the /etc/paths is /Users/anastasia/Desktop/mongodb-macos-x86_64-enterprise-4.4.1/bin\nyet after the restart the mongo is not started", "username": "Anastasia_Alexandrova" }, { "code": "echo $PATHbash shellbash shell", "text": "Hi @Anastasia_Alexandrova,What’s the output of this command ?echo $PATHI can see you are currently using bash shell but did you switch to bash shell before setting the path ?", "username": "Shubham_Ranjan" }, { "code": "", "text": "@Shubham_Ranjan , I did switch to bash shell before updating /etc/paths\nthe output of echo $PATH is /Users/anastasia/.pyenv/shims:/Users/anastasia/go/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin", "username": "Anastasia_Alexandrova" }, { "code": "", "text": "But mongodb/bin is missing in your path\nIf you have edited the file and saved it you should see the path\nCan you show contents of /etc/paths", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Anastasia_Alexandrova,Please share the information requested by @Ramachandra_37567.But mongodb/bin is missing in your path\nIf you have edited the file and saved it you should see the path\nCan you show contents of /etc/paths~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "Hi all! I’ve added the path to mongodb in my .zshrc file and then I was able to start mongo shell.", "username": "Anastasia_Alexandrova" }, { "code": "", "text": "Hi @Anastasia_Alexandrova,I’m glad your issue got resolved. Please feel free to get back to us if you face any other issues.~ Shubham", "username": "Shubham_Ranjan" } ]
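For readers hitting the same "command not found" problem, the fix that worked above comes down to appending the extracted bin directory to the shell's PATH. A minimal sketch, assuming the archive was unpacked to the Desktop location used in this thread (adjust the path for your own download; on macOS Catalina and later the default shell is zsh, so the file is ~/.zshrc, while bash users would edit ~/.bash_profile instead):

    # Append the MongoDB bin directory to PATH, then reload the shell config
    echo 'export PATH="$HOME/Desktop/mongodb-macos-x86_64-enterprise-4.4.1/bin:$PATH"' >> ~/.zshrc
    source ~/.zshrc
    mongo --nodb   # should now start the shell without connecting to a server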
Can't install mongo on macOS
2020-03-04T17:14:10.940Z
Can&rsquo;t install mongo on macOS
7,546
null
[ "atlas-search" ]
[ { "code": "", "text": "Hi,When we make Sort or Count after $search, which engine is running for sort or count ?\nIt’s Lucene directly ( mongot ) or MongoDB ?Thanks !", "username": "Jonathan_Gautier" }, { "code": "", "text": "Hi @Jonathan_Gautier,As far as I understand only the $search stage runs against the mongot index. Therefore one of the performance recommendations is to perform any query heavy filtering or operations in $search stage.Considering this I think consecutive stages runs as part of regular MongoDB aggregation framework executor.However, I can try and verify it with the search team.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_DuchovnyYes you can ! I have lot of trouble when i have large database like more than millions documents. Sort and Count was toooooo slow for user experience. We need feature sort and count inside mongot, especialy when we have large database. I never had this problem with elasticsearch. I know Apache Lucene can handle Sort and Count or other operation like faceting.I think if this dont work inside mongot, you need to develop this features with atlas search team.", "username": "Jonathan_Gautier" }, { "code": "", "text": "Hi @Jonathan_Gautier,So I have confirmed that mongot is responsible only for everything inside $search stage and nothing more at this point.You should try to do anything possible there or in your data model to tune those queries at the moment.Having said that, we are working on introducing new capabilities for the search server.If you want to raise interest for our product team please open a feedback ticket at https://feedback.mongodb.com under the atlas sections.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas Search Sort and Count which engine works?
2020-10-05T02:11:37.710Z
Atlas Search Sort and Count which engine works?
3,150
null
[ "replication", "performance" ]
[ { "code": "$ time mongo localhost --port 27017 --username \"admin\" --password \"admin\" --authenticationDatabase \"admin\" < w1.txt\nMongoDB shell version v4.2.8\nreal 0m23.195s\n$ time mongo localhost --port 27017 --username \"admin\" --password \"admin\" --authenticationDatabase \"admin\" < w2.txt\nMongoDB shell version v4.2.8\nreal 1m41.394s\ntime mongo localhost --port 27017 --username \"admin\" --password \"admin\" --authenticationDatabase \"admin\" < w1.txt\nMongoDB shell version v4.2.8\nreal 3m36.208s\n$ time mongo localhost --port 27017 --username \"admin\" --password \"admin\" --authenticationDatabase \"admin\" < w2.txt\nMongoDB shell version v4.2.8\nreal 16m33.480s\n", "text": "Hi, I would like to know why the writeConcern w>1 is very slow for writes when de PRIMARY e SECONDARY are on the same network. I just want use SECONDARY for consistent READS for use in application. Below my tests.for (i=0; i<number; i++) {db.inventory.insert({ sku: “abcdxyz”, qty : 100, category: “Clothing” },{ writeConcern: { w: x, j: true, wtimeout: 1000 } })}w1: 10.000 documentsw2: 10.000 documentsw1: 100.000 documentsw2: 100.000 documents", "username": "Eduardo_Legatti" }, { "code": "", "text": "Hi @Eduardo_Legatti,As you increase your write concern your client is required to wait for asynchronous replication to replicate the written to n number of secondaries as specified in write concern.The more consistent your write is configured to be the more time it might take to write the data…Moreover, the more documents you write simultaneously the bigger the replication lag might be causing subsequent writes to get slower and slower as they are waiting longer.Even though the members are in the same network does not mean that asynchronous replication can’t be impacted by disk,cpu or ram factors for slow operations.Specific mongo versions might introduce additional behaviour on the way reads are impacting replication, be aware.Please read more herePavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel,So, I think using secondary members to scale reads is not a good approach, rigth?Thanks.", "username": "Eduardo_Legatti" }, { "code": "", "text": "Hi @Eduardo_Legatti,I suggest you to read the following answer:Thanks\nPavle", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
WriteConcern w>1 is very slow for writing
2020-10-02T18:50:01.335Z
WriteConcern w&gt;1 is very slow for writing
3,594
null
[]
[ { "code": "", "text": "hi i keep getting the authentication failed error when i try to connect to the host via compass.\nthanks.", "username": "mariam_34273" }, { "code": "", "text": "Were you able to resolve your issue? You may want to check out the earlier posts and the lecture notes.", "username": "Vicki_56212" }, { "code": "m001-mongodb-basics", "text": "Hi mariam_34273,Please note that the password for m001-student is m001-mongodb-basics. Here is the screenshot of my connection parameters.\nLet me know if it helps.Kanika", "username": "kanikasingla" }, { "code": "", "text": "Hi thereI am currently experiencing this error, I have entered the given password on numerous occasions.I would really appreciate it if I can be assisted.", "username": "sibabalwe-qamata" }, { "code": "", "text": "ok thanks.am trying again now", "username": "mariam_34273" }, { "code": "", "text": "Hi all… I am currently facing same error of authentication with given credentials… Could anyone please suggest", "username": "sukriti_71182" }, { "code": "", "text": "Try again, with the same credentials. I also had the same issue, and I managed to log-in", "username": "sibabalwe-qamata" }, { "code": "", "text": "Hi All, I am getting the same error “authentication fail” with given credentials, can someone please help. I need to finish this class today as it is the last day.", "username": "Bindu_76445" }, { "code": "", "text": "Hi All, I have used the credentials used above (same from course) but still getting the following error msg:\"Could not connect to MongoDB on the provided host and port \".Need help in connecting compass to proceed to rest of the course and is required for the labwork due today. Thanks in advance.", "username": "Vig_78283" }, { "code": "m001-mongodb-basics", "text": "Password: m001-mongodb-basics for cluster: “cluster0-shard-00-00-jxeqq.mongodb.net”, user: “m001-student”Kanika", "username": "kanikasingla" }, { "code": " ping cluster0-shard-00-00-jxeqq.mongodb.net\ntoptop", "text": "Try pinging the cluster from your command line:And if 27017 port is getting used in some other process, you need to stop the process. In linux, you can find top processes using top command.For more cluster connection issues, check this: Compass Connection [Solution]Kanika", "username": "kanikasingla" }, { "code": "", "text": "Had the same error, then noticed that I copied a space before the username. Might be the case for others, too!", "username": "Balint_22890" }, { "code": "", "text": "Hi , you can try setting your IP whitelist into this. Just go to your atlast account, then click cluster, then Security.\nimage.png737×445 16.6 KB\n", "username": "Tyn_06328" }, { "code": "", "text": "Thank you @Balint_22890…yeah, extra spaces before username. My problems is solved.", "username": "Julitra_Anaada_34330" }, { "code": "", "text": "m001-mongodb-basicsYeah it worked, When i tried as you said. The mistake is only in typing the username and password manually. But i didn’t get what these host name, replica set names are. Still having a doubt on those.", "username": "akhilesh_15140" }, { "code": "", "text": "But i didn’t get what these host name, replica set names are.Host Name: Hostname is used to connect to a mongod instance. This hostname can be a hostname, an FQDN, an IPv4 address, or an IPv6 address.Replica Set Name: Replica Set is a set of servers with the copies of data. Usually, it is a set of 3 mongod servers with 1 Primary and 2 Secondary servers. So, replica set name is the name given for whole replica set. 
More information is given here in docs-replication. And there is M103 course which covers the topic in detail.Kanika", "username": "kanikasingla" }, { "code": "", "text": "after using above solution it connects but I got following error in red lines:\nAn error occurred while loading navigation: ‘not master and slaveOk=false’: It is recommended to change your read preference in the connection dialog to Primary Preferred or Secondary Preferred or provide a replica set name for a full topology connection.", "username": "Dhananjay_Wadhavane" }, { "code": "", "text": "Are you connecting to your Sandbox cluster or Class cluster?\nTry with read preference as Primary preferred", "username": "Ramachandra_Tummala" }, { "code": "Primary nodesrv", "text": "Hi @Dhananjay_Wadhavane,Please use srv type connection string. It will always connect you to the Primary node of the Atlas replica set.You can get the srv connection string for your cluster from your Atlas account.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
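For anyone following along with the srv suggestion above, the connection string has the shape sketched below; the hostname is whatever Atlas shows under "Connect" for your own cluster (the one here is illustrative, derived from the class cluster named in this thread):

    mongo "mongodb+srv://cluster0-jxeqq.mongodb.net/test" --username m001-student --password m001-mongodb-basics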
Authentication failed
2019-01-10T17:57:49.939Z
Authentication failed
5,870
https://www.mongodb.com/…49bfaf152897.png
[]
[ { "code": "", "text": "Hello everyone ,i have a probleme i follow the official tutorial to install mongodb but i can’t active my connection.\nI would like to install mongodb on my Ovh vps server on ubuntu 20.04.Screenshot from 2020-10-05 12-03-26980×186 46.2 KBwhen i execute mongoScreenshot from 2020-10-05 12-05-401375×431 73.9 KBi can see my database and userthis is my network listen\nScreenshot from 2020-10-05 12-08-58938×113 21.8 KBthis is my last mongo log :Screenshot from 2020-10-05 12-12-341615×260 88.7 KBthis is my mongod.conf\nScreenshot from 2020-10-05 12-15-47707×769 53.7 KBmy mongodb.serviceScreenshot from 2020-10-05 12-17-23782×251 23.2 KBthank you for your help.", "username": "Quentin_Valenti" }, { "code": "mongod.servicemongodps aux | grep \"mongod\"\n", "text": "Hi @Quentin_Valenti and welcome in the MongoDB Community !First, looks like your mongod.service is failing to start because you already have another mongod running and using the port 27017 on this machine.This command above will help you find the other instance already running.Second: You MUST activate the authentication! As you are apparently setting up a prod cluster, I would recommend reading the production notes. There are a lot of “details” to consider.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @Quentin_Valenti,In addition to the Production Notes as mentioned by @MaBeuLux88, I would also like to add that for any production-facing deployment, following the Security Checklist is also strongly recommended.Best regards,\nKevin", "username": "kevinadi" } ]
Connection problem (code=exited, status=48)
2020-10-05T13:28:03.326Z
Connection problem (code=exited, status=48)
3,550
null
[]
[ { "code": "", "text": "I am a complete newb & trying to learn MongoDB. I found some Atlas promo credit & applied those into my account. As learning step I deleted organization and recreated a new organization, and noticed all those promo credits were deleted too. Is there anyway to get those back?Thanks!", "username": "Ibrahim_Rasel" }, { "code": "", "text": "Hi @Ibrahim_Rasel and welcome in the MongoDB Community !It’s awesome that you are trying to learn more about MongoDB Atlas. Don’t hesitate to ask more questions in the forum if you need some directions.Have you tried re-applying the same codes again in the new organisation? That being said, MongoDB Atlas as a generous free tier for all the tools:Feel free to test & learn more about these. There is plenty you can do with the free tier already.Check out the blog posts in the DevHub. Lots of great MongoDB tutorials in here.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hello @MaBeuLux88, thanks for your replay. I will surely ask in the forum if I need any help.I just re-applied some code, and it seems some of them are working. Unfortunately I don’t remember which codes I had used before. But it’s alright.Thanks for your help. Really appreciate it.", "username": "Ibrahim_Rasel" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Deleted organization with applied credit
2020-10-05T13:27:28.664Z
Deleted organization with applied credit
1,561
null
[ "atlas-device-sync" ]
[ { "code": "let marker = Marker(value: [\n \"_partition\" : RealmConstants.commonRealmPartitionKey,\n \"lat\" : realLat,\n \"lng\" : realLng,\n \"sakamichi\" : rowValues[\"sakamichi\"] as! String,\n \"category\" : rowValues[\"category\"] as! String,\n \"image\" : imageUrl,\n \"postedBy\" : app.currentUser()!.id!\n ]\n)\nmarkerBody.relatedMarker = marker\n\nvar config = app.currentUser()?.configuration(partitionValue: RealmConstants.commonRealmPartitionKey)\nconfig?.objectTypes = [Marker.self, MarkerBody.self, MemberTag.self]\nlet realm = try! Realm(configuration: config!)\n\ntry realm.write({ () -> Void in\n realm.add(markerBody)\n realm.add(marker)\n print(\"Marker Is Saved\")\n})\nError:\n\n\nfailed to validate upload changesets: SET instruction had incorrect partition value for key \"_partition\" (ProtocolErrorCode=212)\n\nPartition:\n\n\nCommonRealm\n\nWrite Summary:\n\n{\n \"Marker\": {\n \"inserted\": [\n \"5f79be08e26c1ba4d8d3c808\"\n ]\n },\n \"MarkerBody\": {\n \"inserted\": [\n \"5f79be08e26c1ba4d8d3c807\"\n ]\n }\n}\nSDK:\n\nRealm Cocoa v10.0.0-beta.5\n\nPlatform Version:\n\nVersion 13.7 (Build 17H22)\n", "text": "I got BadChangeset Error while creating a marker object.↓Error logI think partition value is correct, why did this error occur?", "username": "Shi_Miya" }, { "code": "app.currentUser()?guard let user = thisApp.currentUser() else {\n print(\"no user object, please log in\")\n return\n}\nconfig?.objectTypes = [Marker.self, MarkerBody.self, MemberTag.self]MemberTag.self", "text": "Shooting in the dark here but there could be a few causes. First though, markerBody object is undefined in your question so it’s possible that could be the cause.Also, if you’re opening Realm for the first time, you need to do it with .asyncOpen to get the server and your local realm aligned. See Open a Synced RealmAlso force unwrapping optionals is not advisedapp.currentUser()?protect your code by handling it in case it’s nilAlso, you’re specifying three object types and the last one doesn’t appear to be used and is undefined in the question so perhaps it doesn’t have a correct _partitionconfig?.objectTypes = [Marker.self, MarkerBody.self, MemberTag.self]and since the error log is indicating successful writes of Marker and MarkerBody, I am going with the last one MemberTag.self being funky.and… it could just be a random server error as we’ve seen those as well.", "username": "Jay" }, { "code": "", "text": "Thanks a lot, Jay. I checked all of causes you pointed out, but same error still occurred.\nNext, I created another xcode project and build simplified application, then BadChangeset error while writing didn’t occur.\nSo, I uninstalled and reinstalled Realm, RealmSwift and application itself on a Simulator, then I finally added object successfully from main application.", "username": "Shi_Miya" }, { "code": "", "text": "@Shi_Miya This is likely due to the local realm state which is stored on the simulator. If at some previous time you had opened the realm with a different partitionKey value then you would likely get this error. Clearing the simulator by uninstalling the app or wiping the simulator is the best way to clear this.", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I got BadChangeset Error
2020-10-04T12:49:05.103Z
I got BadChangeset Error
2,658
null
[]
[ { "code": "", "text": "From eCustoms We synchronize global trade.Personally I work with MongoDb since 2011 and it is nice to see how Mongo has grow. Just Joined Startup program for a new project.Hello from Madrid, Spain", "username": "Marcos_Icardo" }, { "code": "", "text": "Bienvenidos, @Marcos_Icardo! The community will benefit from your experience, I’m sure, & congrats on joining the startups program!", "username": "Jamie" }, { "code": "", "text": "Welcome @Marcos_Icardo!Man, you’ve been around using mongo from the beginning!You must really hate transforming JSON to and from tables. ", "username": "James_Lynch" }, { "code": "", "text": "It was a pain in the past, but not it is much straight forward", "username": "Marcos_Icardo" } ]
Hi Just Arrived. Hello from eCustoms Team
2020-07-28T19:50:15.957Z
Hi Just Arrived. Hello from eCustoms Team
2,648
null
[ "queries" ]
[ { "code": "[\n {\n \"_id\": \"5f7a1b6a3aedab1df33574bb\",\n \"name\": \"myFirstGame\",\n \"players\": [\n {\n \"_id\": \"5f7a1be79acaf41e058ccd9f\",\n \"name\": \"Albert\",\n \"points\": 10\n },\n {\n \"_id\": \"5f7a20de87f0fc1ea309145c\",\n \"name\": \"Adam\",\n \"points\": 10\n },\n {\n \"_id\": \"5f7a25fdcd01211fc84afd0e\",\n \"name\": \"John\",\n \"points\": 10\n }\n ],\n \"description\": \"This is a fun game to watch\",\n \"__v\": 0\n }\n]\ntry{\n const cond = {\n _id: '5f7a1b6a3aedab1df33574bb', //the id of the game\n 'players.name': { $eq: 'John' }, // the condition on the player\n }\n player = await Game.find(cond, 'players');\n return player;\n}catch (e) {\n console.error(e);\n}\n {\n \"_id\": \"5f7a25fdcd01211fc84afd0e\",\n \"name\": \"John\",\n \"points\": 10\n }\nSELECT p.name, p.points FROM players as p\nwhere p.game_id = '5f7a1b6a3aedab1df33574bb' AND p.name = 'John'\n", "text": "Hi everyone \nI’m new to mongo, I’m currently building my very first API with this new technologyI have a collection that looks like thisAnd I’m trying to get the player which name is JOHNWhat I’m expecting to get is an object like thisBut instead I’m getting the full collection with no filter appliedIs there anyway to get an object inside a collection array?In SQL it would be something like thise", "username": "Julian_Mendez" }, { "code": "db.test.find( { _id: \"5f....\", \"players.name\": \"Adam\" }, { \"players.$\": 1 } )", "text": "Hello @Julian_Mendez, welcome to the community!In MongoDB there are various operators to work with arrays. In your case you want to project an array element for a matching condition, with the positional $ operator.This projects the first element in an array that matches the query condition.The query:db.test.find( { _id: \"5f....\", \"players.name\": \"Adam\" }, { \"players.$\": 1 } )", "username": "Prasad_Saya" }, { "code": "[\n {\n \"_id\": \"5f7a1b6a3aedab1df33574bb\",\n \"players\": [\n {\n \"_id\": \"5f7a1be79acaf41e058ccd9f\",\n \"name\": \"Adam\",\n \"points\": 101\n }\n ]\n }\n]\n {\n \"_id\": \"5f7a1be79acaf41e058ccd9f\",\n \"name\": \"Adam\",\n \"points\": 101\n }\n_id", "text": "Thank you @Prasad_Saya,\nThis is more or less what I was looking for…\nNow, I have another question…Is no other way to get instead of a Collection/Array, to get the plain object?I mean, I did what you told me and I get this…And i thought the result would be like thisI’m asking this because I’m looking for the players, not the whole game, I already “filter” the game when I’m using the _id of a game", "username": "Julian_Mendez" }, { "code": "{ \"players.$\": 1, _id: 0 }find", "text": "You need to use projection to remove the fields (or keep the fields you want) in the output. for example, { \"players.$\": 1, _id: 0 }.The output is always a document. 
You can control what fields you want in the output with a find query.", "username": "Prasad_Saya" }, { "code": "db.game.find(\n {\"players.name\": \"John\"},\n {\"players.$\": 1, _id:0}\n).next().players[0]\n{ \"_id\" : \"5f7a25fdcd01211fc84afd0e\", \"name\" : \"John\", \"points\" : 10 }\ndb.game.aggregate([\n {\n '$unwind': {\n 'path': '$players'\n }\n }, {\n '$match': {\n 'players.name': 'John'\n }\n }, {\n '$project': {\n '_id': '$players._id', \n 'name': '$players.name', \n 'points': '$players.points'\n }\n }\n])\n", "text": "Hi @Julian_Mendez,I tested the following query in Mongo Shell:And I got the following result:NB: next() iterates over the cursor (returned by find()) and returns the next value in the cursor.This (overkill) aggregation pipeline would also do the job and offer a little more flexibility:The main difference between the 2 here: the aggregation pipeline will return 2 Johns if you had 2 “John” in the same game while the first find() query will only return the first “John” found.Enjoy,\nMaxime.", "username": "MaBeuLux88" } ]
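For completeness, one more option that sits between the two approaches above is the $filter array operator, which keeps every matching player without unwinding the array. A sketch against the same game collection:

    db.game.aggregate([
      { $match: { _id: "5f7a1b6a3aedab1df33574bb" } },
      {
        $project: {
          _id: 0,
          players: {
            $filter: {
              input: "$players",
              as: "p",
              cond: { $eq: ["$$p.name", "John"] }  // keep every player named John
            }
          }
        }
      }
    ])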
Find inside an array in a collection
2020-10-05T01:10:08.585Z
Find inside an array in a collection
8,796
null
[ "data-modeling" ]
[ { "code": "\"User\": {\n \"_id\": \"<ObjectId>\",\n \"firstname\": \"<String>\",\n \"birthdate\": \"<ISODate>\",\n \"inscriptionDate\": \"<ISODate>\",\n \"lastConnectionDate\": \"<ISODate>\",\n \"totalFriends\": \"<Int>\",\n \"totalMessageSent\": \"<Int>\"\n}\ntotalFriendsdb.users.update(\n { _id: 1 },\n { $inc:{ totalFriends: 1}}\n)\ntotalMessageSent", "text": "Hello,I’ve read this article about computed pattern however it is very theoretical I’ve got a hard time to understand how to implement it. Let’s say I have a collection as the one below:For this example let’s exclude the approximation pattern.\ntotalFriends will be updated every time one user accept or remove a friend. How should I update this field ? Should I do a simpleFor example totalMessageSent will be updated every 10 messages sent. How should I update this field ?", "username": "Yoni_Obia" }, { "code": "{ $inc: { totalMessagesSent: -10 } }", "text": "Hi, @Yoni_Obia - welcome to the community!I’m curious how you ended up implementing this. Did you find an implementation that worked for you? Did you discover anything that didn’t work so well?For TotalFriends, doing the increment every time a friend is added probably makes sense. Another option would be to calculate the number of friends periodically. My guess is that adding a friend doesn’t happen super frequently, so the $inc approach will likely be the better approach.For the totalMessagesSent, you could do something like:\n{ $inc: { totalMessagesSent: -10 } }", "username": "Lauren_Schaefer" } ]
Real use of Computed pattern and Approximation pattern
2020-04-07T17:36:06.677Z
Real use of Computed pattern and Approximation pattern
1,505
null
[]
[ { "code": "", "text": "I use the web IDE and created a database call “sampledb” in the Sandbox cluster.\nThis is what I typed in the terminal:\nmongo “mongodb+srv://sandbox.gdmba.mongodb.net/sampledb” --username m001-student -password m001-mongodb-basicsFollowing are the error message:\n<me m001-student -password m001-mongodb-basics\nMongoDB shell version v4.0.5\nconnecting to: mongodb://sandbox-shard-00-02.gdmba.mongodb.net.:27017,sandbox-shard-00-00.gdmba.mongodb.net.:27017,sandbox-shard-00-01.gdmba.mongodb.net.:27017/sample_training?authSource=admin&gssapiServiceName=mongodb&replicaSet=atlas-sn90qq-shard-0&ssl=true\n2020-10-03T01:24:57.969+0000 I NETWORK [js] Starting new replica set monitor for atlas-sn90qq-shard-0/sandbox-shard-00-02.gdmba.mongodb.net.:27017,sandbox-shard-00-00.gdmba.mongodb.net.:27017,sandbox-shard-00-01.gdmba.mongodb.net.:27017\n2020-10-03T01:24:58.112+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:24:58.112+0000 I NETWORK [js] Cannot reach any nodes for set atlas-sn90qq-shard-0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.\n2020-10-03T01:24:58.652+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:24:58.652+0000 I NETWORK [js] Cannot reach any nodes for set atlas-sn90qq-shard-0. Please check network connectivity and the status of the set. This has happened for 2 checks in a row.\n2020-10-03T01:24:59.192+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:24:59.192+0000 I NETWORK [js] Cannot reach any nodes for set atlas-sn90qq-shard-0. Please check network connectivity and the status of the set. This has happened for 3 checks in a row.\n2020-10-03T01:24:59.732+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:24:59.732+0000 I NETWORK [js] Cannot reach any nodes for set atlas-sn90qq-shard-0. Please check network connectivity and the status of the set. This has happened for 4 checks in a row.\n2020-10-03T01:25:00.278+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:00.278+0000 I NETWORK [js] Cannot reach any nodes for set atlas-sn90qq-shard-0. Please check network connectivity and the status of the set. This has happened for 5 checks in a row.\n2020-10-03T01:25:00.818+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:00.819+0000 I NETWORK [js] Cannot reach any nodes for set atlas-sn90qq-shard-0. Please check network connectivity and the status of the set. This has happened for 6 checks in a row.\n2020-10-03T01:25:01.359+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:01.359+0000 I NETWORK [js] Cannot reach any nodes for set atlas-sn90qq-shard-0. Please check network connectivity and the status of the set. This has happened for 7 checks in a row.\n2020-10-03T01:25:01.901+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:01.901+0000 I NETWORK [js] Cannot reach any nodes for set atlas-sn90qq-shard-0. Please check network connectivity and the status of the set. This has happened for 8 checks in a row.\n2020-10-03T01:25:02.441+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:02.441+0000 I NETWORK [js] Cannot reach any nodes for set atlas-sn90qq-shard-0. Please check network connectivity and the status of the set. 
This has happened for 9 checks in a row.\n2020-10-03T01:25:02.981+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:02.981+0000 I NETWORK [js] Cannot reach any nodes for set atlas-sn90qq-shard-0. Please check network connectivity and the status of the set. This has happened for 10 checks in a row.\n2020-10-03T01:25:03.521+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:03.521+0000 I NETWORK [js] Cannot reach any nodes for set atlas-sn90qq-shard-0. Please check network connectivity and the status of the set. This has happened for 11 checks in a row.\n2020-10-03T01:25:04.061+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:04.604+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:05.145+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:05.684+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:06.224+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:06.766+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:07.306+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:07.846+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:08.386+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:08.926+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:08.926+0000 I NETWORK [js] Cannot reach any nodes for set atlas-sn90qq-shard-0. Please check network connectivity and the status of the set. This has happened for 21 checks in a row.\n2020-10-03T01:25:09.471+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:10.011+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:10.551+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:11.091+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:11.632+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:12.172+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:12.712+0000 W NETWORK [js] Unable to reach primary for set atlas-sn90qq-shard-0\n2020-10-03T01:25:12.712+0000 E QUERY [js] Error: connect failed to replica set atlas-sn90qq-shard-0/sandbox-shard-00-02.gdmba.mongodb.net.:27017,sandbox-shard-00-00.gdmba.mongodb.net.:27017,sandbox-shard-00-01.gdmba.mongodb.net.:27017 :\nconnect@src/mongo/shell/mongo.js:328:13\n@(connect):1:6\nexception: connect failed", "username": "Fan_Chen" }, { "code": "--password-password", "text": "Hi @Fan_Chen,Did you create network access to your cluster? There is a step in the instructions that asks you to hit “Allow access from anywhere” in the network access section of your Atlas Cluster.Additionally it looks like you’re missing a - before password. It is supposed to be --password instead of -password.", "username": "Yulia_Genkina" }, { "code": "", "text": "Thanks Yulia, the password parameter is actually a typo when I did above but not in my web terminal. 
The “Allow access from anywhere actually” solved my issue, I was using my own ip in the previous exercise.\nNot sure why that is the issue, unless the terminal is not an actually online IDE that hosted in the cloud server, but uses our local brower as a client to connect to the database similar to the MongoDB compass? And because my ip is dynamics it throws the error when I only enable my connection by whitelist my ip address?", "username": "Fan_Chen" }, { "code": "", "text": "I was using my own ip in the previous exercise.This is not necessarily the IP address that the cluster is seeing. If your Internet provider, uses NAT on your router, it is the address of the router. If you are using a VPN is the public address at the exit point of the VPN. The following will give you your real public IPSee the IP address assigned to your device. Show my IP city, state, and country. What Is An IP Address? IPv4, IPv6, public IP explained.\nEst. reading time: 1 minute\n", "username": "steevej" }, { "code": "", "text": "", "username": "system" } ]
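Following up on the NAT/VPN point above: when whitelisting a single address, it has to be the public IP the Atlas cluster actually sees, not the machine's local one. A quick sketch for checking it from the same environment that runs the shell (ifconfig.me is one of several such lookup services):

    # Prints your public IP address as seen from outside your network
    curl ifconfig.me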
Unable to connect to database for Lab Connect to Your Atlas Cluster
2020-10-03T01:29:15.291Z
Unable to connect to database for Lab Connect to Your Atlas Cluster
4,410
null
[ "sharding" ]
[ { "code": "", "text": "Hello,I have added 2 new shards to existing cluster of three node replica set.\nChunks balancing is good, but I can see there is no disk size reduced on existing shards while other new shards are getting data.– NO chunk archive is enabled\n– Any option other than repair database, why data is not reducing automatically.", "username": "Aayushi_Mangal" }, { "code": "", "text": "Hi @Aayushi_Mangal,The Wired Tiger storage does not release space when chunks are moved and deleted from old shards as deletion only mark blocks for reuse.I suspect that you haven’t compacted or resynced the nodes with the unreleased space and therefore it is still shown as occupied…If you wish to release space to the os consider doing a rolling resync on those shards.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you @Pavel_Duchovny for explanation.We have around 1TB of data, looks like resync is not good option for us, it will take a lot of time. De-frag looks like good option here. Let me know if you have any other suggestions.", "username": "Aayushi_Mangal" }, { "code": "", "text": "Hi @Aayushi_Mangal,Since the blocks are marked for reuse you can leave it as is… New data will use this space… You can monitor the behaviour and consider if you still need a defrag otherwise just let it run and add storage when needed.Best\nPavel", "username": "Pavel_Duchovny" } ]
Adding shard to existing cluster does not reduce disk size on existing shards
2020-10-03T03:51:51.487Z
Adding shard to existing cluster does not reduce disk size on existing shards
2,332
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Hi - I’m sorry, this is going to be a real newbie question but I’m hoping someone can help me improve my understanding!I have a number of 3rd party services / functions in Mongo Realm that are consumed by a React application. I would like to add Google Auth / Login to the React app so that I can create some restricted routes.I have created my Google project / client ID / secret etc and have used react-google-login - npm to add login/logout buttons. When I click login I receive the usual Google authentication challenge and I can then console.log the details of the user who signed in.However, I’m confused as to whether I now need to send / persist data from this login transaction to Realm. I have been through the Google authentication setup doc at https://docs.mongodb.com/realm/authentication/google/ (although for some reason it didn’t like any Redirect URI or domain restriction I put in there, saying they were invalid).Do I now need to create functions in Realm using this https://docs.mongodb.com/realm/authentication/google/ code and send the token from the React app to Realm?Any guidance / suggested reading much appreciated!", "username": "Stuart_Brown" }, { "code": "", "text": "Hi @Stuart_Brown,Once you created the Realm side configuration with the correct Google and expected application urls you should trigger a Google authentication flow from your respectful used realm sdk when the login button is fired.I assume you use the react native sdks:Integrate this code example to complete the authentication and have your sdk objects on the correct user context.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "'gapi' is not defined", "text": "@Pavel_Duchovny many thanks for the reply :). I’m not using React Native - just standard React for a web app. As such I am using the code suggested at https://docs.mongodb.com/realm/web/authenticate/#google-oauth. I am using react-google-login - npm which works fine independently of Realm.I have set up Google Oauth with Realm details as per https://docs.mongodb.com/realm/authentication/google/#generate-oauth-client-credentials and also set localhost for allowed domains and redirect address.When I start the app however I get 'gapi' is not defined which I guess stems from the Promise at https://docs.mongodb.com/realm/web/authenticate/#google-oauth", "username": "Stuart_Brown" }, { "code": "", "text": "@Stuart_Brown,As far as I understand the Google provider through realm is designed to work with one of realm supported sdks.Perhaps you should use the node js sdk:I don’t see how third party sdk can be used with realm…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "There are some detials you should be aware of when using a third-party library in the Google OAuth flow: First off, it’s not the only way on web. Second you should ensure that the library you’re using is not consuming the Google “auth code”, since that is a secret which can only be used once. If your client side app uses it, it cannot be used by the MongoDB Realm server. 
Third, we’re currently experiencing a breaking change in Google’s OAuth service that means Google authentication using auth codes are temporarily blocked from working properly.I would suggest that you use the redirect URL variant of the Google credentials provided by Realm Web for now.I’ve written a bit about this, including a piece of JS code to complete Google authentication via a redirect URL on this issue: Error exchanging access code with OAuth2 provider (Google login, web SDK) · Issue #3249 · realm/realm-js · GitHub", "username": "kraenhansen" }, { "code": "", "text": "Many thanks @kraenhansen. I think I may just wait until that issue is resolved For my info, for a React (not React native) should I be using https://docs.mongodb.com/realm/web/authenticate/#google-oauth or https://docs.mongodb.com/realm/node/authenticate/#google-oauth ?", "username": "Stuart_Brown" }, { "code": "", "text": "For a web app on React you should go with Realm Web (https://docs.mongodb.com/realm/web/authenticate/#google-oauth) and be aware that this SDK does not have a Realm Sync client, so if you’re planning on accessing data, that would be through the MongoDB service or Functions.", "username": "kraenhansen" }, { "code": "", "text": "Great thanks. Have you seen any newbie tutorials around this use case (or a repo that demonstrates it)? If not when I go through it I’ll try and put something together", "username": "Stuart_Brown" }, { "code": "", "text": "I haven’t personally seen any tutorials around this, but I imagine our developer relations team is working on this as we speak. I would definitely encourage you to share your progress and findings - not only because it can help other developers new to our platform, but also because it helps us see and understand our SDKs and documentation from a new set of eyes.", "username": "kraenhansen" } ]
Google Auth Confusion
2020-10-03T20:27:03.639Z
Google Auth Confusion
3,991
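Editor's note: for readers landing on the thread above, the redirect-URL variant that kraenhansen recommends looks roughly like the sketch below with the Realm Web SDK. This is a minimal sketch, not the full flow: the app id and redirect URL are placeholders, and the redirect URL must also be whitelisted in the Realm UI.

```js
import * as Realm from "realm-web";

const app = new Realm.App({ id: "<your-realm-app-id>" }); // placeholder app id

// On the login page/button: opens Google's consent screen in a popup window.
async function loginWithGoogle() {
  const credentials = Realm.Credentials.google({
    redirectUrl: "http://localhost:3000/auth-callback", // must match an allowed redirect URL
  });
  return app.logIn(credentials); // resolves with the authenticated user
}

// On the page served at /auth-callback: completes the flow and
// messages the result back to the window that opened the popup.
Realm.handleAuthRedirect();
```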
null
[ "aggregation", "python" ]
[ { "code": "[\n {\n \"measure\":123\n \"dim_1\":{\n \"dim1.col1\": \"xyz\"\n },\n \"dim_2\":{\n \"dim2.col1\": \"abc\"\n } \n },\n {\n \"measure\":234\n \"dim_1\":{\n \"dim1.col1\": \"yyz\"\n },\n \"dim_2\":{\n \"dim2.col1\": \"def\"\n } \n }\n]\n", "text": "Hello,I have set of raw MongoDB collections and now for reporting purposes, need to create set of aggregated collections which will have several computed/calculated values (measures), grouping (dimensions) i.e. a computed pattern with structure similar to star schema style:Please suggest if using MongoDB Aggregation Framework is the right choice here or using Pandas library for all the transformations and computations and then simply load/insert into MongoDB collection using PyMongo ? Which is the most efficient method ?Appreciate your inputs.Thanks!", "username": "ManiK" }, { "code": "", "text": "Using Pandas will require you to copy all the data into the client and process it locally. For a large dataset this can be a significant overhead. With the aggregation framework the processing is done inside the database cluster with no network transfer or local storage overheads. You can also easily streams your results into another collection using $out.", "username": "Joe_Drumgoole" }, { "code": "", "text": "Thanks, Aggregation framework also seems to have certain result + memory size limitations. However, is it correct that when allowDiskUse is set to True, it would take care of these restrictions implicitly ? And since we would be pushing down the large processing overhead onto the MongoDB server side, what is the minimum DB capacity w.r.t RAM, storage etc preferred?", "username": "ManiK" }, { "code": "", "text": "Hard to predict required memory without know the dataset size, the queries and doing a full working set calculation. If you are using Atlas it is relatively easy to try out different configurations and see what memory configuration works.", "username": "Joe_Drumgoole" } ]
MongoDB Aggregation Framework versus Pandas using PyMongo?
2020-10-01T15:45:29.058Z
MongoDB Aggregation Framework versus Pandas using PyMongo?
3,581
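Editor's note: the server-side approach Joe describes above would look something like this in the shell. Collection and output names are illustrative, matching the question's star-schema sketch:

```js
// Group raw documents by two dimensions, pre-compute a measure,
// then materialize the result server-side with $out (no client round-trip).
db.raw_collection.aggregate(
  [
    { $group: {
        _id: { d1: "$dim_1.col1", d2: "$dim_2.col1" }, // grouping (dimensions)
        totalMeasure: { $sum: "$measure" },            // computed value (measure)
        docCount: { $sum: 1 }
    } },
    { $out: "reporting_summary" } // write results into a new collection
  ],
  { allowDiskUse: true } // lets blocking stages spill to disk beyond the 100MB stage limit
);
```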
null
[ "java" ]
[ { "code": "{\n totalWins: 12,\n seasons: {\n 1: {\n wins: 10\n },\n 2: {\n wins: 2\n }\n }\n}\nwinsseasonstotalWinsseasonstotalWins", "text": "Hi,My collection has objects that are like this:I want to sum the wins from the seasons objects and put that sum into the field totalWins. However, it is possible that the seasons field might not exist, or it has no objects in itself, in which case totalWins should be 0. How would I aggregate this sum while handling these kind of special cases with efficient queries?", "username": "Diamond_Block" }, { "code": "\n[\n {\n \"afield\": \"\",\n \"seasons\": [\n {\n \"season\": 1,\n \"wins\": 10\n },\n {\n \"season\": 2,\n \"wins\": 2\n }\n ]\n },\n {\n \"afield\": \"\",\n \"seasons\": [\n {\n \"season\": 1,\n \"wins\": 4\n },\n {\n \"season\": 2,\n \"wins\": 5\n }\n ]\n },\n {\n \"afield\": \"\"\n }\n]\n\n{\n \"update\": \"testcoll\",\n \"updates\": [\n {\n \"q\": {},\n \"u\": [\n {\n \"$addFields\": {\n \"totalwins\": {\n \"$cond\": [\n {\n \"$ne\": [\n {\n \"$type\": \"$seasons\"\n },\n \"missing\"\n ]\n },\n {\n \"$reduce\": {\n \"input\": \"$seasons\",\n \"initialValue\": 0,\n \"in\": {\n \"$add\": [\n \"$$value\",\n \"$$this.wins\"\n ]\n }\n }\n },\n 0\n ]\n }\n }\n }\n ],\n \"multi\": true\n }\n ]\n}\n[\n {\n \"afield\": \"\",\n \"seasons\": [\n {\n \"season\": 1,\n \"wins\": 10\n },\n {\n \"season\": 2,\n \"wins\": 2\n }\n ],\n \"totalwins\": 12\n },\n {\n \"afield\": \"\",\n \"seasons\": [\n {\n \"season\": 1,\n \"wins\": 4\n },\n {\n \"season\": 2,\n \"wins\": 5\n }\n ],\n \"totalwins\": 9\n },\n {\n \"afield\": \"\",\n \"totalwins\": 0\n }\n ]\n", "text": "Hello : )Storing unknown keys in documents,its not good idea i think,because we will have to find\nthose keys to sum them.With arrays we don’t find them,and mongo offers so many array operations.Why not store an array,if all seasons exist 1,2,3 … you dont even need the field season.\nThe index will be the key.If there isn’t any important reason to save the seasons inside a document,you can use\nthis code,i used arrays.If you need documents for some reason it can work also,but more\ncomplicated and slow i think.Data inserts (3 documents,last one dont have a season,works also if seasons is empty array)UpdateQuery just to see the updated and gotHope it helps ,its fast ,and simple code,if not important reason for document use i think its\nfine.", "username": "Takis" }, { "code": "$cond", "text": "$condI am having trouble implementing this operator with the MongoDB Java driver. Is there a workaround?", "username": "Diamond_Block" }, { "code": "", "text": "Hello : )This is update using a pipeline,it requires MongoDB 4.2 >= , and a java driver that allows the\nuse of the pipeline to update function.Check the driver i guess it will support it,at least the latest driver.I think this from the 4.1 java driver,will work,the list will be that pipeline\n(the “u” part,and the filter the “q” part that here is empty)\nupdateMany​(Bson filter, List<? 
extends Bson> update, UpdateOptions updateOptions)In worst case you can run the command from the driver like run.Command(…)\nOr use the update operators,and do the update without use of pipeline.", "username": "Takis" }, { "code": "db.runCommanddb.collection.update db.test.update({}, [\n {\n \"$addFields\": {\n \"totalwins\": {\n \"$cond\": [\n {\n \"$ne\": [\n {\n \"$type\": \"$seasons\"\n },\n \"missing\"\n ]\n },\n {\n \"$reduce\": {\n \"input\": \"$seasons\",\n \"initialValue\": 0,\n \"in\": {\n \"$add\": [\n \"$$value\",\n \"$$this.wins\"\n ]\n }\n }\n },\n 0\n ]\n }\n }\n }\n ],\n {\"multi\": true})\ntest[{\n \"_id\": {\n \"$oid\": \"5f765b65b9e3847a36ff10cb\"\n },\n \"name\": \"Xe\",\n \"seasons\": [\n {\n \"season\": 0,\n \"kills\": 1502,\n \"wins\": 100\n },\n {\n \"season\": -1,\n \"kills\": 1,\n \"wins\": 10\n }\n ]\n},{\n \"_id\": {\n \"$oid\": \"5f765b66b9e3847a36ff10cc\"\n },\n \"name\": \"Rec\",\n \"seasons\": [\n {\n \"season\": 0,\n \"kills\": 1502,\n \"wins\": 90\n }\n ]\n}]\n", "text": "By the way your code doesn’t work. The new field is not added to the documents. I tried your code with db.runCommand and also with db.collection.update:My documents in the test collection:", "username": "Diamond_Block" }, { "code": "[\n {\n \"_id\": {\n \"$oid\": \"5f765b65b9e3847a36ff10cb\"\n },\n \"name\": \"Xe\",\n \"seasons\": [\n {\n \"season\": 0,\n \"kills\": 1502,\n \"wins\": 100\n },\n {\n \"season\": -1,\n \"kills\": 1,\n \"wins\": 10\n }\n ]\n },\n {\n \"_id\": {\n \"$oid\": \"5f765b66b9e3847a36ff10cc\"\n },\n \"name\": \"Rec\",\n \"seasons\": [\n {\n \"season\": 0,\n \"kills\": 1502,\n \"wins\": 90\n }\n ]\n }\n]\n[\n {\n \"_id\": {\n \"oid\": \"5f765b65b9e3847a36ff10cb\"\n },\n \"name\": \"Xe\",\n \"seasons\": [\n {\n \"season\": 0,\n \"kills\": 1502,\n \"wins\": 100\n },\n {\n \"season\": -1,\n \"kills\": 1,\n \"wins\": 10\n }\n ],\n \"totalwins\": 110\n },\n {\n \"_id\": {\n \"oid\": \"5f765b66b9e3847a36ff10cc\"\n },\n \"name\": \"Rec\",\n \"seasons\": [\n {\n \"season\": 0,\n \"kills\": 1502,\n \"wins\": 90\n }\n ],\n \"totalwins\": 90\n }\n]\n{\n \"aggregate\": \"testcoll\",\n \"pipeline\": [\n {\n \"$group\": {\n \"_id\": null,\n \"globalTotalWins\": {\n \"$sum\": \"$totalwins\"\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0\n }\n }\n ],\n \"maxTimeMS\": 300000,\n \"cursor\": {}\n}\n{\"globalTotalWins\" 200}\n", "text": "Hello : )I tested the code that i sended and i gave the documents in,the query,and the result.\nI tested it again with your data.BecameIf the question was how to sum all the totalwins,maybe you want this.\ntotalwins = sum of each season array\nglobaltotalwins = sum of all seasons in all seasons arraysAssumes that totalwins field exist,or you add it with the first update.If still this is not what you want,if you can give data in,and how the data should become.\nand we will find the query : )", "username": "Takis" }, { "code": "db.runCommand({...})", "text": "I want “totalwins = sum of each season array”. As I mentioned before the query you gave me before doesn’t work. I ran the command in mongo shell with db.runCommand({...}) and the new field was not added. I gave you the “data in” and “how the data should become”, but the query doesn’t work.", "username": "Diamond_Block" }, { "code": "> use testdb\nswitched to db testdb\n> db.testcoll.drop()\ntrue\n> db.testcoll.insert([\n... {\n... \"_id\": {\n... \"oid\": \"5f765b65b9e3847a36ff10cb\"\n... },\n... \"name\": \"Xe\",\n... \"seasons\": [\n... {\n... \"season\": 0,\n... \"kills\": 1502,\n... 
\"wins\": 100\n... },\n... {\n... \"season\": -1,\n... \"kills\": 1,\n... \"wins\": 10\n... }\n... ]\n... },\n... {\n... \"_id\": {\n... \"oid\": \"5f765b66b9e3847a36ff10cc\"\n... },\n... \"name\": \"Rec\",\n... \"seasons\": [\n... {\n... \"season\": 0,\n... \"kills\": 1502,\n... \"wins\": 90\n... }\n... ]\n... }\n... ])\nBulkWriteResult({\n\t\"writeErrors\" : [ ],\n\t\"writeConcernErrors\" : [ ],\n\t\"nInserted\" : 2,\n\t\"nUpserted\" : 0,\n\t\"nMatched\" : 0,\n\t\"nModified\" : 0,\n\t\"nRemoved\" : 0,\n\t\"upserted\" : [ ]\n})\n> db.runCommand({\n... \"update\": \"testcoll\",\n... \"updates\": [\n... {\n... \"q\": {},\n... \"u\": [\n... {\n... \"$addFields\": {\n... \"totalwins\": {\n... \"$cond\": [\n... {\n... \"$ne\": [\n... {\n... \"$type\": \"$seasons\"\n... },\n... \"missing\"\n... ]\n... },\n... {\n... \"$reduce\": {\n... \"input\": \"$seasons\",\n... \"initialValue\": 0,\n... \"in\": {\n... \"$add\": [\n... \"$$value\",\n... \"$$this.wins\"\n... ]\n... }\n... }\n... },\n... 0\n... ]\n... }\n... }\n... }\n... ],\n... \"multi\": true\n... }\n... ]\n... })\n{ \"n\" : 2, \"nModified\" : 2, \"ok\" : 1 }\n> db.testcoll.find().pretty()\n{\n\t\"_id\" : {\n\t\t\"oid\" : \"5f765b65b9e3847a36ff10cb\"\n\t},\n\t\"name\" : \"Xe\",\n\t\"seasons\" : [\n\t\t{\n\t\t\t\"season\" : 0,\n\t\t\t\"kills\" : 1502,\n\t\t\t\"wins\" : 100\n\t\t},\n\t\t{\n\t\t\t\"season\" : -1,\n\t\t\t\"kills\" : 1,\n\t\t\t\"wins\" : 10\n\t\t}\n\t],\n\t\"totalwins\" : 110\n}\n{\n\t\"_id\" : {\n\t\t\"oid\" : \"5f765b66b9e3847a36ff10cc\"\n\t},\n\t\"name\" : \"Rec\",\n\t\"seasons\" : [\n\t\t{\n\t\t\t\"season\" : 0,\n\t\t\t\"kills\" : 1502,\n\t\t\t\"wins\" : 90\n\t\t}\n\t],\n\t\"totalwins\" : 90\n}\n", "text": "HelloI runned the code,in my driver,now i tested in shell also,its your data,just removed the dollar from “$oid” and made it “oid”.Dollar as name field even if not error,its suggested to not be used.You gave the data in,but you didn’t give the data out,to be sure that this is what you want.\nThe code works,run it on your shell,but needs mongodb>=4.2 .", "username": "Takis" }, { "code": "totalWinsseasonsseasonsseasons$objectToArraywinstotalWinsdb.test.aggregate([\n { \n $project: { \n seasons: { \"$objectToArray\": \"$seasons\" } \n } \n },\n { \n $project: { \n totalWins: { \n $reduce: { \n input: { $ifNull: [ \"$seasons\", [] ] }, \n initialValue: 0, \n in: { $add: [ \"$$value\", \"$$this.v.wins\" ] }\n }\n }\n }\n }\n])\n", "text": "Hello @Diamond_Block,I tried this aggregation and it returns the totalWins as you expect, in different situations (when the seasons is empty or not existing or when data exists). Note the seasons field is not an array, but it is an object.The aggregation converts the seasons object into an array field using the $objectToArray operator. Then, iterate the array to reduce the wins to totalWins.Then, you can use an Aggregation Update to update the document (NOTE: this requires MongoDB v4.2 or newer).", "username": "Prasad_Saya" }, { "code": "totalWinscollection.aggregateupdateManyprojectaddFields", "text": "I tried to implement your code into Java using the Java MongoDB driver but the aggregation doesn’t seem to be adding the new field totalWins. I have tried using collection.aggregate, updateMany and passing the pipeline there, replacing the second project with addFields but none of them work so far. I am new to aggregation by the way so I am a bit lost here.I would also like to ask you the same question with Takis in terms of using objects vs arrays. 
Thanks!", "username": "Diamond_Block" }, { "code": "dbuseseasons1seasons.1.winsseasons", "text": "I was using Mongo Compass’s web shell to run your command and it didn’t work because db was pointing to my collection as well (as opposed to my database), which is why your command didn’t work when I tried it. I had to use use to explicitly specify the database. After I’ve done that, the command works as expected.However, for me to integrate your solution to my database, I would have to refactor my seasons objects into arrays. The original reason why I decided to use objects was because it was easy for me to query the nested season object’s fields. For example if I want to sort documents by season 1 wins, I would simply sort with seasons.1.wins.I want to ask is this practice good (using objects instead of arrays), or should I follow your advice and use arrays? Let’s say I want to get top 10 documents with the most wins (sorted) with seasons arrays, how would I do so?", "username": "Diamond_Block" }, { "code": "totalWinscollection.aggregateupdateManyprojectaddFields", "text": "I tried to implement your code into Java using the Java MongoDB driver but the aggregation doesn’t seem to be adding the new field totalWins . I have tried using collection.aggregate , updateMany and passing the pipeline there, replacing the second project with addFields but none of them work so far. I am new to aggregation by the way so I am a bit lost here.See this post, which uses $set (aggregation) on Java with mongo-java-driver/4.0/ . You need to use this Java driver API for update with aggregation: updateMany.", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Updating field by summing values
2020-09-19T19:55:26.978Z
Updating field by summing values
9,852
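Editor's note: combining the two answers above (Prasad_Saya's $objectToArray pipeline and the pipeline-style update), the whole thing can be done in a single updateMany on MongoDB 4.2+. A sketch, following the thread's field names:

```js
db.test.updateMany({}, [
  { $set: {
      totalWins: {
        $reduce: {
          // $objectToArray turns {"1": {wins: 10}, ...} into [{k: "1", v: {wins: 10}}, ...];
          // $ifNull covers documents where "seasons" is missing or null.
          input: { $ifNull: [ { $objectToArray: "$seasons" }, [] ] },
          initialValue: 0,
          in: { $add: [ "$$value", { $ifNull: [ "$$this.v.wins", 0 ] } ] }
        }
      }
  } }
]);
```

The same pipeline is what goes into the List<? extends Bson> argument of the Java driver's updateMany discussed above.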
null
[ "node-js" ]
[ { "code": "Const dbURI =\n \"mongodb+srv://user:[email protected]/?w=majority\";\n\nconst { MongoClient } = require(\"mongodb\");\n\nconst client = new MongoClient(dbURI, { useUnifiedTopology: true });\n\nasync function run() {\n try {\n await client.connect();\n\n const database = client.db(\"test\");\n const collection = database.collection(\"testcollection\");\n // create a document to be inserted\n const doc = { name: \"Red\", town: \"kanto\" };\n const result = await collection.insertOne(doc);\n\n console.log(\n `${result.insertedCount} documents were inserted with the _id: ${result.insertedId}`\n );\n } finally {\n await client.close();\n }\n}\nrun().catch(console.dir);\n", "text": "Hey there, I’m running this example code from this docs page. When I run the code the single insertOne call is inserting two documents into the collection. I cannot figure out why this is happening. Here’s the code:", "username": "Brian_Cross" }, { "code": "testcollection{ \"_id\" : ObjectId(\"5f78450f74c4e534304dadb0\"), \"name\" : \"Red\", \"town\" : \"kanto\" }{ name: \"Red\", town: \"kanto\" }_id_id_id{ \"_id\" : ObjectId(\"5f78459981ebc353289476f0\"), \"name\" : \"Red\", \"town\" : \"kanto\" }", "text": "Hello @Brian_Cross, welcome to the community.I ran your code once and found that the script inserted exactly one document into the testcollection:\n{ \"_id\" : ObjectId(\"5f78450f74c4e534304dadb0\"), \"name\" : \"Red\", \"town\" : \"kanto\" }It is likely, you ran the code twice and found two documents with same JSON { name: \"Red\", town: \"kanto\" }, but different _id values. Note that the _id is created automatically and is a unique value.If you run the script the second time you will see a second document, with a different _id, for example:\n{ \"_id\" : ObjectId(\"5f78459981ebc353289476f0\"), \"name\" : \"Red\", \"town\" : \"kanto\" }", "username": "Prasad_Saya" }, { "code": "", "text": "Hey @Prasad_Saya, thanks for the welcome and the reply. I don’t think I’m running the code twice but yeah that’s exactly what it looks like. I’m just running it at the command line with node and I’m only getting one console log of the document being created. Very weird. Maybe my system has a bug of some sort. I’ll try it on a different machine later. Thanks again for the reply; much appreciated!", "username": "Brian_Cross" }, { "code": "", "text": "I ran the code from command-line with NodeJS v12.18.3 and MongoDB 4.2.3.", "username": "Prasad_Saya" } ]
Node.js driver - insertOne inserts two documents
2020-10-03T06:49:03.517Z
Node.js driver - insertOne inserts two documents
2,785
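Editor's note: since the thread concluded the script had most likely been run twice, one way to make such a test script safe to re-run is an upsert keyed on the fields that should be unique. A sketch under that assumption, dropping into the same run() function:

```js
// Instead of insertOne, match on the logical identity of the document and
// only insert when no copy exists yet:
const result = await collection.updateOne(
  { name: "Red", town: "kanto" },
  { $setOnInsert: { name: "Red", town: "kanto" } },
  { upsert: true }
);
console.log(result.upsertedId ? "inserted" : "was already present");
```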
null
[ "monitoring" ]
[ { "code": "", "text": "I’m going to use “prometheus”.However, since MongoDB does not support HTTP from 3.6, we are forced to use 3.4.Disable HTTP Interface\nChanged in version 3.6: MongoDB 3.6 removes the deprecated HTTP interface and REST API to MongoDB.But I want to know how to use the latest version of MongoDB in other ways.\nWhat do other people usually use instead of HTTP?", "username": "Kim_Hakseon" }, { "code": "", "text": "What do other people usually use instead of HTTP?Any one of the optimized native drivers that use a binary wire protocol. See https://docs.mongodb.com/drivers/", "username": "steevej" }, { "code": "", "text": "You should not need http for prometheus. I do not use it myself but there is a exporter and it connects as a standard mongo client.A Prometheus exporter for MongoDB including sharding, replication and storage engines - GitHub - percona/mongodb_exporter: A Prometheus exporter for MongoDB including sharding, replication and stor...", "username": "chris" }, { "code": "", "text": "I think HTTP is the way to connect exporter and MongoDB.Although it was not connected in the latest version, version 3.2 was connected.", "username": "Kim_Hakseon" }, { "code": "MONGODB_URI--mongodb.uriHTTP_AUTHserver_userserver_password", "text": "RunningTo define your own MongoDB URL, use environment variable MONGODB_URI . If set this variable takes precedence over --mongodb.uri flag.To enable HTTP basic authentication, set environment variable HTTP_AUTH to user:password pair. >Alternatively, you can use YAML file with server_user and server_password fields.export MONGODB_URI=‘mongodb://localhost:27017’ export HTTP_AUTH=‘user:password’ ./bin/mongodb_exporter []A Prometheus exporter for MongoDB including sharding, replication and storage engines - GitHub - percona/mongodb_exporter: A Prometheus exporter for MongoDB including sharding, replication and stor...Note about how this worksPoint the process to any mongo port and it will detect if it is a mongos, replicaset member, or stand alone mongod and return the appropriate metrics for that type of node. This was done to prevent the need to an exporter per type of process.", "username": "chris" }, { "code": "", "text": "Thank you I think it’s settled.", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Monitoring with Prometheus
2020-10-02T14:52:03.724Z
Monitoring with Prometheus
10,050
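Editor's note: to close the loop on the thread above - the HTTP endpoint Prometheus scrapes belongs to the exporter, not to mongod, which is why the HTTP interface removed in 3.6 does not matter. Roughly as below; the URI, credentials and the exporter's default listen port (9216, per the exporter's README of the time) are assumptions:

```sh
# The exporter connects to MongoDB over the normal binary wire protocol...
export MONGODB_URI='mongodb://monitor_user:secret@localhost:27017'
./bin/mongodb_exporter &

# ...and serves Prometheus-format metrics over its own HTTP endpoint,
# which is what you point the Prometheus scrape job at:
curl -s http://localhost:9216/metrics | head
```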
null
[ "replication" ]
[ { "code": "", "text": "Hi,I’m looking for a bit more information on the initial sync architecture with respect to the Oplog and data changes during the data copy phase.I’m referring to this page; https://docs.mongodb.com/manual/core/replica-set-sync/If I understand this correctly, while the data copy is happening from the source to the target, any data changes picked up on the source are written to the local database on the target. Once the data copy is complete, the changes in the target local are then executed to ensure the source and target are in sync? This is the behaviour from 3.4 onwards.The Oplog is 150GB. My thinking is that if we generated 300GB of data changes, they would be synced to the local on target and then replayed once the data copy stage has completed?I was always under the impression that the data copy phase had to complete before the Oplog records were overwritten. This may have been the behaviour in 3.2 and earlier. Or it could be that I never properly understood this area!The reason for asking is that I am syncing a very large database. The replication oplog window for this replicaset can vary between 1 and 4 hours - it is a very volatile database in terms of collection’s being created and dropped very frequently.Thanks,\nClive", "username": "clivestrong" }, { "code": "", "text": "This may have been the behaviour in 3.2 and earlier. Or it could be that I never properly understood this area!That is what I remember too. 3.2 docs for reference:The reason for asking is that I am syncing a very large database. The replication oplog window for this replicaset can vary between 1 and 4 hoursMake the oplog on the new replica 450GB if you have the space. No worries!", "username": "chris" }, { "code": "", "text": "Thanks Chris.An increase in the Oplog size is something being considered while as a short term solution until we can look to add sharding into the mix - an 8.5TB shard isn’t particularly fun.Thanks for the link - I didn’t think to read up on 3.2 and confirm what I thought may be the case.Clive", "username": "clivestrong" }, { "code": "", "text": "Hi @clivestrong,I’d like to directly answer some of your questions:If I understand this correctly, while the data copy is happening from the source to the target, any data changes picked up on the source are written to the local database on the target. Once the data copy is complete, the changes in the target local are then executed to ensure the source and target are in sync? This is the behaviour from 3.4 onwards.Correct.The Oplog is 150GB. My thinking is that if we generated 300GB of data changes, they would be synced to the local on target and then replayed once the data copy stage has completed?Correct.Previously in MongoDB pre-3.4, initial sync happens in two distinct phases: data copy → oplog pull & apply. The primary would need to have enough oplog to cover the whole initial sync time. In some cases, this could be hard to approximate.Starting from 3.4, the oplog was tailed during the data copy phase, eliminating the need for the primary to have a large oplog to cover the data copy phase. However, the primary would still need to have oplog coverage during the oplog apply phase. 
This is typically a much smaller window than the data copy phase, though.Further, in the latest MongoDB 4.4, there are further improvements in replication (see the 4.4 release notes); namely, initial sync would automatically resume in the face of transient network error, streaming replication, and the ability to set the minimum oplog retention period in hours.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin,Thanks for this - I’d pretty much confirmed my thoughts whilst running through the logs / monitoring the process but appreciate your confirming this.I’ve read up on the 4.4 features and these are of value to us. We have some software on 4.2 and other software on 3.6 but will be looking at 4.4 at some point next year!Clive", "username": "clivestrong" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
More info on MongoDB initial sync architecture
2020-09-27T16:43:14.179Z
More info on MongoDB initial sync architecture
2,742
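Editor's note: Chris's suggestion above ("make the oplog on the new replica 450GB") and the 4.4 retention feature Kevin mentions can both be applied online with replSetResizeOplog. The numbers here are illustrative:

```js
// Size is given in megabytes; minRetentionHours requires MongoDB 4.4+.
db.adminCommand({ replSetResizeOplog: 1, size: 450 * 1024, minRetentionHours: 24 });

// Check the resulting replication window on the sync source:
rs.printReplicationInfo();
```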
null
[ "aggregation" ]
[ { "code": "{\n \"_id\" : ObjectId(\"5f6a81a9690a0787557f797e\"),\n \"attributes\" : {\n \"first_name\" : \"John\",\n \"last_name\" : \"Doe\",\n \"email\" : \"[email protected]\",\n },\n \"events\" : [ \n {\n \"id\" : \"unique-id-1\",\n \"event\" : \"clicked\",\n \"event_data\" : {\n \"some-data\" : 1\n }\n \"created_at\" : ISODate(\"2020-01-01T10:10:10.000Z\")\n }, \n {\n \"id\" : \"unique-id-2\",\n \"event\" : \"opened\",\n \"event_data\" : {\n \"some-data\" : 1\n }\n \"created_at\" : ISODate(\"2020-01-01T10:10:10.000Z\")\n }, \n {\n \"id\" : \"unique-id-3\",\n \"event\" : \"add_to_cart\",\n \"event_data\" : {\n \"some-data\" : 1\n },\n \"created_at\" : ISODate(\"2020-01-01T10:10:10.000Z\")\n }, \n {\n \"id\" : \"unique-id-4\",\n \"event\" : \"add_to_cart\",\n \"event_data\" : {\n \"some-data\" : 1,\n },\n \"created_at\" : ISODate(\"2020-02-01T10:10:10.000Z\")\n }, \n ]\n}\ndb.clients.aggregate([\n { $match: { \"_id\" : ObjectId(\"5f6a81a9690a0787557f797e\") }},\n { $unwind : \"$events\" },\n { $project: {\n \"_id\": \"$events.id\",\n \"event\": \"$events.event\",\n \"event_data\": \"$events.event_data\",\n \"created_at\": \"$events.created_at\"\n }},\n { $sort: { \"_id\" : -1 }}\n])\nunique-id-1unique-id-2unique-id-4unique-id-3add_to_cartopened", "text": "This is my query:And this is where I hit a roadblock. I am able to get events sorted by the created_at. However I’d like to show only last events (somehow group it by event).In my example I would need t return unique-id-1, unique-id-2, unique-id-4 (because it is created after unique-id-3).What would be the best approach in terms of performance? Also, at this point I need it to be ‘grouped’ by one event, but maybe in the future I’d like to select last 5 records of each event (last 5 add_to_cart, last 5 opened, etc.)Thank you.", "username": "jellyx" }, { "code": "events.created_date : -1", "text": "Hi @jellyx ,You should be using a $limit stage after you sort by events.created_date : -1 .For running multiple different groups in one aggregation set I would recommend using $facetBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you for your answer.Limit is definitely not an option, but facet is great. Totally forgot about it.Thanks!", "username": "jellyx" }, { "code": "", "text": "Hi @jellyx,Not sure why limit is not the way to get last x documents. Perhaps, I didn’t understand the exact outcome you want can you post the desired output?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "openedadd_to_cartopenedopenedadd_to_cartfacet", "text": "@Pavel_Duchovnyyes, I guess that’s the reason. Here is my explanation.If I have 100 events. Let’s say 95 od them are opened and 5 are add_to_cart, respectively added.Now, if I limit to show 2 events, then It means I’ll get last 2 opened events. And I want to get 1 opened, 1 add_to_cart, etc.Hope it makes sense. But, with facet works great.", "username": "jellyx" }, { "code": "db.clients.aggregate( [{\n // Match the needed doc\n $match: {\n _id: ObjectId('5f6a81a9690a0787557f797e')\n }\n}, \n// Unwind events to go over each event\n{\n $unwind: '$events'\n}, \n// Sort all events by date\n{\n $sort: {\n \"events.created_at\": -1\n }\n}, \n// Group the events by event name and push them to a new array of ordered events by type\n{\n $group: {\n _id: \"$events.event\",\n events: {\n $push: \"$$CURRENT\"\n }\n }\n}, \n// Construct the document back to original shape and slice the created arrays by \"n\" , eg. 
first 2 most recent elements\n{\n $project: {\n _id: {\n $arrayElemAt: ['$events._id', 0]\n },\n event_type: \"$_id\",\n attributes: {\n $arrayElemAt: ['$events.attributes', 0]\n },\n events: {\n $slice: ['$events', 0, 2]\n }\n\n }\n},\n// reconstruct again to have the original view with events array (could use $addFields as well)\n {\n $project: {\n _id: 1,\n attributes: 1,\n event_type: 1,\n events: \"$events.events\"\n }\n}]);\n[\n {\n _id: 5f6a81a9690a0787557f797e,\n event_type: 'add_to_cart',\n attributes: {\n first_name: 'John',\n last_name: 'Doe',\n email: '[email protected]'\n },\n events: [\n {\n id: 'unique-id-6',\n event: 'add_to_cart',\n event_data: { 'some-data': 1 },\n created_at: 2020-02-01T10:10:10.000Z\n },\n {\n id: 'unique-id-8',\n event: 'add_to_cart',\n event_data: { 'some-data': 1 },\n created_at: 2020-02-01T10:10:10.000Z\n }\n ]\n },\n {\n _id: 5f6a81a9690a0787557f797e,\n event_type: 'opened',\n attributes: {\n first_name: 'John',\n last_name: 'Doe',\n email: '[email protected]'\n },\n events: [\n {\n id: 'unique-id-2',\n event: 'opened',\n event_data: { 'some-data': 1 },\n created_at: 2020-01-01T10:10:10.000Z\n },\n {\n id: 'unique-id-3',\n event: 'opened',\n event_data: { 'some-data': 1 },\n created_at: 2020-01-01T10:10:10.000Z\n }\n ]\n },\n {\n _id: 5f6a81a9690a0787557f797e,\n event_type: 'clicked',\n attributes: {\n first_name: 'John',\n last_name: 'Doe',\n email: '[email protected]'\n },\n events: [\n {\n id: 'unique-id-1',\n event: 'clicked',\n event_data: { 'some-data': 1 },\n created_at: 2020-01-01T10:10:10.000Z\n }\n ]\n }\n]\n", "text": "@jellyx,Ok I get now what you have requested. I see how you can do it with a $facet, however, the aggregation framework is very powerful and you can achieve what you need with a standard grouping stages game and $sort and $slice as following (see my comments on each stage):This aggregation with my test data retrieve 2 most recent events for every unique type:Hope that helps!Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "db.clients.aggregate( [{\n // Match the needed doc\n $match: {\n _id: ObjectId('5f6a81a9690a0787557f797e')\n }\n}, \n// Unwind events to go over each event\n{\n $unwind: '$events'\n}, \n// Sort all events by date\n{\n $sort: {\n \"events.created_at\": -1\n }\n}, \n// Group the events by event name and push them to a new array of ordered events by type\n{\n $group: {\n _id: \"$events.event\",\n events: {\n $push: \"$CURRENT\"\n }\n }\n}, \n// Construct the document back to original shape and slice the created arrays by \"n\" , eg. 
first 2 most recent elements\n{\n $project: {\n _id: {\n $arrayElemAt: ['$events._id', 0]\n },\n event_type: \"$_id\",\n attributes: {\n $arrayElemAt: ['$events.attributes', 0]\n },\n events: {\n $slice: ['$events', 0, 2]\n }\n\n }\n},\n// reconstruct again to have the original view with events array (could use $addFields as well)\n {\n $project: {\n _id: 1,\n attributes: 1,\n event_type: 1,\n events: \"$events.events\"\n }\n}]);\ndb.clients.aggregate([\n {\n $match: { \n \"_id\" : ObjectId(\"5f6a81a9690a0787557f797e\") \n }\n },\n { \n $unwind : \"$events\" \n },\n { \n $project: {\n \"_id\": \"$events.id\",\n \"event\": \"$events.event\",\n \"event_data\": \"$events.event_data\",\n \"created_at\": \"$events.created_at\"\n }\n },\n { \n $sort: { \n \"created_at\" : 1 \n }\n },\n { \n $facet : {\n \"add_to_cart\" : [\n { $match: { \"event\" : \"add_to_cart\" } },\n { $limit: 2 }\n ],\n \"delivered\" : [\n { $match: { \"event\" : \"delivered\" } },\n { $limit: 3 } // e.g.\n ]\n }\n }\n])\nfacet", "text": "Thanks for the answer. That works too.Here is how I did it:Your option is better because I don’t need to ‘manually’ through backend add events in facet. Thanks.", "username": "jellyx" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Only one item per group
2020-10-03T13:30:12.991Z
Only one item per group
9,404
null
[]
[ { "code": "{\n \"array1\": [\n {value: \"x\"}\n {value: \"y\"}\n {value: \"z\"}\n {value: 0}\n {value: 1}\n {value: 2}\n {value: 3}\n ]\n}\n", "text": "I’ve got a collection similar to the following structureIf I want to remove the last 3 elements in the array, how can this be done with a Mongo query?", "username": "scar9" }, { "code": "{\n \"aggregate\": \"testcoll\",\n \"pipeline\": [\n {\n \"$addFields\": {\n \"array1\": {\n \"$slice\": [\n \"$array1\",\n {\n \"$subtract\": [\n {\n \"$size\": \"$array1\"\n },\n 3\n ]\n }\n ]\n }\n }\n }\n ],\n \"maxTimeMS\": 300000,\n \"cursor\": {}\n}\n", "text": "Hello : )seeThe bellow code,says take the elements from [ 0 , size(array)-3]\nI didn’t run it i hope will be ok.The bellow is database command,you can take the pipeline\nand use it in aggregation or update.If you want any range,like remove 5 6 7 10 12 indexes,you can reduce the array,and keep only the wanted members.", "username": "Takis" } ]
Removing elements in a mongo array by range of indices
2020-10-02T18:49:00.186Z
Removing elements in a mongo array by range of indices
3,202
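Editor's note: written as a pipeline-style update (MongoDB 4.2+) and guarded for arrays with 3 or fewer elements, the $slice idea from the answer above becomes the following - the collection name is a placeholder:

```js
db.mycoll.updateMany({}, [
  { $set: {
      array1: {
        $cond: [
          { $gt: [ { $size: "$array1" }, 3 ] },
          // keep the first (size - 3) elements, i.e. drop the last 3
          { $slice: [ "$array1", { $subtract: [ { $size: "$array1" }, 3 ] } ] },
          [] // removing the last 3 from a shorter array leaves nothing
        ]
      }
  } }
]);
```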
null
[]
[ { "code": "", "text": "Hi,\nI want to use Mongo Atlas for production level code. Currently I am using a M30 cluster with 1 primary and 2 replicas and the reads are set to use the primary node. Is there a way to read data from all the 3 nodes rather than just primary node? Should I consider a shard cluster?Please let me know your thoughts.Thanks,\nSupriya", "username": "Supriya_Bansal" }, { "code": "", "text": "Hi @Supriya_Bansal,What version is your Atlas cluster? We do offer different read Preferences for your operations as part of the client logic.Now there are a few considerations you need to be aware of:The secondary is going through same write workload as the primary. Non blocking reads (blocked behind replication) were introduced only in 4.0+. So although you will ease your primary you will not necessarily get better performanceSince replication is asynchronous you may read stale data. Not all reads can live with that.The driver does not necessarily load balance connections across secondaries and therefore it will be more a round robin assignment.I would suggest that if you have specific load you want to offload from the primary for analytics you should provision ANALYTIC nodes .MongoDB official stand on scaling is by using sharding as the best scale out option. You can always just scale up with raising the Atlas tier as well.Please read more on our production consideration:Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you so much for the response. Atlas cluster is using the latest Mongo 4.4 version.\nI am going to look into sharding of Atlas cluster. My storage would be less than 1 GB which is around 500 MB. Raising the Atlas tier definitely provides higher RAM and CPU but gives lot of storage which is not needed for me.Best,\nSupriya", "username": "Supriya_Bansal" }, { "code": "", "text": "Hi @Supriya_Bansal,With this small storage I expect M30-M40 to.be good enough with a replica set. Do you know the data size?If you can remain with one replica set I would consider that as you van convert any replica set to a sharded cluster at any time.Be aware that sharding requries to shard collections with a performant sharding key which might introduce extra complexity, just a warning.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,\nThank you again for the guidance.\nThe data size is 72MB which is going to grow till 150MB by end of this year. This is static data, once the collection is uploaded in Mongo its used only for read purposes. Average document size is around 480B.Thanks,\nSupriya", "username": "Supriya_Bansal" }, { "code": "", "text": "Hi @Supriya_Bansal,Sharding is definitely not needed. If there is fairly low traffic you may consider even M20 for those sizes as it will still fit into memory…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo Atlas scaling
2020-10-01T15:40:23.187Z
Mongo Atlas scaling
2,872
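Editor's note: the read-preference options Pavel refers to are configured on the client rather than the cluster. A sketch - the host, user and database below are placeholders, and the nodeType:ANALYTICS tag only applies if analytics nodes have actually been provisioned in Atlas:

```js
// Per-connection (shell) or per-query read preference:
db.getMongo().setReadPref("secondaryPreferred");
db.orders.find().readPref("secondary");

// Or in the connection string, optionally pinned to Atlas analytics nodes:
// mongodb+srv://user:[email protected]/mydb?readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS
```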
null
[]
[ { "code": "", "text": "Hi everyone,Just trying to get some insights on the appropriate Realm SDK for a desktop app built in NextJS, Electron - gRPC. Should it be the NodeJS SDK or the Web SDK?Any feedback is appreciated. Thank you!", "username": "acru" }, { "code": "", "text": "@acru If you want persistence on device then you should go with the node.js SDK - if you want more of a web-client model then I would go with the web SDK", "username": "Ian_Ward" }, { "code": "", "text": "Thanks @Ian_Ward! Leaning towards the NodeJS SDK. The app in essence is a desktop messaging app (with all the bells and whistles) that uses gRPC calls. Then those data would have to stored into the database.", "username": "acru" } ]
Appropriate Realm SDK for desktop app
2020-10-02T18:49:02.756Z
Appropriate Realm SDK for desktop app
2,054
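Editor's note: a minimal sketch of what the "persistence on device" option looks like with the Node.js SDK of that era (realm 10.x). The app id, partition value and schema are all placeholders:

```js
const Realm = require("realm");

const app = new Realm.App({ id: "<your-realm-app-id>" }); // placeholder

async function openMessagesRealm() {
  const user = await app.logIn(Realm.Credentials.anonymous());
  // A sync config gives a local on-disk realm that also syncs --
  // the reason to prefer this SDK over realm-web for a desktop app.
  return Realm.open({
    schema: [{ name: "Message", primaryKey: "_id",
               properties: { _id: "objectId", text: "string" } }],
    sync: { user, partitionValue: "desktop-demo" },
  });
}
```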
null
[ "node-js" ]
[ { "code": "_id:5f2f40b8d1287d5c98309cb9\nfoo: \"Some value\"\nzed: \"Another Thing\"\nitems: Object\n item_16414803554: Object\n name: \"Some Name\"\n age: \"45\"\n item_16414806548: Object\n name: \"Some Name\"\n age: \"45\"\n", "text": "Hello, I am a new to MongoDB and I seeking help.I have a document in my collection that looks like this:I am using mongojs to work with the data and I need to remove one of the sub objects like “item_16414806548”. I cannot figure out what the query should be to do so.I tried this:db.mycollection.remove({_id: mongojs.ObjectId(id)}, {items: {_id: {$eq: itemid }}}, function(err, doc) {\nres.json(doc);\n});But that’s not working. Can anyone tell me what the proper query would be? Thanks for any help!", "username": "Jason_B" }, { "code": ".remove().updateOne({//matching conditions}, {$unset:{\"items.item_16414806548\": \"\"}})", "text": "Hi, .remove() will actually remov the whole document, not part of document. What you might want to do is to update a document to set/unset a key in it.\nTry - .updateOne({//matching conditions}, {$unset:{\"items.item_16414806548\": \"\"}})\nThis will unset (update the document to remove that key) the key, and when you write it in dot notation, it can access the internal key item_16414806548 from items.\nCheckout - $unset", "username": "shrey_batra" }, { "code": "", "text": "Thank you! Going to give that a try! Is there a way I can pass the items.item_16414806548 in as a variable. I cannot seem to put a variable in there to save my life. Thanks again for responding!", "username": "Jason_B" }, { "code": "key_var = \"items.item_16414806548\"\ndb.coll.updateOne({/*matching conditions*/}, {$unset:{ key_var : \"\"}})\n", "text": "Try -", "username": "shrey_batra" }, { "code": "", "text": "Ok, I will! Thanks again!", "username": "Jason_B" }, { "code": "", "text": "Just wanted to let you know that your advice was greatly appreciated! Because of you I got it to work. Thank you so much!!", "username": "Jason_B" } ]
Need help with query for subobject
2020-09-30T20:16:56.196Z
Need help with query for subobject
2,617
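Editor's note: the detail that makes the variable version above work is JavaScript's object-literal semantics - a bare identifier key is the literal string, while an ES2015 computed property name puts the variable's value in the key position. A sketch reusing the thread's names (and assuming a mongojs version exposing updateOne):

```js
const itemId = "item_16414806548";      // e.g. received from the client
const field = "items." + itemId;        // dot notation reaches the sub-object

// { key_var: "" } would literally create the key "key_var";
// [field] evaluates the expression and uses its value as the key.
db.mycollection.updateOne(
  { _id: mongojs.ObjectId(id) },        // id as in the original question
  { $unset: { [field]: "" } },
  function (err, doc) { res.json(doc); }
);
```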
null
[ "node-js", "field-encryption" ]
[ { "code": "mongocryptdschemaJSON.stringifyMongoError: Array elements must have type BinData, found objectmongocryptd", "text": "Hi,\nI’m trying to perform automatic client-side field level encryption using mongoDB Enterprise Server 4.2 (ultimately Mongo Atlas 4.2 Cluster) and Node mongo driver version 3.6.1.I’ve already tested connections and installations by performing the same routine using both servers associated with a compatible mongo shell script.\nWhen using the node driver, I see that the mongocryptd starts but the encryption doesn’t actually occur.I have two different behaviours depending on how I use the schema. When I JSON.stringify it before inserting a document, the insertion results in success but no encryption is performed and the data is saved as plain text.If I initialise the client with the JS object for the schema, I get the following error:\nMongoError: Array elements must have type BinData, found objectI’ve even tried starting the mongocryptd process manually on port 27020 and only then run the script. I see that connections are made to mongocryptd but still no data is encrypted.Any help would be appreciated.", "username": "Helton_Costa" }, { "code": "", "text": "Hi @Helton_Costa, welcome to the forumsJust based on the information that you’ve provided, it seems like there was a slight error in how the script encrypts. You may find the examples listed on Client Side Field Level Encryption Guide useful as a reference (You can toggle the code snippets to Node.JS).If you are still encountering the issue, please provide an example code that could reproduce the error.Regards,\nWan.", "username": "wan" }, { "code": "const kmsProviders = {\n aws: {\n accessKeyId: \"<AWS_KEY_ID>\",\n secretAccessKey: \"<AWS_SECRET_ACCESS>\"\n }\n}\n\nconst mySchema = {\n 'healthcheck.Users': {\n \"bsonType\": \"object\",\n \"properties\": {\n \"name\": {\n \"encrypt\": {\n \"bsonType\": \"string\",\n \"keyId\": [\n {\n \"$binary\": {\n \"base64\": \"<BASE64_KEY_ID>\",\n \"subType\": \"04\"\n }\n }\n ],\n \"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\",\n }\n },\n }\n }\n}\n\nlet client = new MongoClient(Url, {\n useUnifiedTopology: true,\n autoEncryption: {\n keyVaultNamespace,\n kmsProviders,\n schemaMap: mySchema,\n extraOptions: {\n mongocryptdSpawnArgs: [\"--port\", \"30000\"],\n logger: 4,\n mongocryptdURI: 'mongodb://localhost:30000',\n mongocryptdSpawnPath: '/usr/local/bin/mongocryptd',\n }\n }\n})\n\nconst writeDocument = async () => {\n try {\n await client.connect()\n let coll = client.db('mydb').collection('mycollection')\n let res = await coll.insertOne({name: 'John Doe'})\n } catch (e) {\n console.log(e)\n } finally {\n await client.close()\n }\n}\nkeyId: [\"<base64_jey_id>\"]", "text": "Hi @wan,Not sure I understood what you meant, but I’ve already used that resource to get to where I am now.\nEven if I try copying all the code there, just changing the kms and the server, I still get the same behaviour.Assuming I have correct master key on KMS and data encryption key (since it worked for mongo shell), this is the code I’m using:I also tried creating the schema with keyId: [\"<base64_jey_id>\"] and the same thing happens.\nLet me know if there’s any information missing.", "username": "Helton_Costa" }, { "code": "", "text": "were you able to reproduce the issue @wan ?", "username": "Helton_Costa" }, { "code": "keyId \"keyId\": [\n {\n \"$binary\": {\n \"base64\": \"<BASE64_KEY_ID>\",\n \"subType\": \"04\"\n }\n }\n ],\nvar Binary = require('mongodb').Binary;\n\nlet 
base64DataKeyId = \"<BASE64_KEY_ID>\";\nlet buff = Buffer.from(base64DataKeyId, 'base64');\nlet keyid = new Binary(buff, Binary.SUBTYPE_UUID); \n\nconst mySchema = {\n...\n \"keyId\": [keyid],\n...\n}\n", "text": "Hi @Helton_Costa,Could you try replacing the keyId part of the schema:With the following:Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Hi @wan, that solved it for me.\nThanks for the help.", "username": "Helton_Costa" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Automatic CSFLE Node mongocryptd issue
2020-09-22T22:45:35.541Z
Automatic CSFLE Node mongocryptd issue
3,058
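Editor's note: a quick way to verify that a fix like the one above actually encrypts is to read the collection back with a second client that has no autoEncryption configured; the field should come back as Binary subtype 6 rather than plaintext. A sketch reusing the thread's names:

```js
// Plain client: no autoEncryption option, so nothing is auto-decrypted.
const plainClient = new MongoClient(dbURI, { useUnifiedTopology: true });
await plainClient.connect();
const doc = await plainClient.db("mydb").collection("mycollection").findOne({});
console.log(doc.name); // expected: Binary { sub_type: 6, ... } rather than "John Doe"
await plainClient.close();
```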
null
[]
[ { "code": "", "text": "Hello folks,I have a 16TB Database in production (AWS) currently running on 3.6 and I want to upgrade it to 4.2. The issue I am having is related to the WiredTiger Checkpoints (WTCheckpointThread), those are taking very long time in 4.2 version. I noticed that because when running a daily snapshot backup on the Secondary DB it takes up to 10 minutes to complete the db.fsyncLock() so I checked the DB log and found those WiredTiger messages. Here is one example:\n2020-07-28T03:37:28.690+0000 I STORAGE [WTCheckpointThread] WiredTiger message [1595907448:690341][35550:0x7f08d4e8b700], WT_SESSION.checkpoint: Checkpoint ran for 609 seconds and wrote: 418375 pages (26402 MB)As I said before, I have the same DB size with the exact same DB resources and configurations and don’t have this issue there so I want to understand why this is happening or if there is something specific for 4.2 version that is causing this.Can you please help.Thanks,\nEfrainI don", "username": "Lopez_Efrain" }, { "code": "", "text": "I am having similar issues with multiple checkpoints running for multiple hours in 4.2", "username": "Matthew_Zimmerman" } ]
WTCheckpointThread take up to 10 minutes on Mongo 4.2
2020-07-28T19:50:07.857Z
WTCheckpointThread take up to 10 minutes on Mongo 4.2
3,350
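Editor's note: when comparing checkpoint behaviour between the 3.6 and 4.2 clusters described above, the WiredTiger section of serverStatus is the first place to look. The field names below are as emitted by WiredTiger and can differ slightly between versions:

```js
const txn = db.serverStatus().wiredTiger.transaction;
print(txn["transaction checkpoint most recent time (msecs)"]); // duration of the last checkpoint
print(txn["transaction checkpoint currently running"]);        // 1 while a checkpoint is active
print(txn["transaction checkpoints"]);                         // total checkpoints since startup
```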
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.2.10 is out and is ready for production deployment. This release contains only fixes since 4.2.9, and is a recommended upgrade for all 4.2 users.\nFixed in this release:4.2 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.2.10 is released
2020-10-02T19:24:25.735Z
MongoDB 4.2.10 is released
1,755
null
[ "sharding", "performance" ]
[ { "code": "{\"t\":{\"$date\":\"2020-08-27T10:44:02.290+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn1258\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"admin.$cmd\",\"command\":{\"_recvChunkStatus\":\"cdrarch.cdr_af_20200830\",\"waitForSteadyOrDone\":true,\"sessionId\":\"db_rs002_db_rs035_5f476ff3da305f92f9c94d31\",\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1598517840,\"i\":2}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"ih0J82H8GKLsFMBDFj7Xoa0DJho=\",\"subType\":\"0\"}},\"keyId\":6821045628173287455}},\"$configServerState\":{\"opTime\":{\"ts\":{\"$timestamp\":{\"t\":1598517840,\"i\":1}},\"t\":9}},\"$db\":\"admin\"},\"numYields\":0,\"reslen\":734,\"locks\":{},\"protocol\":\"op_msg\",\"durationMillis\":1001}}\n{\"t\":{\"$date\":\"2020-08-27T10:44:03.294+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn1258\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"admin.$cmd\",\"command\":{\"_recvChunkStatus\":\"cdrarch.cdr_af_20200830\",\"waitForSteadyOrDone\":true,\"sessionId\":\"db_rs002_db_rs035_5f476ff3da305f92f9c94d31\",\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1598517841,\"i\":1}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"rd++AdPb36F7OEENJuunnvxzLEA=\",\"subType\":\"0\"}},\"keyId\":6821045628173287455}},\"$configServerState\":{\"opTime\":{\"ts\":{\"$timestamp\":{\"t\":1598517841,\"i\":1}},\"t\":9}},\"$db\":\"admin\"},\"numYields\":0,\"reslen\":734,\"locks\":{},\"protocol\":\"op_msg\",\"durationMillis\":1001}}\n", "text": "hi gurus,We had a situation where manual chunk movements were started (manually hence) while data loading was busy at the same time. The MongoDB cluster granted to a halt…\nSo we interrupted the “mongo” client from where the chunk movements were initiated.Now, with a completely idle cluster (no data loading, nada happening on it), we restarted those chunk movements of an entirely new, thus empty (no docs) collection. And it goes slow ! On chunk moved every 15 mins or so.\nAnd no, the servers underneath are not busy doing other things. It’s dedicated hardware (12 servers), local storage, plenty of RAM. None of their resources are exhausted. Network is 1Gbps NICs, PING roundtrips are sub-msec speeds.How can we find the cause of those super slow chunk movements ? Sure we can raise the logging verbosity to any desired level, but it contains tons of messages of things going well… So not of great help, unless we know in advance where to look for ‘the’ problem.Eg messages we get in the “mongod.log” of the primary server of the receiving shard, are this (every second):and so on.\nThese logs appear because of “slow query”. What is the query then? 
I see “_recvChunkStatus” which refers to the name of the collection (“cdrarch.cdr_af_20200830”).\nThe sending replicaset/shard is named “db_rs002”, the receiving replicaset/shard is named “db_rs035”.Any help to get more visibility on what’s going slow under the hood would be greatly appreciated !thx in advance !\nRob", "username": "Rob_De_Langhe" }, { "code": "...\n{\"t\":{\"$date\":\"2020-08-31T06:25:09.557+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn357\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"_KEYHOLE_88800.$cmd\",\"command\":{\"dbStats\":1,\"lsid\":{\"id\":{\"$uuid\":\"b8998ee1-a8d9-46be-ba84-d0d1c40b19bd\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1598847907,\"i\":4}},\"signature\":{\"keyId\":6821045628173287455,\"hash\":{\"$binary\":\"base64\":\"7ks/HUkgj+LvEKYINCIfalrKvVU=\",\"subType\":\"0\"}}}},\"$db\":\"_KEYHOLE_88800\"},\"numYields\":0,\"reslen\":11324,\"protocol\":\"op_query\",\"durationMillis\":1216}}\n{\"t\":{\"$date\":\"2020-08-31T06:25:09.794+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn357\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"admin.$cmd\",\"command\":{\"dbStats\":1,\"lsid\":{\"id\":{\"$uuid\":\"b8998ee1-a8d9-46be-ba84-d0d1c40b19bd\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1598847908,\"i\":1}},\"signature\":{\"keyId\":6821045628173287455,\"hash\":{\"$binary\":\"base64\":\"n4DO3e0Xx2Fv0taEA1k7CqkR97U=\",\"subType\":\"0\"}}}},\"$db\":\"admin\"},\"numYields\":0,\"reslen\":11630,\"protocol\":\"op_query\",\"durationMillis\":234}}\n{\"t\":{\"$date\":\"2020-08-31T06:25:12.726+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn357\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"cdrarch.$cmd\",\"command\":{\"dbStats\":1 ,\"lsid\":{\"id\":{\"$uuid\":\"b8998ee1-a8d9-46be-ba84-d0d1c40b19bd\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1598847909,\"i\":3}},\"signature\":{\"keyId\":6821045628173287455,\"hash\":{\"$binary\":{\"base64\":\"RaCGq/SXVR7wYB7Nb0L/G+R2OfI=\",\"subType\":\"0\"}}}},\"$db\":\"cdrarch\"},\"numYields\":0,\"reslen\":11718,\"protocol\":\"op_query\",\"durationMillis\":2930}}\n{\"t\":{\"$date\":\"2020-08-31T06:25:14.738+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":20997, \"ctx\":\"Uptime-reporter\",\"msg\":\"Refreshed RWC defaults\",\"attr\":{\"newDefaults\":{}}}\n{\"t\":{\"$date\":\"2020-08-31T06:25:15.631+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn357\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"config.$cmd\",\"command\":{\"dbStats\":1,\"lsid\":{\"id\":{\"$uuid\":\"b8998ee1-a8d9-46be-ba84-d0d1c40b19bd\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1598847912,\"i\":1}},\"signature\":{\"keyId\":6821045628173287455,\"hash\":{\"$binary\":{\"base64\":\"XJgwP/q5znnKbcpFEEaobUrBeF4=\",\"subType\":\"0\"}}}},\"$db\":\"config\"},\"numYields\":0,\"reslen\":11666,\"protocol\":\"op_query\",\"durationMillis\":2902}}\n{\"t\":{\"$date\":\"2020-08-31T06:25:16.460+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn357\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"_KEYHOLE_88800.$cmd\",\"command\":{\"dbStats\":1,\"lsid\":{\"id\":{\"$uuid\":\"b8998ee1-a8d9-46be-ba84-d0d1c40b19bd\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1598847915,\"i\":2}},\"signature\":{\"keyId\":6821045628173287455,\"hash\":{\" 
$binary\":{\"base64\":\"7M14WAvT4WlyqOnFwnuTbCiPLdQ=\",\"subType\":\"0\"}}}},\"$db\":\"_KEYHOLE_88800\"},\"numYields\":0,\"reslen\":11324,\"protocol\":\"op_query\",\"durationMillis\":120}}\n{\"t\":{\"$date\":\"2020-08-31T06:25:24.746+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":20997, \"ctx\":\"Uptime-reporter\",\"msg\":\"Refreshed RWC defaults\",\"attr\":{\"newDefaults\":{}}}\n...\n{\"t\":{\"$date\":\"2020-08-31T06:35:19.076+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn142636\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"cdrarch.cdr_af_20200830\",\"appName\":\"MongoDB Shell\",\"command\":{\"moveChunk\":\"cdrarch.cdr_af_20200830\",\"find\":{\"SHARD_MINSEC\":608.0},\"to\":\"db_rs033\",\"_waitForDelete\":true,\"lsid\":{\"id\":{\"$uuid\":\"f734929b-2bc9-4c58-ab1b-a5027730cb7e\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1598847531,\"i\":2}},\"signature\":{\"hash\":{\"$binary\":\"base64\":\"AvmYRG2AnH4Ui6mH7I22bm4c64w=\",\"subType\":\"0\"}},\"keyId\":6821045628173287455}},\"$db\":\"admin\"},\"numYields\":0,\"reslen\":175,\"protocol\":\"op_msg\",\"durationMillis\":987521}}\n{\"t\":{\"$date\":\"2020-08-31T06:35:19.485+02:00\"},\"s\":\"I\", \"c\":\"SH_REFR\", \"id\":24104, \"ctx\":\"ConfigServerCatalogCacheLoader-1495\",\"msg\":\"Refreshed cached collection\",\"attr\":{\"namespace\":\"cdrarch.cdr_af_20200830\",\"newVersion\":{\"0\":{\"$timestamp\":{\"t\":3910,\"i\":1}},\"1\":{\"$oid\":\"5f46d8b6da305f92f9b46bdf\"}},\"oldVersion\":\" from version 3909|1||5f46d8b6da305f92f9b46bdf\",\"durationMillis\":7}}\n", "text": "I tried to find any documentation about how the chunk movements can be traced with any verbosity, but this seems nowhere documented.\nThe chunk migrations (empty chunks, zero documents in them) of newly created collections still go horribly slow, nearly granted to a halt (one migration every 15mins or so).\nStill, there is zero other activity on my cluster, we cannot use it until those new collections are ready with their chunks on the right shard servers.\nMongoDB server version is 4.4.0The logs of the router servers show:So you see that the “moveChunk” command took nearly 1000 secs.", "username": "Rob_De_Langhe" }, { "code": "{\"t\":{\"$date\":\"2020-08-29T22:05:27.033+02:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"thread1\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":95,\"message\":\"[1598731527:33588][86:0x7fa900293700], log-server: __posix_std_fallocate, 58: /var/lib/mongodb/journal/WiredTigerTmplog.0000000007: fallocate:: Operation not supported\"}}\n", "text": "I many of my shard servers, I see following WiredTiger errors in the log:Are these “fallocate” errors harmless or not ?", "username": "Rob_De_Langhe" }, { "code": "# pwd\n/var/lib/mongodb/journal\n# ls -ltr\ntotal 20560\n-rw------- 1 mongodb mongodb 104857600 Aug 30 15:59 WiredTigerPreplog.0000000007\n-rw------- 1 mongodb mongodb 104857600 Sep 1 09:52 WiredTigerPreplog.0000000010\n-rw------- 1 mongodb mongodb 104857600 Sep 1 17:45 WiredTigerLog.0000000997", "text": "I see that this file is perfectly existing:", "username": "Rob_De_Langhe" }, { "code": "/var/lib/mongodb/journaldf -T /var/lib/mongodb/journal", "text": "Hi @Rob_De_LangheWhat are the vital statistics, OS, Mongo Version. 
And what is the filesystem type of /var/lib/mongodb/journaldf -T /var/lib/mongodb/journal", "username": "chris" }, { "code": "", "text": "hi Chris,\nthx a lot for your feedback.\nThe servers run Ubuntu 18, MongoDB 4.4.0, all inside LXC containers, 36 shards of 3 servers each.\nThe servers are nearly idling (some 10-15% busy CPU, several 10’s of GB RAM free).\nThe filesystems underneath the LXC containers (including their MongoDB directories) is ZFS.\nI read about some issue with ZFS and the “fallocate” system calls to pre-allocate big files like those journal files.\nSo I am now gradually converting my servers away from ZFS and towards LVM/mdadm, very unfortunately because we will miss the flexibilities and ease of working with ZFS.\nBut still I suspect that the chunk inventory on my config servers is somehow corrupted, and no longer being updated efficiently with new moves (hence the dramatic delays, I think). Also new moves requests take very long time and then seem to be suddenly cancelled or skipped because they terminate without any message but still are not executed (as shown by “sh.status()” ).\nIs this assumption possible? Is there some check that I could run to verify that all internal collections (like the chunks locations) are still consistent ?thx again for any further feedbacks, mostly appreciated !\nRob", "username": "Rob_De_Langhe" }, { "code": "_recvChunkStatus", "text": "_recvChunkStatusHi Rob,Did you have some progress since your last post ?I’m facing the same kind of problem (on a lighter cluster : 2 replicated shards)\nDue to massive delete and may be “bad” partition key, one of the shard was full of empty chunks.\nAfter a merging procedure, the balancer started his job to balance the 2 shard.\nDuring 24h, the moving time of the (~32->64m) chunk was not quick, but acceptable (something like 30\").\nSuddenly, in a couple of hours, the perf degraded, and now we need thousands of seconds to move a single chunk.\nIf we liberate the cluster from heavy RW load, it doesn’t change anything.The only related traces are the _recvChunkStatus you see too.\nWe don’t see “fallocate” errors.", "username": "FRANCK_LEFEBURE" }, { "code": "", "text": "hi Franck, Chris,still facing the same issue: chunk migrations run terribly slow, so slow that they do not complete entirely and thus we can not start loading data in MongoDB because the chunks are still on a single shard…Performance wise, the underlying physical servers are nearly fully idle.\nWe migrated all the filesystems away from ZFS, and towards BTRFS, so ZFS not being supported by MongoDB should be no more possible cause.\nNo bad or suspicious logs are reported, just simply terribly slow (30 mins or more for 1 chunk).After some hours, the shell session gets disconnected (too long busy, or idle). 
I re-launch the same chunk movements, and the same chunks that were reported during the previous attempt as having been moved, are being moved again ?!-> how can we check what is happening during the chunk movements, which is delaying the progress?By the way: all is running now version 4.4.1", "username": "Rob_De_Langhe" }, { "code": "{\"t\":{\"$date\":\"2020-10-02T02:59:27.634+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3845\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"config.$cmd\",\"command\":{\"update\":\"mongos\",\"bypassDocumentValidation\":false,\"ordered\":true,\"updates\":[{\"q\":{\"_id\":\"mongop_rtr1:27017\"},\"u\":{\"$set\":{\"_id\":\"mongop_rtr1:27017\",\"ping\":{\"$date\":\"2020-10-02T00:59:27.533Z\"},\"up\":6361,\"waiting\":true,\"mongoVersion\":\"4.4.1\",\"advisoryHostFQDNs\":[\"mongop_rtr1.c5qbig.net\"]}},\"multi\":false,\"upsert\":true}],\"writeConcern\":{\"w\":\"majority\",\"wtimeout\":60000},\"maxTimeMS\":30000,\"$replData\":1,\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1601600365,\"i\":1}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"dlGdgrHNtIx9THiJXv/LoISsF64=\",\"subType\":\"0\"}},\"keyId\":6821045628173287455}},\"$configServerState\":{\"opTime\":{\"ts\":{\"$timestamp\":{\"t\":1601600365,\"i\":1}},\"t\":12}},\"$db\":\"config\"},\"numYields\":0,\"reslen\":619,\"locks\":{\"ParallelBatchWriterMode\":{\"acquireCount\":{\"r\":3}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":3}},\"Global\":{\"acquireCount\":{\"r\":2,\"w\":1}},\"Database\":{\"acquireCount\":{\"r\":1,\"w\":1}},\"Collection\":{\"acquireCount\":{\"w\":1}},\"Mutex\":{\"acquireCount\":{\"r\":2}},\"oplog\":{\"acquireCount\":{\"r\":1}}},\"flowControl\":{\"acquireCount\":1,\"timeAcquiringMicros\":1},\"writeConcern\":{\"w\":\"majority\",\"wtimeout\":60000},\"storage\":{},\"protocol\":\"op_msg\",\"durationMillis\":100}}\n{\"t\":{\"$date\":\"2020-10-02T12:14:25.743+02:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4608400, \"ctx\":\"PeriodicShardedIndexConsistencyChecker\",\"msg\":\"Skipping sharded index consistency check because feature compatibility version is not fully upgraded\"}\ndb.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )\nconnecting to: mongodb://mongop_cfg1:27019/cdrarch?authSource=admin&compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"6fa9ef61-2a3e-4c4e-b50f-616ab9631119\") }\nMongoDB server version: 4.4.1\n{\n \"featureCompatibilityVersion\" : {\n \"version\" : \"4.2\"\n },\n \"ok\" : 1,\n \"$gleStats\" : {\n \"lastOpTime\" : Timestamp(0, 0),\n \"electionId\" : ObjectId(\"7fffffff000000000000000c\")\n },\n \"lastCommittedOpTime\" : Timestamp(1601634848, 67),\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1601634848, 67),\n \"signature\" : {\n \"hash\" : BinData(0,\"7JA+LfjqaFLdPtYykM/0jUkVCUY=\"),\n \"keyId\" : NumberLong(\"6821045628173287455\")\n }\n },\n \"operationTime\" : Timestamp(1601634848, 67)\n}\n{\"t\":{\"$date\":\"2020-10-02T13:04:54.533+02:00\"},\"s\":\"D1\", \"c\":\"QUERY\", \"id\":22790, \"ctx\":\"conn463\",\"msg\":\"Received interrupt request for unknown op\",\"attr\":{\"opId\":7396919}}\n{\"t\":{\"$date\":\"2020-10-02T13:04:54.615+02:00\"},\"s\":\"D1\", \"c\":\"QUERY\", \"id\":22790, \"ctx\":\"conn317\",\"msg\":\"Received interrupt request for unknown op\",\"attr\":{\"opId\":7396921}}\n{\"t\":{\"$date\":\"2020-10-02T13:04:54.623+02:00\"},\"s\":\"D1\", \"c\":\"QUERY\", \"id\":22790, \"ctx\":\"conn20\",\"msg\":\"Received interrupt request for 
unknown op\",\"attr\":{\"opId\":7396923}}\n{\"t\":{\"$date\":\"2020-10-02T13:07:26.576+02:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn3834\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"admin.$cmd\",\"appName\":\"MongoDB Shell\",\"command\":{\"_configsvrMoveChunk\":1,\"ns\":\"cdrarch.cdr_mobi_20200830\",\"min\":{\"SHARD_MINSEC\":2015.0},\"max\":{\"SHARD_MINSEC\":2016.0},\"shard\":\"db_rs002\",\"lastmod\":{\"$timestamp\":{\"t\":3692,\"i\":1}},\"lastmodEpoch\":{\"$oid\":\"5f53fc15da305f92f9cee0be\"},\"toShard\":\"db_rs036\",\"maxChunkSizeBytes\":67108864,\"secondaryThrottle\":{},\"waitForDelete\":true,\"forceJumbo\":false,\"writeConcern\":{\"w\":\"majority\",\"wtimeout\":15000},\"lsid\":{\"id\":{\"$uuid\":\"236d3eab-d1d8-4757-a5f7-0bea4c46f8a3\"},\"uid\":{\"$binary\":{\"base64\":\"YtJ8CVGJPpojGlBhlVfpmkB+TWiGCwPUvkGEjp5tty0=\",\"subType\":\"0\"}}},\"$replData\":1,\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1601635925,\"i\":17}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"dsDAd22QCBRQKtavuAirYaWi3bI=\",\"subType\":\"0\"}},\"keyId\":6821045628173287455}},\"$audit\":{\"$impersonatedUsers\":[{\"user\":\"mongo-admin\",\"db\":\"admin\"}],\"$impersonatedRoles\":[{\"role\":\"root\",\"db\":\"admin\"}]},\"$client\":{\"application\":{\"name\":\"MongoDB Shell\"},\"driver\":{\"name\":\"MongoDB Internal Client\",\"version\":\"4.4.1\"},\"os\":{\"type\":\"Linux\",\"name\":\"Ubuntu\",\"architecture\":\"x86_64\",\"version\":\"18.04\"},\"mongos\":{\"host\":\"mongop_rtr2:27017\",\"client\":\"10.100.22.99:14298\",\"version\":\"4.4.1\"}},\"$configServerState\":{\"opTime\":{\"ts\":{\"$timestamp\":{\"t\":1601635925,\"i\":17}},\"t\":12}},\"$db\":\"admin\"},\"numYields\":0,\"reslen\":537,\"locks\":{\"ParallelBatchWriterMode\":{\"acquireCount\":{\"r\":11}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":14}},\"Global\":{\"acquireCount\":{\"r\":11,\"w\":3}},\"Database\":{\"acquireCount\":{\"r\":6,\"w\":3}},\"Collection\":{\"acquireCount\":{\"r\":2,\"w\":3}},\"Mutex\":{\"acquireCount\":{\"r\":9}},\"oplog\":{\"acquireCount\":{\"r\":4}}},\"flowControl\":{\"acquireCount\":3,\"timeAcquiringMicros\":3},\"writeConcern\":{\"w\":\"majority\",\"wtimeout\":15000},\"storage\":{},\"protocol\":\"op_msg\",\"durationMillis\":921059}}\nTotals\n data : 0B docs : 0 chunks : 3601\n Shard db_rs028 contains 0% data, 0% docs in cluster, avg obj size on shard : 0B\n Shard db_rs034 contains 0% data, 0% docs in cluster, avg obj size on shard : 0B\n Shard db_rs036 contains 0% data, 0% docs in cluster, avg obj size on shard : 0B\n Shard db_rs002 contains 0% data, 0% docs in cluster, avg obj size on shard : 0B\n Shard db_rs031 contains 0% data, 0% docs in cluster, avg obj size on shard : 0B\n Shard db_rs029 contains 0% data, 0% docs in cluster, avg obj size on shard : 0B\n Shard db_rs030 contains 0% data, 0% docs in cluster, avg obj size on shard : 0B\n Shard db_rs035 contains 0% data, 0% docs in cluster, avg obj size on shard : 0B\n Shard db_rs033 contains 0% data, 0% docs in cluster, avg obj size on shard : 0B\n Shard db_rs032 contains 0% data, 0% docs in cluster, avg obj size on shard : 0B", "text": "Typical logs messages on our config servers (replica set of 3 servers) report about slow queries, but not sure if that is the cause or a consequence of the problems:(the node name “mongop_rtr1” is one of our router servers)Another one from the logs of config server that might be relevant (or not, who can tell?) 
:Checking further on this “feature compatibility version” log, I ran the following command against the primary server of the config replica set:which returned the following output:Notice this version=4.2; is this as expected, since we are running MongoDB binaries of version 4.4.1?\nI have run this same command against the primaries of all my replica sets, and they all reply with version=4.2 while running MongoDB binaries of version 4.4.1\n-> is that ok, or not?Raised the log verbosity on the config shards for the “config” database from level 0 to 1 for the “QUERY” logs, and got very many of these entries:-> normal, or not?And finally, the reporting about terribly slow queries for the chunks:And to confirm again that no single document needs to be moved with these chunk movements: there are 0 (=zero) docs in this collection:", "username": "Rob_De_Langhe" }, { "code": "", "text": "Notice this version=4.2; is this as expected, since we are running MongoDB binaries of version 4.4.1?\nI have run this same command against the primaries of all my replica sets, and they all reply with version=4.2 while running MongoDB binaries of version 4.4.1\n-> is that ok, or not?It's fine. Looks like the FCV upgrade from 4.2 to 4.4 is left to be done. This is the last step of an upgrade, and it is recommended to run in this mode before that final operation:We migrated all the filesystems away from ZFS and towards BTRFS, so ZFS not being supported by MongoDB should no longer be a possible cause.XFS and ext4 are the recommended filesystems.After some hours, the shell session gets disconnected (too long busy, or idle).Check keepalives:https://docs.mongodb.com/manual/administration/production-checklist-operations/#linux\nhttps://docs.mongodb.com/manual/faq/diagnostics/#faq-keepaliveReading back to your original deployment description: you have 36 shards of 3-node replica sets on 12 servers? 9 mongod instances per server? Not to mention the 3 mongod for the config replica set.", "username": "chris" }, { "code": "blabla slow query blabla \"errMsg\":\"received command without explicit writeConcern on an internalClient connection { moveChunk: \\\"cdrarch.cdr_af_20200906\\\", blabla...db.adminCommand({moveChunk: \"...\", find:{...}, to:\"...\", _secondaryThrottle: true, writeConcern:{w:2}, _waitForDelete: true })", "text": "Thanks Chris for your feedback, highly appreciated!I did indeed apply the feature compatibility version “4.4”, and things suddenly started to move in my chunk-movement progress window;\nhowever, I checked the “sh.status” output for that collection, and the chunks were still not being moved.\nSo I went looking at the logs of the primary config server, and I noticed messages like this:blabla slow query blabla \"errMsg\":\"received command without explicit writeConcern on an internalClient connection { moveChunk: \\\"cdrarch.cdr_af_20200906\\\", blabla...This made me look at the write-concern options of the “moveChunk” command, and I changed my command options to this:\ndb.adminCommand({moveChunk: \"...\", find:{...}, to:\"...\", _secondaryThrottle: true, writeConcern:{w:2}, _waitForDelete: true })These two changes did the trick!", "username": "Rob_De_Langhe" }, { "code": "", "text": "Unless you are virtualising, the benefits from sharding may not eventuate. Assuming you are manually configuring the WiredTiger cache appropriately, the mongod instances will still experience contention on the filesystem cache, which will lead to more disk fetches.Overall, CPU/RAM/network/IO/file handles etc. are all going to be contended for.", "username": "chris" }, { "code": "storage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n wiredTiger:\n engineConfig:\n cacheSizeGB: 4\n configString: eviction=(threads_min=15,threads_max=15),eviction_dirty_trigger=2,eviction_dirty_target=1,eviction_trigger=90,eviction_target=80\n", "text": "Hi again Chris,Many thanks for expressing your concerns.Unless you are virtualising, the benefits from sharding may not eventuate.Yes indeed, we are running all our mongo (router, config, shard) instances in individual LXC containers (Ubuntu 18).Assuming you are manually configuring the WiredTiger cache appropriately, the mongod instances will still experience contention on the filesystem cache, which will lead to more disk fetches.Each of the containers runs on its own (BTRFS) filesystem, so each filesystem has its own cache handling. We did configure the WiredTiger cache explicitly for each mongod instance, e.g.Overall, CPU/RAM/network/IO/file handles etc. are all going to be contended for.With the separate containers, this is not an issue. All is segregated.", "username": "Rob_De_Langhe" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
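In summary, the two changes that unblocked the migrations in this thread can be sketched as the following shell session. The namespace, shard-key value, and target shard are placeholders taken from the logs above, not values to reuse as-is:

```js
// Run against a mongos as a cluster admin.

// 1. Complete the 4.2 -> 4.4 upgrade by bumping the feature compatibility version.
db.adminCommand({ setFeatureCompatibilityVersion: "4.4" })

// 2. Issue moveChunk with an explicit writeConcern, which the config server
//    logs were complaining about ("received command without explicit writeConcern").
db.adminCommand({
  moveChunk: "cdrarch.cdr_af_20200906",
  find: { SHARD_MINSEC: 608.0 },
  to: "db_rs033",
  _secondaryThrottle: true,
  writeConcern: { w: 2 },
  _waitForDelete: true
})
```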
Chunk movements stopped going quick
2020-08-27T11:52:29.760Z
Chunk movements stopped going quick
4,889
null
[ "react-native" ]
[ { "code": "", "text": "Hi,I’m having trouble updating a Realm object in React Native Realm. I’m querying for the object, able to console.log the object, but once I try to update the object by setting one of its keys, I get this error:Attempting to create an object of type ‘[OBJECT]’ with an existing primary key value ‘5f76395eeca29f5aeb466873’I’m particularly confused because I’m able to successfully update an object using that pattern (querying then mutating) in a different component.Any idea what could be going on here?Thanks!", "username": "Jerry_Wang" }, { "code": "", "text": "@Jerry_Wang You cannot change primary keys once set - if you want to do that you should delete the object and then recreate a new one with the primary key you want.", "username": "Ian_Ward" }, { "code": "", "text": "I’m not attempting to change the primary key, though.It seems as though my mutation (of properties other than the primary key) is being interpreted as a creation request.", "username": "Jerry_Wang" }, { "code": "", "text": "You said key - so I assumed primary key. What are you attempting to do then?", "username": "Ian_Ward" }, { "code": "", "text": "Sorry, I should’ve been more specific.Here’s an example:Realm object:\n{ _id: “fkl2j3”, “username”: “Jerry”, “description”: “a developer” }I’m trying to update the description with:\nrealm.write( () => { realmObject.description = “a React Native developer” } )", "username": "Jerry_Wang" }, { "code": "", "text": "@Jerry_Wang I believe you either need to use the upsert method with “modified” parameter passed -Or realmObject in your code snippet is actually a realm result and you need to get the actual indice of the object you are trying to update, as in - realmObject[0]", "username": "Ian_Ward" }, { "code": "", "text": "Interesting. Okay. I’ve checked to make share that I’m modifying the object (and not the Realm result).Any idea why updating some properties requires upsert while some properties can be directly modified?For my particular usecase, I’m able to modify “description” directly, but not “updatedAt”Realm object:\n{ _id: “fkl2j3”, “username”: “Jerry”, “description”: “a developer”, updatedAt: null }Works:\nrealm.write( () => { realmObject.description = “a React Native developer” } )Doesn’t work (throws primary key error):\nrealm.write( () => { realmObject.updatedAt = “Fri, 25 Sep 2020 02:39:28 GMT\"” } )", "username": "Jerry_Wang" }, { "code": "", "text": "@Ian_WardIt looks like upserting gives me the same error. Are there logs I can find beyond the ones provided by the Realm console that would help me debug what’s going on further?", "username": "Jerry_Wang" }, { "code": "realmObject.updatedAt = \"Fri, 25 Sep 2020 02:39:28 GMT\"updateAtrealmObject.updatedAt = new Date(\"Fri, 25 Sep 2020 02:39:28 GMT\")", "text": "realmObject.updatedAt = \"Fri, 25 Sep 2020 02:39:28 GMT\" will not work since updateAt isn’t a string. realmObject.updatedAt = new Date(\"Fri, 25 Sep 2020 02:39:28 GMT\") is better.", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error updating object in React Native Realm
2020-10-01T20:29:48.876Z
Error updating object in React Native Realm
4,100
null
[ "queries", "mongodb-shell" ]
[ { "code": "{\n \"objectClass\": \"Trunks\",\n \"timestamp\": \"1601590491\",\n \"Trunk\": \"Trunk_External\",\n \"Calls\": 0,\n \"Call Rate\": 0,\n ... blah blah...\n } \ndb.collection.find({\n timestamp: { $gt: (new Date()-12345).getTime() }\n})\n", "text": "Hello,I fear as newbie I’m missing something really obvious but no amount of googling, including searching the forum is helping… hence why I think I might be barking up the wrong tree.I’m ‘simply’ trying to find documents where a field called timestamp (I now know that was a bad name) is later/newer than a calculated unixtime. My documents all follow this format:I’m trying to do something like this:Obviously the ‘12345’ could be any number of seconds/milliseconds and then I’d get all the documents returned with a “timestamp” after that calculated date/time.I’ve tried so many things I’m not sure if it is syntax error or non-existent function. I’m not even sure getTime() is allowed in a MongoDB query, but after several hours I must seek help before time runs out and I look at extracting the whole dataset and then parsing it outside MongoDB!Has anyone tried using and comparing unixtimes in MongoDB, or is it really just a very bad date format to try and use? Cheers!!", "username": "Chunky_Plumpy" }, { "code": "", "text": "Hello @Chunky_Plumpy! welcome to the community.“timestamp”: “1601590491”,This is of type Epoch, also known as Unix timestamps, is the number of seconds (NOT milliseconds) that have elapsed since January 1, 1970 at 00:00:00 GMT.In MongoDB, the Date objects are stored as a signed 64-bit integer representing the number of milliseconds since the Unix epoch (Jan 1, 1970).Then, there are 1000 milliseconds in a second.", "username": "Prasad_Saya" }, { "code": "", "text": "Hello @Prasad_Saya, thank you for stepping in to assist. You are correct, this was one of my many mistakes but the thing that caught me out in the end was timezones. Despite the version posted originally being wrong I was able to filter correctly using:…find ({timestamp:{$gt:new Date().getTime()/1000-3600}})Sadly in the first hour (important timespan) it simply returned nothing so I charged off trying a multitude of other things and confusing myself. If only to prevent another newbie making the same mistake, the source system was generating BST Epoc timestamps which were actually ‘the future’. By the time Epoc had overtaken my original test data timestamps I’d gotten into a right old mess All sorted now, and thanks again as the ‘1000’ part was equally important!! Cheers.", "username": "Chunky_Plumpy" } ]
Calculating and comparing unixtime offsets
2020-10-01T23:02:33.953Z
Calculating and comparing unixtime offsets
12,360
null
[ "atlas-device-sync" ]
[ { "code": "ERROR: Connection[1]: Failed to resolve 'ws.eu-west-1.aws.realm.mongodb.com:443': Host not found (authoritative)\n\n{ name: 'Error',\n message: 'Host not found (authoritative)',\n isFatal: false,\n category: 'realm.util.network.resolve',\n code: 1,\n userInfo: {} }\n", "text": "Hello,Lately I’ve begun experiencing the following problem when connecting to my realm app (both from Swift and Node SDKs). Sometimes it will connect, but most of the time I get the following error.My cluster is hosted on eu-west-1, and so is my Realm app.", "username": "Morten_Toxvaerd" }, { "code": "", "text": "After dialogue with MongoDB support, this issue has been resolved by their team. Apparently there was a problem with their DYN configuration ", "username": "Morten_Toxvaerd" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
'Host not found' during sync
2020-09-30T13:01:57.644Z
&lsquo;Host not found&rsquo; during sync
2,744
null
[ "production", "kafka-connector" ]
[ { "code": "", "text": "Today we released an updated version of our Kafka connector V1.3!This is a significant release as it enables MongoDB to integrate better within the Kafka ecosystem. For more detailed information read our blog post announcement.Highlights include:Source Schema - Define schema on MongoDB source data, perform Single Message Transforms, and leverage more of the Kafka ecosystem such as the Confluent Schema RegistryConnector Reliability - Support for error tolerance and handling of poison pill messagesHandle Big Data - Granular copying of existing data and support for restarting the connector without resuming from the change stream improves usability of the connector with very large data sizesAvailable for download on the Confluent Hub → MongoDB Connector (Source and Sink) | Confluent Hub", "username": "Robert_Walters" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Connector for Apache Kafka V1.3 released!
2020-10-01T16:55:37.488Z
MongoDB Connector for Apache Kafka V1.3 released!
1,920
null
[ "aggregation", "queries" ]
[ { "code": "months: { January: [] , February: [] } , in case of a year period.\ndays: { 1: [] , 2: [] } , in case of a year and month period\nyears: { 2020: [] , 2021: [] } , in case nothing was informed\n", "text": "I am using Express and mongoose as back-end, my application works with travels and in one of front-end page’s, specifically the ‘/reports’ page, requires back-end to send and group travels by day, in case user specify year and month and by month in case of user just insert an year, and by year in case user doesn’t specify anything. How can I easily return:", "username": "Marcello_Manuel_Borg" }, { "code": "[\n {\n $match: {\n // your matching statements\n }\n },\n {\n $bucket: {\n boundaries: boundary_var,\n groupBy: grouping_var\n }\n }\n]\nboundary_vargrouping_vargrouping_var = '$monthKey'\nbuckets = [1,2,3,4,5,6,7,8,9,10,11,12]\ndate", "text": "Hi @Marcello_Manuel_Borg,Find the following agrgegation pipeline, suit it to your own needs -You can define the variables - boundary_var and grouping_var based on your conditions coming in request. Ex -\nIn case of year period -And similar for your other use cases. You might want to add some project keys for month names, converting your date key in the document to get Month value, etc.\nCheckout - $bucket, $project, $month, $year", "username": "shrey_batra" }, { "code": "", "text": "Ok, shrey. I will try it. If it solves my problem, i will mark you comment as solution. thanks, man!", "username": "Marcello_Manuel_Borg" } ]
Group items by date, in order to easily provide a report
2020-10-01T01:41:14.061Z
Group items by date, in order to easily provide a report
2,126
null
[ "replication" ]
[ { "code": "I INITSYNC [replication-2] CollectionCloner ns:attachments.attachments.chunks finished cloning with status: Location16465: Error querying collection 'attachments.attachments.chunks :: caused by :: recv failed while exhausting cursor\nE INITSYNC [replication-2] collection clone for 'attachments.attachments.chunks' failed due to Location16465: Error cloning collection 'attachments.attachments.chunks' :: caused by :: Error querying collection 'attachments.attachments.chunks :: caused by :: recv failed while exhausting cursor\nW INITSYNC [replication-2] database 'attachments' (2 of 6) clone failed due to InitialSyncFailure: Location16465: Error cloning collection 'attachments.attachments.chunks' :: caused by :: Error querying collection 'attachments.attachments.chunks :: caused by :: recv failed while exhausting cursor\nI INITSYNC [replication-2] Finished cloning data: InitialSyncFailure: Location16465: Error cloning collection 'attachments.attachments.chunks' :: caused by :: Error querying collection 'attachments.attachments.chunks :: caused by :: recv failed while exhausting cursor. Beginning oplog replay.\n", "text": "Been trying to add a new member to a working replica set.Get the following messages:All the replica set members are Percona MongoDB v4.2.8-8 and the new one is Percona MongoDB v4.2.9-9\nI’ve tried sync from different members with the same result.Can anybody advise what might be the problem?", "username": "Lauri_Anteploon" }, { "code": "", "text": "Hi @Lauri_Anteploon welcome to the community!Unfortunately since you’re using Percona’s fork of MongoDB, we can’t really help with the error since we have no idea what Percona did with the fork. That is, chances are it doesn’t work the same way with official MongoDB releases. I would suggest contacting Percona support for this.Best regards,\nKevin", "username": "kevinadi" } ]
Initial sync fails: recv failed while exhausting cursor
2020-10-01T09:55:52.770Z
Initial sync fails: recv failed while exhausting cursor
2,253
null
[ "connecting", "atlas" ]
[ { "code": "", "text": "Hi All,I am new to Mongo, and I think the product is great. I have a connectivity issue with Atlas, and I am seeking advice. I believe a have a security/IAM issue.I cannot establish a connection with the cluster using commercial software. I am using ScrapeStorm, which scrapes the screen, and inserts collections into a MongoDB database.I have the ScrapeStorm working on my local docker replica, it injects collections into the database, and I have a python changestream handler waiting for change events. It all works well. When I try to access the Atlas cluster, I cannot find a hostname that the software recognizes. I get Failed to connect to server [ec2-3.2020yadayada.compute-1.amazonaws.com.27017] error message. I added my ip-address to the Whitelist.What works:\n1-Mongo command line and Compass works, I can attach to my database.\n2-The entire exporting of the data works in my docker replica set, and change steam events are working.\n3-I can ping the primary in the Atlas cluster, I get alive response.Is there any security/IAM issues that needs to be resolved?My apologies, if this topic is already covered. If so, please provide a link to the relevant information.Thanks\nHoward", "username": "Windrider" }, { "code": "", "text": "Hi Howard,Are you connecting with the authentication and TLS context to the Atlas cluster? When you say that you can connect from the command line successfully: that’s connecting to your Atlas cluster from the exact same server that is failing to connect?-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "Hi, Howard.I have the same issue in the use of the export to the Atlas cluster in the Scaprestorm tool.\nWhen I type the host URL exactly at the top , it won’t appear an error message. but it appears the error message as I complete the input in DB name.\nIf I mistype the host URL, it appears an error message as soon as I focus out from the host input.\nI think it is related to the Atlas authorization method to use the user name and password.\nif you solve this problem. please let me know in detail.Thanks.\nSmith G.", "username": "golden_smith" } ]
DNS issues - ScrapeStorm not detecting Atlas cluster, but works on local Docker replica
2020-06-22T13:56:10.649Z
DNS issues - ScrapeStorm not detecting Atlas cluster, but works on local Docker replica
2,298
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "How can I delete all the data that have been synced from my MongoDB Realm app to the atlas cluster?", "username": "Michael_Macutkiewicz" }, { "code": "", "text": "@Michael_Macutkiewicz You can use the mongodb shell or driver to delete documents -MongoDB Manual: How to delete documents in MongoDB. How to remove documents in MongoDB. How to specify conditions for removing or deleting documents in MongoDB.You also have the ability to delete the collections via the data explorer here -You will need to terminate and re-enable sync if you are dropping collections though", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I remove all synced data from the database?
2020-09-30T20:17:11.423Z
How can I remove all synced data from the database?
2,486
null
[]
[ { "code": "", "text": "I created a Realm DB to use GraphQL with MongoDB and now I can’t find it.", "username": "awe_ful" }, { "code": "", "text": "Well that sounds frustrating!Can you give us some clues about what steps you took to create the database? Did you create a cluster on Atlas (cloud.mongodb.com) and then create a Realm app that is associated with that cluster? How did you insert the data into the database? Have you checked the Collections tab in Atlas (not the Realm app)?", "username": "Lauren_Schaefer" }, { "code": "", "text": "The data is available on the collections tab. I’d like to do some more work on what was created on the Realm tab. I was able to create custom functions and more on the Realm tab. I’d like to do more work there, but each time I click on the Realm tab it prompts me to create a new app.", "username": "awe_ful" }, { "code": "", "text": "Are you saying that when you click on the Realm tab, there are no applications listed in the area that says ‘Applications’?", "username": "Jay" }, { "code": "", "text": "@awe_ful You should be able to create a Realm Cloud app following these instructions -\nhttps://docs.mongodb.com/realm/procedures/create-realm-app/If you are saying you did that and you are now missing the app please open a support ticket - you can do this by clicking the chat icon in the bottom right of the Cloud Web UI", "username": "Ian_Ward" } ]
Realm DB Disappeared
2020-09-30T02:43:47.794Z
Realm DB Disappeared
2,638
null
[]
[ { "code": "likesposts{\n performer: {\n username: {\n type: String\n }\n },\n post_id: {\n type: Schema.Types.ObjectId,\n required: true,\n ref: 'posts'\n },\n title: {\n type: String\n }\n }\n{\n user: {\n username: {\n type: String\n }\n },\n post_id: {\n type: String,\n required: true\n },\n title: {\n type: String,\n }\n}\n'performer.username': \"X\"{\n user: {\n username: {\n type: String\n }\n },\n post_id: {\n type: String,\n required: true\n },\n title: {\n type: String,\n },\nliked: Boolean\n}\n", "text": "Hi Everyone!\nI’ve two collections likes and posts. For every like on a post, I’m creating a like document.\nNow, I’ve to make data such that it can contain if a provided user liked which posts.\nFor example, If there are 10 posts and a user liked 5 of them. I’ve to get all 10 posts but with a field or anything which can infer that a specific post was liked by the user.Like SchemaPost Schema:I need a JSON if 'performer.username': \"X\" requests for data like:Please guide me on what should I look for in this case scenario.", "username": "Himanshu_Singh" }, { "code": "[\n {\n \"post_id\": 1,\n \"likes\": [\n {\n // like document\n }\n ]\n }\n]\n", "text": "Hi @Himanshu_Singh,See a simple workaround is to fetch all the posts from your collection, then have a $lookup stage in your aggregation to like collection. This will join all the likes per post, giving you an array. While doing this, in $lookup you can have the pipeline attribute to limit the documents you fetch from Like collection (fetch only Like documents that is of User X). This will give you something like this -Now you can check that is the likes array is empty, user did not like the post, else he liked the post. Can do this using a $project stage.My worries - Why are we fetching all posts and marking posts that are liked by the users? This could have serious performance issues. Ex - Number of Posts are 1M, so for each user, all 1M posts are fetched and then $lookup to Likes, which is very costly. Obviously, you can have indexing and pagination and all, but do evaluate the performance angle of your use case.Happy to see what others come up with.!", "username": "shrey_batra" }, { "code": "", "text": "Actually, I’m making a social network clone as my project.\nWhen a user likes a post I’ll have to show it liked in UI and for that I’ll be needing something by which I can manage the like state when user browses feed.Is there any alternative way to do this?\nPlease let me know I’ll be really thankful for that.", "username": "Himanshu_Singh" }, { "code": "", "text": "You can follow the above approach of aggregation, but then make sure to have pagination in Posts collection (taking only 20/40 documents at once), and to apply pipeline parameter in $lookup. This is for production level performance, if you are making a small project these won’t make more sense.", "username": "shrey_batra" }, { "code": "", "text": "Thank you so much @shrey_batra, May I get any resource related to this?\nI’m using mongoose on nodejs.\nSomeone said to me that I can store an array of user_id of users who liked that specific post and that check on the frontend via .includes and map according to that.\nWhat do you say about this approach?\nI think it’ll lead to unnecessary data transmission from API.", "username": "Himanshu_Singh" }, { "code": "", "text": "It may… Also it might lead to data scaling problem when your solution might reach 10-50k users. 
Imagine that every post is liked by every user: your array could grow without bound, against a hard limit of 16 MB per document. I'm not sure about the approach in NodeJS, but you can see the generic aggregation stage - $lookup. I am sure this stage can be applied in Mongoose when running an aggregation.", "username": "shrey_batra" }, { "code": "", "text": "Exactly what I thought! It'll be too heavy, and retaining that on the client side will end up in performance issues. Let me look around for $lookup.Thank you", "username": "Himanshu_Singh" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
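A sketch of that $lookup approach, assuming the schemas shown at the top of the thread and a requesting user “X”; the pipeline form keeps the join restricted to that one user's likes:

```js
db.posts.aggregate([
  { $lookup: {
      from: "likes",
      let: { postId: "$_id" },
      pipeline: [
        { $match: {
            $expr: { $eq: ["$post_id", "$$postId"] },
            "performer.username": "X"
        } }
      ],
      as: "myLikes"
  } },
  // Post is "liked" if this user has at least one like document for it.
  { $addFields: { liked: { $gt: [{ $size: "$myLikes" }, 0] } } },
  { $project: { myLikes: 0 } }
])
```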
Get documents from multiple collections and append them as per condition
2020-09-29T08:24:30.209Z
Get documents from multiple collections and append them as per condition
5,396
null
[]
[ { "code": "", "text": "Hi students MongoDB for Academia is looking for student projects, and we would love to hear from you!Have you built an exciting project or application using MongoDB, and would you like to have it featured on MongoDB Developer Hub? This is your chance to show your work We’re looking for MongoDB students projects to highlight on our to-be-launched Student Spotlights page on DevHub: this is a great opportunity for you to inspire others Do you have a project that you want to share with us? Let us know by filling out the form!\n MongoDB Student Spotlights", "username": "Lieke_Boon" }, { "code": "", "text": "Great step by MongoDB, Really appreciated. Students, now it’s the time to show your skills, Fill the form. here is the link: MongoDB Student Spotlights", "username": "Nabeel_Raza" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Calling all Student Developers: have your work featured on DevHub!
2020-10-01T10:07:01.929Z
Calling all Student Developers: have your work featured on DevHub!
4,744
null
[]
[ { "code": "", "text": "Unable to specify the cluster name as sandbox in the Atlas while creating a cluster. It is showing error like the project already has a free cluster.", "username": "Aanandita_Madaan" }, { "code": "", "text": "You are only allowed to 1 free tier per project. You may create as many project as you wish.", "username": "steevej" }, { "code": "", "text": "Should I create a new project with another cluster name …will it accept ?", "username": "Aanandita_Madaan" }, { "code": "Project", "text": "Hi @Aanandita_Madaan,You can create a new Project with the same cluster name.", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Unable to specify cluster name in atlas
2020-09-30T18:01:56.221Z
Unable to specify cluster name in atlas
1,990
null
[ "aggregation", "dot-net" ]
[ { "code": " public class StreetSummary\n {\n\t\t[BsonId]\n\t\tpublic string Id { get; set; }\n\n\t\t[BsonElement(\"streetId\")]\n\t\tpublic string StreetId { get; set; }\n\n\t\t[BsonElement(\"summaryDateUtc)]\n\t\tpublic DateTime SummaryDate { get; set; }\n\n\t\t[BsonElement(\"field1\")]\n\t\tpublic int Field1 { get; set; \n\n\t\t[BsonElement(\"field2\")]\n\t\tpublic int Field2 { get; set; \n }\nvar result = collection\n .Aggregate()\n .Group(\n x => x.StreetId,\n g => new \n\t{\n\t StreetId = g.Key,\n\t Latest = g.OrderByDescending(m => m.SummaryDate).First()\n\t}\n )\n ToList();\n", "text": "Hi,I’m using the C# Mongo driver and a strongly typed Collection. The collection is based on:The SummaryDate is holding just the date portion when the summary was calculated. Foreach streetId I want to select the latest StreetSummary document.I think I need to to group by StreetId but then I am having difficulties selecting the latest record by date. I thought something like below would work saying OrderByDescending isnt supported in expression treeThe method OrderByDescending is not supported in the expression tree: {document}.Orvar collection = db.GetCollection(“streetSummary”);How should I be doing this?", "username": "Tej_Sidhu" }, { "code": " var docs = collection.Aggregate()\n .Group(y=>y.StreetId,\n z => new { \n StreetId = z.Key, \n LatestSummaryDate = z.Max(a => a.SummaryDate)\n }\n ).ToList();\n", "text": "Hi @Tej_Sidhu, and welcome to the forums!Foreach streetId I want to select the latest StreetSummary document.You can utilise aggregation accumulator operator $max for this. For example:Regards,\nWan.", "username": "wan" } ]
C# Aggregation Group By string and select last by date
2020-09-23T19:43:07.707Z
C# Aggregation Group By string and select last by date
16,790
null
[ "queries" ]
[ { "code": "{ partId string\n partName string\n partDescription string\n sub-part {\n partId string\n version string\n }\n}\n{ “partId”: “123\", “partName” : \"laptop\", “partDescription” : ”macBook\" , sub-part : {124, \"1.0\"} , {“125\", \"2.1\"}}\n{ “partId”, \"124\", “partName” : \"Keyboard\", “partDescription” : “keyboard for gaming”}\n{ “partId”: \"125\", “partName” : ”mouse”,”partDescription” : \"mouse for gaming\"}\n{ “partId”: “126”, “partName” : “desktop, “partDescription” : ”Dell” , sub-part : {124, “2.0”} , {“125\", “4.0”}}\n{ “partId”: “123\", “partName” : \"laptop\", “partDescription” : ”macBook\" , \n sub-part : [ \n { “partId”, \"124\", “partName” : \"Keyboard\", “partDescription” : “keyboard for gaming”, “version” :’1.0”},\n { “partId”: \"125\", “partName” : ”mouse”,”partDescription” : \"mouse for gaming”, “version” : “2.1”}\n ]\n}\n{ “partId”: “126”, “partName” : “desktop, “partDescription” : ”Dell” ,\n sub-part : [\n { “partId”, \"124\", “partName” : \"Keyboard\", “partDescription” : “keyboard for gaming”, “version” :’2.0”},\n { “partId”: \"125\", “partName” : ”mouse”, ”partDescription” : \"mouse for gaming”, “version” : “4.0”}\n ]\n}\n", "text": "collection schemaData in the collectionI want to find all the part that have sub-parts and at the same time return part name and part description for sub-part.Output", "username": "tom_reed" }, { "code": "sub-part", "text": "Hello @tom_reed, welcome to the community.You can use the $graphLookup aggregation stage to perform your query. See some of the examples in the provided documentation link.Also, you may want to include a proper sample of the actual document (it is not clear the representation of the field sub-part in the posted sample).EDIT ADD:Here is an example I found with a similar question and some answers:", "username": "Prasad_Saya" } ]
Subquery within same collection
2020-09-30T20:18:57.202Z
Subquery within same collection
3,309
null
[ "typescript" ]
[ { "code": "mongo:// const apiKey = process.env.REALM_API_KEY;\n const appId = process.env.REALM_APP_ID;\n const mongoUrl = `mongodb://_:${apiKey}@realm.mongodb.com:27020/?authMechanism=PLAIN&authSource=%24external&appName=${appId}:mongodb-atlas:api-key&ssl=true`;\n console.log(mongoUrl);\n const client = await MongoClient.connect(mongoUrl, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n });\nMongoServerSelectionError: failed to handle command request \"ismaster\": error processing client metadata: expected app name to be composed of: <appID>:<svc>:<authProvider>, got [object Object]remoteMongoClientmongodb://Linked Data Sources?ssl=trueconsole.log(mongoUrl)mongomongo \"mongodb://......\")mongodb://", "text": "I wanted to connect to a Realm-enabled database with mongo:// url in typescript (NodeJS). I wrote like:This produces an error:MongoServerSelectionError: failed to handle command request \"ismaster\": error processing client metadata: expected app name to be composed of: <appID>:<svc>:<authProvider>, got [object Object]I understand there is remoteMongoClient to do the similar thing, but I want to know whether mongodb:// url can be used as well.What I did:Why I wanted:", "username": "Toshi" }, { "code": "appName", "text": "I suspect this is a “Realm ↔ NodeJS Driver” combination problem. (So I edited the title)Does NodeJS Driver properly understand the colon-separators in appName of the mongo URI?", "username": "Toshi" }, { "code": "Driver", "text": "I reposted this in the Driver category as it seems more relevant.", "username": "Toshi" } ]
Connection String for Realm might not be working with NodeJS Driver
2020-09-18T09:41:25.733Z
Connection String for Realm might not be working with NodeJS Driver
3,666
null
[ "dot-net" ]
[ { "code": "DnsClient.DnsResponseException: Query 4162 => api-dev.xdqkf.mongodb.net IN TXT on 10.118.2.239:53 failed with an error. ---> System.IO.FileNotFoundException: Could not load file or assembly 'System.Buffers, Version=4.0.2.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' or one of its dependencies. The system cannot find the file specified.\n at DnsClient.Internal.PooledBytes..ctor(Int32 length)\n at DnsClient.DnsDatagramWriter..ctor()\n at DnsClient.DnsUdpMessageHandler.Query(IPEndPoint server, DnsRequestMessage request, TimeSpan timeout)\n at DnsClient.LookupClient.ResolveQuery(IReadOnlyList`1 servers, DnsQuerySettings settings, DnsMessageHandler handler, DnsRequestMessage request, LookupClientAudit audit)\n --- End of inner exception stack trace ---\n at DnsClient.LookupClient.ResolveQuery(IReadOnlyList`1 servers, DnsQuerySettings settings, DnsMessageHandler handler, DnsRequestMessage request, LookupClientAudit audit)\n at DnsClient.LookupClient.QueryInternal(DnsQuestion question, DnsQuerySettings queryOptions, IReadOnlyCollection`1 servers)\n at MongoDB.Driver.Core.Configuration.ConnectionString.Resolve(Boolean resolveHosts)\n at MongoDB.Driver.MongoUrl.Resolve(Boolean resolveHosts)\n at MongoDB.Driver.MongoClientSettings.FromUrl(MongoUrl url)\n", "text": "I am developing an ADFS plugin that connects to Mongo via the MongoDB Driver. This requires the libraries to be installed in the GAC, and and ran into two issues:First one, the official driver available in nuget is not signed. This can easily be resolved by searching for and running an unofficial unsigned build from nuget, which we have.Second one - this where I’m stuck:\nI am running this on Windows Server 2016 with .net 4.8 installed. When the plugin code calls the MongoDB Driver to make a connection - the following error is thrown:Code Snippet:_mongoClientSettings = MongoClientSettings.FromConnectionString($“mongodb+srv://{_DBEndPoint}/{_DBNameMFA}?authSource=$external&authMechanism=MONGODB-X509&retryWrites=true&w=majority”);Exception:The MongoDB Driver is looking for System.Buffers 4.0.2.0, but the version installed is 4.0.3.0, and as it’s in the same .net major version (4.x), there doesn’t seem to be a way to have both versions side by side…Today, I’m going to try to rebuild the driver from github - which might allow us to regartet the .net version its built on and also allow us to sign it ourselves.Thanks,\nAbraham", "username": "Abraham_Dybvig" }, { "code": "", "text": "Dunno what the standard is for replying to one’s own messages - someone set me strait if needed.Here’s the solution:First - Additional context:\nThis is a GAC installed library to be used as a TOTP plugin for ADFS\nAs such all dependent libraries also need to be present in the GAC as well.Windows Server 2016\n.NET 4.8 installed\nUnofficial (Signed) MongoDB Driver v2.12.2In my Project References / Manage NuGet Packages, I downgraded System.Buffers to 4.4.0 (from 4.5.1). This allowed me to copy it to the GAC during installation along with any other dependencies.", "username": "Abraham_Dybvig" }, { "code": "", "text": "Dunno what the standard is for replying to one’s own messagesHi @Abraham_Dybvig,Welcome to the MongoDB community! Replying to your own topics is fine … and definitely appreciated if you can share any solutions or workarounds to help others who might encounter a similar issue.You can also mark your own comment (or any that is most helpful in a topic that you have started) as a Solution.Regards,\nStennie", "username": "Stennie_X" } ]
MongoDB Driver incompatible with .net 4.8?
2020-09-30T20:18:03.392Z
MongoDB Driver incompatible with .net 4.8?
4,458
null
[]
[ { "code": "", "text": "We are using Realm created default graphql apis , Firebase Auth + Custom Jwt (Mongo Realm).\nWe want to restrict the api call or documents reads for a non-premium user with in a certain limit ( e.g: Non Premium user can only use the api 5 times)\nHow can I achieve this scenario with existing functionality ?\nIs there any way without using custom-resolvers ?", "username": "serverDown_Saturday" }, { "code": "premium_limit", "text": "You should be able to achieve something like this by using custom user data or creating a collection with a mapping of each user to a field that could be called something like premium_limit which defaults to 0 when a user is created. Then you can write a custom resolver that grabs the context.currentUser() and updates this field, while also continuing to do the query if the # is under the limit.Hopefully that answers your question", "username": "Sumedha_Mehta1" }, { "code": "", "text": "@Sumedha_Mehta1 thank you very much for you reply. I understood all the steps you mentioned and looking for the very last step of using custom resolvers. Right now we are using default mongo generated graphql apis form collection schemas. Is there a way to intercept that those graphql requests , something similar to middleware in Node Js? Just wants to avoid righting a custom resolvers atm, due to time constraints.", "username": "serverDown_Saturday" }, { "code": "", "text": "I don’t think there is a way to intercept those requests unless you implement your own serverside (which may defeat the purpose of using Realm in the first place).There are a few more ways to go about this:Permissions (I should have mentioned this one earlier): you can add permissions + filters for each user, based on their custom user data before they can read/write any data. Your permissions will depend on checking limit you specified in the custom user data (see examples here). GraphQL will automatically apply these permissions when you make any requests.\nOne thing to note is that you will have to appropriately handle the ‘Access Not Granted’ error on the client if the user no longer has access.write a Realm function that checks the limit for the user and one that updates the limit. Call the function to check the limit before deciding to do the API call. Call the function to update the limit after the API call is finished", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Great, Understand your points.\nJust want to add few points on top of it,", "username": "serverDown_Saturday" }, { "code": "", "text": "Thanks for the feedback and glad you liked it.I will say that the recommended approach here would be permissions + functions. However, you can add the feedback items you listed here so the product team can get a good idea of how many users are requesting certain features and prioritize appropriately. Thanks!", "username": "Sumedha_Mehta1" } ]
Restrict Read Count for a user , Realm, GraphQl
2020-09-30T13:01:52.197Z
Restrict Read Count for a user , Realm, GraphQl
2,571
null
[ "queries" ]
[ { "code": "", "text": "I have Q1,Q2,Q3,Q4,…Qn queries.Need to execute all the find queries in a single operation and need resultset separately for each queriesEx: Just an assumptiondb.collection.bulkFetch({[Q1,Q2,…,Qn]})\nand output\n{\nQ1:,\nQ2: ,\n…,\nQn: }", "username": "Sowmya_LR" }, { "code": "", "text": "Hi, I am not sure if this can be done directly, but there maybe a workaround. Not sure why you want it, as it will have a bad influence on the performance. You can try to write an aggregation query with OR of all queries, like (Q1 OR Q2 OR Q3) in a match stage, then apply $facet to break it into individual result sets. and get data.Again, this is a very bad way to accomplish something you are asking, as each query needs to be performed separately, having its own result set. Not seeing how bulk queries can get something meaningful out.", "username": "shrey_batra" }, { "code": "", "text": "Hello : )As been said this is what $facet does,but $facet will not execute each query in parallel,but serial.Also if you want all results to fit in 1 document,this document final size must be<16 MB.$facet would make sense i think if you had like a common part that could be shared from all queries,before entering the $facet stage,to avoid doing that part many times for each “query”.If the queries don’t share a common part,i think $facet will cause only restrictions and slower\nexecution time,from calling the n queries separately,so they can run in parallel,if you call them\nin parallel (async or threads)See this also", "username": "Takis" } ]
How to execute bulk fetch query in mongodb
2020-09-29T07:45:10.229Z
How to execute bulk fetch query in mongodb
3,342
null
[ "monitoring" ]
[ { "code": "", "text": "I want to know the count for the last 2 days happened for a collection based on its CURD operations (UPDATE, DELETE and INSERT) in mongodb.End result is i need to know how many documents got updated , inserted and deleted and its counts for the last 2 days.In case I dont have creationDate or updationDate key in my collection then how to figure it out?", "username": "Mamatha_M" }, { "code": "use local\ndb.oplog.rs.aggregate([{$match: {\n ts: { $gt: Timestamp( 1601215288,0)},\n ns: \"databaseOfInterest.collectionOfInterest\"\n}}, {$group: {\n _id: \"$op\",\n count: {\"$sum\": 1}\n}}])\n", "text": "Aside from using monitoring to capture and review this data I can only think of the oplog as the source of information. That requires a replicaSet and an oplog adequately sized to cover your two day period.If you don’t have a replicaset then you may fall back on using an ObjectID if that is the type for _id. That will only give you documents that have been created though.", "username": "chris" }, { "code": "", "text": "Hi Team,I will check the script if it works, but before that how can I get the timestamp which is mentioned in the script for the last 2 days?Timestamp( 1601215288,0)Regards\nMamatha", "username": "Mamatha_M" }, { "code": "date -d '2 days ago' +%s", "text": "Hi @Mamatha_M,The first number is unix epoch, seconds since Jan 1 1970. The second is the order.\nIf multiple operations occur in the same second in the oplog they are ordered by the second number.I generated the first number with the date command on ubuntu. date -d '2 days ago' +%s", "username": "chris" }, { "code": "", "text": "Hello Chris,I tried the command in linux box and i got the below output.[mongodb@mongoxyz11 ~] date -d '2 days ago' +%s\n1601288759\n[mongodb@mongoxyz11 ~] date -d ‘1 days ago’ +%s\n1601375172\n[mongodb@mongoxyz11 ~]$ date -d ‘10 days ago’ +%s\n1600597583How do we know what should be the order for the second number? On what basis we can know on the second number?Secondly I tried the script provided from your end and for one collection which as approx 12 million records, took almost one hour 45 min to provide the data.Is there any better method to get the output, as fetching details from oplog might really take lot of time and might throw a performance issue.Regards\nMamatha", "username": "Mamatha_M" }, { "code": "", "text": "Hi @Mamatha_MHow do we know what should be the order for the second number? On what basis we can know on the second number?For your purposes, 0. The 0th operation that occurred at that epoch second.Secondly I tried the script provided from your end and for one collection which as approx 12 million records, took almost one hour 45 min to provide the data.The oplog is a special collection. Not intended for this use, it will not be performant. If you intend to run a few queries like this consider making a copy and create some indexes.Is there any better method to get the output, as fetching details from oplog might really take lot of time and might throw a performance issue.Add some instrumentation on your deployment if you want metrics like this. I use DataDog, it is trivial to get these data.", "username": "chris" }, { "code": "", "text": "An example for an ad-hoc query/notebook on our data.image1061×561 48.9 KB", "username": "chris" } ]
How to get the count for all the CRUD operations on a collection for the last 2 days
2020-09-29T11:37:00.443Z
How to get the count for all the CRUD operations on a collection for the last 2 days
4,232
null
[ "graphql" ]
[ { "code": "{\n _id: \"project-id\",\n title: \"Project Title\",\n credits: [\n {\n personId: \"person-id\"\n roles: [\"writer\"]\n }\n ]\n}\n{\n _id: \"person-id\"'\n name: \"John Doe\"\n}\n{\n \"credits\": {\n \"foreign_key\": \"_id\",\n \"ref\": \"#/relationship/mongodb-atlas/burns_media/people\",\n \"is_list\": true\n }\n}\n", "text": "I have the following data in a collection called “projects”:I have the following collection called “people”:Is there a way to create a “Relationship” in Realm for GraphQL that is nested like this? Normally I would write some kind of foreign key relationship like this in the Realm UI under Realm > Schema > Relationships:However, since “personId” is contained in an object inside an array, I’m not sure how to define the nested relationship. Is this possible?", "username": "Brian_Burns" }, { "code": "personIdprojectspeoplepersonIdprojects", "text": "Hey Brian -At the moment we don’t support relationship linking between fields in nested objects in arrays but you can request the feature here so we can track requests for future prioritization Realm: Top (70 ideas) – MongoDB Feedback EngineAs a workaround, you can achieve the same thing by nesting the personId in the projects collection (as you’re already doing) and writing a custom resolver that will query the people collection using the personIds that is returned from the projects collection.Let me know if you have further questions", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Perfect answer! Thank you so much @Sumedha_Mehta1! ", "username": "Brian_Burns" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm | GraphQL Schema | How to format/create/define relationships inside nested objects and arrays?
2020-09-29T22:26:48.522Z
Realm | GraphQL Schema | How to format/create/define relationships inside nested objects and arrays?
3,600
null
[ "graphql" ]
[ { "code": "", "text": "I’m attempting to add GraphQL to an existing cluster. I’m not sure if cluster is the correct term. I have an ExpressJS API that connects to a DB I have in Mongo Atlas. I followed the youtube video “GraphQL: The Easy Way to do the Hard Stuff”. It went well. I attempted to create a schema for one of my collections with the assumption that it would be easy to add more collections to the schema later. I was wrong. I cannot find a way to alter the schema to add the other collections in the DB to the schema.Is there a quick way to do this? Should I delete the Realm instance and start over?", "username": "awe_ful" }, { "code": "", "text": "Hi there -You don’t need to delete your Realm instance. To add to your GraphQL Schema, you simply need to create/generate a schema for another collection. To do that you can do the following:image1290×1012 111 KBConfigure Rules (a quick way is to choose a template).\nimage1852×998 78.2 KBClick on ‘Schema’ and either define the json schema for the documents in your collection or generate it (if you have data in your schema already).Once you save and deploy the GraphQL schema should update which you can verify by going to ‘GraphQL’ --> ‘Schema’Hope that answers your question, if not, feel free to reply to this thread with follow-up questions.", "username": "Sumedha_Mehta1" } ]
Alter graphql realm schema
2020-09-29T21:24:29.930Z
Alter graphql realm schema
2,402
null
[ "python" ]
[ { "code": "", "text": "When I query documents in PyMongo, I will have some documents that have user specified data types (although optional) for some attributes e.g.\n‘Age’: Decimal128(‘40.8’) --> Decimal128\n‘JoinDate’: datetime.datetime(2020, 9, 11, 4, 0)In the above example, I want to remove the ‘data type’ tags from the document e.g. ‘Decimal128’,‘datetime.datetime()’. For DateTime, perhaps I can use a default return a string but for other data types, how can I omit the data types? Is it possible via setting a property in PyMongo or do I need to use a regex and update the values?Thanks much !.", "username": "Satya_Tanduri" }, { "code": "", "text": "Hi @Satya_Tanduri,I think you’d have to cast the values manually in your code. Otherwise you may be able to use a custom codec to automate this somewhat. This is because MongoDB stores data in BSON datatype format, and pymongo will ensure that you don’t lose any type information. Otherwise, if the data is stored in Decimal128, then you do manipulation on the data and save the result back to the database, you could inadvertently change the datatype of the field, resulting in data inconsistencies. The same issue would be present with the datetime type.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi , it seems codec_options are meant to transform the data types e.g. convert decimal to string etc. I understand that the documents have embedded data type prefixes before the values. Was looking at ways to remove the data type prefixes only (not transform the data type). Because even if transform a Decimal to str, the data type prefix will not be removed from the document. Thanks for your guidance.", "username": "Satya_Tanduri" } ]
Is it possible to not include Data Type names in document retrieved in PyMongo?
2020-09-29T17:15:52.661Z
Is it possible to not include Data Type names in document retrieved in PyMongo?
2,044
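The manual casting Kevin describes in that thread can be done in one recursive pass over each document. A sketch in Python, assuming Decimal128 values should become strings and datetimes ISO strings; both target representations (and the collection names) are illustrative choices, not something the thread prescribes:

import datetime

from bson.decimal128 import Decimal128
from pymongo import MongoClient

collection = MongoClient()["mydb"]["mycoll"]  # placeholder names

def to_plain(value):
    """Recursively collapse BSON wrapper types into plain Python values."""
    if isinstance(value, Decimal128):
        return str(value.to_decimal())
    if isinstance(value, datetime.datetime):
        return value.isoformat()
    if isinstance(value, dict):
        return {k: to_plain(v) for k, v in value.items()}
    if isinstance(value, list):
        return [to_plain(v) for v in value]
    return value

plain_docs = [to_plain(doc) for doc in collection.find()]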
https://www.mongodb.com/…2975a8da76bd.png
[ "graphql" ]
[ { "code": "", "text": "I’m trying to use GraphQL Playground with the endpoint that Realm supplies. It gives me a 401 error code. I attached screenshots.Is that endpoint designed to work with GraphQL Playground?\n\nScreen Shot 2020-09-29 at 15.39.08960×296 26.5 KB\n \nScreen Shot 2020-09-29 at 15.40.431210×428 26.8 KB\n", "username": "awe_ful" }, { "code": "{\n \"Authorization\": \"Bearer <access_token>\"\n}\n", "text": "Is that endpoint designed to work with GraphQL Playground?Yes, you should be able to use it with playground, but you need to remember to add your authorization header to GraphQL Playground like so:To get the access token, you can use any one of the cURL commands listed here", "username": "Sumedha_Mehta1" } ]
Realm endpoint not working in GraphQL Playground
2020-09-29T23:32:24.611Z
Realm endpoint not working in GraphQL Playground
2,585
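For readers who want to script the same token flow outside Playground, the two steps are: hit an auth provider endpoint to obtain an access token, then send it as a Bearer header with the GraphQL request. A hedged sketch in Python with the requests library; the endpoint paths follow the Realm client HTTP API as documented at the time and should be verified against current docs, and the app ID, credentials, and query are placeholders:

import requests

APP_ID = "<your-realm-app-id>"  # placeholder
BASE = f"https://realm.mongodb.com/api/client/v2.0/app/{APP_ID}"

# Step 1: exchange email/password credentials for an access token.
auth = requests.post(
    f"{BASE}/auth/providers/local-userpass/login",
    json={"username": "user@example.com", "password": "secret"},
).json()

# Step 2: call the GraphQL endpoint with the Bearer header Playground needs.
resp = requests.post(
    f"{BASE}/graphql",
    json={"query": "{ movies { title } }"},  # placeholder query
    headers={"Authorization": f"Bearer {auth['access_token']}"},
)
print(resp.json())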
null
[ "mongoose-odm" ]
[ { "code": "nModified", "text": "Hi,I’ve gotten into a discussion about what to expect from MongoDB when doing updates. I’m working with Mongoose and the phenomena happens for both update and updateMany.It seems, if the update object just happens to be the same as the document to be updated, then MongoDB doesn’t count it as modified (nModified).My question is, does MongoDB still update or overwrite the document, despite the values being the same and then after the update, considers the update a “non-modification”, because the value in the document is still the same?Or, does MongoDB not carry out the modification, if it notices the values in the document and the update are the same?Scott", "username": "scott_molinari" }, { "code": "updateManyZEROupdateupdateupdateMany", "text": "Yes @scott_molinari, In mongoDB if we update the records using updateMany clause it will not update the record as the new record is same so the count of modified if ZERO. but when we use only update clause it will update the record. Here are the screenshots of update and updateMany clauses:Update Many Clause Screen ShotUpdate Clause Screen Shot\nI hope this will help you in understanding the update concept.", "username": "Nabeel_Raza" }, { "code": "", "text": "Hey. Thanks for answering @Nabeel_Raza. But, I’m still not really clear on what MongoDB is “thinking” internally.For updateMany and/ or update, if the data to update is the same as the data in the document, does MongoDB still carry out the update (overwriting the same data), and then notices the data is the same and returns “nModified: 0”, or does MongoDB see the data is the same and does not modify the document at all?Scott", "username": "scott_molinari" }, { "code": "", "text": "To expand on the issue. Mongoose has a pre hook available to add timestamps. And, it correctly timestamps basically any update with an UpdatedAt timestamp. However, if MongoDB doesn’t overwrite the document as mentioned above, then the timestamping would be incorrect. If, however, MongoDB does overwrite the document, but just doesn’t count it as modified due to the data being the same, then the timestamping would be correct, just not totally logical.I was also trying to explain that for the update or updateMany to be truthful/ consistent, a proper filter would need to be used. My debators are saying “no”, that can’t be right and I should prove it, so they can believe me and thus why I’m here asking.Maybe @Asya_Kamsky can chime in? Hi Asya. Very long time no see. Scott", "username": "scott_molinari" }, { "code": "", "text": "for updateMany it check the data is the data is same or not? if same then don’t replace if not same then update the data. 
whereas with update only a single record is updated.", "username": "Nabeel_Raza" }, { "code": "updatezero", "text": "Yes @scott_molinari, when we use a date timestamp field in the update, it will update the record with the new values (even though the records are the same) and will update the timestamp field./* 1 */
{
"_id" : ObjectId("5f72bec821f28d445c34c38c"),
"name" : "Jeff",
"roll#" : "R101",
"date" : ISODate("2020-09-29T04:57:44.398Z")
}/* 2 */
{
"_id" : ObjectId("5f72becf21f28d445c34c38d"),
"name" : "Ben",
"roll#" : "R102",
"date" : ISODate("2020-09-29T04:59:45.213Z")
}/* 3 */
{
"_id" : ObjectId("5f72bf3121f28d445c34c38e"),
"name" : "Ben",
"roll#" : "R102",
"date" : ISODate("2020-09-29T04:59:45.213Z")
}/* 4 */
{
"_id" : ObjectId("5f72bf3221f28d445c34c38f"),
"name" : "Ben",
"roll#" : "R102",
"date" : ISODate("2020-09-29T04:59:45.213Z")
}/* 5 */
{
"_id" : ObjectId("5f72bf3221f28d445c34c390"),
"name" : "Ben",
"roll#" : "R102",
"date" : ISODate("2020-09-29T04:59:45.213Z")
}this is my sample data in a collection. Let's use the updateMany clause to update the documents:
here ^ is the result: when we update the documents using the timestamp field, the modified count changes, because the timestamp is always changing.And if you comment out the date field, then the count will be zero, as the rest of the data is the same. Whenever the timestamp field is in the document, the modified count will change.
In the backend the data is not updated, as the new data is the same as the previous data, so there is no need to change it.
I hope you will get what you need.", "username": "Nabeel_Raza" }, { "code": "", "text": "@Nabeel_Raza - Are you a MongoDB developer?Scott", "username": "scott_molinari" }, { "code": "", "text": "Yes Regards: Nabeel", "username": "Nabeel_Raza" }, { "code": "", "text": "Ok. Fantastic. I'm now understanding the whole situation.So, my last question to you @Nabeel_Raza (and thanks so much for your help):The way to make sure the pre-hook updatedAt functionality works properly with updateMany is to have the query properly filter for the data that has to be updated. Trying to rely on the internal "if the data is the same, don't update it" logic isn't going to be enough. You need to filter so the updates work on the data as you intend, to get the right updatedAt timestamping. Would you also agree with that?Or put another way: there is no way for Mongoose (or any other client) to know which documents were modified and which weren't during an updateMany operation when the update data is the same as in some documents, with some of them thus not getting modified. Only a count of modified docs is returned, which is useless for determining where to "hook in" an updatedAt timestamp on only those records that were modified.Sorry. And just to double check: as a MongoDB developer, you work within the source code of MongoDB and know its inner workings? Sorry for acting so mistrustful. It's not my intention. Scott", "username": "scott_molinari" }, { "code": "roll#date", "text": "@scott_molinari, I think you misunderstand me. I am not part of the MongoDB developer team; I am just using MongoDB for a project and have a little experience with it.
If you are updating the data and it's the same data as before, then it doesn't matter; but if you are using the timestamp field in the update, then it will update the timestamp field and the rest will stay the same, as the timestamp always changes.
Whenever you use the timestamp field, it will update the records - see the example.
The roll# was the same but the date wasn't, so it updated the records with a modified count of 4.", "username": "Nabeel_Raza" }, { "code": "", "text": "Ah. Ok.Yeah, so then we are back to the core question. Since the timestamp pre hook on updateMany isn't reacting to properly "ignored" updates, it is timestamping incorrectly. Thus, is it true when I say that the only way to make sure an updateMany works with timestamping is to properly filter on the data that should be updated? If you look at the example I gave above, the user is not filtering for anything, yet expects only two of the docs to get timestamped, because MongoDB is not updating the last doc and correctly sends back nModified as 2.No filter, no proper timestamps with the pre hook - and don't expect Mongoose to do it right, because it will never know that an update of a doc didn't occur. That is the simple answer, or rather the frame of mind one should have to understand how to make the timestamping work right.Or put another way, you have to make sure you don't "hit" docs that MongoDB would not update, because the data is already "correct" and no modification is necessary.Scott", "username": "scott_molinari" }, { "code": "modification flag0", "text": "Yes @scott_molinari, this is the same as we discussed earlier. If we update the document and the new document is the same as the previous one, then the modification flag will be 0. But if we use some timestamp field in it, then only that field will be updated; the rest will be the same.\nThis doesn't mean that we have incorrect data: the new record was the same as the previous one, so it doesn't matter - basically the new record is the same as the previous. So saying that the data is not the updated version would be wrong; it is the updated document.\n@Nabeel_Raza", "username": "Nabeel_Raza" }, { "code": "", "text": "Got it. Thing is, the person I was debating this with is saying that the timestamping shouldn't happen with his non-filtered example. His logic is flawed, because MongoDB doesn't offer the right time to "hook" into the system for the timestamping. Theoretically, it should be within the database, not the client, that timestamping occurs, and AFAIK Mongo offers no such facility.So, filter on what should be updated or expect funky results with the updatedAt timestamping - that is the clear answer for me.Scott", "username": "scott_molinari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does MongoDB still update or overwrite a document if the values will be unchanged?
2020-09-27T20:23:46.377Z
Does MongoDB still update or overwrite a document if the values will be unchanged?
29,037
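The matched-versus-modified distinction that thread circles around is directly visible in the write result the server returns. A small PyMongo demonstration (the collection and field names are arbitrary): a $set that writes identical values matches the document but modifies nothing, while a real change bumps modified_count.

from pymongo import MongoClient

coll = MongoClient()["test"]["people"]
coll.delete_many({})
coll.insert_one({"name": "Ben", "roll": "R102"})

# Same values as already stored: the server matches 1 document, modifies 0.
res = coll.update_many({"name": "Ben"}, {"$set": {"roll": "R102"}})
print(res.matched_count, res.modified_count)  # 1 0

# A genuinely different value: matched 1, modified 1.
res = coll.update_many({"name": "Ben"}, {"$set": {"roll": "R103"}})
print(res.matched_count, res.modified_count)  # 1 1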
null
[ "mongodb-shell", "change-streams" ]
[ { "code": "use KLP_MGMT_19\nconsole.log(\"Watching for new changes in this collection 'TestCounter' ...\");\n\ndb.TestCounter.watch([{\n $match: {\n //operationType: \"update\", //update|insert|delete|replace|invalidate\n }\n }],\n {\n fullDocument: \"updateLookup\", //default|updateLookup\n })\n .on(\"change\", (data) => {\n console.log(tojson(data))\n })\n .on(\"error\", (err)=>{\n console.error(err)\n });\n\nwhile (true) {\nsleep(1000)\n}\nswitched to db KLP_MGMT_19\n2020-09-03T06:00:52.217+0000 E QUERY [js] ReferenceError: console is not defined :\n@(shell):1:1\n2020-09-03T06:00:54.152+0000 I CONTROL [main] shutting down with code:0\n", "text": "Hi Team,We want to implement changestreams for one of the collection and we have written the below code:cat schema_creation_cs.sh\n#!/bin/bash\n/home/mongodb/MONGODB_TESTDB/mongodb/mongodb/bin/mongo --host wikloh.hp.internet --port 27000 --authenticationDatabase=admin -u mongodb -p XXXXX < /home/mongodb/change_stream/change_stream.json >> /home/mongodb/change_stream/schema_creation_cs.logcat change_stream.jsonThe code works well in nosqlbooster tool but same when i try to execute from server level getting below error:Please let me know hw to run the same via server and i also want to run this in background using nohup?Regards", "username": "Mamatha_M" }, { "code": "mongoconsole.log()cur = db.test.watch([], {fullDocument:'updateLookup'})\nwhile(!cur.isExhausted()) {\n if (cur.hasNext()) {\n printjson(cur.next())\n }\n}\n", "text": "Hi @Mamatha_MThe error is because the mongo shell exposes a different API than the node driver. For example, console.log() is not a method the shell recognizes, as signified by the error message.See db.collection.watch() for the API exposed by the shell with regard to change stream.Roughly, what you want could be translated as such in the mongo shell:Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin,Thanks for the information.I have heard MongoDB Kafka Connector also helps in changestreams to capture the CURD operation.We have community version of mongo installed in our environment, So my query is will MongoDB Kafka Connector work for community edition or will it work only for Enterprise version?Is there any other method by which we can work on changestreams?Regards\nMamatha", "username": "Mamatha_M" } ]
How to work with changestreams for a collection?
2020-09-03T10:05:51.341Z
How to work with changestreams for a collection?
2,599
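Since piping a script into the shell is brittle, the same change stream is often easier to run as a small standalone program. A sketch in Python with PyMongo, reusing the host, port, database, and collection names from the thread (credentials are placeholders). Change streams require a replica set but are available in the Community edition, and the Kafka Connector mentioned in the last post builds on the same change streams rather than being an Enterprise-only feature.

from pymongo import MongoClient

client = MongoClient(
    "mongodb://mongodb:<password>@wikloh.hp.internet:27000/?authSource=admin"
)
db = client["KLP_MGMT_19"]

# Blocks and yields one change document per write; run under nohup or a
# service manager to keep it alive in the background.
with db.TestCounter.watch([], full_document="updateLookup") as stream:
    for change in stream:
        print(change["operationType"], change.get("fullDocument"))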
null
[ "aggregation", "java", "atlas-data-lake" ]
[ { "code": " db.getCollection('myCollection').aggregate([\n {\n '$match': {\n 'start': {\n '$gte': ISODate('2017-12-31T15:00:00.000Z'),\n '$lt' : ISODate('2018-01-01T15:00:00.000Z')\n }\n }\n },\n {\n '$addFields': {\n 'criterionDate': ISODate('2017-12-31T15:00:00.000Z')\n }\n },\n {\n '$out': {\n 's3': {\n 'bucket': 'myBucket',\n 'region': 'ap-northeast-2',\n 'filename': {\n '$concat': [\n {\n '$toString': '$_id'\n }, '/',\n {\n '$toString': '$criterionDate'\n }, '/'\n ]\n },\n 'format': {'name': 'json.gz', 'maxFileSize': \"200MiB\"}\n }\n }\n }\n ])\njava.lang.IllegalStateException: Cannot return a cursor when the value for $out stage is not a namespace document\n \n BsonDocument lastPipelineStage = getLastPipelineStage();\n if (lastPipelineStage == null) {\n return null;\n }\n if (lastPipelineStage.containsKey(\"$out\")) {\n if (lastPipelineStage.get(\"$out\").isString()) {\n return new MongoNamespace(namespace.getDatabaseName(), lastPipelineStage.getString(\"$out\").getValue());\n } else if (lastPipelineStage.get(\"$out\").isDocument()) {\n BsonDocument outDocument = lastPipelineStage.getDocument(\"$out\");\n if (!outDocument.containsKey(\"db\") || !outDocument.containsKey(\"coll\")) {\n throw new IllegalStateException(\"Cannot return a cursor when the value for $out stage is not a namespace document\");\n }\n return new MongoNamespace(outDocument.getString(\"db\").getValue(), outDocument.getString(\"coll\").getValue());\n } else {\n throw new IllegalStateException(\"Cannot return a cursor when the value for $out stage \"\n + \"is not a string or namespace document\");\n }\n } else if (lastPipelineStage.containsKey(\"$merge\")) {\n BsonDocument mergeDocument = lastPipelineStage.getDocument(\"$merge\");\n if (mergeDocument.isDocument(\"into\")) {\n BsonDocument intoDocument = mergeDocument.getDocument(\"into\");\n \n ", "text": "mongo-java-driver: $out fails in lastPipelineStage when aggregatehi mongodb,\nI wanted to use datalake to archive my data, so I verified the following query in mongo shell.The query worked successfully!And then, I wanted to implement as application level, so I used spring boot application and it has mongo-java-driver 4.1.0.But when I aggregate an above pipeline stage, I’ve encountered the following error.java.lang.IllegalStateException: Cannot return a cursor when the value for $out stage is not a namespace documentAnd I was able to find the root cause below link.As you can see, in my lastPipelineStage, which names $out, there are no keys which names ‘db’ and ‘coll’. there is only existing ‘s3’.Currently, is there another option to use $out pipeline with ‘s3’ key in mongo-java-driver?\nOr do you have a plan to add about ‘s3’ key into mongo-java-driver?Any help will be much appreciated.", "username": "Ian_Ye" }, { "code": "", "text": "Hi, thanks for your question. This should work with the Java driver if you call AggregateIterable#toCollection(). 
Unlike AggregateIterable#cursor(), that method is agnostic as to the contents of the $out stage. Hope this is helpful.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "
options.getComment().ifPresent(aggregateIterable::comment);

if (options.hasExecutionTimeLimit()) {
	aggregateIterable = aggregateIterable.maxTime(options.getMaxTime().toMillis(), TimeUnit.MILLISECONDS);
}

if (options.isSkipResults()) {

	// toCollection only allowed for $out and $merge if those are the last stages
	if (aggregation.getPipeline().isOutOrMerge()) {
		aggregateIterable.toCollection();
	} else {
		aggregateIterable.first();
	}
	return new AggregationResults<>(Collections.emptyList(), new Document());
}

MongoIterable<O> iterable = aggregateIterable.map(val -> {

	rawResult.add(val);
	return callback.doWith(val);
", "text": "@Jeffrey_Yemin\nThanks very much for the quick reply; it was the perfect answer for me!\nThe MongoDB Developer Community was the perfect place to ask my question.Remarks:\nMore context for anyone who has the same problem as me", "username": "Ian_Ye" } ]
Mongo-java-driver: $out fails in lastPipelineStage when aggregate
2020-09-29T02:05:13.868Z
Mongo-java-driver: $out fails in lastPipelineStage when aggregate
4,764
null
[ "java" ]
[ { "code": "@Inject MongoService mongoService; // this has some methods like getting the MongoDatabase for example\n...\nMongoDatabase database = mongoService.getDatabase();\nMongoCollection<Document> collection = database.getCollection(\"myCollection\");\ncollection.createIndex(Indexes.ascending(\"dateCreated\"));\n", "text": "Just started working with MongoDB in general and I have a local project that I want to make some indexes on. I’ve tried using the code below but it doesn’t add a new index when I check my collection after this part is executed.I’ve followed this reference minus the subscribe part just to test it out: Create Indexes", "username": "Junmil_Rey_So" }, { "code": "collection#createIndexsubscribe()subscribesubscribePublisher<String> indexPublisher = collection.createIndex(Indexes.ascending(\"dateCreated\"));\n\n// The subscribe method requests the publisher to start the stream of data.\nindexPublisher.subscribe(new Subscriber<String>() {\n\n public void onSubscribe(Subscription s) {\n // Data will start flowing only when this method is invoked.\n // The number 1 indicates the number of elements to be received.\n s.request(1);\n }\n\n public void onNext(String s) {\n System.out.println(\"Index created: \" + s); // this will be the new index name\n }\n\n public void onError(Throwable t) {\n System.out.println(\"Failed: \" + t.getMessage());\n }\n\n public void onComplete() {\n System.out.println(\"Completed\");\n }\n});\n\n// This is used to block the main thread until the subscribed task completes asynchronously.\ntry {\n Thread.sleep(3000);\n}\ncatch(InterruptedException e) {\n}\n", "text": "Hello @Junmil_Rey_So, welcome to the community.The API method collection.createIndex returns a org.reactivestreams.Publisher.As such the collection#createIndex doesn’t create the index until the returned publisher’s subscribe() method is invoked. The subscribe method takes a org.reactivestreams.Subscriber as a parameter, an interface to be implemented. The implemented methods complete the task of creation of the index when the subscribe method is invoked.You can try using one of the Java implementations at ReactiveX (like RxJava), instead of the low level API from Reactive Streams.", "username": "Prasad_Saya" }, { "code": "subscribe()subscribe()", "text": "Thanks a lot for the clear explanation, I see that I need to call the subscribe() method after creating my index then.I am trying to get my Gradle project using Micronaut and reactivex to index my collections on startup, does that affect how/when I should invoke the subscribe() on them?", "username": "Junmil_Rey_So" }, { "code": "subscribe", "text": "Hi @Junmil_Rey_So, the index will be created only when the subscribe method is run.", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb-driver-reactivestreams createIndex() doesn't make new Index in MongoDB
2020-09-29T19:38:23.708Z
Mongodb-driver-reactivestreams createIndex() doesn't make new Index in MongoDB
3,810
null
[]
[ { "code": "", "text": "Hi,I know that Mongo Atlas free tier allows 500 concurrent users, however I want to know about any limit in the number of simultanous crud queries in this tier.Also can you give me an idea about how much data can I store in 512 mb.\nAnd is there a difference between storing 10 paragraphs and two sets of 5 paragraphs on the storage limit.Thank you.", "username": "sam23" }, { "code": "", "text": "Hi @sam23,Atlas free tier allow 500 connections and a max of 500 collections (100 databases) in total.It also allows only 100 crud requests per second at max.There are more limitations here:Its hard to.predict how your data will be compressed on disk and it depends on docs structure and data types:It is possible for mostly text/number documents to be up to 3-5 time compressed.Please note : we don’t recommend M0 for production usage and it is meant for POC or development only.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hey @sam23!According to the MongoDB Atlas documentation, the M0 free tier has a limit of 500 concurrent incoming connections.With regard to your question about the amount of data that you can save with 512 MB of space, is “it depends.” MongoDB has a maximum document size limit of 16 MB, which means if you maxed out your documents, you could save 512 MB/16MB = 32 documents. However, this is VERY unusual, and most users have MUCH smaller documents.For example, I have a M0 cluster and each document is about 500 bytes, which means I can save about 512 Mb (4096000000 bytes) / 500 bytes = 8,192,000 Documents before I max out my M0 tier, This example tends to be more of an “average” users experience, but again, it depends on your data schema.You can find more information about limits of each tier here.", "username": "JoeKarlsson" }, { "code": "", "text": "Thank you.\nGiven that I am on a budget, I will start with the free tier on Atlas and possibly switch to another hosting platform.I want to know if I have to change anything , regarding queries structure, when migrating my database to another hosting provider. I have a modest experience with Atlas only.\nThank you.", "username": "sam23" }, { "code": "", "text": "Hi @sam23,As long as you stick to the MongoDB version the queries itself should work similarly anywhere (functionally speaking, with the mentioned restrictions on the documentation) .However, using Atlas have huge benefits with a fair price therefore I am not sure I understand your consideration of leaving the platform where you can scale as needed and have backup and lots of other features unavailable for other hosting (triggers, atlas search, data lake, realm apps , backups , security etc.)Additionally, you get access to full support and a user chat to directly discuss your issues or challenges.We would love to hear your thoughts on moving to other platform to see how we can improve.Best\nPavel", "username": "Pavel_Duchovny" } ]
Request limit in the free tier
2020-09-29T12:41:12.309Z
Request limit in the free tier
30,337
null
[ "dot-net" ]
[ { "code": "namespace RealmTest\n{\n public class Customer : RealmObject\n {\n public Customer()\n {\n Name = \"Test Name\";\n IsActive = false;\n Phone = \"8000-0000\" ;\n\n foreach (var t in MyList.GetType().GetInterfaces())\n Console.WriteLine(t);\n\n // Both throw exceptions\n var v = MyList.AsRealmCollection<string>();\n var w = MyList as IRealmCollection<string>;\n }\n public int Id { get; set; }\n public string Name { get; set; }\n public string Phone { get; set; }\n public bool IsActive { get; set; }\n\n public IList<string> MyList { get; }\n }\n\n\nclass Program\n{\n static void Main(string[] args)\n {\n using (var realm = Realm.GetInstance(\"MyData.realm\"))\n {\n realm.Write(() => realm.Add(new Customer()) );\n }\n }\n}\n} var v = MyList.AsRealmCollection<string>();\n var w = MyList as IRealmCollection<string>;\n }\n public int Id { get; set; }\n public string Name { get; set; }\n public string Phone { get; set; }\n public bool IsActive { get; set; }\n\n public IList<string> MyList { get; }\n}\n System.Collections.Generic.IList`1[System.String]\n System.Collections.Generic.ICollection`1[System.String]\n System.Collections.Generic.IEnumerable`1[System.String]\n System.Collections.IEnumerable\n System.Collections.IList\n System.Collections.ICollection\n System.Collections.Generic.IReadOnlyList`1[System.String]\n System.Collections.Generic.IReadOnlyCollection`1[System.String]\n", "text": "Hi,I want to register a NotifyCollectionChangedEventHandler to a List property, but in my test code I get a runtime exception because the List object assigned by Realm cannot be cast into a IRealmCollection. It also doesn’t implement INotifyCollectionChanged. This is my test code.These are the only Interfaces implemented:I’m new to Realm, so I don’t know what the problem is, and I didn’t find anything helpfull in the docs. I hope someone here can help.RegardsThorstenps: I’m trying to map a dictionary to two lists because I need a dictionary. But dictionaries are not supported by realm. I’m trying to fire a change event in my dictionary property when the internal list changes. But if someone already has a solution for dictionary properties I’m all ears.", "username": "Thorsten_Schmitz" }, { "code": "using (var realm = Realm.GetInstane())\n{\n var customer = new Customer();\n realm.Write(() => realm.Add(customer));\n var myListCollection = customer.MyList.AsRealmCollection();\n}\n", "text": "Realm lists become observable only after their parent object is persisted in the database. So something like this should work:", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
List properties' CollectionChangeEvent in .Net
2020-09-26T22:23:21.970Z
List properties' CollectionChangeEvent in .Net
1,686
null
[ "dot-net" ]
[ { "code": "uuid string (Primary Key) \nId int \nname string\n public class FullUser : RealmObject\n {\n [PrimaryKey]\n [MapTo(\"uuid\")]\n public string Uuid { get; set; }\n\n [MapTo(\"id\")]\n public int Id { get; set; }\n\n [MapTo(\"name\")]\n public string Name { get; set; }\n}\nuuid string? (Primary Key) \nId int \nname string?\n", "text": "I have an iOS app written in Swift, that is using a full sync Realm stored in Realm Cloud.When I look at the class “FullUser” in Realm Studio it looks like this:I have also a .net class on the server that will access the same Realm.C# code of class, (somewhat shortened):When I try to do a full sync in .net, I get error message:ERROR: Connection[1]: Session[1]:\nFailed to transform received changeset: Schema mismatch:\n‘FullUser’ has primary key ‘uuid’, which is nullable on one side,\nbut not the other.If I then look at my newly created, local synced Realm-file it looks like this:For some reason, when I sync the Realm in .net\nit makes the string properties optional in my local Realm.How do I specify that I want my C#-code to create the string as\nnon nullable in my local Realm so it matches the one in the server so they can sync?", "username": "Per_Eriksson" }, { "code": "[Required]", "text": "Strings are implicitly nullable in .NET. If you want to make them non-nullable in the database, you should annotate the property with [Required].", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm .NET string primary key is optional
2020-09-29T19:37:36.430Z
Realm .NET string primary key is optional
2,328
null
[]
[ { "code": "", "text": "I’m writing some unit tests which will be run automatically. The function in my program returns a list of local mongo databases and my unit test checks that the list is correct. At the moment, both my program and my unit test use the mongoDB listDatabases() command. My test will also call the program function to confirm that both match. However, I feel that it is not a valid test when both the program and the test use the same call. I can check it visually in Compass, but I am aiming for automated TDD. Any advice gratefully received.", "username": "Julie_Stenning" }, { "code": "", "text": "I think that is it. You end up sending the same command if you’ve ever had the opportunity to look at a tcpdump of the operation.", "username": "chris" }, { "code": "", "text": "If there were two ways you could get the databases listed you would actually be testing that both commands produce the same output rather than that you have the databases you are expecting to have. Even if all your databases got deleted a test that lists databases (even if the command is run in two different ways) would pass because both would still have the same amount of databases.If you want to test that your cluster contains the databases you are expecting there’s no way around listing the databases you should have somewhere as a hard coded value and then comparing the output of listDatabases to that.", "username": "Naomi_Pentrel" }, { "code": "", "text": "I think what I will do is check that both the list of databases in my test and the list of databases in my program match. Then I’ll add a new database with a randomly generated name and check that both list the database new database with a name that I know. Thank you to @Naomi_Pentrel and @Stennie_X for your prompt responses.", "username": "Julie_Stenning" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is there more than one way of finding a list of database names?
2020-09-28T19:48:25.146Z
Is there more than one way of finding a list of database names?
2,227
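Julie's plan from the last post can be turned into a self-contained test: create a database with a random name, assert it shows up in the listing, then clean up. A sketch with PyMongo and a plain assert (framework-agnostic; connection details are placeholders). One wrinkle worth a comment: a database only appears in listDatabases once it actually holds data.

import uuid

from pymongo import MongoClient

client = MongoClient()  # placeholder local connection

def test_new_database_is_listed():
    name = f"tmp_{uuid.uuid4().hex[:8]}"  # random name so the test owns its fixture
    client[name]["probe"].insert_one({"ok": True})  # a database is listed only once it has data
    try:
        assert name in client.list_database_names()
    finally:
        client.drop_database(name)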
null
[]
[ { "code": "", "text": "Hi Everyone,I am a newbie developer. I created a simple CRUD app in the MERN stack. It’s a discussion forum. But most people suggest I should have used a relational database, as there are a lot of relations like the post, comments, sub comments. Any suggestion is it doable with MongoDB, or should I switch to SQL.Thank you.", "username": "Norma_Moras" }, { "code": "", "text": "Everything is doable @Norma_Moras . Most people opt for relational databases as they see everything in a relational schema. You can obviously use Mongo, but your data model would look a bit different (having some stuff embedded) and some things still sort of relational. You can follow these for data modelling - https://docs.mongodb.com/manual/core/data-modeling-introduction/", "username": "shrey_batra" }, { "code": "", "text": "I second what @shrey_batra said. MongoDB is a general purpose database and fits with most use cases. You can definitely build a discussion forum using MongoDB.When people use the term non-relational to describe MongoDB, it doesn’t mean MongoDB doesn’t support relationships. MongoDB just handles them in a different way. In fact, many people consider the way MongoDB handles relationships to be more natural than the way a relational database handles them.As you begin modeling your data, consider how you will be accessing (displaying or updating) your data. Data this accessed together should be stored together.I hope you enjoy trying out MongoDB!", "username": "Lauren_Schaefer" }, { "code": "", "text": "Hello @Norma_MorasIt is great that you want to utilize the strong features of MongoDB. In a first step it might help to change the wording. As mentioned from @shrey_batra and @Lauren_Schaefer MongoDB models relations as well as other databases. Maybe we should say tabular vs. natural approach to store data and with them relations.To get the most out of an noSQL Setup, you need to change the way of thinking about schema design. Your first goal will no longer be to get the maximal normalized Schema, Denormalization is not bad, the requirement of your queries will drive your design. The story will start to think about a good schema design. In case you move the SQL normalized Data Model 1:1 to MongoDB you will not have much fun or benefit.You can find further information on the Transitioning from Relational Databases to MongoDB in the linked blog post. Please note also the links at the bottom of this post, and the referenced migration guide .Since you are new to MongoDB and noSQL I highly recommend to take some of great and free classes from the MongoDB Univerity:Generally data modelling is a broad topic to discuss, this is due to many factors that may affect the design decision. One major factor is the application requirements, knowing how the application is going to interact with the database. With MongoDB flexible schema characteristic, developers can focus on the application design and let the database design conform for the benefit of the application. See also : MongoDB A summary of all the patterns we’ve looked at in this seriesYou may also can checkout:This is just a sample which can get you started very well. In case this is going to be a mission critical project\nI’d recommend getting Professional Advice to plan a deployment There are many considerations, and an experienced consultant can provide better advice with a more holistic understanding of your requirements. 
Some decisions affecting scalability (such as shard key selection) are more difficult to course-correct once you have a significant amount of production data.Hope this helps you get started; while getting familiar, and any time after, feel free to ask your questions here - we will try to help.Michael", "username": "michael_hoeller" }, { "code": "", "text": "Thanks @shrey_batra. I realized it's doable, partly because I'm fairly sure this community itself runs on MongoDB, and something similar is what I had in mind.I have used a model tree structure with parent references; I will read that.", "username": "Norma_Moras" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is mongoDB right for discussion forum
2020-09-29T11:04:27.656Z
Is mongoDB right for discussion forum
6,775
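The parent-reference tree structure Norma mentions in the last reply stores posts, comments, and sub-comments in one collection, with each document pointing at its parent. A minimal sketch in Python; all names here are illustrative, not taken from her app:

from pymongo import MongoClient

db = MongoClient()["forum"]  # placeholder database

db.posts.insert_many([
    {"_id": "post1", "kind": "post", "title": "Hello", "author": "norma", "parent": None},
    {"_id": "c1", "kind": "comment", "body": "Nice post", "author": "alex", "parent": "post1"},
    {"_id": "c2", "kind": "comment", "body": "Agreed", "author": "norma", "parent": "c1"},
])

db.posts.create_index("parent")  # keeps child lookups cheap as the forum grows
top_level_comments = list(db.posts.find({"parent": "post1"}))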
null
[ "atlas-search" ]
[ { "code": "**createdBy**", "text": "I have one query regarding search implementation. Here search implementation or atlas-search is best suitable for search in a single collection. I think normally we faced a scenario where we need to search with populate or ref collection also. How can I do it with better implementation?Consider the very simple scenario where I have 2 collections.\n1 Product (text index of product name field)\n2 User (text index on username fields may be firstName, lastName etc)The user create the products so in product collection there is ref on **createdBy** with user’s MongoID. On Frontend, there is a listing of products so I need to provide search criteria like below in single text searchWhat is the best way to handle such a scenario with MongoAtlas? ex. I want all products created by “Donni Bachhan”. what is the aggregation pipeline you just or best practise for it", "username": "JD_Old7" }, { "code": "", "text": "Hi Jaydipsinh. This is a really interesting question. Several different approaches depending on your use case and data.\nIf a product is created by a user, can that data (user name, id) be embedded in the product document? That seems simple.\nYou can only create an Atlas Search index per collection, so you can find whatever documents you need, and then $lookup from the same aggregation pipeline at a later stage to the other collection… or leave the aggregation and reference the second collection.\nAlso, do you have lots of products with complicated names? If not, you could potentially avoid Search and just consider using relationships within data in GraphQL. Or use $search in conjunction with GraphQL. $search to find the product. Then use a filter to GraphQL endpoint enabled by Realm to get info about that product and the owners of it in 1 call. I talk about relationships in GraphQL in this blog: https://www.mongodb.com/how-to/graphql-easy.\nI know these are lots of options. Hope this helps.", "username": "Karen_Huaulme" }, { "code": "", "text": "Thanks, Karen for your replyI embedded the user information into the product document. Due to that if the user updates their information I need to update all product created by the user, which is I personally think no the good way but I need to follow that anyhow.I don’t have experience with GraphQL but will try to do something with that. as of now, my product is already in the live market so I don’t have much time to so.", "username": "JD_Old7" }, { "code": "", "text": "Hi, Jaydipsinh, I would suggest that you can very easily have a database trigger set on your user collection, so that if the user document changes, it will execute a function that will automatically update those fields in the product collection. Yours is an excellent use case for this. You basically set it, and forget it.Here is a quick 5 minute video about triggers and functions: Functions + Triggers Demo - MongoDB Realm - YouTube\nand the documentation for triggers is here: Functions + Triggers Demo - MongoDB Realm - YouTube.Have a look. I think this will really help you. Let me know if you have questions about it.Thanks.", "username": "Karen_Huaulme" }, { "code": "", "text": "Ahh, That seems interesting but is there any impact on database performance? just want to know for the curiosity.I think GraphQL is interesting one but need to invest sometime to learn it. with GraphQL is that possible to quering to relation collection field? line If I am quering to Product collection but get the based on user name from User collection. is that possible? 
If you remember my case, I need to text search on either the product name or the user name. Can we manage that in a single query with GraphQL?I will try to implement a Trigger once I am sure about its performance impact on the database, or maybe on the query.Thanks for your suggestion", "username": "JD_Old7" }, { "code": "", "text": "Hi! It will not affect your database performance. The triggers that you configure will fire off a function that you write. These functions execute serverlessly, independent of your database - and they only run when they are used. https://docs.mongodb.com/realm/functions - and they take no time to set up.I am a big fan of the GraphQL stuff. I sent you the link to a blog article, but it is also a video with a real example I work through, if that is easier to consume: GraphQL: The Easy Way to Do the Hard Stuff - YouTubeKeep me posted. I am looking forward to hearing if these help you.", "username": "Karen_Huaulme" }, { "code": "", "text": "Thanks for your help, Karen. I will try to figure out the proper solution with GraphQL.\nI really appreciate your efforts. Thanks again…And one more thing: I am now following you on Twitter, "youoldMaid", right?", "username": "JD_Old7" }, { "code": "", "text": "Yes @JD_Old7, she is on Twitter with the same username you mentioned.", "username": "Nabeel_Raza" }, { "code": "", "text": "hi guys! Yes, indeed!! Followed you back!", "username": "Karen_Huaulme" }, { "code": "", "text": "Twitter\nFacebook\nLinkedIn\nInstagram", "username": "Nabeel_Raza" }, { "code": "", "text": "Hey Karen,I am trying to implement the GraphQL setup, and I have done so successfully…! Now I am trying to fetch the data using GraphiQL.I am unable to pass the limit param to my query, and I don't know how to solve that.Screenshot 2020-09-28 at 6.07.43 PM2554×1288 369 KBIs there anything I am missing here?Sorry for adding this to the current thread…", "username": "JD_Old7" }, { "code": "", "text": "hi, JD! Glad to see how much progress you are making so fast! It looks like you have a misspelling in "limit." Add an 'i' and it should work. Let me know. Karen", "username": "Karen_Huaulme" }, { "code": "", "text": "Yes, there was an issue, but that was due to my many attempts to sort this out.The real problem was the collection name: "space" should have been "spaces".I have some good scenarios from my real-world application with GraphQL and questions about how to cover them with our queries. How we can sort those out is the real question. I am happy to share the scenarios if you would like to work through them.", "username": "JD_Old7" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas Search with populate in aggregation pipeline with text index
2020-07-21T12:48:42.571Z
Atlas Search with populate in aggregation pipeline with text index
6,156
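Karen's "$search first, $lookup later" suggestion translates into a single aggregation. A hedged sketch in Python: the "default" index name, the field names (including the embedded creator name JD mentioned adding), and the database handle are assumptions based on the thread. Note that $search has to be the first stage of the pipeline.

from pymongo import MongoClient

db = MongoClient("mongodb+srv://<cluster-uri>")["mydb"]  # placeholders

pipeline = [
    # Search product name and the embedded creator name in one text query.
    {"$search": {"index": "default",
                 "text": {"query": "Donni Bachhan", "path": ["name", "createdBy.name"]}}},
    # Join the full user document in the same round trip.
    {"$lookup": {"from": "users", "localField": "createdBy._id",
                 "foreignField": "_id", "as": "creator"}},
    {"$unwind": "$creator"},
]
products = list(db.products.aggregate(pipeline))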
https://www.mongodb.com/…4_2_1024x512.png
[ "app-services-user-auth", "react-native" ]
[ { "code": "", "text": "I am following the Task Tracker tutorial and I am not able to persist the login session. I am able to register and login, however, everytime I close the app and open it again, it redirects me to the login screen.In addition, I use AuthProvider to read user, I get “null”.Anyone have any ideas? Do I have to write custom logic to persist login information / session tokens?Thanks", "username": "Irfan_Maulana" }, { "code": "const [currentUser, setCurrentUser] = useState(null);\nReact.useEffect(()=>{\n setCurrentUser(app.currentUser)\n }, [])\nconst {currentUser} = useAuth();\n...\n{currentUser == null ? (\n <LogInView />\n ) : (\n <TasksProvider projectId=\"My Project\">\n <TasksView />\n </TasksProvider>\n )}", "text": "Found the problem by looking into the library.“user” is a run-time variable only set upon calling the login function, while “currentUser” is persisted. To fix the problem, we have to look at the currentUser object instead of the User object.In my case, I added a currentUser variable to the AuthProvider that is set upon the initialization of the app.The currentUser is now exposed to other aspects of the app through the AuthProvider.", "username": "Irfan_Maulana" }, { "code": "", "text": "Irfan,I’m glad you “self solved” this issue. I will speak with the docs team to make sure the currentUser variable is highlighted as the persistent variable in the docs.Thanks for highlighting", "username": "Shane_McAllister" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
React Native does not persist login / sessions after app restart
2020-09-26T23:20:16.709Z
React Native does not persist login / sessions after app restart
3,917
https://www.mongodb.com/…d783b57c8603.png
[ "ruby" ]
[ { "code": "", "text": "Hi I have been using “MongoDB Ruby Driver” which you can find more information about here GitHub - mongodb/mongo-ruby-driver: The Official MongoDB Ruby Driver.I am trying get a list of users using the code example “client.database.users” fromHowever I end up with a “Mongo::Auth::User::View” class as a return.\nThere is no document about what “Mongo::Auth::User::View” is and how to use it to get list of users. I can add, edit. update, and delete user however I can not get a lost if users to display.There is no getUsers or getRoles methods.\nTo make this short how can I use “MongoDB Ruby Driver” to get a list of users for a database? This has been my final struggle.If any one know how to to do this in ruby it will be greatly appreciated. It can be different gem I wouldn’t mind switching as long as its ruby.Thank you.", "username": "Saimon_Lovell" }, { "code": "usersInfousersInfo", "text": "Hello @Saimon_Lovell, welcome to the community.The command to run is the usersInfo. The db.getUsers() is a wraper around the usersInfo command.These commands return information about one or more users.In Ruby, you can use the command method, to execute the usersInfo command on the database to get the list of users.", "username": "Prasad_Saya" }, { "code": "client.database.command(usersInfo: 1)\nclient.use(:admin).database.command(usersInfo: 1)\n", "text": "Thank you very much. You have made my day. I was loosing it looking for it all over the web lol.That did the trick. Thanks again.", "username": "Saimon_Lovell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
List users using MongoDB Ruby Driver
2020-09-29T02:05:10.383Z
List users using MongoDB Ruby Driver
3,343
null
[ "queries", "performance" ]
[ { "code": "", "text": "Hi,I have a MongoDB cluster set up in an Openshift container platform. A replica set of 3 is set to this MongoDB instance (1 primary, 1 secondary, and 1 stationery). The application connects to this primary MongoDB cluster to execute queries and retrieve data. For a certain period of time, both the data reads and writes seems to be quite fast and completes within 5 secs. After the cluster has been running for 2 days, all the data reads get slower and take 45mins to complete. The application is multi-threaded and this data slowness attributes to the subsequent calls to hung or fail and greatly affects the performance.Query to retrieve a list of student ids from a college dept.db.getCollection(‘college_dept’).distinct(“student_id”,{“dept”: “BS”,“alias”:{ $not: /^chem/ } })Kindly advise on ways to resolve this issue.Thanks for your time.", "username": "jegadees_B" }, { "code": "", "text": "Hi @jegadees_B Can you mention more specifics? Is the size of data increasing over the two days? If yes, how much? Can you also check the memory usage of your mongodb cluster over time?", "username": "shrey_batra" }, { "code": "", "text": "Adding to what shrey_batra says: you may want to look at your oplog to see what is happening. Let us know what that says and hopefully that’ll help us pinpoint the problem.", "username": "Naomi_Pentrel" }, { "code": "{\"dept\": 1, \"alias\": 1, \"student_id\": 1}", "text": "Hi @jegadees_B,Did you create the index {\"dept\": 1, \"alias\": 1, \"student_id\": 1} on this collection which - I think - would make this query covered.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "For fast querying always use the indexed field. It will be much faster then the other fields.\nMake sure that you have the same field indexed as per the query for more detail about indexing check this link.~ Thanks", "username": "Nabeel_Raza" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb data reads getting slow
2020-09-28T02:46:54.338Z
Mongodb data reads getting slow
5,945
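Maxime's index suggestion from that thread can be applied programmatically; with student_id included, the distinct query from the first post can be answered from the index alone (a covered query). A PyMongo sketch reusing the collection name and filter from the thread (the database name is a placeholder):

from pymongo import ASCENDING, MongoClient

coll = MongoClient()["mydb"]["college_dept"]  # database name is a placeholder

coll.create_index([("dept", ASCENDING), ("alias", ASCENDING), ("student_id", ASCENDING)])

ids = coll.distinct(
    "student_id",
    {"dept": "BS", "alias": {"$not": {"$regex": "^chem"}}},
)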
null
[ "server", "containers", "security" ]
[ { "code": "mongodumprootconfigroot@mongo-db1:/srv/mongo# mongodump --username=rcroot --password=\"secret\" --out=/var/backups/20200925\n2020-09-26T05:40:20.138+0000 Failed: error counting config.system.indexBuilds: not authorized on config to execute command { count: \"system.indexBuilds\", query: {}, $readPreference: { mode: \"secondaryPreferred\" }, $db: \"config\" }\nmongodb.system.indexBuilds.count()configrootrs0:PRIMARY> db.getUser(\"rcroot\")\n{\n \"_id\" : \"admin.rcroot\",\n \"userId\" : UUID(\"81fc86ff-6d12-4d23-83ab-7fc2591516a2\"),\n \"user\" : \"rcroot\",\n \"db\" : \"admin\",\n \"roles\" : [\n {\n \"role\" : \"userAdminAnyDatabase\",\n \"db\" : \"admin\"\n },\n {\n \"role\" : \"root\",\n \"db\" : \"admin\"\n }\n ],\n \"mechanisms\" : [\n \"SCRAM-SHA-1\",\n \"SCRAM-SHA-256\"\n ]\n}\nmongod --auth --replSet rs0 --keyFile /data/db/keyfile --enableMajorityReadConcern false\nroot", "text": "I’m trying to set up backups on a replica set using mongodump, authenticating with a user that has the root role, but I’m getting an error that suggests my root user doesn’t have permission to access the config database:If I log in to the mongo shell with that user, I find that indeed I cannot run the db.system.indexBuilds.count() command in config; I get an error indicating I’m not authorized. However, the user appears to have the root role:This is mongod version 4.4, and my replica set is configured to use keyfile auth (in case that matters):I’ve tried creating an entirely new user with the root role, but I get the same results.I’m new to MongoDB, so I’m probably missing something silly…I’d appreciate any clues as to what it is!", "username": "Matt_Winckler" }, { "code": "mongodumpmongodumprcrootdocker-compose.ymlversion: '2'\n\nservices:\n mongo:\n image: mongo:4.4\n restart: unless-stopped\n volumes:\n - ./data/db:/data/db\n - ./data/dump:/dump\n - ./data/backups:/data/backups\n command: mongod --auth --replSet rs0 --keyFile /data/db/keyfile --enableMajorityReadConcern false\n ports:\n - \"27017:27017\"\nmongodumpmongomongolocalconfig", "text": "This apparently is related to (or caused by) running mongo within a docker container. I was trying to run the mongodump command from the host machine, outside the container, and that’s when I got the above error. If I run the mongodump command from inside the container, it works fine and the rcroot user has the necessary permissions to complete the backup.So I guess some kind of strange interaction is happening with docker. My docker-compose.yml:The mongo instance is exposed from the container to the host on the standard port. Given that, I thought I’d be able to run the mongodump and mongo shell commands from the host. 
(And indeed, mongo works from the host for everything not requiring access to local or config databases.)I can just run backup scripts from within the container, but I’m still curious about why this doesn’t work from outside.", "username": "Matt_Winckler" }, { "code": "", "text": "Thanks for sharing those findings.", "username": "steevej" }, { "code": "", "text": "but I’m still curious about why this doesn’t work from outside.Not sure how it is working on docker but i tried this from command line on my Windows host.Got the same error\nIt seems you have to give explicit privileges on system.collections\nEven root user will not have access to some system.collections in config & local DBs\nDid you try backup and restore roles?\nOr create a custom role(readwrite on config db) and grant that to your admin user", "username": "Ramachandra_Tummala" }, { "code": "mongodump version: r4.2.8mongodump version: 100.1.1 s0:PRIMARY> db.system.indexBuilds.find()\nError: error: {\n\t\"operationTime\" : Timestamp(1601224356, 1),\n\t\"ok\" : 0,\n\t\"errmsg\" : \"not authorized on config to execute command { find: \\\"system.indexBuilds\\\", filter: {}, lsid: { id: UUID(\\\"c010c58f-7228-4489-a21e-f09066dfca42\\\") }, $clusterTime: { clusterTime: Timestamp(1601224356, 1), signature: { hash: BinData(0, A598E6347A1457E588869C023723BFEFA7D5DA1E), keyId: 6877201118682677249 } }, $db: \\\"config\\\" }\",\n\t\"code\" : 13,\n\t\"codeName\" : \"Unauthorized\",\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1601224356, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"pZjmNHoUV+WIhpwCNyO/76fV2h4=\"),\n\t\t\t\"keyId\" : NumberLong(\"6877201118682677249\")\n\t\t}\n\t}\n}\ns0:PRIMARY> \nbye\nroot@85030c5e9a4a:/# mongodump --uri 'mongodb://admin:0o9i8u7y@mongo-1-a/?authSource=admin&replicaSet=s0'\n2020-09-27T16:33:13.310+0000\twriting admin.system.users to dump/admin/system.users.bson\n2020-09-27T16:33:13.311+0000\tdone dumping admin.system.users (1 document)\n2020-09-27T16:33:13.311+0000\twriting admin.system.version to dump/admin/system.version.bson\n2020-09-27T16:33:13.312+0000\tdone dumping admin.system.version (2 documents)\n2020-09-27T16:33:13.312+0000\twriting test.foo to dump/test/foo.bson\n2020-09-27T16:33:13.313+0000\tdone dumping test.foo (4 documents)\n\n", "text": "Intrigued I also did a little test. I could not query the config.system.indexbuilds. But a dump completed successfully. Running mongodump from the 4.2 packages mongodump version: r4.2.8 gives the error @Matt_Winckler posted about.mongodump from 4.4 packages dumps without error mongodump version: 100.1.1 So possibly a mongodump version mismatch ?", "username": "chris" }, { "code": "rootbackupbackup", "text": "Not sure how it is working on docker but i tried this from command line on my Windows host.Got the same error\nIt seems you have to give explicit privileges on system.collections\nEven root user will not have access to some system.collections in config & local DBs\nDid you try backup and restore roles?\nOr create a custom role(readwrite on config db) and grant that to your admin userThe root role supposedly includes the backup role. (From what I was reading, this wasn’t always the case in previous versions, but it should be true in 4.4.) 
However, I did go ahead and try adding the backup role explicitly, and it made no difference.", "username": "Matt_Winckler" }, { "code": "mongodumpmongo-toolsmongodump --versionbuilt-without-version-stringmongodump", "text": "So possibly a mongodump version mismatch? Aha, I think this is the winner! I had installed mongodump on the host machine via the Ubuntu mongo-tools package, and mongodump --version unhelpfully reports built-without-version-string. So I copied the mongodump binary out of the Docker container and tried running it on the host, and the backup then worked fine. Mystery solved!Thanks Chris!", "username": "Matt_Winckler" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
User with `root` role cannot access config database
2020-09-26T07:38:11.994Z
User with `root` role cannot access config database
12,567
null
[ "python" ]
[ { "code": "", "text": "Hello,Is it possible to extract all documents from a collection with the datetime fields in ISO format when using PyMongo? Is there a property I can set at the collection level ? This is probably a very basic question. Thanks for any suggestions.Best regards.", "username": "Satya_Tanduri" }, { "code": "", "text": "Hi @Satya_Tanduri welcome to the community!Could you elaborate on what you mean by “extract in ISO format using pymongo”? What are you trying to achieve? Are you trying to export the collection using pymongo? What are the result of your efforts so far?If you need to export the whole collection, you can use the mongoexport tool. Note that the tools (mongoexport/mongoimport/mongodump, etc.) are downloaded separately starting from MongoDB 4.4.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Sorry for the late response. I have tried out MongoExport and MongoDump utilities. I am exploring developer options (Python/Java) etc right now. I see that there is DATETIME Data type serialization issue and the work around seems to be using JSON_UTIL. I managed to get that to work for my purpose. Thank you so much for your guidance.", "username": "Satya_Tanduri" } ]
Extracting datetime fields in ISO format from all documents in PyMongo to avoid JSON Serialization error
2020-09-24T19:38:37.380Z
Extracting datetime fields in ISO format from all documents in PyMongo to avoid JSON Serialization error
2,457
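The json_util workaround mentioned in the last post looks roughly like this. Relaxed Extended JSON renders dates as ISO-8601 strings inside a {"$date": ...} wrapper (and round-trips losslessly), while a default hook on the standard json module gives bare ISO strings; the collection names are placeholders.

import datetime
import json

from bson import json_util
from pymongo import MongoClient

coll = MongoClient()["mydb"]["events"]  # placeholder names
docs = list(coll.find())

# Extended JSON: dates become {"$date": "...ISO..."} and stay re-importable.
extended = json_util.dumps(docs, json_options=json_util.RELAXED_JSON_OPTIONS)

# Bare ISO strings: hand datetimes to a default hook (ObjectIds fall back to str).
plain = json.dumps(
    docs,
    default=lambda o: o.isoformat() if isinstance(o, datetime.datetime) else str(o),
)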
null
[ "atlas-search" ]
[ { "code": "", "text": "Hello, with the Beta of Atlas Text Search (thats currently in production for v 4.2.v), can we do the following auto-complete scope?I know it can be done in ElasticSearch, but would prefer to use Mongo Text Search of possibleusers/client type 2 characters in search box, then auto-complete options will show for words of only <5 characters. Once 3 characters are typed, the autocomplete shows options for all possible words (>=5 char)", "username": "Achal_Amin" }, { "code": "", "text": "Atlas Text SearchI haven’t used that new $searchbeta but I know it is based on lucene; it is a full text search, and I used lucene in the past in couple projects. I think, even you use lucene directly, you need to write your own custom analyzer to do it.Did you check out $regex operator? We needed similar functionality, and we achieved it by using $regexpattern matching on strings in MongoDB 6.0I hope it works for you.", "username": "coderkid" }, { "code": "", "text": "Atlas Search has released the autocomplete operator for Atlas Search ($search) and it is a much better option than using $regex for this use case. In particular, take a look at at edgeGrams and nGrams discussed here in the Atlas Search documentation.", "username": "Marcus" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Text Search Auto-Complete
2020-03-05T21:59:05.375Z
Text Search Auto-Complete
4,320
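With the autocomplete operator Marcus mentions, the query side is a short aggregation; the two-character behaviour Achal asks about is governed by the index definition (the minGrams/maxGrams settings on the autocomplete field mapping), while the "only words shorter than five characters" rule would still need an extra filter on a stored length field or client-side logic. A hedged sketch in Python; the index name and field path are assumptions:

from pymongo import MongoClient

coll = MongoClient("mongodb+srv://<cluster-uri>")["mydb"]["items"]  # placeholders

prefix = "pi"  # whatever the user has typed so far
pipeline = [
    {"$search": {
        "index": "default",  # must map the field with the autocomplete type
        "autocomplete": {"query": prefix, "path": "title"},
    }},
    {"$limit": 10},
    {"$project": {"_id": 0, "title": 1}},
]
suggestions = [d["title"] for d in coll.aggregate(pipeline)]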
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi all,\nWe are in the process of designing a multi-tenant application. During my research, I found out there are a few approaches that we can follow.I believe Approach 3 would not work for us since this application needs to scale to 1000s or 10,000s of tenants.I want to know\nA) what are the limitations and advantages of approaches 1 or 2? are there any performance limitations or cost-wise issues in each approach?\nB) what is the most recommended practice. we are planning on using MongoDB atlas and everything needs to be handled programmatically.\nC) how we can limit access control in each approach, however this also needs to be done programmatically.Thank you!", "username": "dushan_Silva" }, { "code": "", "text": "Welcome @dushan_Silva! A) If you use one collection per tenant you will likely end up having data modeling issues. Normally you use collections for entities like users or orders etc. If you only have one collection you would struggle to store different entities in there.So you can choose to have one database per tenant or use ids to separate data. When you are scaling up you could shard using the tenant_id thus spreading out the load across multiple replica sets…B) I’ll let someone else handle this one with more detail aside from what I’m saying as answers for A and C.C) Since you are using MongoDB Atlas, you can control access based on field values using Realm Rules.\nIf you go with approach 1 you can control access on a database or collection level. So you could use that to restrict access for a database per client for your scenario.Cheers,\nNaomi", "username": "Naomi_Pentrel" }, { "code": "", "text": "Thank you Naomi, I have one more question. Is it possible to search the content stored within a file which is stored in MongoDB? for example if I store a pdf file in MongoDB is it possible to do a search for a keyword within the file?", "username": "dushan_Silva" }, { "code": "", "text": "Not that I know of I’m afraid", "username": "Naomi_Pentrel" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Multi tenanted SAAS Application
2020-09-26T08:40:01.325Z
Multi tenanted SAAS Application
4,520
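A sketch of the tenant-id approach and the shard key Naomi suggests above; the database, collection, and field names here are hypothetical:

```javascript
// Every document carries its tenant id, and every query filters on it.
db.orders.insertOne({ tenant_id: "acme", order_no: 1042, total: 99.5 });
db.orders.find({ tenant_id: "acme", status: "open" });

// When scaling out, shard on tenant_id; a compound key keeps chunks splittable
// even when a single tenant grows large.
sh.enableSharding("app");
sh.shardCollection("app.orders", { tenant_id: 1, _id: 1 });
```

The database-per-tenant alternative swaps the filter for database selection, at the cost of many namespaces once tenants reach the thousands.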
https://www.mongodb.com/…95f2ab41276e.png
[ "data-modeling" ]
[ { "code": "", "text": "Hello! Sorry if this seems like a very novice question, I’m just starting out.I’m pretty lost on how to model the data for a Pokémon-like game that I’m making to teach myself MongoDB and some javascript, that would run as a Discord Bot. My idea for it looks like the following:imagen843×772 21.8 KBTracking Stamina per user is easy. However, I have no idea how to approach the russian doll design that I’ll need for monsters, and all of the different monsters that someone may have. Should I use maps? Arrays? I’m lost.Thanks in advance.", "username": "Alexander_B" }, { "code": "{\n \"trainer\": \"Naomi\",\n \"party\": [\n { \"name\": \"Bulbasaur\", \"type\": \"grass\", \"level\": 3, \"xp\": 80},\n { \"name\": \"Ivysaur\", \"type\": \"grass\", \"level\": 18, \"xp\": 200},\n ...\n ]\n \"storage\": [\n ...\n ]\n}\n{\"pkdx_id\":1,\"national_id\":1,\"name\":\"Bulbasaur\",\"__v\":3,\"image_url\":\"http://pokeapi.co/media/img/1.png\",\"description\":\"Bulbasaur can be seen napping in bright sunlight. There is a seed on its back. By soaking up the sun's rays, the seed grows progressively larger. Bulbasaur can be seen napping in bright sunlight. There is a seed on its back. By soaking up the sun's rays, the seed grows progressively larger.\",\"art_url\":\"http://assets22.pokemon.com/assets/cms2/img/pokedex/full/001.png\",\"types\":[\"poison\",\"grass\"],\"evolutions\":[{\"level\":16,\"method\":\"level_up\",\"to\":\"Ivysaur\"}]}\n", "text": "Welcome to the community @Alexander_B! Assuming this is like Pokémon I think you’d actually probably want to store a monster’s level and XP with the user to avoid you having to look up a user, getting an array of monsters and then having to look up each monster’s data with additional queries.If this was Pokémon, I’d probably model it like this:And then for monsters if there is a general list of monsters maybe have a collection that has entries with general information about the monster such as:It sort of depends on how you end up querying the data, maybe the user data should contain a bit more information for each monster but you’ll probably work that out fairly quickly when you start writing the queries :). If you tell me a bit more about how the game is supposed to work and how you might need to query the data I can help.", "username": "Naomi_Pentrel" }, { "code": "", "text": "Wow, thank you so much for the quick and awesome reply @Naomi_Pentrel !I’ve gone and made my schemas around that, and it looks like this now:const mongoose = require(‘mongoose’);const playerSchema = mongoose.Schema({\n_id: mongoose.Schema.Types.ObjectId,\nuserID: String,\nuserStamina: Number,\nmonsterParty: [monsterSchema],\nmonsterStorage: [monsterSchema]\n});const monsterSchema = mongoose.Schema({\n_id: mongoose.Schema.Types.ObjectId,\nmonsterSpecies: String,\nlevel: Number,\nxp: Number,\nhappiness: Number,\n});module.exports = mongoose.model(‘Users’, playerSchema, ‘users’);\nmodule.exports = mongoose.model(‘Monster’, monsterSchema, ‘monsters’);As for how the game works, the basics is this:The game runs on the Discord chat service as a bot, and is intended to be played a few times a day in short bursts - Stamina is intended for this purpose, and all actions (fight a random trainer or fight a random monster) costs Stamina, which recharges by a certain amount each hour.Fights are resolved immediately once started, there are no turns or further player interaction there. You press the button to fight and you then win or lose. 
Same with monsters, except you get the choice to catch them if you win.I would like to eventually add a store and the ability to buy items to care for your monsters and items to catch new ones, later down the line once I’m more acquainted with everything.About the general list monster collection, how would I approach making something like that? I was going to just code that into the bot’s javascripts, so that I don’t have to ask the database about it. But with that aside, I would actually like to learn how to do what you mention because I’d like to learn more of course! So, would it be just one “pokedexEntry” schema, which I then fill with information, yes? When in the code should I fill it in with the information?", "username": "Alexander_B" }, { "code": "", "text": "I think it will depend on how many different kinds of monsters you have whether that makes sense. If it’s a small number storing it with the bot makes sense. But if it’s many different ones or if you may want to change them without you wanting to update the bot then putting it in the database may make sense ", "username": "Naomi_Pentrel" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Modelling for a Pokémon-like game
2020-09-28T10:28:35.942Z
Modelling for a Pokémon-like game
3,949
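Two illustrative writes against Naomi's proposed shape from the thread above, with hypothetical values, showing why embedding level/XP with the user keeps gameplay updates to a single query:

```javascript
// Award XP to one monster in the party via the positional operator:
db.users.updateOne(
  { trainer: "Naomi", "party.name": "Bulbasaur" },
  { $inc: { "party.$.xp": 50 } }
);

// Catch a new monster: push it onto the party array.
db.users.updateOne(
  { trainer: "Naomi" },
  { $push: { party: { name: "Pidgey", type: "flying", level: 2, xp: 0 } } }
);
```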
null
[]
[ { "code": "", "text": "Hi Team,I have below three documents retrieved from three collections -\n{ fulldocument }\n{“Key1”:“value1”,“key2”:“value2”}\n{“key3”:“value3”,“key4”:“value4”}Need to merge in one and insert in elastic search like below -\n{ fulldocument,\n“Key1”:“value1”,\n“key2”:“value2”,\n“key3”:“value3”,\n“key4”:“value4”}I am not able to merge it in one document.\nI can do it by adding one by one attribute but want to add it as document.Thanks,", "username": "gopinath_maheswaran" }, { "code": "", "text": "Hi @gopinath_maheswaran,Check the option of running aggregation with $mergeObjects:Potentially you can use $unionWith from the three collection and group by null. Last stage can be the merge objects.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks for the suggestion!,I tried multiple options,root is creating with $mergeObject name and it’s list of object in one object.root is creating with $set.but in both scenario after data insert in elastic search output for all the integer data is like below -\n“downloads”: {\n“$numberInt”: “10”\n}expected result - “downloads”: 10Elastic search output is working expected if data is inserting through postman.Thanks,", "username": "gopinath_maheswaran" }, { "code": "docs = await coll.find()\nresponse.setBody(JSON.stringify(docs))\n", "text": "This is expected as it is Extended Json represention…How do you fetch the data ? Is it through a Driver or a Realm function? (Now noticed Realm case type)Of its from a function or a webhook you need to JSON.stringify output :Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Merge Multiple Documents in realm
2020-09-25T02:08:28.932Z
Merge Multiple Documents in realm
2,243
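A sketch of the pipeline Pavel outlines above, assuming three hypothetical collections (collA, collB, collC) and MongoDB 4.4+ for $unionWith:

```javascript
db.collA.aggregate([
  { $unionWith: "collB" },
  { $unionWith: "collC" },
  // Fold all documents from the three collections into a single object:
  { $group: { _id: null, merged: { $mergeObjects: "$$ROOT" } } },
  { $replaceRoot: { newRoot: "$merged" } }
]);
```

$mergeObjects keeps the last value seen for any duplicate key, so stage order matters; and, per Pavel's last reply, a Realm function should JSON.stringify the result before returning it, otherwise integers surface in Extended JSON form such as {"$numberInt": "10"}.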
null
[ "data-modeling" ]
[ { "code": " CategoryGroup: {\n CategoryGroupID\n Categories: [\n { CategoryID, value},\n { CategoryID, value},\n ]\n }\n", "text": "So this is my first week working with MongoDB and I’ve come to a road block because I don’t know how I should set up my schema in Javascript, I have 3 Models: Budget, CategoryGroup, Category. I want one category to hold one budgeted amount for each month. But I don’t want to store the data directly in Category (I think)so what I would is budget to store it something like this:So that I can get like “Monthly expenses” -> “Rent” + “400” f.ex.\nIs this doable and how?Is it over engineered? How would you guys with experience do it.\nRemember that I already have CategoryGroups and Categories.English is not my native tongue so it might not be exactly clear what I want to achieve but I hope you understand anyway. Thanks in advance.", "username": "Mikael_Larsson" }, { "code": "", "text": "I suggest that you go through the free MongoDB learning module on data modelling. MongoDB Courses and Trainings | MongoDB UniversityI have 30 years of RDBMS experience and I found it most instructive. I have changed my database as a result.", "username": "Julie_Stenning" } ]
Setting up my first schema for a budget
2020-09-28T18:45:19.919Z
Setting up my first schema for a budget
2,512
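One possible shape for the budgeted-amount-per-category-per-month idea in the thread above; this is a sketch only, with every name hypothetical, and the data modelling course Julie recommends is the better guide:

```javascript
// One document per budget month, with category groups and categories embedded:
db.budgets.insertOne({
  month: "2020-09",
  categoryGroups: [
    {
      name: "Monthly expenses",
      categories: [
        { name: "Rent", budgeted: 400 },
        { name: "Utilities", budgeted: 120 }
      ]
    }
  ]
});

// "Monthly expenses" -> "Rent" + 400 then falls out of a single read:
db.budgets.findOne({ month: "2020-09" }, { categoryGroups: 1 });
```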
null
[ "atlas-functions", "app-services-user-auth" ]
[ { "code": "", "text": "How do you access Realm Auth or even Realm from functions ? I want to create a user or reset password from a Realm function.Thanks.", "username": "David_N_A" }, { "code": "", "text": "You can use the Admin API within functions to create users and a password reset function can be set from the UI in ‘Email/Password Provider’To call the Admin API from a function, you can see an example in a function by Pavel here.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Interresting, thanks. I’ll look at it.\nIt’s too bad that there is no easier way. I thought that Realm functions had similar access as api functions do.\nThey have access to mongodb and services, why don’t they have access to users (auth) as well ?", "username": "David_N_A" }, { "code": "", "text": "@David_N_A,I think the rational is that functions are meant to be user interaction with logic and services and not administration of users or provisioning.I know that user administration is a gray area where we get more and more feedback that we should. Probably offer some better programmatic api for it… CC: @Drew_DiPalmaThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "I think the rational is that functions are meant to be user interaction with logic and services and not administration of users or provisioning.Said that way, it all makes sense now.", "username": "David_N_A" }, { "code": "", "text": "Actually it’s not really user management that I want to do. Are you thinking of adding a password requirements such as AWS has. Adding user pool password requirements - Amazon Cognito\nThat would be nice, I wouldn’t have to use the realm api in a functions to wrap the creation of users.", "username": "David_N_A" }, { "code": "", "text": "@David_N_A,The password restrictions is build in for email password and currently only enforced.Perhaps, for a more dedicated logic of Auth you can use custom-function authentication where you can do whatever checks you like for passwords:https://docs.mongodb.com/realm/authentication/custom-function/Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "The password restrictions is build in for email password and currently only enforced.Yes i’m using email/password auth. I’m not sure what you mean by enforce. I know that they allow any types of password without restrictions, is this correct?Perhaps, for a more dedicated logic of Auth you can use custom-function authentication where you can do whatever checks you like for passwords:Edit: I guess i should try custom auth function, although that will be more work.", "username": "David_N_A" }, { "code": "", "text": "@David_N_A,\nPassword is restricted between 6 and 128 characters for node sdk:https://docs.mongodb.com/realm/node/manage-email-password-users/#register-a-new-user-accountI assume similar restrictions for other sdks.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Ok thank you.\nI will either build a password format validation on React and then Realm will only check the password length. (yuk…)\nOr I will use the custom auth function on Realm to build a password format validation there.", "username": "David_N_A" }, { "code": "", "text": "@David_N_A - thanks for explaining your use-case + feedback here. If you add your feature request here Realm: Top (70 ideas) – MongoDB Feedback Engine, we can track how many other users are also interested in something like this.", "username": "Sumedha_Mehta1" } ]
Access Realm Auth from functions
2020-09-26T22:25:19.581Z
Access Realm Auth from functions
3,740
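A rough sketch of the approach Sumedha describes above: a Realm function calling the Admin API over context.http. The endpoint paths follow the Admin API docs, but the key, group, and app ID values are placeholders and error handling is omitted:

```javascript
exports = async function(email, password) {
  // 1. Exchange a programmatic API key for an Admin API access token (placeholder keys):
  const login = await context.http.post({
    url: "https://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/login",
    body: { username: "<publicApiKey>", apiKey: "<privateApiKey>" },
    encodeBodyAsJSON: true
  });
  const { access_token } = JSON.parse(login.body.text());

  // 2. Create an email/password user (groupId and appId are placeholders):
  const created = await context.http.post({
    url: "https://realm.mongodb.com/api/admin/v3.0/groups/<groupId>/apps/<appId>/users",
    headers: { Authorization: [`Bearer ${access_token}`] },
    body: { email: email, password: password },
    encodeBodyAsJSON: true
  });
  return JSON.parse(created.body.text());
};
```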
null
[ "connecting", "installation" ]
[ { "code": "", "text": "I am a student and we are supposed to use the Community version of MongoDB for our Web Application course. I am having a problem of running MongoDM on my Windows 10 64bit machine. I have installed the latest version 4.4.1 and the installation had no problems. However, when I issue a command mongo in the Command Prompt I get an error:C:>mongo\nMongoDB shell version v4.4.1\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: No connection could be made because the target machine actively refused it. :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1I have found a solution on the StackOverflow but it’s for the older versions:\nHow can have this fixed to have the MongoDB running, preferably with one click?", "username": "Dmitry_Morozov" }, { "code": "", "text": "You have to start mongod before trying to connect with mongo shell.", "username": "steevej" }, { "code": "", "text": "C:>So\nC:>mongod\nand then\nC:>mongo\nis the only solution?", "username": "Dmitry_Morozov" }, { "code": "", "text": "is the only solution?No it is not. You could start mongod as a service, you could start mongod with a configuration file. You could connect to an Atlas cluster so you do not have to start mongod.", "username": "steevej" }, { "code": "", "text": "How can I start mongod with a configuration file? It sounds more appealing to me.", "username": "Dmitry_Morozov" }, { "code": "", "text": "I also made a blog post with a Youtube video about how to get started with the MongoDB Atlas Free Tier which is enough for what you are trying to do. No credit card needed at all.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks, but it’s not going to work for me, because according to our course we are supposed to have the Custom settings, when installing the MongoDB.", "username": "Dmitry_Morozov" }, { "code": "", "text": "When you installed what option you have clicked?The MongoDB service is started upon successful installation [1].If you followed above no need to start mongod\nJust issue mongo and connectIn your case mongo command failed to connect as mongod is not up and running on default port 27017\nSo please check why it is not up\nFrom task manager you can see mongod service under processes", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Here are the instructions we had:I’ve done all of that.", "username": "Dmitry_Morozov" }, { "code": "", "text": "Ok UnderstoodWhen you issued just mongod at cmd prompt what happened?\nMost likely it would have failed with missing directoryPlease follow instructions from this link", "username": "Ramachandra_Tummala" }, { "code": " 2020-09-25T11:05:33.901-04:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\n 2020-09-25T11:05:33.903-04:00: This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. 
If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\n\n Enable MongoDB's free cloud-based monitoring service, which will then receive and display\n metrics about your deployment (disk utilization, CPU, operation statistics, etc).\n\n The monitoring data will be available on a MongoDB website with a unique URL accessible to you\n and anyone you share the URL with. MongoDB may use this information to make product\n improvements and to suggest MongoDB products and deployment options to you.\n\n To enable free monitoring, run the following command: db.enableFreeMonitoring()\n To permanently disable this reminder, run the following command: db.disableFreeMonitoring()", "text": "It has the same result as what I was describing in the beginning when I open a separate command window and run mongod command first and then in another command window run mongo command.C:>mongod\n{“t”:{\"$date\":“2020-09-25T11:05:32.127-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“main”,“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{\"$date\":“2020-09-25T11:05:32.134-04:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{\"$date\":“2020-09-25T11:05:32.134-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648602, “ctx”:“main”,“msg”:“Implicit TCP FastOpen in use.”}\n{“t”:{\"$date\":“2020-09-25T11:05:32.136-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:11568,“port”:27017,“dbPath”:“C:/data/db/”,“architecture”:“64-bit”,“host”:“IronCube”}}\n{“t”:{\"$date\":“2020-09-25T11:05:32.136-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23398, “ctx”:“initandlisten”,“msg”:“Target operating system minimum version”,“attr”:{“targetMinOS”:“Windows 7/Windows Server 2008 R2”}}\n{“t”:{\"$date\":“2020-09-25T11:05:32.136-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“4.4.1”,“gitVersion”:“ad91a93a5a31e175f5cbf8c69561e788bbc55ce1”,“modules”:,“allocator”:“tcmalloc”,“environment”:{“distmod”:“windows”,“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{\"$date\":“2020-09-25T11:05:32.137-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“Microsoft Windows 10”,“version”:“10.0 (build 19041)”}}}\n{“t”:{\"$date\":“2020-09-25T11:05:32.137-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{}}}\n{“t”:{\"$date\":“2020-09-25T11:05:32.139-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22270, “ctx”:“initandlisten”,“msg”:“Storage engine to use detected by data files”,“attr”:{“dbpath”:“C:/data/db/”,“storageEngine”:“wiredTiger”}}\n{“t”:{\"$date\":“2020-09-25T11:05:32.141-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22315, “ctx”:“initandlisten”,“msg”:“Opening WiredTiger”,“attr”:{“config”:“create,cache_size=3301M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],”}}\n{“t”:{\"$date\":“2020-09-25T11:05:32.315-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:\"[1601046332:314976][11568:140709665395952], txn-recover: 
[WT_VERB_RECOVERY_PROGRESS] Recovering log 7 through 8\"}}\n{“t”:{\"$date\":“2020-09-25T11:05:32.512-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:\"[1601046332:511952][11568:140709665395952], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 8 through 8\"}}\n{“t”:{\"$date\":“2020-09-25T11:05:32.716-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:\"[1601046332:715948][11568:140709665395952], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 7/4096 to 8/256\"}}\n{“t”:{\"$date\":“2020-09-25T11:05:33.095-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:\"[1601046333:94970][11568:140709665395952], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 7 through 8\"}}\n{“t”:{\"$date\":“2020-09-25T11:05:33.380-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:\"[1601046333:379508][11568:140709665395952], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 8 through 8\"}}\n{“t”:{\"$date\":“2020-09-25T11:05:33.565-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:\"[1601046333:565509][11568:140709665395952], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)\"}}\n{“t”:{\"$date\":“2020-09-25T11:05:33.566-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:\"[1601046333:565509][11568:140709665395952], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)\"}}\n{“t”:{\"$date\":“2020-09-25T11:05:33.806-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795906, “ctx”:“initandlisten”,“msg”:“WiredTiger opened”,“attr”:{“durationMillis”:1665}}\n{“t”:{\"$date\":“2020-09-25T11:05:33.809-04:00”},“s”:“I”, “c”:“RECOVERY”, “id”:23987, “ctx”:“initandlisten”,“msg”:“WiredTiger recoveryTimestamp”,“attr”:{“recoveryTimestamp”:{\"$timestamp\":{“t”:0,“i”:0}}}}\n{“t”:{\"$date\":“2020-09-25T11:05:33.829-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22262, “ctx”:“initandlisten”,“msg”:“Timestamp monitor starting”}\n{“t”:{\"$date\":“2020-09-25T11:05:33.901-04:00”},“s”:“W”, “c”:“CONTROL”, “id”:22120, “ctx”:“initandlisten”,“msg”:“Access control is not enabled for the database. Read and write access to data and configuration is unrestricted”,“tags”:[“startupWarnings”]}\n{“t”:{\"$date\":“2020-09-25T11:05:33.903-04:00”},“s”:“W”, “c”:“CONTROL”, “id”:22140, “ctx”:“initandlisten”,“msg”:“This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. 
If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning”,“tags”:[“startupWarnings”]}\n{“t”:{\"$date\":“2020-09-25T11:05:33.922-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:20536, “ctx”:“initandlisten”,“msg”:“Flow Control is enabled on this deployment”}\n{“t”:{\"$date\":“2020-09-25T11:05:34.204-04:00”},“s”:“I”, “c”:“FTDC”, “id”:20625, “ctx”:“initandlisten”,“msg”:“Initializing full-time diagnostic data capture”,“attr”:{“dataDirectory”:“C:/data/db/diagnostic.data”}}\n{“t”:{\"$date\":“2020-09-25T11:05:34.210-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:“127.0.0.1”}}\n{“t”:{\"$date\":“2020-09-25T11:05:34.210-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23016, “ctx”:“listener”,“msg”:“Waiting for connections”,“attr”:{“port”:27017,“ssl”:“off”}}C:>mongo\nMongoDB shell version v4.4.1\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { “id” : UUID(“93e33731-1848-4a22-aa76-a9b6f6e140cb”) }\nMongoDB server version: 4.4.1The server generated these startup warnings when booting:", "username": "Dmitry_Morozov" }, { "code": "", "text": "ctx”:“listener”,“msg”:“Waiting for connections”,“attr”:{“port”:27017,“ssl”:“off”}}Your log clearly shows it is waiting for connectionsctx”:“listener”,“msg”:“Waiting for connections”,“attr”:{“port”:27017,“ssl”:“off”}}You should be able to connect.Is that mongo snapshot full?.It says connectingYou can ignore those startup warnings.By default ACL is not enabled\nYou have to mention in yor config file or use --auth parameter if you are starting mongod on command lineAnother way to start a simple mongod on your system is to choose different port and dbpath\nmongod --port 28000 --dbpath “C:/…/…” give some valid path\nOnce mongod is up open another terminal and connect using\nmongo --port 28000", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I think you are failing to understand that I can connect, it’s just an awkward way of connection. I cannot connect to a database using just mongo command, unlike my classmates who use UNIX-based OSes. Or I don’t understand what you are trying to do.", "username": "Dmitry_Morozov" }, { "code": "", "text": "Your first post shows mongo failed to connectYour last but one post says you are getting same results as first post\nWhat i understood is you are facing issues in connecting to mongod\nNow i understand what you meant by “awkward way of connection”\nOn one session you are starting mongod and in another session you are trying to connect using mongo which you do not want\nThat is how it works in Windows\nIf you have mongod running as service you don’t need to start mongod eveytime\nJust go to cmd prompt and run mongo.It will connect\nOn Unix it uses fork option so your mongod will be running in the background\nThat is why your classmates don’t have to start mongod everytime and can connect just by issuing mongoHope it is clear now", "username": "Ramachandra_Tummala" }, { "code": "", "text": "You can connect with just mongo if you try to connect to a running mongod. It looks like, in your case, no mongod is running on your local host at default port 27017. That is why you have to start it.", "username": "steevej" }, { "code": "", "text": "O.K. Now I got it. Thank you very much!", "username": "Dmitry_Morozov" } ]
Windows10 Community version problem
2020-09-21T20:35:18.875Z
Windows10 Community version problem
22,956
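To make steevej's two alternatives concrete on Windows: a minimal config file plus the commands to run mongod with it, or to register it as a service so a bare mongo always connects. Paths are illustrative, and the service install needs an elevated prompt (the MSI's "Install MongoD as a Service" option sets this up automatically, which is what Ramachandra assumes):

```yaml
# C:\data\mongod.cfg (illustrative path)
systemLog:
  destination: file
  path: C:\data\mongod.log
storage:
  dbPath: C:\data\db
net:
  bindIp: 127.0.0.1
  port: 27017
```

```
REM Start manually with the config file:
mongod --config "C:\data\mongod.cfg"

REM Or register and start it once as a Windows service:
mongod --config "C:\data\mongod.cfg" --install --serviceName MongoDB
net start MongoDB
```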
null
[ "text-search" ]
[ { "code": "", "text": "We are trying to do an aggregate query with a $text command. This works, but then when we try to sort the results on one of the string fields we end up with case sensitive sorting, which is not what we are hoping for. If we add collation to the aggregation, then the query fails with an error.Is there any way to do this, or is this impossible with mongodb?Thanks", "username": "John_Shaver" }, { "code": "", "text": "I think you will need to specify the collation for the sort; documented at https://docs.mongodb.com/manual/reference/collation/. You will see that the default strength item is 3 which includes case so you will need to change it to 1.", "username": "Graeme_Henderson" }, { "code": "", "text": "I don’t see a way to specify a collation for the $sort. In fact I just found:You cannot specify multiple collations for an operation. For example, you cannot specify different collations per field, or if performing a find with a sort, you cannot use one collation for the find and another for the sort. I think collation for $text has to be “simple” therefore any query that involves $text can only do a case sensitive sort. Can anyone confirm this?I guess we’ll have to do the old hack of adding an extra field with a lowercase version of the column we’re sorting on and then sorting on that.", "username": "John_Shaver" } ]
$text and case sensitive sorting
2020-09-26T01:31:27.952Z
$text and case sensitive sorting
4,481
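A sketch of the shadow-field workaround John settles on above, assuming a hypothetical title field and MongoDB 4.2+ for pipeline-style updates:

```javascript
// One-off backfill of a lowercased copy of the sort field:
db.articles.updateMany({}, [{ $set: { title_lower: { $toLower: "$title" } } }]);

// Keep title_lower maintained on every write, then sort it alongside $text:
db.articles.find({ $text: { $search: "mongodb" } }).sort({ title_lower: 1 });
```

An ordinary index on title_lower keeps the sort cheap, and this sidesteps the simple-collation restriction John suspects.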
null
[ "schema-validation" ]
[ { "code": "", "text": "“Document would fail validation” is not a very helpful error message.\nI’ve looked in the log and there is no more information there.\nI have tried to eyeball my validation schema and I see no errors (though there must be one!).\nHow do I debug failed validation, please?", "username": "Jack_Woehr" }, { "code": "", "text": "Do validation rules imply an order to the members of the document when they are insterted?", "username": "Jack_Woehr" }, { "code": "", "text": "If you share your document that fails your schema and your schema, we could help better.", "username": "steevej" }, { "code": "", "text": "I’m not terribly eager to share that. I was just wondering if I am missing tools or error level settings that would make it easier to find the problem.", "username": "Jack_Woehr" }, { "code": "", "text": "No problem.I personally avoid schema validation because one thing I like about MongoDB is the schema less document storage. I fully unit test all my code so having a schema validation is more a hurdle than an help.Have fun and a good day.", "username": "steevej" }, { "code": "", "text": "@steevej I’m new to MongoDB and was exploring how far a collection can be made rigorous.I wasn’t really looking for someone to fix my problem, more like, point me to what tools I should be using or what config factors would make MongoDB identify for me the problem field that is causing the validation error.It seems so elementary a requirement that it’s hard to believe such a feature does not exist.", "username": "Jack_Woehr" }, { "code": "", "text": "https://jira.mongodb.org/browse/SERVER-20547Apparently there is no such feature, but 4.6 will have it.", "username": "Jack_Woehr" }, { "code": "", "text": "Hi @Jack_Woehr,Have you tried MongoDB Compass?\nThere is a tab for validation.\nimage1013×717 30.4 KB\nCheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Yes, @MaBeuLux88, the problem is not validation itself but debugging why validation has failed.\nThere is simply no user facility in MongoDB to help determine the cause of validation failure.", "username": "Jack_Woehr" } ]
Document would fail validation
2020-09-26T20:52:25.499Z
Document would fail validation
3,939
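One workaround for the debugging gap in this thread, usable until the detailed validation errors tracked under SERVER-20547 ship: run the collection's own validator as a query to locate offending documents. A sketch, assuming a hypothetical items collection whose validator is expressed as a query or $jsonSchema:

```javascript
// Pull the validator off the collection metadata:
const validator = db.getCollectionInfos({ name: "items" })[0].options.validator;

// Documents that do NOT satisfy the validator:
db.items.find({ $nor: [validator] });

// To test one candidate document, insert it into a scratch collection
// without validation and run the same $nor query there.
```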
null
[]
[ { "code": "[\n {\n \"fields\": {\n \"field_1\": { /* dynamic key */\n \"name\": \"f1\",\n \"first\": {\n \"check\": true\n }\n },\n \"field_2\": { /* dynamic key */\n \"name\": \"f2\",\n \"second\": {\n \"check\": true\n }\n },\n \"description\": \"abc\",\n \"summary\": {\n \"val\": \"xyz\"\n }\n }\n },\n {\n \"fields\": {\n \"field_1\": { /* dynamic key */\n \"name\": \"f1\",\n \"second\": {\n \"check\": false\n }\n },\n \"field_2\": null, /* dynamic key */\n \"field_3\": { /* dynamic key */\n \"name\": \"f3\",\n \"second\": {\n \"check\": true\n },\n \"first\": {\n \"check\": true\n }\n },\n \"description\": \"lmn\",\n \"summary\": {\n \"val\": \"abc\"\n }\n }\n }\n]\nfields.<*dynamicKey*>.firstfields.<*dynamicKey*>.second[\n {\n \"fields\": {\n \"field_1\": {\n \"name\": \"f1\",\n \"first\": {\n \"check\": true\n }\n },\n \"field_2\": {\n \"name\": \"f2\",\n \"second\": {\n \"check\": true\n }\n }\n }\n },\n {\n \"fields\": {\n \"field_1\": {\n \"name\": \"f1\",\n \"second\": {\n \"check\": false\n }\n },\n \"field_3\": {\n \"name\": \"f3\",\n \"second\": {\n \"check\": true\n },\n \"first\": {\n \"check\": true\n }\n }\n }\n }\n]\ndb.collection.aggregate([\n {\nI need a stage to filter out documents here\n },\n {\n $project: {\n data: {\n $objectToArray: \"$fields\"\n }\n }\n }\n", "text": "I need to group based on dynamic keys. So I’m using $objectToArray to query on dynamic keys.But I need to filter out the documents based on a condition so as to reduce the input going to $objectToArray. Because I have millions of documents and I just want only a subset of the object fields to be fed to the $objectToArray operator.My aim is to get better query performance by reducing the amount of data passed to $objectToArray operatorA sample format of my MongoDB schema:There are also many fields other than dynamic fields.I need to filter out the documents before being fed to $objectToArray operator based on the following condition:Expected output:How can I achieve this use case without changing my document structure?My aggregation query to group based on dynamic keys:", "username": "Tiya_Jose" }, { "code": "", "text": "HelloI think you ask something similar to thisIf the keys are uknown,the only way that i know is javascript and $function operator.Upvote that Jira(see second link) if you want also,but i think that jira is only for get($$doc,$$key),i tihnk we need remove also at least", "username": "Takis" }, { "code": "[{$project: {\n fields: {$objectToArray: \"$fields\"}\n}}, {$addFields: {\n \"shouldInclude\": {\n $map: {\n input: \"$fields\",\n as: \"myFields\",\n in: { $or: [\n { $ifNull: [\"$$myFields.v.first\", false] },\n { $ifNull: [\"$$myFields.v.second\", false] }\n ]\n }\n }\n }\n}}, {$addFields: {\n shouldInclude: {$anyElementTrue: [\"$shouldInclude\"]}\n}}, {$match: {\n shouldInclude: true\n}}]", "text": "I played with this for a bit and wasn’t able to figure out a way to do the filtering before doing $objectToArray. 
In case it’s helpful, here is a pipeline I created that does the filtering after $objectToArray.", "username": "Lauren_Schaefer" }, { "code": "db.collection.aggregate(\n [{\n $addFields: {\n has_first_or_second: {\n $function: {\n body: function(fields) {\n for (var f in fields) {\n if (Object.prototype.hasOwnProperty.call(fields[f], 'first')) {\n return true;\n }\n if (Object.prototype.hasOwnProperty.call(fields[f], 'second')) {\n return true;\n }\n }\n return false;\n },\n args: [\"$fields\"],\n lang: \"js\"\n }\n }\n }\n }, \n {\n $match: {has_first_or_second: true}\n },\n {\n $project: {has_first_or_second: 0}\n }\n ])\n", "text": "Given that you say specifically you want to do that without changing the document structure I assume you know that this data modeling is probably not ideal. Changing the data modeling will likely improve performance. But without changing the data model, I think the below solution works. Note that it just checks if there is a field called second not that the field value of second.check is “true” - I think that is what you wanted but am not sure… If that was not what you wanted it can be easily changed though. Note that you need to be using MongoDB 4.4 for this to work because it uses custom aggregation expressions", "username": "Naomi_Pentrel" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to filter out the documents based on dynamic keys before using $objectToArray?
2020-09-28T11:49:48.478Z
How to filter out the documents based on dynamic keys before using $objectToArray?
12,843
null
[ "ops-manager", "kubernetes-operator" ]
[ { "code": "", "text": "I deployed MongoDB Kubernetes Operator and an instance of Ops Manager. The thing is it’s working fine but can we change the URI to our desired URI.\nThe URI is ops-manager-svc.mongodb.svc.cluster.local\nCan we change this to our desired one.", "username": "Vikas_Jella" }, { "code": " externalConnectivity", "text": "Hi @Vikas_Jella,I think this is outside of operators duty and there is a setting called externalConnectivity which has a type field. By default it expects a load balancer to be mapped to the internal route but you can add verious service types like external DNS of your choice where you can map your desired URL…Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,\nCan you share any doc or link for creating service types like external DNS.\nShould I need to include it in externalConnectivity.annotations:Regards\nVikas Jella", "username": "Vikas_Jella" }, { "code": "", "text": "@Vikas_Jella,This is out of scope for MongoDB but you can find the link for kuberentes docsExpose an application running in your cluster behind a single outward-facing endpoint, even when the workload is split across multiple backends.I think in externalConnectivity.type …Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Can we change the Ops Manager URI
2020-09-24T19:38:12.641Z
Can we change the Ops Manager URI
2,768
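For reference, the relevant fragment of a MongoDBOpsManager resource showing the setting Pavel names above; a sketch only, and the external-dns annotation is a placeholder for whatever DNS controller is in use:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
spec:
  # ...replicas, version, applicationDatabase, etc. omitted...
  externalConnectivity:
    type: LoadBalancer        # or NodePort
    annotations:
      # hypothetical: lets an external-dns controller publish a friendly hostname
      external-dns.alpha.kubernetes.io/hostname: opsmanager.example.com
```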
https://www.mongodb.com/…200099cea3af.png
[ "app-services-user-auth" ]
[ { "code": "message: 'expected either accessToken, id_token or authCode in payload', code: 47message: 'error exchanging access code with OAuth2 provider', code: 47);$ 73 }).catch(console.error);$ 74 }$ 75 $ 76 /**$ 77 * Associate a Google login with a Realm user.$ 78 */$ 79 async function loginGoogle(id_token) {$ 80 // Log the user in to your app$ 81 console.log(\"ID TOKEN\");$ 82 console.log(id_token);$ 83 const credentials = Realm.Credentials.google(id_token);$ 84 console.log(\"CREDENTIALS\");$ 85 console.log(credentials);$ 86 await realm.logIn(credentials)$ 87 .then(user => {$ 88 console.log(", "text": "I’m building authentication for a NodeJS app with Realm.I’ve followed the docs to get Facebook and Google OAuth2 authentication working but Realm is not able to process the tokens properly.The error message for Facebook is message: 'expected either accessToken, id_token or authCode in payload', code: 47\nThe error message for Google is message: 'error exchanging access code with OAuth2 provider', code: 47I have my client IDs and secrets set up correctly so I don’t think this is the issue.\nThe tokens are I get from Google and Facebook seem correct and I am submitted them as strings to Realm.Credentials and then myRealmApp.logIn.Can anyone provide advice on what I am missing?Error messages from Realm logs:\noauths740×403 17.4 KBServer code (copy-pasted from Vim; ignore the at the end of the lines):\n 63 async function loginFacebook(accessToken) { 64 // Log the user in to your app$ 65 console.log(“ACCESS TOKEN”); 66 console.log(accessToken); 67 const credentials = Realm.Credentials.facebook(accessToken); 68 console.log(\"CREDENTIALS\"); 69 console.log(credentials); 70 await realm.logIn(credentials) 71 .then(user => { 72 console.log(`Logged in with id: {user.id});$ 73 }).catch(console.error);$ 74 }$ 75 $ 76 /**$ 77 * Associate a Google login with a Realm user.$ 78 */$ 79 async function loginGoogle(id_token) {$ 80 // Log the user in to your app$ 81 console.log(\"ID TOKEN\");$ 82 console.log(id_token);$ 83 const credentials = Realm.Credentials.google(id_token);$ 84 console.log(\"CREDENTIALS\");$ 85 console.log(credentials);$ 86 await realm.logIn(credentials)$ 87 .then(user => {$ 88 console.log(Logged in with id: {user.id}`); 89 }).catch(console.error); 90 } 91 92 /** 93 * App endpoints$ 94 */ 95 app.post('/login', (req, res) => { 96 provider = req.body.provider; 97 if (! provider) { 98 res.status(400).send(‘No authentication provider provided’); 99 } 100 switch (provider.toLowerCase()) { 101 case 'facebook': 102 accessToken = req.body.accessToken; 103 if (! accessToken) { 104 res.status(400).send(‘Missing access_token’); 105 } 106 return loginFacebook(accessToken).catch(console.error); 107 break; 108 case ‘google’: 109 id_token = req.body.id_token; 110 if (! 
id_token) { 111 res.status(400).send('Missing id_token'); 112 } 113 return loginGoogle(id_token).catch(console.error);\n…Requests from login page (the tokens appear to be correctly sent to the back-end):\n10 realmFacebookLogin = async function(fbResponse) { 11 accessToken = fbResponse.authResponse.accessToken; 12 await fetch(’/login’, { 13 method: 'POST', 14 headers: { 15 'Content-Type': 'application/json' 16 }, 17 body: JSON.stringify( 18 { 19 accessToken: accessToken, 20 provider: ‘facebook’ 21 } 22 ) 23 }); 24 25 const content = await rawResponse.json(); 26 27 console.log(content); 28 } 29 30 async function realmGoogleLogin(googleUser) { 31 const id_token = googleUser.getAuthResponse().id_token; 32 await fetch(’/login’, { 33 method: 'POST', 34 headers: { 35 'Content-Type': 'application/json' 36 }, 37 body: JSON.stringify( 38 { 39 id_token: id_token, 40 provider: ‘google’ 41 } 42 ) 43 }); 44 }$", "username": "Martin_Bradstreet" }, { "code": "", "text": "In case anyone else runs into this issue, it turns out that this is a problem on the developers’ side.See: expected either accessToken, id_token or authCode in payload when trying to use Facebook Authentication v10 · Issue #3109 · realm/realm-js · GitHub\nerror exchanging access code with OAuth2 provider (Google login, nodejs SDK) · Issue #3116 · realm/realm-js · GitHub", "username": "Martin_Bradstreet" }, { "code": "", "text": "Hey All - there appears to be hard-breaking change on the Google side. We are investigating a fix now with the Google engineering team", "username": "Ian_Ward" }, { "code": "", "text": "Any updates on this at all?", "username": "Will_Nixon" }, { "code": "", "text": "There is a fix for node.js in beta.13 as updated in this issue here - error exchanging access code with OAuth2 provider (Google login, nodejs SDK) · Issue #3116 · realm/realm-js · GitHub", "username": "Ian_Ward" }, { "code": "", "text": "Sorry, I should have been more specific… I’m using the Java version for Android, do we know if there are updates for that?", "username": "Will_Nixon" }, { "code": "", "text": "@Will_Nixon I think the Google OAuth works with the RealmJava SDK today - the problem seems to be in the docs - we erroneously called it googleToken and said access token in our docs whereas what you really need to do is pass in the Google auth code - see docs here:And follows this flow:\n\nimage (39)1516×1044 116 KB\nLet me know if that still doesn’t work for you, we are cleaning up the docs now.", "username": "Ian_Ward" }, { "code": "", "text": "I’ll have a look. At the moment, I don’t see how to get the auth code after receiving the access token - any ideas?It also doesn’t look like you get a “client secret” when creating credentials in Goodle, only a “client id”, unless I’m meant to manually add a “client secret” to the JSON that’s generated?", "username": "Will_Nixon" }, { "code": "", "text": "Also, this requires us to fetch a user id token but we’re unable to do that without a web server configured, which seems ridiculous. 
Are there examples y’all have over there for doing this correctly?", "username": "Will_Nixon" }, { "code": "private fun googleSignIn() {\n val inStream = this::class.java.getResourceAsStream(CREDENTIALS_FILE_PATH)\n ?: throw FileNotFoundException(MyApp.context.getString(R.string.fileNotFoundError))\n val clientSecrets =\n GoogleClientSecrets.load(JSON_FACTORY, InputStreamReader(inStream))\n\n val gso = GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)\n .requestServerAuthCode(clientSecrets.web.clientId, true)\n .requestEmail()\n .build()\n mGoogleSignInClient = GoogleSignIn.getClient(MyApp.context, gso)\n val signInIntent = mGoogleSignInClient.signInIntent;\n startActivityForResult(signInIntent, RC_SIGN_IN);\n}\n\noverride fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {\n super.onActivityResult(requestCode, resultCode, data)\n if (requestCode == RC_SIGN_IN) {\n val task = GoogleSignIn.getSignedInAccountFromIntent(data)\n handleSignInRequest(task)\n }\n}\n\nprivate fun handleSignInRequest(task: Task<GoogleSignInAccount>) {\n try {\n val account = task.getResult(ApiException::class.java)!!\n MyApp.sharedPreferences.edit().putString(getString(R.string.preferences_authCode), account.serverAuthCode)\n realmSignIn(account.serverAuthCode!!)\n } catch (error: ApiException) {\n println(\"Sign-in error: ${error.statusCode}\")\n Log.d(SIGN_IN_TAG,\"Sign-in error: ${error.statusCode}\")\n }\n}\n\nprivate fun realmSignIn(authCode: String) {\n val credential = Credentials.google(authCode)\n MyApp.realmApp.loginAsync(credential) {\n if (it.isSuccess) {\n myHandler.postDelayed({\n goToMainActivity()\n }, splashTime)\n } else {\n println(\"Error logging in: ${it.error}\")\n Log.d(SIGN_IN_TAG,\"Error logging in: ${it.error}\")\n }\n }\n}", "text": "Actually @Ian_Ward, it looks like even if I use Google Sign-In and fetch my auth code, I still get an \"error exchanging access code with oauth provider.", "username": "Will_Nixon" }, { "code": "", "text": "@Will_Nixon Are you able to email at [email protected] with your Realm Dashboard URL? I’d like to take a look from our side", "username": "Ian_Ward" }, { "code": "", "text": "Hi Ian,I’ve cracked it! The problem is that you have to create a Web Client in Google as well as your Android client - no one seems to be massively clear about this, only that Google Sign-In creates one automatically (it hadn’t - I had to try and generate one from the Google Sign-In docs, although it wasn’t working and then suddenly there it was!).With that Web Client, I could then use that Client ID inside the Android app to make calls to get the auth code to pass to MongoDB/Realm.The final problem was that the Mongo docs are very unclear about this - you ask for a Client ID and Client secret, but it’s the WEB CLIENT id and secret you need, not the Android client id (which has no secret). That’d be really helpful to make clear in your docs and on the Realm Dashboard. I only figured it out by trying it just now before sending you the link!", "username": "Will_Nixon" }, { "code": "", "text": "@Will_Nixon Thanks for the feedback and I’m glad you were able to get it working. Generally I’d prefer to leave the docs to the 3rd party provider (Google) since we should just be a wrapper around their implementation but it sounds like there are enough issues with this logIn that we should provide them as well. 
I’ll look to get that sorted - it’s interesting that we don’t have these kinds of problems with our other built-in providers like Facebook and Apple.", "username": "Ian_Ward" } ]
Problems with Facebook and Google OAuth2
2020-08-07T16:30:15.597Z
Problems with Facebook and Google OAuth2
5,205
null
[ "atlas-functions" ]
[ { "code": "Failed to upload node_modules.tar.gz: unknown: Argument name clash in strict mode \n(54:29) 52 | }, 53 | 'and removing 1 token': { > 54 | topic: function(_, _, bucket) { | ^ 55 | \nthis.gStart = +new Date(); 56 | bucket.removeTokens(1, this.callback); 57 | },\n", "text": "Hi,Previously I have followed the docs at https://docs.mongodb.com/realm/functions/upload-external-dependencies/ to upload axios and dependencies into Realm and I am now trying to do the same for limiter - npm. However when I upload the tar I get an errorI’ve read the other thread on here with the same issue and tried older versions of the package but it consistently fails. I’ve tried using a node_modules dir that only contains the new package, and I’ve tried a node_modules dir that includes the previously uploaded packages. Both consistently fail.Can anyone offer advice on this? Is there a way to delete the packages I have already uploaded?Update - I’ve tested with an empty node_modules directory and the issue persists so it doens’t seem related to the modules", "username": "Stuart_Brown" }, { "code": "", "text": "Hi Stuart,Thanks for reporting this, I was able to reproduce and will check-in with the team to investigate and fix.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Thanks @Sumedha_Mehta1. Happy to help test if necessary.", "username": "Stuart_Brown" }, { "code": "", "text": "Hi @Sumedha_Mehta1 - do you have a rough idea of when this might be addressed? Is there any workaround?", "username": "Stuart_Brown" }, { "code": "", "text": "Hey Stuart - we don’t have another release till next week and depending on the size of the bug, it could also take a bit more. However is it possible to use another rate limiting package that is similar? (e.g. Bottleneck - https://www.npmjs.com/package/bottleneck seemed to work for me)", "username": "Sumedha_Mehta1" } ]
Failed to upload node_modules.tar.gz: unknown: Argument name clash in strict mode
2020-09-23T09:59:49.836Z
Failed to upload node_modules.tar.gz: unknown: Argument name clash in strict mode
2,365
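A sketch of the Bottleneck alternative Sumedha suggests above, as it might look inside a Realm function once the package is uploaded; the URL and rate numbers are arbitrary:

```javascript
exports = async function() {
  const Bottleneck = require("bottleneck");
  // At most 5 concurrent jobs, and at least 200 ms between job starts:
  const limiter = new Bottleneck({ maxConcurrent: 5, minTime: 200 });

  const fetchItem = (n) => context.http.get({ url: `https://example.com/items/${n}` });
  // schedule() queues each call through the limiter and resolves with its result:
  const responses = await Promise.all([1, 2, 3].map((n) => limiter.schedule(() => fetchItem(n))));
  return responses.length;
};
```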
null
[]
[ { "code": "", "text": "Hi,I’d like to potentially use Realm mobile on iOS to gather some time series data. I am interested in syncing with a time series database on the backend such as InfluxDB using the TICK stack. I am not that interested in in ROS as I don’t need to sync my data across multiple devices (this is primarily a data acquisition system for machine learning purposes). Is it possible to create a background sync service for Realm mobile database that syncs to my own backend database such as Influx rather than the realm object server. Alternatively, can I sync the realm object server to InfluxDB (for example)?Any thoughts or examples would be greatly appreciated.Thanks.", "username": "Jon_Lederman" }, { "code": "", "text": "We are working on a Sync service for connecting MongoDB Realm to MongoDB Atlas, Realm Sync. Getting Realm to sync with other database vendors is not currently on the Roadmap.", "username": "Joe_Drumgoole" }, { "code": "", "text": "@Jon_Lederman The Realm Mobile SDKs will sync directly with a MongoDB Atlas cluster. Once the data is there you are welcome to shuttle it to another datastore, such as InfluxDB. We have other user’s who use a Kafka connector to shuttle data back and forth between MongoDB and another datastore - perhaps this could work for you", "username": "Ian_Ward" } ]
Syncing Realm Mobile Database with InfluxDb
2020-09-27T14:21:14.410Z
Syncing Realm Mobile Database with InfluxDb
2,114
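A sketch of the shuttle Ian mentions above: the MongoDB Kafka source connector watching the synced Atlas collection, from which a downstream InfluxDB sink could consume. The connection string and namespace are placeholders:

```json
{
  "name": "realm-timeseries-source",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb+srv://<user>:<password>@cluster0.example.mongodb.net",
    "database": "telemetry",
    "collection": "samples",
    "publish.full.document.only": "true"
  }
}
```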
https://www.mongodb.com/…1e9710423435.png
[ "atlas" ]
[ { "code": "", "text": "I’m trying to get a free tier cluster. But, after I click on Create Cluster from the shared cluster selection, I’m not getting Free Tier available titled server in either of GCP, AWS or Azure. I’m attaching the screenshot. ", "username": "Devanshu_Mevada" }, { "code": "", "text": "Hi Devanshu!Welcome to the forums… On the screenshot above, what happens when you click the Cluster Tier M0 Sandbox? That should get you what you want. You are allowed 1 M0 cluster per project.Karen", "username": "Karen_Huaulme" } ]
Not seeing any Free Tier servers in either GCP, AWS or AZURE
2020-04-02T12:02:37.339Z
Not seeing any Free Tier servers in either GCP, AWS or AZURE
1,832
null
[]
[ { "code": "// already logged in\nlet userIdentity = \"fjwe3akj4d308fj2okljt439fwe127ahkf\" // target user's identity\nlet realmURL = URL(string: \"realms://myapp.us1.cloud.realm.io/\\(userIdentity)/PrivateSyncRealm\")!\nlet configPrivate = currentUser.configuration(realmURL: realmURL, fullSynchronization: true, enableSSLValidation: true, urlPrefix: nil)\n\nRealm.asyncOpen(configuration: configPrivate) { realm, error in\n if let realm = realm {\n print(\"success\")\n let user = realm.objects(User.self).filter(String(format: \"syncuserid == '%@'\", userIdentity))\n print(user[0])\n \n } else if let error = error {\n print(\"failed\")\n print(error)\n }\n \n}\nfailed\n\n**Domain=io.realm.unknown Code=89 \"Operation canceled\" \nUserInfo={Category=realm.basic_system, \nNSLocalizedDescription=Operation canceled, Error Code=89}**\nHTTP response: ◯◯◯◯◯◯ {\"type\":\"https://docs.realm.io/server/troubleshoot/errors#access-denied\",\"title\":\"The path is invalid or current user has no access.\",\"status\":403,\"code\":614}", "text": "I’m developing ios app with Realm Cloud.\nMy app has multiple realm; one Common realm and many Private(per-user) realm.\nI logged in realm server with specific user and tried to open another user’s Private realm following this method.I got an error.I think that permission of the user could be a factor? (but don’t know how to modify)How to solve this error and get data from Private realm of another user.target realm: Sync - Full\nlogs: HTTP response: ◯◯◯◯◯◯ {\"type\":\"https://docs.realm.io/server/troubleshoot/errors#access-denied\",\"title\":\"The path is invalid or current user has no access.\",\"status\":403,\"code\":614}", "username": "Shi_Miya" }, { "code": "let configPrivate = currentUser.configuration(realmURL: realmURL, fullSynchronization: true, enableSSLValidation: true, urlPrefix: nil)", "text": "Lets back up a step. Are you using ‘legacy’ Realm or MongoDB Realm? It looks like it’s kinda both and if so, that’s not going to work.The linked question and the code provided by @Ian_Ward is for MongoDB Realm and involves partitions which don’t exist in ‘legacy’ Realm. But this codelet configPrivate = currentUser.configuration(realmURL: realmURL, fullSynchronization: true, enableSSLValidation: true, urlPrefix: nil)is how legacy Realm would connect.If you’re going with MongoDB Realm (which you should be at this point). Go through the getting started guide Sync Data section to see how to connect up to MongoDB Realm and how to utilize partitions.Once you do that, the code provided by Ian will make a lot more sense.I may just be not looking at the question correctly.", "username": "Jay" }, { "code": "", "text": "Just a moment ago, I knew that Realm is different from MongoDB Realm.\nI’m using legacy Realm Cloud, so I’d like to user it for the time being.", "username": "Shi_Miya" }, { "code": "", "text": "@Shi_Miya Are you trying to open a realm that is private for the user doing the opening? If so, you can use the tilda - ~ - to automatically fill this in like so:\nrealms://myapp.us1.cloud.realm.io/~/PrivateSyncRealmIf you are trying to open another user’s realm then you need to grant permission to the user performing the open - see here:", "username": "Ian_Ward" }, { "code": "", "text": "Thank you for your explanation.", "username": "Shi_Miya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I got an error while opening another user's realm
2020-09-24T07:09:35.671Z
I got an error while opening another user&rsquo;s realm
1,639
null
[ "kafka-connector" ]
[ { "code": "", "text": "Hi all, I’m almost crying xD\nI tried tu use this config https://docs.mongodb.com/kafka-connector/v1.2/kafka-sink-cdc/ with a post processor but… when I add “change.data.capture.handler”: “com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbHandler” my post processor block stops to work.Example, if I not put change.data.capture.handler and add post processor to block a field op from Debezium, the connector not write at mongo de op field (just for testing). But when I add change.data.capture.handler with post processor to block the field of my entity, nohting happens.Any idea?", "username": "Joao_Mello" }, { "code": "", "text": "I got the same problem. Anyone can help please?", "username": "Nam_Phan" }, { "code": "", "text": "Are there any errors or warnings in the kafka connect log?", "username": "Robert_Walters" }, { "code": "", "text": "Can you try using the postgres handler instead of the MongoDBHandler?“change.data.capture.handler”: “com.mongodb.kafka.connect.sink.cdc.debezium.rdbms.postgres.PostgresHandler”", "username": "Robert_Walters" } ]
Kafka Connect Sink with Debezium Handler + Post Processor
2020-09-15T01:48:37.342Z
Kafka Connect Sink with Debezium Handler + Post Processor
2,976
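For anyone reproducing Joao's setup, the combination under discussion looks roughly like this in a sink config; the connection values and topic are placeholders, and the projector class and property names should be verified against the installed connector version, since they have changed across releases:

```json
{
  "name": "mongo-cdc-sink",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "dbserver1.inventory.customers",
    "connection.uri": "mongodb://localhost:27017",
    "database": "inventory",
    "collection": "customers",
    "change.data.capture.handler": "com.mongodb.kafka.connect.sink.cdc.debezium.mongodb.MongoDbHandler",
    "post.processor.chain": "com.mongodb.kafka.connect.sink.processor.BlacklistValueProjector",
    "value.projection.type": "blacklist",
    "value.projection.list": "op"
  }
}
```

One plausible reading of the symptom: post processors act on the sink document the connector builds from the record value, and a CDC handler replaces that build step entirely, which would explain the projector appearing to do nothing once the handler is set. Worth confirming against the docs for the version in use.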
null
[]
[ { "code": "{\"t\":{\"$date\":\"2020-09-28T14:06:33.511+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2020-09-28T14:06:33.523+03:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2020-09-28T14:06:33.525+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2020-09-28T14:06:33.527+03:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":17248,\"port\":27017,\"dbPath\":\"/Users/dell/mongodb-data\",\"architecture\":\"64-bit\",\"host\":\"DESKTOP-6RKH9CH\"}}\n", "text": "Below is some of the error", "username": "Adel_minwer" }, { "code": "", "text": "Can you show more details from that log\nIt says mongod startingWhat error you are getting when you run mongo?\nA screen shot will help", "username": "Ramachandra_Tummala" } ]
Error while trying to run mongo
2020-09-28T11:50:36.562Z
Error while trying to run mongo
1,967
null
[ "mongodb-shell" ]
[ { "code": "mongoshnpm run compile-exec", "text": "I’ve been casually following mongosh\nToday’s pull in npm run compile-exec wants to build Node.js v12.18.4\nThis is weird because I have that version of Node installed and as my default.\nAny tips to make the build behave?", "username": "Jack_Woehr" }, { "code": "mongoshwget https://s3.amazonaws.com/mciuploads/mongosh/<commit>/mongosh-0.4.0-linux.tgz", "text": "The reason we’ve made that switch is because an approach that compiles Node.js from scratch, with custom mongosh-specific parts, allows us to generate binaries that:Do you have a specific use case for compiling binaries locally? We’re mostly focusing on producing executables that match what would be produced in an actual release here.(Fwiw, you can always also download the release artifacts from CI using e.g. wget https://s3.amazonaws.com/mciuploads/mongosh/<commit>/mongosh-0.4.0-linux.tgz.)", "username": "Anna_Henningsen" }, { "code": "", "text": "Do you have a specific use case for compiling binaries locally? We’re mostly focusing on producing executables that match what would be produced in an actual release here.@Anna_Henningsen, I was just doing the open source thing of building on the local machine. I’ll try your suggestion.", "username": "Jack_Woehr" }, { "code": "", "text": "Okay, download of the artifact works for me.\nAnd I’ll try the build on a faster machine \nThanks @Anna_Henningsen", "username": "Jack_Woehr" }, { "code": "", "text": "Was able to build on more powerful machine, thanks again @Anna_Henningsen.", "username": "Jack_Woehr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongosh npm run compile-exec tries to build Node v12.18.4 already installed
2020-09-27T23:21:21.105Z
Mongosh npm run compile-exec tries to build Node v12.18.4 already installed
2,488