image_url | tags | discussion | title | created_at | fancy_title | views
---|---|---|---|---|---|---
null |
[
"dot-net"
] |
[
{
"code": "",
"text": "I’m currently learning and the way I connect to the database is by using one client for each collection.\nnow I have 7 collections which is 7 MongoClients, should I change my code to use one client only ?",
"username": "Ay.Be"
},
{
"code": "MongoClientMongoClient",
"text": "Because each MongoClient represents a pool of connections to the database, most applications require only a single instance of MongoClient, even across multiple requestsFrom https://www.mongodb.com/docs/drivers/csharp/current/fundamentals/connection/connect/",
"username": "Kobe_W"
}
] |
Using multiple MongoClient c#
|
2023-08-27T01:32:08.779Z
|
Using multiple MongoClient c#
| 471 |
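A minimal sketch of the single-client pattern described in the thread above, shown with the Node.js driver for brevity (the C# driver follows the same client-equals-pool model); the URI, database, and collection names are placeholders:

```javascript
// One client (and therefore one connection pool) shared by the whole app,
// instead of one client per collection.
const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI

async function main() {
  await client.connect();
  const db = client.db("mydb");
  // All collections reuse the same client and pool.
  const users = db.collection("users");
  const orders = db.collection("orders");
  console.log(await users.countDocuments(), await orders.countDocuments());
  await client.close();
}

main().catch(console.error);
```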
null |
[
"app-services-user-auth",
"android",
"flutter"
] |
[
{
"code": "Exception has occurred.\n\nAppException (AppException: non-zero custom status code considered fatal, link to server logs: null)\nFailed host lookup: 'realm.mongodb.com'\n\nOSError (OS Error: No address associated with hostname, errno = 7)\n\"No address associated with hostname\"\nfinal jwtCredentials = Credentials.jwt(token);\ncurrentUser = await app.logIn(jwtCredentials); // Problematic\n",
"text": "I’m getting the following error in my Flutter app when trying to run it on an Android Emulator.The source of the error seems to be an OS Error:The code being called triggering the issue is the registration functionality, which is mostly just from the documentation.I’ve been trying to figure out why I suddenly started receiving this error but I am out of ideas. For a second if I skipped the errors, it would still continue and let me use my app as expected. But now I’m unable to do that either.I haven’t changed any permissions, and I have the Internet permission added already. I do have a working internet connection and the browser on the emulator works as well. I tried restarting the emulator and my computer but that didn’t work either.I tried with realm version 1.3.0 and 1.4.0 but I’m getting the same issue. I’m not sure what the issue is since as far as I know, I didn’t change anything. I even tried reverting anything I might have changed, but this error is still persisting. My cluster and app services are on GCP.Appreciate any help, thanks!",
"username": "Akansh"
},
{
"code": "realm.mongodb.comnslookupadb shell\nping realm.mongodb.com\nping: unknown host realm.mongodb.com\ngoogle.comping google.com\nping 8.8.8.8\n",
"text": "You cannot lookup realm.mongodb.com from your android emulator. This is not really a realm issue.Did you by any chance disable internet on the emulator (pull down from the top on recent emulator images).Can you resolve the name with nslookup on your development machine?If you can, can you do the same on the emulator:if this fails withthen try google.com as well:if it also fails then try:What did you learn?BTW, A quick search with google send me to this SO thread: emulation - Android emulator not able to access the internet - Stack OverflowMaybe it can be helpful as well",
"username": "Kasper_Nielsen1"
},
{
"code": "realm.mongodb.comnslookupnslookup\nDefault Server: dns.google\nAddress: 8.8.8.8\n\n> realm.mongodb.com\nServer: dns.google\nAddress: 8.8.8.8\n\nNon-authoritative answer:\nName: us-east-2.lb-b.r53.aws.cloud.mongodb.com\nAddress: 3.136.69.246\nAliases: realm.mongodb.com\n global.aws.realm.mongodb.com\n global.lb-b.r53.aws.cloud.mongodb.com\n",
"text": "I wasn’t manually looking up realm.mongodb.com in the emulator, it’s just from Realm functionality. Also, it was working just fine for months. I didn’t disable the internet on the emulator either. Browsing through Chrome is working normally on the emulator still. Also, requests to Firebase Auth are going through normally still.This is the output of nslookup on my machine:Pinging all of the addresses doesn’t fail with an unknown host error, but it seems I’m getting 100% packet loss with 0 packets received. I’m not sure why this could be the case.I’ve tried pretty much all of the suggested solutions in the SO thread but still facing the same issue.",
"username": "Akansh"
},
{
"code": "adb shell",
"text": "Could you do adb shell and the suggested pings as well?\nEDIT: Or did you already do that?",
"username": "Kasper_Nielsen1"
},
{
"code": "adb shell",
"text": "My apologies, I should have been clearer.The pings on my development machine are run successfully. In the emulator using adb shell is where I’m getting the 100% packet loss.Also, I just realized, is there a reason there is no GCP alias being shown with nslookup? If I try to visit the global gcp alias in a browser it seems to load as intended. My cluster and Atlas app are both on GCP. In case that is relevant to the situation.",
"username": "Akansh"
},
{
"code": "❯ adb -s emulator-5554 shell\nemu64a:/ $ ping realm.mongodb.com\nPING eu-central-1.lb-b.r53.aws.cloud.mongodb.com (35.157.33.91) 56(84) bytes of data.\n64 bytes from ec2-35-157-33-91.eu-central-1.compute.amazonaws.com (35.157.33.91): icmp_seq=1 ttl=255 time=40.1 ms\n64 bytes from ec2-35-157-33-91.eu-central-1.compute.amazonaws.com (35.157.33.91): icmp_seq=2 ttl=255 time=31.1 ms\n64 bytes from ec2-35-157-33-91.eu-central-1.compute.amazonaws.com (35.157.33.91): icmp_seq=3 ttl=255 time=34.5 ms\nrealm.mongodb.comrealm.mongodb.comgoogle.com8.8.8.8",
"text": "On my end, when running ping under adb, it works fine (just to proof that ICMP does indeed work with realm.mongodb.com):We need to get your network in shape, so that the emulator can do DNS lookups. The realm SDK needs to do a DNS lookup of realm.mongodb.com, and it need to be able to connect with https on port 443.When you say you get 100% packet loss on adb, is that for both realm.mongodb.com, google.com, and 8.8.8.8?Also, what emulator image (API level) are you using? I’m using 33.",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "Interesting, I’m not sure what changed suddenly for me.Also yes, the 100% packet loss is in adb. The pings are being sent and received just fine from my local machine, just not from adb it seems.I’m using API level 33 as well.",
"username": "Akansh"
},
{
"code": "netstat -nremu64a:/ # netstat -nr\nKernel IP routing table\nDestination Gateway Genmask Flags MSS Window irtt Iface\n10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0\n10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0\n",
"text": "What is the output of netstat -nr on adb on your end. For me it is:Another thing to consider - are you running a local firewall on your developer machine?I general I can recommend reading the emulator docs on networking: Set up Android Emulator networking | Android Studio | Android Developers to get aquainted with the caveats",
"username": "Kasper_Nielsen1"
},
{
"code": "emu64xa:/ # netstat -nr\nKernel IP routing table\nDestination Gateway Genmask Flags MSS Window irtt Iface\n10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0\n10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0\n",
"text": "This is the output for adb on my end:I did have a firewall initially but when issues started occurring, I disabled it. Disabling it didn’t resolve the issues.Thanks for your help and linking the networking docs, I’ll take a look through those.",
"username": "Akansh"
}
] |
App Services Error: Failed host lookup
|
2023-08-25T07:48:32.244Z
|
App Services Error: Failed host lookup
| 863 |
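Since the thread narrows the problem to name resolution rather than the SDK, a quick independent check is to resolve the hostname directly. A hedged sketch using Node.js purely as a portable lookup tool (the hostname is the one from the thread):

```javascript
// If this fails on the same network as the emulator, the problem is DNS,
// not the Realm SDK.
const dns = require("dns");

dns.lookup("realm.mongodb.com", (err, address) => {
  if (err) {
    console.error("lookup failed:", err.code); // e.g. ENOTFOUND
  } else {
    console.log("resolved to", address);
  }
});
```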
null |
[
"database-tools",
"backup"
] |
[
{
"code": "",
"text": "Hi there,I need help to convert the below filter\n{CREATED_ON:{$lt:new Date(“2023-07-01”)}}To the below format to be able to use it in mongodump:{“CREATED_ON”:{“$lt”: {“$timestamp”:{“t”: 1688158800, “i”: 1}}}}I tried to use unix epoch online converter but the above value doesn’t fetch any output.Please advise",
"username": "Abdullah_Madani"
},
{
"code": "db.aggregate([\n {\n // let's assume, you have this example document\n // in your solection\n $documents: [\n {\n name: 'Sashko',\n birthday: ISODate('2023-07-01'),\n }\n ]\n },\n // caluclate new field that would contain \n // converted timestamp\n {\n $addFields: {\n birthdayTimestamp: {\n $toLong: '$birthday'\n }\n }\n },\n // then we can match documents by timestamp\n {\n $match: {\n birthdayTimestamp: 1688169600000\n }\n }\n]);\n[\n {\n name: 'Sashko',\n birthday: ISODate(\"2023-07-01T00:00:00.000Z\"),\n birthdayTimestamp: Long(\"1688169600000\")\n }\n]\nbirthdayTimestamp",
"text": "Hello, @Abdullah_Madani !To be able to query date fields with timestamps, you will need to convert fields value first. It can be done in the aggregation pipeline with $toLong operator:Output:You can remove birthdayTimestamp field from output using $project stage.",
"username": "slava"
},
{
"code": " {CREATED_ON:{$lt:new Date(\"2023-07-01\")}}mongodump{\"CREATED_ON\": {\"$lt\": {\"$date\":{\"$numberLong\":\"1688158800000\"}}}}\n{\"CREATED_ON\": {\"$lt\": {\"$date\":\"2023-07-01T00:00:00+03:00\"}}}\n",
"text": "Hi @Abdullah_MadaniAssuming {CREATED_ON:{$lt:new Date(\"2023-07-01\")}} returns the documents you’re interested in the format for mongodump query, depending on whether it is canonical or relaxed format is:orFrom mongodump docsThe query must be in Extended JSON v2 format (either relaxed or canonical/strict mode), including enclosing the field names and operators in quotes. For example:Here is the extended JSON v2 format for dates\nhttps://www.mongodb.com/docs/manual/reference/mongodb-extended-json/#mongodb-bsontype-Date",
"username": "chris"
}
] |
How to convert "new Date" filter to timestamp format
|
2023-08-27T14:29:51.653Z
|
How to convert “new Date” filter to timestamp format
| 731 |
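For reference, the millisecond value used in the canonical Extended JSON form can be derived directly from the original filter's Date; a short sketch (the 1688158800000 seen in the thread is the same calendar date expressed at UTC+3):

```javascript
// Extended JSON v2 $date/$numberLong is milliseconds since the Unix epoch,
// which is exactly what JavaScript's Date provides.
const cutoff = new Date("2023-07-01"); // date-only ISO strings parse as UTC midnight
console.log(cutoff.getTime()); // 1688169600000

// A UTC cutoff in canonical Extended JSON for mongodump's --query:
const query = { CREATED_ON: { $lt: { $date: { $numberLong: "1688169600000" } } } };
console.log(JSON.stringify(query));
```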
null |
[
"realm-web"
] |
[
{
"code": " const app = new Realm.App({ id: xxx' });\n const credentials = Realm.Credentials.emailPassword('[email protected]', 'xxxx');\n // Authenticate the user\n const user = await app.logIn(credentials);\n // `App.currentUser` updates to match the logged in user\n console.assert(user.id === app.currentUser.id);\n",
"text": "Hi. I don’t have a frontend server. I have created a plain html file that has javascript to connect to a database with this code:I get error ERR_CERT_COMMON_NAME_INVALID.As there is no real website but an html file running on the browser I have no SSL certificate.I have tried to host the html file in services like CloudFlare with the same result.What can I do?Thanks.",
"username": "Eduardo_Cobian"
},
{
"code": "{ id: xxx' }",
"text": "{ id: xxx' }Understanding the xxx is the AppId, is that the actual code with the extra ’ in it? Or is that just a typo here?Are you attempting to connect to Atlas App Services? Also, it feels like the code is missing .config or perhaps need to use .getApp? The environment is a little unclear so that’s just a guess.",
"username": "Jay"
},
{
"code": " this.db = user.mongoClient('mongodb-atlas').db('xxx');\n",
"text": "Hi Thanks for answering.Yes it’s { id: ‘xxx’ });I am trying to connect to an Atlas database.\nI haven’t seen an getApp call in the documentation.The only missing part was a finall:This code is javascript running on a element.",
"username": "Eduardo_Cobian"
}
] |
I get error ERR_CERT_COMMON_NAME_INVALID when accessing Realm from plain html file
|
2023-08-27T12:19:17.994Z
|
I get error ERR_CERT_COMMON_NAME_INVALID when accessing Realm from plain html file
| 497 |
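Before chasing certificates, it is worth ruling out the unbalanced quote around the app id in the posted snippet. A cleaned-up version of the same realm-web flow; the app id, credentials, database, and collection names below are placeholders:

```javascript
// Same flow as the thread's snippet, with balanced quotes and a final query.
const app = new Realm.App({ id: "your-app-id" });

async function run() {
  const credentials = Realm.Credentials.emailPassword("user@example.com", "password");
  // Authenticate the user; App.currentUser updates to match the logged-in user.
  const user = await app.logIn(credentials);
  console.assert(user.id === app.currentUser.id);
  // Query Atlas through the built-in MongoDB service client.
  const db = user.mongoClient("mongodb-atlas").db("your-db");
  console.log(await db.collection("your-collection").findOne({}));
}

run().catch(console.error);
```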
null |
[
"containers",
"storage"
] |
[
{
"code": "",
"text": "Hi,\nI have a MongoDB version 4.2 on a docker container.\nYesterday night Mongo suddenly crashed.\nThe log shows a “getMore” operation, and than a server restart:2023-08-27T04:38:40.317+0300 I COMMAND [conn33] command octdb.device_audit_hourly command: getMore { getMore: 5495031475288944971, collection: “device_audit_hourly”, $db: “octdb”, $2023-08-27T07:05:42.398+0300 I CONTROL [main] ***** SERVER RESTARTED *****\n2023-08-27T07:05:42.404+0300 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’\n2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27016 dbpath=/data/db 64-bit host=1.1.1.79\n2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] db version v4.2.3\n2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] git version: 6874650b362138df74be53d366bbefc321ea32d4\n2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018\n2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] allocator: tcmalloc\n2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] modules: none\n2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] build environment:\n2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] distmod: ubuntu1804\n2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] distarch: x86_64\n2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] target_arch: x86_64\n2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] options: { config: “/etc/mongod.conf”, net: { bindIp: “*”, port: 27016 }, processManagement: { timeZoneInfo: “/usr/share/zoneinfo” }, replication: { replSetName: “octopusrs0” }, security: { authorization: “enabled”, keyFile: “/etc/mongo-keyfile” }, storage: { dbPath: “/data/db”, journal: { enabled: true } }, systemLog: { destination: “file”, logAppend: true, path: “/var/log/mongodb/mongod.log” } }\n2023-08-27T07:05:42.666+0300 W STORAGE [initandlisten] Detected unclean shutdown - /data/db/mongod.lock is not empty.\n2023-08-27T07:05:42.666+0300 I STORAGE [initandlisten] Detected data files in /data/db created by the ‘wiredTiger’ storage engine, so setting the active storage engine to ‘wiredTiger’.\n2023-08-27T07:05:42.666+0300 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.\n2023-08-27T07:05:42.666+0300 I STORAGE [initandlisten]\n2023-08-27T07:05:42.666+0300 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine\n2023-08-27T07:05:42.666+0300 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem\n2023-08-27T07:05:42.666+0300 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=63752M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],\n2023-08-27T07:05:43.279+0300 I STORAGE [initandlisten] WiredTiger message [1693109143:279137][1:0x7fbd88dafb00], txn-recover: Recovering log 3688 through 3689\n2023-08-27T07:05:43.325+0300 I STORAGE [initandlisten] WiredTiger message [1693109143:325574][1:0x7fbd88dafb00], txn-recover: Recovering log 3689 through 3689\n2023-08-27T07:05:43.438+0300 I STORAGE [initandlisten] WiredTiger message [1693109143:438361][1:0x7fbd88dafb00], txn-recover: Main recovery loop: starting at 
3688/60554624 to 3689/256\n2023-08-27T07:05:43.439+0300 I STORAGE [initandlisten] WiredTiger message [1693109143:439450][1:0x7fbd88dafb00], txn-recover: Recovering log 3688 through 3689\n2023-08-27T07:05:43.480+0300 I STORAGE [initandlisten] WiredTiger message [1693109143:480413][1:0x7fbd88dafb00], file:sizeStorer.wt, txn-recover: Recovering log 3689 through 3689\n2023-08-27T07:05:43.538+0300 I STORAGE [initandlisten] WiredTiger message [1693109143:538390][1:0x7fbd88dafb00], file:sizeStorer.wt, txn-recover: Set global recovery timestamp: (1693100255, 1)\n2023-08-27T07:05:43.559+0300 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(1693100255, 1)\n2023-08-27T07:05:43.574+0300 I STORAGE [initandlisten] Starting OplogTruncaterThread local.oplog.rs\n2023-08-27T07:05:43.574+0300 I STORAGE [initandlisten] The size storer reports that the oplog contains 28315505 records totaling to 53595282159 bytes\n2023-08-27T07:05:43.574+0300 I STORAGE [initandlisten] Sampling the oplog to determine where to place markers for truncation\n2023-08-27T07:05:43.575+0300 I STORAHow can I diagnose the reason for the crash?Thanks,\nTamar",
"username": "Tamar_Nirenberg"
},
{
"code": "",
"text": "Nothing to indicate why in the mongod log.Check your docker, server and kernel logs between the getMore and server restart timestamps.",
"username": "chris"
}
] |
MongoDB suddenly crashed
|
2023-08-27T07:05:31.892Z
|
MongoDB suddenly crashed
| 520 |
null |
[] |
[
{
"code": "",
"text": "Hey EveryoneI’m encountering a bit of a roadblock while trying to set up SSL/TLS encryption for MongoDB 5.0 connections on my Legion 5i Tower Gen 7 Gaming Desktop. I’m hoping some of you experienced folks might be able to lend a hand or offer some advice.Issue:\nI’ve been following the official MongoDB documentation and various online guides to configure SSL/TLS encryption for my MongoDB 5.0 database connections. Everything seems to be set up correctly, including generating the necessary certificates and configuring MongoDB with the appropriate options. However, when I attempt to establish a secure connection, I keep getting errors.Error Message: The error message I’m encountering is something like: “SSLHandshakeFailed: SSL handshake received but server certificate is invalid.”System Information:Legion 5i Tower Gen 7 Gaming Desktop\nMongoDB 5.0\nWindows 10 ProAny assistance or insights would be greatly appreciated. Thanks in advance for your time and help!Best regards,",
"username": "John_Harries"
},
{
"code": "",
"text": "Determining the reason why the certificate is invalid is a good next step. If any other text regarding the error is available please share it.It is most likely that the connecting client is not configured to use the Certificate Authority that was used to issue the certificate.Other reasons could be:How are you connecting to the server, using mongosh or driver?",
"username": "chris"
}
] |
SSL/TLS Encryption Issue for MongoDB 5.0 Connections on Legion 5i Tower Gen 7 Gaming Desktop
|
2023-08-22T07:21:04.966Z
|
SSL/TLS Encryption Issue for MongoDB 5.0 Connections on Legion 5i Tower Gen 7 Gaming Desktop
| 281 |
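One common cause named in the reply above is a client that does not trust the issuing Certificate Authority. A hedged sketch of pointing the Node.js driver at the CA file (URI and path are placeholders; mongosh accepts the equivalent --tls and --tlsCAFile options):

```javascript
const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb://db.example.com:27017", {
  tls: true,
  // CA certificate that signed the server's certificate:
  tlsCAFile: "/path/to/ca.pem",
});

async function check() {
  await client.connect(); // a handshake failure would surface here
  console.log(await client.db("admin").command({ ping: 1 }));
  await client.close();
}

check().catch(console.error);
```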
null |
[
"java",
"atlas-cluster"
] |
[
{
"code": "2023-08-18 04:09:41,156 ERROR [stderr] (cluster-ClusterId{value='64deef05d0f94033591f9522', description='null'}-plnd-mu-shard-00-02.mtgmk.mongodb.net:27017) Exception in thread \"cluster-ClusterId{value='64deef05d0f94033591f9522', description='null'}-plnd-mu-shard-00-02.mtgmk.mongodb.net:27017\" java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@b98f4d[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@71419177[Wrapped task = com.mongodb.internal.connection.DefaultConnectionPool$BackgroundMaintenanceManager$$Lambda$2114/0x000000010462bc40@3fc9e890]] rejected from java.util.concurrent.ScheduledThreadPoolExecutor@5593aee0[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]\n",
"text": "Since upgrading to Mongo 6.0.8 we experience the java.util.concurrent.RejectedExecutionException error.We are currently running the java driver version 4.5 but we can’t upgrade because of the fact that we are still running Java11 and have dependency on the bson.records.codec.We want to know what this error entails and what we can do to maybe fix it.",
"username": "Donneh_Huwae"
},
{
"code": "bson.record.codec",
"text": "This is just a connection pool maintenance task that is being schedule as the pool is being shut down. So I think the log message is harmless. Unless you are seeing ill effects in the application it’s likely safe to ignore it.Regards bson.record.codec: the bulk of our users are still on Java 11 so this shouldn’t be an issue for you. What error are you seeing that makes you think this is a problem?Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Hey Jeffrey,Thanks for the heads up! As for your question. When trying to upgrade our environment we couldn’t because of a dependency to the bson.record.codec. But because we use it on our code base and we don’t want it excluded we needed to upgrade. As per your college, so that’s why we can’t upgrade.Kind Regards,\nDonneh",
"username": "Donneh_Huwae"
}
] |
java.util.concurrent.RejectedExecutionException
|
2023-08-23T09:27:33.548Z
|
java.util.concurrent.RejectedExecutionException
| 566 |
null |
[
"replication",
"compass",
"sharding"
] |
[
{
"code": "db version v6.0.3\nBuild Info: {\n \"version\": \"6.0.3\",\n \"gitVersion\": \"f803681c3ae19817d31958965850193de067c516\",\n \"openSSLVersion\": \"OpenSSL 1.0.1e-fips 11 Feb 2013\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"rhel70\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\n**{\"t\":{\"$date\":\"2023-08-24T08:43:41.392+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23079, \"ctx\":\"conn29491736\",\"msg\":\"Invariant failure\",\"attr\":{\"expr\":\"shardResult.shardHostAndPort\",\"file\":\"src/mongo/db/pipeline/sharded_agg_helpers.cpp\",\"line\":1440}}**\n\n**{\"t\":{\"$date\":\"2023-08-24T08:43:41.392+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23080, \"ctx\":\"conn29491736\",\"msg\":\"\\n\\n***aborting after invariant() failure\\n\\n\"}**\n\n{\"t\":{\"$date\":\"2023-08-24T08:43:41.392+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"conn29491736\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 6 (Aborted).\\n\"}}\n",
"text": "Hello,I have 3 mongos, 3 mongod ( config ), and 1 replica set with 2 mongo 1 arbiter server for data.\nAll mongodb versions are same v6.0.3 on CentOS 7Recently mongos process down couple of times, with these logs,It occurred while querying through Compass, query got stuck and mongos has down at that time.\nI tried to execute query again but it returned result in a short time through the other mongos.I was wondering what can caused this error, and how I can prevent mongos process down.Any leads and suggestions are much appreciated.Kind regards,\nSam.",
"username": "Sam_Lee1"
},
{
"code": "",
"text": "A recommended step is to upgrade to the latest 6.0 version 6.0.9If this issue persists after that log an issue at jira.mongodb.com",
"username": "chris"
}
] |
`mongos` process down with "msg":"Invariant failure" log
|
2023-08-25T23:53:46.673Z
|
`mongos` process down with “msg”:”Invariant failure” log
| 412 |
null |
[] |
[
{
"code": "root@curtis:~# sudo systemctl status mongod.service\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Fri 2023-08-25 15:00:43 BST; 13min ago\n Docs: https://docs.mongodb.org/manual\n Process: 67006 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=2)\n Main PID: 67006 (code=exited, status=2)\n\nAug 25 15:00:43 curtis systemd[1]: Started MongoDB Database Server.\nAug 25 15:00:43 curtis mongod[67006]: Error opening config file: Is a directory\nAug 25 15:00:43 curtis mongod[67006]: try '/usr/bin/mongod --help' for more information\nAug 25 15:00:43 curtis systemd[1]: mongod.service: Main process exited, code=exited, status=2/INVALIDARGUMENT\nAug 25 15:00:43 curtis systemd[1]: mongod.service: Failed with result 'exit-code'.\nsudo journalctl -u mongod.serviceAug 25 15:28:29 curtis systemd[1]: Started MongoDB Database Server.\nAug 25 15:28:29 curtis mongod[69085]: Error opening config file: Is a directory\nAug 25 15:28:29 curtis mongod[69085]: try '/usr/bin/mongod --help' for more information\nAug 25 15:28:29 curtis systemd[1]: mongod.service: Main process exited, code=exited, status=2/INVALIDARGUMENT\nAug 25 15:28:29 curtis systemd[1]: mongod.service: Failed with result 'exit-code'.\nsudo systemctl status mongodb.serviceAug 25 15:28:29 curtis systemd[1]: Started MongoDB Database Server.\nAug 25 15:28:29 curtis mongod[69085]: Error opening config file: Is a directory\nAug 25 15:28:29 curtis mongod[69085]: try '/usr/bin/mongod --help' for more information\nAug 25 15:28:29 curtis systemd[1]: mongod.service: Main process exited, code=exited, status=2/INVALIDARGUMENT\nAug 25 15:28:29 curtis systemd[1]: mongod.service: Failed with result 'exit-code'.\n",
"text": "I am unable to start mongo. Below is the error shown when I check the status of the mongod servicedoing sudo journalctl -u mongod.service also shows the same error.below is the result of me running sudo systemctl status mongodb.serviceAdditional information\nI am on Ubuntu 10.04.6 LTSThanks in advance for any help anyone can provide.",
"username": "Curtis_L"
},
{
"code": "",
"text": "The error clearly indicates that the configuration file is a directory. It has to be a file.From the output of systemctl the configuration file is /etc/mongod.conf so if it is really a file and not a directory then we have a real problem.I have no clue as to why it is a directory, but if it is really a directory you have to find the configuration file you really want to use and update the systemd unit file to use it.",
"username": "steevej"
}
] |
Mongod.service status code 2 "Error opening config file: Is a directory"
|
2023-08-25T14:42:28.829Z
|
Mongod.service status code 2 “Error opening config file: Is a directory”
| 603 |
null |
[
"aggregation",
"queries"
] |
[
{
"code": "",
"text": "Can we generate result of $bucket using $group?\nIf yes, What is difference of $bucket and $group? When to use which one? Is there any real example",
"username": "Prof_Monika_Shah"
},
{
"code": "",
"text": "Can we generate result of $bucket using $group?Yes you could but with some effort. You would need some complicated $addFields to define the buckets. But why would you want to complicate your life by using a stage meant for a different purpose.What is difference of $bucket and $group?The main difference is that $group will produce 1 document for each value you $group on while $bucket will produce 1 document per bucket. Look at the artists example of $bucket. If you would use _id:$year_born in a $group you would end up with 1 document for each unique value of year_born rather than 1 document per year range.You could think that $group is per value and $bucket is per value range.",
"username": "steevej"
},
{
"code": "~~/)~~",
"text": "Dear @Prof_Monika_Shah,please followup on this thread and your other one atMarking one post as the solution will provide confidence to the other users that it makes sense as they might have the same questioning.Followup questions for clarification is also a good way to keep the forum useful.Thanks~~/)~~",
"username": "steevej"
}
] |
$bucket vs $group
|
2023-08-15T16:02:18.709Z
|
$bucket vs $group
| 493 |
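To make the per-value versus per-range distinction from the answer above concrete, a small mongosh sketch (the collection name and values are made up for the example):

```javascript
db.artists.insertMany([
  { name: "A", year_born: 1896 },
  { name: "B", year_born: 1898 },
  { name: "C", year_born: 1912 }
]);

// $group: one output document per distinct year_born value
db.artists.aggregate([
  { $group: { _id: "$year_born", count: { $sum: 1 } } }
]);
// -> { _id: 1896, count: 1 }, { _id: 1898, count: 1 }, { _id: 1912, count: 1 }

// $bucket: one output document per range of values
db.artists.aggregate([
  {
    $bucket: {
      groupBy: "$year_born",
      boundaries: [1890, 1910, 1930], // buckets [1890,1910) and [1910,1930)
      default: "other",
      output: { count: { $sum: 1 } }
    }
  }
]);
// -> { _id: 1890, count: 2 }, { _id: 1910, count: 1 }
```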
[
"dot-net"
] |
[
{
"code": " var id = \"\";\n var localRealmFilePath = $\"{_webHostEnvironment.ContentRootPath}\" + \"RealmData\";\n\n var config = new AppConfiguration(id)\n {\n BaseFilePath = localRealmFilePath\n };\n\n var app = App.Create(config);\n var credential = Credentials.Anonymous();\n var realm = Realm.GetInstance();\n\n var user = await app.LogInAsync(credential);\n",
"text": "Hi,I’m using MongoDb realm with .NET SDK. Have build and deployed a webAPI to a VPS with IIIS on it.\nWhen accessing the API, I get the following error:System.InvalidOperationException: Could not determine a writable folder to store the Realm file. When constructing the RealmConfiguration, provide an absolute optionalPath where writes are allowed.This is my codebut in the server the folder created\n",
"username": "Ibrahim_ALSURKHI"
},
{
"code": "Realm.GetInstanceMyDocuments/default.realmFlexibleSyncConfigurationRealm.GetInstanceRealm.GetInstance",
"text": "The issue is that you’re opening a local realm - Realm.GetInstance opens the default realm which is typically located in MyDocuments/default.realm. If you want to open a synchronized Realm (since judging by your code you’re creating an app config and logging in a user), you need to create a FlexibleSyncConfiguration and pass that to Realm.GetInstance. Alternatively, if you’re opening a local Realm, you need to specify a path your application can write to as an argument to Realm.GetInstance. Also, be sure to check out the guide for using Realm in a console application here as web api projects don’t typically have synchronization contexts.",
"username": "nirinchev"
}
] |
Could not determine a writable folder to store the Realm file
|
2023-08-26T18:17:47.749Z
|
Could not determine a writable folder to store the Realm file
| 415 |
|
null |
[
"aggregation"
] |
[
{
"code": " // make new algo with capabilities of carry forward reserver advance & payable ---------\n\n let calcAdvance = 0; //holds calculated advance\n let adReserve = 0; //holds reserve advance\n let prevPayable = 0; //holds arreras\n\n let algo = (bills || []).forEach((item, idx) => {\n if (item.bill === \"ADVANCE_PAYMENT\") {\n calcAdvance += parseFloat(item.fromAdvance); //10000\n if (prevPayable && prevPayable > adReserve + calcAdvance) {\n //1000\n prevPayable -= adReserve + calcAdvance;\n calcAdvance = 0;\n adReserve = 0;\n } else {\n calcAdvance += adReserve - prevPayable;\n prevPayable = 0;\n adReserve = 0;\n }\n //----------------------------------------\n totalAdvance += parseFloat(item.fromAdvance);\n //----------------------------------------\n }\n if (item.bill === \"INVOICE\" && item.type !== \"OPD\") {\n //-----------------------------------------\n fullGrandTotal += parseFloat(item.payable);\n //-----------------------------------------\n if (adReserve > 0) {\n adReserve += calcAdvance;\n item.currentAdvance = adReserve;\n //check for advance payment reserved or left\n if (adReserve > parseFloat(item.payable)) {\n adReserve = adReserve - parseFloat(item.payable); //change grandtotal to payNow\n } else {\n prevPayable = 0 + parseFloat(item.payable) - adReserve; //change grandtotal to payNow\n adReserve = 0;\n }\n calcAdvance = 0;\n } else {\n item.currentAdvance = calcAdvance;\n //set totalPrevPayable to next invoice\n item.prevPayable = prevPayable; // new addition set prevPayable to next invoice\n\n if (calcAdvance > parseFloat(item.payable) + prevPayable) {\n //new addition add prevPayable\n adReserve = calcAdvance - (parseFloat(item.payable) + prevPayable);\n prevPayable = 0;\n } else {\n prevPayable += parseFloat(item.payable) - calcAdvance;\n }\n calcAdvance = 0;\n }\n }\n });\n const bills = await Bill.aggregate([\n {\n $addFields: {\n patient: { $toString: \"$patient\" },\n },\n },\n {\n $match: {\n patient: patientId,\n },\n },\n {\n $lookup: {\n from: \"users\",\n foreignField: \"_id\",\n localField: \"author\",\n as: \"author\",\n },\n },\n {\n $lookup: {\n from: \"addmissions\",\n foreignField: \"_id\",\n localField: \"addmission\",\n as: \"addmission\",\n },\n },\n {\n $lookup: {\n from: \"invoices\",\n let: { invoiceId: \"$invoice\" },\n pipeline: [{ $match: { $expr: { $eq: [\"$_id\", \"$$invoiceId\"] } } }],\n as: \"invoice\",\n },\n },\n {\n $lookup: {\n from: \"advancepayments\",\n let: { advancePaymentId: \"$advancePayment\" },\n pipeline: [\n { $match: { $expr: { $eq: [\"$_id\", \"$$advancePaymentId\"] } } },\n ],\n as: \"advancePayment\",\n },\n },\n {\n $addFields: {\n author: { $arrayElemAt: [\"$author\", 0] },\n addmission: { $arrayElemAt: [\"$addmission\", 0] },\n advancePayment: { $arrayElemAt: [\"$advancePayment\", 0] },\n invoice: { $arrayElemAt: [\"$invoice\", 0] },\n },\n },\n\n /* -------------------------------CALCULATIONS---------------------------------- */\n {\n $group: {\n _id: null,\n payments: {\n $push: {\n $cond: [\n { $eq: [\"$bill\", \"ADVANCE_PAYMENT\"] },\n {\n type: \"ADVANCE_PAYMENT\",\n amount: \"$advancePayment.totalAmount\",\n },\n { type: \"INVOICE\", amount: \"$invoice.payable\" },\n ],\n },\n },\n bills: { $push: \"$$ROOT\" },\n },\n },\n {\n $addFields: {\n finalPayments: {\n $reduce: {\n input: \"$payments\",\n initialValue: { total: 0, result: [] },\n in: {\n total: {\n $cond: [\n { $eq: [\"$$this.type\", \"ADVANCE_PAYMENT\"] },\n { $subtract: [\"$$value.total\", \"$$this.amount\"] },\n { $add: [\"$$value.total\", 
\"$$this.amount\"] },\n ],\n },\n result: {\n $concatArrays: [\n \"$$value.result\",\n [{ $cond: [\"$$this.type\", \"ADVANCE_PAYMENT\", \"$$total\"] }],\n ],\n },\n },\n },\n },\n },\n },\n /* -------------------------------CALCULATIONS---------------------------------- */\n ]);\n",
"text": "I want to change the algorithm code written in JS to Mongodb aggregate for better performance and scope. MongoDB provides some custom powerful features like $function, $accumulator which can solve my global variables issue but unfortuantly digital ocean mongodb does not allow server scide scripting so i have to use built in aggregate stages for this.This is the Algorithm written in JS:-And this is the aggregate Code:-\ni had alittle experiments using map and reduce",
"username": "Owais_Ahmed"
},
{
"code": "global variables",
"text": "Hello, @Owais_Ahmed ! Welcome to the community! It clear, that you’re moving logic from your node.js code to the aggregation pipeline. But what is the global variables issue that you have?MongoDB provides some custom powerful features like $function, $accumulator which can solve my global variables issueCan you give more details on this?",
"username": "slava"
},
{
"code": "",
"text": "Hey @slava Thank you for your time. So what i want is to change this js code algorithm to aggregate so in aggregate i can iterate over all the bills and do calcuations but their is varaible updating problem. so like when their advance payment i would like to store this value in a global variable so that i can pass its value to the next bill and now if the next bill is also advance payment now i would add the passed advanceAmount and current advance amount and now if the next 3rd bill is invoice so i would subtract them from each other now imagine that after subtracting their is still invoice amount so what i would do i would save it in global variable as previous payable amount and pass it to next bill.Visual Example:\n\nScreenshot 2023-08-26 at 2.29.12 PM2452×1184 225 KB\n",
"username": "Owais_Ahmed"
},
{
"code": "db.aggregate([\n { \n // imagine your aggregation pipeline have \n // these sample documents at some point\n $documents: [ \n {\n billId: 'B1',\n type: 'advancePayment',\n amount: 500\n },\n {\n billId: 'B2',\n type: 'advancePayment',\n amount: 500\n },\n {\n billId: 'B3',\n type: 'invoice',\n amount: 2000\n }\n ] \n },\n {\n // you need to gather them in an array\n // 'bills' in this case\n $group: {\n _id: null,\n bills: {\n $push: {\n type: '$type',\n amount: '$amount',\n }\n }\n }\n },\n {\n $addFields: {\n calculated: {\n $reduce: {\n input: '$bills',\n initialValue: {\n // initialize your variables here\n // later they can be accessted with $$value (see code below)\n previousPayable: 0,\n reserveAdvance: 0,\n },\n in: {\n // manipulate 'reserveAdvance' for each bill, represented as $$this\n reserveAdvance: {\n $cond: {\n if: {\n // if bill.type === 'advancePayment'\n $eq: ['$$this.type', 'advancePayment']\n },\n then: {\n // then add its amount to reserveAdvance calculated variable\n $add: ['$$value.reserveAdvance', '$$this.amount'],\n },\n else: {\n // nested condition (short syntax of $cond)\n $cond: [\n { \n // if invoice amount is greater than calculated advancePayment\n $gt: ['$$this.amount', '$$value.advancePayment']\n },\n // then assing zero to advancePayment\n 0,\n // else write to advancePayment what is left\n { $subtract: ['$$value.advancePayment', '$$this.amount'] },\n ]\n },\n },\n },\n // manipulate 'previousPayable' for each bill, represented as $$this\n previousPayable: {\n $cond: {\n if: {\n // if bill.type === 'invoice'\n $eq: ['$$this.type', 'invoice']\n },\n then: {\n // then subtract calculated reserveAdvance amount from it\n $subtract: ['$$this.amount', '$$value.reserveAdvance'],\n },\n // else reuturn previousPayable unchanged\n else: '$$value.previousPayable'\n }\n }\n }\n }\n }\n }\n }\n]);\n[\n {\n _id: null,\n bills: [\n { type: 'advancePayment', amount: 500 },\n { type: 'advancePayment', amount: 500 },\n { type: 'invoice', amount: 2000 }\n ],\n calculated: { reserveAdvance: 0, previousPayable: 1000 }\n }\n]\n",
"text": "I have built an aggregation pipeline example of how that can be done using $reduce, $cond and some arithmetic pipeline operators. Please, read thoroughly the code and comments of my example, read the documentation for relevant operators, if needed and try to apply the solution to your own code. Then, return with more questions, if you have any Output of the pipeline:",
"username": "slava"
},
{
"code": "",
"text": "Hey @slava Thank you, man I just have one more query How can add the respective calculated amount to its bill?\nHere is an example. Sorry for this at first I missed but I need this in this manner.No hurry just reply when you are free Thank you, man, \nScreenshot 2023-08-27 at 3.33.34 PM2200×1180 220 KB\n",
"username": "Owais_Ahmed"
}
] |
How to declare and update Global variables in mongodb aggregate
|
2023-08-25T03:01:36.361Z
|
How to declare and update Global variables in mongodb aggregate
| 696 |
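On the follow-up question (attaching the calculated amount to each bill): one way, staying with the $reduce approach from the answer above, is to carry an extra array in the accumulator and $mergeObjects a snapshot of the running balance onto each bill as it is processed. A hedged sketch with deliberately simplified balance logic (no arrears carry-forward; field names follow the earlier example):

```javascript
db.aggregate([
  {
    // same sample shape as the earlier example
    $documents: [
      { billId: "B1", type: "advancePayment", amount: 500 },
      { billId: "B2", type: "advancePayment", amount: 500 },
      { billId: "B3", type: "invoice", amount: 300 }
    ]
  },
  { $group: { _id: null, bills: { $push: "$$ROOT" } } },
  {
    $addFields: {
      calculated: {
        $reduce: {
          input: "$bills",
          initialValue: { reserveAdvance: 0, annotated: [] },
          in: {
            $let: {
              vars: {
                // compute the new running balance once, reuse it twice
                newReserve: {
                  $cond: [
                    { $eq: ["$$this.type", "advancePayment"] },
                    { $add: ["$$value.reserveAdvance", "$$this.amount"] },
                    { $max: [0, { $subtract: ["$$value.reserveAdvance", "$$this.amount"] }] }
                  ]
                }
              },
              in: {
                reserveAdvance: "$$newReserve",
                annotated: {
                  $concatArrays: [
                    "$$value.annotated",
                    // snapshot of the balance merged onto the bill itself
                    [{ $mergeObjects: ["$$this", { currentAdvance: "$$newReserve" }] }]
                  ]
                }
              }
            }
          }
        }
      }
    }
  }
]);
// annotated: B1 -> currentAdvance 500, B2 -> 1000, B3 -> 700
```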
[
"aggregation",
"mumbai-mug"
] |
[
{
"code": "Software Engineer @ PostmanTech Consultant @ DeloitteSenior Community Manager, MongoDB",
"text": "\nMUG1920×1080 222 KB\nMongoDB User Group Mumbai is excited to announce its first meetup on May 27th at Microsoft Campus, Mumbai. The gathering will feature two engaging presentations complete with demonstrations, a collaborative fun exercise, lunch , an opportunity to meet fellow MongoDB enthusiasts and win some exciting swag! The event aims to provide you with an overview of MongoDB’s Developer Data Platform, it’s Aggregation Pipeline and make you experience MongoDB Charts.We invite you to join us for a day filled with learning and networking! To RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.Event Type: In-Person\nLocation: Windsor, 4th Floor, off Central Salsette Tramway Road, Kalina, Santacruz East, Mumbai, Maharashtra 400098\nS478×602 270 KB\nSoftware Engineer @ Postman–\nimage640×641 66.2 KB\nTech Consultant @ Deloitte–\nH822×786 58.7 KB\nSenior Community Manager, MongoDB",
"username": "Nilesh_32704"
},
{
"code": "",
"text": "Hi I and 3 of my friends are looking to join this meetup , but currently RSVP is showing “Booked Out”. Are there any chances we can do the RSVP and join the meetup.Thank you",
"username": "Sachin_maurya"
},
{
"code": "",
"text": "We will get back to you on this",
"username": "Nilesh_32704"
},
{
"code": "",
"text": "I wish to attend but it shows booked out",
"username": "devam_gosalia"
},
{
"code": "",
"text": "Unfortunately, we have limited space and won’t be able to accommodate more attendees for this event. We are confirming with the attendees who have RSVPed and plan to open more tickets according to the confirmations we receive. ",
"username": "Harshit"
},
{
"code": "",
"text": "Hey, Can you share attendees list? I have RSVP on time and also filled the Google form",
"username": "Om_Bhojane"
},
{
"code": "",
"text": "I forgot to check my mail for the confirmation sheet of attendance before 23. If possible, I want to have my attendance confirmed.",
"username": "Narayan_Gupta"
},
{
"code": "",
"text": "@Om_Bhojane All those who confirmed using the form will be receiving a confirmation email today.@Narayan_Gupta No Problem, we will send out a confirmation email to you.Looking forward to seeing you both at the event.",
"username": "Harshit"
},
{
"code": "",
"text": "Hello Sir, I have filled the confirmation form within an hour when it was released. But I still haven’t received my confirmed seat email. It would be an immense kindness if you could please look into it once. I really appreciate your help!",
"username": "Arpita_N_A"
},
{
"code": "",
"text": "Hey Arpita,\nI see the confirmation sent to you. I will resend it just to make sure it’s on the top of your inbox. ",
"username": "Harshit"
},
{
"code": "",
"text": "Hello sir, I had filled the RSVP and the attendance confirmation both on time but still haven’t received the confirmation email from your end. It would be great if you would look into this. Thank You!",
"username": "Satyam_Jaiswal1"
},
{
"code": "",
"text": "Please check your inbox, I have resent the confirmation to your email ",
"username": "Harshit"
},
{
"code": "",
"text": "Hey, I’m Shreyas, I love to use MongoDB and want to learn beyond. I was unaware about this event, I want to attend this event. Can you please allow me to attend this event…",
"username": "shreyash_bhalekar"
},
{
"code": "",
"text": "Hey,\nI had done RSVP when 40+ slots were pending. But havent received the confirmation mail.\nVery Excited to attend the event. Can you allow me to attend it ?,",
"username": "Rajiv_P"
},
{
"code": "",
"text": "Hey , I missed RSVP , but still want to attend it as it seems more intresting inauguration of new communityin Mumbai… Can you send Confirmation to me so that i can Join ",
"username": "ASHUTOSH_UPADHYAY"
},
{
"code": "",
"text": "\n20230527_1504101920×865 67.6 KB\n",
"username": "Mohammed_Arif"
},
{
"code": "",
"text": "\nimage853×744 28.4 KB\n\n@Harshit Done Sir",
"username": "SUMIT_JADHAV"
},
{
"code": "",
"text": "Hey @SUMIT_JADHAV,\nTo get it right you will have to split the “cuisine” string field into an array and then do aggregate (count) to find the answer ",
"username": "Harshit"
}
] |
MUG Mumbai Inauguration Meetup
|
2023-05-10T19:50:14.641Z
|
MUG Mumbai Inauguration Meetup
| 3,422 |
|
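For anyone trying the exercise hinted at in the last reply of the thread above, a sketch of the split-then-count pipeline; the collection name and the ", " separator are assumptions:

```javascript
// Split the "cuisine" string into an array, unwind it, and count per cuisine.
db.restaurants.aggregate([
  { $addFields: { cuisines: { $split: ["$cuisine", ", "] } } },
  { $unwind: "$cuisines" },
  { $group: { _id: "$cuisines", count: { $sum: 1 } } },
  { $sort: { count: -1 } }
]);
```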
null |
[
"queries"
] |
[
{
"code": "",
"text": "mongodb’s db.find().limit(1000) does NOT return the first 1000 documents I inserted. How would I make mongodb return documents in the order they were inserted?",
"username": "Askr_Askr"
},
{
"code": "",
"text": "How were they inserted? Basically you’ll need to sort them before adding a limit, either on the _id or another field that was added when the documents were added.\n_id sorting should be sufficient but if there are multiple clients adding documents at the same time and clocks are slightly out of sync then there could be an edgecase where this does 100% return them in the order they were actually inserted.\nThis is as the mongodb driver may have generated the ID using the local machine clock as opposed to being server generated.What’s the usecase for this?",
"username": "John_Sewell"
}
] |
How would I make mongodb return documents in the order they were inserted?
|
2023-08-27T06:50:41.783Z
|
How would I make mongodb return documents in the order they were inserted?
| 466 |
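To make the sort-before-limit advice above concrete, a small mongosh sketch (the collection name is a placeholder):

```javascript
// A default ObjectId _id embeds a creation timestamp, so ascending _id
// approximates insertion order (subject to the clock-skew caveat above).
db.mycoll.find().sort({ _id: 1 }).limit(1000);

// If strict ordering matters, write an explicit timestamp at insert time
// and sort on that field instead (still client-clock based):
db.mycoll.insertOne({ payload: "example", insertedAt: new Date() });
db.mycoll.find().sort({ insertedAt: 1 }).limit(1000);
```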
null |
[] |
[
{
"code": "mongod/tmp/mongodb-27017.sockmongodsudo service mongod startmongodb-27017.socksrwx------ 1 mongodb mongodb 0 January 13 12:49 /tmp/mongodb-27017.sock\n/data/db$ ls -la /data/db\ntotal 8\ndrwxr-xr-x 2 mongodb mongodb 4096 January 13 11:35 .\ndrwxr-xr-x 3 root root 4096 January 13 11:35 ..\nmongod$ mongod --version\ndb version v6.0.3\nBuild Info: {\n \"version\": \"6.0.3\",\n \"gitVersion\": \"f803681c3ae19817d31958965850193de067c516\",\n \"openSSLVersion\": \"OpenSSL 1.1.1 11 Sep 2018\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"ubuntu1804\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\nUbuntu 18/var/log/auth.logJan 13 13:40:24 ehsan-HP sudo: ehsan : TTY=pts/7 ; PWD=/home/ehsan ; USER=root ; COMMAND=/usr/sbin/service mongod restart\nJan 13 13:40:24 ehsan-HP sudo: pam_unix(sudo:session): session opened for user root by (uid=0)\nJan 13 13:40:24 ehsan-HP sudo: pam_unix(sudo:session): session closed for user root",
"text": "I am facing this issue where I am unable to get my mongod started successfully:This is the error:{“t”:{\"$date\":“2023-01-13T12:33:59.922+05:00”},“s”:“I”,\n“c”:“NETWORK”, “id”:4915701, “ctx”:\"-\",“msg”:“Initialized wire\nspecification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“outgoing”:{“minWireVersion”:6,“maxWireVersion”:17},“isInternalClient”:true}}}\n{“t”:{\"$date\":“2023-01-13T12:33:59.922+05:00”},“s”:“I”,\n“c”:“CONTROL”, “id”:23285, “ctx”:\"-\",“msg”:“Automatically disabling\nTLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols\n‘none’”} {“t”:{\"$date\":“2023-01-13T12:33:59.924+05:00”},“s”:“I”,\n“c”:“NETWORK”, “id”:4648601, “ctx”:“main”,“msg”:“Implicit TCP\nFastOpen unavailable. If TCP FastOpen is required, set\ntcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.”}\n{“t”:{\"$date\":“2023-01-13T12:33:59.925+05:00”},“s”:“I”, “c”:“REPL”,\n“id”:5123008, “ctx”:“main”,“msg”:“Successfully registered\nPrimaryOnlyService”,“attr”:{“service”:“TenantMigrationDonorService”,“namespace”:“config.tenantMigrationDonors”}}\n{“t”:{\"$date\":“2023-01-13T12:33:59.925+05:00”},“s”:“I”, “c”:“REPL”,\n“id”:5123008, “ctx”:“main”,“msg”:“Successfully registered\nPrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“namespace”:“config.tenantMigrationRecipients”}}\n{“t”:{\"$date\":“2023-01-13T12:33:59.925+05:00”},“s”:“I”, “c”:“REPL”,\n“id”:5123008, “ctx”:“main”,“msg”:“Successfully registered\nPrimaryOnlyService”,“attr”:{“service”:“ShardSplitDonorService”,“namespace”:“config.tenantSplitDonors”}}\n{“t”:{\"$date\":“2023-01-13T12:33:59.925+05:00”},“s”:“I”,\n“c”:“CONTROL”, “id”:5945603, “ctx”:“main”,“msg”:“Multi threading\ninitialized”} {“t”:{\"$date\":“2023-01-13T12:33:59.925+05:00”},“s”:“I”,\n“c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB\nstarting”,“attr”:{“pid”:24280,“port”:27017,“dbPath”:\"/data/db\",“architecture”:“64-bit”,“host”:“ehsan-HP”}}\n{“t”:{\"$date\":“2023-01-13T12:33:59.925+05:00”},“s”:“I”,\n“c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build\nInfo”,“attr”:{“buildInfo”:{“version”:“6.0.3”,“gitVersion”:“f803681c3ae19817d31958965850193de067c516”,“openSSLVersion”:“OpenSSL\n1.1.1 11 Sep 2018”,“modules”:,“allocator”:“tcmalloc”,“environment”:{“distmod”:“ubuntu1804”,“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{\"$date\":“2023-01-13T12:33:59.925+05:00”},“s”:“I”,\n“c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating\nSystem”,“attr”:{“os”:{“name”:“Ubuntu”,“version”:“18.04”}}}\n{“t”:{\"$date\":“2023-01-13T12:33:59.925+05:00”},“s”:“I”,\n“c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set\nby command line”,“attr”:{“options”:{}}}\n{“t”:{\"$date\":“2023-01-13T12:33:59.926+05:00”},“s”:“E”,\n“c”:“NETWORK”, “id”:23024, “ctx”:“initandlisten”,“msg”:“Failed to\nunlink socket\nfile”,“attr”:{“path”:\"/tmp/mongodb-27017.sock\",“error”:“Operation not\npermitted”}} {“t”:{\"$date\":“2023-01-13T12:33:59.926+05:00”},“s”:“F”,\n“c”:“ASSERT”, “id”:23091, “ctx”:“initandlisten”,“msg”:“Fatal\nassertion”,“attr”:{“msgid”:40486,“file”:“src/mongo/transport/transport_layer_asio.cpp”,“line”:1125}}\n{“t”:{\"$date\":“2023-01-13T12:33:59.926+05:00”},“s”:“F”, “c”:“ASSERT”,\n“id”:23092, “ctx”:“initandlisten”,“msg”:\"\\n\\n***aborting after\nfassert() failure\\n\\n\"}What I have tried so far:Delete /tmp/mongodb-27017.sock file and then start mongod by doing sudo service mongod start. It didn’t work. 
The file mongodb-27017.sock created after running this comand has the following permissions:These are the permissions of /data/db:What is going wrong here? This is my mongod version:I am on Ubuntu 18.EDIT:\nHere are the contents of /var/log/auth.log:",
"username": "Ehsan_Elahi"
},
{
"code": "",
"text": "Appears to be permissions issue when you try to bring up mongod as different users\nHow did you start mongod first time?\nMost likely you issued just mongod\nAfter removing tmp file try bring up mongod by issuing just mongod\nYou should not use sudo which will try to create files as root\nPlease check documentation\nIf you installed mongod as service you have to use sudo sysctl mongod start",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "im facing the same isssue",
"username": "Deep_Thakkar"
},
{
"code": "rm -f /tmp/mongodb-27017.sock\n",
"text": "run,or, check the ownership of the above",
"username": "Robin_Rajput"
},
{
"code": "",
"text": "sudo rm -f /tmp/mongodb-27017.sock\nsudo systemctl start mongod\nsudo systemctl status mongod → Running",
"username": "Aditya_Kumar12"
}
] |
Failed to unlink socket file: Operation not permitted
|
2023-01-13T10:09:12.717Z
|
Failed to unlink socket file: Operation not permitted
| 17,263 |
null |
[] |
[
{
"code": "",
"text": "I was looking around at the platform, but now I want to delete my account. How do I go about doing that?",
"username": "Ed_Me"
},
{
"code": "",
"text": "You mean Atlas account? or some other account",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Not just Atlas, but everything associated when I logged in. How do I do that? There’s no option that I can find.",
"username": "Ed_Me"
},
{
"code": "",
"text": "Please check this link\nOnce you complete Atlas deletion and send mail everything will be deleted i think",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I deleted my atlas account, and even the organization along with active project. But my account is still live and I can’t delete it. This violates CCPA.",
"username": "Ed_Me"
},
{
"code": "",
"text": "Hi @Ed_Me,The documentation link provided by @Ramachandra_Tummala outlines the process for Atlas account deletion.The last step for account deletion is to Email Atlas Support. The team has specific procedures to follow for compliance measures and full removal of data from our business systems, so you should receive a response and confirmation for your account deletion request.You can read more about our cloud security, compliance, and privacy measures via the MongoDB Trust Center.Regards,\nStennie",
"username": "Stennie_X"
}
] |
Delete my account
|
2020-11-19T16:29:05.234Z
|
Delete my account
| 3,467 |
null |
[
"queries",
"java",
"serverless",
"spring-data-odm"
] |
[
{
"code": "May 22 05:00:00 ip-172-31-4-152 web: org.springframework.dao.DataAccessResourceFailureException: Exception sending message; nested exception is com.mongodb.MongoSocketWriteException: Exception sending message\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:85) ~[spring-data-mongodb-3.4.2.jar!/:3.4.2]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:3044) ~[spring-data-mongodb-3.4.2.jar!/:3.4.2]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:2980) ~[spring-data-mongodb-3.4.2.jar!/:3.4.2]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:2667) ~[spring-data-mongodb-3.4.2.jar!/:3.4.2]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:2649) ~[spring-data-mongodb-3.4.2.jar!/:3.4.2]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.mongodb.core.MongoTemplate.find(MongoTemplate.java:902) ~[spring-data-mongodb-3.4.2.jar!/:3.4.2]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.onthecase.config.database.InheritanceAwareSimpleMongoRepository.findAll(InheritanceAwareSimpleMongoRepository.java:51) ~[classes!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.onthecase.config.database.InheritanceAwareSimpleMongoRepository.findAll(InheritanceAwareSimpleMongoRepository.java:16) ~[classes!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.repository.core.support.RepositoryMethodInvoker$RepositoryFragmentMethodInvoker.lambda$new$0(RepositoryMethodInvoker.java:289) ~[spring-data-commons-2.7.2.jar!/:2.7.2]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.repository.core.support.RepositoryMethodInvoker.doInvoke(RepositoryMethodInvoker.java:137) ~[spring-data-commons-2.7.2.jar!/:2.7.2]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.repository.core.support.RepositoryMethodInvoker.invoke(RepositoryMethodInvoker.java:121) ~[spring-data-commons-2.7.2.jar!/:2.7.2]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.repository.core.support.RepositoryComposition$RepositoryFragments.invoke(RepositoryComposition.java:530) ~[spring-data-commons-2.7.2.jar!/:2.7.2]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.repository.core.support.RepositoryComposition.invoke(RepositoryComposition.java:286) ~[spring-data-commons-2.7.2.jar!/:2.7.2]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.repository.core.support.RepositoryFactorySupport$ImplementationMethodExecutionInterceptor.invoke(RepositoryFactorySupport.java:640) ~[spring-data-commons-2.7.2.jar!/:2.7.2]\nMay 22 05:00:00 ip-172-31-4-152 web: at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.3.22.jar!/:5.3.22]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.doInvoke(QueryExecutorMethodInterceptor.java:164) ~[spring-data-commons-2.7.2.jar!/:2.7.2]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.invoke(QueryExecutorMethodInterceptor.java:139) ~[spring-data-commons-2.7.2.jar!/:2.7.2]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.3.22.jar!/:5.3.22]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) ~[spring-aop-5.3.22.jar!/:5.3.22]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.3.22.jar!/:5.3.22]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215) ~[spring-aop-5.3.22.jar!/:5.3.22]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.sun.proxy.$Proxy129.findAll(Unknown Source) ~[na:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344) ~[spring-aop-5.3.22.jar!/:5.3.22]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198) ~[spring-aop-5.3.22.jar!/:5.3.22]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) ~[spring-aop-5.3.22.jar!/:5.3.22]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:137) ~[spring-tx-5.3.22.jar!/:5.3.22]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.3.22.jar!/:5.3.22]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215) ~[spring-aop-5.3.22.jar!/:5.3.22]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.sun.proxy.$Proxy129.findAll(Unknown Source) ~[na:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.onthecase.repository.InvestigatorRepository$findAll$0.call(Unknown Source) ~[na:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47) ~[groovy-3.0.12.jar!/:3.0.12]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125) ~[groovy-3.0.12.jar!/:3.0.12]\nMay 22 05:00:00 ip-172-31-4-152 web: at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:130) ~[groovy-3.0.12.jar!/:3.0.12]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.onthecase.schedule.DeleteUnpaidUsers.softDeleteInvestigator(DeleteUnpaidUsers.groovy:37) ~[classes!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84) ~[spring-context-5.3.22.jar!/:5.3.22]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54) ~[spring-context-5.3.22.jar!/:5.3.22]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:95) [spring-context-5.3.22.jar!/:5.3.22]\nMay 22 05:00:00 ip-172-31-4-152 web: at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at java.lang.Thread.run(Thread.java:750) [na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: Caused by: com.mongodb.MongoSocketWriteException: Exception sending message\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.connection.InternalStreamConnection.translateWriteException(InternalStreamConnection.java:684) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:555) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.connection.InternalStreamConnection.sendCommandMessage(InternalStreamConnection.java:381) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:329) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:116) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:644) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 
ip-172-31-4-152 web: at com.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:71) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.connection.LoadBalancedServer$LoadBalancedServerProtocolExecutor.execute(LoadBalancedServer.java:159) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:226) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:126) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:116) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.operation.CommandOperationHelper.createReadCommandAndExecute(CommandOperationHelper.java:232) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.operation.FindOperation.lambda$execute$1(FindOperation.java:695) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.operation.OperationHelper.lambda$withSourceAndConnection$2(OperationHelper.java:575) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.operation.OperationHelper.withSuppliedResource(OperationHelper.java:600) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.operation.OperationHelper.lambda$withSourceAndConnection$3(OperationHelper.java:574) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.operation.OperationHelper.withSuppliedResource(OperationHelper.java:600) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.operation.OperationHelper.withSourceAndConnection(OperationHelper.java:573) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.operation.FindOperation.lambda$execute$2(FindOperation.java:690) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.async.function.RetryingSyncSupplier.get(RetryingSyncSupplier.java:65) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.operation.FindOperation.execute(FindOperation.java:722) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.operation.FindOperation.execute(FindOperation.java:86) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:191) ~[mongodb-driver-sync-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.client.internal.MongoIterableImpl.execute(MongoIterableImpl.java:135) ~[mongodb-driver-sync-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.client.internal.MongoIterableImpl.iterator(MongoIterableImpl.java:92) ~[mongodb-driver-sync-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:2968) ~[spring-data-mongodb-3.4.2.jar!/:3.4.2]\nMay 22 05:00:00 ip-172-31-4-152 web: ... 
53 common frames omitted\nMay 22 05:00:00 ip-172-31-4-152 web: Caused by: java.net.SocketException: Connection reset\nMay 22 05:00:00 ip-172-31-4-152 web: at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:115) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at java.net.SocketOutputStream.write(SocketOutputStream.java:155) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at sun.security.ssl.SSLSocketOutputRecord.deliver(SSLSocketOutputRecord.java:319) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1193) ~[na:1.8.0_352]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.connection.SocketStream.write(SocketStream.java:99) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: at com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:552) ~[mongodb-driver-core-4.6.1.jar!/:na]\nMay 22 05:00:00 ip-172-31-4-152 web: ... 77 common frames omitted\n",
"text": "I’m currently working on a backend application using\nSpring: 2.7.3\nJava: 1.8\nDatabase: MongoDB ServerlessI’m using MongoDB default URI to connect to the database. I believe the default connection pool size is 100 and the idle time before close for each connection is 10 minutes.From time to time I keep getting the following issue. For a quick fix, I have used spring @Retryable annotation to retry the methods if the given exception is encountered. By the way, I have also enabled a scheduled job (that calls a simple find query every 2 minutes) to avoid MongoDB serverless cold start issues.What can be done or can configuration changes be made to avoid the given exception? Please suggest.",
"username": "prabin_upreti1"
},
{
"code": "May 22 05:00:00 ip-172-31-4-152 web: Caused by: java.net.SocketException: Connection reset",
"text": "May 22 05:00:00 ip-172-31-4-152 web: Caused by: java.net.SocketException: Connection resetTCP Connection is reset by the peer. it can happen for many reasons, to name a few, host crash, server process crash…What you mean by Mongodb serverless? If the server hosts are only available for a short amount of time, then the connections will obviously be “reset” after the server hosts are gone.",
"username": "Kobe_W"
},
{
"code": "",
"text": "@Kobe_W The ‘serverless’ means the Atlas Serverless plan. If you go to Pricing | MongoDB it is the first kinda free option. Actually, I have the same problem the connection is closed periodically. What is the correct config should be to avoid it? Thank you.",
"username": "Aliaksei_Matsarski"
}
] |
Exception due to Socket Connection Closed or TimeOut or Reset
|
2023-05-31T11:18:46.859Z
|
Exception due to Socket Connection Closed or TimeOut or Reset
| 1,589 |
null |
[] |
[
{
"code": "{\n \"_id\" : ObjectId(\"6007fd9d984e2507ad452cf3\"),\n \"name\" : \"John\",\n \"city\" : \"A\",\n},\n{\n \"_id\" : ObjectId(\"6007ff6844d9e517e1ec0976\"),\n \"name\" : \"Jack\",\n \"city\" : \"B\",\n}\nrouter.get('/search', async (request, response) => {\n try {\n let result = await Client.aggregate([\n {\n \"$search\": {\n \"autocomplete\": {\n \"query\": `${request.query.term}`,\n \"path\": \"name\",\n \"fuzzy\": {\n \"maxEdits\": 2,\n \"prefixLength\": 3,\n },\n },\n },\n },\n {\n $limit: 3\n },\n {\n $project: {\n \"_id\": 0,\n }\n }\n ]);\n response.send(result);\n } catch (e) {\n response.status(500).send({message: e.message});\n }\n});\n const autoCompleteJS = new autoComplete({\n data: {\n src: async () => {\n const query = document.querySelector(\"#autoComplete\").value;\n const source = await fetch(`${window.location.origin}/search?term=${query}`);\n const data = await source.json();\n return data;\n },\n key: [\"name\"],\n },\n trigger: {\n event: [\"input\", \"focus\"],\n },\n searchEngine: \"strict\",\n highlight: true,\n });\n \"$search\": {\n \"compound\": {\n \"filter\" : [{\n \"text\" : { path: \"city\", query: \"A\" }\n }],\n \"must\": [{\n \"autocomplete\": {\n \"query\": `${request.query.term}`,\n \"path\": \"name\",\n \"fuzzy\": {\n \"maxEdits\": 2,\n \"prefixLength\": 3,\n },\n }\n }]\n }\n }\n",
"text": "Hello all !I would like to perform autocompletion on the name but filtered on a specific city with mongoose and nodejs. I have a mongodb collection like this :What i have done so far :\nI have setup MongoDB Atlas with a Search Index (with the help of search doc) And set up the autocomplete like that :In front-end, with autocompleteJs :So far it is working well. But I don’t know how to make the autocomplete result filtered based on city. It seems that the documentation does not mention this. Do you have any leads.Thank youPS : I have tried with filter but it return me an empty array, i don’t know why ",
"username": "Rudy_Z"
},
{
"code": "",
"text": "@ Rudy_Z\nHave you resolved this?\nI have the same issue, getting empty arrayCan you please help",
"username": "Manoranjan_Bhol"
},
{
"code": "db.residents.aggregate([\n {\n $search: {\n compound: {\n filter: [\n {\n text: {\n path: 'city',\n query: 'n'\n }\n },\n ],\n must: [\n {\n autocomplete: {\n query: 'o',\n path: 'name'\n }\n }\n ],\n }\n }\n }\n]);\ndb.residents.insertMany([\n {\n _id: 'A',\n name: 'Yaroslav',\n city: 'N',\n },\n {\n _id: 'B',\n name: 'Vladislav',\n city: 'M',\n },\n {\n _id: 'C',\n name: 'Oleg',\n city: 'N',\n }\n]);\n[ \n { _id: 'C', name: 'Oleg', city: 'N' } \n]\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"city\": {\n \"type\": \"string\"\n },\n \"name\": {\n \"analyzer\": \"lucene.standard\",\n \"foldDiacritics\": false,\n \"maxGrams\": 5,\n \"minGrams\": 1,\n \"tokenization\": \"edgeGram\",\n \"type\": \"autocomplete\"\n }\n }\n }\n}\n",
"text": "Hello, everyone! @Rudy_Z , your code should work if Atlas Search index is setup properly.I have used the following aggregation pipeline with $search autocomplete:On this dataset:Result (just as expected):Index configuration:",
"username": "slava"
}
] |
Need Help with filter on Autocomplete
|
2021-02-08T17:35:01.241Z
|
Need Help with filter on Autocomplete
| 2,899 |
[
"node-js",
"mongoose-odm"
] |
[
{
"code": "",
"text": "Hello,As a newbie developer, I have been improving my NodeJS, Express, MongoDB (with Mongoose) skills by building a small-size project. It will basically allow visitors to search for word(s) and get matched sentences from books as a result.I already have a MongoDB collection that consists of 4 millions of unique sentences. In the future, the total number of documents in the collection will be more than 100 millions.Example Data:\n\nSCR-20230814-mpjf1161×619 55.8 KB\nAdding a full-text search feature with mongoose is quite easy. However, all I need is to add partial and/or fuzzy search functionality to my project. I know Atlas Search is perfectly suitable for that but I have no enough budget to create dedicated MongoDB server in Atlas since I’m an unemployed person. My country is currently struggling for economic survival. So I cannot pay more than $40/month for text searching feature… Instead, I will create cloud server thanks to VPS providers like Hetzner at lower price.What are the best and budget-friendly alternatives of Atlas Search? (Open-source preferred)Thank you very much in advance!Kind regards,\nSerdar",
"username": "Ahmet_Serdar_Kocak"
},
{
"code": "",
"text": "Hi @Ahmet_Serdar_Kocak , you can create Atlas Search indexes on free tier Atlas clusters.",
"username": "amyjian"
}
] |
Atlas Search Alternative (Partial/Fuzzy Text Search)
|
2023-08-14T11:11:58.446Z
|
Atlas Search Alternative (Partial/Fuzzy Text Search)
| 582 |
|
null |
[
"queries",
"sharding"
] |
[
{
"code": "{\"t\":{\"$date\":\"2023-08-24T16:27:20.680+08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn39097\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"bwms.blacklist\",\"command\":{\"find\":\"blacklist\",\"filter\":{\"mobile\":{\"$in\":[\"88f25d36955e91914a24acd93a38c425ac675611\"]},\"labels\":{\"$elemMatch\":{\"ecid\":0,\"labels\":{\"$in\":[\"TGHTC5SYGV4\",\"TG63J19VLC0\"]}}}},\"hint\":{\"mobile\":1},\"limit\":1,\"maxTimeMS\":10000,\"projection\":{\"labels.$\":1,\"mobile\":1},\"$db\":\"bwms\",\"$readPreference\":{\"mode\":\"primaryPreferred\"},\"lsid\":{\"id\":{\"$uuid\":\"94008ac1-0201-4304-939e-13d3ca01f458\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1692865639,\"i\":2}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"fnpFN9mxs2ag8epNnLlJNxTQleQ=\",\"subType\":\"0\"}},\"keyId\":7268229982285987858}}},\"nShards\":1,\"nBatches\":1,\"cursorExhausted\":true,\"numYields\":0,\"nreturned\":0,\"queryHash\":\"6905B36F\",\"reslen\":228,\"readConcern\":{\"level\":\"local\",\"provenance\":\"implicitDefault\"},\"cpuNanos\":660518,\"remote\":\"172.16.215.25:43684\",\"protocol\":\"op_msg\",\"remoteOpWaitMillis\":1,\"durationMillis\":806}}\n{\"t\":{\"$date\":\"2023-08-24T16:27:20.680+08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn39090\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"bwms.blacklist\",\"command\":{\"find\":\"blacklist\",\"filter\":{\"mobile\":{\"$in\":[\"88f25d36955e91914a24acd93a38c425ac675611\"]},\"labels\":{\"$elemMatch\":{\"ecid\":0,\"labels\":{\"$in\":[\"TGHTC5SYGV4\",\"TG63J19VLC0\"]}}}},\"hint\":{\"mobile\":1},\"limit\":1,\"maxTimeMS\":10000,\"projection\":{\"labels.$\":1,\"mobile\":1},\"$db\":\"bwms\",\"$readPreference\":{\"mode\":\"primaryPreferred\"},\"lsid\":{\"id\":{\"$uuid\":\"6414a0c3-51a5-423e-bc1b-1a4370a39338\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1692865639,\"i\":2}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"fnpFN9mxs2ag8epNnLlJNxTQleQ=\",\"subType\":\"0\"}},\"keyId\":7268229982285987858}}},\"nShards\":1,\"nBatches\":1,\"cursorExhausted\":true,\"numYields\":0,\"nreturned\":0,\"queryHash\":\"6905B36F\",\"reslen\":228,\"readConcern\":{\"level\":\"local\",\"provenance\":\"implicitDefault\"},\"cpuNanos\":660582,\"remote\":\"172.16.215.25:43650\",\"protocol\":\"op_msg\",\"remoteOpWaitMillis\":2,\"durationMillis\":711}}\n{\"t\":{\"$date\":\"2023-08-24T16:27:20.680+08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn39103\",\"msg\":\"Slow 
query\",\"attr\":{\"type\":\"command\",\"ns\":\"bwms.blacklist\",\"command\":{\"find\":\"blacklist\",\"filter\":{\"mobile\":{\"$in\":[\"88f25d36955e91914a24acd93a38c425ac675611\"]},\"labels\":{\"$elemMatch\":{\"ecid\":0,\"labels\":{\"$in\":[\"TGHTC5SYGV4\",\"TG63J19VLC0\"]}}}},\"hint\":{\"mobile\":1},\"limit\":1,\"maxTimeMS\":10000,\"projection\":{\"labels.$\":1,\"mobile\":1},\"$db\":\"bwms\",\"$readPreference\":{\"mode\":\"primaryPreferred\"},\"lsid\":{\"id\":{\"$uuid\":\"170a78cf-9a96-49c7-88e1-3c1a690fa409\"}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1692865639,\"i\":2}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"fnpFN9mxs2ag8epNnLlJNxTQleQ=\",\"subType\":\"0\"}},\"keyId\":7268229982285987858}}},\"nShards\":1,\"nBatches\":1,\"cursorExhausted\":true,\"numYields\":0,\"nreturned\":0,\"queryHash\":\"6905B36F\",\"reslen\":228,\"readConcern\":{\"level\":\"local\",\"provenance\":\"implicitDefault\"},\"cpuNanos\":731029,\"remote\":\"172.16.215.25:43730\",\"protocol\":\"op_msg\",\"remoteOpWaitMillis\":2,\"durationMillis\":542}}\n",
"text": "Hi. I’m using MongoDB 7.0 in my project.\nI found the slow query msg in mongos log, it happened suddenly but periodically when I did the bench test (QPS is not high and the query is fast in mongsh shell). what should I do to fix it?\nPS:3 mongo + 6 sharding\nmongos log:",
"username": "Hu_Peter"
},
{
"code": "",
"text": "help!!! I need some advice.",
"username": "Hu_Peter"
},
{
"code": "mongshmongoshmongosh",
"text": "Hey @Hu_Peter,I found the slow query msg in mongos logAs per the documentation , the client operations (such as queries) will appear in the log if their duration exceeds the slow operation threshold.However, in order to better understand the issue here:t happened suddenly but periodically when I did the bench testCould you share more context on what type of bench test you are doing?QPS is not high and the query is fast in mongsh shellYou mentioned query performance seems fast when run directly in the mongosh shell but is slow elsewhere. To clarify:More information regarding where the slowness occurs and example queries would be helpful for us to assist you better.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] |
Mongos 7.0 slow on mongos,but fast on mongod
|
2023-08-24T09:26:17.318Z
|
Mongos 7.0 slow on mongos,but fast on mongod
| 461 |
null |
[
"compass"
] |
[
{
"code": "",
"text": "Hi,I have a database named D1 and there are several collections inside that. For a particular collection C1 i want that no one can delete specific rows R1, R2 and R3 out of thousands of rows from either MongoDB Compass, MongoDB Shell or any other mechanism.\nThe only way to delete them is to stop mongodb services, removing authorization in mongodb.conf file and deleting the user and creating new user for the same.Thanks",
"username": "Harpreet_Singh_Sachdev"
},
{
"code": "",
"text": "Users are able to delete most records except some rows in the same table?i don’t recall mongodb has such built-in support in authZ field.What you may do is to put a proxy in front of mongodb servers and inspect the delete commands on your own, with an allow/deny list.",
"username": "Kobe_W"
}
] |
Want to restrict certain rows in specific collection of my database from deletion
|
2023-08-25T14:33:43.674Z
|
Want to restrict certain rows in specific collection of my database from deletion
| 387 |
null |
[
"react-native"
] |
[
{
"code": "export const getItems = createAsyncThunk(\"getItems\", async (user) => {\n\n const response = useRealm.objects(\"Item\").map((items) => {\n return items;\n });\n return response;\n})\nuserpartitionValueexport const useRealm = new Realm({\n schema: [ItemSchema],\n sync: {\n user: app?.currentUser,\n partitionValue: app?.currentUser,\n },\n});\nindex.ts?77fd:15 Uncaught Error: user must be of type 'object', got (null)\n at eval (index.ts?77fd:15)\n at Object../src/realm/index.ts (renderer.js:5394)\n at __webpack_require__ (renderer.js:791)\n at fn (renderer.js:102)\n at eval (index.ts?788b:2)\n at Object../src/redux/testReducer/index.ts (renderer.js:5427)\n at __webpack_require__ (renderer.js:791)\n at fn (renderer.js:102)\n at eval (index.ts?76a6:9)\n at Object../src/redux/index.ts (renderer.js:5416)\n",
"text": "Hey guys i wrote a redux-thunk function which seems to executing which which i’m not running. the code below:and i wrote a realm database function which user and partitionValue are number when the app loads which shouldn’t cause a problem as there’s no function calling. code below:But when i run the app i get this error:is there another way to bypass it?",
"username": "Tony_Ngomana"
},
{
"code": "<UserProvider fallback={LogIn}>\n <RealmProvider\n sync={{\n user: app.currentUser,\n flexible: true,\n onError: (_, error) => {\n console.error(error);\n },\n }}\n fallback={LoadingIndicator}>\n ERROR [Error: Exception in HostFunction: user must be of type 'object', got (null)]",
"text": "Having the same issue in react native ERROR [Error: Exception in HostFunction: user must be of type 'object', got (null)]",
"username": "muje_hoxe"
},
{
"code": "user: app.allUsers[0]",
"text": "The problem for me was that the UserProvider already took care of supplying the user to realm, so i had to remove the user altogether from the sync objet. For [Tony_Ngomana] I know this is an old issue but if you didn’t manage to solve you problem, have you tried user: app.allUsers[0] since the type of app?.currentUser isn’t compatible with the user’s attribute type.",
"username": "muje_hoxe"
}
] |
Realm: user must be of type 'object', got (undefined)
|
2021-02-06T11:13:48.412Z
|
Realm: user must be of type ‘object’, got (undefined)
| 4,495 |
null |
[
"aggregation",
"atlas-search"
] |
[
{
"code": "db.inventory.aggregate([\n {\n $search: {\n\t\"text\": {\n \"query\": \"..\",\n \"path\": [field_names]\n }\n }\n }, \n {\n$match:{condition}\n},\n {\n $group:\n }\n]);\n",
"text": "Using the below query to run in mongo atlas.There is no error. But there is no output as well. if $group is remove from the command, there is an output. Kindly let me know if there is other options to do grouping.",
"username": "Durga_Krishnamoorthi"
},
{
"code": "",
"text": "Hi @Durga_Krishnamoorthi,Please provide more details like full query and some sample documentsBest\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "_id,productID,vendorID,category,subCategory,productName,productDesc,productImage,unitPrice,unitPriceCurrency,shippingCharge,stockQty,availableQty,estimatedDelivery,cashOnDelivery,createdBy,createdDt,lastUpdBy,lastUpdDt,expanded\n5f48dc593ed75a76be5ae30b,PR-HEAB5,VD-AB00SM01,JBL Headset,Wired headset W201,JBL Headset Wired headset W201 Sold by VD-AB00SM01,Wired Headset - White Sold by VD-AB00SM01,false,1200,INR,5,10,10,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae30c,PR-HEAB3,VD-AB00SM01,JBL Headset,Bluetooth Headset C101,JBL Headset Bluetooth Headset C101 sold by Sold by VD-AB00SM01,\"Leo JBL° C100si Wired Headset without Mic (Red, Wired in the ear) Sold by VD-AB00SM010\",http://192.168.2.94:8080/reference/api/v1.0/gfs/import/?filename=PR-HEAB3_VD-AB00SM01_1:1,1000,INR,5.01,300,294,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,admin,2020-08-26T14:09:16.832Z,false\n5f48dc593ed75a76be5ae30d,PR-HEAB14,VD-AB00SM01,SkullCandy Headset,Bluetooth Headset C108,SkullCandy Headset Bluetooth Headset C108 Sold by VD-AB00SM01,Bluetooth Headset - White Sold by VD-AB00SM01,false,2000,INR,5,300,300,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae30e,PR-HEAB13,VD-AB00SM01,SkullCandy Headset,Bluetooth Headset C258,SkullCandy Headset Bluetooth Headset C258 Sold by VD-AB00SM01,Bluetooth Headset - Blue Sold by VD-AB00SM01,false,2500,INR,5,500,500,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae30f,PR-HEAB10,VD-AB00SM01,JBL Headset,Wired headset with Mic W158,JBL Headset Wired headset with Mic W158 Sold by VD-AB00SM01,Wired Headset with Mic- Blue Sold by VD-AB00SM01,false,1700,INR,5,100,100,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae310,PR-HEAB16,VD-AB00SM01,SkullCandy Headset,Wired headset W158,SkullCandy Headset Wired headset W158 Sold by VD-AB00SM01,Wired Headset - Blue Sold by VD-AB00SM01,false,1100,INR,5,700,700,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae311,PR-HEAB18,VD-AB00SM01,SkullCandy Headset,Bluetooth Headset with Mic C501,SkullCandy Headset Bluetooth Headset with Mic C501 Sold by VD-AB00SM01,Bluetooth Headset with Mic - Black Sold by VD-AB00SM01,false,5000,INR,5,5,5,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae312,PR-HEAB8,VD-AB00SM01,JBL Headset,Bluetooth Headset with Mic C108,JBL Headset Bluetooth Headset with Mic C108 Sold by VD-AB00SM01,Bluetooth Headset with Mic - White Sold by VD-AB00SM01,false,4000,INR,5,30,30,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae313,PR-HEAB17,VD-AB00SM01,SkullCandy Headset,Wired headset W201,SkullCandy Headset Wired headset W201 Sold by VD-AB00SM01,Wired Headset - White Sold by VD-AB00SM01,false,1200,INR,5,10,10,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae314,PR-HEAB20,VD-AB00SM01,SkullCandy Headset,Bluetooth Headset with Mic C101,SkullCandy Headset Bluetooth Headset with Mic C101 Sold by VD-AB00SM01,Bluetooth Headset with Mic - White Sold by 
VD-AB00SM01,false,4000,INR,5,30,30,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae315,PR-HEAB19,VD-AB00SM01,SkullCandy Headset,Bluetooth Headset with Mic C251,SkullCandy Headset Bluetooth Headset with Mic C251 Sold by VD-AB00SM01,Bluetooth Headset with Mic - Blue Sold by VD-AB00SM01,false,4500,INR,5,20,20,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae316,PR-HEAB6,VD-AB00SM01,JBL Headset,Bluetooth Headset with Mic C501,JBL Headset Bluetooth Headset with Mic C501 Sold by VD-AB00SM01,Bluetooth Headset with Mic - Black Sold by VD-AB00SM01,false,5000,INR,5,5,5,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae317,PR-HEAB7,VD-AB00SM01,JBL Headset,Bluetooth Headset with Mic C258,JBL Headset Bluetooth Headset with Mic C258 Sold by VD-AB00SM01,Bluetooth Headset with Mic - Blue Sold by VD-AB00SM01,false,4500,INR,5,20,20,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae318,PR-HEAB12,VD-AB00SM01,SkullCandy Headset,Bluetooth Headset C508,SkullCandy Headset Bluetooth Headset C508 Sold by VD-AB00SM01,Bluetooth Headset - Black Sold by VD-AB00SM01,false,3000,INR,5,100,100,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae319,PR-HEAB11,VD-AB00SM01,JBL Headset,Wired headset with Mic W208,JBL Headset Wired headset with Mic W208 Sold by VD-AB00SM01,Wired Headset with Mic- White Sold by VD-AB00SM01,false,1800,INR,5,100,100,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae31a,PR-HEAB22,VD-AB00SM01,SkullCandy Headset,Wired headset with Mic W151,SkullCandy Headset Wired headset with Mic W151 Sold by VD-AB00SM01,Wired Headset with Mic- Blue Sold by VD-AB00SM01,false,1700,INR,5,100,100,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae31b,PR-HEAB15,VD-AB00SM01,SkullCandy Headset,Wired headset W108,SkullCandy Headset Wired headset W108 Sold by VD-AB00SM01,Wired Headset - Black Sold by VD-AB00SM01,false,1000,INR,5,400,400,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae31c,PR-HEAB21,VD-AB00SM01,SkullCandy Headset,Wired headset with Mic W101,SkullCandy Headset Wired headset with Mic W101 Sold by VD-AB00SM01,Wired Headset with Mic- Black Sold by VD-AB00SM01,false,1500,INR,5,100,100,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\n5f48dc593ed75a76be5ae31d,PR-HEAB24,VD-AB00SM01,JBL Headset,Bluetooth Headset C501,JBL Headset Bluetooth Headset C501 Sold by VD-AB00SM01,Bluetooth Headset - Black Sold by VD-AB00SM01,false,3000,INR,5,100,100,5,Y,system,Tue Aug 18 2020 10:37:21 GMT+0530 (India Standard Time)1,system,2020-08-11T09:23:31.426Z,false\ndb.inventory.aggregate([\n {\n $search: {\n\t \"text\": {\n \"query\": \"JBL\",\n \"path\": \"productName\"\n }\n }\n },\n {\n $group:{_id:\"$category\",Total:{$sum:1}} \n } \n]);\n",
"text": "",
"username": "Durga_Krishnamoorthi"
},
{
"code": "[\n {\n $search: {\n\t \"text\": {\n \"query\": \"JBL\",\n \"path\": \"productName\"\n }\n }\n },\n {\n $group:{_id:\"$category\",Total:{$sum:1}} \n } \n]\n",
"text": "This doesn’t give an error because you didn’t include the index?",
"username": "Richard_Thorne"
},
{
"code": "db.products.insertMany([\n {\n _id: 'A',\n category: 'speakers',\n productName: 'JBL Flip Essential'\n },\n {\n _id: 'B',\n category: 'speakers',\n productName: 'JBL Boombox 2 Squad'\n },\n {\n _id: 'C',\n category: 'headphones',\n productName: 'Sennheiser HD 350 BT'\n },\n {\n _id: 'D',\n category: 'headphones',\n productName: 'JBL Tune 520'\n },\n {\n _id: 'E',\n category: 'headphones',\n productName: 'JBL Tune 710'\n },\n {\n _id: 'F',\n category: 'headphones',\n productName: 'Sennheiser Epos Adapt'\n },\n]);\nprodudctNamedb.products.aggregate([\n {\n $search: {\n text: {\n query: 'jbl',\n path: 'productName'\n }\n }\n },\n {\n $group: {\n _id: '$category',\n total: {\n $sum: 1,\n }\n }\n }\n]);\n[ { _id: 'speakers', total: 2 }, { _id: 'headphones', total: 2 } ]{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"productName\": {\n \"type\": \"string\",\n }\n }\n }\n },\n",
"text": "Hello, everyone! I have modeled the issue using this dataset:If Atlas Search text index on field produdctName is specified, the aggregation belowproduces expected result:\n[ { _id: 'speakers', total: 2 }, { _id: 'headphones', total: 2 } ]Index used:If we drop the index - empty result will be returned and no error is thrown (executed in shell).There is no error. But there is no output as well. if $group is remove from the command, there is an output. Kindly let me know if there is other options to do grouping.No idea on this one. Since $search produces results, $group stage must output something. Even if we try to use non-existent field name.",
"username": "slava"
}
] |
Not able to use $group with $search in atlas search
|
2020-09-10T05:02:19.661Z
|
Not able to use $group with $search in atlas search
| 2,634 |
null |
[
"python"
] |
[
{
"code": "from langchain.vectorstores import MongoDBAtlasVectorSearch\n\nvectorstore = MongoDBAtlasVectorSearch(\n collection=db.embeddings,\n embedding=get_embedding(\"azureopenai\"),\n index_name=\"embedding_index\")\nretriever = vectorstore.as_retriever(\n search_kwargs={\n 'k': 5,\n 'filter': { 'project': 'heroes' }\n }\n)\n{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"embedding\": {\n \"dimensions\": 1536,\n \"similarity\": \"cosine\",\n \"type\": \"knnVector\"\n },\n \"project\": {\n \"type\": \"string\"\n }\n }\n }\n}\n{\n \"_id\": {\n \"$oid\": \"64e379206cfcf8a7866bce8c\"\n },\n \"text\": \"Spider-Man, créé par Stan Lee et Steve Ditko, est un super-héros de Marvel Comics. Peter\\nParker, un étudiant doué mais timide, est mordu par une araignée radioactive ...\",\n \"embedding\": [\n 0.0013639614901196446,\n -0.02883271683320636,\n 0.014490925689774099,\n -0.012036416665376559,\n ....\n ],\n \"source\": \"uploads/heroes/spiderman-short.pdf\",\n \"file\": \"spiderman-short.pdf\",\n \"project\": \"heroes\"\n}\n",
"text": "Hello,\nI created an Vector Search Index in my Atlas cluster, on the “embedding” field of a “embeddings” collection. It works well.Now I want to filter the results to only retrieve entries for a specific “project”. I use LangChain, and the MongoDBAtlasVectorSearch as a retriever. In the documentation it says I can add the filter, as explained here.My code:I then use the retriever in a LangChain chain. I got results (5, as expected), but the filter does not work, I got results from all projects (not only the ‘heroes’ project).Other info for context:Here is the index (I also added an index on the ‘project’ field, but it does change the results):And here is an example of a document stored in the ‘embeddings’ collection:Any hints or solutions?\nThanks a lot",
"username": "Jacky_Casas"
},
{
"code": "k = 5\nsearch_kwargs={\n 'k': k * 10, # overrequest k during search\n 'pre_filter': { 'path': 'project', 'equals', 'heroes' }\n 'post_filter_pipeline': [{'$limit' : k}] # limit results to top k\n}\n",
"text": "Hi Jacky,Thanks for the question! Our integration in langchain treats filters slightly differently from other vector stores. You would actually need to specify this as a ‘pre_filter’ not a ‘filter’ in order for this to work. The syntax will also look slightly different as you will need to specify the path (‘project’), operator (‘equals’) and value (‘heroes’) for the filter. The example below should make this more clear.I’d also recommend increasing the value of k to a larger number, and adding an additional post_filter_pipeline search_kwarg that limits the results to k. This will boost the accuracy of your results considerably.Your search kwargs with both of these changes should look like thisLet me know if you run into any other issues!",
"username": "Henry_Weller"
},
{
"code": "\"knnBeta.filter.equals\" must be a documentsearch_kwargs={\n 'k': k * 10,\n 'pre_filter': {\n 'text': {\n 'path': 'project',\n 'query': 'heroes'\n }\n },\n 'post_filter_pipeline': [ { '$limit': k } ]\n}\n",
"text": "Hey,\nThanks a lot for your answer!I tested this intensively.\nThe pre-filter like you suggested leads to an error \"knnBeta.filter.equals\" must be a document. Actually the ‘equals’ operator cannot match a string value.But I think it works like that with the ‘text’ operator:Does it make sense?Then, my second problem is that I need to filter on two fields (not only ‘project’, but also on ‘username’). How would you do that? I tested with the compound operator, but didn’t manage to make it work correctly.",
"username": "Jacky_Casas"
},
{
"code": "",
"text": "Nice catch on the pre_filter - yes you should use the ‘text’ filter in this situation, not ‘equals’.You should be able to use a compound filter here like you would with regular search. This post has an example of what this could look like. If you’re still running into issues would you mind sharing the syntax you are using?",
"username": "Henry_Weller"
},
{
"code": "from pymongo import MongoClient\nfrom langchain.vectorstores import MongoDBAtlasVectorSearch\nMONGODB_ATLAS_CLUSTER_URI =\"\"\n# initialize MongoDB python client\nclient = MongoClient(MONGODB_ATLAS_CLUSTER_URI)\n\ndb_name = \"langchain_db\"\ncollection_name = \"langchain_col\"\ncollection = client[db_name][collection_name]\nindex_name = \"langchain_demo\"\n\n# insert the documents in MongoDB Atlas with their embedding\ndocsearch = MongoDBAtlasVectorSearch.from_documents( docs\n ,model_NEW, collection=collection, index_name=index_name\n)\n",
"text": "hello , I am trying to create a vectorstore , to store a document and embeddings,i am getting the error SSL handshake failed",
"username": "YASH_SHARMA6"
},
{
"code": "client = MongoClient(mongodb_url, tlsCAFile=certifi.where())\n",
"text": "Hey,\nI think you may need to install cerfiti, then pass it in your mongodb client.\nShould be something like this",
"username": "T_A"
}
] |
Filtering the results of Vector Search with LangChain
|
2023-08-22T08:39:29.182Z
|
Filtering the results of Vector Search with LangChain
| 2,794 |
null |
[
"node-js",
"mongoose-odm"
] |
[
{
"code": "const CommentSchema = new mongoose.Schema(\n {\n commenter: {\n type: mongoose.Types.ObjectId,\n ref: \"user\",\n required: true,\n },\n post: {\n type: mongoose.Types.ObjectId,\n ref: \"post\",\n required: true,\n },\n content: {\n type: String,\n required: true,\n },\n parent: {\n type: mongoose.Types.ObjectId,\n ref: \"comment\",\n },\n children: [\n {\n type: mongoose.Types.ObjectId,\n ref: \"comment\",\n },\n ],\n edited: {\n type: Boolean,\n default: false,\n },\n },\n { timestamps: true }\n);\nconst commentSchema = new mongoose.Schema({\n text: {\n type: String,\n required: true,\n },\n author: {\n type: mongoose.Schema.Types.ObjectId,\n ref: 'User',\n required: true,\n },\n post: {\n type: mongoose.Schema.Types.ObjectId,\n ref: 'Post',\n required: true,\n },\n likes: [\n {\n type: mongoose.Schema.Types.ObjectId,\n ref: 'User',\n },\n ],\n replies: [\n {\n type: mongoose.Schema.Types.ObjectId,\n ref: 'Comment',\n },\n ],\n edited: {\n type: Boolean,\n default: false,\n },\n editedAt: {\n type: Date,\n },\n createdAt: {\n type: Date,\n default: Date.now,\n },\n});\n",
"text": "Hi everybody, I am making a social networking application. But I’m a little confused about the schema. Which of these is a more professional structure? Or what build would you recommend? Is it better to create a separate model for the comment or specify it as an array in the Post?",
"username": "R5_Vzn"
},
{
"code": "",
"text": "Hello, @R5_Vzn ! Welcome to the MongoDB community! Try to avoid the situation where your document contains array that can expand infinitely. This may cause a problem, when the document grows big enough, so it hits 16MB BSON-document limit. Moreover big documents may be the problem if you try to $group them in the aggregation pipeline, while this stage also has its limit of 100 MB, so it you have plenty of large documents you can quickly exceed that limitation.Comments that can have replies represent a tree structure. I would suggest you to model your schemas comments data like a tree structure with parent references.",
"username": "slava"
}
] |
Which schema is more professional?
|
2023-08-19T08:28:35.769Z
|
Which schema is more professional?
| 511 |
[
"aggregation",
"queries"
] |
[
{
"code": "{\n \"_id\": {\n \"$oid\": \"64e89dd304c2da1a0022b0d6\"\n },\n \"size\": 2, //number of Documents in data\n\n \"version\": \"abc_Unaggregated\",\n \"data\": [\n {\n \"request\": {\n \"a\" : \"x\",\n \"b\" : \"y\",\n \"c\" : \"z\"\n }\n },\n {\n \"request\": {\n \"a\" : \"x\",\n \"b\" : \"y\",\n \"c\" : \"z\"\n }\n }\n ]\n}\n[\n {\n $match:\n /**\n * query: The query in MQL.\n */\n {\n version: \"abc_Unaggregated\",\n },\n },\n {\n $unwind:\n /**\n * path: Path to the array field.\n * includeArrayIndex: Optional name for index.\n * preserveNullAndEmptyArrays: Optional\n * toggle to unwind null and empty values.\n */\n {\n path: \"$data\",\n },\n },\n {\n $group:\n /**\n * _id: The id of the group.\n * fieldN: The first field name.\n */\n {\n _id: {\n a: \"$data.request.a\",\n b: \"$data.request.b\",\n c: \"$data.request.c\"\n version: \"$version\",\n },\n },\n },\n {\n $set:\n /**\n * field: The field name\n * expression: The expression.\n */\n {\n \"_id.version\": {\n $replaceOne: {\n input: \"$_id.version\",\n find: \"_Unaggregated\",\n replacement: \"\",\n },\n },\n },\n },\n {\n $project:\n /**\n * _id: The id of the group.\n * fieldN: The first field name.\n */\n {\n \"placeholder.request\": {\n a: \"$_id.a\",\n b: \"$_id.b\",\n c: \"$_id.c\"\n },\n version: \"$_id.version\",\n },\n },\n {\n $group:\n /**\n * _id: The id of the group.\n * fieldN: The first field name.\n */\n {\n _id: {\n /**\n * needed to set new package\n */\n version: \"$version\",\n },\n data: {\n $push: \"$placeholder\",\n },\n },\n }\n]\n",
"text": "Hi,I am trying to write an aggregation pipeline to aggregate values in an array. Our Datamodel looks something likeThe size of the data array is limited to 100 entries per document. We have about 1,5 billion of those entries. From which about 10% are really different. So the idea is to aggregate the different requests into one. In the example above instead of the 2 requests there would be one left.The genral way I did it is to create an Aggregation-Pipeline with the following steps:The problem is here the data array would be 1,5 million or more entries large. That is way to much for a single document. Is there a way to distribute the push to different documents so that the end result will be multiple documents each containig a data array with 100 entries?I tried slice, limit and bucket. All of these don’t seem to be applicable to my problem.\nlimit only cuts away instead of distributing the documents.\nSlice seems to do the same: i can decrease the number of elements but not distribute them.\nWith buckets I would need to give boundaries, to miraculously get a size of 100 per bucket, which would call for variable boundaries so not really applicable to this problem.There are some more steps e.g. for merging back into the collection but they are not relevant to my question.We are using an Atlas Mongo DB version 5.0.19",
"username": "Philipp_Reichling"
},
{
"code": "db.requests.insertMany([\n {\n _id: 'R1',\n size: 2,\n // possible values: 'justAdded', 'toProcess', 'processed'\n version: 'justAdded', \n data: [\n {\n request: { // unique (1)\n a: 'x',\n b: 'y',\n c: 'z'\n }\n },\n {\n request: {\n a: 'x',\n b: 'y',\n c: 'z'\n }\n }\n ]\n },\n {\n _id: 'R2',\n size: 3,\n version: 'justAdded',\n data: [\n {\n request: {\n a: 'k', // new, unique (2)\n b: 'y',\n c: 'z'\n }\n },\n {\n request: {\n a: 'm', // new, unique (3)\n b: 'y',\n c: 'z'\n }\n },\n {\n request: {\n a: 'm', // new\n b: 'y',\n c: 'z'\n }\n }\n ]\n }\n]);\njustAddedrequestdatatoProcessprocesseddb.requests.updateMany(\n {\n version: 'justAdded'\n },\n {\n $set: {\n version: 'toProcess'\n }\n }\n);\nrequestsProcesseddb.requests.aggregate([\n {\n $match: {\n version: 'toProcess'\n }\n },\n {\n $unwind: '$data',\n },\n {\n $project: {\n _id: '$data.request'\n }\n },\n { \n $merge: {\n into: 'requestsProcessed',\n on: '_id',\n whenMatched: 'keepExisting',\n whenNotMatched: 'insert'\n }\n }\n]);\nrequestsProcessed[\n { _id: { a: 'x', b: 'y', c: 'z' } },\n { _id: { a: 'k', b: 'y', c: 'z' } },\n { _id: { a: 'm', b: 'y', c: 'z' } }\n]\ndb.requests.updateMany(\n {\n version: 'toProcess'\n },\n {\n $set: {\n version: 'processed'\n }\n }\n);\n",
"text": "Hello, @Philipp_Reichling ! Welcome to the community Idea\nIn your case I can suggest to change the flow a bit:To demonstrate the idea, I would take your dataset example and extend it a bit, like so:I assume, that your documents have versioning process like this:You can adapt this to have 0,1,2 numbering instead of textual stages or similar ones. You can add a separate field in your documents to support suggested flow, if needed.1. Select-mark documents to process.This is needed, so the version-update operation would know what documents to update after processing-aggregation is run.2. Process documents\nThis will write unique requests into a separate collection - requestsProcessed. It will be populated with next runs of the processing-aggregation. Form this collection you can get the list of unique request objects.Unique documents in the requestsProcessed collection:3. Update version of processed documents\nThis way you avoid document to be processed more than once.Let me know, if this solution works for you ",
"username": "slava"
}
] |
Aggregation Pipeline Distribute Documents into Arrays
|
2023-08-25T13:40:39.850Z
|
Aggregation Pipeline Distribute Documents into Arrays
| 235 |
|
[
"next-js",
"newyork-mug"
] |
[
{
"code": "",
"text": "\nNYC AI Hackathon1920×950 51.7 KB\nInterested in building AI applications, meeting local innovators, and winning some cool prizes? MongoDB and Modal are excited to bring together a group of builders for a day-long Hackathon in New York City on Saturday, August 26th.Come hack with the NYC AI community, try new frameworks, and ship code that brings your most ambitious ideas to reality. There will be prizes, food, and speakers/workshops (along with free credits and early access to products). All ideas welcome.Spots are limited, so check out the details and register ASAP: NYC AI Hackathon · Luma1st overall: Modal AirPod Maxes and Modal Patagonia jackets\n2nd overall: Vercel AirPods and Modal Patagonia Jackets\n3rd overall: Amazon Fire bundles\nBest use of MongoDB: Meta Quest 2\nBest use of Cohere API: DJI Drone set\nMost likely to become a business: Ramp AirPods\nBest LLM application: Langchain sweatshirts\nBest frontend: $1500 Vercel credits\n…and more!\n\nSpecial thanks to our: MongoDB, Cohere, AWS, Langchain, ElevenLabs, Ramp, and Vercel.\n\n9:30am-10pm Saturday, August 26th, 2023\nMongoDB, New York City\n-------Hackers are encouraged to form groups of 2-4 beforehand (please have each member register individually!). We will also set up a channel to reach out to other approved hackers to help form teams.Please note that this is an in-person event and we are unable to support remote teamsEvent Type: In-Person\nLocation: MongoDB HQ,1633 Broadway 38th floor, New York, NY 10019, United States",
"username": "Harshit"
},
{
"code": "",
"text": "What’s the hackathon schedule?",
"username": "Ranadeep_Singh"
}
] |
NYC AI Hackathon with Modal
|
2023-08-11T12:31:15.742Z
|
NYC AI Hackathon with Modal
| 1,321 |
|
null |
[] |
[
{
"code": "",
"text": "According to https://docs.mongodb.com/manual/reference/parameters/#mongodb-parameter-param.maxTransactionLockRequestTimeoutMillis this parameter should cause competing transaction to wait on the lock, not to fail.\nI set maxTransactionLockRequestTimeoutMillis to 3 seconds in mongod.conf:\nsetParameter:\nmaxTransactionLockRequestTimeoutMillis: 3000And after restarting mongod I can get this value in mongo shell (i.e. it is set), but I do not see its effect:It looks like this parameter is not working as described in the docs.Environment: local mongodb 5.0 on Windows, single node replicaset.",
"username": "Dmitriy_Shirokov"
},
{
"code": "setParameter:\n maxTransactionLockRequestTimeoutMillis: 3000\nobjId = ObjectId(\"613233675a5ea6722c960051\");\ndb.test.insert({_id: objId, item: \"A\"});\n session = db.getMongo().startSession( { readPreference: { mode: \"primary\" } } );\n coll = session.getDatabase(dbName).getCollection(\"test\");\n session.startTransaction( { readConcern: { level: \"local\" }, writeConcern: { w: \"majority\" } } );\n coll.updateOne({_id: objId}, { $set: {item: \"B\"}})\nsession = db.getMongo().startSession( { readPreference: { mode: \"primary\" } } );\ncoll = session.getDatabase(dbName).getCollection(\"test\");\nsession.startTransaction( { readConcern: { level: \"local\" }, writeConcern: { w: \"majority\" } } );\ncoll.updateOne({_id: objId}, { $set: {item: \"C\"}})\nWriteCommandError({\n \"errorLabels\" : [\n \"TransientTransactionError\"\n ],\n \"ok\" : 0,\n \"errmsg\" : \"WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.\",\n \"code\" : 112,\n \"codeName\" : \"WriteConflict\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1632429457, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1632429457, 1)\n})\n",
"text": "More info on how to reproduce:Environment: Windows, mongodb 5.0, single node replicaset.in mongod.cfgCreate DB and collection test with one object:Open 2 mongo shell windows to create 2 concurrent transactions: Window 1:Window 2:Second update fails immediately without waiting for 3 seconds:I got reply on stackoverflow that this is not how maxTransactionLockRequestTimeoutMillis supposed to work. Then the question remains: how is it supposed to work? Anybody can provide an example? Preferably in mongo shell, as this is the simplest to reproduce. I saw unit test in mongodb code testing this param, but the test seems to create lock directly, not through execution of an operation on data.",
"username": "Dmitriy_Shirokov"
},
{
"code": "",
"text": "Hi,I want to know how to add maxTransactionLockRequestTimeoutMillis by Mongo Ops Manager when startup replica also by Mongo Ops Manager ?The Modify config in Mongo Ops Manager does not have the maxTransactionLockRequestTimeoutMillis parameter.I try to add these line to the file config of automation Mongo Ops Manager in each node in replica:setParameter:\nmaxTransactionLockRequestTimeoutMillis: 3000But it will disappear after restart instance by Mongo Ops Manager",
"username": "Long_34383"
}
] |
Parameter max Transaction Lock Request Timeout Millis is not working
|
2021-09-23T14:13:22.945Z
|
Parameter max Transaction Lock Request Timeout Millis is not working
| 5,296 |
null |
[
"dot-net",
"containers"
] |
[
{
"code": " var encryptOptions = new EncryptOptions(\n algorithm: EncryptionAlgorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Deterministic.ToString(),\n keyId: dataKeyId);\n\n var reader = new BinaryReader(stream);\n var data = new BsonBinaryData(reader.ReadBytes((int)stream.Length), BsonBinarySubType.Binary);\n\n // This is the line below that causes the memory spike\n var encryptedData = clientEncryption.Encrypt(data, encryptOptions, cancellationToken); \n\n using var encryptedStream = new MemoryStream(encryptedData.Bytes);\n base.WriteFileAsync(fileName, fileMetadata, encryptedStream, cancellationToken);\n",
"text": "We currently have client side encryption setup to encrypt all our sensitive files stored in MongoDB/GridFS. However, we recently encountered a problem where encrypting a file of around 100-200MB will use up more than 2-3GB of memory per file, and this continues to accumulate in RAM usage until the GC is called. Of course, the memory usage isn’t linear, for example for around 100 files, the memory spike is only up to 10-15GB for example. But since we are running in kubernetes/docker, the GC doesn’t seem to know the actual limit of memory where it needs to be called, ending with the pod being deleted due to memory pressure on the node in certain cases.I’m not sure if we’re doing something wrong, or if there’s a way to limit the memory usage for encrypting those somewhat large files. Below is the code we are using for the encryption:",
"username": "PBG"
},
{
"code": "",
"text": "Hi, @PBG,Thank you for reporting this issue. I have created CSHARP-4669 to investigate further. Please follow CSHARP-4669 in case we have trouble repro’ing the issue or have additional questions.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Hello James, thank you. Will keep an eye on it!",
"username": "PBG"
},
{
"code": "",
"text": "Hello @James_Kovacs, sorry to bother again, but any updates regarding this?Thank you.",
"username": "PBG"
},
{
"code": "",
"text": "Thanks for reaching out again. We were able to reproduce the issue and have added our analysis to CSHARP-4669. We have some ideas of how to reduce memory usage and will implement them in the coming weeks. You can track progress on CSHARP-4669 as well as reach out to us with any questions or concerns.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Great! Thank you James. It’s good to know you were able to find the issue.Looking forward to the fix. Thank you again.",
"username": "PBG"
}
] |
clientEncryption.Encrypt in C# uses a lot of memory relative to a somewhat large file
|
2023-06-05T14:38:18.914Z
|
clientEncryption.Encrypt in C# uses a lot of memory relative to a somewhat large file
| 816 |
null |
[
"replication"
] |
[
{
"code": "db.adminCommand({\n \"setDefaultRWConcern\" : 1,\n \"defaultWriteConcern\" : {\n \"w\" : 2\n },\n \"defaultReadConcern\" : { \"level\" : \"majority\" }\n})\nawait mongoose.connect(dbUrl,{replicaSet: 'replname', readPreference : 'primary'});\nawait User.create({ name: 'test-user'});'\nawait User.findOne({name:'test-user'})\nw:2NULL",
"text": "Hello, I have configured my replica set with 2 nodes (our use case is not fail over case), So we have one primary and one non-voting secondary, and no arbiter. The idea is to have normal application traffic go to the primary and some analytics-related queries go to the secondary and also some data analytics team uses the secondary node for read-only purposes.I have set the default global read and write concern in the following way.And my connection details as follows :Here comes my issue :on creating user like followingand reading it immediately :gives NULL results.here we have 2 issues ==>My assumption is, as we have mentioned ‘primary’ as the default readPreference in connection URL, the query should fetch results always from the primary., looks like it’s trying to fetch from the secondary.as we have set writeConcern as ==>w:2, even from secondary data should come instead of returning NULL.please let me know if am missing anything here.thanks",
"username": "Gangadhar_M"
},
{
"code": " \"defaultWriteConcern\": {\n \"w\": 2\n },\nw: \"majority\"\"majority\"read-after-writeread-after-write",
"text": "Hey @Gangadhar_M,Welcome to the MongoDB Community!I have configured my replica set with 2 nodes (our use case is not fail over case)As per the documentation:“After the write operation returns with a w: \"majority\" acknowledgment to the client, the client can read the result of that write with a \"majority\" readConcern.”However, in your case, it depends on how you are executing the code and reading your own writes. Could you please share your workflow? Also, could you provide information about where you have deployed your MongoDB server?Although, you have the option to read your own writes:If you are using a read preference of “majority”, and write concern of “majority” you are ensured to achieve read-after-write consistency across all operations that utilize the same thread.Also, in scenarios where multiple threads are in use and you are exclusively reading from primary nodes, you are guaranteed the achievement of read-after-write consistency, provided that you perform the read operation in the second thread only after the write operation has concluded in the first thread.To learn more, please refer to theIn case of further assistance, share the code snippet you are trying to execute to test this out.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thanks Kushagra for response,\nWill go through this and comeback.thanks",
"username": "Gangadhar_M"
},
{
"code": "",
"text": "@Gangadhar_MSeems you have found the solution. Can you explain what is the root cause of your observed issue and what is the ultimate fix?",
"username": "Kobe_W"
},
{
"code": "db.adminCommand({\n \"setDefaultRWConcern\" : 1,\n \"defaultWriteConcern\" : {\n \"w\" : 2\n },\n \"defaultReadConcern\" : { \"level\" : \"majority\" }\n})\n",
"text": "Hi ,With following read and write concerns,And knowing little more on readConcerns, we come to conclusion that why I faced these issues. I just mentioned them below with their cause.Issue 1 :\nAs I have mentioned my writeConcern as ==> {w:2} data might not have been durable by the time am reading it through readConcern of ==> “majority”,\nMay be because of that I was not getting data even from primary nodeIssue 2 :\nSame thing applies to Secondary as well, with writeConcern as ==> {w:2} and read concern as ==>. “majority” you are trying to read durable data from secondary.\nBy the time that you are trying to read, it might not have been durable in the secondary.With this , readConcern also plays a major role while reading the data. In our case, as we are not using any Transaction for storing data, and speed is main concern,\n{readConcern: ‘local’} makes more sense for us. With this we get data faster as we are not looking at any fail over cases.Hope this helps.Thanks\nGangadhar M",
"username": "Gangadhar_M"
},
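For reference, a minimal mongosh sketch of the fix described above: reading with readConcern 'local' returns the latest data on the queried node without waiting for it to become majority-committed (collection and field names are placeholders):

// Returns the node's most recent data, whether or not it is majority-committed yet.
db.users.find({ name: "test-user" }).readConcern("local");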
{
"code": "",
"text": "Thx for the info. That makes sense.",
"username": "Kobe_W"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
MongoDB replication issues
|
2023-08-18T08:00:12.354Z
|
MongoDB replication issues
| 589 |
null |
[] |
[
{
"code": "",
"text": "Hi when will be the direct query mode available for powerbi?",
"username": "Anand_Reddy"
},
{
"code": "",
"text": "Hi @Anand_Reddy Welcome to the Community!\nWe are shooting for end of the year for the DQ support. This is my best estimate, and I will be sure to update the community when we get close.",
"username": "Alexi_Antonino"
}
] |
Direct Query for POWERBI
|
2023-08-25T05:26:01.247Z
|
Direct Query for POWERBI
| 273 |
null |
[
"storage"
] |
[
{
"code": "kymongodb:SECONDARY> db.serverStatus().tcmalloc.tcmalloc.formattedString\n------------------------------------------------\nMALLOC: 98175938288 (93627.9 MiB) Bytes in use by application\nMALLOC: + 136393015296 (130074.5 MiB) Bytes in page heap freelist\nMALLOC: + 2417782424 ( 2305.8 MiB) Bytes in central cache freelist\nMALLOC: + 293952 ( 0.3 MiB) Bytes in transfer cache freelist\nMALLOC: + 39110200 ( 37.3 MiB) Bytes in thread cache freelists\nMALLOC: + 992469248 ( 946.5 MiB) Bytes in malloc metadata\nMALLOC: ------------\nMALLOC: = 238018609408 (226992.2 MiB) Actual memory used (physical + swap)\nMALLOC: + 52531675136 (50098.1 MiB) Bytes released to OS (aka unmapped)\nMALLOC: ------------\nMALLOC: = 290550284544 (277090.3 MiB) Virtual address space used\nMALLOC:\nMALLOC: 3141932 Spans in use\nMALLOC: 102 Thread heaps in use\nMALLOC: 4096 Tcmalloc page size\n------------------------------------------------\n",
"text": "I am using mongodb replicatset with the following configuration\nVersion : v4.0.28\noplogSizeMB: 614400\nstorageEngine : wiredTiger\nI have a problem when mongodb uses too much Memory. I check tcmalloc and see the parameters as below.\nLooks like Bytes in page heap freelist is too much.\nI want to ask about how to solve this problem?",
"username": "Thanh_Nguyen_Van"
},
{
"code": "",
"text": "Please help?, it very important :((",
"username": "Thanh_Nguyen_Van"
},
{
"code": "rs.status()mongod",
"text": "Hey @Thanh_Nguyen_Van,Welcome to the MongoDB Community!Version : v4.0.28The MongoDB 4.0 is no longer supported, and upgrading to a newer version is recommended. You can refer to the EOL Support Policies for more information on MongoDB versions and their support status.I am using MongoDB replica set. I have a problem when MongoDB uses too much memoryAlthough, could you share some additional details in order to better understand your issue:Look forward to hearing from you.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] |
Mongo Memory Very Hight with Bytes in page heap freelist
|
2023-08-16T07:56:32.419Z
|
Mongo Memory Very Hight with Bytes in page heap freelist
| 635 |
null |
[
"transactions"
] |
[
{
"code": "inc",
"text": "Assuming a have a property “balance” of type Number.\nI am adding/subtracting this property by using the inc operator.\nIs there a way to validate transactions so that queries that would result in the property being negative are rejected?",
"username": "Florian_MineCrypto"
},
{
"code": "{ _id : 5 , balance : 300 }\ncollection.updateOne( { _id : 5 } ,\n { $inc : { balance : -400} } )\ncollection.updateOne( { _id : 5 , balance : { $gte : 400 } } ,\n { $inc : { balance : -400} } )\n",
"text": "You do not need transaction to do that.Start with documentSo rather than doing an update such as:that would result into a new balance of -100. You simply do:With this query the update will not occur.",
"username": "steevej"
},
{
"code": "",
"text": "Above solution is fine. But what if I want to check multiple fields. eg\ncollection.updateOne( { _id : 5 } ,\n{ $inc : { balance : -400, savings: 200, cost: 100} } )\nthe query should only update the values if they will be positive after updation. In above case, query should not update ‘balance’, but should increment ‘savings’ and ‘cost’",
"username": "Sudarshan_Dhatrak1"
},
{
"code": "db.balance.updateOne(\n // filter params\n {\n _id: 5, \n savings: { $gte: 400 }, // s1\n costs: { $gte: 250 }, // c1\n },\n // update params\n {\n $inc: {\n savings: -400, // s2\n costs: -250 // c2\n }\n }\n);\n",
"text": "@Sudarshan_Dhatrak1 , you can apply the @steevej’s solution above to your new requirements:If you decrease values using $inc, make sure its absolute value in update params is not greater, then the value in filter params:\n|s1| must be >= |s2]\n|c1| must be >= |c2]",
"username": "slava"
}
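Note that the filter above is all-or-nothing: if any one condition fails, none of the fields are updated. To update each field independently, as asked earlier in the thread, one option is a separate conditional update per field. A minimal mongosh sketch, reusing the field names from the question:

db.balance.bulkWrite([
  // The decrement applies only if balance stays non-negative.
  { updateOne: { filter: { _id: 5, balance: { $gte: 400 } }, update: { $inc: { balance: -400 } } } },
  // Positive increments cannot push a value negative, so no guard is needed.
  { updateOne: { filter: { _id: 5 }, update: { $inc: { savings: 200 } } } },
  { updateOne: { filter: { _id: 5 }, update: { $inc: { cost: 100 } } } }
], { ordered: false });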
] |
Validating transactions: Prevent negative numbers
|
2022-05-13T13:08:02.907Z
|
Validating transactions: Prevent negative numbers
| 3,690 |
[
"atlas-cluster"
] |
[
{
"code": "",
"text": "Hi there. My objective is to create a new cluster for each review app. Has been working fine in the “shared” space with M2/M5 → but now I wanted to upgrade to M10 so we can have MongoDB version 7 in our review environment.However, this does not seem to be possible - since it takes 7-10 minutes Why does it take such a long time to create a dedicated M10 cluster?\nCleanShot 2023-08-23 at 09.51.07@2x2488×890 155 KB\n",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "Hi @Alex_Bjorlig,To my knowledge and my own experience with M10+ creations, I believe they have been generally created within 7-10 minutes even in the past as well.I’d just like to understand the context behind your question regarding the amount of time to create an M10 - Is this more so a comparison between the times of setting up a shared tier vs dedicated tier cluster? Or are there any actual errors you’re encountering with the M10+ tier cluster creations?I understand you’ve noted “working fine in the “share” space with M2/M5” so I am wondering is it some application execution that cannot currently deal the 7-10 minute creation time of the M10?Thanks and looking forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "So the context here is creating a backend for every review app (based on open pull requests).\nBut I have solved the problem, by simply creating a new database per review app (instead of a new cluster).",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "Gotcha. Thanks for providing the solution here too.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Why does it take 7-10 minutes to create a dedicated Atlas Cluster?
|
2023-08-23T07:54:05.414Z
|
Why does it take 7-10 minutes to create a dedicated Atlas Cluster?
| 424 |
|
null |
[
"queries",
"node-js"
] |
[
{
"code": "db.collection.find(query, projection, options)db.collection.find(query, options)projectionoptions",
"text": "The example function given in the documentation is db.collection.find(query, projection, options), but it’s actually db.collection.find(query, options), with projection being an option in options, which Is this an error?The document address is: https://www.mongodb.com/docs/v7.0/reference/method/db.collection.find/",
"username": "Dreams_Empty"
},
{
"code": "db.getCollection(\"Cinemas\").find({})\ndb.getCollection(\"Cinemas\").find({}, {'Name':'$CinemaName'})\ndb.getCollection(\"Cinemas\").find({}, {'Name':'$CinemaName'}, {allowDiskUse:true})\n",
"text": "What versions of the server / shell are you using? I don’t normally use a project on a find and just jump straight into an aggregation but this is what I did:\nimage949×161 6.32 KB\nAnd then:I was also able to do this:",
"username": "John_Sewell"
},
{
"code": "[email protected](): FindCursor<WithId<TSchema>>;\nfind(filter: Filter<TSchema>, options?: FindOptions): FindCursor<WithId<TSchema>>;\nfind<T extends Document>(filter: Filter<TSchema>, options?: FindOptions): FindCursor<T>;\n",
"text": "I am writing the programme on the nodejs platform and the dependency is [email protected] am looking at the prototype of the find function in my IDE and it is:",
"username": "Dreams_Empty"
},
{
"code": "",
"text": "In that case the difference if between the shell method and the node.js API interface.As you say, the node.js find API passes a projection as an option but the shell call does not, the documentation you linked above was for the shell interface (see yellow warning on the link you sent).The node.js find API here documents the node.js surface:Documentation for mongodb",
"username": "John_Sewell"
},
{
"code": "",
"text": "I’m very sorry I didn’t notice this detail and I hope you can understand.",
"username": "Dreams_Empty"
},
{
"code": "",
"text": "No worries, I’ve done worse, and probably will do again!",
"username": "John_Sewell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Is the documentation on `find projection` misdescribed?
|
2023-08-24T15:42:45.430Z
|
Is the documentation on `find projection` misdescribed?
| 400 |
null |
[
"data-modeling",
"python"
] |
[
{
"code": "[{ \"_id\": \"Programming\", \"path\": \",Books,\" }, { \"_id\": \"Databases\", \"path\": \",Books,Programming,\" }]db.categories.find( { path: \"^,Books,\" } )",
"text": "Hi, I need help with this section of the MongoDB docs https://docs.mongodb.com/manual/tutorial/model-tree-structures-with-materialized-paths/Im using python and it seems like this part of the docs lack a python translation. Is it even possible to use this feature with python?If i for example save nodes like this: [{ \"_id\": \"Programming\", \"path\": \",Books,\" }, { \"_id\": \"Databases\", \"path\": \",Books,Programming,\" }]and then query the data like this: db.categories.find( { path: \"^,Books,\" } )\nI dont get any output. Maybe it’s not even thought to implement this with python. What can I do?edit: note, that I changed the statements to work with python. I put quotation marks around the “_id” and the “path” statements, as well as around the statement in the “find” function.",
"username": "Moritz_Honscheidt"
},
{
"code": "",
"text": "I am pretty sure you need to use https://docs.mongodb.com/manual/reference/operator/query/regex/.I am not absolutely sure since I do not use the python driver.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @Moritz_Honscheidt@steevej suggestion is correct, you should use the $regex operator in Python passing everything as a dictionary for it’s options. This will allow you to perform the necessary regular expression searching that is discussed in the manual page you’ve referenced.If you have more specific Python questions, I’d ask them in the Drivers & ODM category as there will be more readers there who have greater familiarity with using regular expressions and Python.Kindest regards,\nEoin",
"username": "Eoin_Brazil"
},
{
"code": "",
"text": "Thank you Eoin, would you as well show me what the code would look like with my example above?\nCheers!",
"username": "Moritz_Honscheidt"
},
{
"code": " from pymongo import MongoClient\n client = MongoClient()\n db = client.test\n categories_col = db.categories\n categories_col.insert_one({\"subjects\":[{ \"_id\": \"Programming\", \"path\": \",Books,\" }, { \"_id\": \"Databases\", \"path\": \",Books,Programming,\" }]})\n query = { \"subjects.path\": { \"$regex\": '^,Books,' } }\n print(categories_col.find(query).next())\n",
"text": "Hi @Moritz_Honscheidt\nIn terms of:db.categories.find( { path: “^,Books,” } )It would look like the code extract below. I’ve excluded error handling, any connection details beyond assuming a default local mongod.This code segment should be sufficient to give an indicative step on where to go next. In terms of follow-ups, I’d suggest you post further questions to Working with Data or to ODM & Drivers as this has moved outside of the scope of M220P and there will be a wider pool of viewers in those forums who can provide further assistance should you require it.Kindest regards,\nEoin",
"username": "Eoin_Brazil"
},
{
"code": "",
"text": "Yes, Python supports materialized path trees. You can implement them to manage hierarchical data structures efficiently. Libraries like SQLAlchemy can aid in working with databases, and custom implementations can be developed using Python’s string manipulation capabilities.Welcome to the world of Python programming! Whether you are a complete beginner or an aspiring developer looking to expand your skillset, embarking on a structured learning journey is crucial to mastering Python. In this blog, we will guide you...",
"username": "Vishakha_Singh1"
}
] |
Is it possible to use materialized path trees with python?
|
2021-06-09T09:49:26.167Z
|
Is it possible to use materialized path trees with python?
| 2,385 |
null |
[
"aggregation",
"queries",
"atlas-search"
] |
[
{
"code": "{\n \"_id\": {\"$oid\": \"64e6216ba465960d9107323f\"},\n \"log\": {\n \"created_by\": \"303feff3-0d01-4322-8ebe-ee27d55be2df\",\n \"update_count\": 1,\n \"updated_at\": {\"$date\": \"2023-08-23T15:10:47.537Z\"},\n \"updates_log\": [\n {\n \"updated_at\": {\"$date\": \"2023-08-23T15:10:47.537Z\"},\n \"updated_by\": \"303feff3-0d01-4322-8ebe-ee27d55be2df\",\n \"new_state\": {\"msg\": \"Example Message X\"},\n \"original_state\": {\"msg\": \"Example Message\"},\n }\n ],\n },\n \"cnf\": {\"st\": 1},\n}\ncnf.stlog.updated_atresult = message_collection.aggregate(\n[\n {\"$match\": {\"cnf.st\": {\n \"$in\": [1, -4]}}},\n {\n \"$facet\": {\n \"updated_events_last_24\": [\n {\"$match\": {\"log.update_count\": {\"$gt\": 0},\n \"log.updated_at\": {\"$gte\": QueryParams.get_last_24()}}},\n {\"$group\": {\"_id\": {}, \"count\": {\"$sum\": 1}}},\n ],\n }\n }\n ]\n)\n\nlist(result)\nresult = message_collection.aggregate(\n [\n {\n \"$searchMeta\": {\n \"index\": \"MsgAtlasIndex\",\n \"count\": {\"type\": \"total\"},\n \"compound\": {\n \"must\": [\n {\n \"range\": {\"path\": \"log.update_count\", \"gt\": 0}\n },\n {\n \"range\": {\"path\": \"log.updated_at\", \"gte\": QueryParams.get_last_24()}\n },\n {\n \"in\": {\n \"path\": \"cnf.st\",\n \"value\": [1, -4]\n }\n },\n ]\n }\n }\n }\n ]\n)\n\nlist(result)\nfrom datetime import datetime, timedelta\nclass QueryParams:\n \"\"\"Query params.\"\"\"\n\n @staticmethod\n def get_last_24() -> datetime:\n \"\"\"Pattern for time: 24 hours.\"\"\"\n return datetime.now() - timedelta(hours=24)\n",
"text": "I have documents like this sample:I already have this aggregation to count documents based on cnf.st and log.updated_at for the last 24 hours (means the documents updated in the last 24 hours).Aggregation:Now I’m trying to apply it using Atlas searchMeta approach like below:But I always getting count 0.Any idea please?To get the time within the last 24 hours used above:",
"username": "ahmad_al_sharbaji"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"cnf\": {\n \"fields\": {\n \"st\": {\n \"type\": \"number\"\n }\n },\n \"type\": \"document\"\n },\n \"log\": {\n \"fields\": {\n \"update_count\": {\n \"type\": \"number\"\n },\n \"updated_at\": {\n \"type\": \"date\"\n }\n },\n \"type\": \"document\"\n }\n }\n }\n}\ndb.myLogs.insertMany([\n {\n _id: 'A',\n log: {\n update_count: 1,\n updated_at: ISODate('2023-08-23T15:00:00.000Z'), // 15:00\n },\n cnf: {\n st: 1\n },\n },\n {\n _id: 'B',\n log: {\n update_count: 2,\n updated_at: ISODate('2023-08-23T12:00:00.000Z'), // 12:00\n },\n cnf: {\n st: 2\n },\n },\n {\n _id: 'C',\n log: {\n update_count: 1,\n updated_at: ISODate('2023-08-23T15:15:00.000Z'), // 15:15\n },\n cnf: {\n st: 1\n },\n }\n]);\ndb.myLogs.aggregate([\n {\n $searchMeta: {\n index: 'logs-search-test',\n count: {\n type: 'total'\n },\n compound: {\n must: [\n {\n range: {\n path: 'log.update_count',\n gt: 0\n },\n },\n {\n range: {\n path: 'log.updated_at',\n gte: ISODate('2023-08-23T15:00:00.000Z') // 15:00\n }\n },\n {\n in: {\n path: 'cnf.st',\n value: [-4, 1]\n }\n }\n ]\n },\n },\n },\n]);\n// Documents A and C\n{ count: { total: Long(\"2\") } } ]\n",
"text": "Hello, @ahmad_al_sharbaji !I tried to run your queries against the data you provided - everything works on my side .Make sure you correctly specified Atlas Search indexes for the fields you are using in your pipeline with $searchMeta.To support this given query, your “MsgAtlasIndex” should be similar to this:Simplified version of your dataset, that I used:Pipeline with $searchMeta I used:Output:Let me know, if this helped ",
"username": "slava"
},
{
"code": "",
"text": "@slava Exactly. My query was correct and It was the index definition. Thank you, Beast .",
"username": "ahmad_al_sharbaji"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
How to deal with date range using Atlas searchMeta
|
2023-08-23T15:22:42.790Z
|
How to deal with date range using Atlas searchMeta
| 503 |
null |
[
"queries",
"data-modeling",
"java",
"crud"
] |
[
{
"code": "",
"text": "Hi, we are working on a project where we have to store the large amount of hierarchical data into MongoDB. To do so we are creating sub batches of 1000 documents and running them across threads. But while performing the bulk write operation in threads it is taking very long time to store data in to the database around and many times we are getting GC-Overhead issue while performing the operation.Following the specifications of the threads and the records handled by threads.\nTotal no of threads - 10\nRecords per threads - 1000\nAverage number of records in total - 1,00,000\nAverage time taken by the bulk-write operation - 20 minsWe are using UpdateManyModule with filter and new UpdateOptions().upsert(true) along with the BulkWriteOptions().ordered(false) while storing the data into MongoDbDoes anyone have any idea why it may be taking more time ?",
"username": "Mrunal_Nagare"
},
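For illustration, the pattern described above (updateMany with upsert, unordered) looks roughly like the sketch below in the Node.js driver; the match key is a placeholder, and this is not the poster's actual Java code:

const { MongoClient } = require("mongodb");

async function upsertBatch(collection, batch) {
  // One sub-batch of conditional upserts, executed unordered so the server
  // can keep going past individual failures and report them all at the end.
  const ops = batch.map((doc) => ({
    updateMany: {
      filter: { externalId: doc.externalId }, // placeholder match key
      update: { $set: doc },
      upsert: true,
    },
  }));
  return collection.bulkWrite(ops, { ordered: false });
}

One thing worth checking in cases like this: if the filter fields are not indexed, each upsert has to scan the collection to find its match, which by itself can explain multi-minute batches.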
{
"code": "",
"text": "heavy writes take a bit time. DB servers need to do a lot of things upon write, e.g. create index if any, move data blocks around, allocate resources, replicate the update…If anything goes slower, the write takes longer time. So check your server metrics.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thanks @Kobe_W for the response.We were able to get the expected output after monitoring the server metrics it seems like mongoDb was not getting enough RAM so the operations were either stuck or failing. After configuring them correctly we were able to get the expected results.",
"username": "Mrunal_Nagare"
}
] |
Uploading a large number of records in batches using the MongoCollection.bulkWrite() method in Spring Boot
|
2023-08-14T05:42:07.029Z
|
Uploading a large number of records in batches using the MongoCollection.bulkWrite() method in Spring Boot
| 530 |
[] |
[
{
"code": "",
"text": "Hi,We are currently experiencing issues with our app services not being able to be deployed via github.\nThe deployment always fails with status: Failed: failed to handle webhook push event.\nWith seems to indicate an error with the github app the app service is connected to.\nAre there any known issues or has anyone else experienced this issue in the past and may be able to help?\nimage1901×107 10.1 KB\nBR\nD",
"username": "Daniel_Rollenmiller1"
},
{
"code": "",
"text": "Same problem here.\nScreenshot 2023-08-24 at 15.37.301171×334 48 KB\n",
"username": "Benoit_Werner"
},
{
"code": "",
"text": "Hi all,There was an issue with our latest release that has since been resolved. This should be working again now, sorry for any inconvenience!",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "We already suspected something like this. Seems to work fine again, thanks ",
"username": "Daniel_Rollenmiller1"
}
] |
App Service deployment "failed to handle webhook push event"
|
2023-08-24T11:34:11.643Z
|
App Service deployment “failed to handle webhook push event”
| 256 |
|
null |
[
"queries",
"sharding",
"indexes"
] |
[
{
"code": "find({file_id: 'xxx', id2: 'xxx'}).sort({version: -1}).limit(100)",
"text": "There is a collection with a sharding key on the hashed index file_id . In addition, commonly used queries in the business require both id2 and version as query conditions. (it will contain file_id in the query condition as well for shard routing) . For example: find({file_id: 'xxx', id2: 'xxx'}).sort({version: -1}).limit(100)Given that I already have the hashed index file_id, do I still need to create a compound index (file_id, id2, version) ? Or is it sufficient to create an index on (id2, version) only?",
"username": "Liu_Wenzhe"
},
{
"code": "",
"text": "do I still need to create a compound index (file_id, id2, version)With only file_id index, you can check explain output and see if the query is using that. If not, you may have to create (file_id, id2, version).This is also related to ESR rule.",
"username": "Kobe_W"
}
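A minimal mongosh sketch of the check suggested above: create the candidate compound index and confirm with explain() that the query uses it (the collection name is a placeholder; the field names are from the question):

// The hashed shard-key index mainly routes the query to the right shard;
// a compound index following the ESR rule can also cover the filter and the sort.
db.files.createIndex({ file_id: 1, id2: 1, version: -1 });

// In the winning plan, an IXSCAN on the new index with no blocking SORT stage
// means the index serves both the match and the sort.
db.files.find({ file_id: "xxx", id2: "xxx" })
  .sort({ version: -1 })
  .limit(100)
  .explain("executionStats");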
] |
Compound index question for sharded collection
|
2023-08-24T10:50:06.491Z
|
Compound index question for sharded collection
| 336 |
[] |
[
{
"code": "",
"text": "\nScreenshot 2023-08-08 at 7.35.07 AM909×588 75.8 KB\n\nHi, I use stitch plateform to connect mongodb atlas but it error “No address associated with hostname”\",then I want to know host name of Atlas.",
"username": "thanachot_supawasut"
},
{
"code": "ping cluster0.spk7r49.mongodb.net\nping: cannot resolve cluster0.spk7r49.mongodb.net: Unknown host\ncluster0.spk7r49.mongodb.netnslookupnslookup -type=srv _mongodb._tcp.cluster0.spk7r49.mongodb.net\nServer:\t\t127.0.0.1\nAddress:\t127.0.0.1#53\n\nNon-authoritative answer:\n_mongodb._tcp.cluster0.spk7r49.mongodb.net\tservice = 0 0 27017 ac-trrc202-shard-00-00.spk7r49.mongodb.net.\n_mongodb._tcp.cluster0.spk7r49.mongodb.net\tservice = 0 0 27017 ac-trrc202-shard-00-01.spk7r49.mongodb.net.\n_mongodb._tcp.cluster0.spk7r49.mongodb.net\tservice = 0 0 27017 ac-trrc202-shard-00-02.spk7r49.mongodb.net.\n",
"text": "Hi @thanachot_supawasutstitch plateformDo you have a link for this? I’m not too familiar with this platform myself. Based off the error itself and the naming convention / format it seems like it’s interpreting an Atlas SRV record as the hostname instead of using the hostnames associated with such record but this is just a guess at this stage.For example:Since cluster0.spk7r49.mongodb.net is not the actual hostname, I cannot ping it in this case.Using the nslookup on this srv record with associated prefix I can get the 3 hostnames associated with it:If you’re following any particular connection guide, please send that as well.Regards,\nJason",
"username": "Jason_Tran"
}
] |
Stitch connect to mongodb Atlas,error host name?
|
2023-08-08T01:18:39.462Z
|
Stitch connect to mongodb Atlas,error host name?
| 439 |
|
null |
[
"node-js",
"mongoose-odm",
"connecting",
"atlas-cluster"
] |
[
{
"code": "had an error today to connect my node js project to atlas, \ncurrently I'm on cluster level M0 (free)\nnode js v18.12.1\n\"express\": \"^4.18.2\",\n\"mongodb\": \"^5.8.0\",\n\"mongoose\": \"^7.4.4\",\n\nmongoose.connect('mongodb+srv://<username>:<password>@userdata.gl8wfbp.mongodb.net/?retryWrites=true&w=majority', {\n useNewUrlParser: true,\n use UnifiedTopology: true,\n});\n\nget this error message:\n \nnode:internal/error:484\n ErrorCaptureStackTrace(err);\n ^\n\nError: queryTxt ETIMEOUT userdata.gl8wfbp.mongodb.net\n in QueryReqWrap.onresolve [as done] (node:internal/dns/promises:251:17) {\n errno: undefined,\n code: 'ETIMEOUT',\n system call: 'queryTxt',\n hostname: 'userdata.gl8wfbp.mongodb.net'\n}\n\nsomeone please help me to solve this error, its taking my time and effort a lot.\n\nbest regards\nDani\n",
"text": "",
"username": "Dani_Haldi"
},
{
"code": "<username><password>",
"text": "Hi @Dani_Haldi, Welcome to the mongo community.It looks like you don’t put your username and password on your connection string.If you don’t have a user and password yet, can be good to look at this docs. Here is the explanation about create a user on atlas.After get your username and password, put these values in your connection string exactly where is <username> and <password>. If you have more questions about connection strings, can be helpful access this docs.",
"username": "Jennysson_Junior"
},
{
"code": "",
"text": "I have entered the username and password correctly but the error always appears as I explained",
"username": "Dani_Haldi"
},
{
"code": "code: 'ETIMEOUT',\n system call: 'queryTxt',\n hostname: 'userdata.gl8wfbp.mongodb.net'\n",
"text": "Looks to be a DNS related error. Try a different DNS for troubleshooting purposes.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I’ve tried what you suggested, and still get the same error ",
"username": "Dani_Haldi"
},
{
"code": "mongosh",
"text": "Can you share more details on:Additionally it may sound odd but we can’t rule it out completely, please verify you’re using the correct connection string. Check using the connection string procedure in the Atlas UI. I’ve seen some cases people created a cluster, copied that particular string, deleted said cluster then created a new one forgetting to get the new connection string.Regards,\nJason",
"username": "Jason_Tran"
}
] |
Atlas-cluster connection error
|
2023-08-23T13:47:23.485Z
|
Atlas-cluster connection error
| 497 |
null |
[] |
[
{
"code": "",
"text": "According to this page, I can restore from continuous back using the UI or CLI.What about the REST API?` Is that possible?In case I have to go with the CLI, do you have any recommendations on installing/authenticating the CLI on Github Actions? Thanks ",
"username": "Alex_Bjorlig"
},
{
"code": "deliveryType\"pointInTime\"",
"text": "Hi Alex,Based off your other threads it seems you’ve gotten the answer for this but just for other’s reference who view it I believe the following is associated with the restore link you provided:In this scenario, the request body field deliveryType would be \"pointInTime\".Regards,\nJason",
"username": "Jason_Tran"
},
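For reference, a request body for that call might look like the sketch below; this is based on the Atlas Admin API restore-jobs endpoint, and all values are placeholders to be checked against the current API docs:

{
  "deliveryType": "pointInTime",
  "pointInTimeUTCSeconds": 1692864000,
  "targetClusterName": "Cluster0",
  "targetGroupId": "<target-project-id>"
}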
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Can you restore continuous cloud backup with the REST API?
|
2023-08-11T10:57:28.495Z
|
Can you restore continuous cloud backup with the REST API?
| 321 |
null |
[
"atlas-search"
] |
[
{
"code": "[\n {\n $search: {\n index: \"TextSearch\",\n regex: {\n query: [\".*to.*\"],\n path: \"Name\",\n allowAnalyzedField : true\n }\n }\n }\n]\n[\n {\n $search: {\n index: \"TextSearch\",\n regex: {\n query: [\".*to\\\\?.*\"],\n path: \"Name\",\n allowAnalyzedField : true\n }\n }\n }\n]\n",
"text": "Escaping special characters does not seem to be working with the Atlas Search + Regex combination. I am running this directly via the Atlas Search Portal in the “index tester” section. No code, no drivers.Does work and returns anything with “to” in itDoes not work, I would expect it to return anything with “to?” in it.",
"username": "Mark_Mann"
},
{
"code": "",
"text": "Actually I may have found my answer. I believe the standard analyzer removes punctuation, which probably includes most special regex characters.Use the Atlas Search standard analyzer to divide text into terms based on word boundaries, convert terms to lowercase, and remove punctuation.So maybe I can turn this into an advice thread on the same topic?What should I use for a “fuzzy” meaning regex, not language or phonetic fuzzy, to fully utilize atlas search on a simple text field?Example:\nData:\nAmerica Fast And Proud\nInternational Fasteners\nHeavy Bolts & Fasteners\nSteadfast Industrial ManufacturingI want a search of “fast” to return all 4. Text does not work because it requires full match so it misses the last 3. Autocomplete does not work because it requires “fast” to be the first 4 letters of a token, so I use regex and fill in the start and end with “.*”, which seems to work.However what would be the appropriate analzyer & Search to match ANY input from a user, including special characters, no “fuzzy” in the sense of swapping out characters or finding similar items, allowing for start and end wildcards and case insensitivity.Example of “real-world” where special characters matter:\nPO#1234\nLot-4567\nPart Number 2346/2124\nHeat Treat 45#999In those cases if someone searches “#999” or “46/21” I want to treat that literally with a wildcard on each end.",
"username": "Mark_Mann"
},
{
"code": " \"analyzers\": [\n {\n \"charFilters\": [],\n \"name\": \"whitespaceLower\",\n \"tokenFilters\": [\n {\n \"type\": \"lowercase\"\n }\n ],\n \"tokenizer\": {\n \"type\": \"whitespace\"\n }\n }\n ],\n",
"text": "Well, I created and solved my own thread The info is out there but a bit buried…What I needed was a custom Analyzer. In my case, what I wanted was whitespace + ignore case, which works out to the below. Then I just set my fields to use that. Success so far!",
"username": "Mark_Mann"
},
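With the custom analyzer above applied to the indexed field, a literal 'contains' search could then look like the sketch below; the index name and path reuse the earlier examples, and the regex-escaping of raw user input is an assumption, not part of the original post:

// Escape regex metacharacters in the raw user input, then pad with wildcards.
const userInput = "46/21";
const escaped = userInput.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");

db.parts.aggregate([
  {
    $search: {
      index: "TextSearch",
      regex: {
        query: `.*${escaped}.*`,
        path: "Name",
        allowAnalyzedField: true
      }
    }
  }
]);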
{
"code": "",
"text": "Nice! Thanks for posting your solution @Mark_Mann - I’m sure it’ll be useful to others which encounter the same / similar use cases, especially with the examples you provided.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Atlas Search + $regex
|
2023-08-24T18:19:44.785Z
|
Atlas Search + $regex
| 442 |
null |
[
"aggregation",
"golang",
"atlas-search",
"text-search"
] |
[
{
"code": "matchStage := bson.D{{\"$match\", bson.D{{\"user_id\", userID}}}}\nlookupStage := bson.D{{\"$lookup\", bson.D{\n\t{\"from\", \"members\"},\n\t{\"localField\", \"member_id\"},\n\t{\"foreignField\", \"_id\"},\n\t{\"as\", \"members\"}},\n}}\ncursor, cursorErr := r.connect.User.Aggregate(ctx, mongo.Pipeline{\n\tmatchStage, lookupStage,\n})\nmatchStage = bson.D{{\"$match\", bson.D{\n\t{\"user_id\", userID},\n\t{\"$text\", bson.D{{\"$search\", search}}},\n}}}\n",
"text": "Hello, I am trying to do a nested search on an joined collection using aggregation but from the documentation, I can use $text on the root collection, nothing on the nested collection. Is it possible to achieve this?Here’s my coderegular search looks like thisbut this will search the users table but what I want to search is the members table I just joined in the lookup, how do i achieve this please?",
"username": "Franklin_Isaiah"
},
{
"code": "db.groups.insertMany([\n {\n _id: 'G1',\n members: ['M1', 'M2', 'M3']\n },\n {\n _id: 'G2',\n members: ['M4', 'M5']\n },\n]);\ndb.members.insertMany([\n {\n _id: 'M1',\n name: 'Fred Weasley'\n },\n {\n _id: 'M2',\n name: 'Seamus Finnigan'\n },\n {\n _id: 'M3',\n name: 'Arthur Weasley'\n },\n {\n _id: 'M4',\n name: 'Neville Longbottom'\n },\n {\n _id: 'M5',\n name: 'Percy Weasley'\n }\n]);\ndb.groups.aggregate([\n {\n $lookup: {\n from: 'members',\n let: {\n membersIds: '$members',\n },\n pipeline: [\n {\n $search: {\n text: {\n query: 'Weasley',\n path: 'name'\n }\n }\n },\n {\n $match: {\n $expr: {\n $in: ['$_id', '$$membersIds']\n }\n }\n }\n ],\n as: 'members',\n }\n }\n]);\n[\n {\n _id: 'G1',\n members: [\n { _id: 'M3', name: 'Arthur Weasley' },\n { _id: 'M1', name: 'Fred Weasley' }\n ]\n },\n {\n _id: 'G2', \n members: [\n { _id: 'M5', name: 'Percy Weasley' }\n ]\n]\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"string\"\n }\n }\n }\n}\n",
"text": "Hello, @Franklin_Isaiah ! Welcome to the MongoDB community! Indeed, with $text operator you can query the documents from one collection and you can not use it within aggregation pipeline.To be able to perform text search in joined documents from other collections, you need to use Atlas Search, which provides $search stage just for this case. As per documentation, $search must be the very first stage of any pipeline. That means, that we can not use it before $lookup stage, but we can use it the $lookup’s pipeline.I will demonstrate how it works with the examples below.First, we create test data:Then, we can join group members and filter them by name with the $search stage.Output:Note, that in order $search stage worked, you need to define proper index on Atlas. I used this one:",
"username": "slava"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Is it possible to implement search in a lookup stage or match stage for joined collection
|
2023-08-23T21:55:26.796Z
|
Is it possible to implement search in a lookup stage or match stage for joined collection
| 434 |
null |
[
"kubernetes-operator"
] |
[
{
"code": "apiVersion: mongodb.com/v1\nkind: MongoDB\nmetadata:\n name: mymongodb\nspec:\n ...\n connectivity:\n replicaSetHorizons:\n - \"external\": \"mydb-db-0.my-domain.com:27017\"\n - \"external\": \"mydb-db-1.my-domain.com:27017\"\n - \"external\": \"mydb-db-2.my-domain.com:27017\"\n externalAccess:\n externalService:\n annotations:\n service.beta.kubernetes.io/azure-load-balancer-internal-subnet: mysubnet\nexternal-dns.alpha.kubernetes.io/hostname: mydb-db-0.my-domain.com\nexternal-dns.alpha.kubernetes.io/hostname: mydb-db-1.my-domain.com\n",
"text": "From the documentation for the MongoDB resource, which is deployed by the operator, I’ve successfully added annotations that apply across each service created for each replica in my cluster.For integration with the external-dns operator (to create DNS names for each LoadBalancer service’s IP), is there a way to specify that each service created by the operator should have an annotation containing the DNS name for that specific instance? I’ve reviewed the docs at length and it seems I can only apply annotations that have a shared value across each generated service.For example I have something like the following in the MongoDB resource spec:In the above scenario, the annotation is added equally to each generated service.I want in the MongoDB resource spec to be able to define an annotation with a distinct value for each service. For example I want generated service “mydb-db-0-svc-external” to have annotation:Whereas service “mydb-db-1-svc-external” would have annotation:etc.",
"username": "emm_aitch"
},
{
"code": "",
"text": "Right now there’s unfortunately no way to achieve this.The only option is to create your services manually with that annotation in place.But it is something we’re planning to support at some point in the futre.It would be great if you could raise a feedback idea for this here: feedback.mongodb.com as when we update it there, you’d then get notified!",
"username": "Dan_Mckean"
},
{
"code": "",
"text": "Requested, and thank you for the feedback. A lot of times it helps to just be told “you can’t do that” so I can move on to other ways. It is working for me to create the services with distinct annotations separately, before applying the MongoDB resource spec.",
"username": "emm_aitch"
}
] |
External access: setting DNS name annotations on generated services for external-dns operator
|
2023-08-22T17:55:38.350Z
|
External access: setting DNS name annotations on generated services for external-dns operator
| 451 |
[
"python",
"connecting",
"atlas-cluster"
] |
[
{
"code": "from pymongo import MongoClient\n\nclient = MongoClient(\n \"mongodb+srv://culturebot:[pass]@cluster0.whzdc.mongodb.net/culturebot?retryWrites=true&w=majority\"\n)\nprint(client.list_database_names())\nTraceback (most recent call last):\n File \"C:\\Users\\D\\code\\thesadru\\culturebot\\_.py\", line 6, in <module>\n print(client.list_database_names())\n File \"C:\\Users\\D\\AppData\\Roaming\\Python\\Python39\\site-packages\\pymongo\\mongo_client.py\", line 1918, in list_database_names\n for doc in self.list_databases(session, nameOnly=True)]\n File \"C:\\Users\\D\\AppData\\Roaming\\Python\\Python39\\site-packages\\pymongo\\mongo_client.py\", line 1899, in list_databases\n res = admin._retryable_read_command(cmd, session=session)\n File \"C:\\Users\\D\\AppData\\Roaming\\Python\\Python39\\site-packages\\pymongo\\database.py\", line 755, in _retryable_read_command\n return self.__client._retryable_read(\n File \"C:\\Users\\D\\AppData\\Roaming\\Python\\Python39\\site-packages\\pymongo\\mongo_client.py\", line 1460, in _retryable_read\n server = self._select_server(\n File \"C:\\Users\\D\\AppData\\Roaming\\Python\\Python39\\site-packages\\pymongo\\mongo_client.py\", line 1278, in _select_serverer\n server = topology.select_server(server_selector)\n File \"C:\\Users\\D\\AppData\\Roaming\\Python\\Python39\\site-packages\\pymongo\\topology.py\", line 241, in select_server\n return random.choice(self.select_servers(selector,\n File \"C:\\Users\\D\\AppData\\Roaming\\Python\\Python39\\site-packages\\pymongo\\topology.py\", line 199, in select_servers\n server_descriptions = self._select_servers_loop(\n File \"C:\\Users\\D\\AppData\\Roaming\\Python\\Python39\\site-packages\\pymongo\\topology.py\", line 215, in _select_servers_loop\n raise ServerSelectionTimeoutError(\npymongo.errors.ServerSelectionTimeoutError: cluster0-shard-00-02.whzdc.mongodb.net:27017: timed out,cluster0-shard-00-00.whzdc.mongodb.net:27017: timed out,cluster0-shard-00-01.whzdc.mongodb.net:27017: timed out, Timeout: 30s, Topology Description: <TopologyDescription id: 60f00e93a72f19041df4afa6, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('cluster0-shard-00-00.whzdc.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('cluster0-shard-00-00.whzdc.mongodb.net:27017: timed out')>, <ServerDescription ('cluster0-shard-00-01.whzdc.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('cluster0-shard-00-01.whzdc.mongodb.net:27017: timed out')>, <ServerDescription ('cluster0-shard-00-02.whzdc.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('cluster0-shard-00-02.whzdc.mongodb.net:27017: timed out')>]>\n0.0.0.0",
"text": "Hello, I have been using pymongo with atlas for a while now, and suddenly around two hours ago, I must have done something wrong because the same code I’ve been using the entire time suddenly stopped working.I have attempted to isolate the issue and made this reproducible exampleand this is the full tracebackThe same error is raised even for other operations.I have attempted to search around for a solution. Most of them consisted of using different parameters for the client or adding parameters to the query of the connection string, all of which yield the same result.I have of course added both my IP and 0.0.0.0 to the network access.\nimage1447×208 18.8 KBVersions:Any directions would be appreciated, I can always provide more info.",
"username": "sadru"
},
{
"code": "NetworkTimeout('cluster0-shard-00-01.whzdc.mongodb.net:27017: timed out')\npip listping cluster0-shard-00-01.whzdc.mongodb.net",
"text": "This error means that pymongo timed out while waiting for a response from the remote server. Usually this means there is a network issue between your machine and the database.",
"username": "Shane"
},
{
"code": "absl-py 0.13.0\naiohttp 3.6.3\nanime-downloader 5.0.7\nanimethemes-dl 2.2.2.3\nappconfigpy 1.0.2\nappdirs 1.4.4\nart 5.1\nastunparse 1.6.3\nasync-timeout 3.0.1\nasyncio-dgram 2.0.0\natomicwrites 1.4.0\nattrs 20.3.0\nAutomat 20.2.0\nautopep8 1.5.4\nbackcall 0.2.0\nbcrypt 3.2.0\nbeartype 0.5.1\nbeautifulsoup4 4.9.3\nbinary 1.0.0\nblack 21.6b0\nbleach 3.2.1\nblinker 1.4\nboto3 1.16.56\nbotocore 1.19.56\nBrotli 1.0.9\nbrowser-cookie3 0.12.1\nbrowsermob-proxy 0.8.0\nbs4 0.0.1\nbson 0.5.10\nCacheControl 0.12.6\ncachetools 4.2.1\ncachy 0.3.0\ncertifi 2020.12.5\ncffi 1.14.5\ncfscrape 2.1.1\nchardet 3.0.4\ncleo 0.8.1\nclick 7.1.2\nclikit 0.6.2\ncloudscraper 1.2.58\ncolorama 0.4.4\ncoloredlogs 15.0\ncommonmark 0.9.1\nconstantly 15.1.0\ncrashtest 0.3.1\ncryptography 3.3.1\ncssselect 1.1.0\ncv 1.0.0\ncycler 0.10.0\nCython 0.29.21\nDataProperty 0.50.0\ndebtcollector 2.2.0\ndebugpy 1.3.0\ndecorator 4.4.2\ndecouple 0.0.7\ndeprecation 2.1.0\ndeskew 0.10.32\ndiscord 1.0.1\ndiscord-pretty-help 1.3.2\ndiscord.py 1.7.2\ndiscord.py-stubs 1.7.1\nDiscordUtils 1.2.6\ndiscum 1.0.1\ndistlib 0.3.2\ndnspython 2.1.0\ndoc-search 1.0.7\ndocutils 0.16\nEditorConfig 0.12.3\nenvinfopy 0.0.3\nexcelrd 2.0.3\neyeD3 0.9.5\nfake-useragent 0.1.11\nfasttext 0.9.2\nfeedparser 6.0.8\nfilelock 3.0.12\nfiletype 1.0.7\nfirebase-admin 4.5.2\nFlask 1.1.2\nFlask-Cache 0.13.1\nFlask-Caching 1.10.0\nFlask-Cors 3.0.10\nflask-swagger 0.2.14\nflatbuffers 1.12\nfuture 0.18.2\nfuzzywuzzy 0.18.0\ngallery-dl 1.16.5\ngast 0.4.0\ngenshinstats 1.4.4.1\ngenshinstats-api 1.1\ngmail 0.6.3\ngoogle 3.0.0\ngoogle-api-core 1.26.1\ngoogle-api-python-client 2.3.0\ngoogle-auth 1.27.1\ngoogle-auth-httplib2 0.1.0\ngoogle-auth-oauthlib 0.4.4\ngoogle-cloud-bigquery 2.13.0\ngoogle-cloud-core 1.6.0\ngoogle-cloud-firestore 2.0.2\ngoogle-cloud-storage 1.36.1\ngoogle-crc32c 1.1.2\ngoogle-pasta 0.2.0\ngoogle-resumable-media 1.2.0\ngoogleapis-common-protos 1.53.0\ngooglemaps 4.4.5\ngoogletrans 3.0.0\ngreenlet 1.0.0\ngrpcio 1.34.1\ngunicorn 20.0.4\nh11 0.9.0\nh2 3.2.0\nh5py 3.1.0\nhpack 3.0.0\nhstspreload 2020.12.22\nhtml2markdown 0.1.7\nhtml5lib 1.1\nhttpcore 0.9.1\nhttplib2 0.19.0\nhttpx 0.13.3\nhumanfriendly 9.1\nhumanize 3.7.0\nhyperframe 5.2.0\nhyperlink 21.0.0\nidna 2.8\nimageio 2.9.0\nincremental 21.3.0\niniconfig 1.1.1\nipykernel 6.0.2\nipython 7.25.0\nipython-genutils 0.2.0\niso8601 0.1.13\nitemadapter 0.2.0\nitemloaders 1.0.4\nitsdangerous 1.1.0\njedi 0.18.0\njeepney 0.6.0\nJinja2 2.11.3\njmespath 0.10.0\njoblib 1.0.1\nJs2Py 0.71\njsbeautifier 1.13.5\njsonpointer 2.1\njsonschema 3.2.0\njupyter-client 6.1.12\njupyter-core 4.7.0\nkaitaistruct 0.9\nKeras 2.4.3\nkeras-nightly 2.5.0.dev2021032900\nKeras-Preprocessing 1.1.2\nkeyboard 0.13.5\nkeyring 21.5.0\nkiwisolver 1.3.1\nlockfile 0.12.2\nloguru 0.5.3\nlxml 4.6.3\nlz4 3.1.3\nMako 1.1.4\nmando 0.6.4\nMarkdown 3.3.3\nMarkupSafe 1.1.1\nmatplotlib 3.4.2\nmatplotlib-inline 0.1.2\nmbstrdecoder 1.0.1\nmcstatus 6.1.0\nmechanize 0.4.5\nmock 4.0.3\nmotor 2.4.0\nMouseInfo 0.1.3\nmsgfy 0.1.0\nmsgpack 1.0.2\nmultidict 4.7.6\nmutagen 1.45.1\nmypy 0.800\nmypy-extensions 0.4.3\nmysql 0.0.2\nmysql-connector-python 8.0.23\nmysqlclient 2.0.3\nnbformat 5.0.8\nnetaddr 0.8.0\nnetifaces 0.11.0\nnetworkx 2.5.1\nnltk 3.6.2\nnumpy 1.21.1\noauth2client 4.1.3\noauthlib 3.1.0\nopencv-python 4.5.1.48\nopt-einsum 3.3.0\norjson 3.5.2\nos-service-types 1.7.0\noslo.i18n 5.0.1\npackaging 20.9\npandas 1.2.0\nparse 1.19.0\nparsel 1.6.0\nparso 0.8.2\npastel 0.2.1\npath 15.0.1\npath.py 12.5.0\npathspec 0.8.1\npathvalidate 2.3.1\npbkdf2 
1.3\npbr 5.6.0\npdoc3 0.9.2\npeewee 3.13.3\npexpect 4.8.0\nphonenumbers 8.12.21\npickleshare 0.7.5\nPillow 8.1.0\npip 21.1.3\npip-check-reqs 2.2.0\npkginfo 1.6.1\nplotly 5.1.0\npluggy 0.13.1\npoetry 1.1.6\npoetry-core 1.0.3\npraw 7.2.0\nprawcore 2.0.0\npriority 1.3.0\nprompt-toolkit 3.0.18\nProtego 0.1.16\nproto-plus 1.14.2\nprotobuf 3.15.5\npsutil 5.8.0\nptyprocess 0.7.0\npy 1.10.0\npyaes 1.6.1\npyasn1 0.4.8\npyasn1-modules 0.2.8\nPyAutoGUI 0.9.52\npybind11 2.7.0\nPyBluez 0.23\npycana 0.1\npycodestyle 2.6.0\npycparser 2.20\npycryptodome 3.10.1\npycryptodomex 3.9.9\npydantic 1.8.2\nPyDispatcher 2.0.5\npydivert 2.1.0\nPyDrive 1.3.1\npyee 8.1.0\npyfiglet 0.8.post1\npygame 2.0.1\nPyGetWindow 0.0.9\nPygments 2.8.1\npyjsparser 2.7.1\npylev 1.4.0\npymongo 3.12.0\nPyMsgBox 1.0.9\nPyMySQL 1.0.2\nPyNaCl 1.4.0\npynput 1.7.3\npyOpenSSL 20.0.1\npyparsing 2.4.7\npyperclip 1.8.1\npypinfo 19.0.0\npyppeteer 0.2.5\nPyQt5 5.15.2\nPyQt5-sip 12.8.1\npyquery 1.4.3\npyreadline 2.1\npyreadline3 3.3\nPyRect 0.1.4\npyrsistent 0.17.3\nPyScreeze 0.1.26\nPySimpleGUI 4.39.1\npySmartDL 1.3.4\nPySocks 1.7.1\npytablereader 0.30.1\npyte 0.8.0\npytesseract 0.3.8\npytest 6.2.3\npython-brainfuck 0.9.1\npython-dateutil 2.8.1\npython-gmaps 0.3.1\npython-Levenshtein 0.12.1\npytube 10.8.4\nPyTweening 1.0.3\npytz 2020.5\nPyWavelets 1.1.1\npywin32 300\npywin32-ctypes 0.2.0\nPyYAML 5.1\npyzmq 22.1.0\nqueuelib 1.5.0\nradon 5.0.1\nrandom-user-agent 1.0.1\nrapidfuzz 1.0.0\nreadme-renderer 28.0\nregex 2020.11.13\nrequests 2.25.1\nrequests-cache 0.5.2\nrequests-html 0.10.0\nrequests-oauthlib 1.3.0\nrequests-toolbelt 0.9.1\nrequestsexceptions 1.4.0\nretryrequests 0.0.3\nrfc3986 1.4.0\nrich 9.11.0\nrsa 4.7.2\ns3transfer 0.3.4\nscikit-image 0.18.2\nscikit-learn 0.24.2\nscipy 1.5.4\nScrapy 2.5.0\nseaborn 0.11.1\nSecretStorage 3.3.1\nselenium 3.141.0\nselenium-wire 4.0.4\nsentry-sdk 0.19.4\nservice-identity 18.1.0\nsetuptools 56.0.0\nsgmllib3k 1.0.0\nshellingham 1.4.0\nSimpleSQLite 1.1.4\nsix 1.15.0\nsniffio 1.2.0\nsoupsieve 2.1\nspotdl 3.6.1\nspotipy 2.18.0\nSQLAlchemy 1.4.12\nsqlitebiter 0.34.1\nsqliteschema 1.0.5\nstevedore 3.3.0\nstockfish 3.13.0\ntabledata 1.1.3\ntabulate 0.8.7\ntcolorpy 0.0.8\ntenacity 8.0.1\ntensorboard 2.5.0\ntensorboard-data-server 0.6.1\ntensorboard-plugin-wit 1.8.0\ntensorflow 2.5.0\ntensorflow-estimator 2.5.0\ntermcolor 1.1.0\nthefuck 3.30\nthreadpoolctl 2.2.0\ntifffile 2021.7.2\ntinydb 4.4.0\ntinyrecord 0.2.0\ntoml 0.10.2\ntomlkit 0.7.2\ntornado 6.1\ntox 3.23.1\ntqdm 4.56.0\ntraitlets 5.0.5\ntwine 3.3.0\nTwisted 21.2.0\ntwisted-iocpsupport 1.0.1\ntyped-ast 1.4.2\ntypeguard 2.12.1\ntypepy 1.1.2\ntyping-extensions 3.7.4.3\ntzlocal 2.1\nua-parser 0.10.0\nujson 4.0.2\nUnidecode 1.2.0\nupdate-checker 0.18.0\nuritemplate 3.0.1\nurllib3 1.26.4\nvirtualenv 20.4.7\nw3lib 1.22.0\nwcwidth 0.2.5\nwebencodings 0.5.1\nwebsocket-client 0.57.0\nwebsockets 8.1\nWerkzeug 1.0.1\nwheel 0.36.2\nwin-unicode-console 0.5\nwin32-setctime 1.0.3\nwrapt 1.12.1\nwsproto 1.0.0\nXlsxWriter 1.4.0\nyarl 1.5.1\nyoutube-dl 2021.5.16\nytmusicapi 0.14.3\nzipp 3.4.1\nzope.interface 5.3.0\n> ping cluster0-shard-00-01.whzdc.mongodb.net\n\nPinging ec2-18-196-119-49.eu-central-1.compute.amazonaws.com [18.196.119.49] with 32 bytes of data:\nReply from 18.196.119.49: bytes=32 time=19ms TTL=39\nReply from 18.196.119.49: bytes=32 time=19ms TTL=39\nReply from 18.196.119.49: bytes=32 time=19ms TTL=39\nReply from 18.196.119.49: bytes=32 time=19ms TTL=39\n\nPing statistics for 18.196.119.49:\n Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),\nApproximate 
round trip times in milli-seconds:\n Minimum = 19ms, Maximum = 19ms, Average = 19ms\n> mongosh cluster0-shard-00-01.whzdc.mongodb.net\n\nCurrent Mongosh Log ID: 60f7ca8b2aae42386b38edb3\nConnecting to: mongodb://cluster0-shard-00-01.whzdc.mongodb.net:27017/test?directConnection=true\nMongoServerSelectionError: connect ETIMEDOUT 18.196.119.49:27017\n",
"text": "I’ve been trying to connect using company wifi, excuse me for not knowing much about networking but I guess it’s because there’s some system in place that’d prevent me from connecting to addresses like these.\nIt seems to work completely fine when I’m at home for some reason.When I ping the cluster:I installed the mongo shell and attempted to connect to the cluster, still got a timeout after ~30s",
"username": "sadru"
},
{
"code": "",
"text": "Hi, Make sure you entered the user password, not the MongoDB account password. I encountered similar issue.",
"username": "nandhan_raj"
},
{
"code": "",
"text": "I’m having the same issue. Is there no solution for this? I tried to do what @shane told and got the same results as @sadru.",
"username": "Tom_Cajot"
},
{
"code": "",
"text": "This thread is 2 months old.Please start a new thread and post a screenshot of what you are trying that shows the issue you are having.If you could not resolve your issue with what was posted in this thread, then your issue is slightly different and should be on its own thread.",
"username": "steevej"
},
{
"code": "",
"text": "I am going through the same problem recently. Did you get any solution to this issue?",
"username": "mithilesh_nakade"
},
{
"code": "",
"text": "As already mentionedThis thread is 2 months old.Now 3 months, and more since the original message.As already mentionedIf you could not resolve your issue with what was posted in this thread, then your issue is slightly different and should be on its own thread.As already mentionedPlease start a new thread and post a screenshot of what you are trying that shows the issue you are having.",
"username": "steevej"
},
{
"code": "",
"text": "Hello, buddy.\nI had this problem before, so i have tried everything to solve it and i did.\nSo, most probably you added your own ip adress when you were creating a db, but you need to allow all the ip adresses that will connect to your db.\nYou may change it in your page in network access.\nThank you for attention! \nI hope it will be useful for you.",
"username": "FaNcY"
},
{
"code": "",
"text": "Couldn’t connect with enabled VPN (ProtonVPN). Works without it.",
"username": "amordo_N_A"
},
{
"code": "",
"text": "@sadru Go to Mongo db atlas dashboard\nSelect your current project\nclick on Network Access (left side menu)\nIn the “IP Access List” tab\nMake sure you don’t have an old dynamic IP of your machine set there\nOr you can add 0.0.0.0/0 (includes your current IP address) to access from anywhere",
"username": "Andrei_Pradan"
},
{
"code": "",
"text": "This worked for me, thanks you so much",
"username": "Mwanajamii_Services"
},
{
"code": "",
"text": "You’re the most generous Lord in all the Land, thank you sir ",
"username": "Sangeeth_Joseph"
},
{
"code": "",
"text": "Thx bro you are a truly god",
"username": "Diego_Silva_de_Franca"
},
{
"code": "connect_string = 'mongodb+srv://{}:{}@{}/?retryWrites=true&authSource=the_database&authMechanism=SCRAM-SHA-1'.format(username, password, hostname)\npy_client = MongoClient(connect_string)\n",
"text": "Another solution-\nI created a SCRAM user in the database access tab.\nThen, I added &authSource=the_database&authMechanism=SCRAM-SHA-1 to the connection string. Like this:",
"username": "Omer_Hertz"
},
{
"code": "unable to get local issuer certificate [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed open \"/Applications/Python <version_number>/Install certificates.command\"\n",
"text": "I asked a team member in chat and they gave a working answer!This is what they said:\nIf you are receiving an SSL certificate error when connecting to your Atlas cluster with PyMongo, such as:This indicates that Python does not have access to the system’s root certificates.This often occurs because OpenSSL does not have access to the system’s root certificates or the certificates are out of date. Linux users should ensure that they have the latest root certificate updates installed from their Linux vendor.A sample command script is included in “/Applications/Python <version_number>” to install a curated bundle of default root certificates from the third-party certifi package. Open a terminal window and run the following command, with your Python version.Alternatively, you can head to the Applications folder in Finder and double click “Install Certificates.command” to run the script:",
"username": "Sivayogeith_Umamaheswaran"
},
{
"code": "",
"text": "I logged in to thanks you very much ",
"username": "pichaya1988"
},
{
"code": "",
"text": "Not sure how special it is but for me it works by replacing the url with mongodb+srv://:@cluster0.gr4blbt.mongodb.net/?ssl=true&ssl_cert_reqs=CERT_NONE&retryWrites=true&w=majorityBasically adding ssl=true&ssl_cert_reqs=CERT_NONE and remove the “?” in the end",
"username": "Chen_Huang"
},
{
"code": "",
"text": "ssl_cert_reqs=CERT_NONE is insecure and not recommended. See ServerSelectionTimeoutError [SSL: CERTIFICATE_VERIFY_FAILED] Trying to understand the origin of the problem for how to fix TLS errors securely.",
"username": "Shane"
},
{
"code": "",
"text": "\nOpera Snapshot_2023-06-25_075518_ashuradhipathi-opulent-doodle-v6pvppw46952p657.github.dev1386×824 87.4 KB\n\nI am facing this issue can anyone help me out",
"username": "PREM_KIRAN_YADAV_LAKNABOINA"
}
] |
pymongo.errors.ServerSelectionTimeoutError with atlas even when added to network access
|
2021-07-15T10:51:15.307Z
|
pymongo.errors.ServerSelectionTimeoutError with atlas even when added to network access
| 75,892 |
|
[
"dot-net",
"replication",
"compass",
"mongodb-shell",
"containers"
] |
[
{
"code": "version: \"3\"\nservices:\n mongo1:\n hostname: mongo1\n container_name: localmongo1\n image: mongo\n expose:\n - 27017\n ports:\n - 27017:27017\n restart: always\n entrypoint: [ \"/usr/bin/mongod\", \"--bind_ip_all\", \"--replSet\", \"rs0\" ]\n volumes:\n - ~/.volumes/mongo/data1/db:/data/db\n - ~/.volumes/mongo/data1/configdb:/data/configdb\n mongo2:\n hostname: mongo2\n container_name: localmongo2\n image: mongo\n expose:\n - 27017\n ports:\n - 27018:27017\n restart: always\n entrypoint: [ \"/usr/bin/mongod\", \"--bind_ip_all\", \"--replSet\", \"rs0\" ]\n volumes:\n - ~/.volumes/mongo/data2/db:/data/db\n - ~/.volumes/mongo/data2/configdb:/data/configdb\n mongo3:\n hostname: mongo3\n container_name: localmongo3\n image: mongo\n expose:\n - 27017\n ports:\n - 27019:27017\n restart: always\n entrypoint: [ \"/usr/bin/mongod\", \"--bind_ip_all\", \"--replSet\", \"rs0\" ]\n volumes:\n - ~/.volumes/mongo/data3/db:/data/db\n - ~/.volumes/mongo/data3/configdb:/data/configdb\n\n mongoinit:\n image: mongo:6.0.7\n # this container will exit after executing the command\n restart: \"no\"\n depends_on:\n - localmongo1\n - localmongo2\n - localmongo3\n command: >\n mongosh --host localhost:27017 --eval \n '\n db = (new Mongo(\"localhost:27017\")).getDB(\"test\");\n config = {\n \"_id\" : \"rs0\",\n \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"mongo1:27017\"\n },\n {\n \"_id\" : 1,\n \"host\" : \"mongo2:27017\"\n },\n {\n \"_id\" : 2,\n \"host\" : \"mongo3:27017\"\n }\n ]\n };\n rs.initiate(config);\n ' \n\nCurrent Mongosh Log ID: 64e54c92cecb757d4468cd86\nConnecting to: mongodb://localhost:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1\nMongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017\nmongodb://127.0.0.1:27017\nmongodb://localhost:27017?readPreference=primaryPreferred\n",
"text": "Hi,For my demo C# project I am trying to create a replica set through docker compose and connect to it first from NoSQLBooster/ Mongodb Compass and then from my C# application. However, the I am getting different errors during the process and not able to connect to the replica set at all.After spending a lot of times searching through internet and trying various options, I finally decided to create a simple docker compose file to demonstrate my situation. This docker compose file is taken from one of the github samples I stumbled upon, which is very similar to other solutions I have seen as well, which creates 3 instances of mongodb, add them to a replica set name rs0 and then use mongo init to initialize the replicaset.Here is the complete docker compose file:After I run this, I get the following message:\nimage1208×121 5.46 KB\nWhen I open the mongoinit container logs, I get the following message:Then I tried to connect to the primary db from NoSQLBooster, got a connection successBut when I completed the connection, it’s showing this message: not primary and secondaryOK = falseI tried this connection string:but got the error as shown below:\n\nimage597×625 56.1 KB\nAny help to solve this issue will be appreciated. Thank you!",
"username": "UB_K"
},
{
"code": "",
"text": "Hi, @UB_K,You have created 3 Docker instances, but have not configured a shared virtual network for them to communicate on. Thus the host can communicate with each Docker container, but the Docker containers cannot communicate amongst themselves. The following step-by-step guide will hopefully help:Learn how to create a MongoDB cluster with Docker in this step-by-step tutorial.Sincerely,\nJames",
"username": "James_Kovacs"
},
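One concrete symptom in the log output above: the mongoinit container connects to localhost:27017, which inside that container refers to mongoinit itself, hence the ECONNREFUSED 127.0.0.1. A minimal sketch of a corrected init command, assuming the service names from the compose file:

mongosh --host mongo1:27017 --eval '
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "mongo1:27017" },
      { _id: 1, host: "mongo2:27017" },
      { _id: 2, host: "mongo3:27017" }
    ]
  });
'

Connecting from the host is a separate step: since the replica set advertises the hostnames mongo1/mongo2/mongo3, those names must also resolve on the host (for local testing, e.g., /etc/hosts entries pointing at 127.0.0.1).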
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Unable to connect to MongoDB cluster created through docker compose
|
2023-08-23T00:14:16.536Z
|
Unable to connect to MongoDB cluster created through docker compose
| 541 |
|
null |
[
"aggregation",
"data-modeling",
"attribute-pattern"
] |
[
{
"code": "",
"text": "How can I define the schema of attribute patterns? How can define Schema when we are using the field like key and value?",
"username": "Iram_Barkat"
},
{
"code": "",
"text": "Hello, @Iram_Barkat ! Welcome to the community!I’d suggest you to start reading with this article about Attribute pattern, try it out and come back, if you have more questions ",
"username": "slava"
}
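For a concrete starting point, here is a minimal mongosh sketch of the Attribute pattern from that article: variable attributes folded into a key/value array, one compound index covering them all, and an $elemMatch query (collection and field names are illustrative):

// One document, with variable attributes as k/v pairs.
db.products.insertOne({
  name: "Cherry Coke 6-pack",
  attributes: [
    { k: "container", v: "can" },
    { k: "capacity", v: 354 },
    { k: "color", v: "red" }
  ]
});

// A single compound index makes every attribute pair queryable.
db.products.createIndex({ "attributes.k": 1, "attributes.v": 1 });

// $elemMatch ensures k and v match within the same array element.
db.products.find({ attributes: { $elemMatch: { k: "color", v: "red" } } });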
] |
Attribute Pattern - Schema Desin
|
2023-08-18T07:17:41.058Z
|
Attribute Pattern - Schema Desin
| 477 |
null |
[
"swift"
] |
[
{
"code": "initialSubscriptionsflexibleSyncConfiguration",
"text": "I created a realm mongo db app. I first created a UserInfo class and added it to initialSubscriptions when creating flexibleSyncConfiguration.\nThen I added another class Account and forgot to add it to initialSubscriptions and keep getting error freed pointer was not the last allocation which threw me on wrong witch hunt and after wasting two days I found that I missed it’s subscription.\nSo is this expected behavior or the error should have a bit relevant to the problem",
"username": "mansoor_ali1"
},
{
"code": "",
"text": "@mansoor_ali1 Welcome to the forums.Sorry to hear about the trouble. The description of the issue is a bit vague - do you have some example code we could see that replicates the issue?I crafted and app and followed what you described - the only issue I ran into was that Account didn’t sync (as they where not in the subscriptions) so perhaps I am doing something different.The error sounds more like it would have to do with LinkingObjects - do you use those?",
"username": "Jay"
}
] |
When a Realm Object is not added to subscription trying to add it to realm throws confusing error
|
2023-08-24T11:07:20.378Z
|
When a Realm Object is not added to subscription trying to add it to realm throws confusing error
| 406 |
null |
[
"java"
] |
[
{
"code": "",
"text": "Right now, I am upgrading java mongodb legacy driver to mongodb driver sync. In DBCursor class, there was the method “.getCollection()”, it gets the DBCollection from the object of DBCursor, but MongoCursor class does not have such method (I want to get the MongoCollection from MongoCursor object). Is there any way to do this?",
"username": "Merch_N_A"
},
{
"code": "",
"text": "There is not. Generally the new API does not include these sort of back pointers, e.g. MongoCollection to MongoDatabase, etc. One good reason not to for MongoCursor is that cursors can operate at a higher level than a single collection for some operations, so there is no single MongoCollection to associate it with.Regards,\nJeff",
"username": "Jeffrey_Yemin"
}
] |
MongoCursor to MongoCollection
|
2023-08-24T12:11:26.344Z
|
MongoCursor to MongoCollection
| 392 |
null |
[
"storage"
] |
[
{
"code": "",
"text": "{“t”:{“$date”:“2023-08-24T09:13:41.087+05:30”},“s”:“W”, “c”:“STORAGE”, “id”:22347, “ctx”:“initandlisten”,“msg”:“Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.”}\n{“t”:{“$date”:“2023-08-24T09:13:41.087+05:30”},“s”:“F”, “c”:“ASSERT”, “id”:23091, “ctx”:“initandlisten”,“msg”:“Fatal assertion”,“attr”:{“msgid”:28561,“file”:“src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp”,“line”:696}}\n{“t”:{“$date”:“2023-08-24T09:13:41.087+05:30”},“s”:“F”, “c”:“ASSERT”, “id”:23092, “ctx”:“initandlisten”,“msg”:“\\n\\n***aborting after fassert() failure\\n\\n”}",
"username": "Rahul_Panwar1"
},
{
"code": "",
"text": "Have you checked the Platform Support Notes?",
"username": "Jack_Woehr"
}
] |
{"t":{"$date":"2023-08-24T09:13:41.087+05:30"},"s":"W", "c":"STORAGE", "id":22347, "ctx":"initandlisten","msg":"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade."} {"t":{"$date":"202
|
2023-08-24T03:55:15.924Z
|
{"t":{"$date":"2023-08-24T09:13:41.087+05:30"},"s":"W", "c":"STORAGE", "id":22347, "ctx":"initandlisten","msg":"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade."} {"t":{"$date":"202
| 403 |
null |
[] |
[
{
"code": "{\n \"_id\": {\n \"$oid\": \"63c8c45228636a53ac28cfd7\"\n },\n \"address\": {\n \"city\": \"Cheyenne\",\n \"region\": \"WY\"\n },\n \"geo\": {\n \"lat\": \"41.135\",\n \"lon\": \"-104.7902\"\n }\n}\n",
"text": "I have a full US city-data collection with latitude and longitude coordinates…\nI also have over 1.5 million address records for customers. I need to use the combination of city/state as a key and upsert the longitude, latitude data into my geo field for the associated customer city/state.Structured data sets… relatively easy, but I’m new to unstructured… They are on the same cluster in different databases… I get that I need to have the collections in the same DB (seems silly to not be able to merge data from different DBs, but). I just don’t know how to upsert from many to many in Mongo. Appreciate any help.For example… My city_geo collection has:My Customer collection has a number of fields but has the object ‘address’ with the same data (city, region), but it has a NULL geo field… I need to update the geo field in the customer data with the geo object data from the city_geo collection by matching the ‘address’ object. This city_geo is an aggregation of the original city_data (with 43 string fields) into a same database output to the same DB as the customer data collection as city_geo with the 2 objects above.",
"username": "Bill_Fetters"
},
{
"code": "db.customers.find().forEach(function (doc1) {\n var doc2 = db.city_geo.findOne({ city: doc1.address.city, region: doc1.address.region}, { geo: 1});\n if (doc2 != null) {\n doc1.geo.lat = doc2.geo.lat;\n doc1.geo.lon = doc2.geo.lon;\n db.customers.save(doc1);\n }\n},\n{\n allowDiskUse: true\n}\n);\n",
"text": "Something like this seems to work, but it is taking forever and I don’t know if it will finish. I’m sure that there is a much better way to do it…",
"username": "Bill_Fetters"
},
{
"code": " db.customers.save(doc1);/* I want to simplify the $merge by having less data to merge */\nproject_address = { \"$project\" : {\n \"address\" : 1\n} }\nlookup = { \"$lookup\" : {\n \"from\" : \"geo_city\" ,\n \"localField\" : \"address\" ,\n \"foreignField\" : \"address\" ,\n \"as\" : \"geo\" ,\n \"pipeline\" : [ { \"$project\" : { \"$geo\" : 1 } } ]\n} }\n/* the output of $lookup is an array but I want an object */\nproject_geo = { \"$project\" : {\n \"geo\" : { \"$arrayElemAt\" : [ \"$geo\" : 0 } }\n} }\nmerge = { \"$merge\" : {\n \"into\" : \"customers\" ,\n \"on\" : \"_id\"\n} }\n",
"text": "Since you are calling db.customers.save(doc1);I assume that you use mongoose.The main issue here is that you do every document 1 by 1. Slow, very slow. You need to use bulk operation.One way to do that is to use an aggregation pipeline that performs a $lookup and terminates with a $merge. Something along the untested lines:The second issue is that you definitively need the unique index { address:1 } on the city_geo collection.",
"username": "steevej"
},
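For completeness, a short mongosh sketch of wiring the pieces above together (untested, like the pipeline itself; it assumes the stage variables defined in the previous post):

```javascript
// Unique index so each address lookup is a single index hit.
db.city_geo.createIndex({ address: 1 }, { unique: true });

// One server-side pass instead of 1.5 million client round trips.
db.customers.aggregate([project_address, lookup, project_geo, merge]);
```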
{
"code": "",
"text": "OMGoodness… My notification didn’t come across. I jjust checked this.\nSteeve, you are the man. That is exactly the push I needed… I’ve just been sitting here with FOTU (fear of the unknown).\nI reverted this to a mongosh js file, but followed the same pattern. Like a charm and it completed in 36 minutes. AND… fantastic call on the unique index. what a difference that made.Thanks.",
"username": "Bill_Fetters"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Upsert lat lng from city-data collection into customer-data collection based on city, state
|
2023-08-14T21:35:34.552Z
|
Upsert lat lng from city-data collection into customer-data collection based on city, state
| 477 |
null |
[
"aggregation",
"crud"
] |
[
{
"code": "analysis.1234.labelings.dblabel1234{\n \"studyID\": \"ABC\",\n \"Condition\": \"Healthy\",\n \"analysis\": {\n \"1234\": { // dynamic key \n \"labelings\": {\n \"leiden\": \"0\",\n \"dblabel\": \"dblabel old\"\n }\n }\n }\n}\n[\n {\n $set: {\n analysis: {\n $objectToArray: \"$analysis\",\n },\n },\n },\n {\n $match: {\n \"analysis.v.labelings.dblabel\":\n \"dblabel old\",\n },\n },\n {\n $set: {\n \"analysis.v.labelings.dblabel\": \"new string\",\n },\n },\n {\n $set: {\n analysis: {\n $arrayToObject: \"$analysis\",\n },\n },\n },\n]\nupdateMany()updateMany()MongoServerError: $match is not allowed to be used within an update",
"text": "Hi\nI would like to update all documents which match a specific string which is under a dynamic path: analysis.1234.labelings.dblabel. The 1234 is unfortunately not fixed. I want to update the string “dblabel old” to a “new string”.I have a document structure like this:I thought I already found a solution using aggregations:But unfortunately, aggregations do not change the DB. Then I need to use updateMany() apparently? But when I put my aggregations into a an updateMany() command I get this error:\nMongoServerError: $match is not allowed to be used within an updateNow I am stuck and unsure how to proceed? Although I almost got what I wanted, I can not use it with a update command. Could you point me in the right direction on how to achieve my goal?\nThanks\nManuel–\nMongoDB Version 4.4.2",
"username": "Manuel_Kohler"
},
{
"code": "{\n $merge: {\n into: 'yourCollectionName',\n on: '_id',\n whenMatched: 'merge',\n whenNotMatched: 'discard'\n }\n}\n",
"text": "Hello, @Manuel_Kohler ! Welcome to the MongoDB community! To persist updated document, you can just add $merge stage at the end of your pipeline:",
"username": "slava"
},
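Putting the two together, a minimal end-to-end sketch in mongosh (the collection name studies is a placeholder):

```javascript
db.studies.aggregate([
  // Turn the dynamic keys under "analysis" into an array of { k, v } pairs.
  { $set: { analysis: { $objectToArray: "$analysis" } } },
  // Keep only documents containing the old label.
  { $match: { "analysis.v.labelings.dblabel": "dblabel old" } },
  // Rewrite the label (applies across the array elements).
  { $set: { "analysis.v.labelings.dblabel": "new string" } },
  // Restore the original object shape.
  { $set: { analysis: { $arrayToObject: "$analysis" } } },
  // Persist the results back over the source documents.
  {
    $merge: {
      into: "studies",
      on: "_id",
      whenMatched: "merge",
      whenNotMatched: "discard",
    },
  },
]);
```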
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Update many with a dynamic key
|
2023-08-24T13:36:14.481Z
|
Update many with a dynamic key
| 372 |
null |
[
"data-api"
] |
[
{
"code": "{\n \"error\": \"user does not belong to app\",\n \"error_code\": \"UserAppDomainMismatch\",\n \"link\": \"https://realm.mongodb.com/groups/.......................\"\n}\n",
"text": "Hi,When i authenticate a user using JWT authentication and use the token received in response, I get the following errorI am unable to find the cause of this error and hence unable to resolve the same.Any help appreciated.",
"username": "Kaustubh_Joshi3"
},
{
"code": "",
"text": "Hi @Kaustubh_Joshi3,When i authenticate a user using JWT authentication and use the token received in responseWhere is the JWT coming from, and how are you using it within the HTTP request? Are you complying with the JWT Authentication for Data API?If you post the exact request (you can remove the signature part of the JWT, but looking at the payload helps), we may give you a better diagnosis.",
"username": "Paolo_Manna"
},
{
"code": " const app = new Realm.App({\n id: \"app-id\",\n });\n \n const credentials = await Realm.Credentials.jwt(response.credential);\n const user = await app.logIn(credentials);\n alert(`Logged in with id: ${user.id}`);\n \nlet headers = new HttpHeaders({ 'Content-Type': 'application/json', 'Authorization': 'Bearer ' + user.accessToken});\nlet headers = new HttpHeaders({ 'Content-Type': 'application/json', 'jwtTokenString': user.accessToken});\n",
"text": "Hi Paolo,Thanks for your response.I tried various ways to connect using REST API initially using Google and then JWT auth.The below scenarios are with Custom JWT Authentication.First, I enabled ‘Custom JWT Authentication’ in ‘Authentication Providers’.Then inside the code,I receive the token from Google and then I use it in the following wayI receive the ‘user’ object.Then, I tried using the ‘user.accessToken’ in my HTTP requests in following way:When I use this way, I get aUserAppDomainMismatch ErrorError:\nuser does not belong to appAuthentication Method:\nBearer AuthenticationIn this case, I get below responseAccess to XMLHttpRequest at ‘https://data.mongodb-api.com/app/data-ioyhu/endpoint/data/v1/action/find’ from origin ‘https://…’ has been blocked by CORS policy: Response to preflight request doesn’t pass access control check: No ‘Access-Control-Allow-Origin’ header is present on the requested resource.In the network access for the time being all requests are allowed.I have one query, How come one request passes through (Bearer Auth - even though it gives an error) while other get blocked with CORS error?Awaiting your advice.With regards,\nKaustubh",
"username": "Kaustubh_Joshi3"
},
{
"code": "jwtTokenString",
"text": "Hi @Kaustubh_Joshi3This should work, so it’s probable there’s a misconfiguration somewhere in your app setup (the audience, perhaps?). Again, if you can provide the whole raw request, or at least the payload of the JWT, and the app ID you’re trying to connect to, I can check whether everything matches. Feel free to use a private message if there are details you don’t want to share in a public forum.This however is wrong: you should use jwtTokenString directly with the JWT you get from the outside service, i.e. the whole login process with Realm is redundant.Let me know how you want to proceed.",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "For all those referring this thread.I shared test payloads with Paolo (from MongoDB) and he helped to understand the cause of the error.I had created a custom app using App Services and was trying to access the database using DataService API.Data APIs are part of an app, and are specific (hence different) for each app.\nAny newly created app doesn’t have any Data API of its own.So even though I was able to sign-in into the new app, I was using the Data API of the default app. i.e. I was signing into one app and using the data api of another.Hence, I was getting an error ‘user does not belong to app’.Thanks to Paolo for the help.",
"username": "Kaustubh_Joshi3"
}
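To illustrate the fix, a hedged sketch of calling the Data API that belongs to the same app the user authenticates against, passing the external JWT directly as described above (the app ID, data source, and database/collection names are placeholders):

```javascript
const token = "<JWT issued by your external provider, e.g. Google>";

const res = await fetch(
  "https://data.mongodb-api.com/app/<your-app-id>/endpoint/data/v1/action/find",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Pass the external JWT directly; no separate Realm login is needed.
      jwtTokenString: token,
    },
    body: JSON.stringify({
      dataSource: "Cluster0",
      database: "mydb",
      collection: "items",
      filter: {},
    }),
  }
);
console.log(await res.json());
```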
] |
Data API using Bearer authentication "user does not belong to app"
|
2023-08-20T13:07:41.164Z
|
Data API using Bearer authentication “user does not belong to app”
| 467 |
null |
[] |
[
{
"code": "",
"text": "Hi,I needed to update my RN(0.64.2) and realm(10.9.1) to the RN(0.72.4) and realm(12.0.0). After that, Realm’s callfunction method stopped working (Class: User).When trying to run I always get the error: [error] App: call_function: XXXXXXX service_name: → 500 ERROR: (BadValue) cannot compare to undefinedCan anybody help me?Thx",
"username": "Bruno_Nobre"
},
{
"code": "",
"text": "I also tried using the @realm/react package as suggested in: https://www.mongodb.com/docs/realm/sdk/react-native/app-services/call-a-function/\nIt gives the same error.",
"username": "Bruno_Nobre"
},
{
"code": "",
"text": "Resolved in Method callfunction stopped work in realm 12.0.0 · Issue #6095 · realm/realm-js · GitHubThanks kneth",
"username": "Bruno_Nobre"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Method callFunction stopped working in realm 12.0.0
|
2023-08-24T00:48:44.349Z
|
Method callFunction stopped working in realm 12.0.0
| 334 |
null |
[] |
[
{
"code": "",
"text": "Hi I wanted to check if vector search powered by Atlas search stores the embeddings in memory or disk?\nAlso do i need to load collection in memory before i want to query it if it stores the data in disk?",
"username": "Piyush_Sinha1"
},
{
"code": "",
"text": "Hi @Piyush_Sinha1 and welcome to MongoDB community forums!!Hi I wanted to check if vector search powered by Atlas search stores the embeddings in memory or disk?Could you help me understand what do you refer by in disk and the in memory?Also do i need to load collection in memory before i want to query it if it stores the data in disk?The following blog post would help you to understand further on how Vector Search works.Also, it would be help if you could elaborate on the statement in more detail?Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hey @Piyush_Sinha1, to clarify a bit, embeddings are stored inside of your documents which are on disk. A Vector Search Index that is built on top of those embeddings are stored/persisted on disk as well, but when you execute a Vector Search query that index is brought into memory. For the best performance you will want to have enough memory free to bring the entire index into memory.",
"username": "Benjamin_Flast"
}
] |
Atlas vector search
|
2023-08-10T12:35:05.009Z
|
Atlas vector search
| 423 |
null |
[
"connecting",
"php"
] |
[
{
"code": "serverSelectionTryOnce",
"text": "Hello community,I hope someone will be able to help me debug some connection errors in our PHP application.Our connection between PHP and MongoDB is working fine most of the time. However, some requests fail with either the following error messages:I have found multiple mentions of these errors online, but they all deal with situations where every requests fails. I haven’t found a reason yet why most requests would work normally, but some fail.Our environment:What can I do to debug this issue?",
"username": "Freek_Vandeursen"
},
{
"code": "",
"text": "did you find a resolution?",
"username": "Dominic_Taylor"
},
{
"code": "",
"text": "Unfortunately not @Dominic_Taylor . Tweaking timeout seemed to reduce it a bit, but I didn’t manage to get rid of those errors completely.",
"username": "Freek_Vandeursen"
},
{
"code": "",
"text": "it will be great finding a solution on this",
"username": "Diego_Chicaiza"
},
{
"code": "",
"text": "Welcome to the MongoDB community forums @Diego_Chicaiza !@Freek_Vandeursen , are you using a self-hosted MongoDB server?I was also wondering if this happens when under heavy load, or just randomly when there’s no reason to run out of available connections/resources that might cause intermittent issues. What’s an approximate ratio of success vs. fail in your opinion?",
"username": "Hubert_Nguyen1"
},
{
"code": "",
"text": "Hello, have you found a solution for this error? The same thing is happening to me. I have PHP version 7.4 and I’m using the mongo driver with version 1.6.1",
"username": "Julian_G"
}
] |
Intermittent connection errors on PHP
|
2021-01-19T10:45:17.999Z
|
Intermittent connection errors on PHP
| 6,440 |
[] |
[
{
"code": "",
"text": "Hi, I am authenticating users using email password login. and in user settings I have enabled custom user data and provided db, collection and userId field for storing it. But when I click add new user on interface and create a user with email and password. It does not automatically store custom user data to given collection I also tried adding userCreationFunction() but that does not get triggered too.\nmongo1100×632 60.7 KB\n",
"username": "Asad"
},
{
"code": "",
"text": "Hi Asad,Thanks for posting and welcome to the community!Enabling custom user data is not designed to automatically create data in the specified collection for you.Please see the section below on creating and managing custom user data.If you need users to be created in this collection when they log in, you can use an Authentication Trigger to create these documents via a function upon first time creation/registration of the user.Regards\nManny",
"username": "Mansoor_Omar"
},
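For reference, a minimal sketch of such a trigger function (the linked data source, database, and collection names are placeholders; wire it to an Authentication Trigger on the CREATE event):

```javascript
// Runs when a new user is created; inserts their custom user data document.
exports = async function (authEvent) {
  const user = authEvent.user;

  const customUserData = context.services
    .get("mongodb-atlas")            // linked data source name
    .db("my_db")
    .collection("custom_user_data");

  await customUserData.insertOne({
    userId: user.id, // must match the User ID field configured in settings
    email: user.data.email,
    createdAt: new Date(),
  });
};
```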
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Atlas App Services User custom data not generated
|
2023-08-23T06:08:35.389Z
|
Atlas App Services User custom data not generated
| 278 |
|
null |
[
"storage"
] |
[
{
"code": "# service mongod status\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Wed 2023-06-28 00:27:41 UTC; 2s ago\n Docs: https://docs.mongodb.org/manual\n Process: 302981 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=1/FAILURE)\n Main PID: 302981 (code=exited, status=1/FAILURE)\n\nJun 28 00:27:41 creazilla-mongo systemd[1]: Started MongoDB Database Server.\nJun 28 00:27:41 creazilla-mongo mongod[302981]: about to fork child process, waiting until server is ready for connections.\nJun 28 00:27:41 creazilla-mongo mongod[302996]: forked process: 302996\nJun 28 00:27:41 creazilla-mongo mongod[302981]: ERROR: child process failed, exited with 1\nJun 28 00:27:41 creazilla-mongo mongod[302981]: To see additional information in this output, start without the \"--fork\" option.\nJun 28 00:27:41 creazilla-mongo systemd[1]: mongod.service: Main process exited, code=exited, status=1/FAILURE\nJun 28 00:27:41 creazilla-mongo systemd[1]: mongod.service: Failed with result 'exit-code'.\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/lib/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n bindIpAll: true\n\n# how the process runs\nprocessManagement:\n fork: true\n timeZoneInfo: /usr/share/zoneinfo\n\nsecurity:\n authorization: enabled\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n",
"text": "I’m using a ubuntu VPS only for MongoDB.\nI recently installed mongodb from official site.\nI changed few configs like adding security etc… and it was working fine.Few days later, today when I made a minor change in config and restarted the service. It suddenly stopped working and even after reverting the change, its still the same.\nDB contains some data that is very previous to me, so i can’t just start over Here’s mongoDB status:My config file:Let me know if anything else is needed.",
"username": "karan_sharma3"
},
{
"code": "Jun 28 00:27:41 creazilla-mongo mongod[302981]: To see additional information in this output, start without the \"--fork\" option.\n/usr/bin/mongod --config /etc/mongod.conf\nfork: false\n",
"text": "Did you trying doing this, starting mongod manually? As shown earlier in the output, the command is:and setin the config",
"username": "Jack_Woehr"
},
{
"code": "/usr/bin/mongod --config /etc/mongod.confsudo -u mongodb /usr/bin/mongod --config /etc/mongod.conffork: true",
"text": "/usr/bin/mongod --config /etc/mongod.confDo this as mongodb though sudo -u mongodb /usr/bin/mongod --config /etc/mongod.confAlso see https://jira.mongodb.org/browse/SERVER-74345 for which the easy resolution is to upgrade to the latest patch release. Or remove fork: true.",
"username": "chris"
},
{
"code": "/usr/bin/mongod --config /etc/mongod.conf\nroot@creazilla-mongo:~# /usr/bin/mongod --config /etc/mongod.conf\nabout to fork child process, waiting until server is ready for connections.\nforked process: 304905\nERROR: child process failed, exited with 14\nTo see additional information in this output, start without the \"--fork\" option.\nforksystemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Wed 2023-06-28 04:02:11 UTC; 2s ago\n Docs: https://docs.mongodb.org/manual\n Process: 304930 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=1/FAILURE)\n Main PID: 304930 (code=exited, status=1/FAILURE)\n\nJun 28 04:02:11 creazilla-mongo systemd[1]: Started MongoDB Database Server.\nJun 28 04:02:11 creazilla-mongo mongod[304930]: {\"t\":{\"$date\":\"2023-06-28T04:02:11.453Z\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20574, \"ctx\":\"-\",\"msg\":\"Error during global initialization\",\"attr\":{\"error\":{\"code\":38,\"codeName\":\"FileNotOpen\",\"errmsg\":\"Can't initialize rotatable log file :: caused by :: Failed to open /var/lib/mongodb/mongod.log\"}}}\nJun 28 04:02:11 creazilla-mongo systemd[1]: mongod.service: Main process exited, code=exited, status=1/FAILURE\nJun 28 04:02:11 creazilla-mongo systemd[1]: mongod.service: Failed with result 'exit-code'.\n",
"text": "Hello, thank you for responding. Its very kind of you and I appreciate it.\nSo here’s what i did based on your response:ran command:output:Clearly, as you stated I need to turn fork off, so after doing that. I re-ran the command.\nThis time i got no output actually.So I tried:I think I restarted the mongodb service as well.output:",
"username": "karan_sharma3"
},
{
"code": "# sudo -u mongodb /usr/bin/mongod --config /etc/mongod.conf\n{\"t\":{\"$date\":\"2023-06-28T04:23:46.785Z\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20574, \"ctx\":\"-\",\"msg\":\"Error during global initialization\",\"attr\":{\"error\":{\"code\":38,\"codeName\":\"FileNotOpen\",\"errmsg\":\"Can't initialize rotatable log file :: caused by :: Failed to open /var/lib/mongodb/mongod.log\"}}}\n",
"text": "Hi, thank you for responding.\nHere’s output to command you suggested:It seems that the issue is with log file permission, even in my previous research on pursuit of solving this issue. I found a thread where people were discussing exactly this.",
"username": "karan_sharma3"
},
{
"code": "Jun 28 04:02:11 creazilla-mongo mongod[304930]: {\"t\":{\"$date\":\"2023-06-28T04:02:11.453Z\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20574, \"ctx\":\"-\",\"msg\":\"Error during global initialization\",\"attr\":{\"error\":{\"code\":38,\"codeName\":\"FileNotOpen\",\"errmsg\":\"Can't initialize rotatable log file :: caused by :: Failed to open /var/lib/mongodb/mongod.log\"}}}\n/var/lib/mongodb/mongod.log/var/log/mongodb/mongod.log",
"text": "Why can’t mongod open /var/lib/mongodb/mongod.logIs that where the logfile should go? Or maybe /var/log/mongodb/mongod.log ? Is there a typo in the config?If so, then if you figure out why it’s not opening the logfile, which seems to be the error which is making mongod quit, then you’ve solved your problem.",
"username": "Jack_Woehr"
},
{
"code": "# systemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Wed 2023-06-28 04:54:12 UTC; 8s ago\n Docs: https://docs.mongodb.org/manual\n Process: 305122 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=14)\n Main PID: 305122 (code=exited, status=14)\n\nJun 28 04:54:11 systemd[1]: Started MongoDB Database Server.\nJun 28 04:54:12 systemd[1]: mongod.service: Main process exited, code=exited, status=14/n/a\nJun 28 04:54:12 systemd[1]: mongod.service: Failed with result 'exit-code'.\n",
"text": "Seems like it, I might have made that changed while playing around with values.Now I’ve updated the value with one you provided and restarted the server. service status looks like this:",
"username": "karan_sharma3"
},
{
"code": "",
"text": "Does mongod have write permissions on the new logpath directory you have given?",
"username": "Ramachandra_Tummala"
},
{
"code": "mongodsudo -u mongodbchown -R mongodb:mongodb /var/lib/mongodb\nchown -R mongodb:mongodb /var/log/mongodb\nchown mongodb:mongodb /tmp/mongo*.sock\n",
"text": "Check the log now.If you ran mongod as root earlier then some files may have been created with root permissions (why I said to run it with sudo -u mongodb)You may have permissions that need to be reset on the data/log dirs and mongo socket.",
"username": "chris"
},
{
"code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n bindIpAll: true\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\nsecurity:\n authorization: enabled\n# systemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Wed 2023-06-28 15:15:41 UTC; 1min 16s ago\n Docs: https://docs.mongodb.org/manual\n Process: 310905 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=14)\n Main PID: 310905 (code=exited, status=14)\n\nJun 28 15:15:40 systemd[1]: Started MongoDB Database Server.\nJun 28 15:15:41 systemd[1]: mongod.service: Main process exited, code=exited, status=14/n/a\nJun 28 15:15:41 systemd[1]: mongod.service: Failed with result 'exit-code'.\n",
"text": "Here’s updated config:Status:",
"username": "karan_sharma3"
},
{
"code": "cat /etc/passwd\nfind / -iname mongodb-27017.lock\nchown mongod?:mongod? /tmp/mongo-27017.lock\n",
"text": "Hi @karan_sharma3,\nWhich is the user created from installation of MongoDB?\nIs mongod or mongodb? Please, cat & paste the output of this file:There is a file named mongodb-27017.lock?\nIt should be place in /tmp or you can find It with the follow command:When you have find It, check the permission of it.\nIf the permission of user and group are different from mongod change the permission in:After all restart the service.Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "/var/log/mongodb/mongod.log",
"text": "The next thing you need to do is look in the log file. /var/log/mongodb/mongod.log",
"username": "chris"
},
{
"code": "{\"t\":{\"$date\":\"2023-06-27T23:58:44.770+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-06-27T23:58:44.776+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-06-27T23:58:44.778+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-06-27T23:58:44.779+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-06-27T23:58:44.793+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-06-27T23:58:44.793+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-06-27T23:58:44.793+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-06-27T23:58:44.793+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-06-27T23:58:44.794+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":302287,\"port\":27018,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"creazilla-mongo\"}}\n{\"t\":{\"$date\":\"2023-06-27T23:58:44.794+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.6\",\"gitVersion\":\"26b4851a412cc8b9b4a18cdb6cd0f9f642e06aa7\",\"openSSLVersion\":\"OpenSSL 1.1.1f 31 Mar 2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-06-27T23:58:44.794+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"20.04\"}}}\n{\"t\":{\"$date\":\"2023-06-27T23:58:44.794+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command 
line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"*\",\"port\":27018},\"processManagement\":{\"fork\":true,\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"security\":{\"authorization\":\"enabled\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2023-06-27T23:58:44.795+00:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27018.sock\",\"error\":\"Operation not permitted\"}}\n{\"t\":{\"$date\":\"2023-06-27T23:58:44.795+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":1126}}\n{\"t\":{\"$date\":\"2023-06-27T23:58:44.795+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.045+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.048+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.048+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.048+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.060+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.060+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.060+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.060+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.060+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":305122,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"creazilla-mongo\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.061+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.6\",\"gitVersion\":\"26b4851a412cc8b9b4a18cdb6cd0f9f642e06aa7\",\"openSSLVersion\":\"OpenSSL 1.1.1f 31 Mar 2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.061+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"20.04\"}}}\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.061+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"*\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"security\":{\"authorization\":\"enabled\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.062+00:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Operation not permitted\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.062+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":1126}}\n{\"t\":{\"$date\":\"2023-06-28T04:54:12.062+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.783+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.784+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire 
specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.786+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.787+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.800+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.800+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.800+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.800+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.800+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":305155,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"creazilla-mongo\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.800+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.6\",\"gitVersion\":\"26b4851a412cc8b9b4a18cdb6cd0f9f642e06aa7\",\"openSSLVersion\":\"OpenSSL 1.1.1f 31 Mar 2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.800+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"20.04\"}}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.800+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"*\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"security\":{\"authorization\":\"enabled\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.801+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/var/lib/mongodb\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.801+00:00\"},\"s\":\"I\", 
\"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-06-28T04:56:30.802+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7303M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:31.385+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":11,\"message\":\"[1687928191:385730][305155:0x7efe804abcc0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __posix_file_lock, 393: /var/lib/mongodb/WiredTiger.lock: handle-lock: fcntl: Resource temporarily unavailable\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:31.386+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":16,\"message\":\"[1687928191:386758][305155:0x7efe804abcc0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __conn_single, 1806: WiredTiger database is already being managed by another process: Device or resource busy\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:31.387+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":11,\"message\":\"[1687928191:387072][305155:0x7efe804abcc0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __posix_file_lock, 393: /var/lib/mongodb/WiredTiger.lock: handle-lock: fcntl: Resource temporarily unavailable\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:31.387+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":16,\"message\":\"[1687928191:387141][305155:0x7efe804abcc0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __conn_single, 1806: WiredTiger database is already being managed by another process: Device or resource busy\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:31.387+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":11,\"message\":\"[1687928191:387352][305155:0x7efe804abcc0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __posix_file_lock, 393: /var/lib/mongodb/WiredTiger.lock: handle-lock: fcntl: Resource temporarily unavailable\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:31.387+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":16,\"message\":\"[1687928191:387412][305155:0x7efe804abcc0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __conn_single, 1806: WiredTiger database is already being managed by another process: Device or resource busy\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:31.387+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22347, \"ctx\":\"initandlisten\",\"msg\":\"Failed to start up WiredTiger under 
any compatibility version. This may be due to an unsupported upgrade or downgrade.\"}\n{\"t\":{\"$date\":\"2023-06-28T04:56:31.387+00:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":28595, \"ctx\":\"initandlisten\",\"msg\":\"Terminating.\",\"attr\":{\"reason\":\"16: Device or resource busy\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:31.387+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":28595,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":708}}\n{\"t\":{\"$date\":\"2023-06-28T04:56:31.387+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.411+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.411+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.412+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.413+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.424+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.424+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.424+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.424+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.426+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":305177,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"creazilla-mongo\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.426+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.6\",\"gitVersion\":\"26b4851a412cc8b9b4a18cdb6cd0f9f642e06aa7\",\"openSSLVersion\":\"OpenSSL 1.1.1f 31 Mar 
2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.426+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"20.04\"}}}\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.426+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"*\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"security\":{\"authorization\":\"enabled\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.426+00:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Operation not permitted\"}}\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.426+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":1126}}\n{\"t\":{\"$date\":\"2023-06-28T04:58:42.426+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.472+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.474+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.474+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.475+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.487+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.487+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.487+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.487+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.487+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":310905,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"creazilla-mongo\"}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.487+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.6\",\"gitVersion\":\"26b4851a412cc8b9b4a18cdb6cd0f9f642e06aa7\",\"openSSLVersion\":\"OpenSSL 1.1.1f 31 Mar 2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.487+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"20.04\"}}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.487+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"*\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"security\":{\"authorization\":\"enabled\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.489+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/var/lib/mongodb\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.489+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mongodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-06-28T15:15:40.489+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7303M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:41.065+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":11,\"message\":\"[1687965341:64720][310905:0x7f8f8e3f6cc0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __posix_file_lock, 393: /var/lib/mongodb/WiredTiger.lock: handle-lock: fcntl: Resource temporarily unavailable\"}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:41.065+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":16,\"message\":\"[1687965341:65062][310905:0x7f8f8e3f6cc0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __conn_single, 1806: WiredTiger database is already being managed by another process: Device or resource busy\"}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:41.065+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":11,\"message\":\"[1687965341:65767][310905:0x7f8f8e3f6cc0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __posix_file_lock, 393: /var/lib/mongodb/WiredTiger.lock: handle-lock: fcntl: Resource temporarily unavailable\"}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:41.066+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":16,\"message\":\"[1687965341:65967][310905:0x7f8f8e3f6cc0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __conn_single, 1806: WiredTiger database is already being managed by another process: Device or resource busy\"}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:41.067+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":11,\"message\":\"[1687965341:66842][310905:0x7f8f8e3f6cc0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __posix_file_lock, 393: /var/lib/mongodb/WiredTiger.lock: handle-lock: fcntl: Resource temporarily unavailable\"}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:41.067+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":16,\"message\":\"[1687965341:67034][310905:0x7f8f8e3f6cc0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __conn_single, 1806: WiredTiger database is already being managed by another process: Device or resource busy\"}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:41.067+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22347, \"ctx\":\"initandlisten\",\"msg\":\"Failed to start up WiredTiger under any compatibility version. 
This may be due to an unsupported upgrade or downgrade.\"}\n{\"t\":{\"$date\":\"2023-06-28T15:15:41.067+00:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":28595, \"ctx\":\"initandlisten\",\"msg\":\"Terminating.\",\"attr\":{\"reason\":\"16: Device or resource busy\"}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:41.067+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":28595,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":708}}\n{\"t\":{\"$date\":\"2023-06-28T15:15:41.067+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n",
"text": "Last 200 lines of log file:",
"username": "karan_sharma3"
},
{
"code": "/etc/passwd# cat /etc/passwd\nroot:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin\nbin:x:2:2:bin:/bin:/usr/sbin/nologin\nsys:x:3:3:sys:/dev:/usr/sbin/nologin\nsync:x:4:65534:sync:/bin:/bin/sync\ngames:x:5:60:games:/usr/games:/usr/sbin/nologin\nman:x:6:12:man:/var/cache/man:/usr/sbin/nologin\nlp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin\nmail:x:8:8:mail:/var/mail:/usr/sbin/nologin\nnews:x:9:9:news:/var/spool/news:/usr/sbin/nologin\nuucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin\nproxy:x:13:13:proxy:/bin:/usr/sbin/nologin\nwww-data:x:33:33:www-data:/var/www:/usr/sbin/nologin\nbackup:x:34:34:backup:/var/backups:/usr/sbin/nologin\nlist:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin\nirc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin\ngnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin\nnobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin\nsystemd-network:x:100:102:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin\nsystemd-resolve:x:101:103:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin\nsystemd-timesync:x:102:104:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin\nmessagebus:x:103:106::/nonexistent:/usr/sbin/nologin\nsyslog:x:104:110::/home/syslog:/usr/sbin/nologin\n_apt:x:105:65534::/nonexistent:/usr/sbin/nologin\ntss:x:106:111:TPM software stack,,,:/var/lib/tpm:/bin/false\nuuidd:x:107:112::/run/uuidd:/usr/sbin/nologin\ntcpdump:x:108:113::/nonexistent:/usr/sbin/nologin\nlandscape:x:109:115::/var/lib/landscape:/usr/sbin/nologin\npollinate:x:110:1::/var/cache/pollinate:/bin/false\nfwupd-refresh:x:111:116:fwupd-refresh user,,,:/run/systemd:/usr/sbin/nologin\nusbmux:x:112:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin\nsshd:x:113:65534::/run/sshd:/usr/sbin/nologin\nsystemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin\nlxd:x:998:100::/var/snap/lxd/common/lxd:/bin/false\nmongodb:x:114:65534::/home/mongodb:/usr/sbin/nologin\n# find / -iname mongodb-27017.sock\n/tmp/mongodb-27017.sock\n# chown mongodb:mongodb /tmp/mongodb-27017.sock\n# systemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Wed 2023-06-28 21:25:57 UTC; 11s ago\n Docs: https://docs.mongodb.org/manual\n Process: 313061 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=14)\n Main PID: 313061 (code=exited, status=14)\n\nJun 28 21:25:56 creazilla-mongo systemd[1]: Started MongoDB Database Server.\nJun 28 21:25:57 creazilla-mongo systemd[1]: mongod.service: Main process exited, code=exited, status=14/n/a\nJun 28 21:25:57 creazilla-mongo systemd[1]: mongod.service: Failed with result 'exit-code'.\n",
"text": "Output of /etc/passwd:And then,Lastly, this command got executed:and then, restarted the server, but still same error!",
"username": "karan_sharma3"
},
{
"code": "",
"text": "Hi @karan_sharma3,\nNo file.sock, but file.lock",
"username": "Fabio_Ramohitaj"
},
{
"code": "{\"t\":{\"$date\":\"2023-06-28T15:15:41.065+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":11,\"message\":\"[1687965341:64720][310905:0x7f8f8e3f6cc0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __posix_file_lock, 393: /var/lib/mongodb/WiredTiger.lock: handle-lock: fcntl: Resource temporarily unavailable\"}} pkill -x mongod",
"text": "{\"t\":{\"$date\":\"2023-06-28T15:15:41.065+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":11,\"message\":\"[1687965341:64720][310905:0x7f8f8e3f6cc0], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __posix_file_lock, 393: /var/lib/mongodb/WiredTiger.lock: handle-lock: fcntl: Resource temporarily unavailable\"}} The starting process can’t get the lock file. This could be permissions or another mongod is running.Check for a running mongod and kill it if there is one pkill -x mongod.I posted earlier about setting the permissions on all the files, you possibly started mongod as root so this should be done.Finally keep checking the log file, now that the log can be written to the error(s) preventing startup will be logged there.",
"username": "chris"
},
{
"code": "find / -iname mongodb-27017.lock\n",
"text": "I’m unable to find lock file in system tho. It’s not present.Output of above command is empty/nothing.Also I tried steps mentioned in solution from askubuntu.com link, didn’t work.",
"username": "karan_sharma3"
},
{
"code": "lockhistory",
"text": "Hey, I think I may have deleted the .lock file myself after seeing some answer in stackoverflow, before starting out here.I searched for lock in output of history command and found this:\nScreenshot 2023-06-29 at 10.03.08 PM728×636 136 KB\nI’m not sure how bad is it, but I definitely want to get over this Is this still solvable and/or data is recoverable.",
"username": "karan_sharma3"
},
{
"code": "@chris @Fabio_Ramohitaj @Ramachandra_Tummala @Jack_Woehr~# history 50\n 379 systemctl status mongod\n 380 /usr/bin/mongod --config /etc/mongod.conf\n 381 systemctl status mongod\n 382 nano /etc/mongod.conf \n 383 systemctl status mongod\n 384 systemctl restart mongod\n 385 systemctl status mongod\n 386 exit\n 387 sudo -u mongodb\n 388 chown -R mongodb:mongodb /var/lib/mongodb\n 389 chown -R mongodb:mongodb /var/log/mongodb\n 390 chown mongodb:mongodb /tmp/mongo*.sock\n 391 systemctl restart mongod\n 392 systemctl status mongod\n 393 nano /etc/mongod.conf \n 394 systemctl status mongod\n 395 ls\n 396 tail -n 200 /var/log/mongodb/mongod.log\n 397 /var/log/mongodb/mongod.log\n 398 tail -n 200 /var/log/mongodb/mongod.log\n 399 tail -n 100 /var/log/mongodb/mongod.log\n 400 find / -iname mongodb-27017.lock\n 401 cat /etc/passwd\n 402 find / -iname mongodb-27017.lock\n 403 ls /tmp/mongodb-27017.sock \n 404 cat /tmp/mongodb-27017.sock \n 405 ls /tmp/mongodb-27017.sock \n 406 find / -iname mongodb-27017.sock\n 407 find / -iname mongodb-27017.lock\n 408 chown mongod?:mongod? /tmp/mongo-27017.lock\n 409 chown mongod:mongod /tmp/mongo-27017.lock\n 410 chown mongodb:mongodb /tmp/mongo-27017.lock\n 411 chown mongodb:mongodb /tmp/mongodb-27017.sock\n 412 systemctl restart mongod\n 413 systemctl status mongod\n 414 ls migration/\n 415 sudo chown -R mongodb:mongodb /var/lib/mongodb\n 416 sudo chown mongodb:mongodb /tmp/mongodb-27017.sock\n 417 sudo service mongod restart\n 418 sudo service mongod status\n 419 sudo chown -R mongodb:mongodb /var/lib/mongodb\n 420 sudo chown mongodb:mongodb /tmp/mongodb-27017.sock\n 421 sudo service mongod restart\n 422 systemctl restart mongod\n 423 systemctl status mongod\n 424 pwd\n 425 whoami\n 426 service mongod status\n 427 history -n 50\n 428 history 50\n",
"text": "Heyyyy guys,\nIDK what fixed this but its working fine now.I would like to thank this community for helping me out and responding to this thread \n@chris @Fabio_Ramohitaj @Ramachandra_Tummala @Jack_Woehr Thank you so much Can’t mention more than 2 people as new member.Here’s the last 50 commands that I ran:",
"username": "karan_sharma3"
},
{
"code": "",
"text": "I’ll @mention them for you: @Jack_Woehr @Ramachandra_Tummala @Fabio_RamohitajThank you for a good initial post that was well formatted, it matters!",
"username": "chris"
}
] |
My mongodb server failing to start/restart
|
2023-06-28T00:39:21.077Z
|
My mongodb server failing to start/restart
| 2,461 |
null |
[] |
[
{
"code": "",
"text": "Can you share any information about when you expect M0/M2/M5 shared clusters to be upgraded to MongoDB v7?Thanks ",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "Hi @Alex_Bjorlig,Similarly to another post regarding MongoDB version 6 on free/shared tier cluster, MongoDB doesn’t communicate any exact future timelines for version 7 for shared tier clusters as things are subject to change.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Ok, what about an approximate timeline then?",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "Hi Alex,Ok, what about an approximate timeline then?We typically make the newest version of MongoDB available in the shared tier within two quarters of the GA. However, this is just an estimate and is of course subject to change.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Is there a timeline for upgrading M0,M2,M5 to MongoDB 7.0?
|
2023-08-23T07:57:50.001Z
|
Is there a timeline for upgrading M0,M2,M5 to MongoDB 7.0?
| 387 |
null |
[
"python",
"atlas",
"motor-driver"
] |
[
{
"code": "pymongo.errors.ConfigurationError: The \"dnspython\" module must be installed to use mongodb+srv:// URIs. To fix this error install pymongo with the srv extra: /path/to/virtualenvs/PK76f_xS/bin/python -m pip install \"pymongo[srv]\"",
"text": "I’m trying to work with the Motor driver with Asyncio in connecting to Atlas. When I follow the official guide and use my Atlas connection string I get the following error:pymongo.errors.ConfigurationError: The \"dnspython\" module must be installed to use mongodb+srv:// URIs. To fix this error install pymongo with the srv extra: /path/to/virtualenvs/PK76f_xS/bin/python -m pip install \"pymongo[srv]\"I don’t have PyMongo installed because I’m using Motor so I’m confused as to why I need to install it to use Motor. Am I doing something wrong or is this error and subsequent installation required to work with Motor and Asyncio?Thanks.",
"username": "Ian"
},
{
"code": "motordnspythondnspythonmongodb+srv://python -m pip install dnspython",
"text": "Welcome to the MongoDB Community @Ian!PyMongo is a dependency of Motor, and will automatically be installed with motor.PyMongo has several optional dependencies for features like different authentication, connection, and compression methods. dnspython is currently not installed by default.Per the error message, the dnspython module must be installed in order to use mongodb+srv:// URIs. You can use the provided syntax or install using: python -m pip install dnspython.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks @Stennie_X,I didn’t realise PyMongo was a dependency of Motor.Are there any good examples of working with Motor? The guide shows a single function to connection and then leaves you on your own. Other tutorials I’ve seen like this one, suggest dropping the connection details in your code and presumably re-using the connection on every request?The official docs seem to suggest running the event loop on every function call?I’m a little confused as to how to set things up. Are you able to clarify or provide a good example codebase I can look through?Thanks again.",
"username": "Ian"
},
{
"code": "await>>> loop = asyncio.get_event_loop()\n>>> loop.run_until_complete(do_find())\nasyncio.run()AsyncIOMotorClientimport asyncio\nimport motor.motor_asyncio\n\nasync def main():\n client = motor.motor_asyncio.AsyncIOMotorClient('mongodb://localhost:27017')\n coll = client.test.test\n await coll.insert_one({'hello': 'world'})\n print(await coll.find_one())\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n",
"text": "Hi @ian,Since older versions of Python don’t support top-level await calls, so the Motor tutorial in the documentation uses:In general you just need to run your asyncio app like normal via asyncio.run() and start using the AsyncIOMotorClient.Here is a concise example app (thanks to @Shane on the Python driver team):If you are looking for more comprehensive Python examples, check out the MongoDB Developer Hub (Python).Here are some articles (with associated GitHub repos) that may be of interest if you want to use Motor with different web frameworks:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "loopasyncio.run()database.py",
"text": "Super helpful, thanks @Stennie_XSo, just for clarity, if I’m using newer version of Python (3.6+ ?) then I don’t need to set the loop as in the documented example but can just run with asyncio.run()?And from your concise example it looks like the database connection code can be extracted to a database.py file and imported, the same as with PyMongo connection code? This same connection would be shared among all requests to the database?I just wanted to confirm that because I came across another article which said the opposite:Unlike PyMongo you can’t declare the mongoDB initialisation in the top. Here you have to initialise on every function that want to access the mongoDB.",
"username": "Ian"
},
{
"code": "async def main():\n async with client:\n Motorclient = motor.motor_asyncio.AsyncIOMotorClient(\"mongodb+srv://tmmadmin:[email protected]/?retryWrites=true&w=majority\")\n client.Tmm_db = Motorclient.Tmm_data\n client.Tmm_userData = Tmm_db.Tmm_userData\n\n print('trying')\n user_info = await client.Tmm_userData.find_one({'dev_id': 1074830077260476466})\n print(user_info)\n\nloop = asyncio.get_event_loop()\nloop.run_until_complete(main())\n",
"text": "Hi there, anyway on how to utilize this library? I have been struggling for a few hours now to get my asyncio loop working with my database. My very basic setup:This however, keeps returning ServerSelectionTimeoutError",
"username": "Pogo_Digitalism"
}
] |
Connecting to MongoDB Atlas using Motor
|
2022-03-03T09:43:38.415Z
|
Connecting to MongoDB Atlas using Motor
| 6,375 |
null |
[
"queries",
"node-js"
] |
[
{
"code": "[\n _id: '63410263611986740717e152'\nusers: [{\n_id: '62211243611986740717e152'\nname: John\n},\n{\n_id: '61111243611986740717e152'\nname: Jane\n},\n]\n]\n",
"text": "Hi, I’m working on a project where I need to conditionally add an ! to the end of the users name if its _id matches the one given. I think I should be able to do this using $concat, but I’ve tried playing around with this in mongo playground, but I can’t seem to get it to work. I would really appreciate any help or advice. Thank you!https://mongoplayground.net/p/WitBRRL3NlcNote: My document looks like this:And I need to add ! to the end of the name if the users id is equal to the one given",
"username": "Geenzie"
},
{
"code": "12\"John\"\"John!\"db.collection.update({\n _id: 1\n},\n[\n {\n $set: {\n \"users\": {\n $map: {\n input: \"$users\",\n as: \"users\",\n in: {\n $cond: {\n if: {\n $eq: [\n \"$$users._id\",\n 12\n ]\n },\n then: {\n \"_id\": \"$$users._id\",\n \"name\": {\n $concat: [\n \"$$users.name\",\n \"!\"\n ]\n }\n },\n else: {\n \"_id\": \"$$users._id\",\n \"name\": \"$$users.name\"\n }\n }\n }\n }\n }\n }\n }\n])\n_idname$map",
"text": "Hello @Geenzie,Hi, I’m working on a project where I need to conditionally add an ! to the end of the users name if its _id matches the one given.Based off the playground link, I assume the provided value in this case is 12 in which you want the name \"John\" to turn into \"John!\" - However, please correct me if I am wrong here.Would the following work for you? I only tested it on the playground sample documents:Ref: Playground linkWhen the name did not match the provided value, I set the _id and name values to what they already were / are.Please test thoroughly to ensure this suits all your use case and requirements as I have only tried this on the playground sample documents.For reference, I utilised $map in this example.Hope the above helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "$set_id$push$pull",
"text": "Hi @Jason_Tran,Thank you for your response! This worked. I have a question though. Is there a way to conditionally $set/ change where only the user whose _id matches data would be updated? I’m pretty new to this, but it seems like this could be an expensive process. Would a simple $push and $pull be less energy/ performance intensive?",
"username": "Geenzie"
}
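Editor’s note: the follow-up above asks about updating only the matching array element. A minimal mongosh sketch of the filtered-positional alternative; the document _id (1) and user _id (12) are taken from the playground example in this thread. Unlike the pipeline answer above, this form can only write a literal value, since classic update operators cannot read the element’s current name:

```js
db.collection.updateOne(
  { _id: 1 },
  // rewrites only the array elements matched by the filter below
  { $set: { "users.$[elem].name": "John!" } },
  { arrayFilters: [{ "elem._id": 12 }] }
)
```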
] |
Conditionally add to the end of a string
|
2023-08-21T00:40:23.000Z
|
Conditionally add to the end of a string
| 283 |
null |
[
"aggregation"
] |
[
{
"code": "const collection = db.collection('table_2').aggregate([\n {\n $lookup: {\n from: \"table_1\",\n let: {\n id_table_3: \"$_id\"\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq: [\n \"$$id_table_3\",\n \"$id_table_3\"\n ]\n }\n }\n },\n {\n $lookup: {\n from: \"table_3\",\n let: {\n table_1_id: \"$_id\"\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq: [\n \"$$table_1_id\",\n \"$table_1_id\"\n ]\n }\n }\n }, \n as: \"dot_resultado_01\"\n }\n }\n ],\n as: \"orc_resultado_02\"\n },\n \n },\n { \"$group\": {\n \"_id\": { campo:\"$_id\", somar: \"$orc_resultado_02.dot_resultado_01.tb_3_valor\",}\n }},\n\n]).toArray();\n[],\n[],\n[]\n\nresult: \n[\n\t{\n\t\t\"_id\": {\n\t\t\t\"campo\": \"5545454545454545\",\n\t\t\t\"somar\": [\n\t\t\t\t[\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$numberDecimal\": \"555\"\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t]\n\t\t}\n\t},\n\t{\n\t\t\"_id\": {\n\t\t\t\"campo\": \"54555454545454\",\n\t\t\t\"somar\": []\n\t\t}\n\t},\n\t{\n\t\t\"_id\": {\n\t\t\t\"campo\": \"887854545454\",\n\t\t\t\"somar\": [\n\t\t\t\t[],\n\t\t\t\t[\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$numberDecimal\": \"223.00\"\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t[],\n\t\t\t\t[],\n\t\t\t\t[]\n\t\t\t]\n\t\t}\n\t}\n]\n",
"text": "Hi,I would like your help!table_1\n_id\nid_table_3table_2\t\n_id\nid_table_1\n– subquery - sum total tb_3_valortable_3\n_id\ntb_3_valorI would like to remove these lines from the array",
"username": "Erysson_Barros"
},
{
"code": "db.employees.insertMany([\n {\n _id: 'E1',\n name: 'Urko',\n },\n {\n _id: 'E2',\n name: 'Sashko',\n },\n {\n _id: 'E3',\n name: 'Ilko',\n },\n]); \ndb.workdays.insertMany([\n {\n date: ISODate('2023-08-21'),\n employeeId: 'E1',\n usdEarned: 380,\n },\n {\n date: ISODate('2023-08-22'),\n employeeId: 'E1',\n usdEarned: 430,\n },\n {\n date: ISODate('2023-08-21'),\n employeeId: 'E2',\n usdEarned: 450,\n },\n {\n date: ISODate('2023-08-22'),\n employeeId: 'E2',\n usdEarned: 0,\n },\n]);\ndb.overtime.insertMany([\n {\n date: ISODate('2023-08-20'),\n employeeId: 'E1',\n usdEarned: 120,\n },\n]);\n// Solution 1 (with pipeline in $lookup)\ndb.employees.aggregate([\n {\n $lookup: {\n from: 'workdays',\n let: {\n employeeId: '$_id',\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq: ['$employeeId', '$$employeeId']\n }\n }\n },\n {\n $group: {\n _id: null,\n total: {\n $sum: '$usdEarned'\n }\n }\n }\n ],\n as: 'earnings'\n }\n },\n {\n $lookup: {\n from: 'overtime',\n let: {\n employeeId: '$_id',\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq: ['$employeeId', '$$employeeId']\n }\n }\n },\n {\n $group: {\n _id: null,\n total: {\n $sum: '$usdEarned'\n }\n }\n }\n ],\n as: 'overtime'\n }\n },\n {\n $unwind: {\n path: '$earnings',\n preserveNullAndEmptyArrays: true,\n }\n },\n {\n $unwind: {\n path: '$overtime',\n preserveNullAndEmptyArrays: true,\n }\n },\n {\n $project: {\n name: '$name',\n totalUsdEarned: {\n $sum: ['$earnings.total', '$overtime.total']\n }\n }\n }\n]);\n// Solution 2: With sequential $lookup + $group + $unwind stages\ndb.employees.aggregate([\n {\n $lookup: {\n from: 'workdays',\n localField: '_id',\n foreignField: 'employeeId',\n as: 'earnings'\n }\n },\n {\n $unwind: {\n path: '$earnings',\n preserveNullAndEmptyArrays: true,\n }\n },\n {\n $group: {\n _id: {\n _id: '$_id',\n name: '$name',\n },\n totalUsdEarned: {\n $sum: '$earnings.usdEarned'\n }\n }\n },\n {\n $lookup: {\n from: 'overtime',\n localField: '_id._id',\n foreignField: 'employeeId',\n as: 'overtime'\n }\n },\n {\n $unwind: {\n path: '$overtime',\n preserveNullAndEmptyArrays: true,\n }\n },\n {\n $group: {\n _id: {\n name: '$_id.name',\n totalUsdEarned: '$totalUsdEarned',\n },\n totalUsdFromOvertime: {\n $sum: '$overtime.usdEarned'\n }\n }\n },\n {\n $project: {\n _id: false,\n name: '$_id.name',\n totalUsdEarned: {\n $sum: ['$_id.totalUsdEarned','$totalUsdFromOvertime']\n }\n }\n }\n]);\n[\n { name: 'Urko', totalEarned: 810 },\n { name: 'Ilko', totalEarned: 0 },\n { name: 'Sashko', totalEarned: 450 }\n]\n",
"text": "Hello, @Erysson_Barros ! Welcome to the MongoDB community! Since you veiled your collection structure and did not provide sample documents from your collections, I will make up my own simple test dataset:Solution 1 (with pipeline in $lookup)Solution 2: With sequential $lookup + $group + $unwind stagesOutput is the same for both aggregations:Feel free to adapt before usage any of those approaches whichever you feel is more convenient or efficient in your specific use case ",
"username": "slava"
},
{
"code": "",
"text": "hi @slava\nthank you very much! it was a big help! ",
"username": "Erysson_Barros"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Subquery with total
|
2023-08-22T15:05:48.316Z
|
Subquery with total
| 291 |
null |
[] |
[
{
"code": "",
"text": "Hello guys and girls,i am a bit of a beginner in mongodb and very insecure, how to design the schemas for my webapp (MERN-Stacks and hosted on AWS-EC2), so i would appreciate a bit feedback, if i am doing some very stupid beginner mistakes.I want to build a social media plattform, where everyone can post texts and other can rate it and write comments.The user should also be able to look up all their own ratings and comments in their profile(so only to put the ratings and comments in the documents of the related text seems to me wrong, otherwise i would have to „scan“ every text document, to find, where the user posted a comment or rated).But also to duplicate all the content to put the comments and ratings into the userdocument AND the textdocument seems also be wrong (because you should avoid duplicate data, because of storage and maintain reasons).So my design would be to have 4 different collections (Users, Texts, Ratings, Comments).If now someones open a text, i would grab every comment and rating, that have a ref from the requested text.If then some user looks up his own comments or ratings i would grab every comment or rating, that have a ref to from the user.Is this fine? Or would it need to much processor power? For me it seems a bit „strange“ to look everytime, every comment or rating! But i heard, that the mongodb is very fast to look up complete collections (even big ones) and only cost very low processor power. I am bit of a nooby.I am also thinking about using a hybrid-approach and to add the first 10 comments also in the related text document and add the average rating (and only update the last one every 3 hours). Is this necessary for 1000 or even 10k users?The next but small question is: I want to show the name author of the text, but with the ref-command of mongodb i am only having the user_id of author (So, then I would have to search all users and assign the particular user_id to the authorname or username, everytime i display a text). Would it make sense also to add the authorname to the textdocument and also only update it manually in the backend every 12 hours? Or i am overestimating the necessary processor power again?Thanks for your feedback and advices.",
"username": "Sun_23"
},
{
"code": " // Comment with embedded user data\n {\n text: String,\n user: {\n _id: ObjectId,\n name: String,\n }\n };\n",
"text": "Hello, @Sun_23 ! Welcome to the community! Data model can greatly be affected by the frequency of writes and the way you get necessary data for your application and many other factors, so it is not easy to provide some strict rules for data modelling for your specific case.Although, I can give you some advices:",
"username": "slava"
}
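Editor’s note: a minimal mongosh sketch of the hybrid (subset-pattern) approach the question describes, with hypothetical collection and field names. Every comment lives in its own collection, while the parent text document keeps only the 10 most recent comments embedded:

```js
const textId = ObjectId();  // placeholder ids for the sketch
const userId = ObjectId();

const comment = {
  textId: textId,                         // reference to the parent text
  author: { _id: userId, name: 'John' },  // extended reference: copy the display name
  body: 'Nice post!',
  createdAt: new Date()
};

// the full comment history lives in its own collection
db.comments.insertOne(comment);

// $each + $slice caps the embedded array at the last 10 comments,
// so the parent document cannot grow unbounded
db.texts.updateOne(
  { _id: textId },
  { $push: { recentComments: { $each: [comment], $slice: -10 } } }
);
```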
] |
Feedback to schema-structure
|
2023-08-21T18:26:07.242Z
|
Feedback to schema-structure
| 260 |
null |
[
"islamabad-mug"
] |
[
{
"code": "",
"text": "Hi Eveyone , Hamza Shabbir here. I am a Backend Developer working a YC Based Startup in Islamabad, Pakistan We Recenlty Started using MongoDB at our company and I was tasked with moving our Core APIs to MongoDB so getting along great with that.I am a Technical Community Builder and have worked in AWS Community in Pakistan and extremely excited to launch MUG Islamabad and bring together the Tech Community and Students to foster more lerning and growth oppurtunities.I’d be extremly excited and It would be an honour if you could join MUG Islamabad and follow our journey You can connect with me on LinkedinThank you everyone and MongoDB for the Support in Launching our MUG ",
"username": "Hamza_Shabbir1"
},
{
"code": "",
"text": "Welcome to the community @Hamza_Shabbir1,\nWe hope you’re enjoying it as well as finding it simpler and easier to work with MongoDB!Your initiative to launch MUG Islamabad is valuable. I hope you’ll find the support you need from the community here on the forums.",
"username": "Harshit"
},
{
"code": "",
"text": "Aloha @Hamza_Shabbir1 and welcome to the MongoDb Community! I look forward to working with you!",
"username": "Karissa_Fuller"
}
] |
Hamza Shabbir - MUG Leader Intro
|
2023-08-13T16:19:14.118Z
|
Hamza Shabbir - MUG Leader Intro
| 580 |
null |
[
"dot-net"
] |
[
{
"code": " [MapTo(\"photos\")]\n [Realms.Preserve]\n [WovenProperty]\n public IList<ShowroomPhoto> Photos\n {\n get\n {\n if (this.\\u003CPhotos\\u003Ek__BackingField == null)\n this.\\u003CPhotos\\u003Ek__BackingField = this.GetListValue<ShowroomPhoto>(\"photos\");\n return this.\\u003CPhotos\\u003Ek__BackingField;\n }\n }\n",
"text": "In our .NET app (Xamarin iOS + Android), we are getting a notification about a change to the parent object when only a single child object has changed. The relationship between parent and child is one-to-many. We’re looking for a way to avoid getting those notifications. The parent object contains a List of child objects like this - see snippet below. When a ShowroomPhoto changes, we get a notification about the changed parent object. Both parent and child objects subclass RealmObject.After some research, it looks like keypath filtering might be one way to solve the issue for us. However, this feature seems to be only available in your Swift SDK, but not in the .NET SDK that we are using:Add support for keypath filtering for notifications · Issue #1398 · realm/realm-dotnet (github.com)",
"username": "Philip_Hadley1"
},
{
"code": "",
"text": "How are you subscribing to notifications? If you have a minimal repro project that you can share, that’ll be great!",
"username": "nirinchev"
},
{
"code": "var queryableShowroomVisits = userDataSource.All<ShowroomVisit>().Where(x => x.MarketId == brand.MarketId && x.ExhibitorCoreBrandId == brand.ExhibitorCoreBrandId).OrderByDescending(x => x.Timestamp);\nvar subscription = queryableShowroomVisits.SubscribeForNotifications( updatedValues, changes, error =>\n{\nDebug.WriteLine(\"got a notification\");\n});\n",
"text": "Providing example code will take a bit of effort, but we’re not doing anything unusual on the front-end.\nThe parent object is ShowroomVisit, so we query for those:",
"username": "Philip_Hadley1"
},
{
"code": "SubscribeForNotificationsShowroomVisit",
"text": "Okay, so this is expected when using the SubscribeForNotifications API. Indeed, keypath filtering would allow you to customize the behavior, but there are no immediate plans to work on that for the .NET SDK. Are you interested in changes only in the collection or also in the objects inside the collection? I.e. do you care that ShowroomVisit changed if the collection itself was unchanged?",
"username": "nirinchev"
},
{
"code": "ShowroomVisit",
"text": "Are you interested in changes only in the collection or also in the objects inside the collection? I.e. do you care that ShowroomVisit changed if the collection itself was unchanged?Sorry, I don’t understand your question… We only want to receive a notification when:The only time we want to receive a notification about child collection objects is when a child object is added/removed to/from the parent. This would be my expectation for default behavior because the parent internally stores a collection of child ID’s, right?",
"username": "Philip_Hadley1"
},
{
"code": "CollectionChangedSubscribeForNotifications",
"text": "I see. Unfortunately, this is not something that is currently supported by the .NET SDK. I was asking about what information you need from the change event because the CollectionChanged API is an alternative to the SubscribeForNotifications one and the former only notifies for changes of the collection and not the objects themselves, but it appears that won’t work for you.I understand your point about the default being listening for top-level property modifications only - it is intuitive to an extend, but it’s different from the feedback we’ve received early on when developing the product. The main driver for the notifications feature was displaying items in a collection view and updating the UI in response to data changes. Each cell in the collection view might be displaying arbitrary graph of references from the top level object (e.g. a listview showing schools might want to display the address or the first/last name of the dean), which is why early adopters of the database were asking for deep change tracking.Unfortunately, we don’t plan to change the default and as I said there are no immediate plans to add support for keypath filtering. If you’re using Atlas Device Sync and this feature is critical for going into production with your app, you should reach out to your account executive and provide them with timeframes for when you need it and more information about the use case. They will then work with product and engineering to either find a workaround or reorder the backlog to prioritize keypath filtering support higher.",
"username": "nirinchev"
},
{
"code": "class Dog {\n primaryKey: 1234\n}\n\nclass Person {\n primaryKey: abcd\n dogList: 1234, 5678 etc\n}\nclass Dog {\n primaryKey: 1234\n personKey: abcd\n}\n",
"text": "A couple of possibilities:Because the nature of Realm is to fire events when properties change, or when related object properties change, you could get around having events fire by not relating objects (by reference) but instead just store their primaryKey. It’s very un-relational but can work.Another option is keep the Persons primary key as a property of the Dog classthen you can add observers Dog and when an event Fires due to a change you’ll know what person that Dog belongs to.",
"username": "Jay"
},
{
"code": "",
"text": "Thanks for the info, guys. Please let the product manager know that we would very much like to have keypath filtering, just like the Swift SDK !I think this is the backlog item: Add support for keypath filtering for notifications · Issue #1398 · realm/realm-dotnet (github.com)Can you explain what state this ticket is in? It is marked as “Open”, but also: “Merged” and “Blocked”. ???",
"username": "Philip_Hadley1"
},
{
"code": "",
"text": "It’s open. The linked PR in the Cocoa SDK is merged and the blocked label must have applied inadvertently.",
"username": "nirinchev"
},
{
"code": "",
"text": "That’s great news! Is there a timeline/schedule for release of this feature in the .NET SDK ? ",
"username": "Philip_Hadley1"
},
{
"code": "",
"text": "My alternative is to write client-side code to filter subscriptions, so I have to ask again: Is there a timeline to release keypath filtering for the .NET SDK?",
"username": "Philip_Hadley1"
},
{
"code": "",
"text": "@Philip_Hadley1This is a super good request and really should be on the radar - perhaps adding a note onto the git linked above would be a good idea.However, based on what’s shown on that git as well as what was said above, I don’t think there is a timeline so probably best to search for alternatives for now.there are no immediate plans to add support for keypath filtering. If you’re using Atlas Device Sync and this feature is critical for going into production with your app, you should reach out to your account executive and provide them with timeframes for when you need it and more information about the use case.",
"username": "Jay"
}
] |
Unwanted notifications about changed objects in child collection
|
2023-08-18T20:33:43.214Z
|
Unwanted notifications about changed objects in child collection
| 987 |
null |
[
"indexes"
] |
[
{
"code": "",
"text": "I have a collection called ‘articles’ with the following fields (among others):I currently already have these indexes:\n{ v: 2, key: { authorIds: 1, published: 1 }, name: ‘authorIds_1_published_1’ }\n{ v: 2, key: { sourceId: 1, published: 1 }, name: ‘sourceId_1_published_1’}But now the need arises for another compound index:\n{ v: 2, key: { serialId: 1, published: 1 }, name: ‘serialId_1_published_1’}I am wondering at this point if its more efficient to just have an index on the published, sourceId, serialId and authorIds seperately instead of having 3 compound indexes?",
"username": "Stefan_Bruins"
},
{
"code": "",
"text": "I am wondering at this point if its more efficient to just have an index on the published, sourceId, serialId and authorIds seperately instead of having 3 compound indexes?Nobody can know for sure what is the best for your data size, traffic and use-cases. Only you can determine the best performing indexes based on your query patterns. It simply depends on too many factors. For example, if for a given serialId you only have 1 published value, have published in the index is kind of useless for performance, but the size penalty will be so small that you might as well have it especially if you query on serialId equality and project on published. Not that documents need to be fetched if a queried,sorted or projected field is not in the index; so if you have really big documents you would want to avoid fetching.",
"username": "steevej"
}
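Editor’s note: a sketch of how to verify the trade-off discussed above empirically. The field names come from the question; the query values are made up:

```js
// one compound index per query pattern keeps each query covered by a single index
db.articles.createIndex({ serialId: 1, published: 1 });

// compare candidate indexes by inspecting the winning plan and the
// totalKeysExamined / totalDocsExamined counters for a representative query
db.articles.find({ serialId: 42, published: true }).explain("executionStats");
```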
] |
Separate indexes versus compound indexes
|
2023-08-23T07:03:39.768Z
|
Separate indexes versus compound indexes
| 344 |
null |
[
"aggregation",
"queries",
"crud",
"transactions"
] |
[
{
"code": "db.places.insertMany([\n { userid: \"a\", \"location\": \"Bishan\"},\n { userid: \"b\", \"location\": \"Bukit Timah\"},\n { userid: \"c\", \"location\": \"Ang Mo Kio\"},\n { userid: \"d\", \"location\": \"Segar\"},\n { userid: \"e\", \"location\": \"Fajar\"},\n { userid: \"f\", \"location\": \"dover\" },\n { userid: \"g\", \"location\": \"Buona Vista\"},\n { userid: \"h\", \"location\": \"Marina Bay\"},\n { userid: \"i\", \"location\": \"Rocher\"},\n { userid: \"j\", \"location\": \"down town\"},\n { userid: \"k\", \"location\": \"Jurong\"},\n { userid: \"l\", \"location\": \"Pungol\"},\n { userid: \"m\", \"location\": \"One North\"},\n { userid: \"n\", \"location\": \"Cho Chu Kang\"},\n { userid: \"o\", \"location\": \"Yishun\"}\n]);\nvar counter = 0;\nvar batch = 1;\nvar threshold = 5;\nvar mapFunction = function() {\n var key = this.userid;\n \n if(counter >= threshold){\n batch = batch+1;\n counter = 0;\n }\n counter = counter +1;\n var value = { location: this.location, batch: batch };\n emit( key, value );\n};\nvar reduceFunction = function(key, value) {\n};\ndb.places.mapReduce(\n mapFunction,\n reduceFunction,\n {\n out: \"places_RV\",\n scope: {\n batch : batch,\n counter: counter,\n threshold : threshold\n }\n }\n)\nMongoDB Enterprise > db.places_RV.find().sort( { _id: 1 } )\n{ \"_id\" : \"a\", \"value\" : { \"location\" : \"Bishan\", \"batch\" : 1 } }\n{ \"_id\" : \"b\", \"value\" : { \"location\" : \"Bukit Timah\", \"batch\" : 1 } }\n{ \"_id\" : \"c\", \"value\" : { \"location\" : \"Ang Mo Kio\", \"batch\" : 1 } }\n{ \"_id\" : \"d\", \"value\" : { \"location\" : \"Segar\", \"batch\" : 1 } }\n{ \"_id\" : \"e\", \"value\" : { \"location\" : \"Fajar\", \"batch\" : 1 } }\n{ \"_id\" : \"f\", \"value\" : { \"location\" : \"dover\", \"batch\" : 2 } }\n{ \"_id\" : \"g\", \"value\" : { \"location\" : \"Buona Vista\", \"batch\" : 2 } }\n{ \"_id\" : \"h\", \"value\" : { \"location\" : \"Marina Bay\", \"batch\" : 2 } }\n{ \"_id\" : \"i\", \"value\" : { \"location\" : \"Rocher\", \"batch\" : 2 } }\n{ \"_id\" : \"j\", \"value\" : { \"location\" : \"down town\", \"batch\" : 2 } }\n{ \"_id\" : \"k\", \"value\" : { \"location\" : \"Jurong\", \"batch\" : 3 } }\n{ \"_id\" : \"l\", \"value\" : { \"location\" : \"Pungol\", \"batch\" : 3 } }\n{ \"_id\" : \"m\", \"value\" : { \"location\" : \"One North\", \"batch\" : 3 } }\n{ \"_id\" : \"n\", \"value\" : { \"location\" : \"Cho Chu Kang\", \"batch\" : 3 } }\n{ \"_id\" : \"o\", \"value\" : { \"location\" : \"Yishun\", \"batch\" : 3 } }\n",
"text": "Hello team,How can I implement the below logic in mongo 4.4 with $function, $expr, $where. Is there an option to define global variable to be used across multiple records in 4.4 version. Below is the example I tried with MR in 4.2 version.Since MR will be deprecated starting with v5.0. I’m trying to migrate the below logic as java script function in v4.4. I couldn’t find options with mongo aggregate to implement the below use case without adding transactions to an array. But the problem with this idea is it could run into 16 MB limitation if the volume of participating transactions is high.The below MR picks all records in a collection groups and classifies the records into multiple batches. i.e., “5” in each batch.Global variables:Map Function:Reduce Function:MRRegards,\nRama",
"username": "Laks"
},
{
"code": "// Solution 1. $accumulator inside $group\ndb.places.aggregate([\n {\n $group: {\n _id: null,\n result: {\n $accumulator: {\n init: function() {\n return { i: 0, batchN: 0, batches: [] };\n },\n accumulateArgs: [{\n userid: '$userid',\n location: '$location',\n }],\n accumulate: function(state, arg) {\n const maxBatchSize = 5; // max documents per batch constant\n\n const currentI = state.i + 1;\n const currentBatchN = Math.ceil(currentI / maxBatchSize);\n \n const currentBatches = state.batches.concat({\n userid: arg.userid,\n value: {\n location: arg.location,\n batch: currentBatchN\n }\n });\n \n return {\n i: currentI,\n batchN: currentBatchN,\n batches: currentBatches\n }\n },\n merge: function(state1, state2) {\n // return empty object, because we do not merge objects\n return {};\n },\n lang: 'js',\n }\n }\n },\n },\n {\n $unwind: '$result.batches',\n },\n {\n $project: {\n _id: '$result.batches.userid',\n value: '$result.batches.value',\n }\n }\n]);\n// Solution 2. $group and then $reduce\ndb.places.aggregate([\n {\n $group: {\n _id: null,\n batchesWithoutNo: {\n $push: {\n userid: '$userid',\n location: '$location'\n }\n }\n },\n },\n {\n $project: {\n result: {\n $reduce: {\n input: '$batchesWithoutNo',\n initialValue: {\n maxBatchSize: 5, // max documents per batch constant\n i: 0,\n batches: [],\n },\n in: {\n maxBatchSize: '$$value.maxBatchSize',\n i: {\n $add: ['$$value.i', 1],\n },\n batchN: {\n $ceil: {\n $divide: [\n { $add: ['$$value.i', 1 ]},\n '$$value.maxBatchSize'\n ]\n }\n },\n batches: {\n $concatArrays: [\n '$$value.batches',\n [\n {\n userid: '$$this.userid',\n value: {\n location: '$$this.location',\n batch: {\n $ceil: {\n $divide: [\n { $add: ['$$value.i', 1 ]},\n '$$value.maxBatchSize' \n ]\n }\n }\n }\n }\n ],\n ]\n }\n }\n }\n }\n }\n },\n {\n $unwind: '$result.batches',\n },\n {\n $project: {\n _id: '$result.batches.userid',\n value: '$result.batches.value',\n }\n }\n]);\n// Solution 3. 
$bucketAuto\nlet batchGroups = db.places.aggregate([\n {\n $bucketAuto: {\n groupBy: '$_id',\n buckets: 3,\n output: {\n batches: {\n $addToSet: {\n userid: '$userid',\n location: '$location'\n }\n }\n }\n }\n },\n]).toArray();\n[\n {\n _id: {\n min: ObjectId(\"64e5cc029286257cdfcc4ddf\"),\n max: ObjectId(\"64e5cc029286257cdfcc4de4\")\n },\n batches: [\n { userid: 'b', location: 'Bukit Timah' },\n { userid: 'a', location: 'Bishan' },\n { userid: 'd', location: 'Segar' },\n { userid: 'c', location: 'Ang Mo Kio' },\n { userid: 'e', location: 'Fajar' }\n ]\n },\n {\n _id: {\n min: ObjectId(\"64e5cc029286257cdfcc4de4\"),\n max: ObjectId(\"64e5cc029286257cdfcc4de9\")\n },\n batches: [\n { userid: 'f', location: 'dover' },\n { userid: 'i', location: 'Rocher' },\n { userid: 'h', location: 'Marina Bay' },\n { userid: 'j', location: 'down town' },\n { userid: 'g', location: 'Buona Vista' }\n ]\n },\n {\n _id: {\n min: ObjectId(\"64e5cc029286257cdfcc4de9\"),\n max: ObjectId(\"64e5cc029286257cdfcc4ded\")\n },\n batches: [\n { userid: 'l', location: 'Pungol' },\n { userid: 'm', location: 'One North' },\n { userid: 'n', location: 'Cho Chu Kang' },\n { userid: 'o', location: 'Yishun' },\n { userid: 'k', location: 'Jurong' }\n ]\n }\n]\nfunction transformToBatches(batchgroups) {\n const ungrouppedBatches = [];\n batchgroups.forEach(function (batchGroup, index) {\n batchGroup.batches.forEach(function (batch) {\n ungrouppedBatches.push({\n _id: batch.userid,\n value: {\n location: batch.location,\n batch: index + 1\n }\n });\n });\n });\n return ungrouppedBatches;\n}\n\n// pass result (batchGroups object) from the aggregation above \n// into this function call\ntransformToBatches(batchGroups);\n[\n { _id: 'b', value: { location: 'Bukit Timah', batch: 1 } },\n { _id: 'a', value: { location: 'Bishan', batch: 1 } },\n { _id: 'd', value: { location: 'Segar', batch: 1 } },\n { _id: 'c', value: { location: 'Ang Mo Kio', batch: 1 } },\n { _id: 'e', value: { location: 'Fajar', batch: 1 } },\n { _id: 'f', value: { location: 'dover', batch: 2 } },\n { _id: 'i', value: { location: 'Rocher', batch: 2 } },\n { _id: 'h', value: { location: 'Marina Bay', batch: 2 } },\n { _id: 'j', value: { location: 'down town', batch: 2 } },\n { _id: 'g', value: { location: 'Buona Vista', batch: 2 } },\n { _id: 'l', value: { location: 'Pungol', batch: 3 } },\n { _id: 'm', value: { location: 'One North', batch: 3 } },\n { _id: 'n', value: { location: 'Cho Chu Kang', batch: 3 } },\n { _id: 'o', value: { location: 'Yishun', batch: 3 } },\n { _id: 'k', value: { location: 'Jurong', batch: 3 } }\n]\n",
"text": "Hello, @Laks !I have come up with 3 solutions for your case :Solution 1. $accumulator inside $group\nUses custom js-code to calculate batch number for a given document.Solution 2. $reduce after $group\nThis solution works similar to the previous one, but it does not use custom js-code, so it should work faster.Both solutions above work as expected, but since they use $group stage, they might hit 16MB BSON-document size limit.If you decide to stick with one of those, and 16MB limitation is the real problem for you - check if that’s possible for you to run these pipelines with a $match + $limit stages, to process the whole collection with few runs. Note, this may you will have to distinguish documents that have been processed with your aggregation pipeline and what documents - not. This, for example, may involve adding additional boolean field to your documents.Solution 3. $bucketAuto.\nSince you need to distribute your documents between batches, I would suggest to look at [$bucketAuto]. (https://www.mongodb.com/docs/manual/reference/operator/aggregation/bucketAuto/) pipeline stage. I wGiven your example dataset, running the following pipeline:Would produce the following results:As you can see, documents are grouped in batches, only batch numbers are missing. But do you really need that number, if you have relevant document groped under corresponding batch? You can add batch numbers with a js-function, that is not a part of aggregation pipeline and you can execute it in the mongo shell or within your application code:All three solutions produce the same output:",
"username": "slava"
}
] |
$function & $where usage for a MR query
|
2023-08-23T07:56:39.115Z
|
$function & $where usage for a MR query
| 350 |
null |
[
"node-js"
] |
[
{
"code": " queryResults.dbRefField // automatically resolves cursor to dbref\n import { resolveDbRef } from 'mongodb'\n await resolveDbRef(queryResults.dbRefField)\n",
"text": "I want to use DBRef and I can see this page states that mongodb driver for nodejs supports dbref. What is this support? Is there a method with which I can resolve dbrefs? Ideally it would be something like this:orbut I couldn’t find any of these in documentation and I don’t want to use $lookup because I need basic syntax, not aggregation. Is this class is the only reason docs says dbrefs are “supported” by nodejs?",
"username": "111731"
},
{
"code": "",
"text": "I tried to format code but it doesn’t seem to work, also I tried to edit my post but there is no edit button \nI spent so much time trying to just login into forums because it kept redirecting me somewhere on 404 page. I encountered so many bugs in mongodb for last few months, which I can’t report because JIRA site is glitchy and doesn’t let me post bug report, I’m thinking about moving back to mysql or to something else…",
"username": "111731"
},
{
"code": "",
"text": "Welcome to the MongoDB Community Forums @111731 !Is this class is the only reason docs says dbrefs are “supported” by nodejs?Yes. I would avoid using the legacy DBRef type and instead use manual referencing. There is limited support for DBRefs in modern drivers, tools, and aggregation queries and the documentation should have stronger discouragement (I’ll raise an issue).I tried to format code but it doesn’t seem to workYou can use GitHub-style code fences (```); I added these to your first post. See Formatting code and log snippets in posts.I tried to edit my post but there is no edit buttonWe’ve unfortunately had to adjust the edit limit for new users due to spammers who sign up for accounts and repeatedly edit to try to bypass detection. As you spend a bit more time on the forums you there are increased Trust levels and forum privileges.JIRA site is glitchy and doesn’t let me post bug report,I’m not aware of any specific issue affecting JIRA or forum logins. Can you share more details on the sort of glitches you are experiencing?Login issues are more common if you have very restricted cookie settings. What browser version are you using? Do you have any additional ad blockers or privacy extensions enabled?Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi,\ndespite the disencouragement, I read thisIf you have documents in a single collection that relate to documents in more than one collection, you may need to consider using DBRefsSo what should actually be the proper way to perform references among documents with other collection’s documents?And how the query in order to know if the object is created and then use that document’s _id or it does not yet exist so in need to be created first and afterwards put that new _id as the document reference.Thanks!\nkm",
"username": "kevin_Morte_i_Piferrer"
},
{
"code": "_id$id$ref$db \"creator\" : {\n \"$ref\" : \"creators\",\n \"$id\" : ObjectId(\"5126bc054aed4daf9e2ab772\"),\n \"$db\" : \"users\"\n }\n \"creator\" : {\n \"coll\" : \"creators\",\n \"_id\" : ObjectId(\"5126bc054aed4daf9e2ab772\"),\n \"db\" : \"users\"\n }\n \"creator_id\": ObjectId(\"5126bc054aed4daf9e2ab772\")\n",
"text": "If you have documents in a single collection that relate to documents in more than one collection, you may need to consider using DBRefsSo what should actually be the proper way to perform references among documents with other collection’s documents?Hi @kevin_Morte_i_Piferrer,Proper approach is a matter of opinion, but I would narrow this use case down to: if you have a single field that relates to multiple collections you may want to consider using DBRefs or an equivalent subdocument format.However, I’d still be strongly inclined to use manual references (i.e. an equivalent subdocument format) since the DBRef BSON type will be more difficult to work with in server-side queries.The full DBRef includes an _id ($id), collection name ($ref), and optional database name ($db).Borrowing the docs example:A manual reference might look like:If the related collection is in the same database, you could simplify to:You would have to construct appropriate queries if you want to find or combine related data, but that is still the case with DBRefs. DBRefs have limited server-side support, and have to be resolved using additional queries to return the referenced documents.Overall DBRefs are a legacy convention best avoided in modern applications using MongoDB.Regards,\nStennie",
"username": "Stennie_X"
},
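Editor’s note: a minimal Node.js sketch of resolving the manual reference format shown above with a second query; the 'posts' collection name and postId are hypothetical. When the target collection is fixed, the same join can also be done server-side with $lookup:

```js
// inside an async function, with `db` an already-connected Db instance
// and `postId` assumed to be a known ObjectId
const doc = await db.collection('posts').findOne({ _id: postId });

// follow the manual reference: one extra round trip, no special BSON type needed
const creator = await db
  .collection(doc.creator.coll)
  .findOne({ _id: doc.creator._id });
```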
{
"code": "",
"text": "I think this should be reflected in the documentation… especially since it is 2023 and I am experiencing the same thing as @kevin_Morte_i_Piferrer, with regards to the relating of documents to documents in more than one collection.",
"username": "Walter_Karabin"
}
] |
What is the "support" for DBRef in NodeJS driver?
|
2021-09-03T15:42:06.152Z
|
What is the “support” for DBRef in NodeJS driver?
| 7,227 |
[
"compass"
] |
[
{
"code": "",
"text": "\nimage1142×364 19.9 KB\n\nThere are documents but it always shows 0.CLUSTER : Standalone\nEDITION : MongoDB 5.0.6 CommunityAny ideas??Thanks, Harvey",
"username": "LeThAL_LAD_N_A"
},
{
"code": "collectionInfocountDocuments",
"text": "@LeThAL_LAD_N_A I think this is a limitation of timeseries. In that view Compass uses the collectionInfo information which for time series does not include the number of documents. To display the number of documents, Compass would need to do a countDocuments which might be an expensive operation.",
"username": "Massimiliano_Marcon"
},
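Editor’s note: if the count is needed despite the cost, it can be run manually in mongosh (the collection name here is a placeholder). This is essentially what Compass does when the collection is opened; on a time-series collection it executes an aggregation under the hood, which is why it can be slow on large collections:

```js
// accurate count, but potentially expensive on a large time-series collection
db.weather.countDocuments({})

// restricting by the time field keeps the scan smaller
db.weather.countDocuments({ ts: { $gte: ISODate("2022-01-01") } })
```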
{
"code": "",
"text": "Is there a way to enable it, I don’t mind that it’s an expensive operation",
"username": "LeThAL_LAD_N_A"
},
{
"code": "",
"text": "Right now there is not a way to enable that. However, if you open the collection, you will see the count as soon as it’s loaded.\nimage1631×1095 130 KB\n",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "Something similar just happened after update 1.39.2.\nAfter letting that update install, document counts for DocumentDB clusters no just show a hyphen for count. And none of the other attributes (Avg. document size, Indexed, total index size) are populated either.Attributes for local MongoDB still populate",
"username": "ratbak"
}
] |
Mongodb compass showing 0 documents
|
2022-04-03T17:11:30.785Z
|
Mongodb compass showing 0 documents
| 3,687 |
|
null |
[
"java",
"spring-data-odm"
] |
[
{
"code": " BulkWriteResult bulkUpsert(List<Product> products) {\n BulkOperations bulkOperations = template.bulkOps(BulkOperations.BulkMode.UNORDERED, Product.class);\n products.forEach(product -> {\n Query query = new Query().addCriteria(Criteria.where(\"_id\").is(product.id));\n bulkOperations.replaceOne(query, product, FindAndReplaceOptions.options().upsert());\n });\n\n return bulkOperations.execute();\n }\n",
"text": "Good afternoon! After using bulk upsert now our application is 5 to 10 times faster. However, because one course of MongoDB University and other sources say that bulk operations may partially fail in rare instance, we really need to check if a bulk upsert fails or not. For the following code snippet using Spring Data MongoDB:after each call to bulkUpsert() how can we know whether MongoDB has successfully executed the bulk operation? Do we always get an exception when MongoDB fails to upsert the documents partially?Thanks a lot, in advance!Daniel Li",
"username": "Daniel_Li1"
},
{
"code": "BulkWriteResult ",
"text": "The API seems to returnBulkWriteResult See the documentation for more information.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks a lot, @steevej ! https://www.mongodb.com/docs/v6.2/reference/method/BulkWriteResult/ contains BulkWriteResult.writeErrors and BulkWriteResult.writeConcernError. If my understanding is correct, the BulkWriteResult instance returned by the MongoDB Java driver does not contain BulkWriteResult.writeErrors or BulkWriteResult.writeConcernError. Do you know how to get BulkWriteResult.writeErrors or BulkWriteResult.writeConcernError in Java? Thanks a lot, in advance!",
"username": "Daniel_Li1"
},
{
"code": "",
"text": "From what I understand for writeErrors and writeConcernError you get exceptions:https://mongodb.github.io/mongo-java-driver/3.4/javadoc/com/mongodb/MongoBulkWriteException.html",
"username": "steevej"
},
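Editor’s note: the same contract is easy to demonstrate from mongosh, which sits on the Node driver; a partial failure of an unordered bulk write surfaces as a thrown MongoBulkWriteError rather than as fields on a returned result (the collection and operations below are made up). In the Java driver the analogous details are exposed via MongoBulkWriteException.getWriteErrors() and getWriteResult():

```js
try {
  db.products.bulkWrite(
    [
      { replaceOne: { filter: { _id: 1 }, replacement: { _id: 1, name: 'a' }, upsert: true } },
      { replaceOne: { filter: { _id: 2 }, replacement: { _id: 2, name: 'b' }, upsert: true } }
    ],
    { ordered: false }
  );
} catch (e) {
  printjson(e.writeErrors); // which operations failed, and why
  printjson(e.result);      // what succeeded despite the failures
}
```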
{
"code": "",
"text": "Thanks a lot again, @steevej! This is also my guess. The MongoDB Java driver needs to provide a notification when a bulk upsert fails partially.",
"username": "Daniel_Li1"
}
] |
How do we know if a bulk upsert fails partially
|
2023-08-21T18:37:30.728Z
|
How do we know if a bulk upsert fails partially
| 471 |
null |
[
"aggregation",
"queries",
"atlas-search"
] |
[
{
"code": "from datetime import datetime, timedelta\n\nresults = my_collection.aggregate(\n [\n {\n \"$searchMeta\": {\n \"index\": \"MsgAtlasIndex\",\n \"count\": {\"type\": \"total\"},\n \"compound\": {\n \"must\": [\n {\n \"range\": {\n \"path\": \"ts\",\n \"gte\": datetime.now() - timedelta(hours=24),\n }\n },\n {\n \"equals\": {\"path\": \"st\", \"value\": 2},\n \"equals\": {\"path\": \"aff\", \"value\": 2},\n \"equals\": {\"path\": \"src\", \"value\": 6},\n },\n ]\n },\n }\n }\n ]\n)\nlist(results)\n[{'count': {'total': 38}}]\ndt[{'count': {'total': 38}},\n'last_message_datetime': datetime.datetime(2023, 8, 16, 11, 30, 21)]\n\"last_message_datetime\": {\"$max\": \"$dt\"}{\n \"_id\": ObjectId(\"64da2904b028a6d62965d623\"),\n \"ts\": datetime.datetime(2023, 8, 14, 13, 15, 48, 632000),\n \"dt\": datetime.datetime(2023, 8, 14, 13, 14, 58),\n \"aff\": 2,\n \"src\": 2,\n \"st\": 2,\n}\n",
"text": "In a MongoDB Atlas searchMeta aggregation:OutputI need to get the time of the latest collected message via dt keyWanted result:I can use this line \"last_message_datetime\": {\"$max\": \"$dt\"} in a normal aggregation but how to do it using Atlas?Sample document below:",
"username": "ahmad_al_sharbaji"
},
{
"code": "db.messages.insertMany([\n {\n _id: 'M1',\n text: 'Lorem ipsum dolor sit amet',\n channelId: 1,\n createdAt: ISODate('2023-08-11T15:00:00.000Z'), // 15:00:00\n },\n {\n _id: 'M2',\n channelId: 1,\n text: 'Eiusmod tempor incididunt',\n createdAt: ISODate('2023-08-11T15:01:00.000Z'), // 15:01:00\n },\n {\n _id: 'M3',\n channelId: 1,\n text: 'Ut labore et dolore magna aliqua',\n createdAt: ISODate('2023-08-11T15:01:45.000Z'), // 15:01:45\n },\n {\n _id: 'M4',\n channelId: 2,\n text: 'Excepteur sint occaecat cupidatat',\n createdAt: ISODate('2023-08-11T15:00:25.000Z'), // 15:00:25\n },\n {\n _id: 'M5',\n channelId: 2,\n text: 'Fugiat nulla pariatur',\n createdAt: ISODate('2023-08-11T15:01:15.000Z'), // 15:01:15\n },\n]);\ndb.messages.aggregate([\n {\n $match: {\n channelId: 1,\n createdAt: {\n $gte: ISODate('2023-08-11T15:00:00.000Z'), // 15:00:00\n }\n }\n },\n {\n $group: {\n _id: null,\n count: {\n $sum: 1\n },\n largestCreatedAt: {\n $max: '$createdAt',\n }\n }\n }\n]);\n[\n {\n _id: null,\n count: 3, // total 3 messages found\n largestCreatedAt: ISODate(\"2023-08-11T15:01:45.000Z\") // 15:01:45\n }\n]\n",
"text": "Hello, @ahmad_al_sharbaji ! It seems you’re overcomplicating the solution. I think in your aggregation pipeline you can use simple $match + $group stages.Have a look the example below - the dataset and the solution is similar to the case of yours.Example dataset:Example aggregation pipeline. Notice, that it contains same conditions, but outside of $searchMeta stage:Output:I used more verbose field names and ISODate object as values instead of timestamp, but you should see the similarity and understand the solution. If not - let me know ",
"username": "slava"
},
{
"code": "",
"text": "Dear Mr. @slava ,\nI really appreciate the time you spent on this great reply! Thank you so much.Actually, I already have this approach, but with a collection that contains more than 70M documents, this aggregation will take around 25 minutes which is bad for our production side.That’s why I decided to use searchMeta, it takes now around 26 seconds! But this approach gets the count only without the document details and here is my problem I cannot get the document time.I again appreciate the time you spent, but can you help me with this approach? Or could you progress any other fast approach like searchMeta?I can’t wait for you response! Regards.",
"username": "ahmad_al_sharbaji"
},
{
"code": "last_message_datetime-1staffsrcdb.messages.createIndex({\n createdAt: -1, // for sorting\n channelId: 1 // for filtering\n});\n",
"text": "@ahmad_al_sharbaji ,Such difference in speed may be because aggregation with $searchMeta strongly relies on underlying indexes, while the other aggregation based on $match + $group stages, that I provided above - not. What indexes you have on your collection? You can check it with db.collection.getIndexes() method.To make the aggregation above faster, you should create a compound index, that would include:Example of a compound index definition (relies on the example dataset I provided initially):Try this out and let me know, if it helped ",
"username": "slava"
},
{
"code": "",
"text": "@slava You can’t believe how much I appreciate your help!Indeed, I already have Atlas compound index called “MsgAtlasIndex”.I tried the solution and it took a lot of time in production I clearly see no solution but searchMeta one… But the problem with searchMeta is returning numbers not documents, so I can’t get into the result details and extract the date time.What do you think?",
"username": "ahmad_al_sharbaji"
},
{
"code": "createdAtdb.messages.insertMany([\n {\n _id: 'M1',\n text: 'Lorem ipsum dolor sit amet',\n channelId: 1,\n createdAt: ISODate('2023-08-11T15:00:00.000Z'), // 15:00:00\n createdAtString: '2023-08-11T15:00:00.000Z',\n },\n {\n _id: 'M2',\n channelId: 1,\n text: 'Eiusmod tempor incididunt',\n createdAt: ISODate('2023-08-11T15:30:00.000Z'), // 15:30:00\n createdAtString: '2023-08-11T15:30:00.000Z',\n },\n {\n _id: 'M3',\n channelId: 1,\n text: 'Ut labore et dolore magna aliqua',\n createdAt: ISODate('2023-08-11T16:00:45.000Z'), // 16:00:45\n createdAtString: '2023-08-11T16:00:45.000Z',\n },\n {\n _id: 'M4',\n channelId: 2,\n text: 'Excepteur sint occaecat cupidatat',\n createdAt: ISODate('2023-08-11T15:00:25.000Z'), // 15:00:25\n createdAtString: '2023-08-11T15:00:25.000Z',\n },\n {\n _id: 'M5',\n channelId: 2,\n text: 'Fugiat nulla pariatur',\n createdAt: ISODate('2023-08-11T16:01:15.000Z'), // 16:01:15\n createdAtString: '2023-08-11T16:01:15.000Z',\n },\n]);\ndb.messages.aggregate([\n {\n $searchMeta: {\n index: 'messages-test-search',\n count: {\n type: 'total'\n },\n facet: {\n operator: {\n compound: {\n must: [\n {\n range: {\n path: 'createdAt',\n gte: ISODate('2023-08-11T15:00:00.000Z'),\n lte: ISODate('2023-08-11T16:00:00.000Z')\n },\n },\n {\n equals: {\n path: 'channelId',\n value: 1\n }\n }\n // you can add more equality conditions in this array\n ]\n },\n },\n facets: {\n myFacet: {\n type: 'string',\n // field 'createdAtString' will be used as a bucket name\n path: 'createdAtString',\n }\n }\n },\n }\n },\n]);\n[\n {\n count: { total: Long(\"2\") },\n facet: {\n myFacet: {\n buckets: [\n { _id: '2023-08-11T15:00:00.000Z', count: Long(\"1\") },\n { _id: '2023-08-11T15:30:00.000Z', count: Long(\"1\") }\n ]\n }\n }\n }\n]\ncreatedAtdb.messages.aggregate([\n {\n $searchMeta: { /* unchanged */ }\n },\n {\n $unwind: '$facet.myFacet.buckets'\n },\n {\n $project: {\n total: '$count.total',\n createdAtBoundary: {\n $toDate: '$facet.myFacet.buckets._id',\n }\n }\n },\n {\n $group: {\n _id: null,\n total: {\n $first: '$total'\n },\n largestCreatedAt: {\n $max: '$createdAtBoundary'\n }\n }\n }\n]);\n[\n {\n _id: null,\n total: Long(\"2\"),\n largestCreatedAt: ISODate(\"2023-08-11T15:30:00.000Z\")\n }\n]\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"channelId\": {\n \"type\": \"number\"\n },\n \"createdAt\": {\n \"type\": \"date\"\n },\n \"createdAtString\": {\n \"type\": \"stringFacet\"\n }\n }\n }\n}\ncreatedAt",
"text": "@ahmad_al_sharbaji ,The main problem in your approach with $searchMeta, that it is used to provide metadata (information about the data, like bucket boundaries or total number of documents, that fall under certain set of conditions), not the data itself.However, I did not say it is not possible to get the data you want. But, you gonna need to play dirty .\nIf you look chosely at the example of how $search meta is used with facets, you can see, that some data, can be extracted. It just needs to become good boundary names for the buckets. A good name, in your case should:I think a good candidate for that name can be createdAt field (see my example dataset below), but stringified - `createdAtString.I will demonstrate the idea with examples below.Example dataset:Example aggregation pipeline that contains conditions close to your query:Output:As you can see, now we have metadata (total documents selected) and actual data (docuement’s createdAt) in form of bucket names. Then, will little efforts, we can get the result you want:Final output:Atlas Search Index configuration object I used:If you afraid that two documents in your collection can have exact same createdAt value, you can, for example, concatenate document id to that string, so the value would look like ‘2023-08-11T15:30:00.000Z__M1’. But, in this case, you will have to add additional stages to disassemble this string in order to work with dates.Again, it is a dirty solution, but it works with $searchMeta ",
"username": "slava"
},
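For readers adapting slava's approach back to the original Python question, here is a minimal PyMongo sketch. It assumes each document also stores a stringified copy of dt (called `dtString` here, a hypothetical field you would add) mapped as `stringFacet` in the "MsgAtlasIndex" search index. One caveat: string facets return a limited number of buckets (ordered by count, capped at 1000), so this only holds when the filtered result set is small.

```python
# Sketch only: `dtString` is an assumed stringified copy of `dt`,
# indexed as `stringFacet` in the "MsgAtlasIndex" Atlas Search index.
from datetime import datetime, timedelta

pipeline = [
    {"$searchMeta": {
        "index": "MsgAtlasIndex",
        "count": {"type": "total"},
        "facet": {
            "operator": {"compound": {"must": [
                {"range": {"path": "ts", "gte": datetime.now() - timedelta(hours=24)}},
                {"equals": {"path": "st", "value": 2}},
                {"equals": {"path": "aff", "value": 2}},
                {"equals": {"path": "src", "value": 6}},
            ]}},
            "facets": {"byDt": {"type": "string", "path": "dtString"}},
        },
    }},
    # Each bucket _id is an ISO-8601 string, so $max over $toDate of the
    # bucket names yields the latest collected message time.
    {"$unwind": "$facet.byDt.buckets"},
    {"$group": {
        "_id": None,
        "total": {"$first": "$count.total"},
        "last_message_datetime": {"$max": {"$toDate": "$facet.byDt.buckets._id"}},
    }},
]
print(list(my_collection.aggregate(pipeline)))
```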
{
"code": "createdAtStringstringcreatedAtStringcreatedAtString",
"text": "@slava This dirty solution needs a very little adjustment to become a complete life saver!\nI don’t know why you assigned createdAtString as a string. The field I used to get the lastest time is date field.Can you please make createdAtString as a date field? Because I applied it on dateFacet field but didn’t work correctly. Please convert createdAtString to a date field and give it dateFacet index and try the example.This is completely what I want!! I can’t wait for your adjustment, kindly.",
"username": "ahmad_al_sharbaji"
},
{
"code": "createdAtStringDateFacet",
"text": "@slava Your are a serious genius! Please just make createdAtString as a date field not string and give it DateFacet indexing and demonstrate it again.",
"username": "ahmad_al_sharbaji"
},
{
"code": "createdAtStringcreatedAt",
"text": "@ahmad_al_sharbaji ,The dirty solution won’t work with date field. If you look at the date facet syntax, you will see, it requires to use boundaries, which you have to know beforehand. Moreover, in this case, boundaries array must represent every datetime possible for a given period of time, so it won’t contain more than 1 message in it.Using stringified version createdAtString of createdAt field is integral and dirty part of the solution.It is the only way you get some data using $searchMeta. Either go dirty or use another solution, that does not involve $searchMeta.",
"username": "slava"
},
{
"code": "",
"text": "@slava life saver !!! I can’t thank you enough man! God bless.Could you please see this topic too? It’s way easier!Thanks in advance!",
"username": "ahmad_al_sharbaji"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
How to get collected document time MongoDB
|
2023-08-16T11:43:31.201Z
|
How to get collected document time MongoDB
| 816 |
null |
[
"atlas-cluster",
"php"
] |
[
{
"code": "\nuse Exception;\nuse MongoDB\\Client;\n\n$uri = mongodb+srv://<username>:<password>@mondocluster.iyop7xn.mongodb.net/?retryWrites=true&w=majority;\n\n// Create a new client and connect to the server\n$client = new MongoDB\\Client($uri);\n\ntry {\n // Send a ping to confirm a successful connection\n $client->selectDatabase('admin')->command(['ping' => 1]);\n echo \"Pinged your deployment. You successfully connected to MongoDB!\\n\";\n} catch (Exception $e) {\n printf($e->getMessage());\n}\n[Sun Aug 13 18:28:48.497295 2023] [core:notice] [pid 48484] AH00052: child pid 68759 exit signal Segmentation fault (11)",
"text": "Hi \nI’m trying to connect to MongoDB Atlas from my Wordpress theme.\nBut I get a segmentation fault error.I’m using the snippet of code provided by the Atlas interface:This is the error I get in the server log:[Sun Aug 13 18:28:48.497295 2023] [core:notice] [pid 48484] AH00052: child pid 68759 exit signal Segmentation fault (11)That PHP code in a freshly installed WordPress version fails and produced the server error.\nIf I run the same PHP code in a standalone PHP application everything works fine.Any idea on how to investigate the problem? Where shall I look?My environment:",
"username": "Giulio_Andreini"
},
{
"code": "",
"text": "Hello @Giulio_Andreini , and welcome to the MongoDB Community forums! I’ve used MongoDB with WordPress for an eCommerce application, so I’m always curious about how others use MongoDB with WP!Could you give us more context about how you integrated the MongoDB PHP Library into your WordPress theme? Did you use Composer to include the library and its dependencies?Also, when looking with step-by-step debugging, the $client object is valid?Let us know,\nHubert",
"username": "Hubert_Nguyen1"
},
{
"code": " \"mongodb/mongodb\": \"^1.16\"[Mon Aug 14 15:48:20.524888 2023] [core:notice] [pid 80719] AH00052: child pid 55145 exit signal Segmentation fault (11)",
"text": "Hi @Hubert_Nguyen1,\nthanks for your answer.I built a forecast service for surfers called Mondo Surf (mondo.surf) and I recently took all the forecast logic out of WordPress and built a dedicated PHP service.The service writes data in MongoDB Atlas and then I need to retrieve all this forecast data from my WordPress application.\nI store in MongoDB the weekly forecast data for each surf spot, so I need to retrieve the JSON with the forecast every time a user opens a page of a surf spot.\nI can connect to MongoDB Atlas from my server/production version but not from my local computer (I added both IPs in Network Access).I added MongoDB via composer ( \"mongodb/mongodb\": \"^1.16\").\nIn the vendor folder I have:This is what the $client object looks like:\nimage1882×786 180 KB\nI keep trying and still get the same segfault error:\n[Mon Aug 14 15:48:20.524888 2023] [core:notice] [pid 80719] AH00052: child pid 55145 exit signal Segmentation fault (11)Also, the connection to MongoDB works correctly from a local test PHP application. It stops working (and generating the segfault) when I move it inside WordPress (even a fresh new installation).Let me know if you need more information\nThanks in advance for any help you could provide!best",
"username": "Giulio_Andreini"
},
{
"code": "",
"text": "Thanks for the additional data. The client seems to be created OK.The AH00052 is specific to Apache (or one of the modules), and is a memory crash error Since isolating the MongoDB code execution outside of WP works, I wondered… In WordPress, some of the PHP settings concerning memory size or buffer sizes may be different (overwritten in wp-config or even .htaccess ) and could be worth looking at.Disabling Apache modules not needed might help too.You mention the same code works on the production server. Is it fair to assume you’re replicating the same setup as much as possible on your local system? Are you running this locally on a Mac directly or via a VM using a Linux flavor?",
"username": "Hubert_Nguyen1"
},
{
"code": ".htaccesswp-config.phpWP_DEBUGABSPATHWP_MEMORY_LIMITmpm_prefork_module, authn_file_module, authn_core_module, authz_host_module, authz_groupfile_module, authz_user_module, authz_core_module, access_compat_module, auth_basic_module, socache_shmcb_module, reqtimeout_module, filter_module, mime_module, log_config_module, env_module, headers_module, setenvif_module, version_module, ssl_module, unixd_module, status_module, autoindex_module, dir_module, alias_module, rewrite_module, php_module",
"text": "Hi @Hubert_Nguyen1\nI removed completely the .htaccess file and in the wp-config.php I just have the basic initial setup of WP (DB configuration, authentication unique keys, WP_DEBUG, ABSPATH).\nI tried to increase the WP_MEMORY_LIMIT but with no success.These are the active modules in my httpd.conf:\nmpm_prefork_module, authn_file_module, authn_core_module, authz_host_module, authz_groupfile_module, authz_user_module, authz_core_module, access_compat_module, auth_basic_module, socache_shmcb_module, reqtimeout_module, filter_module, mime_module, log_config_module, env_module, headers_module, setenvif_module, version_module, ssl_module, unixd_module, status_module, autoindex_module, dir_module, alias_module, rewrite_module, php_moduleI tried to randomly remove some of these but with no success (very often the local server gets broken).I’m running this on my mac directly (no VM).",
"username": "Giulio_Andreini"
},
{
"code": "",
"text": "Thanks for looking at these potential friction points.I wondered if there’s a crash report in the Apache logs with more details. There might be an indication as for which library or module crashes.",
"username": "Hubert_Nguyen1"
},
{
"code": "",
"text": "Hi @Hubert_Nguyen1\nI’ve spent a lot of time on this, and tried everything that came up to my mind.One question: is it possible to query MongoDB Atlas using a cURL request from PHP? (and not using the MongoDB library)Best\ng",
"username": "Giulio_Andreini"
},
{
"code": "",
"text": "Hi @Giulio_Andreini ,The data API might be what you’re looking for: https://www.mongodb.com/docs/atlas/app-services/data-api/Keep in mind that it’s not as efficient as using the driver, but sometimes it’s handy to have HTTPS access for devices that don’t support a driver for various reasons (IoT etc…)For the segmentation fault, it’s hard to track what the issue is. Another alternative to consider is using containers to avoid these compatibility-type issues.I’ve used http://devilbox.org/ (it’s a pre-configured Docker-based tool) in the past and it is quick to get a WP website up and running. MongoDB is supported out of the box and if you need anything else, there are ways to customize things.Cheers,\nHubert",
"username": "Hubert_Nguyen1"
}
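To make the cURL suggestion concrete, here is a hedged PHP sketch against the Data API's findOne action. The app ID, API key, data source, database and collection names are all placeholders to replace with your own values.

```php
<?php
// Hypothetical values throughout: replace <app-id>, the api-key and
// the dataSource/database/collection names with your own.
$payload = json_encode([
    'dataSource' => 'MondoCluster',
    'database'   => 'forecast',
    'collection' => 'spots',
    'filter'     => ['slug' => 'some-surf-spot'],
]);

$ch = curl_init('https://data.mongodb-api.com/app/<app-id>/endpoint/data/v1/action/findOne');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $payload,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'api-key: <your-data-api-key>',
    ],
]);
$response = curl_exec($ch);
curl_close($ch);

// The matched document (if any) comes back under the "document" key.
$document = json_decode($response, true)['document'] ?? null;
```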
] |
MongoDB Atlas connection from Wordpress/Php throws segmentation fault
|
2023-08-13T16:41:02.550Z
|
MongoDB Atlas connection from Wordpress/Php throws segmentation fault
| 663 |
null |
[
"queries",
"rust"
] |
[
{
"code": "[dependencies]\nmongodb = { version = \"2.6.1\", features = [\"bson-uuid-1\"] }\nuuid = \"1.4.1\"\n#[tokio::main]\nasync fn main() -> mongodb::error::Result<()> {\n // Connect to MongoDB.\n let client = Client::with_uri_str(\"mongodb://localhost:27017\").await.unwrap();\n let database = client.database(\"your_database_name\");\n let collection = database.collection(\"your_collection_name\");\n\n // UUID you want to search for.\n let desired_uuid = Uuid::parse_str(\"936DA01F-9ABD-4D9D-80C7-02AF85C822A8\").unwrap();\n let filter = bson::doc! {\n \"uuid\": desired_uuid\n };\n\n // Fetch documents that match the filter criteria.\n let mut cursor = collection.find(Some(filter), None).await.unwrap();\n\n // Iterate over the results of the cursor.\n while let Some(result) = cursor.try_next().await.unwrap() {\n println!(\"result: {:?}\", result);\n };\n\n Ok(())\n}\ndesired_uuid",
"text": "I’ve encountered an issue where a MongoDB search using a UUID as a filter is not returning any results. This is unexpected, as I have documents in the collection with the exact UUID value I’m filtering by.Here’s the code snippet in question:Despite the above code, no results are printed to the console. I’ve double-checked my collection, and I’m certain that there are documents with the “uuid” field in Binary format that matches the desired_uuid.Is there any known issue regarding filtering with UUID?",
"username": "Wojciech_Kargul"
},
{
"code": "",
"text": "For anybody looking for solution please find my GH issue here.What is the actual solution?\nRemember to use bson::Uuid when inserting to your database instead of uuid:Uuid. It is recommended to use bson::Uuid when using UUIDs within you project with mongodb rust driver.",
"username": "Wojciech_Kargul"
}
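For anyone landing here, a minimal sketch of the fix. The key point is using mongodb::bson::Uuid on both the write and the read path, so the value is stored and matched as the same BSON binary subtype:

```rust
// Sketch assuming `collection` is a mongodb::Collection<Document>.
use mongodb::bson::{doc, Uuid};

let id = Uuid::parse_str("936DA01F-9ABD-4D9D-80C7-02AF85C822A8")?;

// Insert with bson::Uuid (serialized as BSON binary subtype 4)...
collection.insert_one(doc! { "uuid": id }, None).await?;

// ...and the same type round-trips in a filter.
let found = collection.find_one(doc! { "uuid": id }, None).await?;
assert!(found.is_some());
```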
] |
MongoDB search using UUID returns no results despite matching documents in the collection
|
2023-08-23T09:07:54.796Z
|
MongoDB search using UUID returns no results despite matching documents in the collection
| 458 |
null |
[
"aggregation",
"java",
"performance"
] |
[
{
"code": "{\n \"_id\": {\n \"$oid\": \"64e31163e4b086f9c5a4fe6a\"\n },\n \"_class\": \"de.dmi.dmixcloud.core.model.TestObject\",\n \"name\": \"RandomName: 0\",\n \"description\": \"Random Description 0\",\n \"owner\": \"5f0707690cf241886f45d125\",\n \"creationDate\": {\n \"$date\": \"2023-08-21T07:25:23.407Z\"\n },\n \"updatedDate\": {\n \"$date\": \"2023-08-21T07:25:23.407Z\"\n },\n \"permissionsSet\": []\n}\n",
"text": "Hi,I’m pretty new to work with mongo and I have a question, where I cannot find a good answer for.\nThe current circumstances are given:Also for my testcase I have the following simpel approach. One document with\nan ID, name, description, owner, createDate, updatedDate und permissionSet array.Here one example:Now it goes about the permissionSet-Array. This will keep a list of all user ids which have access to this document. It can be that several thousand users will have access to it. That means this array will contain easily thousands of strings.Now I recorgnized, that as bigger that array is as slower is a $push call working. Not tried what about quering of this array right now.I’m clear that I try to do relations in a document based database, but this is something I cannot change right now. Question is now, is there a better approach to model this, instead of keeping all users which have access in a long list. I also thought about to have some kind of relation collection, with userid and document id as compound key, and then working with aggregations if I want to ask for all documents where user XY has access to.It looks like a realtive “normal” problem, so I’m not sure why I found so little about it, maybe I’m searching jsut wrong. Maybe someone can help me, which approach is the best in terms of runtime.Thanks and best\nAndreas",
"username": "Andreas_Dahm"
},
{
"code": "permissions_id",
"text": "Did you consider the reverse approach?You can store all the user’s permissions in the user model and not vice-versa.So, for example, you can create permissions array in the user model and put all of the permissions that the user has. That way, even if you have a lot of users it will scale (if you don’t have thousands of the permissions).In your current approach, you should consider the MongoDB document limit of 16MB. So if you would have hundreds of thousands of user and you want to store the _id of all of them in the permission document, you can reach the limit.",
"username": "NeNaD"
},
{
"code": "",
"text": "Hi Nenad,thanks for the idea, that indeed something I can consider, but if I have thousands of permissions I will face the same issue again, right? In that case a relation collection with document to user id would be the way to go? Sure this collection will have a lot if document, but each one should be very small.Best\nAndreas",
"username": "Andreas_Dahm"
},
{
"code": "",
"text": "You can store up to ~1M ObjectIds in some arrays and not pass the 16MB document limitation.Storing user IDs in permission document can easily pass 1M if you would have more than 1M users that should have some permission.Storing permission IDs in the user document will probably not pass the 16MB limitation (if you don’t have more than 1M different permissions). However, this approach has one additional benefit:",
"username": "NeNaD"
},
{
"code": "",
"text": "Totally agree, therefore I tried to store users to documents (which will represent items in the platform) because I thought we will have less users than items, for example 10million items but just 5k users.As testcase i created a collection with 10k documents representing my items and 2k users IDs and I wanted to use push and each to add all 2000k user ids to all 10k items and it took a while ~20s on my testmachine. If I now want to add additonal 2k users, it tooks longer than 60s. Sure this is not an operation which happens very often, but it should not scale that bad?Maybe I have to think about another approach, seems to be to much for the db right now.",
"username": "Andreas_Dahm"
},
{
"code": "",
"text": "Hi,me again. I’m currently not sure how to continue with my usecase, maybe I’m doing something totally wrong. The problem I have is the massive amount of data and the many-to-many relation with this.In my system I have a collection with items with 15 million documents and also round about 5k users. The worst case is that each user has access to each item.First I thought, I can add to each of the 15million item documents an array with the user ids who having access to this item. If I now wanted to find all items a given user has access to, I could do this with one query in the items collection. But having an array of hundreds or even thousand fields on each document takes an eternity to create or even to push or pull on these arrays.For sure an array for each user about the items he can access is also not recommended, because in the worst case the array contains 15 million entries.So next I tried, and also found as a solution for this problem, is that if I cannot reduce the problem from a many-to-many relation to a one-to-many relation I could go for a in-between table. Something which is normal on sql databases to have a relation table.Here the next problem, one collection is not big enough for my use case, because I easily will bypass the 2^32 limit of documents per collection if 2000 users having access to 1 million items, I’m close to the max.So I thought whats about having for each user his own access collection, but then I found out that this is not recommended and an anti-pattern. So on this point my question, how can I handle this with mongo. Sadly switching to an sql based database is no option.Nevertheless I tried it out and for my testcase it works, it created an access relation collection on my machine (notebook with mongo 4 running on docker) in roughly 6 minutes, which is ok. Because this task will happen not often in production and it shows the worst case to create 15million documents.The next point is how can I speed up quering on this large collection? Each document has only 2 ids the uder and the object id and both ids are together a compound index. If I now query for 10k items if a specific user has access, it takes around 30s. Might be related to my machine, but 30s is quite long, is there a way to reduce the time?I know that this is a realy tough problem and the given cirumstances (mongo version 4, mongo as db to handle raltions, many-to-many relations) are not optimal, but I just wanted to know if I mussed something here or if it is what it is right now?Thanks and best",
"username": "Andreas_Dahm"
}
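A sketch of the relation-collection idea in mongosh, assuming a collection named access with one small document per (user, item) pair; all names are illustrative. With the projection below, the query can be served entirely from the index (a covered query), which usually helps a lot at this scale:

```javascript
// Unique compound index: one entry per (user, item) grant.
db.access.createIndex({ userId: 1, itemId: 1 }, { unique: true });

// "Which of these items can this user see?" is covered by the index,
// because both the filter and the projection touch only indexed fields.
db.access.find(
  { userId: someUserId, itemId: { $in: itemIdBatch } },
  { _id: 0, itemId: 1 }
);
```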
] |
[Performance] Best Practice for large arrays per document
|
2023-08-21T08:43:28.651Z
|
[Performance] Best Practice for large arrays per document
| 836 |
null |
[
"compass",
"atlas-cluster"
] |
[
{
"code": "1. Confirmed that my Internet connection is functional.\n2. Validated the accuracy of my connection details.\n3. Attempted to add my current IP address to the \"Network Access\" section of MongoDB Atlas, aiming to fix the issue.\n4. Ensured that my current IP address is included in the MongoDB Atlas IP whitelist.\n5. Confirmed that my firewall (ufw) is inactive.\n6. Tried connecting from a different network, yielding no success.\n",
"text": "Hello,I’m currently encountering a connectivity problem while trying to connect to my MongoDB Atlas cluster from my Ubuntu 22 machine. This issue has arisen after not accessing my projects for a few months. I’ve been consistently receiving the following error message:“querySrv ENOTFOUND _mongodb._tcp.cluster0.vdlcqjo.mongodb.net.”To troubleshoot and resolve this issue, I’ve taken the following steps:However, despite my efforts, the error persists, impeding my connection to the MongoDB Atlas cluster. Furthermore, after entering my IP address into the “Network Access” section of MongoDB Atlas, I noticed that it was labeled as “inactive.” Unfortunately, I couldn’t find any buttons or switches to activate it.I kindly request assistance in resolving this situation. I would greatly appreciate guidance on troubleshooting this connectivity problem within the context of Ubuntu 22. Specifically, I’m seeking advice on how to activate my IP address in the “Network Access” section since I couldn’t locate any buttons or switches for activation.Thank you for your prompt attention and support.Best regards,\nInvectiv System",
"username": "Invectiv_system"
},
{
"code": "nslookup -type=srv _mongodb._tcp.cluster0.vdlcqjo.mongodb.net\nServer:\t\t127.0.0.1\nAddress:\t127.0.0.1#53\n\nNon-authoritative answer:\n_mongodb._tcp.cluster0.vdlcqjo.mongodb.net\tservice = 0 0 27017 ac-x2leqim-shard-00-00.vdlcqjo.mongodb.net.\n_mongodb._tcp.cluster0.vdlcqjo.mongodb.net\tservice = 0 0 27017 ac-x2leqim-shard-00-01.vdlcqjo.mongodb.net.\n_mongodb._tcp.cluster0.vdlcqjo.mongodb.net\tservice = 0 0 27017 ac-x2leqim-shard-00-02.vdlcqjo.mongodb.net.\n",
"text": "Hi @Invectiv_system,“querySrv ENOTFOUND _mongodb._tcp.cluster0.vdlcqjo.mongodb.net.”Please see the details on my post Can't connect to MongoDB Atlas - querySrv ENOTFOUND - #20 by Jason_Tran regarding details of the querySrv error.Using the same srv record you posted, the DNS associated with my machine was able to resolve the hostnames:From the machine attempting to connect, you can try the same thing to see what result you get.If there is nothing returned from the request, you can try a different DNS like Google’s DNS for troubleshooting purposes.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
}
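A quick way to separate a local resolver problem from a cluster problem is to point the same lookup at a public DNS directly (a sketch; 8.8.8.8 is Google's resolver):

```sh
nslookup -type=srv _mongodb._tcp.cluster0.vdlcqjo.mongodb.net 8.8.8.8
```

If this succeeds while the plain lookup fails, the machine's configured DNS is the culprit. As a fallback, the Atlas Connect dialog can also produce the older non-SRV (mongodb://) connection string, which avoids SRV lookups entirely.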
] |
Connection Issue in MongoDB Compass and Inactive IP in MongoDB Atlas from Ubuntu 22
|
2023-08-23T08:06:25.056Z
|
Connection Issue in MongoDB Compass and Inactive IP in MongoDB Atlas from Ubuntu 22
| 527 |
null |
[
"replication"
] |
[
{
"code": "vm.max_map_count",
"text": "I have deployed mongodb to K8s with bitnami helm charts, and it is running as replicaset. When I log into mongodb terminal on secondary I see notification: “vm.max_map_count is too low”.When checking on container, I see:$ sysctl vm.max_map_count\nvm.max_map_count = 26214Per documentation the setting should bevm.max_map_count value of 128000So why am I getting this warning? Or should I increase the vm_max_count?",
"username": "tatuh"
},
{
"code": "",
"text": "Or should I increase the vm_max_count?Yes. As you’re running bitnami, best check their documentation.",
"username": "chris"
},
{
"code": "",
"text": "Thanks. But there was nothing on Bitnami mongodb documentation about the vm_max_count.\nI managed to found only the mongodb doc with “default”.Also, I found this topic about how to increase the value with helm and init-container kubernetes - Setting vm.max_map_count for mongodb with helm chart - Stack Overflow.\nBut, I’m confused about what the value should be, and If meant to be configured, why it is not documented/taken in bitnami helm charts?",
"username": "tatuh"
},
{
"code": "",
"text": "How to count this value if provided default is not enought:sysctl vm.max_map_count\nvm.max_map_count = 26214\nAND\ncat /proc/sys/vm/max_map_count\n262144\n?I found this Jira ticket, but I dont understand what the “2x max connections” is referring to:\n[SERVER-51233] Warn on startup if vm.max_map_count < 2 * max connections - MongoDB Jira",
"username": "tatuh"
},
{
"code": "",
"text": "Oh, seems to be a bug with defaults values: [DOCS-14280] Documentation incorrectly states that configuration parameter net.maxIncomingConnections has a default value of 65536 - MongoDB JiraAnyhow we have now limited connections on application side, so this is irrelevant.",
"username": "tatuh"
}
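For completeness, the init-container workaround linked above looks roughly like this. This is a sketch only; the exact placement of the field depends on the chart version and its values schema:

```yaml
# Added to the pod template of the MongoDB StatefulSet/Deployment.
initContainers:
  - name: set-max-map-count
    image: busybox
    command: ["sh", "-c", "sysctl -w vm.max_map_count=128000"]
    securityContext:
      privileged: true   # required to change a host-level sysctl
```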
] |
Vm.max_map_count is too low in replicaset running in K8s
|
2023-08-21T07:54:22.289Z
|
Vm.max_map_count is too low in replicaset running in K8s
| 2,605 |
null |
[
"aggregation"
] |
[
{
"code": "op_msg{\n \"type\": \"command\",\n \"ns\": \"dba.student\",\n \"command\": {\n \"aggregate\": \"student\",\n \"pipeline\": [\n {\n \"$match\": {\n \"$and\": [\n { \"status\": \"Open\" },\n {\n \"DateTime\": {\n \"$gte\": { \"$date\": \"2023-08-18T13:51:37.495Z\" }\n }\n },\n {\n \"DateTime\": {\n \"$lt\": { \"$date\": \"2023-09-17T13:51:37.495Z\" }\n }\n }\n ]\n }\n },\n {\n \"$group\": {\n \"_id\": \"$storeNumber\",\n \"totalSlotCount\": { \"$sum\": 1 },\n \"minSlotDate\": { \"$min\": \"$appDateTime\" }\n }\n }\n ],\n \"cursor\": {},\n \"allowDiskUse\": false,\n \"$db\": \"dba\",\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": { \"t\": 1692366695, \"i\": 6 }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"x6qCTzYPFUl+HPaNJ1umPcrI0os=\",\n \"subType\": \"00\"\n }\n },\n \"keyId\": 7215676178336580000\n }\n }\n },\n \"planSummary\": \"IXSCAN { status: 1, DateTime: 1, reservedTime: 1 }\",\n \"cursorid\": 4735232936145454000,\n \"keysExamined\": 4355310,\n \"docsExamined\": 4355310,\n \"numYields\": 4474,\n \"nreturned\": 101,\n \"queryHash\": \"E2C2E097\",\n \"planCacheKey\": \"6DD10207\",\n \"reslen\": 6200,\n \"protocol\": \"op_msg\",\n \"durationMillis\": 23145,\n \"v\": \"4.4.23\"\n}\n",
"text": "Hi Team,I need help with the following query that is causing a CPU usage of 30%. How can we reduce the cause of this issue in the future?What is the exact solution?Explain output:",
"username": "hari_dba"
},
{
"code": "op_msgexplain()numYields{status: 1, DateTime: 1}",
"text": "Hey @hari_dba,following query that is causing a CPU usage of 30%The high CPU usage is likely a side effect of the long-running query, rather than an inherent issue with the query itself.How can we reduce the query time from op_msg 23145 to 100?The OP_MSG refers to the wire protocol used to encode the request and response, not the actual query execution time.While reducing query time depends on various factors like database configuration, indexes, etc., we can still make some targeted improvements based on the information provided.Based on the explain() output, it seems there may be constraints on available RAM as indicated by the high numYields value of 4.4k. Also, the current index does not appear efficient for this specific query filter.Per the ESR rules, an index on {status: 1, DateTime: 1} may be more optimal given the query criteria.Could you confirm if you see a similar number of numYields when re-running the query? This will help determine if it is truly a RAM space issue.Please let us know if you need any clarification or have additional details to share.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
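In mongosh, the suggested equality-then-range index would be created like this (collection name taken from the namespace in the log above):

```javascript
// Equality field first (status), range field second (DateTime),
// per the ESR (Equality, Sort, Range) guideline.
db.student.createIndex({ status: 1, DateTime: 1 })
```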
] |
Optimizing Long-Running Query Execution Time
|
2023-08-19T15:50:49.586Z
|
Optimizing Long-Running Query Execution Time
| 374 |
null |
[] |
[
{
"code": "",
"text": "I created some charts using a sample database that I have. I have 6 charts and each of them are running fine when sample database collection is used. However, whenever I am connecting with the whole collection I am getting the error, “Cannot retrieve data” and none of the charts are loading.",
"username": "SATTWAMA_BASU"
},
{
"code": "",
"text": "Hi @SATTWAMA_BASU,I have 6 charts and each of them are running fine when sample database collection is used. However, whenever I am connecting with the whole collection I am getting the error, “Cannot retrieve data” and none of the charts are loading.When you state “whole collection” here, are you referring to all of the documents within a specific collection that is not one of the 6 sample database collections?Additionally, can you share more information about the collection having errors and what data you’re trying to show?Lastly, what cluster tier is the chart associated with?If possible, please provide any steps to reproduce the error as well.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I have used only 1 sample collection for all the 6 charts which is a subset collection of the actual collection. All the charts draw data from 1 single collection. The cluster is M40 (General). While using the sample collection the charts are running fine, when using the actual collection, I am getting the error.",
"username": "SATTWAMA_BASU"
},
{
"code": "",
"text": "You should be able to get a more detailed error message if you click on the “failed” link in the chart footer.",
"username": "tomhollander"
}
] |
Getting "cannot retrieve data" error in Mongodb Charts
|
2023-08-21T10:40:06.607Z
|
Getting “cannot retrieve data” error in Mongodb Charts
| 427 |
[
"vscode"
] |
[
{
"code": "",
"text": "\nmongodb1152×648 28.9 KB\n",
"username": "Micheal_N_A"
},
{
"code": "/<database_name>",
"text": "Hi @Micheal_N_A. Can you post a screenshot with the error you get? The connection string should not need /<database_name> to work.",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "Hi @Massimiliano_Marcon, here is the Error:\n\nmongodbError924×243 8.91 KB\n",
"username": "Micheal_N_A"
},
{
"code": "",
"text": "Ah I see. That is an error thrown by Prisma. From the title and the screenshot you shared, I thought you were referring to a connection problem happening in our MongoDB for VS Code extension.I don’t believe that error is specific to VS Code. If you run the same npm script from a terminal outside of VS Code you’ll likely get the same error. And based on the error message, it does indeed seem that Prisma expects the database name to be in the connection string.",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "@Massimiliano_Marcon I apologize for the confusion, and now I see what I was doing wrong, when I did pass in the connection string I was leaving <> around the password and was getting a bad auth error, I got it working with *Prisma.thanks for you help.",
"username": "Micheal_N_A"
},
{
"code": "",
"text": "Hello I am getting the exact same error! I am not getting a database name on the connection string provided from the connection window when trying to connect with vs code. Very strange tho as even when i add my password correctly it still throws me the same P1013 error message.I have tried adding the name of my database to the end of the connection string (/test) but then i get hit with another error saying its missing a certificate.",
"username": "James_Bellion"
}
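For anyone hitting the same P1013 error with Prisma: the string must include a database name in the path and a URL-encoded password without the <> placeholders. A hypothetical example (all names are placeholders):

```sh
# .env — note a password like "p@ssword" must be encoded as "p%40ssword".
DATABASE_URL="mongodb+srv://myUser:p%40ssword@cluster0.example.mongodb.net/myDatabase?retryWrites=true&w=majority"
```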
] |
Cannot connect through vscode, connection string is missing /test
|
2023-07-04T16:36:32.452Z
|
Cannot connect through vscode, connection string is missing /test
| 771 |
|
null |
[
"queries",
"node-js"
] |
[
{
"code": "",
"text": "am receiving below error when i try to save document which is modified through instance method. can anyone please provide solution to this.]await user.save({ validateBeforeSave: false });\nerror:\n“Error: Illegal arguments: undefined, number\\n at _async (C:\\Users\\rajesh\\Downloads\\node-proj\\node_modules\\bcryptjs\\dist\\bcrypt.js:214:46)\\n at C:\\Users\\rajesh\\Downloads\\node-proj\\node_modules\\bcryptjs\\dist\\bcrypt.js:223:17\\n at new Promise ()\\n at Object.bcrypt.hash (C:\\Users\\rajesh\\Downloads\\node-proj\\node_modules\\bcryptjs\\dist\\bcrypt.js:222:20)\\n at model. (C:\\Users\\rajesh\\Downloads\\node-proj\\modal\\userModal.js:64:32)\\n at callMiddlewareFunction (C:\\Users\\rajesh\\Downloads\\node-proj\\node_modules\\kareem\\index.js:530:27)\\n at model.next (C:\\Users\\rajesh\\Downloads\\node-proj\\node_modules\\kareem\\index.js:79:7)\\n at _next (C:\\Users\\rajesh\\Downloads\\node-proj\\node_modules\\kareem\\index.js:132:10)\\n at C:\\Users\\rajesh\\Downloads\\node-proj\\node_modules\\kareem\\index.js:555:30\\n at processTicksAndRejections (internal/process/task_queues.js:75:11)”\n}",
"username": "Rajeshkumar_G"
},
{
"code": "validateBeforeSave",
"text": "Hi @Rajeshkumar_G welcome to the community!I don’t think we have enough information to determine what’s going on here. The error message is helpful, but the code & circumstances that generates this error must be known as well.Could you post:Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "I am also facing this error . Have you got the solution?",
"username": "KISHAR_N_A"
}
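Since this keeps coming up: the error usually means bcrypt.hash() was called with undefined as its first argument, typically from a pre-save hook that hashes the password even when save() runs after modifying some other field. A hedged Mongoose sketch of the usual guard (schema and field names are assumptions, not from the original post):

```javascript
const bcrypt = require('bcryptjs');

userSchema.pre('save', async function (next) {
  // Skip hashing when the password wasn't touched (e.g. save() after
  // updating a reset token); otherwise this.password may be undefined
  // and bcryptjs throws "Illegal arguments: undefined, number".
  if (!this.isModified('password')) return next();
  this.password = await bcrypt.hash(this.password, 12);
  next();
});
```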
] |
"stack": "Error: Illegal arguments: undefined, number\n at _asyn
|
2022-09-13T08:40:20.047Z
|
“stack”: “Error: Illegal arguments: undefined, number\n at _asyn
| 4,845 |
[
"queries",
"compass"
] |
[
{
"code": "{ \n createdAt: { \n $gte:new Date(\"2023-06-01\"),\n $lte:new Date(\"2023-07-01\")\n },\n user:ObjectId('61f8a56588ee0e53887a59de'),\n status: true\n }\n",
"text": "I have a query like thisIn the MongoDB Compass tool, I am not able to see queries at once, is there any easy to look query properly?\n\nimage1581×82 13.6 KB\n",
"username": "Anshul_Negi"
},
{
"code": "Ctrl+Shift+B",
"text": "Ctrl+Shift+B should reformat your query and display it on multiple lines.",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "Thanks,\nWill it help if an icon to format the query is added to MongoDB Compass?",
"username": "Anshul_Negi"
},
{
"code": "",
"text": "Somewhat like this as in aggregation we have, so that we can edit query also\n",
"username": "Anshul_Negi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
MongoDb Compass Query Input
|
2023-08-18T08:03:26.065Z
|
MongoDb Compass Query Input
| 554 |
|
[
"python"
] |
[
{
"code": "",
"text": "Hey Team,\nI was taking practicing the exam and I found this question bit trick. I noticed option D has syntax error at the end (attached screenshot) while also had same options for B & C ( I cannot see any difference, excuse if I am missing something) so select one of them. The result summary caught me by surprise.Can you please clarify if I am missing something and if not could please fix it so that it helps next folks who may take this free practice questionnaire. \nimage1144×621 47 KB\n",
"username": "Prnam"
},
{
"code": "",
"text": "I cannot see any differenceThere is no difference between the 2 documents.But when you try to insert the document with _id:5 from C, _id:5 already exists in the collection because it was added with B.",
"username": "steevej"
},
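To make the sequencing point concrete, here is roughly what happens in mongosh when both options run (the collection and field names are assumed from the screenshot):

```javascript
db.fruit.insertOne({ _id: 5, name: "Orange" })  // option B: succeeds
db.fruit.insertOne({ _id: 5, name: "Orange" })  // option C: now fails with
// E11000 duplicate key error ... index: _id_ dup key: { _id: 5 }
```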
{
"code": "",
"text": "But when you try to insert the document with _id:5 from C, _id:5 already exists in the collection because it was added with B.Okay, I misunderstood the format of question, I assumed that both are same so if I select anyone of them I will right but from your response it appears that this question or any questions during exam I must assume these options are executed in sequence and if executed in sequence which one succeeds! BTW, there was weird behaviour too while selecting any one of them, the other was automatically selected. Did not capture GIF as I was in the middle of practicing. Probably some glitch. ",
"username": "Prnam"
},
{
"code": "",
"text": "I agree with Steeve. But still, it is tricky because the last option has a small error, since there is a quotation mark missing, i.e., ‘Orange’ instead of 'Orange .",
"username": "Alvaro_R"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Issue related to practice question
|
2023-07-25T13:51:40.872Z
|
Issue related to practice question
| 699 |
|
[
"compass"
] |
[
{
"code": "unknown option \"0\" (while validating preferences from: Command Line)\"0\"positionalArguments: \"[0]\" must be a string (while validating preferences from: Command line)",
"text": "Why is the GUI window of MongoDB Compass not showing on my macbook screen?Hi. I have been able to use MongoDB very well on my macbook until now. Ever since let MongoDB Compass update itself, it will not bring up the GUI. I get a weird repeated error that says unknown option \"0\" (while validating preferences from: Command Line) with this repeated with the \"0\" replaces with “1”, “2”, “3”, “9”, “p”, “s”, and “n” with a last error of positionalArguments: \"[0]\" must be a string (while validating preferences from: Command line)\nScreen Shot 2023-08-14 at 2.29.38 PM834×720 80.7 KB\nI would very much like to continue using Mongodb Compass in my work, but even after uninstalling and reinstalling Mongodb Compass, the same errors pop up again and no window opens with the Mongodb Compass GUI.Please if there is any idea of how to get it working again, I would appreciate it, thank you.",
"username": "Estela_Schaeffer"
},
{
"code": "",
"text": "Hi @Estela_Schaeffer,Can you try the new version of compass 1.39.2 : https://www.mongodb.com/try/download/compassIf the error re-appears, do you mind DM-ing me a quick video with steps on how to reproduce the error?Regards,\nJason",
"username": "Jason_Tran"
}
] |
Mongodb Compass GUI failing
|
2023-08-14T21:51:02.134Z
|
Mongodb Compass GUI failing
| 520 |
|
[] |
[
{
"code": "",
"text": "Hello All,I am having a beast of a timne installing MongoDB on my Kali machine. I am trying to do so for a Udemy class, and the instructor has sent us here – Install MongoDB Community Edition on Debian — MongoDB ManualAll is going well up to this step –\n\nMongoDB 22Aug5833×317 32.1 KB\nI asked aeround on a group in Linkedin, and someone told me these two simple steps, also with no dice –sudo systemctl daemon-reloadThen try rerunning sudo systemctl start mongodSo anyway, I’m just having a blast dealing with this. If anyone can help a total stranger on the internet out, I would consider it a personal favor.Many thanks,Steve",
"username": "Stephen_Malbasa"
},
{
"code": "",
"text": "I had to add these screenshots as followups here – more error messages and followup –\n",
"username": "Stephen_Malbasa"
},
{
"code": "",
"text": "And one more –\n\nudemy 22 Aug 7937×865 107 KB\n",
"username": "Stephen_Malbasa"
}
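The screenshots cut off the useful part. When systemctl start mongod fails, these are the standard places to look (and note that Kali is not among the officially supported Debian releases, so dependency mismatches are a common cause here):

```sh
sudo systemctl status mongod                 # exit code and short failure reason
sudo journalctl -u mongod --no-pager -n 50   # recent service log lines
sudo tail -n 50 /var/log/mongodb/mongod.log  # mongod's own log, if it got that far
```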
] |
Big issue installing MongoDB on Kali
|
2023-08-23T01:14:43.897Z
|
Big issue installing MongoDB on Kali
| 248 |
|
[
"installation"
] |
[
{
"code": "",
"text": "I’m a student trying to run mongodb on my Mac High Sierra 10.13.6. My mongodb version is v4.4.21. When I try to run mongo/mongod I get the following errors for each.When I try to run mongo this is the error I get:Book-Pro:~ aerodynamictapeworm$mongo\nMongoDB shell version V4.4.21\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName\n=mongodb\nError: couldn’t connect to server 127.0.0.1:27017,\nconnection attempt failed: Socketexception: error connecting to 127.0.0.1:27017 :: caused by :: connection refused:\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed exiting with codeWhen I try to run mongod, I get a very long list of processes, as if it’s starting up, but then I get the following message, and it shuts the program down again (snippet):\nIMG_2327882×99 88.9 KB\nHere is the configuration:systemLog:\ndestination: file path: /us/local/var/log/mongodb/mongo. log logAppend: true\nstorage:\ndbPath: /usr/local/var/mongodb\nnet:\nbindIp: 127.0.0.1, ::1\nIPv6: true",
"username": "Rachel_M"
},
{
"code": "mongodmongodmongod/data/dbdbPathmongod",
"text": "Data directory /data/db not found.The error above is most likely what is causing the mongod instance to terminate upon starting.What is the actual / full mongod command you’re using to start up the mongod instance? From the error, it looks like there is no /data/db directory found for the data directory. I understand you’ve specified the configuration file contents which contains a different dbPath value so I assume you’ve started the mongod instance without the configuration file - To learn more on using the configuration file please see the following Configuration File Options documentation.Regards,\nJason",
"username": "Jason_Tran"
},
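In practice that means either starting mongod with the config file that sets the dbPath, or creating the default directory the bare command expects. The Homebrew-style config path below is an assumption based on the paths shown in the question:

```sh
# Option 1: use the config file that defines dbPath
mongod --config /usr/local/etc/mongod.conf

# Option 2: create the default /data/db directory instead
sudo mkdir -p /data/db
sudo chown $(whoami) /data/db
mongod
```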
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Running Errors for mongo/d on Mac High Sierra
|
2023-08-22T23:53:24.312Z
|
Running Errors for mongo/d on Mac High Sierra
| 388 |
|
null |
[
"data-modeling"
] |
[
{
"code": "",
"text": "Hello everyone. Need an advise from MongoDB gurus.I have a collection ‘companies’ which is the list of legal entities. Each entity is unique: all of them have their own official legal name, registration number, and other unique identifiers. And yet, some of them can have parent-child relationship between each other from the point of view ownership: one company can be a shareholder in another one and thus considered as parent. Since all the companies are quite unique I don’t think that imbedding child companies as an array of documents into the document of parent company would be a good idea. It’s not kind of “product-review” or “author-publisher” relationship which you can find in a lot of documentations and examples on this topic.So, could you please share the best practices of handling such kind of relationships in MongoDB?Thanks in advance,\nMax",
"username": "Maxim_Mitrokhin"
},
{
"code": "",
"text": "One simple solution would be an array of the child _id’s or registration number ( or any other unique value) instead of the entire document. Then you could do a query on the values in the array to find the child companies. This shouldn’t have an issue of mutable growing arrays since the growth will be small. But if the array will have a lot of child companies this could be an issue.This link from the mongodb data model documentation shows a similar example: https://docs.mongodb.com/manual/tutorial/model-referenced-one-to-many-relationships-between-documents/",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Thanks for your reply @tapiocaPENGUIN . The example that you’ve kindly shared is about referencing between 2(!) different collections (book - publisher). Can I do something similar to it within the same(!) collection? I tried to experiment a little bit with storing parent company’s id in child company’s document, but it didn’t work for me (or I might be doing something wrong)",
"username": "Maxim_Mitrokhin"
},
{
"code": "companies{ \"_id\" : 1, \"name\" : \"comp-1\" }\n{ \"_id\" : 2, \"name\" : \"comp-2\", \"ownerCompanies\" : [ \"comp-1\" ] }\n{ \"_id\" : 3, \"name\" : \"comp-3\", \"ownerCompanies\" : [ \"comp-1\", \"comp-2\" ] }\n{ \"_id\" : 1, \"name\" : \"comp-1\", \"childCompanies\": [ \"comp-2\", \"comp-3\" ] }\n{ \"_id\" : 2, \"name\" : \"comp-2\" }\n{ \"_id\" : 3, \"name\" : \"comp-3\" }\n",
"text": "Hello @Maxim_Mitrokhin, welcome to the MongoDB Community forum.The way to maintain the data in a collection depends upon factors. How do you want to query or access the data? The important queries drive the modeling of the data.Your companies collection data could be like any one of the following or may be different. There may be one or more owner companies, and one or more children companies (or no owner or children at all). For example:Example 1:Example 2:See this document on Model Tree Structures. You might be interested with Model Tree Structures with Parent References and Model Tree Structures with Child References.",
"username": "Prasad_Saya"
},
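If you later need to walk the whole ownership chain (parents of parents, and so on) inside the one collection, $graphLookup handles that. A sketch using the parent-reference shape from Example 1:

```javascript
db.companies.aggregate([
  { $match: { name: "comp-3" } },
  { $graphLookup: {
      from: "companies",              // same collection
      startWith: "$ownerCompanies",
      connectFromField: "ownerCompanies",
      connectToField: "name",
      as: "ownershipChain"            // all direct and indirect owners
  }}
])
```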
{
"code": "",
"text": "Referencing documents within the same collection is generally legal, as long as it adheres to copyright and fair use regulations. Proper citation and limited use for educational or transformative purposes are key considerations to avoid infringement issues.",
"username": "James_Robert"
}
] |
Document referencing within the same collection
|
2020-11-10T18:59:17.856Z
|
Document referencing within the same collection
| 6,951 |
null |
[
"compass"
] |
[
{
"code": "",
"text": "Hi,\nFor some reason, when I click “Settings” in my MongoDB Compass client - it won’t pop up.\nAlso, the command panel isn’t shown.\nI’ve tried to delete, reinstall, removing app cache. Nothing works.Does anyone familiar with this issue ?Specs:\nmacOs Ventura 13.5.\nMongoDb Compass 1.39.1",
"username": "Oh_Av"
},
{
"code": "{\"t\":{\"$date\":\"2023-08-17T15:39:07.943Z\"},\"s\":\"W\",\"c\":\"COMPASS-SETTINGS\",\"id\":1001000145,\"ctx\":\"Settings\",\"msg\":\"Failed to fetch settings\",\"attr\":{\"message\":\"Setting \\\"agreedToLicense\\\" is not part of the preferences model\"}}",
"text": "Hello,\nExact same problem here with Windows 10 Enterprise, and version MongoDb Compass 1.39.1I found this in the log file:{\"t\":{\"$date\":\"2023-08-17T15:39:07.943Z\"},\"s\":\"W\",\"c\":\"COMPASS-SETTINGS\",\"id\":1001000145,\"ctx\":\"Settings\",\"msg\":\"Failed to fetch settings\",\"attr\":{\"message\":\"Setting \\\"agreedToLicense\\\" is not part of the preferences model\"}}",
"username": "Nicolas_CASAUX"
},
{
"code": "",
"text": "Hello @Oh_Av and @Nicolas_CASAUX ,Welcome to The MongoDB Community Forums! This is a known issue, it’s fixed in the latest beta release, Please update your MongoDB Compass to latest version 1.39.2.Regards,\nTarun",
"username": "Tarun_Gaur"
}
] |
MongoDb Compass Settings won't open + No command console is shown
|
2023-08-16T08:28:44.509Z
|
MongoDb Compass Settings won’t open + No command console is shown
| 544 |
null |
[
"aggregation",
"queries",
"golang"
] |
[
{
"code": "collectionNameCOLLECTION_SCHEMA_PATH\n\tpipeLine := []bson.M{\n\t\t{\"$match\": bson.M{\"user\": req.User}},\n\t\t{\"$addFields\": bson.M{\"subSetItems\": bson.M{\"$slice\": []interface{}{\"$items\", req.StartIndex, req.EndIndex}}}},\n\t\t{\"$unwind\": bson.M{\"path\": \"$subSetItems\"}},\n\t\t{\"$lookup\": bson.M{\n\t\t\t\"from\": m.nfts.Name(),\n\t\t\t\"let\": bson.M{\n\t\t\t\t\"address\": \"$subSetItems.collection\",\n\t\t\t\t\"tid\": \"$subSetItems.tid\",\n\t\t\t},\n\t\t\t\"pipeline\": []bson.M{{\"$match\": bson.M{\"$expr\": bson.M{\"$and\": []bson.M{\n\t\t\t\t{\"$eq\": []interface{}{\"$address\", \"$$address\"}},\n\t\t\t\t{\"$eq\": []interface{}{\"$tid\", \"$$tid\"}},\n\t\t\t}}}}},\n\t\t\t\"as\": NFT_SCHEMA,\n\t\t}},\n\t\t{\"$unwind\": bson.M{\n\t\t\t\"path\": NFT_SCHEMA_PATH,\n\t\t\t\"preserveNullAndEmptyArrays\": false,\n\t\t}},\n\t\t{\"$lookup\": bson.M{\n\t\t\t\"from\": m.nftCollectibles.Name(),\n\t\t\t\"localField\": \"address\",\n\t\t\t\"foreignField\": \"subsetOfItems.collection\",\n\t\t\t\"as\": COLLECTION_SCHEMA,\n\t\t}},\n\t\t{\"$unwind\": bson.M{\n\t\t\t\"path\": COLLECTION_SCHEMA_PATH,\n\t\t\t\"preserveNullAndEmptyArrays\": false,\n\t\t}},\n\t\t{\"$group\": bson.M{\n\t\t\t\"_id\": NFT_SCHEMA_PATH + \"._id\",\n\t\t\t\"nftName\": bson.M{\"$first\": NFT_SCHEMA_PATH + \".name\"},\n\t\t\t\"tid\": bson.M{\"$first\": NFT_SCHEMA_PATH + \".tid\"},\n\t\t\t\"nftImage\": bson.M{\"$first\": NFT_SCHEMA_PATH + \".image\"},\n\t\t\t\"owner\": bson.M{\"$first\": NFT_SCHEMA_PATH + \".owner\"},\n\t\t\t\"currentPrice\": bson.M{\"$first\": NFT_SCHEMA_PATH + \".lastPrice\"},\n\t\t\t\"onSale\": bson.M{\"$first\": NFT_SCHEMA_PATH + \".onSale\"},\n\t\t\t\"mintDate\": bson.M{\"$first\": NFT_SCHEMA_PATH + \".mintDate\"},\n\t\t\t\"collectionAddress\": bson.M{\"$first\": COLLECTION_SCHEMA_PATH + \".address\"},\n\t\t\t\"chain\": bson.M{\"$first\": COLLECTION_SCHEMA_PATH + \".relatedChain\"},\n\t\t\t\"collectionName\": bson.M{\"$first\": COLLECTION_SCHEMA_PATH + \".collectionName\"},\n\t\t\t\"contractName\": bson.M{\"$first\": COLLECTION_SCHEMA_PATH + \".contractName\"},\n\t\t\t\"collectionIcon\": bson.M{\"$first\": COLLECTION_SCHEMA_PATH + \".icon\"},\n\t\t}},\n\t\t{\"$sort\": bson.M{\"collectionAddress\": -1, \"tid\": 1}},\n\t}\n\n\tif len(req.Search) != 0 {\n\t\tpipeLine = append(pipeLine, bson.M{\"$match\": bson.M{\"$or\": []bson.M{\n\t\t\t{\"collectionAddress\": bson.M{\"$regex\": req.Search}},\n\t\t\t{\"collectionName\": bson.M{\"$regex\": req.Search}},\n\t\t\t{\"nftName\": bson.M{\"$regex\": req.Search}},\n\t\t}}})\n\t}\n\n\topt := options.Aggregate().SetAllowDiskUse(true).SetCollation(&options.Collation{\n\t\tLocale: \"en_US\",\n\t\tNumericOrdering: true,\n\t})\n\n\tctx := context.TODO()\n\tvar result []*types.UserPageNft\n\n\tif cursor, err := m.userFavorite.Aggregate(ctx, pipeLine, opt); err != nil {\n\t\treturn nil, err\n\t} else {\n\t\tdefer cursor.Close(ctx)\n\t\tif err = cursor.All(ctx, &result); err != nil {\n\t\t\treturn nil, err\n\t\t} else if len(result) == 0 {\n\t\t\treturn nil, mongo.ErrNoDocuments\n\t\t} else {\n\t\t\treturn result, nil\n\t\t}\n\t}\n",
"text": "Good day. I’m working on backend development with Golang and often write Aggregate code for MongoDB. I’m facing an issue where the collectionName field isn’t being parsed correctly. This field is in an object format, and while it parses well in other queries, it’s not working as expected in this particular query.All the other values related to COLLECTION_SCHEMA_PATH are either in string or array format, and they are being retrieved correctly. It’s puzzling why only the data in the object isn’t coming through properly in this query.If there’s anyone who can provide assistance, I would greatly appreciate it. Thank you.this is My Code",
"username": "_WM2"
},
{
"code": "$regexbson.M$sortbson.M{\"$sort\": bson.M{\"collectionAddress\": -1, \"tid\": 1}},\n{\"$sort\": bson.D{{\"collectionAddress\", -1}, {\"tid\", 1}}},\n",
"text": "Hey @_WM2 thanks for the question!You mentioned that “collectionName” is an object. However, $regex is documented to work only for strings. Do you match the “collectionName” object using $regex in other queries, or is it a field in the “collectionName” object?Something else I noticed is that your sort may not always work as you expect. The underlying type for a bson.M is a Go map, which has explicitly random key order. The $sort operator is order-sensitive, so using a bson.M may result in undefined sort order depending on what order the map keys are marshaled when sending the command to the database.I recommend changingto",
"username": "Matt_Dale"
}
] |
Parsing MongoDB Collection Name from Object Field
|
2023-08-11T02:32:51.462Z
|
Parsing MongoDB Collection Name from Object Field
| 787 |
null |
[] |
[
{
"code": "",
"text": "Hello,\nI am been trying to connect with MONGODB-AWS authMechanism which uses $external as virtual database for authSource and I keep getting this error: “Invalid character $ in database name: $external”Could someone please help?",
"username": "Maansi_Chandira"
},
{
"code": "$%24",
"text": "Hi @Maansi_ChandiraA connection string will have to url encode many special characters. Replacing $ with %24 should be enough to get you connected.If you have mongodb compass you cana use the advanced options to set many parameters and the copy the resulting connection string to where you need it.",
"username": "chris"
},
{
"code": "",
"text": "If you have mongodb compass you cana use the advanced options to set many parameters and the copy the resulting connection string to where you need it.Hello @chris,Error is the same. Invalid character even with %24 encoding.",
"username": "Maansi_Chandira"
},
{
"code": "Database error (MongoSecurityException): Exception authenticating MongoCredential{mechanism=MONGODB-AWS, userName='<accessKeyId>', source='$external', password=<hidden>, mechanismProperties=<hidden>}\n\nStacktrace:\n|_/ Database error (MongoSecurityException): Exception authenticating MongoCredential{mechanism=MONGODB-AWS, userName='<accessKeyId>', source='$external', password=<hidden>, mechanismProperties=<hidden>}\n|____/ Mongo Server error (MongoCommandException): Command failed with error 73: 'Invalid character $ in database name: $external' on server <docdb-cluster-endpoint>:27017. \n|____... \n|____... The full response is:\n|____... {\n|____... \"ok\" : 0.0,\n|____... \"code\" : 73.0,\n|____... \"errmsg\" : \"Invalid character $ in database name: $external\",\n|____... \"operationTime\" : Timestamp(1692643307, 1)\n|____... }\n",
"text": "Error stack trace:",
"username": "Maansi_Chandira"
},
{
"code": "",
"text": "What driver are you using?Can you paste the redacted connection string your are using?",
"username": "chris"
},
{
"code": "client = pymongo.MongoClient('mongodb://%s:%[email protected]/?authMechanism=MONGODB-AWS&authMechanismProperties=AWS_SESSION_TOKEN:%s&tls=true&tlsCAFile=./src/global-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false' % (accesskeyid,secretkey,sessiontoken))\npymongo.errors.OperationFailure: Invalid character $ in database name: $external, full error: {'ok': 0.0, 'code': 73, 'errmsg': 'Invalid character $ in database name: $external', 'operationTime': Timestamp(1692723661, 1)}\n",
"text": "Hello @chris,I tried connecting via Compass, Studio3T and even PyMongo. Error is consistent.\nFYI - It’s a DocumentDB Cluster Endpoint and I am trying to authenticate with “MONGODB-AWS” mechanism of assuming role with STS credentials.Maybe “MONGODB-AWS” mechanism is only configured and supported for Mongodb Atlas clusters?Connection String:I get the following error upon connecting to the client or finding documents in a collection.Error:",
"username": "Maansi_Chandira"
},
{
"code": "",
"text": "It’s a DocumentDB Cluster EndpointIts not MongoDB, it is not MongoDB Atlas, so that is certainly the issue.",
"username": "chris"
},
{
"code": "",
"text": "Thank you @chris for confirming.",
"username": "Maansi_Chandira"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Invalid character $ in database name: $external
|
2023-08-18T20:52:31.972Z
|
Invalid character $ in database name: $external
| 661 |
[
"node-js",
"flutter"
] |
[
{
"code": "",
"text": "We developed mobile application on flutter platform and integration services using node.js, and both were using realm sync (device sync) So, the problem is when we created or inserted records from our client-side, all of them were synced to realm database but some of them didn’t create or insert into mongoDB. here are 2 pictures get more detail\nmain & realm database1920×2228 271 KB\nPicture 1 shows the document was created on realm sync database that created at 8/16/2023 4:28:45 PMPicture 2 shows the result of query by “_id” from picture 1 in main database didn’t appear for about 2 hours passed from created time.Could you please provide the solution and how can we do next ?\nHow cloud we do some kind of refresh sync data from realm to mongoDB? and how ?Thank you,\[email protected]",
"username": "Chawapol_Rojanarawewong"
},
{
"code": "",
"text": "Hi, do you have a link to your application in the Atlas / Realm UI? I can take a look at the logs.",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "here is the link : App Servicesplease, let me know if you want more information.",
"username": "Chawapol_Rojanarawewong"
}
] |
Data couldn't transfer from Realm database to main database in MongoDB
|
2023-08-22T05:00:13.270Z
|
Data couldn’t transfer from Realm database to main database in MongoDB
| 424 |
|
null |
[
"transactions"
] |
[
{
"code": "",
"text": "Hi,I have a requirement where i need to delete documents which are of older version once a new newer documents are inserted for a given date. I have two ways to do this, one if soft delete where i set some flag like isDeleted to true and delete documents through a cleaning job. Other way is to hard delete the documents of older versions.My only requirement is execution time as i have other operations to do along with the delete inside a transaction. I want to know whether updating a flag to delete is faster or deleting the records happens fastly. I also have indexes on the records, so want to know the performance impact on both operations considering the index updates required after both operation.The records that are available to deletion will be around 10k and can grow to max 50k. Can someone please help me in understanding the performance impact of both operations ? please let me know if any additional details needs to be mentioned.",
"username": "Satyaaditya_Baratam"
},
{
"code": "",
"text": "There is no model that fits all.It all depends of your data size, traffic and usage. Only you can determine the best model for your use-cases by testing both solution using your expected data size, traffic and usage.There is too many variables involved. Any solution is a trade off. You, as a software writer, need to determine which set of trade off is best suited for your application.Few things to consider:Happy testing. It would be nice if you later share any insight you discover.",
"username": "steevej"
}
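To make the two options concrete, here is a minimal mongosh sketch of each approach; the collection name, fields, and the `targetDate`/`newVersion` variables are assumptions taken from the question, not from a real schema:

```javascript
// Soft delete: a single indexed flag update inside the transaction;
// the physical removal is deferred to an off-peak cleanup job.
db.records.updateMany(
  { date: targetDate, version: { $lt: newVersion } },
  { $set: { isDeleted: true } }
);
// ...later, outside the hot path:
db.records.deleteMany({ isDeleted: true });

// Hard delete: pays the document removal and index maintenance cost immediately.
db.records.deleteMany({ date: targetDate, version: { $lt: newVersion } });
```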
] |
MongoDB: Hard deletes vs Soft deletes, which is faster to execute?
|
2023-08-17T06:30:24.135Z
|
MongoDB: Hard deletes vs Soft deletes, which is faster to execute?
| 666 |
null |
[
"golang",
"field-encryption"
] |
[
{
"code": "/usr/local/go/pkg/tool/linux_amd64/link: running gcc failed: exit status 1\n/usr/bin/ld: cannot find /usr/lib/x86_64-linux-gnu/librt.so: No such file or directory\ncollect2: error: ld returned 1 exit status\n",
"text": "go build -tags cse main.go\nresults in this errorI have installed libmongocrypt from source.",
"username": "Rahul_Kumar17"
},
{
"code": "",
"text": "Hi Rahul and welcome!Here is a link to the page that contains environment specific directions on how to install libmongocrypt. One of these should be used instead of installing from source.I hope that helps,Cynthia",
"username": "Cynthia_Braund"
}
] |
Issue building app with go build -tags cse main.go
|
2023-08-22T10:57:19.985Z
|
Issue building app with go build -tags cse main.go
| 428 |
null |
[
"replication",
"upgrading"
] |
[
{
"code": "",
"text": "Hi everyone.Am try to upgrade the replica (PSA) exiting 4.4 version to 5.0, am replace the 4.4 binary to 5.0 binary but below error showming please help me out…\nrepl1:PRIMARY> db.adminCommand({ setFeatureCompatibilityVersion: “5.0” })\n{\n“operationTime” : Timestamp(1692640723, 1),\n“ok” : 0,\n“errmsg” : “Invalid command argument. Expected ‘4.4’ or ‘4.2’, found 5.0 in: { setFeatureCompatibilityVersion: \"5.0\", lsid: { id: UUID(\"b7fb6c71-7ce9-4fcc-9183-7349e88833ba\") }, $clusterTime: { clusterTime: Timestamp(1692640703, 2), signature: { hash: BinData(0, 1CEC2C121A7A40F17EFF499048FB0A6E9B6C6B2A), keyId: 7269796627441778693 } }, $db: \"admin\" }. See https://docs.mongodb.com/master/release-notes/4.4-compatibility/#feature-compatibility.”,\n“code” : 2,\n“codeName” : “BadValue”,\n“$clusterTime” : {\n“clusterTime” : Timestamp(1692640723, 1),\n“signature” : {\n“hash” : BinData(0,“ipNigBe/XWDzYCzzE3irZ9p/ifU=”),\n“keyId” : NumberLong(“7269796627441778693”)\n}\n}\n}",
"username": "sindhu_K"
},
{
"code": "",
"text": "Hi @sindhu_K,\nDid you make two upgrades or did you start directly from 4.4?\nBecause it seems as if you did two upgrades so from 4.2 to 4.4 without doing the setFeatureCompatibility.\nIf you use the get, it will give you info on which one you stayed with.Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Have you started the MongoDB process with version 5.0? You can verify with \"db.version() and see if it’s running as v5.0 or 4.4.The FCV doesn’t upgrade MongoDB, it just enables features for a certain release and helps with backwards compatibility. You don’t want to change the FCV until all the nodes are upgraded to 5.0.Follow the steps here: https://www.mongodb.com/docs/v5.3/release-notes/5.0-upgrade-replica-set/#upgrade-process",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "am upgrade 4.4 to 5.0 and mongo run at 4.4 version.",
"username": "sindhu_K"
},
{
"code": "",
"text": "am upgrade 4.4 to 5.0 and mongo run at 4.4 version.Then you will want to follow this link on the upgrade process. As mentioned you will need to completely upgrade the MongoDB servers first then change the FCV value on the cluster.Follow the steps here: https://www.mongodb.com/docs/v5.3/release-notes/5.0-upgrade-replica-set/#upgrade-process",
"username": "tapiocaPENGUIN"
}
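For clarity, the order of operations in mongosh looks roughly like this (a generic sketch; nothing here is specific to this cluster):

```javascript
// 1. Verify every member is actually running the 5.0 binary first:
db.version(); // should print 5.0.x on each node

// 2. Check which featureCompatibilityVersion the replica set is still on:
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 });

// 3. Only after ALL members run the 5.0 binary, raise the FCV on the primary:
db.adminCommand({ setFeatureCompatibilityVersion: "5.0" });
```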
] |
MongoDB replica set upgrade (PSA) version 4.4 to 5.0 community edition
|
2023-08-21T18:22:09.193Z
|
MongoDB replica set upgrade (PSA) version 4.4 to 5.0 community edition
| 494 |
[
"node-js",
"typescript"
] |
[
{
"code": "",
"text": "I am constantly getting the below error, can someone help me solve for it.\nimage1920×1202 150 KB\n",
"username": "T_Nitin"
},
{
"code": "mpromise",
"text": "Hey @T_Nitin, it looks like you’re using a very old version of mongoose. mpromise has not been used in the product for a long time, and the Invalid mongodb uri error implies the Node.js driver being used by mongoose is older than 3.0 (when SRV support was added).You’re likely using mongoose 4.13.21 (or older) which has long been considered EOL.Try first addressing the deprecation warnings (the errors you provided give you links to the mongoose docs), then upgrade mongoose to at least v5.",
"username": "alexbevi"
},
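For anyone else hitting this, a minimal sketch of an SRV connection with a modern mongoose (v6/v7 style); the URI, credentials and database name below are placeholders:

```javascript
const mongoose = require('mongoose');

// mongodb+srv URIs require a driver new enough to resolve SRV records,
// which the driver bundled with mongoose 4.x is not.
mongoose
  .connect('mongodb+srv://user:[email protected]/mydb')
  .then(() => console.log('connected'))
  .catch(err => console.error('connection failed', err));
```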
{
"code": "",
"text": "@alexbevi Thank you so much for the help. Yes I was trying various methods to resolve this issue but couldn’t figure out the correct error. I’ve tried re-installing mongoose, it is working now.",
"username": "T_Nitin"
}
] |
I am constantly getting DeprecationWarning error, need help!
|
2023-08-22T10:57:48.934Z
|
I am constantly getting DeprecationWarning error, need help!
| 458 |
|
null |
[
"atlas-device-sync",
"react-native"
] |
[
{
"code": "{\n \"roles\": [\n {\n \"name\": \"cityAccessRule\",\n \"apply_when\": {},\n \"document_filters\": {\n \"write\": {\n \"city\": {\n \"$in\": \"%%user.custom_data.cities\"\n }\n },\n \"read\": {\n \"city\": {\n \"$in\": \"%%user.custom_data.cities\"\n }\n }\n },\n \"read\": true,\n \"write\": true,\n \"insert\": true,\n \"delete\": true,\n \"search\": true\n }\n ]\n}\n realm.write(() => {\n customUserData.cities = cities;\n });\n await user.refreshCustomData()\n",
"text": "We have a Role for a collection Place that depends on user custom data, so something similar to:We are updating the custom user data collection in a React Native app and calling refresh as doc:As soon as the user custom data is updated, we would expect to sync down the documents in the collection Place that passes successfully the role. But this is not the case, user needs to to kill the app and open again to trigger a new sync and see the expected Place documents.All data seems correct on the server side (Dedicated M10) for the CustomUserData collection and UserSettings in AppSettings has correct configs for userId and Creation Function is working correctly. As well the connection between userId (CustomUserData collection) and the user.Id (User collection from Atlas) seems correct, as it is fetching correctly the expected documents, but only after the user restarts the app.We performed some tests and issue seems to be connected with the user custom data update across the Mongo Atlas environment. Because if we use a static value for the Role comparison the sync is immediate as soon as the condition passes. But if it depends on a value from custom user data, then the sync it’s not immediate.If there is any workaround would be appreciated, such as forcing the sync down on mobile side when we have the data updated.",
"username": "Ampop_Dev"
},
{
"code": "",
"text": "Hello @Ampop_Dev ,Welcome to the MongoDB Community , Thank you for raising your concerns. The behavior you observed in the application is as expected. Changes in custom user data are not picked up until the sync session restarts.You can find out more details in the Realm newsletter shared last month with some community conversations linked to the post.I hope this was helpful. Let me know if I can help you with anything else.Cheers, \nhenna",
"username": "henna.s"
},
{
"code": "",
"text": "@henna.s thank you for your answer and is there a way to restart the sync session without restarting the app?",
"username": "Ampop_Dev"
},
{
"code": "",
"text": "Hi, yes most of the SDK’s have an API to pause and resume the sync session (https://www.mongodb.com/docs/realm/sdk/react-native/sync-data/manage-sync-session/#pause-or-resume-a-sync-session). This should handle the refresh properly.One way we suggest temporarily getting around this restriction is to sync down the custom user data object (many of our customers do this in order to make changes to it). Then you can setup progress listeners on the object to be notified whenever it changes.Best,\nTyler",
"username": "Tyler_Kaye"
},
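A rough sketch of that second suggestion with the React Native SDK; the object name and field come from the code later in this thread, so treat this as an illustration rather than an official pattern:

```javascript
// Watch the user's own CustomUserData document and restart the sync
// session whenever it changes, so the new permissions are re-evaluated.
const myData = realm
  .objects('CustomUserData')
  .filtered('userId == $0', user.id);

myData.addListener((collection, changes) => {
  if (changes.insertions.length > 0 || changes.newModifications.length > 0) {
    realm.syncSession.pause();
    realm.syncSession.resume();
  }
});
```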
{
"code": "await user.refreshCustomData();\nrealm.syncSession.pause();\nrealm.syncSession.resume();\n",
"text": "@Tyler_Kaye thanks yes that works, calling pause then resume forces the sync:But it might be worth to prioritize automatic trigger of a sync if the custom user data changes and is a condition in roles for a collection. But thanks for the workaround, have great day!",
"username": "Ampop_Dev"
},
{
"code": "const credentials = Realm.Credentials.anonymous();\nawait app.logIn(credentials);\nawait app.emailPasswordAuth.registerUser(\n {\n email,\n password,\n },\n);\nawait app.emailPasswordAuth.confirmUser({\n token,\n tokenId,\n});\nconst credentials = Realm.Credentials.emailPassword(\n email,\n password,\n);\nawait app.logIn(credentials);\nundefinedconst customUserQuery = useQuery(CustomUserData);\nconst user = useUser();\n\nuseEffect(() => {\n realm.subscriptions.update(mutableSubs => {\n mutableSubs.add(realm.objects(CustomUserData));\n });\n}, [realm]);\n\nconst currentCustomUserData: CustomUserData = useMemo(() => {\n if (user?.id) {\n const currentUser = customUserQuery.filtered(`userId == '${user.id}'`);\n return currentUser?.[0];\n }\n}, [customUserQuery, user.id])\n\nconsole.log('currentCustomUserData', currentCustomUserData?.cities) // Is undefined, restarting app necessary\nrealm.subscriptions.update(mutableSubs => {\n mutableSubs.remove(realm.objects(CustomUserData));\n mutableSubs.add(realm.objects(CustomUserData));\n});\n{\n \"errorCode\": \"OperationAborted\",\n \"message\": \"Sync session became inactive\",\n}\n",
"text": "@Tyler_Kaye There is another scenario in which the pause + start does not help, only killing and restarting app that we noticed now. I’ll try to provide replication steps:We noticed that user.id is correctly updated with new user id, but the customUserQuery does not return the document. We confirmed on backend side and the value this CustomUserData document for this new userId is on server, so issue is only with sync.Do you have any workaround to restart the collection subscription? We tried remove + add:But we get unhandled promise rejection:",
"username": "Ampop_Dev"
},
{
"code": "",
"text": "@Tyler_Kaye is there any workaround for this? Pause + Start does not help, only killing and restarting app. It seems that useRealm hook are not being updated when we have new user is being logged in there is a set of rules using custom user data. Killing and restarting works correctly, so this is an issue only with the sync session getting lost when the user changes.",
"username": "Ampop_Dev"
},
{
"code": "{\n \"roles\": [\n {\n \"name\": \"CustomUserDataRule\",\n \"apply_when\": {},\n \"document_filters\": {\n \"write\": {\n \"foo\": \"%%user.custom_data.foo\"\n },\n \"read\": {\n \"foo\": \"%%user.custom_data.foo\"\n },\n },\n \"read\": true,\n \"write\": true,\n \"insert\": true,\n \"delete\": true,\n \"search\": true\n }\n ]\n}\n",
"text": "@Tyler_Kaye we are still blocked and new version 12.0.0 of realm JS does not fix this. The reproduction steps are simple, please let us know how to proceed:Example of Rule:The sync session is not updated with the new CustomUserData from the new logged in user. If we restart the app, everything works correctly and the new user can update the CustomUserData and the sync session is correctly refreshed.There is a serious limitation on the sync session if you log in anonymously then with another user without restarting app. What is the recommended way of refreshing the sync session when a new user logs in? We tried methods from doc (pause, start) but it does not help: https://www.mongodb.com/docs/realm/sdk/react-native/sync-data/manage-sync-session/",
"username": "Ampop_Dev"
}
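Since the thread ends unresolved here, one workaround worth noting; this is an assumption on my part rather than a confirmed fix from the Realm team: close the realm that was opened under the anonymous user and open a fresh one for the newly authenticated user, so a brand-new sync session is established under the new user's permissions:

```javascript
// Hypothetical sketch for realm-js flexible sync; `app`, `email` and
// `password` are assumed to come from the application's own state.
const newUser = await app.logIn(Realm.Credentials.emailPassword(email, password));

realm.close(); // drop the session that was created for the anonymous user

const newRealm = await Realm.open({
  schema: [CustomUserData],
  sync: { user: newUser, flexible: true },
});
```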
] |
Device Sync with roles that depend on user custom data is not triggered immediately
|
2023-07-21T09:43:19.946Z
|
Device Sync with roles that depend on user custom data is not triggered immediately
| 1,005 |
[
"compass"
] |
[
{
"code": "",
"text": "I have local antivirus called smadav in my local, and looks like smadav blocked something in my latest update Mongo Compass so I disable the antivirus and reinstall my mongo Compass and restart my pc, but\nafter restart I cannot open Compass Setting, when I refer to the log file I found something error in the settings part that looks like this\n\nimage1858×919 46.4 KB\n\nplease help or any suggestions coz I need this Compass Setting to enable my MongoShell and I can continue test my aggegate query directly from there",
"username": "Guntur_Setiawan"
},
{
"code": "",
"text": "Hey @Guntur_Setiawan, this is a known issue, it’s already fixed in the latest beta release and we are working on releasing a GA release of Compass this week to address it. You can track release progress with this JIRA ticket",
"username": "Sergey_Petushkov"
}
] |
Cannot Open Mongo Compass Settings
|
2023-08-16T02:48:51.323Z
|
Cannot Open Mongo Compass Settings
| 608 |
|
null |
[
"data-modeling"
] |
[
{
"code": "",
"text": "I am working in a web app that will be used to survey around 2 million students.\nThe survey will ask different range questions (A range from 1 to 5 on how much the student agrees with the statement) and will also gather data on categories like gender, age, school, state, city, time to finish, etc. Also the survey will be repeated periodically to see the changes over time.The web app has to have a viewer where data can be visualized with different charts but also filtered by categories to see differences in ages, schools, etc.\nI think it would be wise to fetch a random sample of the data because of the scale of the project. But all outliers must be fetched because the point of the app is to find struggling students.How would you organize the data? I just cant figure it out.",
"username": "ROI_Addicts_N_A"
},
{
"code": "",
"text": "Hi\nNice idea to work on\nFinding the perfect schema that can scale well, perform well, and manages handily requires hit-and-trial methods which come with experience.\nAt the basic level, we can break this problem likeHope it helps",
"username": "Anshul_Negi"
}
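To ground the advice above, here is one plausible shape for a response document plus a sampling query in mongosh; every name here is an illustrative assumption, not something prescribed in the thread:

```javascript
// One document per student per survey run; demographics are embedded
// so charts can filter by gender, age, school, state, city, etc.
db.responses.insertOne({
  surveyId: "wellbeing-2023-q3",
  studentId: "s-102938",
  demographics: { gender: "F", age: 14, school: "Central High", state: "CA", city: "Fresno" },
  answers: [{ questionId: "q1", value: 4 }, { questionId: "q2", value: 1 }],
  timeToFinishSec: 512,
  submittedAt: new Date()
});

// Charts can read a random sample for scale...
db.responses.aggregate([
  { $match: { surveyId: "wellbeing-2023-q3" } },
  { $sample: { size: 1000 } }
]);

// ...while outliers (e.g. the lowest agreement scores) are fetched exhaustively.
db.responses.find({ surveyId: "wellbeing-2023-q3", "answers.value": 1 });
```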
] |
How do I model survey app data
|
2023-08-14T02:50:02.213Z
|
How do I model survey app data
| 484 |
null |
[
"compass",
"server"
] |
[
{
"code": "mongodb-community error 3584 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plistmongo.log mongo.log.2022-10-24T09-42-06 mongo.log.2022-10-24T09-48-09 mongo.log.2022-10-24T10-01-22\nmongo.log.2022-10-24T09-40-55 mongo.log.2022-10-24T09-47-32 mongo.log.2022-10-24T09-49-12 mongo.log.2022-10-24T10-02-49\nmongo.log.2022-10-24T09-41-59 mongo.log.2022-10-24T09-47-44 mongo.log.2022-10-24T10-01-12 mongod.conf\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.563+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.565+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.566+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.566+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.567+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.567+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.567+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.567+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":25973,\"port\":27017,\"dbPath\":\"/usr/local/var/mongodb\",\"architecture\":\"64-bit\",\"host\":\"AndreaAir.local\"}}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.567+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23352, \"ctx\":\"initandlisten\",\"msg\":\"Unable to resolve sysctl {sysctlName} (number) \",\"attr\":{\"sysctlName\":\"hw.cpufrequency\"}}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.567+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23351, \"ctx\":\"initandlisten\",\"msg\":\"{sysctlName} unavailable\",\"attr\":{\"sysctlName\":\"machdep.cpu.features\"}}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.568+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.2\",\"gitVersion\":\"94fb7dfc8b974f1f5343e7ea394d0d9deedba50e\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"aarch64\",\"target_arch\":\"aarch64\"}}}}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.568+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS 
X\",\"version\":\"21.6.0\"}}}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.568+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"processManagement\":{\"fork\":true},\"storage\":{\"dbPath\":\"/usr/local/var/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"path\":\"/usr/local/var/log/mongodb/mongo.log\"}}}}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.568+02:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Permission denied\"}}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.568+02:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":1125}}\n{\"t\":{\"$date\":\"2022-10-24T12:02:49.568+02:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\nprocessManagement:\n fork: true\nnet:\n bindIp: localhost\n port: 27017\nstorage:\n dbPath: /var/lib/mongo\nsystemLog:\n destination: file\n path: \"/var/log/mongodb/mongod.log\"\n logAppend: true\nstorage:\n journal:\n enabled: true\n",
"text": "Hello everrybody, I’m new here.\nI’m trying to learn how to use MongoDB with Mongo Community / Compass, but even if I followed the installation process, I have / had a constant error:mongodb-community error 3584 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plistI checked my /usr/local/var/log/mongodb/ folder for the log file, and I find a lot of different files:Yes, I’ve tried more times to this and everytime this occurs.mongo.logmongo.confCurrently I literally do not understand how this works, since even uninstalling everything take me to the same error. Also, I’ve tried most of the solutions here, so I do not know what to do.FYI: I’m using a MacBook Air (M1, 2020), Monterey 12.6Thank you in advance for the help!",
"username": "and36100"
},
{
"code": "",
"text": "There are many reasons for this. You will find one that fits your situation by reading\nhttps://www.mongodb.com/community/forums/search?q=Failed%20to%20unlink%20socket%20file",
"username": "steevej"
},
{
"code": "",
"text": "Hi! That was helpful to understand how to move between different cases but still nothing works.\nI cannot understand how to read the log file to check where is the issue (and obv how to solve it)",
"username": "and36100"
},
{
"code": "Failed to unlink socket file\"s\":\"E\"\"s\":\"F\"\"path\":\"/tmp/mongodb-27017.sock\"\"error\":\"Permission denied\"",
"text": "The search string I provided wasFailed to unlink socket filewhich is the error you have. Log entries with\"s\":\"E\"are errors and line with\"s\":\"F\"are fatal.Often the error gives more clue about the issue compared to the following fatals. In your case, the\"path\":\"/tmp/mongodb-27017.sock\"already exists is needed to start mongod. However you do not have the permission to remove it as expressed by\"error\":\"Permission denied\"Here you have a choiceOR at your own risk",
"username": "steevej"
},
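Concretely, the two options look something like this (a sketch; the socket path comes from the log above, and the service name assumes the standard Homebrew install):

```
# Option 1 (safer): let the Homebrew service manage mongod for you
brew services restart mongodb-community

# Option 2 (at your own risk): remove the stale socket file, then start mongod again
sudo rm /tmp/mongodb-27017.sock
```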
{
"code": "mongod{\"t\":{\"$date\":\"2022-12-12T09:03:00.813+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.813+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.818+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.819+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.819+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.819+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.819+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.819+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":20677,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"AndreaAir.LocalDomain\"}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.819+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23352, \"ctx\":\"initandlisten\",\"msg\":\"Unable to resolve sysctl {sysctlName} (number) \",\"attr\":{\"sysctlName\":\"hw.cpufrequency\"}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.819+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23351, \"ctx\":\"initandlisten\",\"msg\":\"{sysctlName} unavailable\",\"attr\":{\"sysctlName\":\"machdep.cpu.features\"}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.819+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"aarch64\",\"target_arch\":\"aarch64\"}}}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.819+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.6.0\"}}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.819+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.820+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast 
open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.820+01:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"NonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\"}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.820+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL 
monitor\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.821+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.819+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23352, \"ctx\":\"initandlisten\",\"msg\":\"Unable to resolve sysctl {sysctlName} (number) \",\"attr\":{\"sysctlName\":\"hw.cpufrequency\"}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.819+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23351, \"ctx\":\"initandlisten\",\"msg\":\"{sysctlName} unavailable\",\"attr\":{\"sysctlName\":\"machdep.cpu.features\"}}\n{\"t\":{\"$date\":\"2022-12-12T09:03:00.820+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n",
"text": "Thank you for the help. I’ve done the unsafe option and I still get the same error.\nmongod looks like this at the moment:What other issue could be?\nI see some things here that are not ok, such as:orThank you again for the availability.",
"username": "and36100"
},
{
"code": "",
"text": "How did you start your mongod?\nYour earlier post shows dbpath is under /var/lib but now it is looking for /data/dbAfter removing TMP file you are suppose to start mongod as service which uses the standard config file",
"username": "Ramachandra_Tummala"
},
{
"code": "\"s\":\"E\"\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Permission denied\"}}\n\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"NonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path ...\"\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"processManagement\":{\"fork\":true},\"storage\":{\"dbPath\":\"/usr/local/var/mongodb\"}, ...\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}",
"text": "It is notthe same errorAs mentioned earlier, error are lines marked with\"s\":\"E\"Your original error wasnow your error is\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"NonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path ...\"You could search the forum for a more detailed explanation but the condition that causes the error is:Data directory /data/db not found.and two solutionsare provided as part of error message.As mentionedYour earlier post shows dbpath is under /var/lib but now it is looking for /data/dband you posted the configuration and you used it when you started mongod as we can see by the informational message (log lines with “s”:“I”) in your first post\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"processManagement\":{\"fork\":true},\"storage\":{\"dbPath\":\"/usr/local/var/mongodb\"}, ...Now in your latest post you have\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}which indicates that you started mongod by simply typing the command mongod (which does not use the configuration file you shared) rather thanstart mongod as service which uses the standard config fileAnd aboutI see some things here that are not okthe messages are marked as \"s\":\"I\" so they are informational and do not stop mongod from starting.",
"username": "steevej"
},
{
"code": "",
"text": "Yep, sorry for that, I share to you different files without realising it.\nI’ll try what you mentioned, and I’ll keep you posted about my issue. Thanks!",
"username": "and36100"
},
{
"code": "",
"text": "start mongod as serviceHi! Since I’m still new and I cannot find anything online, how you do this?",
"username": "and36100"
},
{
"code": "",
"text": "What method you followed to install Mongodb on Macos?\nCheck this link",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I used the homebrew’s method, considering also M1 processor",
"username": "and36100"
},
{
"code": "",
"text": "So did you try brew start,brew list,brew status etc\nYou can start mongod from command line also but give different dbpath,logpath,port to avoid clash with default mongod which comes up on port 27017\nIf you run just mongod without any params it will try to start mongod on port 27017 and default dirpath /data/db\nIn your case it failed because /data/db not existing\nAs suggested by Steve you have to create the missing directory but even that will not work as Macos removed access to root dir\nIf you attempt to create the /data/db dir it will say read only file\nSo best thing is start from brew services\nIf that does not work you can start it from command line\nmongo --port 29000 --dbpath your_homedir --logpath your_homedir/mongod.log --fork\nOnce it is up connect as below\nmongo --port 29000",
"username": "Ramachandra_Tummala"
},
{
"code": "brew services start mongodb/brew/mongodb-community\nbrew services start mongodError: No available formula with the name \"mongod\". Did you mean mono or mongosh?\nzsh: command not found: mongo\n",
"text": "Ok. I started:Then, you said:So best thing is start from brew servicesDoes it mean that the corresponding code is brew services start mongod or what?\nThis command returned:Therefore:mongo --port 29000 --dbpath your_homedir --logpath your_homedir/mongod.log --forkreturned meI tried with and without starting mongodb-cmmunity!",
"username": "and36100"
},
{
"code": "brew services start mongodb/brew/mongodb-community",
"text": "From the documentation provided by Ramachandra_Tummala the following is wrong.brew services start mongodb/brew/mongodb-communityThe correct way is documented:",
"username": "steevej"
},
{
"code": "",
"text": "I missed d in mongod\nmongod is used to start a mongod instance\nmongo/mongosh is used to connect to a mongod instance\nRegarding the mongo not found error you must be having mongosh with latest version of mongodb indtallation.Thats why it says mongo not found\nSo after mongod is up try to connect as\nmongosh --port 29000\nDid you try to start the service with correct command as per doc?",
"username": "Ramachandra_Tummala"
},
{
"code": "brew services list\nmongodb-community error 3584 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\n",
"text": "Ok, now mongosh and mongod in port 29000 started! But still, callinggives meAt least mongosh and mongo compass seem to work, even if I’m still wrapping my head around this mongodb community error",
"username": "and36100"
},
{
"code": "",
"text": "Brew list will show only those which are started by brew startWhat you started is your own mongod from command line which can be checked by ps -ef|grep mongoWhat is the result of brew service start@ver_num as per doc?Please read documentation on how many ways we can start mongod and what is default mongod\nLooks like you are getting confused",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Looks like you are getting confusedYea, sorry for that, as I said before I’m extremely new to this. I’ll read the documentation. Thanks!",
"username": "and36100"
},
{
"code": "mongod --version\ndb version v7.0.0\nBuild Info: {\n \"version\": \"7.0.0\",\n \"gitVersion\": \"37d84072b5c5b9fd723db5fa133fb202ad2317f1\",\n \"modules\": [],\n \"allocator\": \"system\",\n \"environment\": {\n \"distarch\": \"aarch64\",\n \"target_arch\": \"aarch64\"\n }\n}\nbrew services list\nsystemLog:\n destination: file\n path: /opt/homebrew/var/log/mongodb/mongo.log\n logAppend: true\nstorage:\n dbPath: /opt/homebrew/var/mongodb\nnet:\n bindIp: 127.0.0.1, ::1\n ipv6: true\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.561-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.562-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"thread1\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":21},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.575-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.579-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.579-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.579-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.579-04:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":7091600, \"ctx\":\"thread1\",\"msg\":\"Starting TenantMigrationAccessBlockerRegistry\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.579-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":5046,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"Mac-Studio-de-Juancho.local\"}}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.579-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"7.0.0\",\"gitVersion\":\"37d84072b5c5b9fd723db5fa133fb202ad2317f1\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"aarch64\",\"target_arch\":\"aarch64\"}}}}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.579-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"22.6.0\"}}}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.579-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.581-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast 
open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.582-04:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"NonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\"}}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.582-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images 
Remover\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-08-21T20:46:57.583-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\nNonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\nmkdir -p /data/db\nmkdir: /data: Read-only file system\nmongod --config /opt/homebrew/etc/mongod.conf --fork\nabout to fork child process, waiting until server is ready for connections.\nforked process: 5225\nERROR: child process failed, exited with 1\nTo see additional information in this output, start without the \"--fork\" option.\ncat homebrew.mxcl.mongodb-community.plist\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE plist PUBLIC \"-//Apple//DTD PLIST 1.0//EN\" \"http://www.apple.com/DTDs/PropertyList-1.0.dtd\">\n<plist version=\"1.0\">\n<dict>\n <key>Label</key>\n <string>homebrew.mxcl.mongodb-community</string>\n <key>ProgramArguments</key>\n <array>\n <string>/opt/homebrew/opt/mongodb-community/bin/mongod</string>\n <string>--config</string>\n <string>/opt/homebrew/etc/mongod.conf</string>\n </array>\n <key>RunAtLoad</key>\n <true/>\n <key>KeepAlive</key>\n <false/>\n <key>WorkingDirectory</key>\n <string>/opt/homebrew</string>\n <key>StandardErrorPath</key>\n <string>/opt/homebrew/var/log/mongodb/output.log</string>\n <key>StandardOutPath</key>\n <string>/opt/homebrew/var/log/mongodb/output.log</string>\n <key>HardResourceLimits</key>\n <dict>\n <key>NumberOfFiles</key>\n <integer>64000</integer>\n </dict>\n <key>SoftResourceLimits</key>\n <dict>\n <key>NumberOfFiles</key>\n <integer>64000</integer>\n </dict>\n</dict>\n</plist>\n /opt/homebrew/Cellar/mongodb-community/7.0.0/bin\n /opt/homebrew/Cellar/mongodb-community/7.0.0/bin/mongod\n /opt/homebrew/opt/mongodb-community\n /opt/homebrew/opt/mongodb-community/bin\n /opt/homebrew/var/homebrew/linked/mongodb-community\n",
"text": "Hello everyone, I am really driving my self crazy with this topic. I read the entire thread and I couldn’t resolve this annoying error 3584.\nI installed mongodb using homebrew, I have a M1 Mac Studio, and the current version of mongodb is this one:If I check brew services, that’s what I got.\nCaptura de pantalla 2023-08-21 a la(s) 20.41.161710×278 28.2 KB\nLet’s check my mongod.conf file:The folder for the dbPath and logs are there, I mean, they exist, and I have permission to write on those folders.\nIf I run the command, mongd, that’s what I got:If you see in that log, it says:But as mentioned before, the mongodb.conf file is pointing the database to be stored in a different location, plus I can’t create the /data/db folder as it complains with the following error:I have done the following:I have also run the following script, to start the services without homebrew, but I got this error:I tried without --fork but the terminal does not show an output.\nThe homebrew.mxcl.mongodb-community.plist looks ok to me:I also have permission to write and read these folders:So, please help, I’ve tried everything I know, and I really need to have mongodb up and running in my machine, I would really appreciate your help.Thank you very much,Juan",
"username": "Juan_Galue"
},
{
"code": "",
"text": "When you run just mongod without any params it looks for default dbpath /data/db\nSince it is not there it failed\nOn Macos access to root dir /data is removed\nso you have to give some other dir\nTry this\nmongod --port 28000 --dbpath your_home_dir --logpath your_home_dir/mongod.log --fork\nOnce mongod is up connect as mongo --port 28000 or mongosh --port 28000 depending on the shell you have\nRegarding the error exited with error 1 from services you need to investigate why it is filing to start",
"username": "Ramachandra_Tummala"
}
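A common root cause of the Homebrew status 3584 seen throughout this thread is files left owned by root after mongod was once run with sudo; this is an educated guess rather than a confirmed diagnosis for this machine, but resetting ownership and inspecting the service log usually reveals the real error (paths assume the standard Apple-silicon Homebrew layout):

```
sudo rm -f /tmp/mongodb-27017.sock
sudo chown -R $(whoami) /opt/homebrew/var/mongodb /opt/homebrew/var/log/mongodb
brew services restart mongodb-community
tail -n 50 /opt/homebrew/var/log/mongodb/mongo.log
```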
] |
Constant mongodb-community error 3584
|
2022-12-11T14:29:36.135Z
|
Constant mongodb-community error 3584
| 9,890 |