Columns: image_url (string or null) · tags (list) · discussion (list of posts) · title (string) · created_at (timestamp) · fancy_title (string) · views (int64)
null
[ "node-js", "data-modeling", "react-native", "graphql" ]
[ { "code": "", "text": "Hello Everyone,I’ve been looking around for quite some time now and can really seems to find a good solution to my problem and i hope that i’m not creating a duplicate or boring topics.I’m working on a small app to manage my own farm and i have now collected weather information for every single hour since Jan 01 1979. Currently i have 382k doc in my collection and it’s growing…\nMy main plan now is to display daily weather and some graph displaying weather of the same day for all past years. i’m using node.js and graphql with apollo server for api and react native for the app.To reduce the file size i grouped all hourly data in days. From 382k i’m down to 16k. But i’m still struggling on storing this data in an easy and efficient way so that i can make one query and get back a specific day of of each year without making al look query and also how to query dates properly.I’m not really looking into how to code this but more like general opinion or suggestions.\nI looked into finding similar problem/solution on the net but no luck so far.in the meantime thanks for all the helpKind RegardsM", "username": "marco_ferrari" }, { "code": "", "text": "You might be looking for Time Series Collections which were added in MongoDB 5.0 and they are built for this use case exactly. They even mention in at the top of the page \nimage742×410 26.7 KB\nHere is a weather TS collection example:", "username": "tapiocaPENGUIN" }, { "code": "", "text": "Have you tried MongoDB Charts?If you use MongoDB Atlas, the Charts are also available for use in it. Also available for free/shared clusters.By the way, you do not have to reduce your data size since your document size is relatively small enough. and if you use Charts after, you will be able to filter as you like later.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Many thanks, i had a look at it but i’ll look into this paying more attention.", "username": "marco_ferrari" } ]
Storing weather data
2022-08-11T18:56:04.054Z
Storing weather data
2,944
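A minimal mongosh sketch of the Time Series Collections suggestion in the thread above, assuming MongoDB 5.0+ and hypothetical field names (`ts`, `meta`, `temp`) rather than the poster's actual schema:

```javascript
// Create a time series collection for hourly weather readings.
db.createCollection("weather", {
  timeseries: {
    timeField: "ts",       // BSON date of the reading
    metaField: "meta",     // e.g. { station: "farm-1" }
    granularity: "hours"
  }
});

// One query that returns a given calendar day (here Aug 11) for every year,
// grouped per year, with no per-year lookup needed.
db.weather.aggregate([
  { $addFields: { m: { $month: "$ts" }, d: { $dayOfMonth: "$ts" } } },
  { $match: { m: 8, d: 11 } },
  { $group: {
      _id: { $year: "$ts" },
      avgTemp: { $avg: "$temp" },
      minTemp: { $min: "$temp" },
      maxTemp: { $max: "$temp" }
  } },
  { $sort: { _id: 1 } }
]);
```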
null
[ "queries", "replication", "sharding", "transactions" ]
[ { "code": "TransactionOptions txnOptions = TransactionOptions.builder()\n .readPreference(ReadPreference.primary())\n .readConcern(ReadConcern.LOCAL) // Tried ReadConcern with MAJORITY as well, same issue.\n .writeConcern(WriteConcern.MAJORITY)\n .build();\ncom.mongodb.MongoCommandException: Command failed with error 148 (ReadConcernMajorityNotEnabled): 'Transaction was aborted :: caused by :: from shard <Shard>:: caused by :: 'prepareTransaction' is not supported for replica sets with arbiters' on server <Server> The full response is {\"ok\": 0.0, \"errmsg\": \"Transaction was aborted :: caused by :: from shard <Shard>::: caused by :: 'prepareTransaction' is not supported for replica sets with arbiters\", \"code\": 148, \"codeName\": \"ReadConcernMajorityNotEnabled\", \"operationTime\": {\"$timestamp\": {\"t\": 1660211382, \"i\": 6}}, \"$clusterTime\": {\"clusterTime\": {\"$timestamp\": {\"t\": 1660211382, \"i\": 6}}, \"signature\": {\"hash\": {\"$binary\": {\"base64\": <>, \"subType\": \"00\"}}, \"keyId\": <keyID> }}, \"recoveryToken\": {\"recoveryShardId\": <Shard>}}", "text": "Hi Team,I’m using mongo 4.4 transactions in a sharded environment with replica set (with arbiters). to update multi document transactions. Following is the error message returned.com.mongodb.MongoCommandException: Command failed with error 148 (ReadConcernMajorityNotEnabled): 'Transaction was aborted :: caused by :: from shard <Shard>:: caused by :: 'prepareTransaction' is not supported for replica sets with arbiters' on server <Server> The full response is {\"ok\": 0.0, \"errmsg\": \"Transaction was aborted :: caused by :: from shard <Shard>::: caused by :: 'prepareTransaction' is not supported for replica sets with arbiters\", \"code\": 148, \"codeName\": \"ReadConcernMajorityNotEnabled\", \"operationTime\": {\"$timestamp\": {\"t\": 1660211382, \"i\": 6}}, \"$clusterTime\": {\"clusterTime\": {\"$timestamp\": {\"t\": 1660211382, \"i\": 6}}, \"signature\": {\"hash\": {\"$binary\": {\"base64\": <>, \"subType\": \"00\"}}, \"keyId\": <keyID> }}, \"recoveryToken\": {\"recoveryShardId\": <Shard>}}", "username": "Laks" }, { "code": "", "text": "Hi @laks,Per Transactions: Production Considerations (Sharded Clusters) this is an expected error:On a sharded cluster, transactions that span multiple shards will error and abort if any involved shard contains an arbiter.Arbiters do not write data, so they can cause significant operational challenges for use cases that rely on majority acknowledgement of writes. If your transaction use case requires updating documents across multiple shards, you will have to replace any arbiters with secondaries.I strongly recommend avoiding arbiters in production as they introduce significant operational challenges. For more background, see Replica set with 3 DB Nodes and 1 Arbiter - #8 by Stennie.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo Transactions in Sharded Environment
2022-08-11T12:35:45.814Z
Mongo Transactions in Sharded Environment
3,174
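A hedged mongosh sketch of the remediation described above: replacing the arbiter with a data-bearing secondary on the affected shard's replica set. The hostnames are placeholders, not values from the thread:

```javascript
// Run against the shard's replica set PRIMARY. Hostnames are placeholders.
rs.remove("arbiter-host:27017");               // take the arbiter out of the config
rs.add({ host: "new-secondary-host:27017" });  // add a regular data-bearing member
// Verify the resulting topology: all members should be PRIMARY/SECONDARY.
rs.status().members.map(m => ({ name: m.name, state: m.stateStr }));
```

With a PSS topology on every shard, 'prepareTransaction' and majority write acknowledgement are available, so cross-shard transactions can commit.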
https://www.mongodb.com/…e_2_1024x736.png
[ "atlas-device-sync" ]
[ { "code": "", "text": "I see ‘InternalServerError - error processing request’ errors in the logs from time to time. Is it something to worry about? There is no info at all so it’s hard to understand what had happened or the impact severity.\n\nScreenshot 2022-01-06 at 11.07.461346×968 68.4 KB\n", "username": "Anton_P" }, { "code": "", "text": "If you’re app is continuing to work as you expect, then I’d assume that the error was transitory, and that the Realm backend handled it.", "username": "Andrew_Morgan" }, { "code": "", "text": "Hi, I am new to MongoDB and trying to create an app from the AppServices but I get the error “error processing request”Can you kindly help / advise please?\n\nMongoDB error1530×826 58.9 KB\n", "username": "Nimesh_Biyagamage" } ]
Is 'InternalServerError - error processing request' something to worry about in MongoDB Realm logs?
2022-01-06T08:09:10.657Z
Is &lsquo;InternalServerError - error processing request&rsquo; something to worry about in MongoDB Realm logs?
3,401
null
[]
[ { "code": "mongoddbpathdbpathdbpath", "text": "For the command that starts the mongod instance:mongod --port “PORT” --dbpath “YOUR_DB_DATA_PATH” --replSet “REPLICA_SET_INSTANCE_NAME”The dbpath is where all of the DBs are stored right, so if I am setting up the primary node for example it would write and read the data to the specified DBs from the dbpath and then would forward the data to the Secondary nodes for replication. In the Secondary nodes, the replicated data forwarded from the Primary node will be stored in the dbpath as well a reading of the dataset will be made from it by the client application.", "username": "Master_Selcuk" }, { "code": "", "text": "That is correct, the purpose of the --dbPath is to tell MongoDB where you want it to store the data on disk. If you don’t provide the flag MongoDB will use a default path but this let’s you choose where you want to store your data on disk.In a replica set the primary and secondary’s will hold all of the data (just in case it become primary or you want to read from secondary). Each one has their own dbPath that you specify.", "username": "tapiocaPENGUIN" }, { "code": "dbpathportdata1data2data3270012700227003mongod --port 27001 --dbpath ./data1 --replSet myreplicaset\nmongod --port 27002 --dbpath ./data2 --replSet myreplicaset\nmongod --port 27003 --dbpath ./data3 --replSet myreplicaset\ndbpathport", "text": "If you are trying to understand replication, then you may have 3 instances running in the same machine (3 is the suggested minimum), then dbpath along with port becomes clearer.by creating 3 data folders such as data1, data2 and data3 and starting on 3 different ports such as 27001, 27002 and 27003, this local replica set (after other commands to complete replication) will just be as ready as any remote replica sets.in short, dbpath and port allows you to start multiple instances for different purposes.what you need to be careful of is that 2 instances would not use the same path.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thank you for the clarification. So if there are multiple DBs within the set --dbpath folder on the primary will just clone all DBs to the Secondary.", "username": "Master_Selcuk" }, { "code": "", "text": "Thank you I will try and replicate this for practice and understanding purposes.", "username": "Master_Selcuk" }, { "code": "--dbpathmongod", "text": "The --dbpath location just tells the mongod process where to store its data files. This doesn’t really have anything to do with replication.The secondary servers get their information from the oplog and then applies those operations to their local copy of the database.MongoDB has a basic overview of replication document that might be worth reading.", "username": "Doug_Duncan" }, { "code": "", "text": "the primary will just clone all DBs to the SecondaryIt is not that simple. If you have existing data, then you can copy it but then you will need to go through a set of operations so that other servers can work with that data.After you have set your replica set, either locally or remote, the primary will communicate with the secondaries @Doug_Duncan wrote above.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is the purpose of --dbpath in the mongod instance command?
2022-08-11T19:05:19.768Z
What is the purpose of &ndash;dbpath in the mongod instance command?
5,121
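A sketch of the local three-member layout described in the thread, using the ports and folder names from the posts above; the mongod startup commands are repeated as comments for context:

```javascript
// Start three mongod processes in a shell, each with its own --dbpath and --port:
//   mongod --port 27001 --dbpath ./data1 --replSet myreplicaset
//   mongod --port 27002 --dbpath ./data2 --replSet myreplicaset
//   mongod --port 27003 --dbpath ./data3 --replSet myreplicaset
// Then, from mongosh connected to any one of them, initiate the set:
rs.initiate({
  _id: "myreplicaset",
  members: [
    { _id: 0, host: "localhost:27001" },
    { _id: 1, host: "localhost:27002" },
    { _id: 2, host: "localhost:27003" }
  ]
});
rs.status();   // all three members should reach PRIMARY/SECONDARY states
```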
https://www.mongodb.com/…87cf8fb7217d.png
[ "replication" ]
[ { "code": "", "text": "Hello everyone,Passing the M201 “Chapter 5: Performance on Clusters” and faced an issue with the reading of data from the secondary node of my replica set, whereas I can’t find whether my collection is present on my SECONDARY node. Please suggest how to check this in any way.\nRegards.", "username": "Oleksandr_Hetman" }, { "code": "", "text": "Here is what I would do to check:\nCould you show what is on the primary (show dbs), also is this a sharded cluster?", "username": "tapiocaPENGUIN" }, { "code": "", "text": "did you connect with an admin or an account with sufficient privileges?", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hello everyone,Thanks for the comments. In order to properly use the rs.slaveOk() command, you should log in to that particular SECONDARY node with an appropriate privilege and authentication.Regards.", "username": "Oleksandr_Hetman" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't see data on the Secondary Nodes
2022-08-11T19:45:18.958Z
Can&rsquo;t see data on the Secondary Nodes
2,637
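A hedged mongosh sketch for checking data on a secondary. rs.slaveOk() is deprecated in recent shells; rs.secondaryOk() or a read preference serves the same purpose. The database and collection names are placeholders:

```javascript
// Connect directly to the SECONDARY member with credentials that have read access.
db.getMongo().setReadPref("secondary");             // or: rs.secondaryOk() in mongosh
db.adminCommand({ listDatabases: 1 });              // needs the listDatabases privilege
db.getSiblingDB("myDb").myColl.countDocuments({});  // `myDb`/`myColl` are placeholders
```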
null
[]
[ { "code": "", "text": "MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017I use macOS montery (intel)Nomatter what I try. I get this error. Please help!!!", "username": "Mohammed_Faraz" }, { "code": "brewbrew service start mongodb-commmunitybrew servicesmongod", "text": "Hello @Mohammed_Faraz and welcome to the MongoDB community forums. It sounds like the database engine is not running so there’s nothing to connect to.Can you state how you installed MongoDB? If you used brew, you can start things up using brew service start mongodb-commmunity. This will start the system up. You can then use brew services to verify that it’s running. If there are errors you would need to check the log file for MongoDB to determine what’s going on.If you manually installed MongoDB by downloading the compressed files and are running the mongod process manually, then we would need to know more about the exact command line parameters you’re using and any log information on why the process is not starting up.", "username": "Doug_Duncan" }, { "code": "id -un", "text": "Thanks for the reply. I tried using “sudo mkdir -p /System/Volumes/Data/data/db” then “sudo chown -R id -un /System/Volumes/Data/data/db” and then “sudo mongod --dbpath /System/Volumes/Data/data/db”.It worked!I tried these commands after installing via brew yesterday and they didn’t work but somehow now its working. Don’t know how! Do you have any idea how it worked this time?", "username": "Mohammed_Faraz" }, { "code": "brewmongodbrew info mongodb-community==> Caveats\nTo restart mongodb/brew/mongodb-community after an upgrade:\n brew services restart mongodb/brew/mongodb-community\nOr, if you don't want/need a background service you can just run:\n mongod --config /usr/local/etc/mongod.conf\nsudorootrootbrewbrew services start mongodb-communitybrew services stop mongodb-community", "text": "The installation via brew should have set everything up for your local user to run the mongod process. This is the last few lines you get when running brew info mongodb-community:Since you’re running with sudo on everything, you’re running the risk of security issues as the process is running under elevated privileges. Even though the data directory is owned by your user, you’re still running the process as the root user so all the files in that path will be owned by root and not accessible by your normal user.After installation with brew were you able to run brew services start mongodb-community? That’s how I usually run things, then when I don’t need MongoDB running I just brew services stop mongodb-community.", "username": "Doug_Duncan" }, { "code": "", "text": "I was able to run brew services start [email protected] but when I use mongod it throws me an error “MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017”.", "username": "Mohammed_Faraz" }, { "code": "", "text": "If I dont use sudo and I run “mongod” I get this error : ““ctx”:“initandlisten”,“msg”:“Shutting down”,“attr”:{“exitCode”:100}}”", "username": "Mohammed_Faraz" }, { "code": "", "text": "I also get this error: “ctx”:“initandlisten”,“msg”:“Shutting down”,“attr”:{“exitCode”:48}}after using : \"brew services start mongodb/brew/mongodb-community \"\nand then “mongod”. Something is wrong and I cannot figure out.", "username": "Mohammed_Faraz" }, { "code": "mongodmongoshexitCodeexitCodemongodbrewmongodmongod --port 30000 ...mongod", "text": "I don’t think I’ve ever seen mongod throw an ECONNREFUSED error. 
I would expect something like that from mongosh if the server was not up and running when connecting.The exitCode 100 was thrown most likely due to your normal user not being able to write to the data files due to permission errors.The exitCode 48 means that there is already a server listening on the port you are trying to connect to. When you try to manually run mongod and you still have the brew service running, by default they will both try to connect on port 27017. You cannot run two instances of the mongod process on the same machine unless you override the port for one of them: mongod --port 30000 ... for example.To see what’s happening when mongod fails to start up, you need to look higher up in the log files. Generally there are only a couple of dozen entries if the process doesn’t want to start up and it’s not that bad to look through the files to see the exact cause.", "username": "Doug_Duncan" }, { "code": "", "text": "The server generated these startup warnings when booting2022-08-12T00:28:08.275+05:30: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted.How can I fix this? Please help.", "username": "Mohammed_Faraz" }, { "code": "", "text": "You need to enable authentication. This article with help with that.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017
2022-08-11T13:40:22.319Z
MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017
10,460
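A hedged mongosh sketch addressing the "Access control is not enabled" warning from the last posts above: create an administrative user, then restart mongod with authorization enabled. The user name and roles are illustrative choices, not values from the thread:

```javascript
// Run against the standalone while auth is still disabled.
const admin = db.getSiblingDB("admin");
admin.createUser({
  user: "admin",                  // placeholder name
  pwd: passwordPrompt(),          // mongosh prompts instead of inlining a password
  roles: [
    { role: "userAdminAnyDatabase", db: "admin" },
    { role: "readWriteAnyDatabase", db: "admin" }
  ]
});
// Then restart mongod with `security.authorization: enabled` in mongod.conf
// and reconnect as: mongosh -u admin -p --authenticationDatabase admin
```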
null
[ "field-encryption" ]
[ { "code": " const _key = await encryption.createDataKey('local', {\n keyAltNames: ['demo-data-key']\n });\n await mongoose.connection.createCollection('Users', {\n validator: {\n $jsonSchema: {\n bsonType: 'object',\n properties: {\n lastName: {\n encrypt: {\n bsonType: 'string',\n keyId: [_key],\n algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic' }\n }\n }\n }\n }});\n", "text": "Hi,I would like to implement Client-Side Field Level Encryption where each document (e.g. user) would be encrypted with its data key. This feature was mentioned keynote in 2019 on Field Level Encryption in MongoDB 4.2 (MongoDB World 2019 Keynote, part 4) - YouTube.Until now I have managed to set up “per collection encryption” with defining $jsonSchema validation but this is not granular enough for my use case:Also in official documentation, this scenario is not covered because key always needs to be specified upfront and it is defined on collection or field level but never on document https://docs.mongodb.com/manual/reference/security-client-side-automatic-json-schema/Any help, please?", "username": "Clement" }, { "code": "", "text": "Were you able to figure this out? I need the same.", "username": "Vishal_Rastogi1" }, { "code": "mongocryptd", "text": "Hi @Vishal_Rastogi1 and @Clement,I just discovered this topic, I’m so sorry for not seeing it earlier.I actually implemented this in Java in this blog post:Learn how to use the client side field level encryption using the MongoDB Java Driver.You can apply exactly the same logic with the Node Driver. It would work exactly the same way. In my version, I’m not using MongoDB Enterprise Advanced i.e. mongocryptd. I’m just using libmongocrypt to manipulate the data but I don’t use the automated encryption & decryption that mongocryptd provides. When you use the $jsonschema, you have to specify the single Data Encryption Key (DEK) that will encrypt this field for all the docs. It doesn’t work with the implementation you are trying to do i.e. one key for one user.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
CSFLE with data key per document
2021-06-29T09:09:03.327Z
CSFLE with data key per document
3,837
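A hedged Node.js sketch of the explicit (manual) encryption approach Maxime describes, with one data key per user. The key vault namespace, database names, and helper shape are assumptions; depending on driver version, ClientEncryption is exported from mongodb-client-encryption or from mongodb itself:

```javascript
const crypto = require("crypto");
const { MongoClient } = require("mongodb");
const { ClientEncryption } = require("mongodb-client-encryption"); // or from "mongodb" on newer drivers

const localMasterKey = crypto.randomBytes(96); // demo only: persist this for real use

async function insertUserWithOwnKey(client, userId, lastName) {
  const encryption = new ClientEncryption(client, {
    keyVaultNamespace: "encryption.__keyVault",   // assumed namespace
    kmsProviders: { local: { key: localMasterKey } }
  });
  // One DEK per user, retrievable later through its alt name.
  const keyId = await encryption.createDataKey("local", {
    keyAltNames: [`user-${userId}`]
  });
  // Encrypt the field explicitly with that user's key (no $jsonSchema involved).
  const encrypted = await encryption.encrypt(lastName, {
    keyId,
    algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
  });
  await client.db("demo").collection("Users")
    .insertOne({ userId, lastName: encrypted });
}
```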
null
[ "replication" ]
[ { "code": "systemLog:\n destination: file\n path: \"/Products/mongodb/mongod.log\"\n logAppend: true\nstorage:\n dbPath: \"/Products/mongo_data_db\"\nprocessManagement:\n fork: true\nnet:\n bindIp: 0.0.0.0\n port: 27017\nsetParameter:\n enableLocalhostAuthBypass: false\nsecurity: \n keyFile: /Products/mongo_data_db/keyfile/keyfile\nreplication: \n replSetName: myRS\nMongoDB Enterprise myRS:PRIMARY> rs.add(\"192.168.122.203\")\n{\n \"ok\" : 0,\n \"errmsg\" : \"Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: 192.168.122.201:27017; the following nodes did not respond affirmatively: 192.168.122.203:27017 failed with No route to host\",\n \"code\" : 74,\n \"codeName\" : \"NodeNotFound\",\n \"operationTime\" : Timestamp(1660130888, 1),\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1660130888, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"GFhdtq3hWCB/5TmETZYit5ALHic=\"),\n \"keyId\" : NumberLong(\"7129808885757509634\")\n }\n }\n}\n", "text": "I want to build a replica set. The conf files of my 3 machines are attached. I booted the primary machine. The process started correctly. However, I cannot add the 2nd and 3rd machines, that is, the secondry machines.\nThe error I get when I run the rs.add(“192.168.122.203:27017”) command; “errmsg” : “Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: 192.168.122.201:27017; the following nodes did not respond affirmatively: 192.168.122.203:27017 failed with No route to Shoo”. I would be glad if you help.", "username": "Ayberk_Cengiz" }, { "code": "", "text": "@Stennie_X The error I get is not related to the conf.\nthe process is working properly. conf seems to be wrong while typing here.The real problem is rs.add()", "username": "Ayberk_Cengiz" }, { "code": "ping 192.168.122.203No route to host", "text": "Can the primary reach the secondary machines? What is the result of running ping 192.168.122.203 from the primary? 
No route to host sounds like a networking issue.", "username": "Doug_Duncan" }, { "code": "security: \n keyFile: /Products/mongo_data_db/keyfile/keyfile\nkeyFileping", "text": "Hi @Ayberk_Cengiz,Probably not related to your connectivity issue, but FYI you will need the same keyFile configuration on your other replica set members.Since you are on a private network, you may want to get basic networking working before enabling additional configuration.The error message is about connectivity to the member you’re trying to add:192.168.122.203:27017 failed with No route to hostAs a start I would try to ping and make sure your replica set members all have a known route to the other replica set member IPs.Can you please provide some more details about your deployment:Are you using the same config file for all replica set members?Are the replica set members on different machines or are they VMs or containers on the same host?Are there any firewalls that might be blocking communication between replica set members?What O/S version are you using?What specific version of MongoDB server are you using?Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "accessible\nping 192.168.122.202\nPING 192.168.122.202 (192.168.122.202) 56(84) bytes of data.\n64 bytes from 192.168.122.202: icmp_seq=1 ttl=64 time=0.457 ms\n64 bytes from 192.168.122.202: icmp_seq=2 ttl=64 time=0.436 ms\n64 bytes from 192.168.122.202: icmp_seq=3 ttl=64 time=0.432 ms", "username": "Ayberk_Cengiz" }, { "code": "", "text": "1-)I am using the same config file for all replica sets.\n2-)3 different machines installed on securecrt on my main computer.\n3-)\n4-)NAME=“Red Hat Enterprise Linux Server”\nVERSION=“7.9 (Maipo)”\n5-)3.6.8 enterprise", "username": "Ayberk_Cengiz" }, { "code": "", "text": "Hi are these on separate VMs? the error “no route to host” suggests that the FW is not open. You can check this very easily by doing a telnet secondary_ip mongod_port from the primary and see if it can can connect. Even though the ping succeeds you need to validate on the specific mongod port.if that fails it means there is a FW blocking it (probably the default rhel FW) so you can open it and then try again.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "5-)3.6.8 enterpriseIs there a reason that you’re running version 3.6.8? Support for this version ended in April of 2021. If you’re setting up a production system, I would recommend going to either 5.0 or 6.0 (recently released) depending on your comfort level. Even if this is a test system, I would recommend a newer version of MongoDB so that you get all the newer features, as well as the latest security, stability and performance patches.", "username": "Doug_Duncan" }, { "code": "", "text": "Conscious, it has to be.", "username": "Ayberk_Cengiz" }, { "code": "", "text": "The problem was related to linux. Solved", "username": "Ayberk_Cengiz" }, { "code": "", "text": "Could you please provide more details about the root cause of the problem and about the solution? This would help others that may face the same issue.", "username": "steevej" }, { "code": "", "text": "The problem here is related to linux. I disabled the service called iptables first. Then I repeated the process and the problem was solved.\n#systemctl disable iptables", "username": "Ayberk_Cengiz" }, { "code": "", "text": "What you did is risky. 
You disabled your firewall.Check https://www.mongodb.com/docs/manual/tutorial/configure-linux-iptables-firewall/", "username": "steevej" }, { "code": "", "text": "Of course there is risk. This was necessary to access the ports. After access, the security problem can be solved again.", "username": "Ayberk_Cengiz" }, { "code": "", "text": "In the future you can just open the mongo ports that you are using with Bi-directional connectivity.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Replica Set Error
2022-08-10T12:05:50.712Z
Replica Set Error
5,753
null
[ "replication", "sharding" ]
[ { "code": "", "text": "Hi Team ,Getting below error while trying to connect to Shard cluster .\npprbj@XXXXXXXXXbin % ./mongo --port 27021MongoDB shell version v5.0.9connecting to: mongodb://127.0.0.1:27021/?compressors=disabled&gssapiServiceName=mongodbError: couldn’t connect to server 127.0.0.1:27021, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27021 :: caused by :: Operation timed out :connect@src/mongo/shell/mongo.js:372:17@(connect):2:6exception: connect failedexiting with code 1Note: Config server,Mongos and replicaset are up and running fine.\nissue is while connecting to shard .any help will be appreciated .Regards\nPrince.", "username": "Prince_Das" }, { "code": "ss -tlnp\nps -aef | grep [m]ongo\n", "text": "Need the output of the commands:Do you get the same with the new mongosh? With Compass?How do you start mongos? Can you share the logs of mongos?Have you check your firewall rules to make sure you can connect to this address:port?", "username": "steevej" }, { "code": "", "text": "mongodb.log (26.9 KB)Do you get the same with the new mongosh? Yes.\nWith Compass ? No\nHave you check your firewall rules to make sure you can connect to this address:port? yes.\nCan you share the logs of mongos? Uploaded .\nHow do you start mongos? ./mongod --config /Users/pprbj/config/mongos.confNote: I am building a single sharded cluster.Prince.", "username": "Prince_Das" }, { "code": "ss -tlnp\nps -aef | grep [m]ongo\n", "text": "mongod --config /Users/pprbj/config/mongos.confThat looks very wrong. You are starting mongod with something that looks like a mongos configuration file.I got 403 ERROR when trying to look at the logs.Please share the configuration file /Users/pprbj/config/mongos.conf.With Compass ? NoSo you are able to connect with Compass with the same URI? Please post a screenshot.Need the output of the commands:", "username": "steevej" }, { "code": " 501 1906 1738 0 12:45PM ttys000 0:03.79 ./mongod -f /Users/pprbj/config/config1.conf\n 501 1976 1775 0 12:49PM ttys001 0:00.00 grep mongo\n 501 1929 1785 0 12:46PM ttys002 0:00.14 ./mongos -f /Users/pprbj/config/mongos.conf\nss -tlnp ==>Not working ,I am using MAC os.\npprbj@MN-C02FH9LAMD6N bin % ./mongos -f /Users/pprbj/config/mongos.conf \n{\"t\":{\"$date\":\"2022-08-04T07:16:43.989Z\"},\"s\":\"W\", \"c\":\"SHARDING\", \"id\":24132, \"ctx\":\"-\",\"msg\":\"Running a sharded cluster with fewer than 3 config servers should only be done for testing purposes and is not recommended for production.\"}\n{\"t\":{\"$date\":\"2022-08-04T07:16:43.992Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20697, \"ctx\":\"-\",\"msg\":\"Renamed existing log file\",\"attr\":{\"oldLogPath\":\"/Users/pprbj/Desktop/data/shard/mongodb.log\",\"newLogPath\":\"/Users/pprbj/Desktop/data/shard/mongodb.log.2022-08-04T07-16-43\"}}\nnet:\n port: 27021\nsharding:\n configDB: MPODS-PRA-001/localhost:27009\nsystemLog:\n destination: file\n path: /Users/pprbj/Desktop/data/shard/mongodb.log\n", "text": "@steevej : Sorry for providing wrong details. 
Please check the details1: ps -aef | grep mongo2:Starting of Mongos :3:With Compass : I am not using compass (It’s in a built state.)\n4: Please find the config file.", "username": "Prince_Das" }, { "code": "", "text": "What does your latest mongos.log show?\nPlease show your config server config file also\nIs your data replicaset up?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Yes, Replica set is up and running fine .output from below mongos.logPreference: Could not find host matching read preference { mode: “nearest” } for set MPODS-PRA-001\"}}\n{“t”:{\"$date\":“2022-08-04T15:57:49.619+05:30”},“s”:“W”, “c”:“SHARDING”, “id”:23834, “ctx”:“mongosMain”,“msg”:“Error initializing sharding state, sleeping for 2 seconds and retrying”,“attr”:{“error”:{“code”:133,“codeName”:“FailedToSatisfyReadPreference”,“errmsg”:“Error loading clusterID :: caused by :: Could not find host matching read preference { mode: “nearest” } for set MPODS-PRA-001”}}}\n{“t”:{\"$date\":“2022-08-04T15:58:03.225+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4333208, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM host selection timeout”,“attr”:{“replicaSet”:“MPODS-PRA-001”,“error”:“FailedToSatisfyReadPreference: Could not find host matching read preference { mode: “nearest” } for set MPODS-PRA-001”}}", "username": "Prince_Das" }, { "code": "", "text": "Please show contents of config file you have used to start config servers\nCheck if any misconfiguration or reference to configdb is correct or not", "username": "Ramachandra_Tummala" }, { "code": "auditLog:\n destination: syslog\n filter: '{ roles:{role:\"root\", db: \"admin\"} }'\nnet:\n port: 27009\n#processManagement:\n # fork: \"true\"\nreplication:\n replSetName: MPODS-PRA-001 \nstorage:\n dbPath: /Users/pprbj/Desktop/data\n engine: wiredTiger\nsystemLog:\n destination: file\n path: /Users/pprbj/Desktop/data/mongodb.log\nsharding:\n clusterRole: configsvr\n", "text": "Content of config file .", "username": "Prince_Das" }, { "code": "", "text": "Have you initialized your config server?\nPlease show output of rs.status() from both config server and data replica set", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Output of config server :\nMongoDB Enterprise MPODS-PRA-001:PRIMARY> rs.status()\n{\n“set” : “MPODS-PRA-001”,\n“date” : ISODate(“2022-08-05T10:23:07.831Z”),\n“myState” : 1,\n“term” : NumberLong(1),\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“configsvr” : true,\n“heartbeatIntervalMillis” : NumberLong(2000),\n“majorityVoteCount” : 1,\n“writeMajorityCount” : 1,\n“votingMembersCount” : 1,\n“writableVotingMembersCount” : 1,\n“optimes” : {\n“lastCommittedOpTime” : {\n“ts” : Timestamp(1659694987, 1),\n“t” : NumberLong(1)\n},\n“lastCommittedWallTime” : ISODate(“2022-08-05T10:23:07.091Z”),\n“readConcernMajorityOpTime” : {\n“ts” : Timestamp(1659694987, 1),\n“t” : NumberLong(1)\n},\n“appliedOpTime” : {\n“ts” : Timestamp(1659694987, 1),\n“t” : NumberLong(1)\n},\n“durableOpTime” : {\n“ts” : Timestamp(1659694987, 1),\n“t” : NumberLong(1)\n},\n“lastAppliedWallTime” : ISODate(“2022-08-05T10:23:07.091Z”),\n“lastDurableWallTime” : ISODate(“2022-08-05T10:23:07.091Z”)\n},\n“lastStableRecoveryTimestamp” : Timestamp(1659694986, 1),\n“electionCandidateMetrics” : {\n“lastElectionReason” : “electionTimeout”,\n“lastElectionDate” : ISODate(“2022-08-05T10:21:06.007Z”),\n“electionTerm” : NumberLong(1),\n“lastCommittedOpTimeAtElection” : {\n“ts” : Timestamp(1659694865, 1),\n“t” : NumberLong(-1)\n},\n“lastSeenOpTimeAtElection” : {\n“ts” : Timestamp(1659694865, 
1),\n“t” : NumberLong(-1)\n},\n“numVotesNeeded” : 1,\n“priorityAtElection” : 1,\n“electionTimeoutMillis” : NumberLong(10000),\n“newTermStartDate” : ISODate(“2022-08-05T10:21:06.217Z”),\n“wMajorityWriteAvailabilityDate” : ISODate(“2022-08-05T10:21:07.634Z”)\n},\n“members” : [\n{\n“_id” : 0,\n“name” : “localhost:27009”,\n“health” : 1,\n“state” : 1,\n“stateStr” : “PRIMARY”,\n“uptime” : 215,\n“optime” : {\n“ts” : Timestamp(1659694987, 1),\n“t” : NumberLong(1)\n},\n“optimeDate” : ISODate(“2022-08-05T10:23:07Z”),\n“lastAppliedWallTime” : ISODate(“2022-08-05T10:23:07.091Z”),\n“lastDurableWallTime” : ISODate(“2022-08-05T10:23:07.091Z”),\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“infoMessage” : “”,\n“electionTime” : Timestamp(1659694866, 1),\n“electionDate” : ISODate(“2022-08-05T10:21:06Z”),\n“configVersion” : 1,\n“configTerm” : 1,\n“self” : true,\n“lastHeartbeatMessage” : “”\n}\n],\n“ok” : 1,\n“$gleStats” : {\n“lastOpTime” : Timestamp(1659694865, 1),\n“electionId” : ObjectId(“7fffffff0000000000000001”)\n},\n“lastCommittedOpTime” : Timestamp(1659694987, 1),\n“$clusterTime” : {\n“clusterTime” : Timestamp(1659694987, 1),\n“signature” : {\n“hash” : BinData(0,“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”),\n“keyId” : NumberLong(0)\n}\n},\n“operationTime” : Timestamp(1659694987, 1)\n}\nMongoDB Enterprise MPODS-PRA-001:PRIMARY> exitMongoDB Enterprise MPODS-PRA-001:PRIMARY> rs.status()\n{\n“set” : “MPODS-PRA-001”,\n“date” : ISODate(“2022-08-05T11:15:48.369Z”),\n“myState” : 1,\n“term” : NumberLong(6),\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“heartbeatIntervalMillis” : NumberLong(2000),\n“majorityVoteCount” : 2,\n“writeMajorityCount” : 2,\n“votingMembersCount” : 2,\n“writableVotingMembersCount” : 2,\n“optimes” : {\n“lastCommittedOpTime” : {\n“ts” : Timestamp(1659698145, 1),\n“t” : NumberLong(6)\n},\n“lastCommittedWallTime” : ISODate(“2022-08-05T11:15:45.185Z”),\n“readConcernMajorityOpTime” : {\n“ts” : Timestamp(1659698145, 1),\n“t” : NumberLong(6)\n},\n“appliedOpTime” : {\n“ts” : Timestamp(1659698145, 1),\n“t” : NumberLong(6)\n},\n“durableOpTime” : {\n“ts” : Timestamp(1659698145, 1),\n“t” : NumberLong(6)\n},\n“lastAppliedWallTime” : ISODate(“2022-08-05T11:15:45.185Z”),\n“lastDurableWallTime” : ISODate(“2022-08-05T11:15:45.185Z”)\n},\n“lastStableRecoveryTimestamp” : Timestamp(1659698085, 1),\n“electionCandidateMetrics” : {\n“lastElectionReason” : “electionTimeout”,\n“lastElectionDate” : ISODate(“2022-08-05T10:23:53.911Z”),\n“electionTerm” : NumberLong(6),\n“lastCommittedOpTimeAtElection” : {\n“ts” : Timestamp(0, 0),\n“t” : NumberLong(-1)\n},\n“lastSeenOpTimeAtElection” : {\n“ts” : Timestamp(1659628001, 1),\n“t” : NumberLong(5)\n},\n“numVotesNeeded” : 2,\n“priorityAtElection” : 1,\n“electionTimeoutMillis” : NumberLong(10000),\n“numCatchUpOps” : NumberLong(0),\n“newTermStartDate” : ISODate(“2022-08-05T10:23:53.973Z”),\n“wMajorityWriteAvailabilityDate” : ISODate(“2022-08-05T10:23:54.870Z”)\n},\n“members” : [\n{\n“_id” : 0,\n“name” : “localhost:27010”,\n“health” : 1,\n“state” : 1,\n“stateStr” : “PRIMARY”,\n“uptime” : 3192,\n“optime” : {\n“ts” : Timestamp(1659698145, 1),\n“t” : NumberLong(6)\n},\n“optimeDate” : ISODate(“2022-08-05T11:15:45Z”),\n“lastAppliedWallTime” : ISODate(“2022-08-05T11:15:45.185Z”),\n“lastDurableWallTime” : ISODate(“2022-08-05T11:15:45.185Z”),\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“infoMessage” : “”,\n“electionTime” : Timestamp(1659695033, 1),\n“electionDate” : ISODate(“2022-08-05T10:23:53Z”),\n“configVersion” : 3,\n“configTerm” : 6,\n“self” : true,\n“lastHeartbeatMessage” : 
“”\n},\n{\n“_id” : 1,\n“name” : “localhost:27011”,\n“health” : 1,\n“state” : 2,\n“stateStr” : “SECONDARY”,\n“uptime” : 3119,\n“optime” : {\n“ts” : Timestamp(1659698145, 1),\n“t” : NumberLong(6)\n},\n“optimeDurable” : {\n“ts” : Timestamp(1659698145, 1),\n“t” : NumberLong(6)\n},\n“optimeDate” : ISODate(“2022-08-05T11:15:45Z”),\n“optimeDurableDate” : ISODate(“2022-08-05T11:15:45Z”),\n“lastAppliedWallTime” : ISODate(“2022-08-05T11:15:45.185Z”),\n“lastDurableWallTime” : ISODate(“2022-08-05T11:15:45.185Z”),\n“lastHeartbeat” : ISODate(“2022-08-05T11:15:48.164Z”),\n“lastHeartbeatRecv” : ISODate(“2022-08-05T11:15:47.167Z”),\n“pingMs” : NumberLong(0),\n“lastHeartbeatMessage” : “”,\n“syncSourceHost” : “localhost:27010”,\n“syncSourceId” : 0,\n“infoMessage” : “”,\n“configVersion” : 3,\n“configTerm” : 6\n}\n],\n“ok” : 1,\n“$clusterTime” : {\n“clusterTime” : Timestamp(1659698145, 1),\n“signature” : {\n“hash” : BinData(0,“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”),\n“keyId” : NumberLong(0)\n}\n},\n“operationTime” : Timestamp(1659698145, 1)\n}\nMongoDB Enterprise MPODS-PRA-001:PRIMARY>", "username": "Prince_Das" }, { "code": "", "text": "Check if any misconfiguration or reference to configdb is correct or notI am suspecting issue with replicaset names\nBoth data replicaset and config replicaset are having same name?\nThey should be different", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Can you share some sample config file for\n1:config server file.\n2:Mongos conf file.\n3:Replica ser conf file.", "username": "Prince_Das" }, { "code": "", "text": "Please search our forum threads\nYou will get many sample files for each\nAlso check mongo documentation\nDid you try by changing name of replset?Sharding is a strategy some users will implement to help them scale their database horizontally, with the hope being that the improved scalability will outwe…\nAbove setup is similiar to yours without security/auth params(advised only for practice/test envs)", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I am also getting 403 error while looking at log files of a website. How i don’t understand what is going wrong in db while migration. There is lot other files in db that are removed.I got 403 ERROR when trying to look at the logs of site.", "username": "west_weselly" }, { "code": "", "text": "Hi All ,After lot of fight/struggle issue got fixed.\nBelow are steps that made it work.\n1:Created two config replicaset service(As the name describe we need to create two configserver).\nNote: Will try again using single config server will post the outcome.\n2:Added bindIp in config file of config server.\n3:Able to connect shard cluster and able to add replica set.It was nice learning and thanks to all of you who jumped into the issue and tried to help.Regards\nPrince.", "username": "Prince_Das" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Not able to connect to Shard cluster
2022-08-02T11:59:41.382Z
Not able to connect to Shard cluster
5,761
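A hedged mongosh sketch of the fixed topology discussed above: the config server replica set and the data replica set carry distinct names, and the shard is registered through mongos. The names cfgRS and shardRS are placeholders:

```javascript
// 1. Initiate the config server replica set under its own name ("cfgRS"):
//    rs.initiate({ _id: "cfgRS", configsvr: true,
//                  members: [{ _id: 0, host: "localhost:27009" }] })
// 2. Initiate the data replica set under a different name ("shardRS").
// 3. Start mongos with sharding.configDB: cfgRS/localhost:27009, then register the shard:
sh.addShard("shardRS/localhost:27010,localhost:27011");
sh.status();
```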
null
[]
[ { "code": "wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -**➜** **~** host www.mongodb.org\n\nwww.mongodb.org is an alias for glb.mongodb.com.\n\nglb.mongodb.com has address 54.175.147.155\n\nglb.mongodb.com has address 52.206.222.245\n\nglb.mongodb.com has address 52.21.89.200\n\n", "text": "I was attempting to install mongodb onto a IPv6 only Ubuntu server when I ran into errors with the very first step : wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -.I realized that www.mongodb.org has no ipv6 dns record. Would it be possible for someone from the MongoDB networking team to work on implementing that?", "username": "Rishi_Panthee" }, { "code": "wget -qO - https://pgp.mongodb.com/server-5.0.asc | sudo apt-key add -\n", "text": "Hi @Rishi_Panthee,Can you try getting the .asc file from https://pgp.mongodb.com/ instead to see if this works from your IPv6 only Ubuntu server? I believe the command for this would be similar to:Regards,\nJason", "username": "Jason_Tran" }, { "code": "wget -qO - https://pgp.mongodb.com/server-5.0.asc | sudo apt-key add -x@xxxx:~# wget -qO - https://pgp.mongodb.com/server-5.0.asc\n-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBGAsKNUBEAClMqPCvvqm6gFmbiorEN9qp00GI8oaECkwbxtGGbqX9sqMSrKe\nAB3sGI7kqG2Fl0K+xmmiq1QDjhNgFDA1jjXq+Bd66RNPtvu747IRxVs+9fX7bk67\n8Bruha7U3M5l4193x5oYLlbcZL9aC7RSJE2mggTyS6LarmF6vKQN9LMXDicnageV\nKCPpF2i3jkZaGnLPzAisW/pOjPQpWCbatTVqKOKvtOyP3Fz1spYd4obu6ELu1PXa\ngmhSfvWJYt1irpchOl29LWZfcmXuJszmb00bqm4gLcK12VrnK191iXv46A8h2hSO\nf3eQqrkc+pF/kw4RyG54EV7QtHXyTe9TVCbJUfgtliWIQt/bCoJYfPLHJaWIMs83\nbzA6ZvOjCKIfMS0CY5ZJyVaBfiI3wURSjgZIYFZAXVwbreQIfOKKuik7UVVn3xUO\nnWpmQ2zyI0W7cJMquxwLNjkI+RckPhIqxWFo5iNSV4v6pzrlHD1WmIfFGBKEn7m+\nedwVyHG53fNIFZjxyShO6Pf1vgb9Js/XmXB4lxYnNyx1tB+hQhXTjLlY6N5gPpw5\nZ/PWQc7vfYekUZGQMXhTyRxU0QTwmdEeKcb+fb9r23OH59bbAfzE10xTMzhqCd2L\nlgSozMBvMmkHb1xs1x6FFuv/U/X7LjHTrHIf4M//DNwdP4l4I1jhPlTAxwARAQAB\ntDdNb25nb0RCIDUuMCBSZWxlYXNlIFNpZ25pbmcgS2V5IDxwYWNrYWdpbmdAbW9u\nZ29kYi5jb20+iQI+BBMBAgAoBQJgLCjVAhsDBQkJZgGABgsJCAcDAgYVCAIJCgsE\nFgIDAQIeAQIXgAAKCRCwCgvR4sY8EawdD/0ewkyx3yE99K9n3y7gdvh5+2U8BsqU\n7SWEfup7kPpf+4pF5xWqMaciEV/wRAGt7TiKlfVyAv3Q9iNsaLFN+s3kMaIcKhwD\n8+q/iGfziIuOSTeo20dAxn9vF6YqrKGc7TbHdXf9AtYuJCfIU5j02uVZiupx+P9+\nrG39dEnjOXm3uY0Fv3pRGCpuGubDlWB1DYh0R5O481kDVGoMqBxmc3iTALu14L/u\ng+AKxFYfT4DmgdzPVMDhppgywfyd/IOWxoOCl4laEhVjUt5CygBa7w07qdKwWx2w\ngTd9U0KGHxnnSmvQYxrRrS5RX3ILPJShivTSZG+rMqnUe6RgCwBrKHCRU1L728Yv\n1B3ZFJLxB1TlVT2Hjr+oigp0RY9W1FCIdO2uhb9GImpaJ1Y0ZZqUkt/d9D8U2wcw\nSW6/6WYeO7wAi/zlJ25hrBwhxS2+88gM6wJ1yL9yrM9v8JUb7Kq0rCGsEO5kqscV\nAmX90wsF2cZ6gHR53eGIDbAJK0MO5RHR73aQ4bpTivPnoTx4HTj5fyhW9z8yCSOe\nBlQABoFFqFvOS7KBxoyIS3pxlDetWOSc6yQrvA1CwxnkB81OHNmJfWAbNbEtZkLm\nxs2c8CIh2R81yi6HUzAaxyDH7mrThbwX3hUe/wsaD1koV91G6bDD4Xx3zpa9DG/O\nHyB98+e983gslg==\n=IQQF\n-----END PGP PUBLIC KEY BLOCK-----\nx@xxxx:~# host pgp.mongodb.com\npgp.mongodb.com is an alias for pgp.release.build.10gen.cc.\npgp.release.build.10gen.cc has address 13.35.125.6\npgp.release.build.10gen.cc has address 13.35.125.119\npgp.release.build.10gen.cc has address 13.35.125.22\npgp.release.build.10gen.cc has address 13.35.125.23\npgp.release.build.10gen.cc has IPv6 address 2600:9000:2202:9600:1:ed10:bd80:93a1\npgp.release.build.10gen.cc has IPv6 address 2600:9000:2202:a000:1:ed10:bd80:93a1\npgp.release.build.10gen.cc has IPv6 address 2600:9000:2202:6e00:1:ed10:bd80:93a1\npgp.release.build.10gen.cc has IPv6 address 2600:9000:2202:5600:1:ed10:bd80:93a1\npgp.release.build.10gen.cc has IPv6 
address 2600:9000:2202:d600:1:ed10:bd80:93a1\npgp.release.build.10gen.cc has IPv6 address 2600:9000:2202:5a00:1:ed10:bd80:93a1\npgp.release.build.10gen.cc has IPv6 address 2600:9000:2202:800:1:ed10:bd80:93a1\npgp.release.build.10gen.cc has IPv6 address 2600:9000:2202:e800:1:ed10:bd80:93a1\n", "text": "wget -qO - https://pgp.mongodb.com/server-5.0.asc | sudo apt-key add -That looks like it worked. Thank you Jason.Will documentation be updated to use this instead?", "username": "Rishi_Panthee" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
IPv6 Support For MongoDB.org
2022-07-07T17:19:56.505Z
IPv6 Support For MongoDB.org
2,681
null
[ "aggregation", "python", "crud" ]
[ { "code": "\"rates\": [\n {\n \"category\": \"Web\",\n \"seniorityRates\": [\n {\n \"seniority\": \"junior\",\n \"rate\": 100\n },\n {\n \"seniority\": \"intermediate\",\n \"rate\": 135\n },\n {\n \"seniority\": \"senior\",\n \"rate\": 165\n }\n ]\n }\n ]\nresult = my_coll.update_many({},\n {\n \"$set\":\n {\n \"rates.$[].seniorityRates.$[j].seniority\" : new\n }\n },\n upsert=False,\n array_filters= [\n {\n \"j.seniority\": old\n }\n ]\n )\ndb.projects.updateMany({},\n {\n $set:\n {\n \"rates.$[].seniorityRates.$[j].seniority\" : \"debutant\"\n }\n },\n { arrayFilters = [\n {\n \"j.seniority\": \"junior\"\n }\n ]\n }\n)\n", "text": "Hey,I’ve been trying to modify a value in multiple arrays for a few arrays and I can’t find documentation on how to do this.My collection looks like thisI’m just trying to modify “junior” to “beginner”, this should be simple.Thanks to these answers:https://stackoverflow.com/questions/54055702/how-can-i-update-a-multi-level-nested-array-in-mongodbhttps://stackoverflow.com/questions/9611833/mongodb-updating-fields-in-nested-arrayI’ve manage to write that python code (pymongo), but it doesn’t works…The path ‘rates’ must exist in the document in order to apply array updates.It correspond to this command that doesn’t work eitherclone(t={}){const r=t.loc||{};return e({loc:new Position(\"line\"in r?r.line:this.loc.line,\"column\"in r?r.column:……)} could not be clonedWhat am I doing wrong ?Any help would be very appreciated", "username": "Timothee_Wright" }, { "code": "{\"rates.0\":{\"$exists\":true}}\n", "text": "The path ‘rates’ must exist in the document in order to apply array updates.The above might hint at the fact that some documents do not have the rates array.I would try to update only document that matches", "username": "steevej" }, { "code": "", "text": "Hey,so it was indeed simple, thanks steevej for helping me. As all docs had “rates” I thought it may not be the correct collection. And in pymongo I inverted two paramaters (so dumb to have spent this much time on that).But I wanted to try in mongo to avoid this kind of mistakes and it was not working either so I thought that the query was wrong.—> found later that the mongo query was wrong because I wrote \" arrayFilters= \" instead of “:”it’s a bit shameful…", "username": "Timothee_Wright" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't update in nested arrays
2022-08-11T02:51:20.548Z
Can&rsquo;t update in nested arrays
3,539
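For reference, the working mongosh form of the update discussed above, with the fix the poster found: arrayFilters: is an option key written with a colon, not an assignment:

```javascript
db.projects.updateMany(
  {},
  { $set: { "rates.$[].seniorityRates.$[j].seniority": "beginner" } },
  { arrayFilters: [ { "j.seniority": "junior" } ] }   // colon, not `arrayFilters =`
);
```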
null
[ "aggregation", "java", "transactions" ]
[ { "code": "", "text": "Hi Team,Can I run an update with aggregate pipleline inside transactions? Does java driver API provide support for this ?", "username": "Laks" }, { "code": "", "text": "Hi @Laks and welcome to the community!!Yes, you can utilise the aggregate pipeline for update operations starting with MongoDB 4.2.\nPlease refer the Updates with Aggregation Pipeline documentation to learn more on operations which can be used for the update operation.Does java driver API provide support for this ?Please refer to the Java Driver API Documentations to learn more.However, if you need further help, could you help with the following informations:Let us know if you need any further assistance.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "@Aasawari, Thanks for your message. Let me rephrase it this way,\nI create a mongo client session as mentioned in https://www.mongodb.com/docs/manual/core/transactions/ and run an update (multi documents) with aggregation pipeline.If update is successful, commit transaction. If there are errors, roll back the updates across all transactions.Is this update with aggregation pipeline supported in Mongo 4.4 with Java Driver 4.5?", "username": "Laks" } ]
Update with Aggregate Inside Mongo Transaction
2022-08-03T11:33:53.809Z
Update with Aggregate Inside Mongo Transaction
2,920
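A hedged mongosh sketch combining the two pieces discussed above: an updateMany whose update argument is an aggregation pipeline, executed inside a transaction. It assumes a MongoDB 4.2+ server; the database, collection, and field names are placeholders:

```javascript
const session = db.getMongo().startSession();
try {
  session.startTransaction({
    readConcern: { level: "local" },
    writeConcern: { w: "majority" }
  });
  const orders = session.getDatabase("test").orders;
  // The update is an aggregation pipeline (note the array form):
  orders.updateMany(
    { status: "pending" },
    [ { $set: { status: "done", updatedAt: "$$NOW" } } ]
  );
  session.commitTransaction();     // all updates become visible together
} catch (e) {
  session.abortTransaction();      // roll back everything on any error
  throw e;
} finally {
  session.endSession();
}
```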
null
[ "time-series" ]
[ { "code": "", "text": "Hello,when tunnelling into my mongo db via ssh port forwarding, I get quite slow data transfer response. I am using time series collections.I measured the time it takes to query 276 Million data points (about 1.2 Million 5-second bins of data in the time series db)…So, there definitely seems to be a huge overhead due to the remote data transfer, while the mongo db is actually working ok in terms of the actual query speed.What I’d like to ask:Thanks!", "username": "jayzee" }, { "code": "", "text": "Any reasons why you encrypt via SSH tunnel rather than TLS?I suspect TLS will be more efficient as there is 2 extra steps with SSH. With SSH your data is sent from the server to its local SSHd to get encrypted and then sent to the client SSHd to get decrypted before being sent to the client driver. These extra steps might have a big influence on performance with 276 million data points.But to effectively test the overhead of SSH is to perform your queries on the PC with and then without SSH. Your localhost test is useless as a comparison point with your remote PC.", "username": "steevej" }, { "code": "", "text": "Hey Steeve,thanks for your reply. Can you give me some buzzwords on how to set up TLS?The comparison I made includes the overhead caused by SSH + transferring the data between the PC’s via WiFi. So, what you are suggesting is to disentangle the two in a test, in order to find out the overhead JUST caused by SSH, right?Thanks!", "username": "jayzee" }, { "code": "", "text": "Start with https://www.mongodb.com/docs/manual/tutorial/configure-ssl/So, what you are suggesting is to disentangle the two in a test, in order to find out the overhead JUST caused by SSH, right?yes", "username": "steevej" }, { "code": " tls: \n mode: allowTLS\n certificateKeyFile: PATH_TO_MONGO\\bin\\mongod-cert.pem\n allowConnectionsWithoutCertificates: true\n allowInvalidCertificates: true\n allowInvalidHostnames: true\nimport pymongo \nclient = pymongo.MongoClient('mongodb://localhost:27017/',tls=True,tlsCAfile=\"PATH_TO_MONGO\\\\cert.pem\")\nServerSelectionTimeoutError: 192.168.X.XXX:27017: timed out, Timeout: 30s, Topology Description: \n<TopologyDescription id: 62f22029b65b0358c6e03ca4, topology_type: Single, servers: \n[<ServerDescription ('192.168.X.XXX', 27017) server_type: Unknown, rtt: None, \nerror=NetworkTimeout('192.168.X.XXX:27017: timed out')>]>\n# where and how to store data.\nstorage:\n dbPath: D:\\MongoDB\\Server\\5.0\\data\n journal:\n enabled: true\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: PATH_TO_LOG\\mongod.log\n quiet: true\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1, localhost, 192.168.X.XXX \nssh -N -L 8000:192.168.X.XXX:27017 [email protected]\nclient = pymongo.MongoClient('mongodb://localhost:8000/')\n", "text": "Hello Steeve,I finally got the time to try switching to TLS.\nSo, firstly what I did was editing the net settings in the config file like this:I use the minimum security settings and a self-signed certificate for now, for two reasons:\n(i) I just want to test whether it works at the minimum settings, then, later on, increase the security level, again\n(ii) I am connecting to the mongodb within a private network, and I am not planning to use it to serve external clients, just my own pc inside the network. 
Therefore, I guess, it is ok to use a self-signed certificate.On my host pc for the mongodb, I managed to connect to the mongodb like this from python:I tested the connection and it worked fine.Next, I tried to connect to my mongodb from another pc (the “client”) inside the same private router network. This always gives me the following error, when trying to request data from the host:Apparently, the client cannot connect to the hosted mongo db in time. The remaining mongo conf settings that I use are:I tried different combinations of the IPs listed in bindip, including adding 0.0.0.0. Nothing prevents the error from occurring. There must be something I have done wrong.Usually, as stated above, I connect to the db via SSH port forwarding. So, on my client ubuntu machine, I start a terminal forwarding the host’s 27017 port to the client’s 8000 port:On the client, once the tunnel is up, I connect like this:This also now works with the tls option enabled, as in the python code above, but only when the SSH tunnel is up (of course). However, I think there is no point in using TLS via the SSH tunnel, because the speed of transfer will still be limited by the SSH tunnel, right? Therefore, I am trying to now connect directly without the SSH tunnel, which however gives the timeout error.Can you help me with this? Any ideas?Thank you!Best, JZ", "username": "jayzee" }, { "code": "ServerSelectionTimeoutError: 192.168.X.XXX:27017: timed out", "text": "Is the certificate available on the other PC?Share the python code you use on the other PC when it failed withServerSelectionTimeoutError: 192.168.X.XXX:27017: timed out", "username": "steevej" }, { "code": " net:\n port: 27017\n bindIp: 127.0.0.1, 0.0.0.0 \nuri_mongo = 'mongodb://192.168.X.XXX:27017/'\nclient = pymongo.MongoClient(uri_mongo)\ndb = client['db_name']\ndb.list_collection_names()\nuri_mongo = 'mongodb://localhost:8000/'\n", "text": "I found that it is not a certificate or TLS related issue, because the timeout error also occurs without it. In that case the net settings look like that:I also tried it with only 0.0.0.0.The python code when trying to connect directly without SSH is this:This results in the timeout error.For SSH, I forward the port 27017 of the db host to port 8000 of the client and can run the code above successfully, when replacing uri_mongo with:My client machine runs on Ubuntu 22.04 LTS and my PC on Windows 11. I thought I might have to open the Windows Firewall. I added in- and outbound permission rules for mongo.exe, mongos.exe, mongod.exe and the port 27017 TCP in general. It still does not work. I am out of ammo at this point.Thanks! Best, JZ", "username": "jayzee" }, { "code": "", "text": "Hello,I solved the problem. It was indeed a windows firewall problem with the port settings, and following these steps in the mongo docs solved it:", "username": "jayzee" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Measure ssh transfer speed of mongo db data?
2022-07-11T07:39:01.283Z
Measure ssh transfer speed of mongo db data?
3,696
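A hedged Node.js counterpart to the pymongo TLS connection above, for connecting directly (no SSH tunnel) with a self-signed CA. The host and file paths are placeholders:

```javascript
const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb://192.168.X.XXX:27017/", {
  tls: true,
  tlsCAFile: "/path/to/cert.pem",     // the self-signed CA used by the server
  tlsAllowInvalidHostnames: true      // only while testing a self-signed setup
});
await client.connect();
console.log(await client.db("db_name").listCollections().toArray());
```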
null
[ "queries" ]
[ { "code": "articleauthorviewscontentdb.article.find({})contentdb.article.find({}, {content: 0})", "text": "Hello, say I have a collection called article which has author, views and contentFor some APIs I want to return the whole document, so, I query db.article.find({})In others I exclude content, so, I query db.article.find({}, {content: 0})From the database engine perspective, is this optimization helpful with reducing the working set’s size (compared to returning the full document)?Thank you", "username": "Khaled_ElNaggar" }, { "code": "content", "text": "Hi,Yes, it’s optimized since content information will not be passed through network, which means faster response.", "username": "NeNaD" }, { "code": "_id:0_id$project", "text": "Hi @Khaled_ElNaggar,As confirmed by @NeNaD, projection can reduce the size of results returned over the network by removing unnecessary fields from result documents.Reducing the size of result documents will likely benefit the working set for your client application that has to manipulate result documents, but it generally does not reduce the working set for your MongoDB deployment.From the database engine perspective, is this optimization helpful with reducing the working set’s size (compared to returning the full document)?There are two cases to consider with respect to working set impact and projections for a query:The special case of a covered query that can be satisfied entirely using an index (so the original document doesn’t need to be in memory to satisfy this query). Projection can be useful to ensure a query is covered (for example, specifying _id:0 for a secondary index that does not include the _id field). However, the size of the covering index adds to your working set and documents that are frequently or recently accessed will also be in the working set (possibly leading to more memory usage than without the covering index).The general case of a query that is not covered (such as your example above). Projection will not reduce the size of the working set: the full document will be loaded in memory (uncompressed) in order to select the required fields. Large documents where you are frequently working with a small subset of data are typically a schema design anti-pattern.For more information on document size and working set impact, please see:One more projection caveat (and a common misstep) is using $project early in an aggregation pipeline as an attempted optimisation. The aggregation framework automatically does dependency analysis to determine which fields are needed for subsequent stages. Early projection is redundant and can lead to less optimal memory usage for pipeline execution.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you for your thorough answer. Really helped.", "username": "Khaled_ElNaggar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does excluding fields reduce working set size?
2022-08-10T13:39:58.461Z
Does excluding fields reduce working set size?
1,718
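A small mongosh sketch of the covered-query case from Stennie's first point, using the thread's field names; the exact index is an illustrative choice:

```javascript
db.article.createIndex({ author: 1, views: 1 });
// Covered: the filter and projection are both satisfied by the index, and _id is excluded.
db.article.find({ author: "a" }, { _id: 0, author: 1, views: 1 })
  .explain("executionStats");
// A covered plan shows totalDocsExamined: 0 and an IXSCAN with no FETCH stage.
```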
https://www.mongodb.com/…bc9aa8e071c0.png
[ "installation" ]
[ { "code": "", "text": "Hi,\nI followed the instructions in https://docs.mongodb.com/manual/tutorial/install-mongodb-on-windows-unattended/\n\nmongod872×158 9.07 KB\nbut the mongod.exe is missing (only mongo.exe and mongos.exe are present) and I can read in the installation logsCA - YAML_FILE = C:\\Program Files\\MongoDB\\Server\\5.0\\bin\\mongod.cfg\nReceived GetLastError 2\nMSI (s) (9C:28) [10:07:15:011]: Executing op: ActionStart(Name=RegisterProduct,Description=Registering product,Template=[1])\nFailed to find yaml fileAlso the file mongod.cfg is missing.All I did was execute the commandmsiexec.exe /l*v mdbinstall.log /qb /i mongodb-windows-x86_64-5.0.5-signed.msiWhy is this installation failing?\nThanks", "username": "Andrea_Paolo_Ferraresi" }, { "code": "", "text": "I think you have to mention the components using ADDLOCAL\nCheck different options like with Compass/without Compass from the link you shared", "username": "Ramachandra_Tummala" }, { "code": "", "text": "A post was split to a new topic: Mongo.exe file is missing in bin folder", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Mongod.exe is missing - mongod.cfg Failed to find yaml file
2022-01-09T09:37:41.168Z
Mongod.exe is missing - mongod.cfg Failed to find yaml file
6,150
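A hedged sketch of the unattended install with explicit components, following the ADDLOCAL suggestion above; the component names follow the MongoDB Windows installation docs and should be verified against the options listed for your MSI version.

```bat
rem Install the server service and client tools, skipping Compass
msiexec.exe /l*v mdbinstall.log /qb /i mongodb-windows-x86_64-5.0.5-signed.msi ^
    ADDLOCAL="ServerService,Client" SHOULD_INSTALL_COMPASS="0"
```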
null
[ "connecting", "installation" ]
[ { "code": "", "text": "For some reason, I can't link my SSH to my instance of MongoDB, even though I'm on the right port number. This is a picture of what it's showing: https://i.stack.imgur.com/fg82v.png It still says “waiting for connection” even though my shell is running.\nI've been trying to install/connect to MongoDB for the last three hours and am extremely frustrated. Any help is appreciated.", "username": "Jason_N_A1" }, { "code": "mongoshmongodFeatureCompatibilityVersion", "text": "Hi @Jason_N_A1 and welcome to the MongoDB community forums. Out of curiosity, have you tried running any commands in the mongosh shell? If so, what errors do you get?\nCan you show how you're running the mongod command? Are you supplying any parameters from the command line? Unfortunately that info is in the logs just a few lines before what's been captured in the screen shot.\nI see a message about FeatureCompatibilityVersion being 5.0.0, but according to the shell you're using MongoDB 6.0.0. Did you upgrade from an older version of MongoDB?\nI see you have Compass installed. What happens if you try to connect to the server from Compass? Do you see your list of databases?", "username": "Doug_Duncan" } ]
Port 27017 cannot connect
2022-08-10T20:39:50.241Z
Port 27017 cannot connect
2,035
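A quick way to test the connectivity questions raised above, assuming a default local deployment; the URI is a placeholder.

```sh
# Ping the server from the shell; a reply of { ok: 1 } confirms the connection
mongosh "mongodb://localhost:27017" --eval "db.runCommand({ ping: 1 })"
```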
null
[ "java", "atlas", "spring-data-odm" ]
[ { "code": "", "text": "Hi,\nWe are exploring MongoDB Atlas for our application. Currently we are using M2 (General) for a POC.\nOur data connection and other parameters are well under control, but there are frequent restarts from the server, which looks odd.\nOur application is a Spring Boot (2.6.8) application using spring-boot-starter-data-mongodb; the Java version is 11.\nThanks", "username": "Prabhat_Kumar2" }, { "code": "", "text": "Hi @Prabhat_Kumar2 - Welcome to the community.\nWith regards to the M2 shared tier cluster issues you're experiencing, I would recommend you contact the Atlas support team via the in-app chat to investigate any operational issues related to your Atlas account. You can additionally raise a support case if you have a support subscription. The community forums are for public discussion and we cannot help with service or account / billing enquiries.\nSome examples of when to contact the Atlas support team:\nBest Regards,\nJason Tran", "username": "Jason_Tran" } ]
MongoDB Atlas Unhealthy
2022-08-11T02:06:53.333Z
MongoDB Atlas Unhealthy
1,839
null
[ "server", "release-candidate", "upgrading" ]
[ { "code": "", "text": "MongoDB 5.0.11-rc1 is out and is ready for testing. This is a release candidate containing only fixes since 5.0.10. The next stable release 5.0.11 will be a recommended upgrade for all 5.0 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 5.0.11-rc1 is released
2022-08-10T22:08:22.449Z
MongoDB 5.0.11-rc1 is released
2,185
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 6.0.1-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 6.0.0. The next stable release 6.0.1 will be a recommended upgrade for all 6.0 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 6.0.1-rc0 is released
2022-08-10T20:43:26.836Z
MongoDB 6.0.1-rc0 is released
2,113
null
[ "queries", "node-js", "next-js" ]
[ { "code": "isConnectedexport async function getServerSideProps(context) {\n try {\n await clientPromise\n return {\n props: { isConnected: true },\n }\n } catch (e) {\n console.error(e)\n return {\n props: { isConnected: false },\n }\n }\n}\nimport { clientPromise } from \"../../util/mongodb\";\n\nexport default async (req, res) => {\n const client = await clientPromise\n const db = client.db('sample_mflix')\n\n const movies = db\n .collection(\"movies\")\n .find({})\n .sort({ metacritic: -1 })\n .limit(20)\n .toArray();\n\n res.json(movies);\n}\nclientPromiseconnectToDatabasemongodb.jsTypeError: Cannot read property 'db' of undefinedconst db = client.db('sample_mflix')", "text": "Hi all, I’m new to MongoDB and Next.js and am following the MongoDB Next.js integration tutorial working in the project cloned from this example repo.I am able to get everything in the example repo to work properly, with isConnected to return true from this snippet in /pages/index.js:`\nThen I added an API endpoint at /pages/api/test.js like so:This is the exact same code as provided in the tutorial except that I am importing and using clientPromise instead of connectToDatabase due to updates to the mongodb.js utility file since the tutorial was originally posted. When I try and access the endpoint at localhost:3000/api/test, I get TypeError: Cannot read property 'db' of undefined on the line const db = client.db('sample_mflix').It seems like clientPromise is returning undefined but I cannot figure out why. Can someone help me understand? Thanks so much.", "username": "amy" }, { "code": "import { clientPromise } from \"../../util/mongodb\";\nimport clientPromise from \"../../util/mongodb\";\nimport clientPromise from \"../../util/mongodb\";\n\nexport default async (req, res) => {\n try {\n const client = await clientPromise;\n const db = client.db(\"sample_mflix\");\n\n const movies = await db\n .collection(\"movies\")\n .find({})\n .sort({ metacritic: -1 })\n .limit(10)\n .toArray();\n\n res.status(200).json(movies);\n } catch (error) {\n console.log(error);\n }\n}\n", "text": "Hi @amy,Welcome to MongoDB Community forums Due to some recent changes in Next.js, the article needs some revision. Although here you go:So, first, you need to modifyto without curly parenthesis as stated hereAnd then the following code is as follows: If you have any doubts, please feel free to reach out to us.Best,\nKushagra Kesav", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thank you! This solved my problem ", "username": "amy" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot connect to MongoDB using Next.js tutorial
2022-08-05T04:21:17.796Z
Cannot connect to MongoDB using Next.js tutorial
4,325
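For reference, a minimal sketch of what the updated util/mongodb.js helper exports; the real file in the example repo also caches the client across hot reloads in development, which is omitted here.

```js
// util/mongodb.js (simplified sketch)
import { MongoClient } from "mongodb";

const uri = process.env.MONGODB_URI;
const client = new MongoClient(uri);

// connect() resolves to the connected client; await it wherever it is used
const clientPromise = client.connect();

// Default export, hence `import clientPromise from ...` without curly braces
export default clientPromise;
```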
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "lookupselect: false", "text": "Is there a way to automatically exclude certain (sensitive) fields from lookup calls in aggregations? I don’t want to always have to remember to remove sensitive data. Mongoose has select: false but I noticed that this doesn’t work with lookup and I have to remove fields manually.", "username": "Florian_Walther" }, { "code": "", "text": "I guess you could create a view with lookup in the definition that has a projection in the subpipeline, and query that view instead of the original collections.", "username": "Katya" }, { "code": "", "text": "Thank you, I will look into it!", "username": "Florian_Walther" } ]
Default exclude in aggregations?
2022-08-10T08:49:38.511Z
Default exclude in aggregations?
1,385
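A sketch of the view suggestion above, using hypothetical orders/users collections; the $lookup sub-pipeline strips the sensitive fields before the view is ever queried (the localField/foreignField-plus-pipeline form needs MongoDB 5.0+).

```js
db.createView("ordersWithSafeUsers", "orders", [
  {
    $lookup: {
      from: "users",
      localField: "userId",
      foreignField: "_id",
      // Sensitive fields never leave the sub-pipeline
      pipeline: [{ $project: { password: 0, email: 0 } }],
      as: "user",
    },
  },
]);

// Query the view instead of the base collections
db.ordersWithSafeUsers.find();
```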
null
[ "aggregation" ]
[ { "code": "", "text": "Hi, I have two collections. One with product details:{\nproduct_id:…,\nproduct_name:…\n}and another with product components:{\nproduct_id:…,\ncomponent_name:…,\n…\n}with a one-to-many relationship. One product can have many product components. When I do a query with lookup, it is too slow. I am just trying to retrieve 1000 documents out of 3000. Do I have to add any index?", "username": "Udhayha_Karthik" }, { "code": "", "text": "Do I have to add any index?Yes, having the appropriate indices is the first step to decent performance with any system.\nThe other step is to have sufficient hardware for your use cases; that means enough RAM, CPU and permanent storage.", "username": "steevej" }, { "code": "$lookupproduct_id", "text": "Your product component collection must absolutely have an index on the field by which you are doing the $lookup (most likely that's product_id?). If you provide the actual pipeline that's slow, along with what version, etc. you are running, we might be able to help further.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB lookup is slow
2022-08-10T12:45:30.564Z
MongoDB lookup is slow
3,259
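A sketch of the indexing advice above; the collection names are guesses since the thread doesn't show the actual pipeline.

```js
// Index the foreign-key field so each $lookup probe is an index seek,
// not a scan of the components collection
db.productcomponents.createIndex({ product_id: 1 });

db.products.aggregate([
  {
    $lookup: {
      from: "productcomponents",
      localField: "product_id",
      foreignField: "product_id",
      as: "components",
    },
  },
]);
```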
null
[ "change-streams" ]
[ { "code": "", "text": "How do we handle MongoDB change stream queries whose getMore operations are reported as COLLSCANs on the server? Is there a way to avoid the COLLSCAN? Or is there a way to reduce the impact of change streams on performance?", "username": "thota_38291" }, { "code": "", "text": "Changestreams do not do a traditional COLLSCAN because they are not querying any of your collections; they are watching/querying the oplog, a special system collection also used for replication.\nCan you explain what's behind your question, since a changestream getMore usually just gets data from the tail end of the oplog? What's the reason that you think it's impacting your performance?Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Hello, I have a similar issue and hope this will not be considered hijacking.\nI have a number of running change-streams that all appear in currentOp's output with\n“planSummary”: “COLLSCAN”\nAll report an execution time of around 900ms.\nI don't know whether the change-streams are impacting the DB performance, but the currentOp output is worrying.\nPlease advise", "username": "Chen_Levy" }, { "code": "", "text": "Not sure why it's worrying - see my response above.", "username": "Asya_Kamsky" }, { "code": "", "text": "Thanks @Asya_Kamsky for the prompt response.\nWhat was worrying to me was the 900ms execution time; we monitor execution times to make sure all of our queries are performant.\nJudging by your answer, I now assume a change-stream's execution time is considered a non-busy wait on the DB.\nIf my assumption is wrong, please let me know.\nChen", "username": "Chen_Levy" }, { "code": "", "text": "Your assumption is correct. We've actually been considering how we can more appropriately mark change streams and other such queries so as not to “trip” any such monitoring…Asya", "username": "Asya_Kamsky" } ]
Mongodb changestream getmore runs as collscans
2022-04-27T15:17:10.347Z
Mongodb changestream getmore runs as collscans
3,687
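A small Node.js sketch of the kind of change stream being discussed (requires a replica set; the URI and names are placeholders). The server-side COLLSCAN reported for its getMore is over the oplog, not the watched collection.

```js
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI
await client.connect();

// watch() tails the oplog; no traditional collection scan of `orders` happens
const stream = client.db("test").collection("orders").watch([
  { $match: { operationType: "insert" } },
]);

stream.on("change", (event) => {
  console.log("insert seen:", event.documentKey);
});
```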
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "Hello, when I use mongodump to back up data or mongorestore to restore data, it always takes too much time. Is there any way to reduce the time cost?mongod version: v4.4.8\ndata size: 371GB\nrestore time: about 5.5 hours\nOS version: Ubuntu 20.04.2", "username": "zhou_yi" }, { "code": "mongodmongodumpmongod--gzipdbPathmongodumpmongorestore--numParallelCollections=<int>--numInsertionWorkersPerCollection=<int>", "text": "Hi @zhou_yi,Can you share some more details:How much RAM do you have available for mongod?What sort of storage are you using?What sort of deployment are you backing up (standalone, replica set, or sharded cluster)?Is 371GB the uncompressed data size or the compressed storage size (size on disk)?mongodump has to read and uncompress all data via the mongod process, so system resources will be limiting factors in performance. If you have CPU to spare, compressing the output with --gzip can reduce the amount of I/O for writing the dump to disk. You can also reduce I/O contention by dumping to a separate disk from the one hosting your dbPath and pausing writes to your deployment while taking the mongodump.mongorestore has to recreate all data files and rebuild indexes, so available resources will again be a limiting factor. If your resources are not maxed out, you could try adjusting concurrency via --numParallelCollections=<int> (default is 4) and --numInsertionWorkersPerCollection=<int> (default is 1).If you are looking for a faster backup and restore approach, I suggest using filesystem snapshots or copying the underlying data files per Supported Backup Methods. These approaches do not require reading all of the uncompressed data into memory or recreating the data files, so they are much better suited to backing up production deployments (especially where total data is significantly greater than available RAM).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Stennie_X, thanks for your reply. The details are as follows:Available RAM for mongod: total available memory is 20GB, but I see the documentation mentions that the WiredTiger engine will use 50% of (RAM - 1 GB), so it may be 9.5GB?Storage type: WiredTigerDeployment type: sharded cluster, but the router, config server and shard are all on a single server371GB is the uncompressed data sizeI will learn and try the approach in your suggestion, thanks a lot!", "username": "zhou_yi" }, { "code": "", "text": "If more than one mongod is running on a single server, each mongod will have far less RAM thanmay be 9.5GBAll the mongod components running on the same single server will fight for the same limited set of resources, making the system slow. Having more than 1 mongod running on the same server should only be done for testing.", "username": "steevej" } ]
Why mongodump or mongorestore so slow?
2022-08-09T12:57:57.122Z
Why mongodump or mongorestore so slow?
6,144
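The flags mentioned in the thread, put together in one hedged example; paths and the URI are placeholders.

```sh
# Compress the dump to cut write I/O (costs CPU)
mongodump --uri="mongodb://localhost:27017" --gzip --out=/backup/dump

# Raise restore concurrency if CPU/disk are not already maxed out
mongorestore --uri="mongodb://localhost:27017" --gzip \
    --numParallelCollections=8 --numInsertionWorkersPerCollection=4 /backup/dump
```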
null
[ "atlas-cluster" ]
[ { "code": "", "text": "Hello guys, the connection to my Mongo Atlas DB times out with the following error, description: “cluster0-shard-00-01-pri.mghyr.mongodb.net:27017: timed out,cluster0-shard-00-00-pri.mghyr.mongodb.net:27017: timed out,cluster0-shard-00-02-pri.mghyr.mongodb.net:27017: timed out, Timeout: 30s, Topology Description: <TopologyDescription id: 62f0df03e2be3642332186bb, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription (‘cluster0-shard-00-00-pri.mghyr.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=NetworkTimeout(‘cluster0-shard-00-00-pri.mghyr.mongodb.net:27017: timed out’)>, <ServerDescription (‘cluster0-shard-00-01-pri.mghyr.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=NetworkTimeout(‘cluster0-shard-00-01-pri.mghyr.mongodb.net:27017: timed out’)>, <ServerDescription (‘cluster0-shard-00-02-pri.mghyr.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=NetworkTimeout(‘cluster0-shard-00-02-pri.mghyr.mongodb.net:27017: timed out’)>]>”\nIt used to work perfectly and I didn't make any change; I also didn't get any alert about the replica set having no primary.", "username": "Prometheus" }, { "code": "", "text": "If you don't have a static IP contract, there might be an IP change on your app's side. I suggest you check your IP access list (and other network security) and the host's current IP.", "username": "Yilmaz_Durmaz" } ]
ServerSelectionTimeoutError NetworkTimeout, topology_type: ReplicaSetNoPrimary
2022-08-10T14:38:51.022Z
ServerSelectionTimeoutError NetworkTimeout, topology_type: ReplicaSetNoPrimary
2,463
null
[ "atlas" ]
[ { "code": "exports = function(changeEvent) {\n  const collection = context.services.get('mycluster').db(\"mydb\").collection(\"mycollection\");\n  collection\n    .updateOne(\n      { _id: changeEvent.documentKey._id },\n      { $set: { updatedDate: new Date() } }\n    )\n    .catch(err => console.error(`Failed to add updatedDate: ${err}`));\n\n  return;\n};", "text": "I am trying to create some triggers when inserting or updating a document in my collection, but I found a problem, or I am doing something wrong…\nWhen I create a function that updates the document, the trigger keeps firing because I call the update function again, so this makes a loop updating the date. Is there any way to just update my document once, after or before inserting/updating the document?", "username": "Miguel_Angel_Arcinie" }, { "code": "", "text": "Hi Miguel!At a high level, you'll want to add logic to the match expression of your Trigger or within your Trigger's function. Some approaches that might work:Hope that helps!\nDrew", "username": "Drew_DiPalma" }, { "code": "", "text": "Hi Drew! Thank you for sharing. Can you add an example?", "username": "Eduardo_Cuomo" } ]
How to use triggers to update a document on insert
2020-03-13T19:31:00.977Z
How to use triggers to update a document on insert
7,579
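In response to the request for an example, a sketch of the first suggestion applied to the question's function: bail out when the event was caused by the trigger's own $set, so the loop breaks. Names follow the original snippet.

```js
exports = function (changeEvent) {
  // updateDescription only exists for update events; for inserts we proceed
  const updatedFields =
    (changeEvent.updateDescription && changeEvent.updateDescription.updatedFields) || {};

  // If this event was produced by our own $set below, stop here
  if (updatedFields.hasOwnProperty("updatedDate")) {
    return;
  }

  const collection = context.services
    .get("mycluster")
    .db("mydb")
    .collection("mycollection");

  return collection.updateOne(
    { _id: changeEvent.documentKey._id },
    { $set: { updatedDate: new Date() } }
  );
};
```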
null
[ "replication" ]
[ { "code": "", "text": "I have 2 questions when it comes to creating replica sets with MongoDB. The questions are as follows: 1. Can all the members of a replica set run on the same machine? 2. Can the members run on machines in different networks/locations, given the network latency between them?", "username": "Master_Selcuk" }, { "code": "", "text": "The answer to your first question is yes, all nodes can be on the same machine. I do this all the time for testing purposes and have had a 15-node cluster running locally on my MacBook. This is not recommended however, as the instances will be fighting for the same resources and performance will suffer. The other problem with this is that if the machine dies, then you've lost access to your entire database.The answer to your second question is yes as well. You just need to make sure that the proper firewall rules are in place to allow connections between the instances. As you mention, network latency comes into play here. Best practice is to have your instances placed in different data centers/network routes so it's less likely that all instances are impacted by an outage in one location.", "username": "Doug_Duncan" }, { "code": "", "text": "Thanks for the answers. They were in good detail and simple.", "username": "Master_Selcuk" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Replica Set Configuration Questions
2022-08-10T13:22:21.226Z
Replica Set Configuration Questions
1,099
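A sketch of the single-machine, testing-only setup described above; paths and ports are illustrative.

```sh
# Three mongod processes on one machine (testing only, never production)
mongod --replSet rs0 --port 27017 --dbpath /data/rs0-0 --fork --logpath /data/rs0-0.log
mongod --replSet rs0 --port 27018 --dbpath /data/rs0-1 --fork --logpath /data/rs0-1.log
mongod --replSet rs0 --port 27019 --dbpath /data/rs0-2 --fork --logpath /data/rs0-2.log

# Then initiate the set from mongosh on port 27017:
#   rs.initiate({ _id: "rs0", members: [
#     { _id: 0, host: "localhost:27017" },
#     { _id: 1, host: "localhost:27018" },
#     { _id: 2, host: "localhost:27019" } ] })
```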
null
[ "python", "atlas-cluster" ]
[ { "code": "", "text": "Hi guys, I’m trying to connect to my mongo atlas from my machine, but I’m getting a serverSelectionTimeoutError, i have added my IP address to he access list still doesn’t work, but strangely if allow access from all origin, it connects successfully, I’m clueless as it stands, and would appreciate any help/pointers.pymongo.errors.ServerSelectionTimeoutError: cluster0-shard-00-00.mghyr.mongodb.net:27017: timed out,cluster0-shard-00-02.mghyr.mongodb.net:27017: timed out,cluster0-shard-00-01.mghyr.mongodb.net:27017: timed out, Timeout: 30s, Topology Description: , , ]>I’m Using the motor AsyncIOMotorClient driver in my FastApi app, happy to provide more info if needed", "username": "Prometheus" }, { "code": "mongomongoshCompass", "text": "can you connect from terminal with mongo or mongosh, and with Compass, without a problem?", "username": "Yilmaz_Durmaz" }, { "code": "connect ETIMEDOUT 52.49.220.55:27017", "text": "Hi, Thanks for your response, I am unable to connect with compass when i tried, got this error connect ETIMEDOUT 52.49.220.55:27017, haven’t tried with mongosh.", "username": "Prometheus" }, { "code": "connect ETIMEDOUT 52.49.220.55:27017", "text": "I am unable to connect with compass when i tried, got this error connect ETIMEDOUT 52.49.220.55:27017 , haven’t tried with mongoshif even Compass gives time out error, then others will do the same. this also means the problem you are having is not related to pymongo, or to similar drivers.although it is pretty simple to do, you need to check the way you set the IP access list. you may have set a short duration and forgot about it.also, check if you are on a shared network and someone resets your router so it gets another IP after you set one in the access list.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "HI, thanks again for your response, so I fixed the access list and now I can connect from compass, apparently the security group was causing the issue, I changed it for my ip and it works, but my hosted machine still cannot connect", "username": "Prometheus" }, { "code": "", "text": "it is possible your “hosted” machine might be using some other IP. can you give more details about it? the difference between the pc you used Compass and the system your app runs!?if it is on some cloud provider, then the IP it has will surely be different. 
or if you use some complex network settings so that the app is in a virtual machine having its own external IP address, then you will also have to add its own IP.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hi, Thanks alot for your responses, finally fixed it, i was actually connecting via vpc, i didn’t set my route table properly.", "username": "Prometheus" }, { "code": "", "text": "Hi again… so aparrently i solved that isusue but this one just popped up, something about ReplicaSetNoPrimary, i’m going to create a new topic, just wanted to pin it here too, any help will be appreciated.description: “cluster0-shard-00-01-pri.mghyr.mongodb.net:27017: timed out,cluster0-shard-00-00-pri.mghyr.mongodb.net:27017: timed out,cluster0-shard-00-02-pri.mghyr.mongodb.net:27017: timed out, Timeout: 30s, Topology Description: <TopologyDescription id: 62f0df03e2be3642332186bb, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription (‘cluster0-shard-00-00-pri.mghyr.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=NetworkTimeout(‘cluster0-shard-00-00-pri.mghyr.mongodb.net:27017: timed out’)>, <ServerDescription (‘cluster0-shard-00-01-pri.mghyr.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=NetworkTimeout(‘cluster0-shard-00-01-pri.mghyr.mongodb.net:27017: timed out’)>, <ServerDescription (‘cluster0-shard-00-02-pri.mghyr.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=NetworkTimeout(‘cluster0-shard-00-02-pri.mghyr.mongodb.net:27017: timed out’)>]>”", "username": "Prometheus" } ]
Pymongo ServerSelectionTimeoutError
2022-08-04T16:16:51.626Z
Pymongo ServerSelectionTimeoutError
7,078
null
[ "aggregation", "node-js" ]
[ { "code": "\t\t\t\t{\n\t\t\t\t\t$lookup: {\n\t\t\t\t\t\tfrom: 'distributionchannels',\n\t\t\t\t\t\tlocalField: '_id',\n\t\t\t\t\t\tforeignField: 'feedbackId',\n\t\t\t\t\t\tas: 'distributionchannels',\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t$unwind: '$distributionchannels',\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t$lookup: {\n\t\t\t\t\t\tfrom: 'feedbackquestions',\n\t\t\t\t\t\tlocalField: '_id',\n\t\t\t\t\t\tforeignField: 'feedbackId',\n\t\t\t\t\t\tas: 'feedbackquestions',\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t$unwind: '$feedbackquestions',\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t$lookup: {\n\t\t\t\t\t\tfrom: 'feedbackresponses',\n\t\t\t\t\t\tlocalField: '_id',\n\t\t\t\t\t\tforeignField: 'feedbackId',\n\t\t\t\t\t\tas: 'feedbackresponses',\n\t\t\t\t\t},\n\t\t\t\t},\n", "text": "Hello, good day all,I have used the lookUp aggregation pipeline to join 3 collections, is there a way to ignore null returns such that if one of the lookUp aggeration pipeline stage returns null, a response is sent for the stage which does not return null?so if distributionchannels is found and feedbackresponses, feedbackquestions are not found, it should return the pipeline with distributionchannels found instead of returning an empty arrayI have created a wildcard text index, I would love to know how partial search can be enabled i.e if I have the sentence \" Hello World\" and i search for “He” i should find “Hello World”", "username": "Seghosimhe_David" }, { "code": "false$regex", "text": "Hi @Seghosimhe_David,it should return the pipeline with distribution channels found instead of returning an empty arrayDid you try using the preserveNullAndEmptyArrays option in your $unwind stage?\nWhen it is set to false, $unwind does not output a document for null, missing, or an empty array.If it doesn’t help, can you share a few example documents from all your collections along with the expected output from the pipeline?If the aggregation pipeline is a frequently used query in your application, please consider changing your data model to accommodate this query better. Take a look at this Data Modeling guide and you can also consider taking our Free M320: Data Modeling course on MongoDB University.how partial search can be enabledDepending on your use case, if you’re using Atlas to manage your database deployment, please try Atlas Search.\nHowever, if your database deployment is not hosted on Atlas, you could try $regex, which provides regular expression capabilities for matching strings pattern in queries.If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
lookUp aggregation pipeline and text search
2022-07-30T08:49:34.859Z
lookUp aggregation pipeline and text search
2,051
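A sketch of the suggested preserveNullAndEmptyArrays option; the joined collection names come from the thread, while the base collection name is a guess. Documents with no matching distributionchannels are kept instead of being dropped by the $unwind.

```js
db.feedbacks.aggregate([
  {
    $lookup: {
      from: "distributionchannels",
      localField: "_id",
      foreignField: "feedbackId",
      as: "distributionchannels",
    },
  },
  {
    // Keep feedback documents even when the joined array is empty
    $unwind: {
      path: "$distributionchannels",
      preserveNullAndEmptyArrays: true,
    },
  },
]);
```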
null
[]
[ { "code": "", "text": "I have a collection with fields date, counter and values like this:\ncounter : 19134\ndate : 2022-08-09T06:15:42.646+00:00When I create a chart with counter on X axis and date on Y Axis both as unwind array the date field is recognized correctly as date - I can bin it as Date of month, and on the graph I can see values 09-Aug-2022But when I switch to Filter and add show only Period: Previous 1 week it doesn’t work. It shows all values from the history.Any hint is welcome…", "username": "psmith" }, { "code": "", "text": "Hi @psmith - Are you able to share a screenshot of the chart? I can’t think of a reason why this wouldn’t work, but a screenshot may reveal some clues.Tom", "username": "tomhollander" }, { "code": "", "text": "Hi\ntake a look at these pictures. First one shows what is on axis, second the filter.\nAxis\n\nimage809×728 84.7 KB\n", "username": "psmith" }, { "code": "", "text": "Filter:\n\nimage773×737 73.8 KB\n", "username": "psmith" }, { "code": "[{$unwind: \"$auctions\"}]\n", "text": "Thanks, this helps. The reason this is not working as expected is because your dates are stored in an array. The filter is being applied before the array is unwound so it is filtering for documents where any element of the array meets that criteria, and then when the array is unwound you are seeing some values that do not match the filter.The solution is to unwind the array in the query bar rather than using the Array Reductions option. You can do this with the following query:This will ensure the array is unwound before the filter is applied, so it should filter out all the values you expect.HTH\nTom", "username": "tomhollander" }, { "code": "", "text": "That worked.\nThank you!", "username": "psmith" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot filter charts using date field
2022-08-09T11:31:21.352Z
Cannot filter charts using date field
2,475
null
[ "queries", "node-js" ]
[ { "code": "", "text": "Hello,\nCurrently I am facing a issue in sorting my database. I have a database that contains 8k documents with a title field in it. I want to sort data in alphabetical order i.e. title with first letter A comes first, then so on to Z. But I am getting documents that starts with symbols in the beginning. like #facebook,&google etc. Please provide me the solution on how to achive this usecase using mongodb queries and that should be fast too.", "username": "Ashutosh_Mishra1" }, { "code": "", "text": "Hi @Ashutosh_Mishra1 and welcome in the MongoDB Community !When you want to alter the sorting algorithm / behaviour, you use collation. I’m not sure if there is actually a solution for exactly your use case, but if it exists, it’s in here.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "I couldn’t find any… Please someone help me to provide exact solution.", "username": "Ashutosh_Mishra1" }, { "code": "", "text": "What do you want to do with those special titles?You wan to keep them and move them at the end of the sort or completely remove them from your result set?", "username": "steevej" }, { "code": "Googlefacebook", "text": "It sounds like you want to strip any leading special character and you would have to do that in the pipeline before you performed your sort.As for how things are currently, the sort is working exactly as it’s supposed to. The results are being sorted in the proper lexicographical order.I don’t have time to test right now, but you should be able to reshape the data to strip out the special character using a regex pattern. One other thing to think about is that by default MongoDB will sort all upper case values before sorting in lower case values. That means that Google would come before facebook. If you have mixed case data, that where collations would come in to play.", "username": "Doug_Duncan" }, { "code": "$trimaggregate", "text": "Actually if you just want to remove a certain set of characters from the start/end of a string, take a look at the $trim operator. I was able to use that to do what I believe you want to do. It’s easy enough to do so I will let you work through the solution to build up skill and retention. You will still need to set a collation for your aggregate function to sort properly if you have mixed case entries.", "username": "Doug_Duncan" }, { "code": "", "text": "Suppose I have 10 documents which are, Cat ,Grapes, #zebra,#monkey, Apple, Bat, !snake, &house,110test etc.\nI want to sort these documents alphabetically in such a way like, Apple, Bat, Cat, Grapes (ignoring special characters, or keeping them in last). But by doing normal alphabetical sorting I am getting, !snake,#zebra,#monkey, &house ,110test , Apple, Bat, Cat, Grapes etc. Actually My database consists of about 800k startups database in which some companies have their names begiining with special characters, but I want to get that data in Alphabetical order ignoring such startups with special characters.", "username": "Ashutosh_Mishra1" }, { "code": "$trim", "text": "Suppose I have 10 documents which are, Cat ,Grapes, #zebra,#monkey, Apple, Bat, !snake, &house,110test etc.", "username": "Doug_Duncan" }, { "code": ".find().aggregate()$trimcollationAaaZ", "text": "Unfortunately you’re not showing the full query, but from what you do show, I see you’re running a .find() command. That is not going to work, and is why I keep suggesting to look at the aggregation framework (.aggregate() command) and the $trim operator. 
I can’t tell from your screen shot if all companies are proper-cased or not, but I will assume you will want to add the collation option as well just to make sure that A and a get sorted together instead of having a after Z.Play around with the above suggestions for a bit and if you’re still not got it I will provide a sample query after I get some sleep.", "username": "Doug_Duncan" }, { "code": "db.company.insert([\n\t{ InvestorName: '#Angels'},\n\t{ InvestorName: '&Vest'},\n\t{ InvestorName: '+Impact'},\n\t{ InvestorName: '0 Ventures'},\n\t{ InvestorName: 'A Plus Finance'}\n])\n$trim$toLowerdb.company.aggregate([\n\n\t// This field could also be calculated in client code\n\t// when documents are inserted or updated\n\n\t{ $addFields: {\n\t\tsortOrder: {\n\t\t\t$toLower: {\n\t\t\t\t$trim: {\n\t\t\t\t\tinput: \"$InvestorName\",\n\t\t\t\t\tchars: \"#&*+1234567890 \"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}},\n\n\t// Update collection with calculated sortOrder field\n\t{ $out: \"company\" }\n])\nsortOrderdb.company.find({}, {_id:0}).sort({sortOrder:1})\t\n[\n { InvestorName: 'A Plus Finance', sortOrder: 'a plus finance' },\n { InvestorName: '#Angels', sortOrder: 'angels' },\n { InvestorName: '+Impact', sortOrder: 'impact' },\n { InvestorName: '0 Ventures', sortOrder: 'ventures' },\n { InvestorName: '&Vest', sortOrder: 'vest' }\n]\nsortOrder", "text": "Hi @Ashutosh_Mishra1,As @Doug_Duncan mentioned, aggregation would be a straightforward way to dynamically calculate a field for custom sort orders but I would definitely add a pre-calculated field to your documents with an appropriate Index to Support Sorted Query Results. You want to avoid the performance overhead and limitations of an in-memory sort.I set up some test data:I used $trim to remove ignorable characters and $toLower to make a case-insensitive pre-calculated sort field:Now I can get the expected order using my pre-calculated custom sortOrder field (and fast, with an appropriate index):The sortOrder field could be projected out, but I included it here to show the outcome of my earlier calculation.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hey this looks cool, I will try implement. Thanks! Just one last thing, The characters you mentioned i.e. chars: “#&*+1234567890” are the only characters which needs to be ignored according to mongodb rules or you used this only for testing!", "username": "Ashutosh_Mishra1" }, { "code": "#AngelsAngels0 VenturesVentures", "text": "The characters you mentioned i.e. chars: “#&*+1234567890” are the only characters which needs to be ignored according to mongodb rules or you used this only for testing!Hi @Ashutosh_Mishra1,I just used those as an example for testing since you provided screenshots rather than sample documents or expected output based on the input.Your second screenshot doesn’t show where the initial documents would land (i.e. #Angels listed before or after Angels, 0 Ventures vs Ventures, etc), but you can adjust the calculation to achieve your desired sort order.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb Alphabetical Sorting
2022-08-09T08:52:04.223Z
Mongodb Alphabetical Sorting
4,743
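Following the pointer to indexed sorts in the answer above, the pre-calculated field can be indexed so the query avoids an in-memory sort.

```js
// Index the pre-calculated field used for ordering
db.company.createIndex({ sortOrder: 1 });

// Now this sort can walk the index instead of sorting in memory
db.company.find({}, { _id: 0 }).sort({ sortOrder: 1 });
```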
https://www.mongodb.com/…c_2_1024x315.png
[]
[ { "code": "", "text": "We are getting this alert since yesterday on the specific VM\nCPU utilization is greater than 80.\nWhich gets resolved automatically. But need to check why the alert has started to come now.Also when I checked overall CPU utilisation for the specific vm in monitoring tool ,\nfound Overall CPU utilisation for all cores is average 5%-6% . One of the cores might have high utilisation at that point.mongod log sample entry:{“t”:{\"$date\":“2022-04-06T06:22:11.337+00:00”},“s”:“I”, “c”:“ACCESS”, “id”:20436, “ctx”:“conn4812791”,“msg”:“Checking authorization failed”,“attr”:{“error”:{“code”:13,“codeName”:“Unauthorized”,“errmsg”:“not authorized on admin to execute command { find: “system.version”, filter: { _id: “shardIdentity” }, limit: 1, singleBatch: true, lsid: { id: UUID(“476e968c-d8db-41f0-bf0e-9a5a58dc1da7”) }, $clusterTime: { clusterTime: Timestamp(1649226413, 666), signature: { hash: BinData(0, D4123AA36EB1598F8EF4376EB8CEDDBE20F85764), keyId: 7029869742718451713 } }, $db: “admin”, $readPreference: { mode: “primaryPreferred” } }”}}}ps -ef | grep [m]ongo\nmongodb 11256 1 71 2021 ? 120-00:23:55 /usr/local/bin/mongod --config /apps/mongodb/config/mongod.conf --fork\nroot 23433 39601 11 Mar30 ? 17:27:46 /usr/local/percona/pmm2/exporters/mongodb_exporter --collector.collstats-limit=200 --collector.diagnosticdata --collector.replicasetstatus --compatible-mode --discovering-mode --mongodb.global-conn-pool --web.listen-address=:42002\nroot 63831 2602 0 06:21 ? 00:00:00 sshd: mongodb [priv]\nmongodb 63850 1 0 06:21 ? 00:00:00 /lib/systemd/systemd --user\nmongodb 63851 63850 0 06:21 ? 00:00:00 (sd-pam)\nmongodb 63989 63831 0 06:21 ? 00:00:00 sshd: mongodb@pts/1\nmongodb 63992 63989 0 06:21 pts/1 00:00:00 -bash\ncasestudy_6April1822×562 44.3 KB\n", "username": "Debalina_Saha" }, { "code": "", "text": "I am facing the same problem with same error message, any one here to help ?", "username": "Tin_Cvitkovic" } ]
CPU utilization is greater than 80, on the specific VM
2022-04-06T07:10:28.213Z
CPU utilization is greater than 80, on the specific VM
1,887
null
[ "queries" ]
[ { "code": "let sensor = sensorName \n\n// I use the Node driver. \ncollection.find({ _id : ObjectID(boatId)} , \n  {\n    $project : { 'data.sensors.0.$$sensor' : 1}\n  }).toArray((err , r)=>{\n    console.log(r[0].data.sensors.sensor)\n});\n", "text": "Hello, I need to insert a JS variable in my projection. How do I do that?The following statement does not work.Thanks for the help ", "username": "Upsylon_Developpemen" }, { "code": "'data.sensors.0.' + sensor\n", "text": "You do that like you do it for any normal JS string expression.The expressionshould work.", "username": "steevej" }, { "code": "'data.sensors.0.' + sensor\n", "text": "Driver code (like the above JS string concat) can be mixed with mongo code, but the driver code runs before sending the query to mongo.sensor is a JS variable, not a mongo variable.It's helpful to use driver code to generate mongo queries many times, not just a string concat\nlike here.", "username": "Takis" }, { "code": "", "text": "Unfortunately no :-/ with interpolation it does not work either\ndata.sensors.0.${sensorName}", "username": "Upsylon_Developpemen" }, { "code": "", "text": "Found it!!! We must use an array and backticks, but why???\n[data.sensors.0.${sensorName}.data]", "username": "Upsylon_Developpemen" }, { "code": "$project : { [\"data.sensors.0.\" + sensorName] : 1 }\n", "text": "Try simply:I indeed did forget the square brackets.", "username": "steevej" }, { "code": "", "text": "Thanks Steeve! Your solution is simpler indeed!", "username": "Upsylon_Developpemen" }, { "code": "", "text": "Thank you so much, I am a beginner and I was stuck on this for almost 4 hrs and now I have the solution.", "username": "Neel_Shah1" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Insert js variables in projection?
2020-12-05T10:01:08.163Z
Insert js variables in projection?
5,561
null
[ "swift", "transactions" ]
[ { "code": "flowersFlowerNavigationSplitView {\n List(selection: $selectedFlower) {\n ForEach($viewModel.flowers) { $flower in\n NavigationLink(value: flower) {\n FlowerRow(flower: flower)\n }\n }\n .onDelete { indexSet in\n withAnimation {\n selectedFlower = nil\n $viewModel.flowers.remove(atOffsets: indexSet)\n }\n }\n }\n} detail: { /* Omitted */ }\n\nNavigationSplitView {\n List($viewModel.flowers, editActions: [.delete], selection: $selectedFlower) { $flower in\n NavigationLink(value: flower) {\n FlowerRow(flower: flower)\n }\n }\n} detail: { /* Omitted */ }\nextension List {\n\n public func replaceSubrange<C: Collection, R>(_ subrange: R, with newElements: C)\n where C.Iterator.Element == Element, R: RangeExpression, List<Element>.Index == R.Bound {\n let subrange = subrange.relative(to: self)\n \n let thawed = self.thaw() ?? self\n if let realm = thawed.realm, !realm.isInWriteTransaction {\n try! realm.write {\n remove(at: subrange.lowerBound)\n }\n } else {\n remove(at: subrange.lowerBound)\n }\n \n }\n}\n(lldb) po thawed.realm\n▿ Optional<Realm>\n ▿ some : Realm\n - rlmRealm : <RLMRealm: 0x60000018c6e0>\n\n(lldb) po self.realm\n▿ Optional<Realm>\n ▿ some : Realm\n - rlmRealm : <RLMRealm: 0x600000188370>\n", "text": "I have currently implemented this and it works. Here flowers is a List of Flower which is\na Realm.Object.SwiftUI (xCode14, beta 4) now provides an editable SwiftUI.List, making this more compact code possible:Attempting to do this with a RealmSwift.List gives the following error message:\n‘delete’ is unavailable: Delete is available only for collections conforming to RangeReplaceableCollection.RangeReplaceableCollection conformance requires an empty initializer and the\nfunc replaceSubrange(Range<Self.Index>, with: C) method. These are both implemented by RealmSwift.List.By declaring compliance to RangeReplaceableCollection in an extension to List I can get the code to\ncompile and run, unsurpisingly I get this error message.\n‘Cannot modify managed RLMArray outside of a write transaction.’If I then make a copy of replaceSubrange and for debugging purposes simplify it to the delete of a single Element\nof the list and try to wrap that operation in a write transation like this:I get this error message:\n‘Object is already managed by another Realm. Use create instead to copy it into this Realm.’If I check the realm objects of self and thawed it seems the thawing creates and new object with a different Realm.At this point I am stumped. Any suggestions?Also in other places of my app I have found that if I directly use $list.append or remove\nmethods, things work, but as soon as I deviate from this I tend to end up with different list objects and multiple\nrealms. I would greatly appreciate an advanced tutorial on this topic!I guess many developers would like to use the new cleaner approach to NavigationSplitView and List so it would\nbe good if you could get this to work.As a side comment I have implemented my app on two different branches one for CoreData and one for Realm.\nI very much prefer Realm with its bettter fit to Swift. However I found the topic of updating elements and\nappending / removing from lists so complicated that at one point I gave up work on the Realm branch.", "username": "Mats_Ramnefors" }, { "code": "@main\nstruct SwiftListApp: SwiftUI.App {\n @StateRealmObject var viewModel: ViewModel = {\n try! Realm().write {\n let model = ViewModel()\n try! 
Realm().add(model)\n model.flowers.append(objectsIn: [\n Flower(index: 1, name: \"one\"),\n Flower(index: 2, name: \"two\")\n ])\n return model\n }\n }()\n \n var body: some Scene {\n WindowGroup {\n ContentView(viewModel: viewModel)\n }\n }\n}\nstruct ContentView: View {\n @ObservedRealmObject var viewModel: ViewModel\n \n @State var selectedFlower: Flower?\n \n var body: some View {\n NavigationSplitView {\n List($viewModel.flowers, editActions: [.delete], selection: $selectedFlower) { $flower in\n NavigationLink(value: flower) {\n Text(\"\\(flower.index)\")\n }\n }\n .navigationTitle(\"Blommor\")\n .toolbar {\n EditButton()\n }\n\n } detail: {\n if let flower = selectedFlower {\n Text(flower.name)\n } else {\n Text(\"Select a flower\")\n }\n }\n }\n}\nclass Flower: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var index: Int\n @Persisted var name: String\n \n /// Default initializer\n override init() {\n super.init()\n }\n \n convenience init(index: Int, name: String) {\n self.init()\n self.index = index\n self.name = name\n }\n}\n\nclass ViewModel: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var id: ObjectId\n @Persisted var flowers = RealmSwift.List<Flower>()\n \n /// Default initializer\n override init() {\n super.init()\n }\n}\n\nextension RealmSwift.List: RangeReplaceableCollection {\n \n}\n\nextension List {\n\n public func replaceSubrange<C: Collection, R>(_ subrange: R, with newElements: C)\n where C.Iterator.Element == Element, R: RangeExpression, List<Element>.Index == R.Bound {\n let subrange = subrange.relative(to: self)\n \n let thawed = self.thaw() ?? self\n if let realm = thawed.realm, !realm.isInWriteTransaction {\n try! realm.write {\n thawed.remove(at: subrange.lowerBound)\n }\n } else {\n thawed.remove(at: subrange.lowerBound)\n }\n \n }\n}\n", "text": "To summarize several days of investigation …If I create a managed object in my App view (StateRealmObject) and pass it as an\nargument to ContentView (ObservedRealmObject) the application fails with\n“Object is already managed by another Realm”.If I create the same StateRealmObject directly in ContentView the application actually works!Very confusing to me. I do not see a reason for dual Realms to be created because a managed object\nis passed in as an argument.I have created a simplfied version of my app for this investigation. I include it here in full.\nAny hint on how to proceed would be much appreciated.SwiftListApp:ContentView:ViewModel:", "username": "Mats_Ramnefors" }, { "code": "ViewModelclass TestModel: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true ) var id: ObjectId\n @Persisted var testField = \"\"\n}\nstruct Realm_SwiftUI_MacApp: SwiftUI.App {\n @StateRealmObject var testModel: TestModel = {\n try! Realm().write {\n let model = TestModel()\n try! Realm().add(model)\n model.testField = \"Hello, World\"\n return model\n }\n }()\n \n var body: some Scene {\n WindowGroup {\n ContentView(myTestModel: testModel)\n }\n }\n}\nstruct ContentView: View {\n @ObservedRealmObject var myTestModel: TestModel\n .\n .\n .\n var body: some View {\n NavigationView {\n VStack {\n List {\n ForEach(testLabel) { labels in //this is just a list of a, b, c that are clickable\n NavigationLink(destination: TestView(myObservedTestModel: myTestModel))\n {\n Text(testLabel.myId)\n }\n\n }\n }\n }\nstruct TestView: View {\n @Binding var navigationViewIsActive: Bool\n @ObservedRealmObject var myObservedTestModel: TestModel\ntry! 
Realm().write {\n return model\n}\ntry! Realm().write {\n \n}\nreturn model\n", "text": "I am having a bit of trouble trying to replicate the issue. The gist of the problem is you’re instantiating a Realm Object, ViewModel, populating it with some data, returning that model and passing it to another view. That action throws the error. I wanted to try to address that part:If I create a managed object in my App view (StateRealmObject) and pass it as an\nargument to ContentView (ObservedRealmObject) the application fails with\n“Object is already managed by another Realm”.I’ve got a model TestModelHere’s the @StateRealmObject that instantiates a model, stores it in Realm and then is used as an argument to the ContentViewand then the ContentViewand then the TestViewThat all works correctly. The only little and probably not an issue thing is returning inside the write transaction?Instead of thismaybe this?That’s probably not the issue.", "username": "Jay" }, { "code": "2022-08-10 10:07:46.158934+0200 SwiftList[55775:3565350] *** Terminating app due to uncaught exception 'RLMException', reason: 'Object is already managed by another Realm. Use create instead to copy it into this Realm.'\n*** First throw call stack:\n(\n\t0 CoreFoundation 0x00007ff800427380 __exceptionPreprocess + 242\n\t1 libobjc.A.dylib 0x00007ff80004dbaf objc_exception_throw + 48\n\t2 SwiftList 0x00000001079e64c5 _ZN18RLMAccessorContext12createObjectEP11objc_objectN5realm12CreatePolicyEbNS2_6ObjKeyE + 1829\n\t3 SwiftList 0x00000001079e8083 _ZN18RLMAccessorContext5unboxIN5realm3ObjEEET_P11objc_objectNS1_12CreatePolicyENS1_6ObjKeyE + 83\n\t4 SwiftList 0x00000001079f6e2e _ZZN5realm4List3addIRU8__strongP11objc_object18RLMAccessorContextEEvRT0_OT_NS_12CreatePolicyEENKUlS9_E_clIPNS_3ObjEEEDaS9_ + 94\n\t5 SwiftList 0x00000001079f6701 _ZN5realmL14switch_on_typeINS_3ObjEZNS_4List3addIRU8__strongP11objc_object18RLMAccessorContextEEvRT0_OT_NS_12CreatePolicyEEUlSB_E_EEDaNS_12PropertyTypeEOS9_ + 337\n\t6 SwiftList 0x00000001079f6492 _ZNK5realm4List8dispatchIZNS0_3addIRU8__strongP11objc_object18RLMAccessorContextEEvRT0_OT_NS_12CreatePolicyEEUlSA_E_EEDaSB_ + 66\n\t7 SwiftList 0x00000001079f5fed _ZN5realm4List3addIRU8__strongP11objc_object18RLMAccessorContextEEvRT0_OT_NS_12CreatePolicyE + 253\n\t8 SwiftList 0x0000000107aa89f5 _ZZN5realm4List6assignIRU8__strongKP7NSArray18RLMAccessorContextEEvRT0_OT_NS_12CreatePolicyEENKUlSA_E_clIRU8__strongP11objc_objectEEDaSA_ + 85\n\t9 SwiftList 0x0000000107aa8863 _ZN27RLMStatelessAccessorContext20enumerate_collectionIZN5realm4List6assignIRU8__strongKP7NSArray18RLMAccessorContextEEvRT0_OT_NS1_12CreatePolicyEEUlSC_E_EEvP11objc_objectSC_ + 467\n\t10 SwiftList 0x0000000107a9fd6f _ZN5realm4List6assignIRU8__strongKP7NSArray18RLMAccessorContextEEvRT0_OT_NS_12CreatePolicyE + 207\n\t11 SwiftList 0x0000000107a9fc57 __48-[RLMManagedArray replaceAllObjectsWithObjects:]_block_invoke_2 + 119\n\t12 SwiftList 0x0000000107aa4489 _ZL15translateErrorsIRU8__strongU13block_pointerFvvEEDaOT_ + 25\n\t13 SwiftList 0x0000000107aa4aa2 _ZL11changeArrayIZL11changeArrayP15RLMManagedArray16NSKeyValueChange8_NSRangeU13block_pointerFvvEE4$_23EvS1_S2_S5_OT_ + 514\n\t14 SwiftList 0x0000000107a9f291 _ZL11changeArrayP15RLMManagedArray16NSKeyValueChange8_NSRangeU13block_pointerFvvE + 81\n\t15 SwiftList 0x0000000107a9fb19 -[RLMManagedArray replaceAllObjectsWithObjects:] + 825\n\t16 SwiftList 0x0000000107a4c800 RLMAssignToCollection + 80\n\t17 SwiftList 0x0000000107d6d4d4 $s10RealmSwift4ListC6assignyyypF + 404\n\t18 SwiftList 0x0000000107d6d580 
$s10RealmSwift4ListCyxGAA07MutableA10CollectionA2aEP6assignyyypFTW + 16\n\t19 SwiftList 0x0000000107dcea04 $s10RealmSwift9PersistedV3set_5valueySo13RLMObjectBaseC_xtF + 580\n\t20 SwiftList 0x0000000107dce73c $s10RealmSwift9PersistedV18_enclosingInstance7wrapped7storagexqd___s24ReferenceWritableKeyPathCyqd__xGAHyqd__ACyxGGtcSo13RLMObjectBaseCRbd__luisZ + 268\n\t21 SwiftList 0x00000001079d32f6 $s9SwiftList9ViewModelC7flowers05RealmA00B0CyAA6FlowerCGvs + 134\n\t22 SwiftList 0x00000001079cf601 $s9SwiftList9ViewModelC7flowers05RealmA00B0CyAA6FlowerCGvpACTk + 113\n\t23 libswiftCore.dylib 0x00007ff80d70c3ca $ss26NonmutatingWritebackBufferCfD + 138\n\t24 libswiftCore.dylib 0x00007ff80d910b50 _swift_release_dealloc + 16\n\t25 libswiftCore.dylib 0x00007ff80d70dd8b swift_setAtReferenceWritableKeyPath + 219\n\t26 SwiftList 0x0000000107e0d5f8 $s10RealmSwift23createCollectionBinding33_06F2B43D1E2DA64D3C5AC1DADA9F5BA7LL_10forKeyPath0B2UI0E0Vyq_Gx_s017ReferenceWritablerS0Cyxq_GtAA14ThreadConfinedRzSo08RLMSwiftD4BaseCRb_AaLR_r0_lFyq_cfU0_yxXEfU_ + 184\n\t27 SwiftList 0x0000000107e0bded $s10RealmSwift9safeWrite33_06F2B43D1E2DA64D3C5AC1DADA9F5BA7LLyyx_yxXEtAA14ThreadConfinedRzlFyyXEfU_ + 61\n\t28 SwiftList 0x0000000107e417d4 $s10RealmSwift9safeWrite33_06F2B43D1E2DA64D3C5AC1DADA9F5BA7LLyyx_yxXEtAA14ThreadConfinedRzlFyyXEfU_TA + 36\n\t29 SwiftList 0x0000000107deb773 $s10RealmSwift0A0V5write16withoutNotifying_xSaySo20RLMNotificationTokenCG_xyKXEtKlF + 275\n\t30 SwiftList 0x0000000107e0bca2 $s10RealmSwift9safeWrite33_06F2B43D1E2DA64D3C5AC1DADA9F5BA7LLyyx_yxXEtAA14ThreadConfinedRzlF + 1042\n\t31 SwiftList 0x0000000107e0d519 $s10RealmSwift23createCollectionBinding33_06F2B43D1E2DA64D3C5AC1DADA9F5BA7LL_10forKeyPath0B2UI0E0Vyq_Gx_s017ReferenceWritablerS0Cyxq_GtAA14ThreadConfinedRzSo08RLMSwiftD4BaseCRb_AaLR_r0_lFyq_cfU0_ + 297\n\t32 SwiftUI 0x0000000110945e97 get_witness_table 7SwiftUI4ViewRzAA10ShapeStyleRd__r__lAA15ModifiedContentVyxAA018_DefaultForegroundE8ModifierVyqd__GGAaBHPxAaBHD1__AhA0cJ0HPyHCHCTm + 3335\n\t33 SwiftUI 0x00000001106357a5 __swift_memcpy74_8 + 30437\n\t34 SwiftUI 0x00000001106357f4 __swift_memcpy74_8 + 30516\n\t35 SwiftUI 0x0000000110634adc __swift_memcpy74_8 + 27164\n\t36 SwiftUI 0x00000001109455d0 get_witness_table 7SwiftUI4ViewRzAA10ShapeStyleRd__r__lAA15ModifiedContentVyxAA018_DefaultForegroundE8ModifierVyqd__GGAaBHPxAaBHD1__AhA0cJ0HPyHCHCTm + 1088\n\t37 SwiftUI 0x000000010feac3b2 get_witness_table 7SwiftUI4ViewRzlAA15ModifiedContentVyxAA32_EnvironmentKeyTransformModifierVyAA06ScrollE10BackgroundVGGAaBHPxAaBHD1__AiA0cI0HPyHCHCTm + 7669\n\t38 SwiftUI 0x000000010fbcbf4b get_witness_table 7SwiftUI4ViewRzlAA15ModifiedContentVyxAA26_PreferenceWritingModifierVyAA019PresentationDetentsF3KeyVGGAaBHPxAaBHD1__AiA0cH0HPyHCHCTm + 5396\n\t39 SwiftUI 0x000000010fbcc585 get_witness_table 7SwiftUI18DynamicViewContentRzlAA08ModifiedE0VyxAA21_TraitWritingModifierVyAA08OnDeleteG3KeyVGGAaBHPxAaBHD1__AiA0dI0HPyHCHCTm + 1152\n\t40 SwiftUI 0x000000010fbcc44b get_witness_table 7SwiftUI18DynamicViewContentRzlAA08ModifiedE0VyxAA21_TraitWritingModifierVyAA08OnDeleteG3KeyVGGAaBHPxAaBHD1__AiA0dI0HPyHCHCTm + 838\n\t41 SwiftUI 0x0000000110262698 objectdestroy.136Tm + 41111\n\t42 SwiftUI 0x000000010ff85497 block_destroy_helper.15 + 46631\n\t43 SwiftUI 0x000000010ff84df8 block_destroy_helper.15 + 44936\n\t44 SwiftUI 0x00000001103747f0 __swift_memcpy56_4 + 171220\n\t45 SwiftUI 0x0000000110375409 block_destroy_helper + 232\n\t46 SwiftUI 0x00000001103741fc __swift_memcpy56_4 + 169696\n\t47 SwiftUI 0x000000011037579d 
block_destroy_helper + 1148\n\t48 SwiftUI 0x0000000110482df8 __swift_memcpy94_8 + 16562\n\t49 UIKitCore 0x000000010cf99573 -[UIContextualAction executeHandlerWithView:completionHandler:] + 148\n\t50 UIKitCore 0x000000010cfa8310 -[UISwipeOccurrence _executeLifecycleForPerformedAction:sourceView:completionHandler:] + 656\n\t51 UIKitCore 0x000000010cfa89f0 -[UISwipeOccurrence _performSwipeAction:inPullView:swipeInfo:] + 620\n\t52 UIKitCore 0x000000010cfaa4a3 -[UISwipeOccurrence swipeActionPullView:tappedAction:] + 62\n\t53 UIKitCore 0x000000010cfb2bb4 -[UISwipeActionPullView _tappedButton:] + 148\n\t54 UIKitCore 0x000000010cde1cbb -[UIApplication sendAction:to:from:forEvent:] + 95\n\t55 UIKitCore 0x000000010c554fe3 -[UIControl sendAction:to:forEvent:] + 110\n\t56 UIKitCore 0x000000010c5553e7 -[UIControl _sendActionsForEvents:withEvent:] + 345\n\t57 UIKitCore 0x000000010c551507 -[UIButton _sendActionsForEvents:withEvent:] + 148\n\t58 UIKitCore 0x000000010c553c3e -[UIControl touchesEnded:withEvent:] + 485\n\t59 UIKitCore 0x000000010c7e82f7 _UIGestureEnvironmentUpdate + 9811\n\t60 UIKitCore 0x000000010c7e584a -[UIGestureEnvironment _updateForEvent:window:] + 844\n\t61 UIKitCore 0x000000010ce28f2f -[UIWindow sendEvent:] + 5282\n\t62 UIKitCore 0x000000010cdfc722 -[UIApplication sendEvent:] + 898\n\t63 UIKitCore 0x000000010cea3620 __dispatchPreprocessedEventFromEventQueue + 9383\n\t64 UIKitCore 0x000000010cea5d3d __processEventQueue + 8355\n\t65 UIKitCore 0x000000010ce9c1a0 __eventFetcherSourceCallback + 272\n\t66 CoreFoundation 0x00007ff800386eed __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 17\n\t67 CoreFoundation 0x00007ff800386e2c __CFRunLoopDoSource0 + 157\n\t68 CoreFoundation 0x00007ff800386629 __CFRunLoopDoSources0 + 212\n\t69 CoreFoundation 0x00007ff800380de3 __CFRunLoopRun + 927\n\t70 CoreFoundation 0x00007ff800380667 CFRunLoopRunSpecific + 560\n\t71 GraphicsServices 0x00007ff809bfc28a GSEventRunModal + 139\n\t72 UIKitCore 0x000000010cddb621 -[UIApplication _run] + 994\n\t73 UIKitCore 0x000000010cde04fd UIApplicationMain + 123\n\t74 SwiftUI 0x00000001108131b7 __swift_memcpy53_8 + 95801\n\t75 SwiftUI 0x0000000110813064 __swift_memcpy53_8 + 95462\n\t76 SwiftUI 0x000000010ff37140 __swift_memcpy195_8 + 12056\n\t77 SwiftList 0x00000001079d555e $s9SwiftList0aB3AppV5$mainyyFZ + 30\n\t78 SwiftList 0x00000001079d57c9 main + 9\n\t79 dyld 0x000000010ba042bf start_sim + 10\n\t80 ??? 0x000000011461552e 0x0 + 4636890414\n)\nlibc++abi: terminating with uncaught exception of type NSException\n*** Terminating app due to uncaught exception 'RLMException', reason: 'Object is already managed by another Realm. Use create instead to copy it into this Realm.'\nterminating with uncaught exception of type NSException\nCoreSimulator 857.7 - Device: iPad Pro (11-inch) (3rd generation) (48AD3BF4-0841-40A0-901D-E6BD056D882A) - Runtime: iOS 16.0 (20A5339d) - DeviceType: iPad Pro (11-inch) (3rd generation)\n", "text": "Hi,\nThank you for your response. Passing the StateRealmObject from the app to the ContentView works OK. The error occurs deep in Realm when deleting a object from the List as described in my first post. The trace is below.It seems that SwiftUI’s editable List, sets its data object with same list (same objectId) after doing the delete operation (to trigger notification?). As far as I have been able to trace down in the Realm code, it seems Realm does not recognize that this set operation is setting the list that is already set. I think this is the cause for the error. 
Why it only occurs when the object is an ObservedObject and not a StateObject I have no idea, still learning ", "username": "Mats_Ramnefors" } ]
RealmSwift.List support for new editable List in SwiftUI
2022-08-01T09:41:50.085Z
RealmSwift.List support for new editable List in SwiftUI
2,820
null
[ "connecting" ]
[ { "code": "", "text": "hi Guys,Need help with MonoDB atlas connection issue after failover to different replica.As part of DR drill, I was doing test failover for MongoDB atlas , during test failover process , available secondary replica became primary but application is unable to reach MongoDB atlas.Any thoughts\\suggestion on this?", "username": "mastanvali_shaik" }, { "code": "", "text": "If you are using SRV type connect string it should detect the new primary and connect automatically\nWhat type of connect string are you using in your app?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "yes , using SRV type connection string in app.", "username": "mastanvali_shaik" }, { "code": "", "text": "Can you connect by shell using SRV string?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "After failover , can not connect from shell as well.", "username": "mastanvali_shaik" }, { "code": "", "text": "What error message are you getting? Can’t connect is pretty vague and could be due to a lot of issues.", "username": "Doug_Duncan" }, { "code": "", "text": "GET /api/v1/languageConstant?locale=en-us - - ms - -\nMongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you’re trying to access the database from an IP that isn’t whitelisted. Make sure your current IP address is on your Atlas cluster’s IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/\nat Function.Model.$wrapCallback (/usr/src/app/node_modules/mongoose/lib/model.js:5087:32)\nat /usr/src/app/node_modules/mongoose/lib/model.js:504:27\nat /usr/src/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:32:5\nat new Promise ()\nat promiseOrCallback (/usr/src/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:10)\nat Mongoose._promiseOrCallback (/usr/src/app/node_modules/mongoose/lib/index.js:1149:10)\nat model.Model.save (/usr/src/app/node_modules/mongoose/lib/model.js:503:35)\nat exports.createApplicationLog (/usr/src/app/controllers/customLoggerV2Controller.js:9:35)\nat Layer.handle [as handle_request] (/usr/src/app/node_modules/express/lib/router/layer.js:95:5)\nat next (/usr/src/app/node_modules/express/lib/router/route.js:137:13)\nat /usr/src/app/routes/customLoggerV2Routes.js:11:13\nat Layer.handle [as handle_request] (/usr/src/app/node_modules/express/lib/router/layer.js:95:5)\nat next (/usr/src/app/node_modules/express/lib/router/route.js:137:13)\nat Route.dispatch (/usr/src/app/node_modules/express/lib/router/route.js:112:3)\nat Layer.handle [as handle_request] (/usr/src/app/node_modules/express/lib/router/layer.js:95:5)\nat /usr/src/app/node_modules/express/lib/router/index.js:281:22\nat Function.process_params (/usr/src/app/node_modules/express/lib/router/index.js:335:12)\nat next (/usr/src/app/node_modules/express/lib/router/index.js:275:10)\nat Function.handle (/usr/src/app/node_modules/express/lib/router/index.js:174:3)\nat router (/usr/src/app/node_modules/express/lib/router/index.js:47:12)\nat Layer.handle [as handle_request] (/usr/src/app/node_modules/express/lib/router/layer.js:95:5)\nat trim_prefix (/usr/src/app/node_modules/express/lib/router/index.js:317:13) {\nreason: TopologyDescription {\ntype: ‘ReplicaSetNoPrimary’,\nsetName: ‘atlas-11x95p-shard-0’,\nmaxSetVersion: 10,\nmaxElectionId: 7fffffff0000000000000041,\nservers: Map(3) {\n‘-pri.a4dcp.mongodb.net:27017’ => [ServerDescription],\n‘-pri.a4dcp.mongodb.net:27017’ => [ServerDescription],\n‘-pri.a4dcp.mongodb.net:27017’ => 
[ServerDescription]\n},\nstale: false,\ncompatible: true,\ncompatibilityError: null,\nlogicalSessionTimeoutMinutes: null,\nheartbeatFrequencyMS: 10000,\nlocalThresholdMS: 15,\ncommonWireVersion: 9\n}\n}", "username": "mastanvali_shaik" }, { "code": "", "text": "Have you whitelisted your IP? If yes, remove it and add it again. This fix worked for some people with connectivity issues after failover.\nAlso, did you try to connect directly to the new primary using its address & port?\nSomething is blocking your connection. Could be firewall/network issues.", "username": "Ramachandra_Tummala" }, { "code": "", "text": "The entire VNet is added through peering. I didn’t try directly connecting to the primary after failover.", "username": "mastanvali_shaik" } ]
Mongodb atlas replica is not connecting after failover
2022-08-09T06:07:52.276Z
Mongodb atlas replica is not connecting after failover
2,068
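A minimal sketch of how the advice in the thread above translates to code, assuming the Node.js driver; the URI, database, and collection names below are placeholders, not values taken from the thread:

```javascript
// With an SRV string plus retryable reads/writes, the driver should
// rediscover the new primary after a test failover on its own; a failure
// that persists afterwards usually points at network access, not the election.
const { MongoClient } = require("mongodb");

const uri = "mongodb+srv://user:[email protected]/test"; // placeholder
const client = new MongoClient(uri, {
  serverSelectionTimeoutMS: 10000, // fail fast instead of hanging
  retryWrites: true,               // retry once across a primary election
  retryReads: true,
});

async function probe() {
  await client.connect();
  // Re-run this during the failover drill; it should succeed again once the
  // election completes. A persistent MongoServerSelectionError usually means
  // the client's public IP is missing from the Atlas IP Access List.
  console.log(await client.db("test").collection("items").findOne({}));
  await client.close();
}

probe().catch(console.error);
```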
null
[ "mongodb-shell" ]
[ { "code": "m switching from the legacy mongo shell to mongosh and I have noticed that when creating indexes using db.collection.createIndex(...) the output varies from the previous shell. In the legacy version it used to return JSON structure with the \"ok\" key which allowed me to automatically validate the result of operation by just checking if it", "text": "Hi,\nIm switching from the legacy mongo shell to mongosh and I have noticed that when creating indexes using db.collection.createIndex(...) the output varies from the previous shell. In the legacy version it used to return JSON structure with the \"ok\" key which allowed me to automatically validate the result of operation by just checking if its value is either 1 or 0, while in mongosh the return isn`t as consistent, since when succeeding it returns just a name of the index as a string and for example MongoServerError when I input invalid data. I wonder if this behavior is intended and whether there is any possibility to check the result of the operation in a similar way to the legacy shell.Thanks for your time", "username": "Bartosz_Dabrowski1" }, { "code": "mongosh", "text": "Hi @Bartosz_Dabrowski1 and welcome to the community!!The output change from legacy mongo to mongosh is an expected change. The later displays the index key getting created on success and throws an exception in case of an error.\nThe mongosh ticket has the details on the same.there is any possibility to check the result of the operation in a similar way to the legacy shell.In case you wish to display the output in the similar format, you could construct the command manually or write your own helper function:db.foo.runCommand({ createIndexes: “mycollection”, indexes: [{key: { “bar”: 1}, name: “bar_1”}]})\n{\ncreatedCollectionAutomatically: true,\nnumIndexesBefore: 1,\nnumIndexesAfter: 2,\nok: 1\n}db.foo.runCommand({ createIndexes: “mycollection”, indexes: [{key: { “bar”: 1}, name: “bar_1”}]})\n{\nnumIndexesBefore: 2,\nnumIndexesAfter: 2,\nnote: ‘all indexes already exist’,\nok: 1\n}Please let us know if you have any further questions.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Different output in mongosh and mongo shells when creating indexes
2022-08-03T12:22:43.889Z
Different output in mongosh and mongo shells when creating indexes
1,480
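For scripts that relied on the legacy `ok` field, here is one possible mongosh helper (not an official API) wrapping the createIndexes command from the answer above so it always yields a result document instead of a thrown exception; the index-name fallback logic is an assumption:

```javascript
// Returns { ok: 1, ... } on success and { ok: 0, errmsg, code } on failure,
// mirroring the legacy shell's shape.
function createIndexWithStatus(coll, keys, options = {}) {
  const name =
    options.name ||
    Object.entries(keys).map(([k, v]) => `${k}_${v}`).join("_");
  try {
    return coll.getDB().runCommand({
      createIndexes: coll.getName(),
      indexes: [{ key: keys, name, ...options }],
    });
  } catch (e) {
    // Surface failures in the same document shape instead of throwing.
    return { ok: 0, errmsg: e.message, code: e.code };
  }
}

// Usage: createIndexWithStatus(db.mycollection, { bar: 1 }).ok === 1
```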
null
[ "aggregation", "node-js", "java" ]
[ { "code": "mecontact$reduceme: {\n first_name: \"Prasad\",\n last_name: \"Saya\",\n contact: [ \"prasadsaya\", \"@\", \"yahoo\", \".\", \"com\" ],\n lives_at: {\n location: \"The subcontinent\",\n timezome: \"IST (GMT+5:30)\"\n }\n}\n", "text": "Hello. I am looking for some work related with MongoDB database. I am trained and have some experience working with MongoDB. I am interested in the areas of database programming - Aggregation Framework, Indexing/Performance, Data Modeling and basic Cluster Administration. I can program using NodeJS and Java drivers. I am also experienced in programming with relational databases.My present interest is in short-term and task based (paid) work. You can also find my MongoDB related posts at StackOverflow, prasad_. Please feel free to contact me about your interest with a private message to me or query me with the contact field (you may have to $reduce the string array field ).", "username": "Prasad_Saya" }, { "code": "$reduce", "text": "you may have to $reduce the string array fieldI liked this. Nice and subtle.", "username": "steevej" }, { "code": "", "text": "Yeah, thinking like a MongoDB developer!", "username": "Prasad_Saya" } ]
Looking for MongoDB related work
2022-08-09T16:27:20.094Z
Looking for MongoDB related work
1,424
null
[ "mdbw22-communitycafe" ]
[ { "code": "", "text": "We’ll wrap up our Community program with a chat with the Diversity and Inclusion Scholars talking about their experience at MongoDB World!", "username": "TimSantos" }, { "code": "", "text": "Photos from @Harshit", "username": "TimSantos" }, { "code": "", "text": "@henna.s moderating questions\n\nDiversity &amp; Inclusion Scholars1920×1440 201 KB\n", "username": "wan" }, { "code": "", "text": "Where can we find the full video?", "username": "Justin_Poveda" }, { "code": "", "text": "Hi @Justin_Poveda and welcome to the forums!Where can we find the full video?Unfortunately the Community Café sessions were live streamed but not recorded.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Actually, we just learned that the sessions WERE recorded, and as of today, are available on YouTube! Find 'em here: https://www.youtube.com/playlist?list=PL4RCxklHWZ9tb8qJw_Kqv7f2vceAKOusLSorry for the confusion, MongoDB World was a bit of a chaotic time. ", "username": "webchick" } ]
Coffee Chat: Diversity & Inclusion Scholars
2022-06-06T14:10:46.673Z
Coffee Chat: Diversity &amp; Inclusion Scholars
3,975
null
[ "connecting" ]
[ { "code": "prepend domain-name-servers 8.8.8.8, 8.8.4.4;\nprepend domain-name-servers 2001:4860:4860::8888, 2001:4860:4860::8844;\nUnable to connect: connection <monitor> to 13.251.13.252:27017 closed", "text": "I have the same problem mentioned here. However, the provided solution does not work for me. I followed the instructions here on my Linux Mint (Debian) system and it has made no difference. I have re-started the laptop. My mobile wifi hotspot otherwise works without issue. My dhclient.conf includes the following 2 lines:and these settings have also been applied in the network configuration GUI for the mobile hotspot connection in Linux.\nI have also successfully added my mobile phone IP address to the ‘Network Access’ list in Atlas.\nAttempting to connect to mongodb via VSCode gives me:\nUnable to connect: connection <monitor> to 13.251.13.252:27017 closed\nWhat else can I try? thanks …", "username": "freeross" }, { "code": "ping/// example\nping cluster0-shard-00-00.ze4xc.mongodb.net\ntelnet27017/// example\ntelnet cluster0-shard-00-00.ze4cx.mongodb.net 27017\n", "text": "Hi @freeross,Whilst using the mobile hotspot, can you please try performing the initial basic network connectivity tests and provide the output for the cluster you are having trouble connecting to:Note: You can find the hostname in the metrics page of your clusterAdditionally, I would recommend to review the Troubleshoot Connection Issues documentation You may also find the following blog post regarding tips for atlas connectivity useful.Regards,\nJason Tran", "username": "Jason_Tran" }, { "code": "Unable to connect: connection <monitor> to 18.140.187.114:27017 closed\nUnable to connect: connection <monitor> to 13.251.13.252:27017 closed\nUnable to connect: connection <monitor> to 52.221.43.169:27017 closed\n\nFINAL STATUS OF CHECKS: \n 1. URL-CHECK: \t Passed\n 2. MEMBERS-CHECK: \t Passed\n 3. DNS-IP-CHECK: \t Passed\n 4. SOCKET-CHECK: \t Passed\n 5. DRIVER-CHECK: \t Passed\n 6. DBPING-CHECK: \t Failed\n 7. HEALTH-CHECK: \t NotTested\n\nRESULTING ADVICE: \n - If the MongoDB deployment is an Atlas M0/M2/M5 tier cluster then via the Atlas console, in the 'Network Access' section, for the 'IP Access List' tab select to 'ADD CURRENT IP ADDRESS' which should be the address of this host machine (DBPING-CHECK)\n - If using Atlas, via the Atlas console, in the 'Network Access' section, for the 'IP Access List' tab ensure this machine is listed in the access list, and if not, add it ('Add Current IP Address') (DBPING-CHECK)\n - If using Atlas, via the Atlas console, check the cluster is NOT PAUSED and resume it if is paused (DBPING-CHECK)\n - If not using Atlas to host the MongoDB deployment, check the firewall rules for the network hosting the deployment to ensure it permits MongoDB TCP connections on the configured ports - check NOT BLOCKED (DBPING-CHECK)\n - Check any local firewalls on this machine and in your local network, to ensure that MongoDB TCP network connections to outside your network are NOT BLOCKED (DBPING-CHECK)\n", "text": "Hi @Jason_Tran,\nThanks for your response.\nBoth ping and telnet tests are successful for my primary cluster using my mobile phone hotspot from a terminal in my project directory on local Linux PC. However, attempting to connect to the same cluster via VSCode is still not working.\nI reviewed the Troubleshoot Connection Issues documentation, but, as far as I can tell, the issues there (connection string config etc.) 
are not relevant in this instance, as I can connect on my home-based standard wifi connection with these settings.\nI did notice that each time I connect, the error message I get refers to a different IP. However, I presume that is expected due to dynamic address allocation(?). I tried the tool from the blog post regarding tips for Atlas connectivity and got: I tried the suggestions and added my IP address as per my laptop network connection … still get the same error (as above). Any other ideas? thanks again …", "username": "freeross" }, { "code": "", "text": "@freeross in your first post, you did not mention you are getting the error while using VSCode. This is an important detail. I am not sure if it is relevant this time, but give us details of which extension it is that you are getting the error from.\nAnyway, “timeout” and “connection closed” are two distinct errors. The first one means there is no network path toward the target. The second - your case - is possibly from a missing flag or something like that, such that you connect but are not allowed access.\nThe above is also true when you have a strict access setting in the “network settings” section. Unless you have a static IP contract with your internet provider, your home modem or phone hotspot will have a different IP address any time they are restarted; thus your connection to the database server will just be dropped.\nGo to your Atlas cluster and allow access from anywhere, “0.0.0.0”, to test if you can connect to the cluster that way.\nPS: you have a static “name” for your cluster, but load balancing, primary changes, etc. cause the IP to change.", "username": "Yilmaz_Durmaz" }, { "code": "v0.9.3mongodb.mongodb-vscodeDBPING-CHECK\n", "text": "@Yilmaz_Durmaz\nThe mobile hotspot connection works with 0.0.0.0.\nTo address the other points raised:\nThe extension is MongoDB for VS Code v0.9.3\nMore Info\nReleased on 5/12/2020, 04:16:46\nLast updated 4/27/2022, 07:10:42\nIdentifier mongodb.mongodb-vscode\nI’m unable to find a strict access setting in the “network settings” section in Atlas, so I assume you mean on the laptop(?); however, I don’t find any reference to such a setting there either. I’m virtually certain I don’t have a static IP contract, but I’m adding the dynamically provided mobile phone hotspot address to ‘Network Access’ in the Atlas ‘IP Access List’ and I still cannot connect via VSCode; I also get the 
but that could also be a problem from the hotspot, router, proxy, vpn, etc, yet they mostly cause “connection timeout” error in your case, you were connecting but refused due to strict IP access.one more thing about my “PS:” in my post above. it is about the IP change of the database server itself. it is a different thing than the IP change after hotspot/router restart or vpn/proxy changes.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connect via Mobile Phone Hotspot
2022-08-06T05:17:56.066Z
Connect via Mobile Phone Hotspot
3,605
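To make the "WAN IP" fix from the end of this thread repeatable, a hedged Node.js sketch; it queries the public ipify service (an assumption — any "what is my IP" service works the same way) to print the address that actually belongs in the Atlas IP Access List:

```javascript
// The address to allow-list is the hotspot's *public* (WAN) IP, not the
// private address the laptop receives from the phone.
const https = require("https");

https
  .get("https://api.ipify.org", (res) => {
    let ip = "";
    res.on("data", (chunk) => (ip += chunk));
    res.on("end", () =>
      console.log(`Add ${ip}/32 to the Atlas Network Access IP list`)
    );
  })
  .on("error", console.error);
```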
null
[ "php" ]
[ { "code": "require('src/functions.php');\n\n$client = new MongoDB\\Client(\"mongodb://localhost:27017\");\n$collection = $client->demo->beers;\n\n$result = $collection->insertOne( [ 'name' => 'Hinterland', 'brewery' => 'BrewDog' ] );\n\necho \"Inserted with Object ID '{$result->getInsertedId()}'\";\n", "text": "I know this might have been asked in a different forms but I’m in a situation where I need to use MongoDB PHP Library with out Composer. The official guide says that it is possible to use it without Composer but provides no example of how to load it into your project; it only mentions the following:While not recommended, you may also manually install the library using\na source archive attached to the GitHub releases.If you installed the library manually from a source archive, you will\nneed to manually configure autoloading:So, when I use the following example:I get an error with the following message:Uncaught Error: Class ‘MongoDB\\Client’ not found in xxxPlease note that the driver and the PHP extension is installed and working fine, only that the library isn’t working.Your help is highly appreciated!", "username": "Co_Living" }, { "code": "src/functions.phpMongoDB\\src/use", "text": "While the example code you shared did manually include the src/functions.php file, it omitted the first instruction from the documentation:Map the top-level MongoDB\\ namespace to the src/ directory using your preferred autoloader implementation.“Autoloader implementation” here is referring to PSR-4. Composer provides the most common implementation, which is also highly optimized (see: Autoloader and Autoloader Optimization); however, any compliant implementation can be used. The PSR-4 example implementation is one other option.If you do not use an autoloader, the only alternative is to manually require each class that you will use in your application (as discussed in this Stack Overflow answer). This is in addition to specifying use statements for classes (assuming you don’t intend to use fully qualified class names everywhere).I’ve opened PHPLIB-937 to better clarify this in the library documentation.", "username": "jmikola" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to use MongoDB without Composer
2021-10-27T18:38:49.567Z
How to use MongoDB without Composer
4,530
null
[ "aggregation", "attribute-pattern" ]
[ { "code": "{\n \"productId\": 1,\n \"customerId\": 1,\n \"attr\": [\n { \"k\": \"price\", \"v\": 42 },\n { \"k\": \"color\", \"v\": \"red\" }\n ]\n}\n[{$match: { <filter> }}, \n{$unwind: { path: \"$attr\" }}, \n{$match: { \"attr.k\": \"price\"}}, \n{$sort: { \"attr.v\": 1 }}, \n{$lookup: {\n from: 'product',\n localField: '_id',\n foreignField: '_id',\n as: 'fullProduct'\n}}, \n{$replaceRoot: {\n newRoot: { $arrayElemAt: [ \"$fullProduct\", 0] }\n}}]\n", "text": "I’m giving another shot at Attribute Pattern - How to sort by some attribute valueI was wondering if we could build something with aggregation.Using the example from the previous postI was thinking maybe we canFirst problem I can see: we lose the documents that do not have a value.", "username": "Brecht_Yperman" }, { "code": "", "text": "I’m currently testing if the wildcard index would be a better solution for my use case.", "username": "Brecht_Yperman" }, { "code": "SortKey = \"price\"\n\nfunction pipeline() {\n return [ _match_stage() , _set_stage() , _sort_stage() ]\n}\nfunction _match_stage() {\n return { \"$match\" : { \"customerId\" : 1 } }\n}\nfunction _set_stage() {\n return { \"$set\" : { [SortKey] : _reduce() } }\n}\nfunction _sort_stage() {\n return { \"$sort\" : { [SortKey] : 1 } }\n}\nfunction _reduce() {\n return { \"$reduce\" : {\n \"input\" : \"$attr\" ,\n \"initialValue\" : null ,\n \"in\" : { '$cond': [ { \"$eq : [ \"$$this.k\" , SortKey ] } , '$$this.v', '$$value' ] } }\n } }\n}\n\nc.aggregate( pipeline() )\n", "text": "This is a very interesting problem.Here is an alternative solution that would also involve a memory sort but might be more efficient than the solution with $unwind and $lookup.Basically, it uses $reduce on $attr to $set a new field to the sort value desired.Warning: Quickly tested code, use at your own risk.", "username": "steevej" } ]
Attribute Pattern - sorting by attribute value
2022-08-08T12:21:43.117Z
Attribute Pattern - sorting by attribute value
2,343
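For reference, a mongosh sketch of the two indexing options touched on in this thread; the collection name `products` is an assumption based on the example document, and note that neither index serves the in-memory `$sort` on `attr.v` itself:

```javascript
// Classic attribute-pattern compound index over the key/value pairs:
db.products.createIndex({ "attr.k": 1, "attr.v": 1 });

// It can serve equality/range matches on any attribute:
db.products.find({ attr: { $elemMatch: { k: "price", v: { $gte: 10 } } } });

// The wildcard alternative mentioned above indexes every path under attr:
db.products.createIndex({ "attr.$**": 1 });
```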
null
[ "compass" ]
[ { "code": "", "text": "Hican someone explain with MongoDB Compass UI toolHow i can export my collection into json file without ID field ?ORHow to import json file into collection and produce new ID ?Thanks a lot", "username": "Giulian" }, { "code": "", "text": "I originally posted:With Compass when you export you may select the fields you want to export. You simply have to unchecked the _id field.before testing it. And it did not work. I was really really surprised. So I tested with another version of Compass (version 1.32.2) and it did work. The version that failed is 1.29.5 on Windows.", "username": "steevej" }, { "code": "", "text": "Ok i can t try to latest version because i need to upgrade my server mongo version", "username": "Giulian" }, { "code": "", "text": "Not the server, only Compass version.", "username": "steevej" } ]
CompassUI Export to json without ID field
2022-08-09T11:52:02.759Z
CompassUI Export to json without ID field
2,275
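If upgrading Compass is not an option, one mongosh workaround (collection and file names below are placeholders) is to drop `_id` with a projection and write the result out manually:

```javascript
// Export every document without its _id field; EJSON is built into mongosh
// and require() works for Node built-ins.
const fs = require("fs");
const docs = db.mycollection.find({}, { _id: 0 }).toArray();
fs.writeFileSync("export.json", EJSON.stringify(docs, null, 2));
```

On import, documents that arrive without an `_id` are assigned a fresh ObjectId, which also covers the second question in the thread.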
null
[ "cxx" ]
[ { "code": "", "text": "Hello,I’ve been having some trouble installing Mongo-cxx. When using cmake I’m using this command (amongst others I’ve tried):cmake … \\ -G “Visual Studio 16 2019” \\ -DBOOST_ROOT=C:\\Users\\win8\\Desktop\\boost_1_73_0 -DCMAKE_PREFIX_PATH=C:\\mongo-c-driver \\ -DCMAKE_INSTALL_PREFIX=c:\\mongo-cxx-driver -DCMAKE_BUILD_TYPE=Release \\ -DBUILD_SHARED_AND_STATIC_LIBS=ON \\ -DBUILD_VERSION=3.5.0Though it is not listed on the installation page, it requires that I put in the build version or otherwise the installation fails, which I set to 3.5.0. I suspect that’s the problem.In any case, installation always goes smoothly, but after installation, it is missing the necessary libraries “libbsoncxx.lib” and “libmongocxx.lib”. Instead, it has “bsoncxx.lib” and “mongocxx.lib”, which do not work.Can anyone help me figure out how to get it installing properly?", "username": "Andrew_Freitas" }, { "code": "build/VERSION_CURRENTBUILD_VERSION--depth 1--depth 1", "text": "@Andrew_Freitas are you building from a release tarball or from Git? If you are building from a release tarball, the build/ sub-directory contains the file VERSION_CURRENT which specifies the value of BUILD_VERSION and which should be found during the build. If you are building from Git, then the build scripts should be able to determine build version from the Git history. However, I just noticed that the installation instructions suggest a shallow Git clone. We need to fix that, as the --depth 1 option is going to throw things off if you build from Git. If you did clone using --depth 1, then drop that option and clone again.As far as the libraries not being found, how is your build specifying the location and resources of the C++ driver? Can you provide the complete error output? Also, are you using Visual Studio for all of your builds (C driver, C++ driver, and your own project), or are you mixing Visual Studio and MinGW?", "username": "Roberto_Sanchez" }, { "code": "", "text": "I have the some problem here. I am building from release tarball using MS Visual Studio 2019. I installed Mongocxx - Driver 3.5.0 and 3.5.1. Both installation went without any error. However I am missing both librarys (libbsoncxx.lib and libmongocxx.lib). --> only mongocxx.lib and bsoncxx.lib is installed. I used different cmake configuration (during different attempts of installing), including the standard configuration on the installation page.The Mongoc-Driver was also installed with the help of MS Visual Studio 2019 (libmongoc 1.16.2) and contains every file needed. I am using Visual Studio for everything, including my project. Any ideas to fix this problem and getting both librarys?", "username": "Mike_Reichardt" }, { "code": "", "text": "@Mike_Reichardt the convention on Unix-like systems (including on Windows when using GCC through MSYS or Cygwin) is name libraries something like “libfoo…” (with the suffix being system dependent). When using Windows build tools on Windows, the naming is of the form “foo.lib”. Could you provide a sample program that will not build against the C/C++ driver as you have installed them? 
Be sure to include the program itself, the build files, and any accompanying output showing the complete build failure.", "username": "Roberto_Sanchez" }, { "code": "Severity\tCode\tDescription\tProject\tFile\tLine\tSuppression State\nError\tC2039\t'basic_string_view': is not a member of 'std'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\stdx\\string_view.hpp\t98\t\nError\tC2873\t'basic_string_view': symbol cannot be used in a using-declaration\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\stdx\\string_view.hpp\t98\t\nError\tC2039\t'string_view': is not a member of 'std'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\stdx\\string_view.hpp\t99\t\nError\tC2873\t'string_view': symbol cannot be used in a using-declaration\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\stdx\\string_view.hpp\t99\t\nError\tC2039\t'string_view': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\element.hpp\t124\t\nError\tC3646\t'key': unknown override specifier\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\element.hpp\t124\t\nError\tC2059\tsyntax error: '('\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\element.hpp\t124\t\nError\tC2238\tunexpected token(s) preceding ';'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\element.hpp\t124\t\nError\tC2039\t'string_view': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\element.hpp\t344\t\nError\tC2061\tsyntax error: identifier 'string_view'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\element.hpp\t344\t\nError\tC2805\tbinary 'operator [' has too few parameters\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\element.hpp\t344\t\nError\tC2039\t'string_view': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\view.hpp\t90\t\nError\tC2061\tsyntax error: identifier 'string_view'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\view.hpp\t90\t\nError\tC2039\t'string_view': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\view.hpp\t102\t\nError\tC2061\tsyntax error: identifier 'string_view'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\view.hpp\t102\t\nError\tC2805\tbinary 'operator [' has too few parameters\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\view.hpp\t102\t\nError\tC2039\t'optional': is not a member of 'std'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\stdx\\optional.hpp\t83\t\nError\tC2873\t'optional': symbol cannot be used in a using-declaration\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\stdx\\optional.hpp\t83\t\nError\tC2039\t'nullopt': is not a member of 'std'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\stdx\\optional.hpp\t84\t\nError\tC2873\t'nullopt': symbol cannot be used in a using-declaration\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\stdx\\optional.hpp\t84\t\nError\tC2039\t'make_optional': is not a member of 'std'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\stdx\\optional.hpp\t85\t\nError\tC2873\t'make_optional': symbol 
cannot be used in a using-declaration\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\stdx\\optional.hpp\t85\t\nError\tC2039\t'string_view': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\json.hpp\t69\t\nError\tC2065\t'string_view': undeclared identifier\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\json.hpp\t69\t\nError\tC2146\tsyntax error: missing ')' before identifier 'json'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\json.hpp\t69\t\nError\tC2039\t'optional': is not a member of 'mongocxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t85\t\nError\tC2143\tsyntax error: missing ';' before '<'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t85\t\nError\tC4430\tmissing type specifier - int assumed. Note: C++ does not support default-int\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t85\t\nError\tC2238\tunexpected token(s) preceding ';'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t85\t\nError\tC2039\t'optional': is not a member of 'mongocxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t107\t\nError\tC2143\tsyntax error: missing ';' before '<'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t107\t\nError\tC4430\tmissing type specifier - int assumed. Note: C++ does not support default-int\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t107\t\nError\tC2238\tunexpected token(s) preceding ';'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t107\t\nError\tC2039\t'optional': is not a member of 'mongocxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t127\t\nError\tC2143\tsyntax error: missing ';' before '<'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t127\t\nError\tC4430\tmissing type specifier - int assumed. Note: C++ does not support default-int\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t127\t\nError\tC2238\tunexpected token(s) preceding ';'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t127\t\nError\tC2039\t'optional': is not a member of 'mongocxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t147\t\nError\tC2143\tsyntax error: missing ';' before '<'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t147\t\nError\tC4430\tmissing type specifier - int assumed. Note: C++ does not support default-int\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t147\t\nError\tC2238\tunexpected token(s) preceding ';'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\transaction.hpp\t147\t\nError\tC2039\t'optional': is not a member of 'mongocxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\client_session.hpp\t74\t\nError\tC4430\tmissing type specifier - int assumed. 
Note: C++ does not support default-int\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\client_session.hpp\t74\t\nError\tC2143\tsyntax error: missing ';' before '<'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\client_session.hpp\t74\t\nError\tC2238\tunexpected token(s) preceding ';'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\client_session.hpp\t74\t\nError\tC2039\t'optional': is not a member of 'mongocxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\client_session.hpp\t78\t\nError\tC2143\tsyntax error: missing ';' before '<'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\client_session.hpp\t78\t\nError\tC4430\tmissing type specifier - int assumed. Note: C++ does not support default-int\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\client_session.hpp\t78\t\nError\tC2238\tunexpected token(s) preceding ';'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\options\\client_session.hpp\t78\t\nError\tC2039\t'optional': is not a member of 'mongocxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\client_session.hpp\t125\t\nError\tC4430\tmissing type specifier - int assumed. Note: C++ does not support default-int\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\client_session.hpp\t125\t\nError\tC2143\tsyntax error: missing ',' before '<'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\client_session.hpp\t125\t\nError\tC2143\tsyntax error: missing ')' before '{'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\client_session.hpp\t125\t\nError\tC2059\tsyntax error: ')'\tMongoDB\tC:\\mongo-cxx-driver\\include\\mongocxx\\v_noabi\\mongocxx\\client_session.hpp\t125\t\nError\tC2039\t'optional': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t137\t\nError\tC2143\tsyntax error: missing ';' before '<'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t137\t\nError\tC4430\tmissing type specifier - int assumed. 
Note: C++ does not support default-int\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t137\t\nError\tC2238\tunexpected token(s) preceding ';'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t137\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t68\t\nError\tC2065\t'_value': undeclared identifier\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t68\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t68\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t74\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t74\t\nError\tC2065\t'_value': undeclared identifier\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t74\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t74\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t80\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t81\t\nError\tC2065\t'_value': undeclared identifier\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t81\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t81\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t91\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t92\t\nError\tC2065\t'_value': undeclared identifier\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t92\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t92\t\nError\tC2039\t'nullopt': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t94\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t102\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t103\t\nError\tC2065\t'_value': undeclared identifier\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t103\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t103\t\nError\tC2039\t'nullopt': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t105\t\nError\tC3861\t'_value': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp\t115\t\nError\tC2039\t'string_view': is not a member of 
'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t36\t\nError\tC2065\t'string_view': undeclared identifier\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t36\t\nError\tC2923\t'bsoncxx::v_noabi::view_or_value': 'string_view' is not a valid template type argument for parameter 'View'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t36\t\nError\tC2955\t'bsoncxx::v_noabi::view_or_value': use of class template requires template argument list\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t36\t\nError\tC2039\t'string_view': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t41\t\nError\tC2065\t'string_view': undeclared identifier\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t41\t\nError\tC2923\t'bsoncxx::v_noabi::view_or_value': 'string_view' is not a valid template type argument for parameter 'View'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t41\t\nError\tC2955\t'bsoncxx::v_noabi::view_or_value': use of class template requires template argument list\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t41\t\nError\tC3881\tcan only inherit constructor from direct base\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t41\t\nError\tC2039\t'string_view': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t56\t\nError\tC2065\t'string_view': undeclared identifier\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t55\t\nError\tC2923\t'bsoncxx::v_noabi::view_or_value': 'string_view' is not a valid template type argument for parameter 'View'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t56\t\nError\tC2039\t'string_view': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t56\t\nError\tC3861\t'string_view': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t55\t\nError\tC2614\t'bsoncxx::v_noabi::string::view_or_value': illegal member initialization: 'view_or_value' is not a base or member\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t56\t\nError\tC2039\t'string_view': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t69\t\nError\tC2065\t'string_view': undeclared identifier\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t68\t\nError\tC2923\t'bsoncxx::v_noabi::view_or_value': 'string_view' is not a valid template type argument for parameter 'View'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t69\t\nError\tC2039\t'string_view': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t69\t\nError\tC3861\t'string_view': identifier not 
found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t68\t\nError\tC2614\t'bsoncxx::v_noabi::string::view_or_value': illegal member initialization: 'view_or_value' is not a base or member\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t69\t\nError\tC2039\t'view': is not a member of 'bsoncxx::v_noabi::string::view_or_value'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t101\t\nError\tC2039\t'string_view': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t101\t\nError\tC3861\t'string_view': identifier not found\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\string\\view_or_value.hpp\t101\t\nError\tC2039\t'string_view': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\decimal128.hpp\t56\t\nError\tC2061\tsyntax error: identifier 'string_view'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\decimal128.hpp\t56\t\nError\tC2535\t'bsoncxx::v_noabi::decimal128::decimal128(void)': member function already defined or declared\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\decimal128.hpp\t56\t\nError\tC2039\t'string_view': is not a member of 'bsoncxx::v_noabi::stdx'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\oid.hpp\t66\t\nError\tC4430\tmissing type specifier - int assumed. Note: C++ does not support default-int\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\oid.hpp\t66\t\nError\tC2143\tsyntax error: missing ',' before '&'\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\oid.hpp\t66\t\nError\tC1003\terror count exceeds 100; stopping compilation\tMongoDB\tC:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\oid.hpp\t66\t\n", "text": "@Roberto_Sanchez Thank you for your response. I am sorry if the following text is a bit unstructured; however, I am new to writing forum questions.\nHere is the test program:\nTest_cpp_mongodb1290×475 23.7 KB\nThe error message is above. The configurations are identical to the configuration in the mongo-cxx installation guide, except for using foo.lib instead of libfoo.lib, because I don't have the libfoo.lib files. Here is a picture showing the configuration:\n\nmongodbcxx_driver_configuration1832×263 13 KB\nI am new to the forum and would like to know how I can upload or show you the error message more comfortably instead of copying the complete message into the text box next time.", "username": "Mike_Reichardt" }, { "code": "", "text": "Hello @Andrew_Freitas,\nI am also facing the same issue. Were you able to resolve it? If you resolved it, please let me know the solution.", "username": "Jatin_Kumar3" }, { "code": "", "text": "Hello @Mike_Reichardt,\nI am also facing the same issue. Were you able to solve it? If yes, then please provide the solution.", "username": "Jatin_Kumar3" } ]
Mongo-cxx Installation Problems
2020-05-02T19:52:08.300Z
Mongo-cxx Installation Problems
5,592
https://www.mongodb.com/…3fe490d7fdf.jpeg
[ "time-series", "bengaluru-mug" ]
[ { "code": "30 attendeesTechnical Services Engineer, MongoDB", "text": "\nimage800×800 73 KB\nBengaluru MUG is happy to announce that Mydbops they are returning to customary physical meetups in Bangalore after a prolonged virtual event. The event will take place on July 30th.Our first slot speaker Mr. Darshan Jayarama, Technical Services Engineer at MongoDB is going to share his expertise on the topic ‘ Have a good time with MongoDB Time series’MongoDB 5.0 was released with Time Series Collections and MongoDB 6.0 brought some Time Series enhancements. It simplifies the time series collection and improves query efficiency. This session will focus on the implementation and benefits of the Time series collection.Event Type: In-Person\n Location: 91 Springboard.\n 91 Springboard, 6th floor, Trifecta Adatto, 21, ITPL Main Rd, Garudachar Palya, Mahadevapura, Bengaluru, Karnataka 560048.Unfortunately, due to COVID restrictions, we are only able to accommodate 30 attendees, even though we would have loved to have more.Save Your Seat: Mydbops Opensource Database Meetup - 12\nimage512×512 67.2 KB\nTechnical Services Engineer, MongoDBJoin the Bengaluru MUG to stay updated with upcoming meetups and discussions.", "username": "Harshit" }, { "code": "", "text": "Here are some photos from the event:\n\nIMG_20220730_1321564000×1800 288 KB\n\n\nIMG_20220730_1328231920×864 180 KB\n\n\nIMG_20220730_1103304000×1800 511 KB\n\n\nIMG_20220730_1015354000×1800 316 KB\n\n\nIMG_20220730_1015214000×1800 429 KB\nThanks everyone for attending. Hope you all had a great time!", "username": "Harshit" } ]
Bengaluru MUG: Mydbops Open Source Meetup, MongoDB Time Series
2022-07-26T08:21:29.404Z
Bengaluru MUG: Mydbops Open Source Meetup, MongoDB Time Series
3,982
null
[]
[ { "code": "", "text": "I have installed mongodb CE 6.0 on redhat 8\nin the old system we have : 5.0.5 we have mongo as a command in /usr/bin\nin the new system : I can nowhere find this command in any location\nI installed server-shell-tools\nis any additional package/command needed ?\nthanks for all answers, best regards, Guy", "username": "Guy_Przytula" }, { "code": "", "text": "these are the packages installed\nrpm -qa |grep -i mongo\nmongodb-org-server-6.0.0-1.el8.x86_64\nmongodb-mongosh-shared-openssl11-1.5.4-1.el8.x86_64\nmongodb-org-mongos-6.0.0-1.el8.x86_64\nmongodb-database-tools-100.5.4-1.x86_64", "username": "Guy_Przytula" }, { "code": "mongomongomongoshmongodb-mongosh-shared-openssl11-1.5.4-1.el8.x86_64mongosh", "text": "Hi @Guy_Przytula,The legacy mongo shell is no longer included in server packages as of MongoDB 6.0. mongo has been superseded by the new MongoDB Shell ( mongosh ) which you appear to have installed by way of mongodb-mongosh-shared-openssl11-1.5.4-1.el8.x86_64.If you prefer an admin GUI, you can also Download MongoDB Compass from the MongoDB Download Centre. Compass includes an an Embedded MongoDB Shell (which is mongosh).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "many thanks for the answer…\nbest regards, Guy", "username": "Guy_Przytula" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo command not found with v6.0 CE
2022-08-09T07:00:46.108Z
Mongo command not found with v6.0 CE
30,278
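A quick, hedged sanity check that the replacement shell described above is wired up; the connection string is a placeholder:

```javascript
// From a terminal:  mongosh "mongodb://localhost:27017"
// Then, inside mongosh, verify the server and the connection:
db.version();                  // server version, e.g. "6.0.0"
db.hello().isWritablePrimary;  // true when connected to a writable primary
```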
null
[ "aggregation", "node-js", "data-modeling", "mongoose-odm" ]
[ { "code": "", "text": "Hi Guys! I have a front end background but I’m new to the databases and back end.I like to be certain about the stuff that I work with and right now I need to structure my database and models in MongoDB.I did read a lot of MongoDB articles about data modeling and different patterns, but I can’t properly fit my application into one of them.Let’s say I have 4 collections:User → Artist\nItem → ProductI need to have a User collection with some user specific fields like name and avatar etc. Some Users can apply to become an Artist - Artist(a new collection) also has got some artist-specific fields like music genres and so on.\nArtist always has to be a User first, but a User doesn’t have to be an Artist.The other part of my app is about the Item and Product collections. An Item might have a title and description fields. An Item can become a Product and extends Item’s fields with Product ones like price for example.For now we have a reference between a User and an Artist and Item and Product, but obviously there will be more references regarding the app itself. A User can buy a Product and an Artist can create an Item.So that’s just a small part of the app - there will be more collections and references in the future.\nI can’t find a way to model the data I have right now and I’m a bit worried about my current approach.For now I see two possible solutions:My thoughts on both of those solutions:\nFirst solution forces me to use populate(I use mongoose) or aggregate to join the collections whenever I need the data from both collections - Item & Product and User & Artist which results in having two queries. This might not be the best solution in terms of a performance, but I’m not sure if that will be the case.\nSecond solution feels weird for me - why should I have an empty fields on a merged collection when I don’t need them? I believe this approach will be more optimised cause I need to do just one query instead of two, but I have to say that the first solution feels more ‘natural’ for me.Is any of those two solutions any good? Or should I find a different path? 
Should I be worried about joining the collections?Thanks in advance, I really appreciate any help!", "username": "M0ngo_newbie" }, { "code": "User{\n \"name\": \"Sourabh\",\n \"avatar\": \"https://avatars0.githubusercontent.com/u/5718?v=4\",\n \"email\": \"[email protected]\"\n ...\n}\nArtistUserUser// The following User is an Artist\n{\n \"name\": \"Sourabh\",\n \"avatar\": \"https://avatars0.githubusercontent.com/u/5718?v=4\",\n \"email\": \"[email protected]\",\n \"isArtist\": true,\n \"genres\": [\"Rock\", \"Pop\", \"Jazz\"],\n \"awards\": [\"Best Rock Band\", \"Best Jazz Band\", \"MORE\"]\n ...\n}\n\n// The following User is not an Artist\n{\n \"name\": \"Sourabh\",\n \"avatar\": \"https://avatars0.githubusercontent.com/u/5718?v=4\",\n \"email\": \"[email protected]\",\n \"isArtist\": false\n ...\n}\ndb.users.find({isArtist: true})\nItemsItemsProductsProduct{\n \"title\": \"MongoDB Tshirt\",\n \"description\": \"A black MongoDB Tshirt having a neon green logo printed in the front and white in the back.\",\n \"isListed\": false,\n}\n\n{\n \"title\": \"MongoDB Tshirt\",\n \"description\": \"A black MongoDB Tshirt having a neon green logo printed in the front and white in the back.\",\n \"isListed\": true,\n \"price\": 10\n}\n", "text": "Hi @M0ngo_newbie, welcome to the community .\nFirst of all, thank you so much for describing your concern in detail.First solution forces me to use populate(I use mongoose) or aggregate to join the collections whenever I need the data from both collections - Item & Product and User & Artist which results in having two queries. This might not be the best solution in terms of a performanceThat’s correct, there is a performance overhead for joining two collections.As per our Data Modelling guide:Embedded data models allow applications to store related pieces of information in the same database record. As a result, applications may need to issue fewer queries and updates to complete common operations.The second solution suggested by you seems correct with a small modification as follows:\nSuppose your current User document looks something like this:Therefore, to accommodate Artist specific details in your User collection, your User document would have the following structure:Now if you would like to search for more Artists in your database, you can do so by using the following command:Also, the same would be applicable for the Items document, instead of having Items and Products together, you can simply combine them in a single Product collection.Having said that, schemas are generally dependent on the application use cases. So, take a look at our forever free M320: Data Modeling course, where you will:If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What pattern to choose with given collections
2022-08-03T20:52:50.272Z
What pattern to choose with given collections
1,566
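A minimal Mongoose sketch of the merged-collection approach recommended above; field names mirror the example documents in the thread, and everything else is illustrative:

```javascript
const mongoose = require("mongoose");

const userSchema = new mongoose.Schema({
  name: { type: String, required: true },
  avatar: String,
  email: { type: String, required: true },
  isArtist: { type: Boolean, default: false },
  // Artist-only fields stay absent (not empty) on plain users:
  genres: { type: [String], default: undefined },
  awards: { type: [String], default: undefined },
});

const User = mongoose.model("User", userSchema);

// All artists in one query, with no populate()/$lookup:
// const artists = await User.find({ isArtist: true });
```

Keeping the artist-only fields undefined rather than empty addresses the "empty fields" concern in the question while still allowing the single-query filter shown in the answer.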
null
[ "atlas-functions" ]
[ { "code": "await axios.get(\n \"https://.../something\",\n {\n headers: {\n username: \"xyz\",\n password: \"safePassword\"\n }\n }\n );\n// This function is the endpoint's request handler.\nexports = function({ query, headers, body}, response) {\n // Data can be extracted from the request as follows:\n\n // Query params, e.g. '?arg1=hello&arg2=world' => {arg1: \"hello\", arg2: \"world\"}\n const {arg1, arg2} = query;\n\n // Headers, e.g. {\"Content-Type\": [\"application/json\"]}\n const contentTypes = headers[\"Content-Type\"];\n\n // Raw request body (if the client sent one).\n // This is a binary object that can be accessed as a string using .text()\n const reqBody = body;\n\n console.log(\"arg1, arg2: \", arg1, arg2);\n console.log(\"Content-Type:\", JSON.stringify(contentTypes));\n console.log(\"Request body:\", reqBody);\n\n // You can use 'context' to interact with other Realm features.\n // Accessing a value:\n // var x = context.values.get(\"value_name\");\n\n // Querying a mongodb service:\n // const doc = context.services.get(\"mongodb-atlas\").db(\"dbname\").collection(\"coll_name\").findOne();\n\n // Calling a function:\n // const result = context.functions.execute(\"function_name\", arg1, arg2);\n\n // The return value of the function is sent as the response back to the client\n // when the \"Respond with Result\" setting is set.\n response.addHeader(\"Access-Control-Allow-Origin\", \"http://localhost:3000\");\n response.addHeader(\"Access-Control-Allow-Methods\",\"GET\");\n response.addHeader( \"Access-Control-Allow-Headers\", \"Content-Type, username, password\");\n response.setStatusCode(200);\n\n return \"Hello World!\";\n};\n\n", "text": "Hi together,I am currently trying to setup a https endpoint. The function should require application authentication. I simply setup a function and a https endpoint now that does nothing special so far and tried requesting it from the frontend with axios, however, I am getting an CORS error: “… has been blocked by CORS policy: Response to preflight request doesn’t pass access control check: No ‘Access-Control-Allow-Origin’ header is present on the requested resource.”If I understand it correctly this issues appears since the OPTIONS request to my https endpoint does not include the access control allow origin header and therefore the actual GET request never gets executed.My request looks as followsAnd in realm it is mostly the default functionI hope someone can help to find what’s missing since this didnt seem to be an issue with the old webhooks.BR", "username": "Daniel_Rollenmiller" }, { "code": "response.setBody(message);", "text": "You might want to disable return type and do not return anything from your function. Instead, use\nresponse.setBody(message);", "username": "Gilles_Yvetot1" } ]
HTTP Endpoints CORS policy
2022-05-12T08:58:19.797Z
HTTP Endpoints CORS policy
3,489
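Putting the suggestion above together, a hedged sketch of the endpoint function with "Respond With Result" disabled; the handler shapes the response object itself and returns nothing. Whether the OPTIONS preflight ever reaches the function still depends on the endpoint's configuration:

```javascript
// Atlas/Realm HTTPS endpoint handler (origin URL is a placeholder).
exports = function ({ query, headers, body }, response) {
  response.addHeader("Access-Control-Allow-Origin", "http://localhost:3000");
  response.addHeader("Access-Control-Allow-Methods", "GET, OPTIONS");
  response.addHeader(
    "Access-Control-Allow-Headers",
    "Content-Type, username, password"
  );
  response.setStatusCode(200);
  // Set the body explicitly instead of returning a value:
  response.setBody(JSON.stringify({ message: "Hello World!" }));
  // No return statement: the configured response object is what gets sent.
};
```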
null
[ "queries" ]
[ { "code": "", "text": "Team,Greetings !!We are going to upgrade mongo atlas cluster from 4.0.* to 5.0*. I want to know how to estimate total time require. Is it depend any parameter like DB Size/ Instance type etc…Regards,\nYogesh", "username": "Yogesh_Bhamare" }, { "code": "", "text": "Hi @Yogesh_Bhamare - Welcome to the community.Performing an upgrade of a staging cluster with the same (or similar) data (and workload if possible) as your production cluster may give you the closest general estimate to how long it would take although it may be difficult to replicate production workload onto a staging cluster. I would refer to the Upgrade Major MongoDB Version for a Cluster documentation which details considerations as well as staging upgrade details including the following (step 6 of upgrading a staging cluster):…Consider measuring the time required by Atlas to upgrade the cluster to set a general expectation for your production cluster upgrade.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to estimate the total time required for Mongo Atlas Cluster Upgrade from 4.0* to 5.0*
2022-07-21T12:54:58.286Z
How to estimate the total time required for Mongo Atlas Cluster Upgrade from 4.0* to 5.0*
1,604
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.2.22-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.21. The next stable release 4.2.22 will be a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.2.22-rc0 is released
2022-08-08T21:21:31.389Z
MongoDB 4.2.22-rc0 is released
2,187
null
[ "backup" ]
[ { "code": "", "text": "Hi, I deleted some data by accident and would like to restore, inserting only those data back to my database.\nI tried to download the restore, but couldn’t use it properly.\nHow can I do that?", "username": "Luccas_Casagrande" }, { "code": "", "text": "Hi Luccas, you could restore to a temporary side cluster, otherwise you’d need to use mongodb community software to use the restore locally. Then you can move the data you want back over to the original cluster. Please don’t hesitate to open a support case to ask or help!", "username": "Andrew_Davidson" }, { "code": "", "text": "Nice. Thanks for reply! I could manage to restore all backup to another cluster. Now, how can I move the required data only to my main cluster?Thanks in advance.", "username": "Luccas_Casagrande" }, { "code": "", "text": "Luccas I recommend piping mongodump to mongorestore (If I remember correctly this can be done with the --archive flag e.g. mongodump --archive | mongorestore --archive) w/ parts of the commands missing for brevity", "username": "Andrew_Davidson" } ]
How to restore filtered backup data without deleting the current database
2022-08-04T18:18:01.555Z
How to restore filtered backup data without deleting the current database
1,892
null
[ "indexes" ]
[ { "code": "{\n \"key\": {\n \"date\": 1\n },\n \"options\": {\n \"name\": \"date_info\",\n \"background\": true,\n \"sparse\": true\n }\n }\n", "text": "I have a existing index which is a non-TTL index.\nFirst question is Can I create a index with same field (means Can I create index with ‘date’ field?)\nIf the answer is yes please do explain how?\nIf answer is no then how can we convert this non-TTL index into TTL index (add timeToExpire in options)", "username": "Vartika_Malguri" }, { "code": "db.runCommand({\n \"collMod\": <collName>,\n \"index\": {\n \"keyPattern\": <keyPattern>,\n \"expireAfterSeconds\": <number>\n }\n})\nexpireAfterSecondsexpireAfterSeconds", "text": "Hi @Vartika_Malguri have you tried running something like the following (taken from the TTL Index documentation):This will go through and update an existing index with the expireAfterSeconds field. I just tested this out on a normal index and the document was deleted after the expireAfterSeconds period passed.", "username": "Doug_Duncan" }, { "code": "", "text": "Yes… I have tried this …This didn’t worked for me\nIt says no expireAfterSeconds to update", "username": "Vartika_Malguri" }, { "code": "", "text": "Can you state what version of MongoDB you’re using? I just tested this out on version 6.0.0 and it worked.\nimage1165×230 34.1 KB\n", "username": "Doug_Duncan" }, { "code": "", "text": "Quick note, I just tested with versions 5.0.10 and 4.4.15 and was able to do the same as I did on 6.0.0 and the process worked on those versions as well.", "username": "Doug_Duncan" } ]
Can we convert a non-TTL index into a TTL index?
2022-08-03T17:52:46.695Z
Can we convert a non-TTL index into a TTL index?
2,211
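A minimal mongosh sketch of the collMod approach demonstrated in the thread above; the collection name ("events") and the 3600-second expiry are hypothetical stand-ins, and the index is matched by the key pattern from the original post:

    // Convert the existing non-TTL index on { date: 1 } into a TTL index.
    db.runCommand({
      collMod: "events",                 // hypothetical collection name
      index: {
        keyPattern: { date: 1 },         // key pattern of the existing index
        expireAfterSeconds: 3600         // documents expire 1 hour after `date`
      }
    })

    // Verify the index now carries expireAfterSeconds:
    db.events.getIndexes()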
https://www.mongodb.com/…a6ede1ebce57.png
[ "mongoose-odm" ]
[ { "code": "const mongoose = require('mongoose');\n\nconst Schema = mongoose.Schema;\n\nconst ArticlesSchema = new mongoose.Schema({\n path: {\n type: String,\n required: true,\n unique: true,\n },\n base_headline: {\n type: String,\n required: true,\n },\n intro: {\n type: String,\n required: true,\n },\n featured_image: {\n type: String,\n required: true,\n },\n author: {\n type: String,\n required: true,\n },\n platform_filter: {\n type: String,\n },\n free_filter: {\n type: String,\n },\n content_type: {\n type: String,\n required: true,\n },\n data: [{ type: Schema.Types.ObjectId, ref: 'DesignProducts' }],\n date: {\n type: Date,\n default: Date.now,\n },\n});\n\nmodule.exports = mongoose.model('Articles', ArticlesSchema);\nArticle.findOne({ path: slug }).populate('data').exec();\nconst mongoose = require('mongoose');\n\nconst DesignProductsSchema = new mongoose.Schema({\n name: {\n type: String,\n required: true,\n unique: true,\n },\n intro: {\n type: String,\n required: true,\n },\n website: {\n type: String,\n required: true,\n },\n date: {\n type: Date,\n default: Date.now,\n },\n});\n\nmodule.exports = mongoose.model('DesignProducts', DesignProductsSchema);\n", "text": "Hi everyoneI have a model with articles, and would like to populate an array of data with all the documents in a collection.The data property should be populated with all documents in the DesignProducts collection.I tried running this but the data array is still empty:Here is what the designProducts model looks like:This array should be populated with all the documents in the DesignProducts collection:", "username": "Fredrik_Aurdal" }, { "code": " # Get my array\n data = my_coll.find({})\n\n all_data = []\n\n for document in data :\n all_data .append({str(document[\"_id\"]):\"myproperty\"})\n \n # add array if does not exist\n some_other_collection.update_many({},\n {\n \"$addToSet\":\n {\n \"my_array\": all_data \n }\n })\n", "text": "I wanted to the same thing, but I was unable to find any documentation only so I used a pymongo script", "username": "Timothee_Wright" } ]
Populate a property of a mongoose schema with all the data in another collection
2021-01-19T12:59:10.833Z
Populate a property of a mongoose schema with all the data in another collection
11,044
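A hedged reading of why the array stayed empty: populate() only resolves ObjectId references that are already stored in the array; it does not pull in a whole collection by itself. A Mongoose-style sketch, assuming the two models from the post are imported as Article and DesignProducts and that slug holds the article path (both names are assumptions):

    // Fill the article's `data` array with references to every
    // DesignProducts document currently in the collection...
    const ids = await DesignProducts.distinct('_id');
    await Article.updateOne({ path: slug }, { $set: { data: ids } });

    // ...after which populate() can resolve the references into documents.
    const article = await Article.findOne({ path: slug }).populate('data').exec();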
null
[ "dot-net" ]
[ { "code": "[BsonExtraElements]\npublic IDictionary<string, object> DynamicProperties { get; set; }\nBsonSerializer.RegisterSerializer(new DateTimeOffsetSerializer(BsonType.String));\n\n \n }\n }\n else\n {\n var dictionary = (IDictionary<string, object>)extraElements;\n foreach (var key in dictionary.Keys)\n {\n bsonWriter.WriteName(key);\n var value = dictionary[key];\n var bsonValue = BsonTypeMapper.MapToBsonValue(value);\n BsonValueSerializer.Instance.Serialize(context, bsonValue);\n }\n }\n }\n }\n \n private void SerializeDiscriminator(BsonSerializationContext context, Type nominalType, object obj)\n {\n var discriminatorConvention = _classMap.GetDiscriminatorConvention();\n if (discriminatorConvention != null)\n {\n \n ", "text": "Hi, MongoDB Experts,I am using MongoDB C#Driver 2.11.6. I have a class with dynamic properties. So I define it like below.However, when I try to customize the serialization of the DateTimeOffset type with the code below.It works fine with fields in that class but not the DateTimeOffset data I added to the DynamicProperties at runtime.From the source code, it looks the field has the BsonExtraElements attribute will always be serialized with the default serializer instead of the registered serializer. Is there any reasons for this? Any comments will be highly appreciated.", "username": "LQ_Sun" }, { "code": "", "text": "I noticed the same thing with BsonExtraElements and custom serializers, I’m wondering if this is a bug?", "username": "nathan_ren" } ]
BsonExtraElements with custom serializer
2021-02-28T07:18:30.155Z
BsonExtraElements with custom serializer
5,150
https://www.mongodb.com/…6_2_1023x819.png
[]
[ { "code": "", "text": "Received: ERROR “Client attempted a write that is outside of permissions or query filters; it has been reverted” (error_code=231, try_again=true, recovery_disabled=false)I’m trying to debug this error. I have this default role on all my collections:\n\nScreen Shot 2022-08-08 at 8.08.35 AM1964×1572 184 KB\n.When I try to add a role to an individual collection I see the error “default roles can’t be used for table “Pets”: invalid permissions for “read”: invalid match expression: key “owner_id” is a queryable field, but doesn’t exist on this table’s schema”Indeed, owner_id is not in my schema. I see that it’s set as a queryable field. Do I need to add it to my schema, is that the source of my trouble? Thank you.", "username": "Harry_Netzer1" }, { "code": "", "text": "Hi, unfortunately, the picture you attached is not relevant as Sync uses its own permissions on the Sync Page (we are working on integrating them into the rules page). Can you send either (a) a link to your app (the URL in the App Services UI) or (b) Your permissions defined in the Sync page?The “Error” you are getting is actually a feature called “Compensating Writes”, which means that if a client writes something that it is not allowed to see due to permissions, then it will not just reject the write and break sync, but rather “fix” the client by undoing that write (since it is not allowed to make the write)", "username": "Tyler_Kaye" }, { "code": "{\n \"defaultRoles\": [\n {\n \"name\": \"read-write\", \n \"applyWhen\": {},\n \"read\": true,\n \"write\": true\n }\n ] \n}\n{\n \"defaultRoles\": [\n {\n \"name\": \"owner-read-write\", \n \"applyWhen\": {},\n \"read\": {\"owner_id\": \"%%user.id\"},\n \"write\": {\"owner_id\": \"%%user.id\"}\n }\n ] \n}\n", "text": "Thanks for the quick reply. I changed the permissions under sync to this and I’m able to write date:I’m still confused why the prior rule didn’t allow my write:Would I need to set owner_id prior to writing the objects to realm?", "username": "Harry_Netzer1" }, { "code": "\"owner_id\"== \"%%user.id\"{\"owner_id\": \"%%user.id\"}{ owner_id: \"abc\" }{\"owner_id\": \"%%user.id\"}", "text": "Hi, so it depends what your write was trying to do. Those permissions mean that on every write, we take the PreImage of the document and the PostImage and both of them have to have \"owner_id\"== \"%%user.id\" if they are not empty.The last bit is key so that you can “insert” a document with {\"owner_id\": \"%%user.id\"} but you canot update an existing document with { owner_id: \"abc\" } to be {\"owner_id\": \"%%user.id\"} because that would let it update a document it is not allowed to see or write to.", "username": "Tyler_Kaye" } ]
Changes reverted after trying to write any data
2022-08-08T12:12:06.891Z
Changes reverted after trying to write any data
3,456
null
[ "node-js", "connecting" ]
[ { "code": "MongoServerSelectionError: connect ECONNREFUSED ::1:27017127.0.0.1 localhost\n255.255.255.255 broadcasthost\n::1 localhost\n", "text": "I post here as a follow up of Econnrefused ::1:27017 as it is no longer possible to post replies there.When connecting to mongodb with MongoClient i get MongoServerSelectionError: connect ECONNREFUSED ::1:27017 when connecting to ‘mongodb://localhost:27017/’ but no error when connecting to ‘mongodb://127.0.0.1:27017/’.As @Stennie_X mention the problem is that MongoClient tries to connect with IPv6. It didn’t seem to me it could be that because in my /etc/hosts file localhost is configured for both IPv4 and IPv6:But with that configuration it seems that localhost will randomly redirect to IPv4 or v6, so the solution is to remove the line for IPv6.", "username": "Nicolas_Traut" }, { "code": "hosts", "text": "in hosts file, you can assign multiple names to a single IP, but assigning multiple IPs to a single name can be chaotic. in the case of multiple IP addresses, the first occurrence will be used. And in case you have both IPv4 and IPv6 assigned to the same name, a host OS with IPv6 enabled will use the first IPv6 address while the one with only IPv4 will use the first IPv4 address.", "username": "Yilmaz_Durmaz" }, { "code": "MongoClient/etc/hosts", "text": "In my case it is just the call of MongoClient inside a node instance, so not sure if that means that the host is IPv6 or IPv4, and the /etc/hosts file with multiple assignations for localhost is the one which came with my Mac.The most disturbing part is that after I boot my Mac it will use IPv4, and after some usage it will start to use IPv6…", "username": "Nicolas_Traut" }, { "code": "", "text": "I saw some posts noting that Mac chooses the faster one with the network you use but fails to switch to IPv4 if it decides to go with IPv6. I guess this is what happened in your case. I don’t own a Mac so I can’t say if this is bad as it sounds or has its advantages.This solution you have provided works for sure since you force name resolution to go for IPv4. Nice to have it. an alternative is suggested in the other discussion you started to bind mongod to both IPv4 and IPv6. have you tried to start your server that way instead of changing hosts file?", "username": "Yilmaz_Durmaz" }, { "code": "mongod.conf/usr/local/etc/mongod.confnet:\n ipv6: true\n bindIp: \"127.0.0.1,::1\"\n", "text": "Yes configuring mongod.conf (in my case /usr/local/etc/mongod.conf) withis an alternative solution to fix the issue.", "username": "Nicolas_Traut" } ]
Follow up of Econnrefused ::1:27017
2022-08-03T15:28:44.672Z
Follow up of Econnrefused ::1:27017
4,895
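For completeness, a small Node.js sketch of the client-side workaround from the top of the thread above: pinning the connection to IPv4 so name resolution cannot pick ::1 (the database name is a hypothetical placeholder):

    const { MongoClient } = require('mongodb');

    // 127.0.0.1 is always IPv4, so this connects even when `localhost`
    // resolves to ::1 while mongod only listens on IPv4.
    const client = new MongoClient('mongodb://127.0.0.1:27017/');

    async function main() {
      await client.connect();
      console.log(await client.db('test').command({ ping: 1 }));
      await client.close();
    }

    main().catch(console.error);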
null
[ "queries", "data-modeling", "java" ]
[ { "code": "var ast = parse('{\"foo\": \"bar\"}');\nassert.deepEqual(ast, {\n'pos': 'expression',\n'clauses': [\n{\n'pos': 'leaf-clause',\n'key': 'foo',\n'value': {\n'pos': 'leaf-value',\n'value': 'bar'\n}\n}\n]\n", "text": "I’m trying to find a parser (better on java) for any query that parses the query and creates an abstract syntax tree, something like that:});\ndoes mongoDb have a library doing that? (better for java)", "username": "to_ad" }, { "code": "", "text": "Hi @to_adPlease have a look at https://www.mongodb.com/docs/drivers/java/sync/current/fundamentals/data-formats/document-data-format-extended-json/ and see whether that suits your needs.Regards,\nJeff", "username": "Jeffrey_Yemin" } ]
Does MongoDB have a parser library
2022-08-08T12:30:15.194Z
Does MongoDB have a parser library
1,738
null
[ "sharding", "php" ]
[ { "code": "{\n\t\"set\" : \"rs0\",\n\t\"date\" : ISODate(\"2022-08-08T06:36:33.451Z\"),\n\t\"myState\" : 1,\n\t\"term\" : NumberLong(133),\n\t\"syncingTo\" : \"\",\n\t\"syncSourceHost\" : \"\",\n\t\"syncSourceId\" : -1,\n\t\"configsvr\" : true,\n\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\"majorityVoteCount\" : 2,\n\t\"writeMajorityCount\" : 2,\n\t\"optimes\" : {\n\t\t\"lastCommittedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1659940591, 1),\n\t\t\t\"t\" : NumberLong(133)\n\t\t},\n\t\t\"lastCommittedWallTime\" : ISODate(\"2022-08-08T06:36:31.072Z\"),\n\t\t\"readConcernMajorityOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1659940591, 1),\n\t\t\t\"t\" : NumberLong(133)\n\t\t},\n\t\t\"readConcernMajorityWallTime\" : ISODate(\"2022-08-08T06:36:31.072Z\"),\n\t\t\"appliedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1659940591, 1),\n\t\t\t\"t\" : NumberLong(133)\n\t\t},\n\t\t\"durableOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1659940591, 1),\n\t\t\t\"t\" : NumberLong(133)\n\t\t},\n\t\t\"lastAppliedWallTime\" : ISODate(\"2022-08-08T06:36:31.072Z\"),\n\t\t\"lastDurableWallTime\" : ISODate(\"2022-08-08T06:36:31.072Z\")\n\t},\n\t\"lastStableRecoveryTimestamp\" : Timestamp(1659940572, 1),\n\t\"lastStableCheckpointTimestamp\" : Timestamp(1659940572, 1),\n\t\"electionCandidateMetrics\" : {\n\t\t\"lastElectionReason\" : \"stepUpRequestSkipDryRun\",\n\t\t\"lastElectionDate\" : ISODate(\"2022-08-07T18:00:50.758Z\"),\n\t\t\"electionTerm\" : NumberLong(133),\n\t\t\"lastCommittedOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1659895248, 2),\n\t\t\t\"t\" : NumberLong(132)\n\t\t},\n\t\t\"lastSeenOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1659895248, 2),\n\t\t\t\"t\" : NumberLong(132)\n\t\t},\n\t\t\"numVotesNeeded\" : 2,\n\t\t\"priorityAtElection\" : 1,\n\t\t\"electionTimeoutMillis\" : NumberLong(10000),\n\t\t\"priorPrimaryMemberId\" : 2,\n\t\t\"numCatchUpOps\" : NumberLong(0),\n\t\t\"newTermStartDate\" : ISODate(\"2022-08-07T18:00:51.320Z\"),\n\t\t\"wMajorityWriteAvailabilityDate\" : ISODate(\"2022-08-07T18:00:52.676Z\")\n\t},\n\t\"electionParticipantMetrics\" : {\n\t\t\"votedForCandidate\" : true,\n\t\t\"electionTerm\" : NumberLong(132),\n\t\t\"lastVoteDate\" : ISODate(\"2022-08-07T18:00:48.115Z\"),\n\t\t\"electionCandidateMemberId\" : 2,\n\t\t\"voteReason\" : \"\",\n\t\t\"lastAppliedOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1659895244, 1),\n\t\t\t\"t\" : NumberLong(130)\n\t\t},\n\t\t\"maxAppliedOpTimeInSet\" : {\n\t\t\t\"ts\" : Timestamp(1659895246, 3),\n\t\t\t\"t\" : NumberLong(131)\n\t\t},\n\t\t\"priorityAtElection\" : 1\n\t},\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"name\" : \"mongodb-conf1:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 45347,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1659940591, 1),\n\t\t\t\t\"t\" : NumberLong(133)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2022-08-08T06:36:31Z\"),\n\t\t\t\"syncingTo\" : \"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"electionTime\" : Timestamp(1659895250, 1),\n\t\t\t\"electionDate\" : ISODate(\"2022-08-07T18:00:50Z\"),\n\t\t\t\"configVersion\" : 1,\n\t\t\t\"self\" : true,\n\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"name\" : \"mongodb-conf2:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 45345,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1659940591, 1),\n\t\t\t\t\"t\" : 
NumberLong(133)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1659940591, 1),\n\t\t\t\t\"t\" : NumberLong(133)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2022-08-08T06:36:31Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2022-08-08T06:36:31Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2022-08-08T06:36:31.539Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2022-08-08T06:36:32.860Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncingTo\" : \"mongodb-conf1:27017\",\n\t\t\t\"syncSourceHost\" : \"mongodb-conf1:27017\",\n\t\t\t\"syncSourceId\" : 0,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 1\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 2,\n\t\t\t\"name\" : \"mongodb-conf3:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 45338,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1659940591, 1),\n\t\t\t\t\"t\" : NumberLong(133)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1659940591, 1),\n\t\t\t\t\"t\" : NumberLong(133)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2022-08-08T06:36:31Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2022-08-08T06:36:31Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2022-08-08T06:36:31.519Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2022-08-08T06:36:33.304Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncingTo\" : \"mongodb-conf2:27017\",\n\t\t\t\"syncSourceHost\" : \"mongodb-conf2:27017\",\n\t\t\t\"syncSourceId\" : 1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 1\n\t\t}\n\t],\n\t\"ok\" : 1,\n\t\"$gleStats\" : {\n\t\t\"lastOpTime\" : Timestamp(0, 0),\n\t\t\"electionId\" : ObjectId(\"7fffffff0000000000000085\")\n\t},\n\t\"lastCommittedOpTime\" : Timestamp(1659940591, 1),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1659940591, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"xdvd4H1nUSIA2sIqQ8vxQHoaO+o=\"),\n\t\t\t\"keyId\" : NumberLong(\"7127557678649311233\")\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1659940591, 1)\n}```\n\n2. Here is the rs.config() for the shards:\n\n\t\t},\n\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\"votes\" : 1\n\t},\n\t{\n\t\t\"_id\" : 1,\n\t\t\"host\" : \"mongodb-shard1-02:27017\",\n\t\t\"arbiterOnly\" : false,\n\t\t\"buildIndexes\" : true,\n\t\t\"hidden\" : false,\n\t\t\"priority\" : 1,\n\t\t\"tags\" : {\n\n\t\t},\n\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\"votes\" : 1\n\t},\n\t{\n\t\t\"_id\" : 2,\n\t\t\"host\" : \"mongodb-shard1-03:27017\",\n\t\t\"arbiterOnly\" : false,\n\t\t\"buildIndexes\" : true,\n\t\t\"hidden\" : false,\n\t\t\"priority\" : 1,\n\t\t\"tags\" : {\n\n\t\t},\n\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\"votes\" : 1\n\t}\n],\n\"settings\" : {\n\t\"chainingAllowed\" : true,\n\t\"heartbeatIntervalMillis\" : 2000,\n\t\"heartbeatTimeoutSecs\" : 10,\n\t\"electionTimeoutMillis\" : 10000,\n\t\"catchUpTimeoutMillis\" : -1,\n\t\"catchUpTakeoverDelayMillis\" : 30000,\n\t\"getLastErrorModes\" : {\n\n\t},\n\t\"getLastErrorDefaults\" : {\n\t\t\"w\" : 1,\n\t\t\"wtimeout\" : 0\n\t},\n\t\"replicaSetId\" : ObjectId(\"62effe83cfa06cab70c2d2c0\")\n}\n\n\n3. 
Here is the mongos.log output when trying to do a connection:\n\n\nCan you please help me with some suggestions?", "text": "Hello,I followed the instruction from here: https://www.mongodb.com/docs/manual/tutorial/deploy-shard-cluster/#start-each-member-of-the-shard-replica-set to setup a shard.{\n“_id” : “rs1”,\n“version” : 1,\n“protocolVersion” : NumberLong(1),\n“writeConcernMajorityJournalDefault” : true,\n“members” : [\n{\n“_id” : 0,\n“host” : “mongodb-shard1-01:27017”,\n“arbiterOnly” : false,\n“buildIndexes” : true,\n“hidden” : false,\n“priority” : 1,\n“tags” : {}022-08-08T08:48:55.683+0200 I CONNPOOL [ShardRegistry] Connecting to mongodb-conf1:27017\n2022-08-08T08:48:55.690+0200 I CONTROL [LogicalSessionCacheRefresh] Failed to refresh session cache: ShardingStateNotInitialized: Cannot accept sharding commands if sharding state has not been initialized with a shardIdentity document\n2022-08-08T08:49:00.369+0200 I NETWORK [conn21] received client metadata from 10.135.169.16:33198 conn21: { driver: { name: “mongoc / ext-mongodb:PHP”, version: “1.16.2 / 1.7.4” }, os: { type: “Linux”, name: “Debian GNU/Linux”, version: “9”, architecture: “x86_64” }, platform: “PHP 7.4.4cfg=0x015156a8e9 posix=200809 stdc=201112 CC=GCC 6.3.0 20170516 CFLAGS=”\" LDFLAGS=“”\" }\n2022-08-08T08:49:20.617+0200 I COMMAND [conn21] command feeder.logs command: create { create: “logs”, capped: false, $db: “feeder”, lsid: { id: UUID(“9ac401d6-0352-423a-9c8e-4557be32ecc3”) }, $clusterTime: { clusterTime: Timestamp(1659941336, 1), signature: { hash: BinData(0, 26334C49AAEA28A44B3DFF3A9911AFBD09350A29), keyId: 7127557678649311233 } } } numYields:0 ok:0 errMsg:“Could not find host matching read preference { mode: \"primary\" } for set rs1” errName:FailedToSatisfyReadPreference errCode:133 reslen:306 protocol:op_msg 20220ms\n2022-08-08T08:50:16.066+0200 I CONNPOOL [ShardRegistry] Ending idle connection to host mongodb-conf1:27017 because the pool meets constraints; 1 connections to that host remain open\n2022-08-08T08:53:55.686+0200 I CONTROL [LogicalSessionCacheRefresh] Failed to refresh session cache: ShardingStateNotInitialized: Cannot accept sharding commands if sharding state has not been initialized with a shardIdentity document", "username": "Alexandru_Baciu" }, { "code": "sharding.clusterRoleshardsvrsharding:\n clusterRole: shardsvr\nreplication:\n replSetName: <replSetName>\nnet:\n bindIp: localhost,<ip address>\nmongod --shardsvr --replSet <replSetname> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>", "text": "Hello,Have you started your shard replica set using the correct configuration ?The guide that you linked to shows that sharding.clusterRole option to be put on shardsvr , either by establising correct configuration file:Or by starting mongod from command line (also using correct configuration):\nmongod --shardsvr --replSet <replSetname> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>Kind regards, Tin.", "username": "Tin_Cvitkovic" }, { "code": "# Where and how to store data.\nstorage:\n dbPath: /var/data/mongodb\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n# bindIp: 127.0.0.1\n bindIpAll: true\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\nsecurity:\n keyFile: /etc/mongodb_key\n\nreplication:\n replSetName: \"rs1\"\n\nsharding:\n clusterRole: 
shardsvr\n\n-- Sharding Status ---\n sharding version: {\n \t\"_id\" : 1,\n \t\"minCompatibleVersion\" : 5,\n \t\"currentVersion\" : 6,\n \t\"clusterId\" : ObjectId(\"5e4539795396520a99f4c822\")\n }\n shards:\n { \"_id\" : \"rs1\", \"host\" : \"rs1/mongodb-shard1-01:27017,mongodb-shard1-02:27017,mongodb-shard1-03:27017\", \"state\" : 1 }\n active mongoses:\n \"4.2.8\" : 1\n autosplit:\n Currently enabled: yes\n balancer:\n Currently enabled: yes\n Currently running: no\n Failed balancer rounds in last 5 attempts: 5\n Last reported error: Could not find host matching read preference { mode: \"primary\" } for set rs1\n Time of Reported error: Mon Aug 08 2022 10:36:17 GMT+0200 (CEST)\n Migration Results for the last 24 hours:\n No recent migrations\n databases:\n { \"_id\" : \"config\", \"primary\" : \"config\", \"partitioned\" : true }\n config.system.sessions\n shard key: { \"_id\" : 1 }\n unique: false\n balancing: true\n chunks:\n rs1\t1\n { \"_id\" : { \"$minKey\" : 1 } } -->> { \"_id\" : { \"$maxKey\" : 1 } } on : rs1 Timestamp(1, 0)\n { \"_id\" : \"feeder\", \"primary\" : \"rs1\", \"partitioned\" : true, \"version\" : { \"uuid\" : UUID(\"4062a851-c0f4-4cad-a63d-c07ee0888f8c\"), \"lastMod\" : 1 } }\n\n", "text": "Hello,Yes. Here is the mongod.conf file for replica set:Here is the sh.status() output command returned by mongos", "username": "Alexandru_Baciu" }, { "code": "sh.startBalancer();\n\n", "text": "It looks like the balancer is not started and even I’ve tried:it didn’t start", "username": "Alexandru_Baciu" }, { "code": "\n# Where and how to store data.\nstorage:\n dbPath: /var/data/mongodb\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n# bindIp: 127.0.0.1\n bindIpAll: true\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\nsecurity:\n keyFile: /etc/mongodb_key\n\nreplication:\n replSetName: \"rs0\"\n\nsharding:\n clusterRole: configsvr\n\n", "text": "Here is the config for conf servers:", "username": "Alexandru_Baciu" }, { "code": "", "text": "Have you correctly executed rs.addShard() commands ? Do you get any errors doing so ?", "username": "Tin_Cvitkovic" }, { "code": "", "text": "Hello, no errors,Please check here:", "username": "Alexandru_Baciu" }, { "code": "sh.enableSharding(dbname);", "text": "Basic question in case you forgot. You did execute sh.enableSharding(dbname);\nCan you try not using same names in replica set naming ? I am not that experienced in sharding tehniques and please understand that I am just trying to point out to some mistakes I’ve made to help you out Edit: You should provide information like MongoDB Version, and OS that you are using.", "username": "Tin_Cvitkovic" }, { "code": "", "text": "Are the config files you have shared complete?\nI don’t see clusterrole,replicasetname and configDB params\nPlease show complete config files of data replicaset,config server and mongos\nAlso share rs.status() of both data replica and config server replica", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thank you so much for your help.", "username": "Alexandru_Baciu" } ]
Failed to refresh session cache: ShardingStateNotInitialized: Cannot accept sharding commands if sharding state has not been initialized with a shardIdentity document
2022-08-08T06:59:33.290Z
Failed to refresh session cache: ShardingStateNotInitialized: Cannot accept sharding commands if sharding state has not been initialized with a shardIdentity document
4,334
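A hedged mongosh sketch for checking the exact condition named in the error above: whether a shard member actually holds a shardIdentity document (host and replica set names are taken from the thread; run the first command against the shard's primary and the second against mongos):

    // On mongodb-shard1-01:27017 -- a member that has been added to the
    // cluster stores a shardIdentity document in admin.system.version.
    db.getSiblingDB("admin").system.version.findOne({ _id: "shardIdentity" })

    // On mongos -- if the document is missing, (re)run addShard:
    sh.addShard("rs1/mongodb-shard1-01:27017,mongodb-shard1-02:27017,mongodb-shard1-03:27017")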
null
[ "aggregation", "dot-net" ]
[ { "code": " var query = from cabinet in filteredCabinet.AsQueryable()\n join meds in _med.AsQueryable() on cabinet.Id equals meds.MedicineCabinetId into joined\n where (cabinet.AccountProfileId == accountProfileId)\n select new { cabinet, joined };\n", "text": "I’m having issues with the joining of collections and coming back null when there is data.Is there something I’m doing wrong?", "username": "Annette_Varndell" }, { "code": "test.cabinets.Aggregate([{ \"$project\" : { \"_outer\" : \"$$ROOT\", \"_id\" : 0 } }, { \"$lookup\" : { \"from\" : \"medicines\", \"localField\" : \"_outer._id\", \"foreignField\" : \"MedicineCabinetId\", \"as\" : \"_inner\" } }, { \"$project\" : { \"cabinet\" : \"$_outer\", \"joined\" : \"$_inner\", \"_id\" : 0 } }, { \"$match\" : { \"cabinet.AccountProfileId\" : 42 } }, { \"$project\" : { \"cabinet\" : \"$cabinet\", \"joined\" : \"$joined\", \"_id\" : 0 } }])\nIEnumerable<Medicine>", "text": "Hi, @Annette_Varndell,Welcome to the MongoDB Community Forums. I understand that you’re having problems with a LINQ join query not returning the expected data. Based on your description, you have configured LINQ3. Rendering this query with LINQ3 produces the following MQL. (Your collection names may differ, but the remaining MQL should be the same.)Running this query on a collection with a single cabinet and 2 medicines within that cabinet returns the expected results - a cabinet along with an IEnumerable<Medicine> containing the two medicines. So your code is doing something different than my simple repro.In order to investigate further, please provide:Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Thanks for your reply. I’m using 2.17.1 driver. and it’s returning the cabinet but no medications. I will put together a repo.", "username": "Annette_Varndell" } ]
C# driver and linq v3 to join collections
2022-08-05T15:43:39.033Z
C# driver and linq v3 to join collections
3,901
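To check what the join should return independently of the driver, a simplified mongosh equivalent of the rendered MQL in the reply above can be run directly against the data (collection names as in the reply; the profile id 42 is a placeholder):

    db.cabinets.aggregate([
      { $match: { AccountProfileId: 42 } },
      { $lookup: {
          from: "medicines",
          localField: "_id",
          foreignField: "MedicineCabinetId",
          as: "joined"
      } }
    ])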
null
[ "aggregation" ]
[ { "code": "{\n _Id:objectId()\n CreatedOn:IsoDate(2022-05-12)\n PowerAvailiabilityTime:IsoDate(2022-05-12)\n stat:0\n}\n{\n _Id:objectId()\n CreatedOn:IsoDate(2022-05-12)\n PowerAvailiabilityTime:IsoDate(2022-05-12)\n stat:1\n}\n", "text": "I’m having a data in my collection as following,Now I want to get all stat:0 records in to one array and stat:1 records into one array, by using aggregation, kindly help me in this.", "username": "MERUGUPALA_RAMES" }, { "code": "", "text": "Please provide more sample documents, you shared only one of each case so it is not much for us to experiment with.Also share the expected results.", "username": "steevej" }, { "code": "/* 1 */\n{\n \"_id\" : ObjectId(\"61f3afa32bf8817538c898a1\"),\n \"stat\" : 0,\n \"time\" : NumberLong(1643359570),\n \"powerAvailabilityTime\" : ISODate(\"2022-01-28T08:46:10.000Z\"),\n \"temperatureLogId\" : \"61f3afa32\",\n \"dId\" : \"356849088\",\n \"month\" : \"Jan\",\n \"year\" : \"2022\",\n \"powerStringDate\" : \"2022-01-28\",\n \"storeId\" : NumberLong(0),\n \"storeName\" : \"\",\n \"stateId\" : 0,\n \"districtId\" : 0,\n \"blockId\" : 0,\n \"city\" : \"\",\n \"badgeName\" : \"\",\n \"updatedOn\" : ISODate(\"2022-01-28T08:56:03.022Z\"),\n \"createdOn\" : ISODate(\"2022-01-28T08:56:03.022Z\"),\n \"isDeleted\" : false,\n \n}\n\n/* 2 */\n{\n \"_id\" : ObjectId(\"61f3c9a22bf8817538c89b06\"),\n \"stat\" : 1,\n \"time\" : NumberLong(1643366213),\n \"powerAvailabilityTime\" : ISODate(\"2022-01-28T10:36:53.000Z\"),\n \"temperatureLogId\" : \"61f3c9a22b\",\n \"dId\" : \"356849088\",\n \"month\" : \"Jan\",\n \"year\" : \"2022\",\n \"powerStringDate\" : \"2022-01-28\",\n \"storeId\" : NumberLong(0),\n \"storeName\" : \"\",\n \"stateId\" : 0,\n \"districtId\" : 0,\n \"blockId\" : 0,\n \"city\" : \"\",\n \"badgeName\" : \"\",\n \"updatedOn\" : ISODate(\"2022-01-28T10:46:58.924Z\"),\n \"createdOn\" : ISODate(\"2022-01-28T10:46:58.924Z\"),\n \"isDeleted\" : false,\n \n}\n\n/* 3 */\n{\n \"_id\" : ObjectId(\"61fa749abe414854f990aae4\"),\n \"stat\" : 0,\n \"time\" : NumberLong(1643803135),\n \"powerAvailabilityTime\" : ISODate(\"2022-02-02T11:58:55.000Z\"),\n \"temperatureLogId\" : \"61fa749ab\",\n \"assetDId\" : \"201100027MR\",\n \"dId\" : \"356849088\",\n \"month\" : \"Feb\",\n \"year\" : \"2022\",\n \"powerStringDate\" : \"2022-02-02\",\n \"storeId\" : NumberLong(1),\n \"storeName\" : \"Agra DVS\",\n \"stateId\" : 573,\n \"districtId\" : 1802,\n \"blockId\" : 0,\n \"city\" : \"agra\",\n \"badgeName\" : \"DVS\",\n \"updatedOn\" : ISODate(\"2022-02-02T12:10:02.439Z\"),\n \"createdOn\" : ISODate(\"2022-02-02T12:10:02.439Z\"),\n \"isDeleted\" : false,\n \n}\n\n/* 4 */\n{\n \"_id\" : ObjectId(\"61fb7fee7eab2d12568a6d50\"),\n \"stat\" : 1,\n \"time\" : NumberLong(1643871511),\n \"powerAvailabilityTime\" : ISODate(\"2022-02-03T06:58:31.000Z\"),\n \"temperatureLogId\" : \"61fb7fed7eab\",\n \"assetDId\" : \"201100027MR\",\n \"dId\" : \"356849088\",\n \"month\" : \"Feb\",\n \"year\" : \"2022\",\n \"powerStringDate\" : \"2022-02-03\",\n \"storeId\" : NumberLong(2),\n \"storeName\" : \"Agra DVS\",\n \"stateId\" : 573,\n \"districtId\" : 1802,\n \"blockId\" : 0,\n \"city\" : \"agra\",\n \"badgeName\" : \"DVS\",\n \"updatedOn\" : ISODate(\"2022-02-03T07:10:38.156Z\"),\n \"createdOn\" : ISODate(\"2022-02-03T07:10:38.156Z\"),\n \"isDeleted\" : false,\n \n}\n\n/* 5 */\n{\n \"_id\" : ObjectId(\"61fcc7377eab2d12568a9b0a\"),\n \"stat\" : 0,\n \"time\" : NumberLong(1643955409),\n \"powerAvailabilityTime\" : ISODate(\"2022-02-04T06:16:49.000Z\"),\n 
\"temperatureLogId\" : \"61fcc7377eab2d\",\n \"assetDId\" : \"201100027MR\",\n \"dId\" : \"356849088\",\n \"month\" : \"Feb\",\n \"year\" : \"2022\",\n \"powerStringDate\" : \"2022-02-04\",\n \"storeId\" : NumberLong(3),\n \"storeName\" : \"Agra DVS\",\n \"stateId\" : 573,\n \"districtId\" : 1802,\n \"blockId\" : 0,\n \"city\" : \"agra\",\n \"badgeName\" : \"DVS\",\n \"updatedOn\" : ISODate(\"2022-02-04T06:27:03.592Z\"),\n \"createdOn\" : ISODate(\"2022-02-04T06:27:03.592Z\"),\n \"isDeleted\" : false,\n \n}\n\n/* 6 */\n{\n \"_id\" : ObjectId(\"61fd2d2c7da7233dd1e91979\"),\n \"stat\" : 0,\n \"time\" : NumberLong(1643981587),\n \"powerAvailabilityTime\" : ISODate(\"2022-02-04T13:33:07.000Z\"),\n \"temperatureLogId\" : \"61fd2d2c7da7233dd\",\n \"assetDId\" : \"201100027MR\",\n \"dId\" : \"356849088\",\n \"month\" : \"Feb\",\n \"year\" : \"2022\",\n \"powerStringDate\" : \"2022-02-04\",\n \"storeId\" : NumberLong(4),\n \"storeName\" : \"Agra DVS\",\n \"stateId\" : 573,\n \"districtId\" : 1802,\n \"blockId\" : 0,\n \"city\" : \"agra\",\n \"badgeName\" : \"DVS\",\n \"updatedOn\" : ISODate(\"2022-02-04T13:42:04.610Z\"),\n \"createdOn\" : ISODate(\"2022-02-04T13:42:04.610Z\"),\n \"isDeleted\" : false,\n \n}\n\n/* 7 */\n{\n \"_id\" : ObjectId(\"62025c822733a70dc9d10ff8\"),\n \"stat\" : 0,\n \"time\" : NumberLong(1644321229),\n \"powerAvailabilityTime\" : ISODate(\"2022-02-08T11:53:49.000Z\"),\n \"temperatureLogId\" : \"62025c822733a\",\n \"dId\" : \"356849088\",\n \"month\" : \"Feb\",\n \"year\" : \"2022\",\n \"powerStringDate\" : \"2022-02-08\",\n \"storeId\" : NumberLong(0),\n \"storeName\" : \"\",\n \"stateId\" : 0,\n \"districtId\" : 0,\n \"blockId\" : 0,\n \"city\" : \"\",\n \"badgeName\" : \"\",\n \"updatedOn\" : ISODate(\"2022-02-08T12:05:22.684Z\"),\n \"createdOn\" : ISODate(\"2022-02-08T12:05:22.684Z\"),\n \"isDeleted\" : false,\n \n}\n\n/* 8 */\n{\n \"_id\" : ObjectId(\"62027d722733a70dc9d1138e\"),\n \"stat\" : 1,\n \"time\" : NumberLong(1644329658),\n \"powerAvailabilityTime\" : ISODate(\"2022-02-08T14:14:18.000Z\"),\n \"temperatureLogId\" : \"62027d722733a70\",\n \"dId\" : \"356849088\",\n \"month\" : \"Feb\",\n \"year\" : \"2022\",\n \"powerStringDate\" : \"2022-02-08\",\n \"storeId\" : NumberLong(0),\n \"storeName\" : \"\",\n \"stateId\" : 0,\n \"districtId\" : 0,\n \"blockId\" : 0,\n \"city\" : \"\",\n \"badgeName\" : \"\",\n \"updatedOn\" : ISODate(\"2022-02-08T14:25:54.915Z\"),\n \"createdOn\" : ISODate(\"2022-02-08T14:25:54.915Z\"),\n \"isDeleted\" : false,\n \n}\n\n/* 9 */\n{\n \"_id\" : ObjectId(\"6203711c0b90c148626575ea\"),\n \"stat\" : 0,\n \"time\" : NumberLong(1644392090),\n \"powerAvailabilityTime\" : ISODate(\"2022-02-09T07:34:50.000Z\"),\n \"temperatureLogId\" : \"6203711c0b90c14\",\n \"dId\" : \"356849088\",\n \"month\" : \"Feb\",\n \"year\" : \"2022\",\n \"powerStringDate\" : \"2022-02-09\",\n \"storeId\" : NumberLong(0),\n \"storeName\" : \"\",\n \"stateId\" : 0,\n \"districtId\" : 0,\n \"blockId\" : 0,\n \"city\" : \"\",\n \"badgeName\" : \"\",\n \"updatedOn\" : ISODate(\"2022-02-09T07:45:32.508Z\"),\n \"createdOn\" : ISODate(\"2022-02-09T07:45:32.508Z\"),\n \"isDeleted\" : false,\n \n}\n\n/* 10 */\n{\n \"_id\" : ObjectId(\"62039b010b90c148626578e5\"),\n \"stat\" : 1,\n \"time\" : NumberLong(1644402766),\n \"powerAvailabilityTime\" : ISODate(\"2022-02-09T10:32:46.000Z\"),\n \"temperatureLogId\" : \"62039b010b90c\",\n \"dId\" : \"356849088\",\n \"month\" : \"Feb\",\n \"year\" : \"2022\",\n \"powerStringDate\" : \"2022-02-09\",\n \"storeId\" : NumberLong(6),\n 
\"storeName\" : \"Agra DH\",\n \"stateId\" : 573,\n \"districtId\" : 1802,\n \"blockId\" : 0,\n \"city\" : \"District Hospital\",\n \"badgeName\" : \"CCP\",\n \"updatedOn\" : ISODate(\"2022-02-09T10:44:17.715Z\"),\n \"createdOn\" : ISODate(\"2022-02-09T10:44:17.715Z\"),\n \"isDeleted\" : false,\n \n}\n", "text": "Thanks for replying Steevej, As you said following is the sample dataCurrently im using 4.4 version in my environment, im expecting all the records which contains “STAT:0” should come into one array, and the records which contails “STAT:1” should come into another seperate array Is it possible to split into two different arrays. I hope you get my concen.", "username": "MERUGUPALA_RAMES" }, { "code": "", "text": "Please read Formatting code and log snippets in posts and update your documents so that we can cut-n-paste into our system.", "username": "steevej" }, { "code": "", "text": "You need aggregation.A $match stage to keep only stat:0 and stat:1 documents.Then a $group stage that specifies _id:$stat and includes a $push to build the array.", "username": "steevej" }, { "code": "", "text": "Thank you Steevej, with your suggestions i am able to get the results as per my requirement. Thanks Again", "username": "MERUGUPALA_RAMES" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Getting two different values into different arrays
2022-08-03T19:47:21.714Z
Getting two different values into different arrays
3,769
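Spelling out the two suggested stages as a runnable mongosh pipeline (the collection name is a hypothetical placeholder):

    db.powerLogs.aggregate([
      // Keep only the documents with stat 0 or 1.
      { $match: { stat: { $in: [0, 1] } } },
      // One output document per stat value, each carrying its records.
      { $group: { _id: "$stat", records: { $push: "$$ROOT" } } }
    ])
    // Result: { _id: 0, records: [ ... ] } and { _id: 1, records: [ ... ] }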
null
[ "queries", "crud" ]
[ { "code": "db={\n Rankings: [\n {\n _id: ObjectId(\"62c66dc612296752b7c82cde\"),\n players: [\n {\n playerId: \"xyz1\",\n challengerId: \"abc1\",\n rank: 1\n },\n {\n playerId: \"abc1\",\n challengerId: \"xyz1\",\n rank: 2\n }\n ],\n \n }\n ]\n}\n[\n {\n \"_id\": ObjectId(\"62c66dc612296752b7c82cde\"),\n \"players\": [\n {\n \"challengerId\": null,\n \"playerId\": \"abc1\",\n \"rank\": 1\n },\n {\n \"challengerId\": null,\n \"playerId\": \"xyz1\",\n \"rank\": 2\n }\n ]\n }\n]\ndb.Rankings.bulkWrite([\n { updateOne : {\n //here the challenger is 62c2b79d966b72973fe52317, but he has won, so he\n //becomes 'player' in rank 1\n \"filter\" : { _id: ObjectId(\"62c66dc612296752b7c82cde\"),\n \n input: \"$players\",\n as: \"players\",\n cond: { $eq: [ \"$$players.ranking\", 1 ] }},\n\n \"update\" : { $set : { \"players.playerId\": ObjectId(\"62c2b79d966b72973fe52317\"),\n \"players.challengerId\": null\n } }\n } }\n \n ,\n { updateOne : {\n //player from rank 1, is now player in rank 2\n \"filter\" : { _id: ObjectId(\"62c66dc612296752b7c82cde\"), \n input: \"$players\",\n as: \"players\",\n cond: { $eq: [ \"$$players.ranking\", 2 ] }},\n\n \"update\" : { $set : { \"players.playerId\": ObjectId(\"62c2b79d966b72973fe52316\"),\n \"players.challengerId\": null\n } }\n } }\n\n ])\n{\n \"acknowledged\": true,\n \"insertedCount\": 0,\n \"insertedIds\": {},\n \"matchedCount\": 0,\n \"modifiedCount\": 0,\n \"deletedCount\": 0,\n \"upsertedCount\": 0,\n \"upsertedIds\": {}\n}\n", "text": "I have the following (simplified) data:and I want to do a bulkWrite in the same document against rank 1 and 2 with expected output:For reference/context I believe my closest attempt so far is (simplified ids are now full ObjectIds, but the same swapping logic applies):This returns:What should my “filter” syntax be to filter against rank in the players object array and achieve these updates? thanks …", "username": "freeross" }, { "code": "\"filter\": { _id: ObjectId(\"62c66dc612296752b7c82cde\") },\n\"update\": { $set: { \"players.$[playerId]\" : ObjectId(\"62c2b79d966b72973fe52317\")} },\n\"arrayFilters\": [ { playerId : ObjectId(\"62c2b79d966b72973fe52316\") } ]\n", "text": "The following is, I believe, closer, as “matchedCount” is now 1 (“modifiedCount” is still 0):what do I need to change to set the playerId and therefore get ‘modifiedCount’ to 1? 
…", "username": "freeross" }, { "code": "db.rankings.bulkWrite([\n { updateMany : {\n //challenger, identified by ObjectId, has won, so his Id\n //becomes 'player.playerId' in rank 1\n //arrayFilters are used to match to the correct object to be updated in the player array\n \"filter\": { _id: ObjectId(\"62c66dc612296752b7c82cde\") },\n \"update\": \n { $set: { \"players.$[player].playerId\" : ObjectId(\"62c2b79d966b72973fe52317\"),\n \"players.$[player].challengerId\" : null,\n \"players.$[player].rank\" : 1}\n },\n \"arrayFilters\": \n [\n {\n \"player.playerId\": \n { \n $eq: ObjectId(\"62c2b79d966b72973fe52316\")\n }\n ,\n \"player.challengerId\": \n { \n $eq: ObjectId(\"62c2b79d966b72973fe52317\")\n }\n ,\n \"player.rank\": \n { \n $eq: 1\n }\n }\n ]\n }}\n ,\n { updateMany : {\n //player from rank 1, is now player in rank 2\n \"filter\": { _id: ObjectId(\"62c66dc612296752b7c82cde\") },\n \"update\": \n { $set: { \"players.$[player].playerId\" : ObjectId(\"62c2b79d966b72973fe52316\"),\n \"players.$[player].challengerId\" : null,\n \"players.$[player].rank\" : 2}\n },\n \"arrayFilters\": \n [\n {\n \"player.playerId\": \n { \n $eq: ObjectId(\"62c2b79d966b72973fe52317\")\n }\n ,\n \"player.challengerId\": \n { \n $eq: ObjectId(\"62c2b79d966b72973fe52316\")\n }\n ,\n \"player.rank\": \n { \n $eq: 2\n }\n }\n ]\n }}\n\n ])\n", "text": "I fixed this thanks to this article in StackOverflow.\nHere is the code for my case as detailed above:", "username": "freeross" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Bulk write filter on an object array
2022-08-05T12:06:29.251Z
Bulk write filter on an object array
2,642
null
[ "queries" ]
[ { "code": "", "text": "Hi,\nI install the mongoDB community server one and I also added bin folder to path.\nBut in bin folder i did not found the mongo.exe.\nThere is mongod.exe but not mongo.exe.\nSo I am not able to make sure how to install it properly.\nCan anyone please help me in this.\nThanks", "username": "Gaurav_Singh6" }, { "code": "", "text": "\nmongo Bin folder996×807 31.1 KB\n", "username": "Gaurav_Singh6" }, { "code": "mongomongomongoshmongosh.msimongoshmongoshInstallCompassbin", "text": "Welcome to the MongoDB community @Gaurav_Singh6 !The legacy mongo shell is no longer included in server packages as of MongoDB 6.0. mongo has been superseded by the new MongoDB Shell (mongosh) which is available separately: Download and Install mongosh.The MongoDB Compass admin GUI should be installed by default if you are using the Windows .msi installer. Compass includes mongosh as an Embedded MongoDB Shell so you may not need the separate mongosh download unless you prefer to work from a command line environment.If Compass hasn’t been installed yet, you should be able to install it by double-clicking on the InstallCompass PowerShell script in your MongoDB server bin directory (this is the second item in your directory listing above). You can also Download MongoDB Compass from the MongoDB Download Centre.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo.exe file not found in "C:\Program Files\MongoDB\Server\6.0\bin". I also install the tools but did not found mongo.exe
2022-08-08T04:18:32.376Z
Mongo.exe file not found in &ldquo;C:\Program Files\MongoDB\Server\6.0\bin&rdquo;. I also install the tools but did not found mongo.exe
15,921
null
[ "transactions", "change-streams" ]
[ { "code": "", "text": "I’m thinking of using MongoDB for an application that requires normalized data because it needs to access most entities separately. So a lot of it transactions will involve multiple documents.I’d like to use change streams to monitor all transactions and decide whether data needs to be sent to another service. But if several documents are transacted as a unit, they need to be sent as a unit to maintain consistency.From the MongoDB documentation, it looks like you can open a change stream on the entire database, but change events seem to be triggered separately for each collection. So it looks like a multi-document transaction will trigger several events for different collection. But what I really want is an event for the entire transaction. Is that possible with MongoDB?If it isn’t, I guess I could make the application duplicate the contents of each transaction to a dedicated collection, and then open a change stream on that collection. But that’s a lot of overhead.", "username": "Denis_Vulinovich1" }, { "code": "", "text": "Hi @Denis_Vulinovich1 and welcome to the community!!But what I really want is an event for the entire transaction. Is that possible with MongoDB?We do not have the supported feature as of today. The change stream is still available on the database level.\nThe following Server Ticket falls in the same line as per your requests. Please keep tracking for further updates.Also, depending on what your use case is, if you match the transaction ID with the event of the collection to the database. But please note that this would entirely depend on your use case.Thanks\nAasawari", "username": "Aasawari" } ]
Using change streams with multi-document transactions
2022-07-22T21:51:58.722Z
Using change streams with multi-document transactions
2,587
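A hedged Node.js sketch of the matching idea from the reply above: change events produced inside a multi-document transaction carry the same lsid and txnNumber, so a database-level stream can group them, with the caveat from the thread that no commit marker is emitted, so deciding when a group is complete is left to the application. The database name and the flushing policy are assumptions:

    const { MongoClient } = require('mongodb');

    async function watchByTransaction(uri) {
      const client = await MongoClient.connect(uri);
      const stream = client.db('app').watch();   // hypothetical database name
      const byTxn = new Map();                   // "lsid:txnNumber" -> buffered events

      for await (const event of stream) {
        if (event.txnNumber === undefined) {
          // Non-transactional write: forward immediately (policy not shown).
          continue;
        }
        // Events from one multi-document transaction share lsid and txnNumber.
        const key = JSON.stringify(event.lsid) + ':' + event.txnNumber;
        if (!byTxn.has(key)) byTxn.set(key, []);
        byTxn.get(key).push(event);              // later: send the group as one unit
      }
    }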
null
[ "dot-net", "app-services-user-auth" ]
[ { "code": "", "text": "Hi, in my .NET project I upgrade to Realm 10.15.0I have a error in login:authentication via ‘custom-token’ is unsupportedI use a single email/password login with no problem before 10.15.0I downgrade to 10.14.0 and all woks fineCan anyone help me? If I update in production client not workThanks", "username": "Luigi_De_Giacomo" }, { "code": "", "text": "Can you open an issue here: Issues · realm/realm-dotnet · GitHub and fill in all relevant sections - particularly the ones that ask about code snippets and the actual error message you’re getting.", "username": "nirinchev" } ]
.NET Realm 10.15.0 Error Login
2022-08-08T00:36:23.484Z
.NET Realm 10.15.0 Error Login
1,754
https://www.mongodb.com/…_2_1024x601.jpeg
[ "aggregation" ]
[ { "code": "{\n_id: ...,\nkillerID: 'wraith',\n killerPerks: [\n 'tinkerer',\n 'pop goes the weasel',\n 'barbecue and chili',\n 'fearmonger'\n ],\n...\n}\nkillerPerkskillerIDkillerIDkillerID", "text": "I have a collection of stats for game matches. There are 28 distinct strings for ‘killerID’, each document selecting a single one. Also, each document has an array of 4 strings, ‘killerPerks’, chosen from a pool of 97 choices. For example,I created a heatmap to plot the intensity of killerPerks per each killerID over a range of all matches; this looks fine.\nMy issue is the X-Axis only shows a maximum of 8 entries out of 28, and I am unsure how to format this to include more entries, or to create a curated list of killerIDs of my choosing.Is the 8 items on the x-axis a Mongo Charts default? I don’t see any information in the docs about this.\nWould I need to create separate charts and aggregations for 8 distinct killerIDs at a time?Thanks for reading through this, just started MongoDB University a few weeks ago and trying to learn as I go.\nScreen Shot 2022-08-06 at 10.14.533344×1964 356 KB\n", "username": "Anon_Rob" }, { "code": "", "text": "Hi @Anon_Rob -There’s nothing inherit to the heatmap chart type that limits the number of X axis categories. For example here is a simple chart from the Movies sample dataset with a much larger number of categories.\nimage1634×782 78.7 KB\nMy guess is that there’s something about your data or query which means that fewer categories are being returned than you might expect. You can use the […] menu to retrieve the full aggregation query used by Charts and test it in a tool like Compass or mongosh to try to get a better understanding of what’s going on.HTH\nTom", "username": "tomhollander" } ]
Heatmap - How to format distinct values on X-axis?
2022-08-06T17:47:17.403Z
Heatmap - How to format distinct values on X-axis?
2,207
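A mongosh sketch of the kind of aggregation behind such a heatmap, one count per (killerID, perk) cell, which, per the reply above, is also a quick way to check how many killerID categories the query really returns (the collection name is a placeholder):

    db.matches.aggregate([
      // One document per (match, perk) pair.
      { $unwind: "$killerPerks" },
      // Intensity for each (killerID, perk) cell of the heatmap.
      { $group: {
          _id: { killer: "$killerID", perk: "$killerPerks" },
          count: { $sum: 1 }
      } }
    ])

    // Sanity check: how many distinct killerID categories exist at all?
    db.matches.distinct("killerID").length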
null
[ "node-js" ]
[ { "code": "", "text": "Hi everyone!I was sent here from The Odin Project, and I’m doing M001 and finding it interesting. I can see some cool uses for MongoDB in some of the projects I’d like to go after.Has anyone done a clean sweep of the courses? I was checking them out and I like the look of M220JS - it seems to cover user registration via Node.js.It’s cool that all this is free. ", "username": "Adam_Parr" }, { "code": "", "text": "Welcome to the MongoDB Community @Adam_Parr!Does the Odin Project include MongoDB as part of their courses? What Odin learning path are you following?MongoDB University also has learning paths with recommended courses:Learning Path for MongoDB DevelopersLearning Path for MongoDB Database AdministratorsM220JS covers the basics of application development with the Node.js driver. There are complementary courses in the Developer Path like M320 (Data Modelling) and M121 (Aggregation Framework) that will give you more insight into data model design and data analysis/transformation.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hey Stennie! Thanks for your welcome!It does indeed! I’m following the Node/Express path, and M001 is part of that. Thanks for the recommendations, I’ll check them out!", "username": "Adam_Parr" } ]
Adam from the UK!
2022-07-24T12:14:25.248Z
Adam from the UK!
2,356
null
[ "graphql", "atlas-triggers" ]
[ { "code": "", "text": "i am having problem when i execute a graphql mutations to a document, the function trigger will detect changes and run some background processing. unfortunately when i update using graphql , it didn’t trigger, but if i update using mongo then it will trigger the processing.", "username": "woon_chien_heng" }, { "code": "", "text": "I have no problem triggering on graphql mutations. check logs to debug!", "username": "Egidio_Caleiro" }, { "code": "", "text": "I am having the same problem. I have a trigger that should run on insert and run a function to insert a document in a different collection. The function works from the console, so I know my function code is good, but the trigger isn’t firing when I insert to the trigger collection with GraphQL.Edit: actually it looks like Egidio was right, the GraphQL mutation is tripping the Trigger, but it’s throwing an error even though the Function works when ran from the editor.", "username": "Brandon_Long" }, { "code": "", "text": "I have to extend my previous answer.\nUpsert operations may be tricky if one wants to log change events.\nTo log changeEvent, a upsert op must turns to be an update op. But this does not seems to be the case when using GraphQL upsert op that turns out to be an update. You won’t get changeEvent (what I think is a issue). So if one wants to get changeEvent in a GraphQL event triggered function, one must to use a GraphQL update op.\nUsing SDK to execute an update op using {upsert: true}, if the op turns out to be an update, it will give your function the changeEvent.", "username": "Egidio_Caleiro" } ]
Do Realm GraphQL mutations trigger functions in Realm
2021-08-16T11:04:25.352Z
Do Realm GraphQL mutations trigger functions in Realm
4,265
null
[ "compass" ]
[ { "code": "", "text": "I tried a lot of times to run the mongo command on cmd but it shows only the command is not recognized as the internal and external command, even after setting the environment variable. I can run the mongod command. And also the mongo compass is not getting installed in my computer.", "username": "S.K_kansya" }, { "code": "", "text": "May be the path is not updated properly?\nDid you try to run mogod from bin dir?\nWhat error are you getting with Compass installation?\nPlease show screenshots for mongod and Compass issues", "username": "Ramachandra_Tummala" }, { "code": "mongomongoshmongo", "text": "Hi @S.K_kansya and welcome to the MongoDB Community forums. If you installed MognoDB 6.0, you will not have a mongo command available. I just downloaded the tarballs for both 5.0.10 and 6.0.0 on my Mac and expanded them:If you have version 6.0.x then you will want to download the mongosh command line tool which has replaced the older mongo command line tool.", "username": "Doug_Duncan" } ]
MongoDB is not working
2022-08-07T08:17:36.179Z
MongoDB is not working
3,740
null
[ "replication" ]
[ { "code": "rs.reconfig(cfg)cfg = rs.conf();\ncfg.version=1;\nrs.reconfig(cfg);\nrs.reconfig(cfg)\n{\n\t\"operationTime\" : Timestamp(1659609419, 4),\n\t\"ok\" : 0,\n\t\"errmsg\" : \"version field value of 2147483648 is out of range\",\n\t\"code\" : 103,\n\t\"codeName\" : \"NewReplicaSetConfigurationIncompatible\",\n\t\"$gleStats\" : {\n\t\t\"lastOpTime\" : Timestamp(0, 0),\n\t\t\"electionId\" : ObjectId(\"7fffffff0000000000000045\")\n\t},\n\t\"lastCommittedOpTime\" : Timestamp(1659609419, 4),\n\t\"$configServerState\" : {\n\t\t\"opTime\" : {\n\t\t\t\"ts\" : Timestamp(1659609434, 2),\n\t\t\t\"t\" : NumberLong(27)\n\t\t}\n\t},\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1659609434, 2),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"hYlj4D5wuCoTuTjjLwNQRyqxbvQ=\"),\n\t\t\t\"keyId\" : NumberLong(\"7107200311055351831\")\n\t\t}\n\t}\n}\n", "text": "HiPreviously , rs.reconfig was ran with force: true and now I am having an arbitrary large number as a version number. I need to reset this number to a number within the allowed range.I tried to run rs.reconfig(cfg) where cfg has the following:I confirmed cfg now has version = 1 . However, after rs.reconfig(cfg) version reverted back to old version number and I keep getting the out of range issue as beforeI don’t know why it’s ignoring the version number I supplied.", "username": "Reab_AB" }, { "code": "", "text": "Hi @Reab_AB,It sounds like you are experiencing an issue similar to replSetReconfig force generates too high version number.It is expected that a force reconfiguration will increment the replica set config version but\nthat should be in the order of 10s or 100s of thousands, not millions. Replica set members compare the version number and replica set name to determine if their configuration is stale, so you cannot force an older version number.Can you provide more details on your environment:Thanks,\nStennie", "username": "Stennie_X" }, { "code": "rs.reconfig()versionterm", "text": "If we look at the documentation for rs.reconfig() we see the following text:Replica set members propagate their replica configuration via heartbeats. Whenever a member learns of a configuration with a higher version and term , it installs the new configuration.Since the server only looks for larger numbers for updating the new configuration, it will not update the version to a lower number as that could cause issues.There are ways of rebuilding your replica set without losing your data. If not done properly this could result in loss of data, or a complete failure of your database system. You will want to test any potential process for resetting the replica set version thoroughly before running on a database that is storing data that you care about.It will be interesting to see the answers to the questions that Stennie asked above.", "username": "Doug_Duncan" }, { "code": "", "text": "Hi @Stennie_X\nMongodb Server version is 4.4\nCurrent version of replica set is 2147483648\nO/S oracle linux 7.9\nI am using Kubernetes operator, so this is the result of several rs.reconfig with force: true. Now the operator is fixed and it’s not using force flag, but I need to reset the version number in the replica setThanks", "username": "Reab_AB" }, { "code": "", "text": "How to rebuild the replica set? I have a sharded replica set.", "username": "Reab_AB" } ]
Reset replica set version number using rs.reconfig()
2022-08-04T10:56:02.664Z
Reset replica set version number using rs.reconfig()
2,761
null
[]
[ { "code": "sudosudo", "text": "I’m using Mac M1 and I followed the steps in the documentation to install MongoDB 5.0 but the step to start the service doesn’t work. Weirdly the log file also isn’t updated. But when I run the same command with sudo then the service starts and the log also gets updated. This issue wasn’t there in MongoDB 4.2.Is there a way I can start MongoDB 5.0 without sudo?", "username": "Niraj_Nandish" }, { "code": "brewmongodmongod --dbpath <database file path> --logpath <database log path>sudosudo", "text": "Hi @Niraj_Nandish, and welcome to the MongoDB Community forums.Is there a way I can start MongoDB 5.0 without sudo ?I have an Intel based Mac and I run without sudo. I’m assuming that your user doesn’t have permissions to write to the log, and possibly the data, files and that’s causing the issues.", "username": "Doug_Duncan" }, { "code": "brewbrewSuccessfully started (label: [email protected])Bootstrap failed: 5: Input/output error\nTry re-running the command as root for richer errors.\nError: Failure while executing; `/bin/launchctl bootstrap gui/501 /Users/niraj/Library/LaunchAgents/[email protected]` exited with 5.\nmongodbrew services start mongodb/brew/[email protected]\nmongod --dbpath <database file path> --logpath <database log path>{\"t\":{\"$date\":\"2022-08-07T04:49:37.098Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20697, \"ctx\":\"-\",\"msg\":\"Renamed existing log file\",\"attr\":{\"oldLogPath\":\"/opt/homebrew/var/log/mongodb/mongo.log\",\"newLogPath\":\"/opt/homebrew/var/log/mongodb/mongo.log.2022-08-07T04-49-37\"}}\n", "text": "How did you install MongoDB on your Mac? Via brew or manually installing it?I installed it using brewAre you seeing any errors written to the terminal window?The first time I run the command to start the brew service, it returns Successfully started [email protected] (label: [email protected]) even though it has failed to start. On the second run it gives the following errorHow are you trying to start the mongod process?I’m starting the process with the following commandCan you run mongod --dbpath <database file path> --logpath <database log path> from the command line without issue, or do you get an error?I get the following output, not sure if it’s an error or notI’m assuming that your user doesn’t have permissions to write to the log, and possibly the data, files and that’s causing the issuesMongoDB was working fine before with 4.2 and yesterday when I upgrade it to 5.0 it stopped working my user. If I downgrade back to 4.2 then it’s working fine.", "username": "Niraj_Nandish" }, { "code": "root", "text": "So somehow the permission of some data file and the mongo.log file had been changed to root user. Changing them back fixed my issue. Weird that it broke for 5.0 and not for 4.2.Thanks for the help @Doug_Duncan", "username": "Niraj_Nandish" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB 5.0 starts only as root
2022-08-06T20:29:19.876Z
MongoDB 5.0 starts only as root
2,571
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "Db.getcollection(‘da’). Count({year:{2021}})its a huge record needs to delete as batches of 10k or 20k. can we do that?", "username": "Ashik_D" }, { "code": "", "text": "{year:{2021}}Seems to generate a syntax error.Why do you want to batch your deletes?Please share sample documents, some that need to be deleted and some that should not.If you have a year you probably have month too. You could delete one month at a time rather than a year at a time.", "username": "steevej" }, { "code": "", "text": "Db.getcollection(‘da’). Count({year:{$lt:2021}})Actually this was for finding count of record its about 50Million. So we cant delete that much at a time. we dont need to insert other conditions. Just delete those all records is enough.So what u suggest is delete per month right?", "username": "Ashik_D" }, { "code": "", "text": "Try to use below commandvar bulk = db.collection.initializeUnorderedBulkOp();\nbulk.find({timestamp: {$lte: new Date(“2020-10-31T23:59:59Z”)}}).remove();\nbulk.execute();I did that on my production.Note: You need to run when you env. is quite.", "username": "Prince_Das" }, { "code": "", "text": "So what u suggest is delete per month right?yes, by restricting the delete to a given month you can easily distribute the work over time.another solution would be to use aggregation to $out into a temp. collection, drop the original, the move back the temp. collection into the original. that might be faster if you keep less documents than the number you want to delete. you might even reclaim more disk space.", "username": "steevej" }, { "code": "", "text": "Not that this suggestion would help out much at this time, but if you know that records will not be valid after a certain period of time and you’re just going to delete them, you could look at using a TTL index and let the database engine remove the records after a certain period of time based on a date field. At high volumes of data this might not be the right approach (you would need to test as best as you can), but in some scenarios it might work well for you.", "username": "Doug_Duncan" } ]
I wanted to delete millions of old records.date is a field we can use that
2022-08-04T06:27:03.521Z
I wanted to delete millions of old records.date is a field we can use that
4,493
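A minimal mongosh sketch of the month-by-month batched delete steevej suggests; the collection name ('da') comes from the thread, while the `timestamp` field and the date range are assumptions borrowed from Prince_Das's bulk example:

```javascript
// Delete old documents one month at a time, so no single deleteMany
// has to remove all ~50M documents at once.
// `timestamp` and the range below are illustrative; adjust to your schema.
const start = new Date("2019-01-01T00:00:00Z");  // assumed earliest data
const cutoff = new Date("2021-01-01T00:00:00Z"); // delete everything before 2021

let from = start;
while (from < cutoff) {
  // first day of the next month (UTC)
  const to = new Date(Date.UTC(from.getUTCFullYear(), from.getUTCMonth() + 1, 1));
  const res = db.da.deleteMany({ timestamp: { $gte: from, $lt: to } });
  print(`deleted ${res.deletedCount} documents for month starting ${from.toISOString()}`);
  from = to;
}
```

Each iteration is a bounded, index-friendly delete, which keeps the working set and oplog pressure small and lets the work be spread over quiet periods.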
https://www.mongodb.com/…a677cdb167e1.png
[ "compass" ]
[ { "code": "", "text": "Hello,I am trying to query the name in a list of nested Objects that have different field names.\nEvery example has a name field, but I am unable to filter on it. I’ve tried “tasks…name” but that does not work in Compass - { “tasks…name”: “restCall” }.Thanks,\nAustin", "username": "Austin_Summers" }, { "code": "", "text": "Hello, @Austin_Summers,The query will not provide a filtered result in the nested array, for that you have to use an aggregation query with $filter operator to filter the nested array by providing your conditions.", "username": "turivishal" }, { "code": "", "text": "Your example has all field names perfectly aligned. I need a way around that.", "username": "Austin_Summers" }, { "code": "", "text": "The difficulty is your schema. The fact that you repeat the field name of Tasks as a field inside the subdocuments is a clue about an issue with the schema. Tasks should be an array since the field name is sufficient to identify the task. With an array you would have a schema that uses the attribute pattern.This being out of my chest, see if $objectToArray can perform the transformation you need to use $filter as hinted by @turivishal.", "username": "steevej" } ]
Query common name in objects with different names
2022-08-03T15:40:44.211Z
Query common name in objects with different names
1,716
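A hedged sketch of the $objectToArray plus $filter approach steevej hints at; the collection name and the matched value are illustrative, and the `tasks` shape is taken from the question:

```javascript
// Turn the dynamically-keyed `tasks` object into an array of {k, v}
// pairs, then keep only entries whose nested name matches.
db.mycoll.aggregate([
  { $set: { taskEntries: { $objectToArray: "$tasks" } } },
  { $set: {
      taskEntries: {
        $filter: {
          input: "$taskEntries",
          as: "t",
          cond: { $eq: ["$$t.v.name", "restCall"] } // v = the subdocument value
        }
      }
  } },
  // drop documents where nothing matched
  { $match: { "taskEntries.0": { $exists: true } } }
])
```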
null
[]
[ { "code": "molly@xxxx:/etc$ mongod --dbpath /var/lib/mongodb\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.768+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.769+08:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.770+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.770+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":6319,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"Francisco\"}}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.770+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.15\",\"gitVersion\":\"bc17cf2c788c5dda2801a090ea79da5ff7d5fac9\",\"openSSLVersion\":\"OpenSSL 1.1.1 11 Sep 2018\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu1804\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.770+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"18.04\"}}}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.770+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"storage\":{\"dbPath\":\"/var/lib/mongodb\"}}}}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.770+08:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Permission denied\"}}}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.770+08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":10000}}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.771+08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.771+08:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.771+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.771+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.771+08:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.771+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.771+08:00\"},\"s\":\"I\", 
\"c\":\"STORAGE\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.771+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.771+08:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.771+08:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down full-time data capture\"}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.771+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-08-05T23:29:53.771+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":48}}\n", "text": "I ran mongodb and failed with below Error.I don’t know how to solve it.Pls kindly help.", "username": "Molly_She" }, { "code": "mongod", "text": "Hi @Molly_She and welcome to the MongoDB community.Can you please supply to following information so that we might be able to help you?", "username": "Doug_Duncan" }, { "code": "", "text": "Hi Doug:\nThanks for replying me. I ran this on Ubuntu18.04 as I tried to set up a Yapi server. It asked me to setup MongoDb.\nI followed below guide to install the Mongodb. The only difference is I used 4.4 version for Mongodb installation.Learn how to set up MySQL MongoDB, PostgreSQL, SQLite, Microsoft SQL Server, or Redis on the Windows Subsystem for Linux.\n( Install MongoDB part including ’ Add the init script to start MongoDB as a service’ part)\nThis is the first time I ran MongoDB on my machine as I just installed it.", "username": "Molly_She" }, { "code": "ls -alh /tmpmongodb-27017.sock", "text": "Thanks for the info Molly!I don’t have a Windows box to test things out under WSL, but I’m wondering what the results of running ls -alh /tmp return for you (post the complete output)? Do you see a file with a name of mongodb-27017.sock?", "username": "Doug_Duncan" }, { "code": "", "text": "This file ‘/tmp/mongodb-27017.sock’ can be seen. And when I met this ,I removed it and restart mongodb.My colleague give me below solution:\n1.ps -eaf | grep mongodb 2.service mongodb stop 3.check the pid listed again. 4.kill the pid. 5.Then start db by using sudo mongod --dbpath /var/lib/mongodb instead of service mongodb start.\n\ne3b335d29d9067ff1c4214477adb4776_1148×131 5.86 KB\n\nThen the issue solved.", "username": "Molly_She" }, { "code": "", "text": "I’m glad you got things resolved Molly!", "username": "Doug_Duncan" } ]
I have the issue with 'Error setting up listener","attr":{"error":{"code":9001,"codeName":"SocketException","errmsg":"Permission denied"
2022-08-05T16:01:33.525Z
I have the issue with &lsquo;Error setting up listener&rdquo;,&rdquo;attr&rdquo;:{&ldquo;error&rdquo;:{&ldquo;code&rdquo;:9001,&rdquo;codeName&rdquo;:&rdquo;SocketException&rdquo;,&rdquo;errmsg&rdquo;:&rdquo;Permission denied&rdquo;
7,837
null
[ "java", "atlas-cluster" ]
[ { "code": "Failed looking up TXT record for host ****.mongodb.net", "text": "Hi all,\nI’m failing to connect to my mongo atlas instance. Whenever I try to connect using the connection string provided by Atlas, I keep getting this exception:\nFailed looking up TXT record for host ****.mongodb.net.I’ve attempted to change my DNS server as recommended in this post but that didn’t resolve the issue.I’m using the KMongo 4.6.0 which internally uses 4.6.0 mongo java driver.Would anyone be able to help?", "username": "Joseph_Magara" }, { "code": "", "text": "If you switched DNS provider and you still have a DNS issue then your URI is really wrong.However, we cannot help you if you share a redacted version with the only important part replaced with stars. We need to see the real URI to figure out what is wrong.If you really have stars in your URI, this is the error.", "username": "steevej" }, { "code": "mongodb+srv://<username>:<password>@trellis-uat.tamwc.mongodb.net/?retryWrites=true&w=majority<username><password>", "text": "Hi Steeve, thanks for your response.Regarding the URI, here is my real uri:Thats the connection string I am given from the Atlas web portal. I’ve attached an image below for your reference. I do swap out the <username> & <password> parts of the URI for an actual username and password but yeah, it doesn’t work. What’s strange is I’ve used this connection string for the last two years or so and it’s worked fine up till about two or three months ago. Furthermore, I have deployed my application to external servers and it is running well on those servers. It can connect to the Mongo database even now but when I run the application on my PC, it fails to connect and keeps throwing the aforementioned exception.I’ve tried changing the DNS on my computer (mac), I’ve tried using a VPN to when attempting to connect, I’ve verified that the 27017 is open, but yeah, the issue still persists, I cannot connect to the mongo instance when running my application on IntelliJ.One final thing, I am able to connect to my atlas db if I use the MongoDB Compass application. I don’t know if that information is helpful but yeah, thought I would share it.If you need me to, I can send you a username & password that you can use to attempt to connect to the Atlas instance.Thanks again for your help.\nScreen Shot 2022-07-11 at 7.56.33 am800×644 47.2 KB\n", "username": "Joseph_Magara" }, { "code": "Failed looking up TXT record for host ****.mongodb.netFailed looking up TXT record for host ****.mongodb.netFailed looking up TXT record for host trellis-uat.tamwc.mongodb.net", "text": "We were not aware that you could connect (Compass or mongosh) from the same machine with the same connection string. Your original post did not mentioned that. This important fact rules out any DNS issues.My question now returns to your original post where you mentioned:I keep getting this exception:\nFailed looking up TXT record for host ****.mongodb.net .Are the stars part of the error message or is the real cluster name is there? In other words is the error message really:Failed looking up TXT record for host ****.mongodb.netORFailed looking up TXT record for host trellis-uat.tamwc.mongodb.netIf IntelliJ is printing starts then it is a configuration issue and your code does not read correctly the current cluster name from your configuration. May be you do not start IntelliJ from the top directory of your project. 
May be the configuration file has the starts and you are supposed to changed it.", "username": "steevej" }, { "code": "", "text": "Hi steevej, thanks again for your response.Regarding the question you’d askedAre the stars part of the error message or is the real cluster name is there? In other words is the error message really:the stars are not part of the error message. I put them in when making this post because I thought if there is a generic fix that doesn’t require me exposing the cluster name then I would go for that but as our discussion has progressed, I see that I was wrong to do that and that details are required to fix this problem.Here is a photo of the error messge I’m getting (top exception)\n\nScreen Shot 2022-07-13 at 7.24.02 am2044×1154 287 KB\nHere is the underlying cause (underlying exception that is caught and then throws the one above )\n\nScreen Shot 2022-07-13 at 7.24.16 am2008×1068 255 KB\nThe exception underlying the one above: (root exception that is caught and then throws the one above )\n\nScreen Shot 2022-07-13 at 7.24.29 am2017×1165 272 KB\nFYI:\nThose screenshots are from sentry.io which is what I\"m using to capture my exceptions. For some reason, intellij IDE doesn’t seem to have a good log that shows the exceptions when they are thrown.", "username": "Joseph_Magara" }, { "code": "", "text": "I have no clue.May be some package are outdated.I would try without kmongo.", "username": "steevej" }, { "code": "", "text": "Alright, I’ll give that a go", "username": "Joseph_Magara" }, { "code": "ps -ef | grep tomcat", "text": "[For the sake of anyone who runs into this issue in the future]So I believe I’ve found the root issue. It seems that this issue is caused by something going wrong in the tomcat server. Potentially it is not being started when the computer boots. I found the approximate time when this issue first emerged; it coincided with a mac OS upgrade. It may have been that something in the change prevented the Tomcat server from starting on boot.Checking the status of the Tomcat server seems to have resolved the issue.You can check the status of the Tomcat server by running:ps -ef | grep tomcatAnother thing you should do is switch the JDK you’re using to the latest OpenJDK available.The combination of doing those things seems to have resolved the issue for me.This StackOverflow post was particularly helpful for me when debugging the problem.", "username": "Joseph_Magara" }, { "code": "", "text": "Please note that I also verified that this issue is not caused by a specific local environment, in other words, it is occurring on multiple macs running OS 12.4+. I also tried connecting to mongo via Android Studio and via Intellij (I created two different projects - one for Android & one in Intellij) and in both instances, it failed to connect but instead threw the same error. This would seem to indicate that the root issue is not because of the configuration of one particular computer or one particular program (Android Studio/Intellij).It points to the issue being something caused by the OS as both computers that were used to test this were Macs running OS12.4+, thus leading me to conclude that something in the recent MacOS maybe causing this. Changes made may have impacted the running of the tomcat server.", "username": "Joseph_Magara" }, { "code": "", "text": "Had the same issue. I could connect through mongo shell, but not with kmongo in intellij. Also, I swithced from windows to mac device. Changing the SDK version to 1.8 seems to fix the issue. 
File → Project Structure → Project Settings → Project then download JDK and change SDK", "username": "Notte_Puzzle" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to look up TXT record for host ****.mongodb.net from IntelliJ
2022-07-09T22:54:30.453Z
Unable to look up TXT record for host ****.mongodb.net from IntelliJ
6,084
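Since the failure here is specifically a TXT lookup, one way to separate a DNS problem from a driver or IDE problem is to resolve the SRV/TXT records directly. A hypothetical Node.js check, using the hostname from the thread:

```javascript
// Verify that the DNS records behind a mongodb+srv URI actually resolve.
// If these calls fail, the problem is DNS/network, not the driver.
const dns = require("dns");

dns.resolveTxt("trellis-uat.tamwc.mongodb.net", (err, records) => {
  if (err) return console.error("TXT lookup failed:", err.code);
  console.log("TXT:", records);
});

dns.resolveSrv("_mongodb._tcp.trellis-uat.tamwc.mongodb.net", (err, records) => {
  if (err) return console.error("SRV lookup failed:", err.code);
  console.log("SRV hosts:", records.map(r => r.name));
});
```

If both lookups succeed from a plain Node process but fail from the application's runtime, that points at the runtime environment (as it eventually did in this thread) rather than at DNS itself.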
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 6.0 is now live and ready to download! Key highlights with this release include time series enhancements, new functionality and performance improvements to change streams and the introduction of Queryable Encryption (preview). For more on the new features and enhancements, be sure to check out the 7 big reasons to upgrade to MongoDB 6.0.You can also review the release notes to learn more about the new features as well as the upgrade procedures and instructions on how to report an issue.Last but not least, we would like to acknowledge the following community members who have contributed to this release:akos kristo, Alon Horev, Ankit Shah, Ayhan APAYDIN, Casey Carter, Chris Kramer, Daniel Hegener, David Holland, deyukong, Elliot Metsger, Halvor Strand, hung hoang, Igor Canadi, Igor Solodovnikov, Jack Park, Jackson Dagger, Jan Votava, Jiang Li, Jing Wu, jing xu, Kent Green, Martin Müller, Maxim Zasorin, Michael Tartre, Nehal J Wani, Paran Lee, Patrick Dawson, Paulo Pereira, Radosław Piliszek, Ruben Herold, Ryan Schmidt, Tejaswi Nadahalli, Tema G., Tianon Gravi, Tom Scott, Tomas Mozes, Tommy Lee, VINICIUS GRIPPA, vinllen chen, William Deegan, Yingdong YuanMongoDB 6.0 Release Notes | Changelog | Downloads– The MongoDB Team", "username": "Aaron_Morand" }, { "code": "", "text": "akos kristo, Alon Horev, Ankit Shah, Ayhan APAYDIN, Casey Carter, Chris Kramer, Daniel Hegener, David Holland, deyukong, Elliot Metsger, Halvor Strand, hung hoang, Igor Canadi, Igor Solodovnikov, Jack Park, Jackson Dagger, Jan Votava, Jiang Li, Jing Wu, jing xu, Kent Green, Martin Müller, Maxim Zasorin, Michael Tartre, Nehal J Wani, Paran Lee, Patrick Dawson, Paulo Pereira, Radosław Piliszek, Ruben Herold, Ryan Schmidt, Tejaswi Nadahalli, Tema G., Tianon Gravi, Tom Scott, Tomas Mozes, Tommy Lee, VINICIUS GRIPPA, vinllen chen, William Deegan, Yingdong YuanWhere can I find information regarding the differences between the community and enterprise edition features for 6.0?", "username": "Jake_Probst" }, { "code": "", "text": "Where can I find information regarding the differences between the community and enterprise edition features for 6.0?This datasheet calls what’s included in Enterprise. This was last updated in 2021based off copyright, but it should give you what you’re looking for.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 6.0.0 is released
2022-07-19T17:20:21.253Z
MongoDB 6.0.0 is released
4,255
null
[ "java" ]
[ { "code": "./gradlew :libs:mongo-reader:createObfuscatedExecution failed for task ':libs:mongo-reader:createObfuscated'.\n> java.io.IOException: Can't read [/home/user/.gradle/caches/modules-2/files-2.1/org.mongodb/bson-record-codec/4.7.1/3b2bd7bc4908c4e6a19143e2f4c7ca5df8ed69cf/bson-record-codec-4.7.1.jar] (Can't process class [org/bson/codecs/record/RecordCodec$ComponentModel.class] (Unsupported version number [61.0] (maximum 57.0, Java 13)))\nactions/setup-java@v1, with: java-version: 114.5.14.6.X4.7.X", "text": "I noticed that you experienced a similar issue like I am experiencing now, see this ticket.You fixed this in 4.6.1 (bug occurred in 4.6.0), but I am still experiencing the issue with 4.6.0, 4.6.1 and 4.7.1. Only when I downgrade to 4.5.1 I can run our obfuscation again.Unfortunatelly, I cannot share the code of our mongo-reader library, but the command I’m executing to reproduce the issue is: ./gradlew :libs:mongo-reader:createObfuscatedThe error is:I’m running this with Java 11, both locally and on our CI (Github Actions) where we only install Java 11. actions/setup-java@v1, with: java-version: 11.I hope this information helps, if you need any further details, please let me know.\nI hope this is no mistake on my side, but it feels like a breaking change when upgrading from 4.5.1 to 4.6.X/4.7.X so this could actually be a bug.", "username": "hb0" }, { "code": "", "text": "Sorry to hear you are having trouble. Under normal circumstances the bson-record-codec dependency (which is the one causing this issue) will never be loaded by an application unless it is actually using Java records (which already implies Java 17), so it doesn’t cause problems for the great majority of applications.Can you try excluding the bson-record-codec dependency from your build using Gradle dependency management features? See here for details.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "org.mongodb", "text": "org.mongodbHi Jeffrey, yes, this does the trick. Thanks for pointing out the workaround.", "username": "hb0" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Version 4.6.1 and 4.7.1 not compatible with Java 11
2022-08-05T15:08:00.485Z
Version 4.6.1 and 4.7.1 not compatible with Java 11
3,640
null
[ "aggregation", "queries" ]
[ { "code": "Topdocument: {\n groups: {\n subdoc1: {\n name: 'my name'\n },\n subdocXyz: {\n name: 'my name'\n }\n ...\n }\n}\ndb.getCollection('mycollection').aggregate([\n {\n $project: {\n dynamicKeys: { $objectToArray: \"$groups\" }\n }\n },\n {\n $match: {\n 'dynamicKeys': {\n $elemMatch: {\n \"v.name\": /my group name/\n }\n }\n }\n },\n {\n $project: {\n $filter: {\n input: \"$dynamicKeys\",\n as: \"dk\",\n cond: {\n $eq: [ \"$$dk.name\", \"my group name\" ]\n }\n }\n }\n },\n {\n $limit: 3\n }\n])\n", "text": "Hi\nI’m trying to query on a subdocument field and return only matching subdocuments. I’m not too worried about the structure of the returned data, array or not for example.Data looks like this where items in groups can have any key:Query so far that does not work, giving error: “FieldPath field names may not start with ‘$’”", "username": "I_Brewis" }, { "code": "{\n $project: {\n dynamicKeys: { $objectToArray: \"$groups\" }\n }\n }\n$project: {\n filtered_dynamic_keys : { $filter: {\n input: \"$dynamicKeys\",\n as: \"dk\",\n cond: {\n $eq: [ \"$dk.name\", \"my group name\" ]\n }\n } }\n }\n", "text": "You should implement your dynamic keyed sub-documents using the attribute pattern.What you are doing inis essential transforming your non-attribute-patterned sub-documents into an attribute-patterned sub-documents.The attribute pattern will save you processing time doing this transformation, it will allow indexing of the dynamic keys and simplify your queries.As for the errorFieldPath field names may not start with ‘$’the cause is that your last $project had not the correct syntax. You need a path name where you have $filter. Such as:", "username": "steevej" }, { "code": "\"$$dk.name\"\"$$dk.v.name\"", "text": "Hi @steevej thanks for the answer! I’m not sure why but the $eq expression, I think 2 $'s are needed, eg:\n\"$$dk.name\"\nAlso this needs to actually reference the values so needs to be:\n\"$$dk.v.name\"", "username": "I_Brewis" } ]
Query dynamic keyed subdocument field
2022-08-03T11:15:50.913Z
Query dynamic keyed subdocument field
3,050
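Putting the thread's corrections together (the double `$$` and the `.v` path for the value side of the `{k, v}` pairs), a sketch of the working pipeline; the collection and group name are the ones from the question:

```javascript
db.mycollection.aggregate([
  { $project: { dynamicKeys: { $objectToArray: "$groups" } } },
  { $match: { dynamicKeys: { $elemMatch: { "v.name": "my group name" } } } },
  { $project: {
      filteredDynamicKeys: {
        $filter: {
          input: "$dynamicKeys",
          as: "dk",
          // note: $$ for the $filter variable, and .v to reach the value
          cond: { $eq: ["$$dk.v.name", "my group name"] }
        }
      }
  } },
  { $limit: 3 }
])
```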
null
[ "dot-net" ]
[ { "code": "", "text": "Can someone tell me when will MAUI Realm relesed?\n1-2 month or more?", "username": "Gyarfas_Bence" }, { "code": "", "text": "What exactly are you looking for when you say MAUI Realm? The current version of the SDK should work with MAUI as its binding engine is based on INotifyPropertyChanged, which we integrate with already. So the base functionality should be there. There are certain features of MAUI which we don’t support yet - such as hot reload and compiled two-way automatic bindings. We hope to support them for MAUI GA, but we can’t make hard promises as the last time I tried hot reload was horribly broken, which means I don’t even know what it would take to support it. So I know it’s a super vague answer, but we’ll try to add support for features as soon as they actually work with the strong desire to have a fully fleshed-out product by GA.", "username": "nirinchev" }, { "code": "", "text": "This is good news that you do plan to have Realm fully support .Net MAUI. I would love to really start upgrading my XF Realm project I’ve worked on for a number of years, but trying to add the existing Realm to MAUI causes the MacCatalyst dependencies to flag an error. (This may be because one or more of Realm’s dependencies cannot be supported.) Therefore I cannot even really begin until that is cured. Hope compatibility comes sooner rather than later, but I can imagine how much work that will be. Thank you for your kind efforts.", "username": "David_Pressman" }, { "code": "", "text": "Ah, I see. Unfortunately, we don’t support Catalyst yet. You can only deploy on one of our supported targets listed here.", "username": "nirinchev" }, { "code": "", "text": "Any progress on supporting MAUI more fully? MAUI is now officially released although its Visual Studio tools on both Windows and Mac remain in preview. Interestingly, Realm works OK in iOS when run from Visual Studio on Windows, but I get the same type mismatch error in Visual Studio for Mac trying to use Realm in iOS as I do in trying in Catalyst. If you have concrete plans to soon support MAUI fully, GREAT! If not, I may have to start examining alternatives. Nonetheless, I do appreciate any efforts you make in this area. Thank you.", "username": "David_Pressman" }, { "code": "", "text": "MAUI should be supported on both VS for Windows and Mac - we tested it with several projects about a month ago, but it’s possible something in the new tooling broke and I’ll need to re-test. If you have an iOS project you can share that is not working on VS for Mac, I’d be happy to take it for a spin.Regarding Catalyst - we still don’t support it, but definitely plan to in the near future.", "username": "nirinchev" }, { "code": "", "text": "Your Github site indicates MacCatalyst is supported now; is it possible to get a sample project demonstrating that? In my app trying to create a Ream database still throws an error even though the “caught” exception is null. Same code works in iOS, so should be valid. A working sample might clarify what I am doing wrong.", "username": "David_Pressman" }, { "code": "", "text": "I’m also having issues trying to get a MAUI app working under Catalyst. Using the MAUI template I’ve hooked in simple platform behaviours for both windows and Catalyst. Windows works but Catalyst throws ‘The type initializer for ‘Realms.SharedRealmHandle’ threw an exception.’ error. 
This happens in my class constructor but if I move the code into a normal method, it throws a null exception.@David_Pressman did you manage to sort your problem?", "username": "Craig_Kleinig" }, { "code": "", "text": "Unfortunately, no. I certainly hope Realm will supply a working MacCatalyst sample as I am at a loss as to how to proceed otherwise.\nDavid Pressman", "username": "David_Pressman" }, { "code": "", "text": "Hey, sorry for the delay here - Catalyst has been implemented, but not released yet. We’re working through some regressions in Github Actions (our CI) and hope to do a release at some point this week.", "username": "nirinchev" }, { "code": "", "text": "Hey folks, we just published 10.15, which contains the changes we made to support Catalyst. Can you give it a go and report any problems you encounter?", "username": "nirinchev" }, { "code": "", "text": "Wow! Just created 7 Realm Databases in MacCatalyst! Will let you know if I encounter usage issues, but finally the databases exist. A hearty THANK YOU!", "username": "David_Pressman" } ]
MAUI estimated time to arrival?
2022-01-03T18:45:54.533Z
MAUI estimated time to arrival?
4,955
null
[ "data-modeling" ]
[ { "code": "manufacturing_asset_idmaid{\n \"manufacturing_asset_id\": 14\n}\n{\n \"maid\": 14\n}\n", "text": "I saw this stack overflow post about data being stored in BSON format, and not JSON format.This makes me think, if you have hundreds of thousand of documents, is it a good idea to for example shorten the field name manufacturing_asset_id to maid to save space?", "username": "Simen_Wiik" }, { "code": "_id", "text": "After a bit more searching, I found the section in the documentation about storage optimization for small documents.As for the _id trick, it seems that depends on whether the data you store has a property that can be uniquely identified, such as timestamps or other IDs that have meaning within some knowledge domain.", "username": "Simen_Wiik" }, { "code": "", "text": "I was considering using short/minified keys many years ago, when MongoDB used the mmap engine without compression. Nowadays most deployments (I believe) use data compression so “verbose” keys should not translate into larger storage use. There will be some overhead also in CPU processing, caching and network transfer, but I think key minifying is not worth it for the hassle in application code and in debugging use cases.", "username": "Petr_Messner" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Do short key names actually improve storage efficiency?
2022-07-02T16:14:46.731Z
Do short key names actually improve storage efficiency?
2,274
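If you would rather measure than guess, the on-disk effect of key length is easy to check empirically once compression is in play. A mongosh sketch with illustrative collection names (assuming you load the same documents into both, once with long keys and once with minified keys):

```javascript
// Compare logical vs compressed on-disk size of two test collections.
for (const name of ["longKeys", "shortKeys"]) {
  const s = db.getCollection(name).stats();
  print(`${name}: uncompressed ${s.size} bytes, on disk ${s.storageSize} bytes`);
}
```

With WiredTiger's block compression, repeated verbose keys compress very well, which is why the gap between the two collections is usually much smaller than the raw BSON sizes suggest.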
null
[ "python" ]
[ { "code": "", "text": "HelloI do a transfer of a python script from a server to another and i get this error:ModuleNotFoundError: No module named ‘pymongo.server_api’Pymongo is installedDo you know why i got that?", "username": "Zelkoa_Network" }, { "code": "", "text": "Is it the same pymongo version and MongoDB version? The server_api is a newer feature with v5.0 and new in pymongo 3.12 so maybe you need to update?https://pymongo.readthedocs.io/en/stable/api/pymongo/server_api.html", "username": "tapiocaPENGUIN" }, { "code": "", "text": "Okay thanks men !\nIts strange because this pymongo was installed yesterday lmao", "username": "Zelkoa_Network" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
PyMongo error with server api
2022-08-04T13:51:37.402Z
PyMongo error with server api
2,926
null
[]
[ { "code": "", "text": "Hi, guys.\nI have tried to install the lower version (specially 3.4) of mongo-community by homebrew on my Mac.\nbut there is no mongo-community v3.4 in brew list and 4.2 is minimum version.\nPlease let me know if any solution.Thanks,\nKelvin", "username": "Kelvin_Rivero" }, { "code": "", "text": "Hello @Kelvin_Rivero and welcome to the MongoDB community.MongoDB 3.4 is old and had an end of life date of January 2020. I would caution against using this version, even as part of a test platform.If you really want to use MongoDB 3.4 you can still find that on the MongoDB downloads page. Just choose 3.4.24 from the version dropdown:\nimage2730×1084 267 KB\nIf you’re looking for a specific verison of 3.4 that is not the most recent, you can click on the Archived releases link on that page and then install any version clear back to 2.6.", "username": "Doug_Duncan" }, { "code": "", "text": "Hello, Doug_Duncan.\nThanks for your fast reply.\nI’ve been using v4.2 but the project what I am working asks for v3.4 or below.\nI installed v4.2 using homebrew and tried to install v3.4 but brew doesn’t support that version anymore.\nBy googling and stackoverflow, I tried to find the way to install the lower version of mongo manually but didn’t yet find the good solution.\nWould you give me the guide for that?All the best,\nKelvin", "username": "Kelvin_Rivero" }, { "code": "mongomongosh", "text": "Would you give me the guide for that?MacOS installation from tarball for MongoDB 3.4. This will explain the manual steps for installing MongoDB 3.4 on your Mac.One thing to note here is that you will have to use the older mongo shell to interact with this version of the database as the mongosh shell, and recent versions of Compass, requires MongoDB 3.6 or newer. If using any of the MongoDB drivers, I will assume you need to find an older version of the driver as well.Again, if this is for a production environment, I would heavily push to be able to use a newer version of MongoDB.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Installation the lower version of mongo-community on mac
2022-08-04T17:09:55.114Z
Installation the lower version of mongo-community on mac
3,329
null
[]
[ { "code": "", "text": "I’m trying to download backup from mongo atlas via download link. but I have no idea how about pricing for that action .In document only say thatWhen restoring a cluster using a manual download via HTTPS, Atlas also charges for each hour that the download link remains active. To contact MongoDB Support for more information, click Support at the top of any pageDue to in app support is not working for me ( js open intercom is broken ) , So anybody know about this kind of information ? I’m really appreciate your help .Thanks", "username": "Trung_Nguyen_Quang" }, { "code": "", "text": "Hi @Trung_Nguyen_Quang, Thank you for reaching out.Pricing for Atlas Snapshot downloads varies based on a couple of factors so there is not a single price point for all of our customers as it is too dynamic.There are two main factors that affect the pricing. The size of the data is one big factor (smaller amount of data, less to download) and the other major factor is how fast your network can download data.The slower your network, the longer it takes to download the snapshot and the more time we have to keep the download machinery up and running. I normally suggest our Atlas users to download one snapshot to get a better understanding of how long the download takes and then you can also see the download line item in your invoice after that to see the exact price for your snapshots (again this can vary even with the same amount of data due to network fluctuations).", "username": "Evin_Roesle" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Download data backup pricing
2022-08-05T09:12:40.557Z
Download data backup pricing
1,540
null
[]
[ { "code": "rs.reconfigdb.adminCommand({replSetReconfig:})\"errmsg\" : \"BSON field 'version' value must be < 2147483648, actual value '2147579250'\",\nterm", "text": "I found that rs.reconfig is different from db.adminCommand({replSetReconfig:})when doing the last I got an errorBut it work using rs.reconfig.Do you know if we can reset the version to 1. I understand rs.reconfig add a term field but not db.runCommand …any insights are more than welcome.Thanks", "username": "Jean-Baptiste_PIN" }, { "code": "db.adminCommand({replSerReconfig: {...}, force: true}){\n\t\"ok\" : 0,\n\t\"errmsg\" : \"BSON field 'version' value must be < 2147483648, actual value '2147529374'\",\n\t\"code\" : 51024,\n\t\"codeName\" : \"Location51024\",\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1659450614, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"LxJSIFgxTpIuQhwh+L1jRnFBzi8=\"),\n\t\t\t\"keyId\" : NumberLong(\"7124390870412951557\")\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1659450614, 1)\n}\n", "text": "When using db.adminCommand({replSerReconfig: {...}, force: true}) I got an error whatever version number I put in.apparently force option will generate automatically an higher number but it seems that is not capped to a correct value ??Any help is more than welcome ", "username": "Jean-Baptiste_PIN" }, { "code": "", "text": "@Stennie_X Thanks for the moderation. Actually I though the title was no more relevant with my last post and will not help people to find the correct issue with force: true parameter. Could we update the title of this thread with the other title ? The real issue is when using force attribute not really reseting version number of the replSetConfig. Thank you.", "username": "Jean-Baptiste_PIN" }, { "code": "forceforceforceforceforce", "text": "Hi @Jean-Baptiste_PIN,I updated the title as requested.Can you provide more information to help reproduce this issue:It would also be helpful to know about a bit more about your use case for using the force option.Please note that this option is only intended for extreme scenarios where a majority of replica set members are unavailable:The force option forces a new configuration onto the member. Use this procedure only to recover from catastrophic interruptions. Do not use force every time you reconfigure. Also, do not use the force option in any automatic scripts and do not use force when there is still a primary.Regards,\nStennie", "username": "Stennie_X" }, { "code": "> db.version()\n5.0.10\n> rs.initiate()\n{\n\t\"info2\" : \"no configuration specified. 
Using a default configuration for the set\",\n\t\"me\" : \"localhost:27017\",\n\t\"ok\" : 1\n}\nrs0:SECONDARY> db.adminCommand({replSetReconfig: { \"_id\" : \"rs0\", \"version\" : 2147480329, \"term\" : 1, \"members\" : [ { \"_id\" : 0, \"host\" : \"localhost:27017\", \"arbiterOnly\" : false, \"buildIndexes\" : true, \"hidden\" : false, \"priority\" : 1, \"tags\" : { }, \"secondaryDelaySecs\" : NumberLong(0), \"votes\" : 1 } ], \"protocolVersion\" : NumberLong(1), \"writeConcernMajorityJournalDefault\" : true, \"settings\" : { \"chainingAllowed\" : true, \"heartbeatIntervalMillis\" : 2000, \"heartbeatTimeoutSecs\" : 10, \"electionTimeoutMillis\" : 10000, \"catchUpTimeoutMillis\" : -1, \"catchUpTakeoverDelayMillis\" : 30000, \"getLastErrorModes\" : { }, \"getLastErrorDefaults\" : { \"w\" : 1, \"wtimeout\" : 0 }, \"replicaSetId\" : ObjectId(\"62ebb14889135fc6140d5283\") } }, force: true})\n{\n\t\"ok\" : 0,\n\t\"errmsg\" : \"BSON field 'version' value must be < 2147483648, actual value '2147497198'\",\n\t\"code\" : 51024,\n\t\"codeName\" : \"Location51024\",\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1659613988, 15),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1659613988, 15)\n}\n", "text": "Here is an update with a fresh install mongodb on docker:latest (v5.0.10)", "username": "Jean-Baptiste_PIN" }, { "code": "docker run -p27017:27017 --name mongo -d mongo /usr/bin/mongod --replSet rs0\ndocker exec -ti mongo bash\n", "text": "Here are docker command I use:", "username": "Jean-Baptiste_PIN" }, { "code": "\"version\" : 2147480329forceforce : true2147480329force", "text": "\"version\" : 2147480329Hi @Jean-Baptiste_PIN,Thanks for the extra info.The force option randomly increments the version number to try to avoid conflicts (per Reconfigure a Replica Set with Unavailable Members):When you use force : true , the version number in the replica set configuration increases significantly, by tens or hundreds of thousands. This is normal and designed to prevent set version collisions if you accidentally force re-configurations on both sides of a network partition and then the network partitioning ends.It looks like that there may be a missing check for a valid range of version value, but there must be something else going amiss if your starting config version is 2147480329. Can you provide some more context on this version number – was that the result of a previous forced increment (or repeated increments) or a manually provided value?As noted earlier, the use case for force reconfig is for recovery from catastrophic issues where a majority of your replica set members are unavailable. This option should be used relatively rarely (if at all) in the lifetime of a replica set.Regards,\nStennie", "username": "Stennie_X" }, { "code": "force", "text": "I would be curious to know how the version number got to be so high as well. As Stennie stated, the documentation states that the version number will go up by 10’s or 100’s of thousands when using the force option of a reconfig, but even so you’d have to do that over 21,000 times (if the version changed by 100,000 each time). That’s a lot of reconfiguration.There are ways to reset your replica set version without losing your data. Doing this however is a tricky proposition and requires care so as to not screw up your database. 
I would recommend doing this on a database storing data that you care about only after thoroughly testing it on test systems and making sure you have the process down.", "username": "Doug_Duncan" }, { "code": "db.adminCommand({replSetReconfig:...})termrs.reconfig", "text": "@Stennie_X Yes I do agree force should not be used.\n@Doug_Duncan I actually use a kubernetes operator that update replicaSet config using this parameter and I think it did a lot of update to the config to attain the maximum version number.But, as I was able to replicate the issue directly on 5.0.10 I though it was a good idea to report it.I also find that using db.adminCommand({replSetReconfig:...}) seems to not create the term field in the config compare to using rs.reconfig who did it.Also I was thinking, version number will be reinit when updating term but it’s not the case. If you can provide me with more explanation about term/version correlation please.I think I can be able to update the config following something similar to this instruction (https://www.mongodb.com/docs/manual/tutorial/rename-unsharded-replica-set/) ?However, I’m reverting to another operator for sure.Regards", "username": "Jean-Baptiste_PIN" }, { "code": "force", "text": "Interesting. I haven’t played around with any K8s operators for MongoDB for a while, so didn’t realize that they might be forcing a reconfig. Still it seems weird that it would have gotten that high as during my testing of just manually running reconfigs with a force option I was only seeing things go u on the order of 10s of thousands which would take a hundred thousand updates or so to get past the limit.The document you linked to for resolving the issue looks like it could work once you modify the command to update the version number and not the name. Ive not done it that way, hut there are generally multiple ways to do the same thing. Again I would caution to be very careful when doing this and test thoroughly on a test system so you make sure you get the steps right. Also make sure you have a good backup of your database files and have the restore process down.Best of luck.", "username": "Doug_Duncan" } ]
replSetReconfig force generates too high version number
2022-08-02T09:22:51.197Z
replSetReconfig force generates too high version number
3,197
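For anyone debugging a runaway config version, the current value is easy to inspect before attempting any reset. A mongosh sketch (term appears in the config on recent versions, as the thread's own output shows):

```javascript
// Inspect the replica set config version and term before touching anything.
const cfg = rs.conf();
print(`replSet: ${cfg._id}, version: ${cfg.version}, term: ${cfg.term}`);

// Sanity check against the 32-bit signed cap the error message mentions:
const CAP = 2147483648;
print(`headroom before the cap: ${CAP - cfg.version}`);
```

If the headroom is small, every forced reconfig (which can bump the version by tens or hundreds of thousands) brings the set closer to the error seen in this thread.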
null
[ "java", "spring-data-odm" ]
[ { "code": "", "text": "MongoDB supports different field size in collection of documents. How to access different document size using Spring Data or any ORM.For example, In existing application we have Student collection which store documents about Students. Now we need to store some additional fields (optional) in all new Student document (only new documents).My question is, how can we support different structure of same document and access them using Java ORM.", "username": "Jiteshkumar_Patel" }, { "code": "", "text": "give a try for this ODM:Learn how to use Morphia with MongoDB in this quickstart guide.by the way, if an ORM maps missing fields to some null value, that should do the job as well. or you may use different schemas for old and new document structures.", "username": "Yilmaz_Durmaz" } ]
Different size and structure of MondoDB document and access through ORM
2022-08-05T06:05:06.069Z
Different size and structure of MondoDB document and access through ORM
1,900
null
[ "node-js", "connecting" ]
[ { "code": "MongoServerSelectionError: Server selection timed out after 30000 ms\n at Timeout._onTimeout (/home/ec2-user/test/node_modules/mongodb/lib/sdam/topology.js:293:38)\n at listOnTimeout (node:internal/timers:559:17)\n at processTimers (node:internal/timers:502:7) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(1) {\n", "text": "Hello,We are trying to connect DB cluster via node application and getting the following errorwe tried the solutions provided in forum likeAll the above options did not work for us.Node Version we are using is Node v16 and mongo v4.4.\nWe are successfully connecting to the cluster with mongo shell but not with node code.Please suggest a solution to resolve the above mentioned issue.\nThank you", "username": "akshata_mk" }, { "code": "", "text": "Please share the connection strings you used in both cases. I suspect that they are different and that you connect directly to one instance (vs replica set) with mongosh.Because if you have the error ReplicaSetNoPrimary you will have the same error everywhere. The replica set is in the same state whatever client you use to connect.Please share your rs.status().", "username": "steevej" }, { "code": "", "text": "@steevejmongoshell\nmongo --host clusterurl:27017 --username user --password pwdnode way\nmongodb://user:pwd@clusterurl:27017/?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=falsewe also tried this way without replicaSet\nmongodb://user:pwd@clusterurl:27017/?directConnection=true", "username": "akshata_mk" }, { "code": "", "text": "The following does not connect to a replica setmongo --host clusterurl:27017 --username user --password pwdthat is why you can connect. PleasePlease share your rs.status().See https://www.mongodb.com/docs/v4.4/mongo/#connect-to-a-mongodb-replica-set.As far as I knowdirectConnection=trueis only for localhost connection.", "username": "steevej" }, { "code": "rs0:PRIMARY> rs.status()\n{\n \"set\" : \"rs0\",\n \"date\" : ISODate(\"2022-08-03T13:57:43Z\"),\n \"myState\" : 1,\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"clusterurl:27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"self\" : true,\n \"uptime\" : 197\n }\n ],\n \"ok\" : 1,\n \"operationTime\" : Timestamp(1659535063, 1)\n}\nrs0:PRIMARY>\n", "text": "@steevejthis my rs.status() response", "username": "akshata_mk" }, { "code": "", "text": "We now know that your single node replica set is correct you should try to connect your application again.However, since you have a single node replica set, I would try withoutreadPreference=secondaryPreferredas you do not have any and I do not know if it can cause issues.", "username": "steevej" }, { "code": "", "text": "@steevejThank you the issue got resolved.", "username": "akshata_mk" } ]
Unable to connect DB cluster with Mongo node driver
2022-08-03T10:45:15.547Z
Unable to connect DB cluster with Mongo node driver
2,909
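For reference, a hedged Node.js sketch of the connection that worked once readPreference=secondaryPreferred was dropped; the host and credentials are placeholders taken from the thread:

```javascript
// Node.js driver connection to a single-node replica set (placeholders).
const { MongoClient } = require("mongodb");

const uri = "mongodb://user:pwd@clusterurl:27017/?replicaSet=rs0&retryWrites=false";
const client = new MongoClient(uri, { serverSelectionTimeoutMS: 10000 });

async function main() {
  await client.connect();
  const ping = await client.db("admin").command({ ping: 1 });
  console.log("connected:", ping.ok === 1);
  await client.close();
}

main().catch(console.error);
```

If server selection still times out here, check that the hostname in the URI matches the host the replica set advertises in rs.status(), since the driver discovers members from the set configuration rather than from the seed list alone.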
null
[ "indexes" ]
[ { "code": "{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"brand\": {\n \"fields\": {\n \"tr\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"stringFacet\"\n }\n ]\n },\n \"type\": \"document\"\n },\n \"qualified\": [\n {\n \"type\": \"numberFacet\"\n },\n {\n \"type\": \"number\"\n }\n ]\n }\n }\n}\n", "text": "I am trying to create facet index in mongodb atlas search when i am playing with indexes on free tier cluster\nit is showing this errorYour index could not be built: Field limit exceeded: 413 > 300i can’t find anything about this limit in any doc\nplease provide suggestionshere is json of my index", "username": "Nirali_88988" }, { "code": "", "text": "Hi @Nirali_88988,I believe the error may be referring to the following limitation (For M0,M2 and M5 tier clusters):This is noted on the Atlas Search M0 (Free Cluster), M2 and M5 Limitations documentation. However, in saying so, please provide the schema or a few sample documents from the collection that the index is being built on that generates this error.Regards,\nJason", "username": "Jason_Tran" }, { "code": "\"mappings.dynamic\":falsefalse{\n \"mappings\": {\n \"dynamic\": false, /// <--- set to false\n \"fields\": {\n \"field1\": {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n \"field2\": {\n \"dynamic\": true,\n \"type\": \"document\"\n }\n }\n }\n}\n", "text": "I was able to reproduce a similar error by creating a document with 502 fields on an M0 tier cluster:Your index could not be built: Field limit exceeded: 502 > 300You can possibly try setting \"mappings.dynamic\":false.I was able to get the index built eventually by turning the Dynamic Mapping to false for the search index and specifying specific fields to index. See the example JSON of this:Although I did notice I had to delete the original index where the error was generated. Perhaps this may not be the case for you.Hope this helps.Regards,\nJason", "username": "Jason_Tran" }, { "code": "\"dynamic\": false", "text": "yes it is working after setting \"dynamic\": falseThanks for help ", "username": "Nirali_88988" }, { "code": "", "text": "Awesome - thanks for updating this post and verifying that the change suggested worked out for you ", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Create index error: Field limit exceeded
2022-08-05T00:56:08.924Z
Create index error: Field limit exceeded
2,948
https://www.mongodb.com/…2_1024x1024.jpeg
[]
[ { "code": "Community Triage Engineer, MongoDBCurriculum Services Engineer, MongoDB", "text": "\nMongoDB Workshop1920×1920 213 KB\nMongoDB is proud to be a software partner of the Break the Barrier Hackathon organized by LDRP Institute of Research and Technology.Get Support: If you are one of the competitors, join the BTB Hackathon Group to get technical support and guidance during the hackathon. Ask any questions or doubt you have and build your applications faster with MongoDB.MongoDB is organizing a workshop for the hackathon participants and you all are welcome to join and be a part of it.:Modeling your application’s schema - is the first thing that comes to your mind when you start planning an application for your Hackathon. Things to Is your app read or write heavy? What data is frequently accessed together? How will your data set grow and scale?In this session, we will discuss the basics of MongoDB and the basics of data modeling using real-world examples. Learn how you can design your application’s schema better and faster with MongoDB.The mission of KSV University, LDRP-ITR, and Break The Barrier is to inspire developers to develop ingenious solutions. We aspire in the virtual world to illuminate the real world. It’s an innovative platform and accelerator. We aim to build a community where the pioneer coders and leading industry partners could meet and collaborate toward future innovations.Featured workshops and mentorship sessions backed by industry specialists, senior developers, and professionals. 14+ workshops are going to be hosted in June-July 2022. Be ready to dive deep into a huge pool of knowledge. Learn MoreWhy Join The Hackathon?Challenges Inspire Developers Uniquely Well For Several Reasons:Be A Part Of Break The Barrier And Let’s Bash The Barrier!\nAasawari2055×2052 374 KB\nCommunity Triage Engineer, MongoDB–\n\nKushagra1616×1751 442 KB\nCurriculum Services Engineer, MongoDB", "username": "Harshit" }, { "code": "", "text": "Hello All,\nThe workshop starts in 15 mins. Join here: https://us02web.zoom.us/j/82500679971?pwd=dHFoYlBNajFVb2IxWWtVRjU0QmxKQT09 ", "username": "Harshit" } ]
Break the Barrier Hackathon: Workshop - Innovate and Build Applications faster with MongoDB!
2022-07-28T18:31:07.745Z
Break the Barrier Hackathon: Workshop - Innovate and Build Applications faster with MongoDB!
3,140
null
[ "aggregation" ]
[ { "code": "", "text": "I am struggling to create a list of documents whose ‘title’ field contains only one word.\nThank you\nB", "username": "Bryan_Castellucci" }, { "code": "$split$size$eqdb.collection.aggregate([\n {\n \"$match\": {\n \"$expr\": {\n \"$eq\": [\n {\n \"$size\": {\n \"$split\": [\n \"$title\",\n \" \"\n ]\n }\n },\n 1\n ]\n }\n }\n }\n])\n", "text": "You can do it like this:Working example", "username": "NeNaD" }, { "code": "", "text": "Once again, thank you, this helps me understand the nested nature of the query.\nBC", "username": "Bryan_Castellucci" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to use '$size' correctly?
2022-08-04T00:14:26.725Z
How to use &lsquo;$size&rsquo; correctly?
1,415
null
[ "aggregation" ]
[ { "code": "", "text": "Where can I find a comprehensive skeleton-list of $operators for aggregation?\nThanks,\nB", "username": "Bryan_Castellucci" }, { "code": "", "text": "Hi,You can check official docs.", "username": "NeNaD" }, { "code": "", "text": "Thank you. Have a wonderful day.\nBC", "username": "Bryan_Castellucci" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How many aggregation operations exist?
2022-08-04T01:09:17.607Z
How many aggregation operations exist?
886
null
[ "python", "connecting", "atlas-cluster" ]
[ { "code": "import pymongo\nclient = pymongo.MongoClient(\"mongodb+srv://<USERNAME>:<USER_PASSWORD>@andreytestmdb.oztib01.mongodb.net/\")\nclient.list_database_names()\nServerSelectionTimeoutError: ac-1epbh6r-shard-00-00.oztib01.mongodb.net:27017: timed out", "text": "Hi!\nIm trying to connect to cloud.mongodb.com database with this code:but i have ServerSelectionTimeoutError: ac-1epbh6r-shard-00-00.oztib01.mongodb.net:27017: timed outI double checked my user password - correct\nI double checked ip list (i have 0.0.0.0/0 (includes your current IP address) and my real IP here)\nAnd i still cant conntect to database (error on client.list_database_names() )python version 3.10.2\npymongo version 4.1.1Im in tilt now, i need help with it", "username": "Andrey_Zubov" }, { "code": "from pymongo import MongoClient\n\nuri = \"mongodb+srv://andreytestmdb.oztib01.mongodb.net/?authSource=%24external&authMechanism=MONGODB-X509&retryWrites=true&w=majority\"\nclient = MongoClient(uri,\n tls=True,\n tlsCertificateKeyFile='<path_to_certificate>')\n\ndb = client['testDB']\ncollection = db['testCol']\ndoc_count = collection.count_documents({})\nprint(doc_count)\n", "text": "I have the same error if i try to connect using certs:", "username": "Andrey_Zubov" }, { "code": "ping/// example\nping cluster0-shard-00-00.ze4xc.mongodb.net\ntelnet27017/// example\ntelnet cluster0-shard-00-00.ze4cx.mongodb.net 27017\nmongosh+srvtlsssltruetlssslfalsetls=falsessl=false", "text": "Hi Andrey_Zubov,Just to clarify, is the connection being attempted from the same machine that has it’s IP on the whitelist? I.e. Not a client within VM (with different network configuration(s)) running on a machine.If so, please try performing the initial basic network connectivity tests and provide the output for the cluster you are having trouble connecting to:Note: You can find the hostname in the metrics page of your clusterAdditionally, I would recommend to review the Troubleshoot Connection Issues documentation and verify some configurations, such as adding the client’s IP (or IP ranges) to the Network Access List. You may also find the following blog post regarding tips for atlas connectivity useful.I would also ask if you can try connecting using mongosh or MongoDB Compass to see if the same error occurs. Please let me know the results of this.Please note, use of the +srv connection string modifier generally automatically sets the tls (or the equivalent ssl ) option to true for the connection. You can override this behavior by explicitly setting the tls (or the equivalent ssl ) option to false with tls=false (or ssl=false ) in the query string. More info on this here.Regards,\nJason Tran", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
ServerSelectionTimeoutError: <CLUSTER_NAME>.mongodb.net:27017: timed out
2022-07-17T09:35:31.014Z
ServerSelectionTimeoutError: &lt;CLUSTER_NAME&gt;.mongodb.net:27017: timed out
1,649
null
[ "security", "configuration" ]
[ { "code": "db.adminCommand({getParameter:1,javascriptProtection:1})\n**{ \"javascriptProtection\" : false}**\n", "text": "HiJust have a quick question from the MongoDB Database server end .\nIf we are running the MongoDB instance with authentication enabled , do we have any vulnerability with running javascriptProtection: false (I see that is the default - because we do not have anything specified on the mongod config by default but I do see thisDoes this impose any risk and can we leave this as is ?", "username": "Vinay_Setlur" }, { "code": "", "text": "Team,\nCan anyone please help on this question? I also have same doubt will that be any risk?", "username": "Jerwin_Roy_Jackson" }, { "code": "javascriptProtectionmongotrue", "text": "Hi @Jerwin_Roy_Jackson and welcome to the forums,The value for javascriptProtection parameter has been changed in MongoDB v3.4+ to be enabled by default. If your MongoDB deployment is on v3.2 (EOL September 2018) or under, I’d recommend to upgrade your deployment version to a more recent version.The feature setting was built to avoid overloading built-in functions in/from mongo shell. It’d be recommended to set the value to true. Please see also MongoDB Security Checklist to view list of security measures that you should implement to protect your MongoDB installation.Regards,\nWan", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does javascriptProtection:false impose any risk?
2021-12-17T19:33:22.476Z
Does javascriptProtection:false impose any risk?
3,680
null
[ "aggregation", "queries", "atlas-search", "text-search" ]
[ { "code": "", "text": "Is there a way to use either the MongoDB Atlas Search or the older $text search in the aggregation pipeline for a relations field? All the examples for both seem be applying to the top level fields of a collection only", "username": "Ezeikel_Pemberton" }, { "code": "$search$searchMeta$lookup$search$searchMeta$lookup", "text": "Hi @Ezeikel_Pemberton - Welcome to the community Can you provide the following information?:This will help clarify what you are trying to achieve and will help us seeing if it may be possible or not.However, please note that starting in MongoDB 6.0, you can specify the Atlas Search $search or $searchMeta stage in the $lookup pipeline to search collections on the Atlas cluster. The $search or the $searchMeta stage must be the first stage inside the $lookup pipeline. More information regarding this here.Regards,\nJason", "username": "Jason_Tran" } ]
Is there a way to use text search on a related field?
2022-07-25T15:10:23.644Z
Is there a way to use text search on a related field?
2,255
null
[]
[ { "code": "{\n \"_id\": \"67448af8-a68b-4d08-8948-2cddca57d708\",\n \"links\": [{\n \"linkType\": \"Org\",\n \"linkPath\": \"0923689a-e009-4d67-8db5-5ba40f840bf3/facd3c31-dbfd-4097-9a27-0d862bb0c8e9\",\n \"status\": \"Activated\",\n \"createdOn\": {\n \"$date\": \"2022-07-26T14:49:38.780Z\"\n },\n \"createdBy\": \"\"\n }],\n \"memberOfGroups\": [],\n \"roles\": [\n \"_id\":\"123\",\n \"sources\":[]\n ]\n}```\n\ni am using below pipeline but not able to update the role sources array.\n\n```db.principals.update({ _id: \"67448af8-a68b-4d08-8948-2cddca57d708\" }, [\n { $set: { \"memberOfGroups\": { $ifNull: [\"$memberOfGroups\", []] } } },\n { $set: { \"roles\": { $ifNull: [\"$roles\", []] } } },\n {\n $set: {\n \"memberOfGroups\": {\n $cond: [\n {\n $ne: [\"$memberOfGroups._id\", \"ba93384d-d18a-4b36-9a24-7d3ebb1619d7\"]\n },\n {\n $concatArrays: [\n \"$memberOfGroups.items\",\n [\n {\n _id: \"ba93384d-d18a-4b36-9a24-7d3ebb1619d7\",\n name: \"test group\"\n }\n ]\n ]\n },\n { $concatArrays: [\"$memberOfGroups.items\", []] }\n ]\n }\n }\n },\n\n {\n $set: {\n \"roles\": {\n $cond: [\n {\n $ne: [\"$roles._id\", \"ba93384d-d18a-4b36-9a24-7d3ebb1619d7\"]\n },\n {\n $concatArrays: [\n \"$roles.items\",\n [\n {\n _id: \"ba93384d-d18a-4b36-9a24-7d3ebb1619d7\",\n name: \"test group\",\n sources: [\n {\n type: \"group\",\n linkType: \"app\",\n groupId: \"ba93384d-d18a-4b36-9a24-7d3ebb1619d7\"\n }\n ]\n }\n ]\n ]\n },\n {}\n ]\n }\n }\n },\n {\n $set: {\n roles: {\n $reduce: {\n input: { $ifNull: [\"$roles\", []] },\n initialValue: { role: { _id: \"0923689a-e009-4d67-8db5-5ba40f840bf3\", name: \"testRole\", sources: [{ type: \"group\", linkType: \"org\" }] }, sources: [{ type: \"group\", linkType: \"org\", groupid: \"0923689a-e009-4d67-8db5-5ba40f840bf3\" }] },\n in: {\n $cond: [{ $eq: [\"$$this.roleid\", \"0923689a-e009-4d67-8db5-5ba40f840bf3\"] },\n {\n $cond: [{ $ne: [\"$$this.sources.groupId\", \"0923689a-e009-4d67-8db5-5ba40f840bf3\"] },\n\n {\n\n $concatArrays: [\"$$this.sources\",\n \"$$value.sources\",\n ]\n\n },\n\n {$concatArrays: [\"$$this.sources\",[]]}\n\n ],\n },\n {}\n ]\n }\n }\n }\n }\n }\n], { \"multi\": true })```\n\nIn the above third stage is not working properly where i am trying to insert role array item source array, please help on this.\n\nThanks,\nShyam Sohane", "text": "Hi,I am trying to update the nested array based on filter condition, and i am struggling to update nested array. please suggest how it can be written.Schema:", "username": "Shyam_Sohane" }, { "code": "{\n _id: '67448af8-a68b-4d08-8948-2cddca57d708',\n links: [\n {\n linkType: 'Org',\n linkPath: '0923689a-e009-4d67-8db5-5ba40f840bf3/facd3c31-dbfd-4097-9a27-0d862bb0c8e9',\n status: 'Activated',\n createdOn: '2022-07-26T14:49:38.780Z',\n createdBy: ''\n }\n ],\n memberOfGroups: [\n {\n _id: 'ba93384d-d18a-4b36-9a24-7d3ebb1619d7',\n name: 'test group'\n }\n ],\n roles: {}\n }\n$setsourcesroles$[<identifier>]", "text": "Hi @Shyam_Sohane,I ran the provided update operation on a test environment and ended up getting the following output document:As I understand it, this is of course not what you are wanting.In the above third stage is not working properly where i am trying to insert role array item source array, please help on this.Can you clarify which part specifically of the update is attempting to achieve what you mentioned in the above and what the issue is when you mean “is not working properly”? From the third $set operator, I cannot see any reference to sources or roles fields. 
However, please correct me if I am wrong here.In saying so, I understand that you have provided the schema already but could you also provide the following information:If it suits your use case, perhaps maybe use of the $[<identifier>] may help.Regards,\nJason", "username": "Jason_Tran" } ]
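As a concrete illustration of the $[<identifier>] suggestion, here is a minimal PyMongo sketch that appends one entry to the sources array of a single matching role. The database name and the matched role _id are assumptions; adapt the filter and payload to the real requirement.

```python
# Append a source to the sources array of the role whose _id matches,
# using a filtered positional operator bound via array_filters.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder
principals = client["mydb"]["principals"]          # database name assumed

result = principals.update_one(
    {"_id": "67448af8-a68b-4d08-8948-2cddca57d708"},
    {"$push": {
        # 'role' is the identifier resolved by array_filters below.
        "roles.$[role].sources": {
            "type": "group",
            "linkType": "app",
            "groupId": "ba93384d-d18a-4b36-9a24-7d3ebb1619d7",
        }
    }},
    array_filters=[{"role._id": "123"}],
)
print(result.modified_count)
```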
How to update the nested array object with update pipeline
2022-07-27T04:52:56.705Z
How to update the nested array object with update pipeline
1,303
null
[ "aggregation", "swift" ]
[ { "code": "var body: some View {\n\t\t\n\t\tNavigationStack(path: $navigationPath) {\n\t\t\tVStack {\n\t\t\t\tList {\n\t\t\t\t\tForEach(neoListGroup.items) { item in\n\t\t\t\t\t\tText(item.title)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t.navigationDestination(for: NeoListGroup.self) { group in\n\t\t\t\t\tAddListView(group: group)\n\t\t\t\t}\n\t\t\t}\n\t\t\t.toolbar {\n\t\t\t\tToolbarItem(placement: .navigationBarTrailing) {\n\t\t\t\t\tButton(action: {\n\t\t\t\t\t\tnavigationPath.append(neoListGroup)\n\t\t\t\t\t}, label: {Image(systemName: \"plus\")})\n\t\t\t\t}\n\t\t\t}\n\t\t\t.navigationTitle(\"My Lists\")\n\t\t}\n }\nstruct AddListView: View {\n\t\t\n\t@ObservedRealmObject var group: NeoListGroup\n\t@State var title = \"Title\"\n\t\n var body: some View {\n\t\tVStack {\n\t\t\tTextField(\"Title\", text: $title)\n\t\t\tButton(\"Send\") {\n\t\t\t\t$group.items.append(NeoList(title: self.title))\n\t\t\t}\n\t\t}\n }\n}\n", "text": "Hi,I have been discovering the RealmSwift SDK these past few days for a personal project. Currently, I am working on an iOS 16.0 app (SwiftUI) with Xcode 14.0 beta 4 and RealmSwift 10.28.2.Right now, I am on a simple use case where I want to let users create lists, which are synced to their account with Realm Sync. I followed the quickstart on SwiftUI guide to manage the authentication flow. So in my models, I have a “NeoList” and “NeoListGroup” that has List (so similar to items/itemsGroup in the guide).For a quick test to see if I can manage CRUD operations, I created first a main view that shows all the created lists by the user (with the new NavigationStack API in iOS 16.0):When tapping on the + sign, the next view (AddListView) is pushed along with the group object so that I can append a new item:The code actually works, but I am getting a warning on Xcode related to a hang risk with one of the thread:[…]/checkouts/realm-core/src/realm/util/interprocess_mutex.cpp:44: warning run: Thread running at QOS_CLASS_USER_INTERACTIVE waiting on a lower QoS thread running at QOS_CLASS_DEFAULT. Investigate ways to avoid priority inversionsThread Performance Checker: Thread running at QOS_CLASS_USER_INTERACTIVE waiting on a lower QoS thread running at QOS_CLASS_DEFAULT. Investigate ways to avoid priority inversionsSo my question is: is there an issue with my code or is that somehow linked to the beta situation with Xcode/iOS 16?Thanks a lot.", "username": "Sonisan" }, { "code": "", "text": "No one with an idea of what’s happening? ", "username": "Sonisan" }, { "code": "", "text": "Did you read Apple Developer Diagnosing Performance Issues Early? Will explain what’s happening, if not entirely why. Possibly a bug in Realm itself?", "username": "Jack_Woehr" }, { "code": "", "text": "Thank you for your reply.\nYes I briefly came across this page. The warning is directly redirecting me to a specific line in the SDK linked to the semaphores, that’s where I’m lost.\nI should maybe file an issue directly on Realm’s repo…", "username": "Sonisan" }, { "code": "", "text": "Yes, I think that’s the way to go, @Sonisan", "username": "Jack_Woehr" }, { "code": "", "text": "When is the warning presented? App start? 
After you hit +?", "username": "Jay" }, { "code": "$group.items.append(...)", "text": "Right after hitting the button ‘Send’ on the second View and thus executing $group.items.append(...)", "username": "Sonisan" }, { "code": "Button(\"Send\") {\n $group.items.append(NeoList(title: self.title))\n}\nButton(\"Send\") {\n let x = $group\n print(x)\n let y = NeoList(title: self.title)\n print(y)\n}\n", "text": "I would suggest some troubleshooting - it may or may not provide any insight but it’s a place to startI would temporarily replace thiswithand inspect the console. Ensure the objects are what you expect.", "username": "Jay" }, { "code": "", "text": "The code itself is working as intended, correct objects are showing up.\nBy the way, I have also opened an issue on the repository: Hang risk - thread priority inversion warning (Xcode) · Issue #7902 · realm/realm-swift · GitHub", "username": "Sonisan" } ]
Hang risk thread warning on Xcode
2022-07-30T15:56:32.647Z
Hang risk thread warning on Xcode
6,133
null
[ "node-js", "crud", "mongoose-odm" ]
[ { "code": "const projectSchema = new mongoose.Schema({\n\tname: String,\n price: Number,\n idCoinMarketCap :Number\n});\nlet listePriceId = {\n priceId: [\n {\n price: 32000,\n id: 1,\n },\n {\n price: 1700,\n id: 2,\n },\n\n {\n price: 3000,\n id: 3,\n },\n\n ]\nexports.configContract = (req, res) => {\n\n\n Project.updateMany(\n {},\n\n {\n $set: {\n \"price\": listePriceId.priceId['$idCoinMarketCap']\n },\n },\n\n (err, response) => {\n if (err) return res.status(500).json({ msg: 'update failed', error: err });\n res.status(200).json({ msg: `document updated`, response: response },\n\n );\n\n });\n\n};\n", "text": "HelloI have model of this typeI would like to change the price of each project with respect to the following variableFor this I used uptate to ManyIt doesn’t work unfortunately", "username": "Mielpops" }, { "code": "updateOnebulkWrite()updateManyidCoinMarketCapexports.configContract = async (req, res) => {\n await Project.bulkWrite([\n {\n updateOne: {\n filter: { idCoinMarketCap: 1 },\n update: { price: 32000 }\n }\n },\n {\n updateOne: {\n filter: { idCoinMarketCap: 2 },\n update: { price: 1700 }\n }\n },\n {\n updateOne: {\n filter: { idCoinMarketCap: 3 },\n update: { price: 3000 }\n }\n }\n ]).then((err, response) => {\n if (err) return res.status(500).json({ msg: 'update failed', error: err });\n res.status(200).json({ msg: `document updated`, response: response });\n });\n};\n", "text": "Hello @Mielpops,For the bulk update you have to use bulkWrite() method, because your every document have own condition/query.You can use something like this way, I have used updateOne inside bulkWrite(), you can use updateMany if there are multiple matches of idCoinMarketCap field,Note: the above code is not tested, prepared from your input.", "username": "turivishal" }, { "code": "", "text": "Hello thank you for your reply\nIn my example I put only 3 values\nbut I’m going to have hundreds", "username": "Mielpops" }, { "code": "bulkWrite()", "text": "You can create a payload same as I have created through the loop and pass it to bulkWrite() method.", "username": "turivishal" }, { "code": "", "text": "Requirements likebut I’m going to have hundredsshould be stated at the beginning since as you have seen the solutions will vary. Luckily, @turivishal’s solution is easily adaptable as he mentioned.An alternative that could be interesting would be:", "username": "steevej" }, { "code": "", "text": "Hellothank you both for your answer, I chose the solution of $lookup with $merge.It works very well, thank you very muchI close the topic", "username": "Mielpops" } ]
Problem referencing field in updateMany
2022-08-01T14:47:19.283Z
Problem referencing field in updateMany
2,923
null
[ "node-js", "python", "mongodb-shell" ]
[ { "code": "{\"t\":{\"$date\":\"2022-08-04T13:11:13.414+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn417\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.0.3.218:55322\",\"client\":\"conn417\",\"doc\":{\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"4.8.1\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.10.118\"},\"platform\":\"Node.js v16.16.0, LE (unified)\",\"version\":\"4.8.1|1.5.4\",\"application\":{\"name\":\"mongosh 1.5.4\"}}}}\n{\"t\":{\"$date\":\"2022-08-04T13:11:13.420+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn417\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":true,\"principalName\":\"root\",\"authenticationDatabase\":\"admin\",\"remote\":\"10.0.3.218:55322\",\"extraInfo\":{}}}\n\n{\"t\":{\"$date\":\"2022-08-04T13:14:18.711+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.0.3.218:40240\",\"uuid\":\"fbc15467-2262-4f82-b563-dab6961e65ed\",\"connectionId\":421,\"connectionCount\":4}}\n{\"t\":{\"$date\":\"2022-08-04T13:14:18.712+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn421\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.0.3.218:40240\",\"client\":\"conn421\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"4.2.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"5.10.118\"},\"platform\":\"CPython 3.10.4.final.0\"}}}\n{\"t\":{\"$date\":\"2022-08-04T13:14:18.713+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn421\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.0.3.218:40240\",\"uuid\":\"fbc15467-2262-4f82-b563-dab6961e65ed\",\"connectionId\":421,\"connectionCount\":3}}\n\n", "text": "We have a simple setup where mongo is running in a very basic configuration on k8s.From another pod in the same namespace I can login to mongodb with mongosh and the mongo logs look like this:When using the exact same URI in PyMongo, the connection succeeds, but never authenticates, so it ends up timing out after trying to find mongo at 127.0.0.1.", "username": "Chris_Reynolds" }, { "code": "mongosh \"mongodb://USER:PW@HOST:27017/DB?authSource=admin\"\nautomate> db.mycollection.find()\n# Works!!!\nIn [1]: import pymongo\n ...: from pymongo import MongoClient\n ...: client = MongoClient('mongodb://USER:PW@HOST:27017/DB?authSource=admin')\n\nIn [2]: db = client.DB\n\nIn [3]: collection = db.mycollection\n\nIn [4]: collection.find_one()\n\n# TIMES OUT WITH...\nServerSelectionTimeoutError: Could not reach any servers in [('127.0.0.1', 27017)]. Replica set is configured with internal hostnames or IPs?, Timeout: 30s, Topology Description: <TopologyDescription id: 62ebd9efefa227b026e38b11, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('127.0.0.1', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('127.0.0.1:27017: [Errno 111] Connection refused')>]>\n", "text": "Mongosh commands:PyMongo", "username": "Chris_Reynolds" }, { "code": "directConnectionimport pymongo\n ...: from pymongo import MongoClient\n ...: client = MongoClient('mongodb://USER:PW@HOST:27017/DB?authSource=admin', directConnection=True)\n", "text": "I was able to complete this connection using the directConnection keyword argument. 
As far as I understand it, it’s because this is a non-replicated MongoDB pod?", "username": "Ben_Hayden" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
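Why directConnection helps here: without it, the driver sees that the server reports itself as a replica-set member and switches to replica-set discovery, following the hostnames the server advertises (in this case 127.0.0.1). A small hedged sketch for inspecting what the server advertises; the URI values are placeholders.

```python
# Print the host list the server advertises to drivers. If it contains
# internal names like 127.0.0.1, replica-set discovery will chase those
# instead of the address you actually connected to.
from pymongo import MongoClient

client = MongoClient("mongodb://USER:PW@HOST:27017/?authSource=admin",
                     directConnection=True)  # talk to this one node only

# 'hello' is the modern command name; on older servers use 'isMaster' instead.
hello = client.admin.command("hello")
print(hello.get("setName"), hello.get("hosts"))
```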
Mongosh works but pymongo fails with same URI
2022-08-04T14:55:56.840Z
Mongosh works but pymongo fails with same URI
2,020
null
[ "aggregation", "indexes", "performance" ]
[ { "code": "{\n _id: \"someAggregateId-someTaskId\",\n aggregateId: \"someAggregateId\",\n taskId: \"someTaskId\",\n aggregate: { \n id: \"someAggregateId\",\n content: \"someContent\"\n }\n},\n{\n _id: \"someAggregateId-anotherTaskId\",\n aggregateId: \"someAggregateId\",\n taskId: \"anotherTaskId\",\n aggregate: {\n id: \"someAggregateId\",\n content: \"someChangedContent\"\n }\n},\n{\n _id: \"anotherAggregateId-someTaskId\",\n aggregateId: \"anotherAggregateId\",\n taskId: \"someTaskId\",\n aggregate: { \n id: \"anotherAggregateId\",\n content: \"anotherContent\"\n }\n},\n{\n _id: \"anotherAggregateId-oneMoreTaskId\",\n aggregateId: \"anotherAggregateId\",\n taskId: \"oneMoreTaskId\",\n aggregate: { \n id: \"anotherAggregateId\",\n content: \"oneMoreContent\"\n }\n}\n[ \"someTaskId\", \"anotherTaskId\" ]\n[\n aggregate: {\n id: \"someAggregateId\",\n content: \"someChangedContent\"\n },\n aggregate: { \n id: \"anotherAggregateId\",\n content: \"anotherContent\"\n }\n]\n[\n {\n '$addFields': {\n '_taskWeight': [\n {\n '_taskId': 'anotherTaskId', \n '_score': 2\n }, {\n '_taskId': 'someTaskId', \n '_score': 1\n }\n ]\n }\n }, {\n '$addFields': {\n '_taskWeight': {\n '$filter': {\n 'input': '$_taskWeight', \n 'as': 'aggregation_item', \n 'cond': {\n '$eq': [\n '$$aggregation_item._taskId', '$taskId'\n ]\n }\n }\n }\n }\n }, {\n '$addFields': {\n '_taskWeight': {\n '$cond': [\n {\n '$eq': [\n {\n '$size': '$_taskWeight'\n }, 1\n ]\n }, {\n '$arrayElemAt': [\n '$_taskWeight', 0\n ]\n }, {\n '_score': -1\n }\n ]\n }\n }\n }, {\n '$addFields': {\n '_taskWeight': '$_taskWeight._score'\n }\n }, {\n '$match': {\n '_taskWeight': {\n '$not': {\n '$eq': -1\n }\n }\n }\n }, {\n '$sort': {\n '_taskWeight': 1\n }\n }, {\n '$group': {\n '_id': '$aggregateId', \n 'aggregate': {\n '$last': '$aggregate'\n }\n }\n }, {\n '$replaceRoot': {\n 'newRoot': '$aggregate'\n }\n }\n]\n", "text": "Hello,I have a an aggregation query that takes minutes when querying ~100k documents.My documents have a structure like this:My goal is to get the latest of each aggregate with a distinct aggregateId where “latest” is defined by a user-supplied list of taskIds. Documents with a taskId not in the supplied list will be ignored (this happens rarely). If the list of taskIds is this:I expect a result of:My aggregation query looks like this:Note that the first $addFields stage is generated on the fly with each taskId given a score based on its order. In reality, there are hundreds of taskIds. I have not been able to create an index that is used by this query and it is running very slowly.I am aware that I could use $switch in the first $addFields to replace all 4 $addFields at the start with only one but sadly, my remote DB doesn’t support $switch yet.Do you see any way to speed this up significantly, e.g. by using indexes or entirely different queries? I would also be able to change the data structure if needed.Thanks in advance.", "username": "MelvinFrohike_N_A" }, { "code": "", "text": "It seems I have messed up the formatting of my post. Is there any way to edit it?", "username": "MelvinFrohike_N_A" } ]
Slow aggregation query using $group
2022-08-04T15:05:37.178Z
Slow aggregation query using $group
1,791
null
[ "swift" ]
[ { "code": "", "text": "Is there a way to know when sync is complete?\nI would like to display a loading indicator until all data hav been synced.\nI am using swift sdk.\nThanks", "username": "Thierry_Bucco" }, { "code": "", "text": "Please add addProgressNotification using syncSession.Thanks,\nSeshu", "username": "Udatha_VenkataSeshai" } ]
How to know when sync is complete?
2022-07-25T14:05:49.897Z
How to know when sync is complete?
1,677
null
[ "queries", "mongoose-odm" ]
[ { "code": "", "text": "Hi,Do i know the reason of this error while collection query db.banks.find()“Field ‘getMore’ must be of type long in: { getMore: 121154538034027.0, collection: \"Banks\", batchSize: 1000, lsid: { id: UUID(\"6dc081c0-f200-4306-ab2c-43570dff0ff3\") }, $db: \"SV-DB\" }”", "username": "khasim_ali1" }, { "code": "", "text": "Hi @khasim_ali1! Which tool are you using to run this command, and which version of that tool? What exactly are you entering?", "username": "Anna_Henningsen" }, { "code": "", "text": "Thanks Anna, Appreciated your quick reply.We are using mongoose npm module from node.js application, we are using below query. Also it’s not occurring always only few times.await banksModel.find();", "username": "khasim_ali1" }, { "code": "", "text": "Mongoose version - 5.9.7\nMongodb enterprise - 4.4.6", "username": "khasim_ali1" }, { "code": "", "text": "mongoose 5.9.7 depends on a fairly old version of the Node.js driver, namely 3.5.5. It seems like changes that have fixed this problem (in particular, test: fix a number of our most notorious flakey tests · mongodb/node-mongodb-native@7d1f547 · GitHub) have been made in the Node.js driver since.Based on that, I feel confident saying that upgrading mongoose to a more recent version (latest 5.x or 6.x) should address this issue.", "username": "Anna_Henningsen" }, { "code": "", "text": "Thank you so much Anna, Will try migrating to new version and let you know if we face any further issues", "username": "khasim_ali1" }, { "code": "", "text": "Hello Anna,We have migrated the mongoose library but still issue is exist,\nMongoose:- 6.4.3\nMongoDB driver:- 4.7.0\nMongoDB enterprise:- 4.4.0", "username": "khasim_ali1" }, { "code": "", "text": "@khasim_ali1 Are you getting the exact same issue with those versions? Do you have a code snippet that reproduces these? Either way, if this is still happening with the 4.7.0 driver, I’d recommend opening a JIRA bug report about this: https://jira.mongodb.org/projects/NODE/issues", "username": "Anna_Henningsen" }, { "code": "", "text": "Yes Anna, It’s still coming for the same normal queryawait banksModel.find();It’s coming sometimes, not able to reproduce always.", "username": "khasim_ali1" }, { "code": "", "text": "How can we resolve this issue?", "username": "khasim_ali1" }, { "code": "", "text": "Hello Anna,Can you please let me know how to resolve this issue?", "username": "khasim_ali1" }, { "code": "", "text": "@khasim_ali1 Did you open a ticket in https://jira.mongodb.org/projects/NODE/issues as suggested above? If so, can you link to it?", "username": "Anna_Henningsen" } ]
Mongodb query failed - field getMore must be of type long
2022-01-19T12:22:01.242Z
Mongodb query failed - field getMore must be of type long
4,617
null
[ "sharding", "mongodb-shell" ]
[ { "code": "", "text": "[root@rmd-mongo-router log]# mongos --config /etc/mongos.conf\nabout to fork child process, waiting until server is ready for connections.\nforked process: 13495\nchild process started successfully, parent exiting\n[root@rmd-mongo-router log]# mongosh --host rmd-mongo-router --port 27017\nCurrent Mongosh Log ID: 62ebad3dbeeef40c419a983f\nConnecting to: mongodb://rmd-mongo-router:27017/?directConnection=true&appName=mongosh+1.5.4\nUsing MongoDB: 6.0.0\nUsing Mongosh: 1.5.4For mongosh info see: https://docs.mongodb.com/mongodb-shell/[direct: mongos] test> admin = db.getSiblingDB(“admin”)\nadmin\n[direct: mongos] test> db.getSiblingDB(“admin”).createUser(\n… {\n… user: “adminreal”,\n… pwd: “mongo_4U”,\n… roles: [ { role: “userAdminAnyDatabase”, db: “admin” } ]\n… }\n… )\nMongoServerError: command createUser requires authentication\n[direct: mongos] test> sh.addShard( “shardreplica01/rmd-mongo-shared-01:27017”)\nMongoServerError: command addShard requires authenticationHow can i create user on mongos server to add shard to sharded cluster ? how can i resolve this problem ?Thank you", "username": "Dai_Nguyen1" }, { "code": "", "text": "mongos uses the data from config servers\nYou have to create the user on config server", "username": "Ramachandra_Tummala" } ]
Error when adding a shard to a cluster with mongos
2022-08-04T11:40:25.647Z
Error when adding a shard to a cluster with mongos
2,454
null
[ "c-driver" ]
[ { "code": "cmakemake#include <mongoc/mongoc.h>\n\nint main() {\n const char *uri_string = \"mongodb://localhost:27017\";\n mongoc_uri_t *uri;\n mongoc_client_t *client;\n mongoc_database_t *database;\n mongoc_collection_t *collection;\n bson_t *command, reply, *insert;\n bson_error_t error;\n char *str;\n bool retval;\n\n /*\n * Required to initialize libmongoc's internals\n */\n mongoc_init ();\n\n\n printf( \"Hello, world!\\n\" );\n\n}\n", "text": "The version of the driver you are trying to build (branch or tag).\n~ Apparently, successfully built. Version 1.22.0Host OS, version, and architecture.\n~ macOS Monterey 12.4, Apple M1C Compiler and version.\n~ Apple clang version 13.1.6 (clang-1316.0.21.2.5)\n~ InstalledDir: /Library/Developer/CommandLineTools/usr/binCompiling gives …193 warnings generated.\nld: library not found for -l/opt/homebrew/Cellar/mongo-c-driver/1.22.0/lib/cmake/mongoc-1.0hello.c …Content of library directory …ls -lh /opt/homebrew/Cellar/mongo-c-driver/1.22.0/lib/cmake/mongoc-1.0\ntotal 40\n-r–r–r-- 1 james admin 1.9K 29 Jun 17:41 mongoc-1.0-config-version.cmake\n-r–r–r-- 1 james admin 125B 29 Jun 17:41 mongoc-1.0-config.cmake\n-r–r–r-- 1 james admin 1.4K 29 Jun 17:41 mongoc-targets-release.cmake\n-r–r–r-- 1 james admin 4.8K 29 Jun 17:41 mongoc-targets.cmakeI was expecting some files with today’s date within the directory.Any ideas? I will be happy to supply further info on request. Thanks in advance for your time.", "username": "James_Wilson" }, { "code": "", "text": "it is either you haven’t fully built the driver or you are using the wrong library folder.\nhave you followed the installation steps and checked the tutorial on libmongoc — libmongoc 1.22.0 ?", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thank you I was not aware of the document. I will study it and see if it will lead to a solution to my problem.", "username": "James_Wilson" }, { "code": "", "text": "Thanks for you help. Problem was that I was specifying the header and include paths manually. As such the library was not being found. Using CMake worked, but produced extra files and directories. Initially, pkg-config compiled but gave hundreds of Warnings from /usr/local/include/stdlib.hI removed Python, Xcode and Command Line Tools. Downloaded the DMG for Command Line Tools and installed. The directory /usr/local is now empty and my theory is that is where the problem arose.", "username": "James_Wilson" }, { "code": "", "text": "I don’t know how to clean up a Mac, but try reinstalling C development environment. it seems scrambled a lot.“CMake” is not actual builder, but a maker for files of other build systems such as “make”. You can still invoke a build command with it, but you need to pay attention to its flags and parameters.then take a close look where it puts those builds. you will need in your mongodb compilation path.I don’t own a Mac that I can help only so far.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "You have been very helpful already !! Thank you so much ", "username": "James_Wilson" }, { "code": "", "text": "have you followed the installation steps and checked the tutorial on libmongoc — libmongoc 1.22.0The above helped the solution to my problem", "username": "James_Wilson" } ]
Can't compile Hello, world! program using installed C driver
2022-07-15T14:11:38.130Z
Can't compile Hello, world! program using installed C driver
3,642