Dataset columns:
image_url: string (113-131 chars)
tags: sequence
discussion: list
title: string (8-254 chars)
created_at: string (24 chars)
fancy_title: string (8-396 chars)
views: int64 (73-422k)
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi guys.I have the same table with the same indexes on Mongodb and PostgreSQL.The table in Mongodb is about 370mb and on PostgreSQL is about 270mb.Why does this happening?Is there an explanation?", "username": "harris" }, { "code": "", "text": "@Pavel_Duchovny Sorry for the disturbing.if you have spare time please check this!Thanks in advance", "username": "harris" }, { "code": "", "text": "Hi @harris,It’s impossible to answer this question without more details.\nWhich versions are you using? Which configurations are you using for both? Anything specific? Which storage engine are you using?\nCan you share that collection publicly so we have a chance to see how it’s designed and if some improvements can be made?\nIs PostgreSQL using any kind of compression or was setup in a way to optimize row compression?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "In addition to @MaBeuLux88 questions I’d also ask how to checked size. Is this on-disk size? In memory size?", "username": "Asya_Kamsky" }, { "code": "db.mongodb2indextimestamp1.totalIndexSize()db.mongodb2indextimestamp1.dataSize()SELECT pg_size_pretty( pg_total_relation_size('oneindextwocolumns') );SELECT pg_size_pretty( pg_indexes_size('oneindextwocolumns') );", "text": "I am storing my data locally on my pc.I have the default configuration on both of them.I am using 4.4 version in mongodb and postgres12.I dont use any kind of compression.The commands i execute for mongodb is\ndb.mongodb2indextimestamp1.totalIndexSize() for index only\ndb.mongodb2indextimestamp1.dataSize() for collection size without indexand in psql i use\nSELECT pg_size_pretty( pg_total_relation_size('oneindextwocolumns') ); for table size with index\nSELECT pg_size_pretty( pg_indexes_size('oneindextwocolumns') ); size of indexes onlyAre these commands equal?", "username": "harris" }, { "code": "totalSize()storageSize()", "text": "totalSize() actually includes index sizes, you want storageSize() to exclude indexes.", "username": "Asya_Kamsky" }, { "code": "dataSizecollstats.size", "text": "dataSize gives me the same result with collstats.size https://docs.mongodb.com/manual/reference/command/collStats/#std-label-collStats-output so i thought its the same thing.", "username": "harris" }, { "code": "db.mongodb2indextimestamp1.stats()StorageSize()sizetotalindexsizetotalSizedataSizetotalSize", "text": "from db.mongodb2indextimestamp1.stats() i get\nStorageSize() 63mb, size 370mb, totalindexsize 36mb, totalSize 99mb\nYes i edited the the above answer.I meant dataSize not totalSize", "username": "harris" }, { "code": "", "text": "I just read that Postgres compresses values > 2K (including json and jsonb) automatically. Maybe its because of that…", "username": "harris" }, { "code": "StorageSize()size", "text": "StorageSize() 63mb, size 370mbWiredTiger also compresses data - that’s why you see total size be almost 1/4th of data size…Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "So whats the collection actual size so i can be able to do the fair comparison in Postgresql?Thanks in advance\nWith respect Harris", "username": "harris" }, { "code": "", "text": "Depends on why you are doing the comparison. If it’s to figure out how much disk to buy it would be storage size, if it’s how much RAM you need it would be … more complicated Keep in mind that the same ratio wouldn’t hold for different collections with different schema and access patterns. 
Also, if you want to optimize the amount of disk space used, MongoDB has other compression options which can compress data further.Asya", "username": "Asya_Kamsky" }, { "code": "pg_total_relationDatasizetotalindexsize", "text": "I want to do a fair compare to psql’s command pg_total_relation.Should i use Datasize command and then add totalindexsize for mongodb?", "username": "harris" }, { "code": "docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:4.4.6 --replSet=test && sleep 4 && docker exec mongo mongo --eval \"rs.initiate();\"\ndocker run --rm --name postgres -p 5432:5432 -e POSTGRES_USER=max -e POSTGRES_PASSWORD=password -d postgres:13.3\n{\n\t\"_id\" : 0,\n\t\"string\" : \"Modi adipisci velit ipsum amet non ut labore aliquam voluptatem quisquam sit magnam porro velit consectetur magnam quisquam velit quisquam sed aliquam labore consectetur voluptatem adipisci voluptatem dolorem ipsum ut non dolorem quaerat est amet tempora dolore modi aliquam adipisci non sed neque velit ipsum porro sed velit aliquam dolore eius quaerat neque tempora ut velit neque velit neque magnam labore labore eius numquam labore dolorem dolore ipsum neque est sit voluptatem dolorem quiquia voluptatem consectetur ut quaerat eius ut aliquam magnam voluptatem quiquia.\",\n\t\"int\" : 857663\n}\nfrom faker import Faker\nfrom lorem.text import TextLorem\nfrom postgres import Postgres\nfrom pymongo import MongoClient\n\nfake = Faker()\n\nlorem = TextLorem(srange=(5, 750)) # nb words min & max\nLOOP = 500\nNB_DOCS_PER_LOOP = 1000\n\n\ndef rand_docs(id_start, nb):\n return [{\n '_id': id_start + i,\n 'string': lorem.sentence()[0:5000], # 5000 maximum characters for Postgres...\n 'int': fake.pyint(min_value=0, max_value=999999)\n } for i in range(nb)]\n\n\nif __name__ == '__main__':\n client = MongoClient()\n pg = Postgres(url=\"postgresql://max:password@localhost/max\")\n db = client.get_database('max')\n coll = db.get_collection('coll')\n coll.drop()\n coll.create_index(\"int\")\n pg.run(\"DROP TABLE my_table\")\n pg.run(\"CREATE TABLE my_table(pk SERIAL PRIMARY KEY, string VARCHAR(5000), int integer)\")\n pg.run(\"CREATE INDEX my_index ON my_table(int)\")\n\n for loop in range(LOOP):\n print(f'Loop {loop + 1}/{LOOP} => Inserted {NB_DOCS_PER_LOOP} docs in MDB & rows in PGS.')\n id_start = loop * NB_DOCS_PER_LOOP\n docs = rand_docs(id_start, NB_DOCS_PER_LOOP)\n coll.insert_many(docs)\n pg.run(\"INSERT INTO my_table(string,int) VALUES \" + ','.join('(\\'' + i.get('string') + '\\',' + str(i.get('int')) + ')' for i in docs))\n\n mdb_stats = db.command(\"collstats\", \"coll\")\n print(\"MongoDB\")\n print(\"Nb docs in MDB: \" + str(mdb_stats.get('count')))\n print(\"Storage size : \" + str(mdb_stats.get('storageSize') / 1000000) + ' MB')\n print(\"Indexes size : \" + str(mdb_stats.get('totalIndexSize') / 1000000) + ' MB')\n print(\"\\nPostgres\")\n print(\"Nb rows in PGS: \" + str(pg.one(\"SELECT count(*) from my_table\")))\n print(\"Storage size : \" + str(pg.one(\"SELECT pg_size_pretty(pg_total_relation_size('my_table'))\")))\n print(\"Indexes size : \" + str(pg.one(\"SELECT pg_size_pretty(pg_indexes_size('my_table'))\")))\nLoop 1/500 => Inserted 1000 docs in MDB & rows in PGS.\nLoop 2/500 => Inserted 1000 docs in MDB & rows in PGS.\n...\nLoop 500/500 => Inserted 1000 docs in MDB & rows in PGS.\nMongoDB\nNb docs in MDB: 500000\nStorage size : 453.238784 MB\nIndexes size : 24.305664 MB\n\nPostgres\nNb rows in PGS: 500000\nStorage size : 536 MB\nIndexes size : 24 MB\ndocker run --rm -d -p 27017:27017 -h 
$(hostname) --name mongo mongo:4.4.6 --replSet=test --wiredTigerCollectionBlockCompressor=zstd && sleep 4 && docker exec mongo mongo --eval \"rs.initiate();\"\nMongoDB\nNb docs in MDB: 500000\nStorage size : 274.41152 MB\nIndexes size : 27.693056 MB\n\nPostgres\nNb rows in PGS: 500000\nStorage size : 537 MB\nIndexes size : 24 MB\n", "text": "Hi again,This got me interested so I started to write some code to put this to the test a little.A document looks like this:The python algo is pretty simple:For the string generation, I used Lorem Ipsum with a random length between 5 and 750 words and limited the final length to 5000 max for Postgres…Results:In this scenario, with this particular schema and data generation, the index sizes are similar but MongoDB uses 15.44% less storage than Postgres for the data.If I wanted to optimize my MongoDB storage even more, I could also use the zstd compression algo instead of the default snappy to save even more storage space and use my CPU a bit more.Altering the schema or the way the string is generated (for example making them long or shorter) could completely reverse the results. So just like @Asya_Kamsky said, there is basically no way to make this test “fair”. It depends on why you are doing the comparison and why you are trying to optimise. And the results you would come up with would only be valid for your particular use case, schema and data set.Just for the sake of it, I ran the exact same test but this time I started MongoDB with zstd compression algo instead of snappy to see the difference:And this time I got this completely unfair result (because it’s a stock Postgres VS MongoDB using the best compression algo):And this time MongoDB’s storage is 48.90% less than Postgres.But again… It’s not proving anything. Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thank you for time and explanation.Yes you are right its not proving anything.\nI appreciate it a lot!", "username": "harris" }, { "code": "", "text": "I want to add my 2 cents.Your data model is very important.If you simply copy the SQL normalized way into your MongoDB data model, you deny yourself to have a better more efficient model. I don’t know how to explain it better that with an example of an accounting system.SQL wayMongo wayHere, you save the space of the primary key (and its index) of the transaction that you would need in the second table to relate the credited/debited account of the transaction.Processing wise, with SQL you need a transaction to update 2 tables and update 2 indexes (or 3 indexes if you want to quickly find transactions for a given account). In Mongo, all is done within a single document and only one index (or 2 indexes if you want to quickly find transactions for a given account).Since I upgraded to MongoDB from SQL, I might miss some of the latest improvement made by SQL. My comment is based on the SQL schema of one of the most popular accounting system for small business.", "username": "steevej" } ]
Why does a MongoDB collection have a larger size than a PostgreSQL table
2021-05-21T12:11:21.370Z
Why does a MongoDB collection have a larger size than a PostgreSQL table
4,780
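Editor's note: the measurements debated in this thread can be reproduced directly from a driver. The sketch below pulls the same collStats fields with PyMongo, assuming a local mongod and the OP's collection name (both are placeholders). storageSize plus totalIndexSize is the closest on-disk analogue to Postgres' pg_total_relation_size, while size is the uncompressed data size that makes WiredTiger's snappy compression visible.

```python
# Minimal sketch (assumed local mongod, placeholder names): compare uncompressed
# data size with the compressed on-disk size, as discussed in the thread above.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["test"]
stats = db.command("collStats", "mongodb2indextimestamp1")

print("documents       :", stats["count"])
print("data size       :", stats["size"] / 1_000_000, "MB")         # uncompressed BSON
print("storage size    :", stats["storageSize"] / 1_000_000, "MB")  # on disk, compressed by WiredTiger
print("total index size:", stats["totalIndexSize"] / 1_000_000, "MB")
```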
null
[ "aggregation", "queries" ]
[ { "code": " {\n \"_id\":\"112211\",\n \"student\":{\n \"student_display_name\":\"Siva\",\n \"first_name\":\"Siva\",\n \"last_name\":\"Ram\",\n \"age\":\"20\",\n \"department\":\"IT\",\n \"section\":\"D\",\n \"edit_sequence\":\"2\",\n \"unique_id\":\"siva_2\",\n \"address\":{\n \"data\":[\n {\n \"student_display_name\":\"Siva\",\n \"first_name\":\"Siva\",\n \"last_name\":\"Ram\",\n \"unique_id\":\"siva_2\",\n \"street\":\"Perter's Park\",\n \"area\":\"BMS Road\",\n \"pincode\":\"560001\"\n },\n {\n \"student_display_name\":\"Siva\",\n \"first_name\":\"Siva\",\n \"last_name\":\"Ram\",\n \"unique_id\":\"siva_1\",\n \"street\":\"St.Mary's Colony\",\n \"area\":\"BMS Road\",\n \"pincode\":\"560001\"\n },\n {\n \"student_display_name\":\"Siva\",\n \"first_name\":\"Siva\",\n \"last_name\":\"Ram\",\n \"unique_id\":\"siva_0\",\n \"street\":\"MG Colony\",\n \"area\":\"BMS Road\",\n \"pincode\":\"560001\"\n }\n ]\n },\n \"student\":{\n \"data\":[\n {\n \"student_display_name\":\"Siva\",\n \"first_name\":\"Siva\",\n \"last_name\":\"Ram\",\n \"age\":\"20\",\n \"department\":\"IT\",\n \"section\":\"B\",\n \"edit_sequence\":\"1\",\n \"unique_id\":\"siva_1\"\n },\n {\n \"student_display_name\":\"Siva\",\n \"first_name\":\"Siva\",\n \"last_name\":\"Ram\",\n \"age\":\"20\",\n \"department\":\"IT\",\n \"section\":\"A\",\n \"edit_sequence\":\"0\",\n \"unique_id\":\"siva_0\"\n }\n ]\n }\n },\n \"college\":\"student college\",\n \"root_table\":\"student\"\n}\n{\n \"$match\":{\n \"$or\":[\n {\n \"student.address.data.pincode\":\"56001\"\n },\n {\n \"$and\":[\n {\n \"student.address.data.data.last_name\":\"Siva\"\n },\n {\n \"student.address.data.data.first_name\":\"Ram\"\n }\n ]\n }\n ]\n }\n}\n {\n \"address\":{\n \"student_display_name\":\"Siva\",\n \"first_name\":\"Siva\",\n \"last_name\":\"Ram\",\n \"unique_id\":\"siva_2\",\n \"street\":\"Perter's Park\",\n \"area\":\"BMS Road\",\n \"pincode\":\"560001\"\n }\n }\n", "text": "I have the below document in a Mongo collection.From this document, I need to query using the following match filters.With these match filters, Further,\nWe will get all the 3 objects under address.data array.\nBut, from these results I want to filter even further based on “student.unique_id” value, so that I will get only one match as below.This is the final result which I want.How to achieve this in MongoDB?", "username": "Jerin_87340" }, { "code": "student.address.data.data.last_name\"$and\":[\n {\n \"student.address.data.data.last_name\":\"Siva\"\n },\n {\n \"student.address.data.data.first_name\":\"Ram\"\n }\n ]\n// starting collection\n> db.array.find()\n{ \"_id\" : 1, \"a\" : [ { \"b\" : 1, \"c\" : 2 } ] }\n{ \"_id\" : 2, \"a\" : [ { \"b\" : 1, \"c\" : 3 }, { \"b\" : 2, \"c\" : 2 } ] }\n> db.array.find( { \"$and\" : [ { \"a.b\" : 1 } , { \"a.c\" : 2 } ] } )\n{ \"_id\" : 1, \"a\" : [ { \"b\" : 1, \"c\" : 2 } ] }\n{ \"_id\" : 2, \"a\" : [ { \"b\" : 1, \"c\" : 3 }, { \"b\" : 2, \"c\" : 2 } ] }\n// Note that the above is equivalent to the following due to implicit $and\n> db.array.find( { \"a.b\" : 1 , \"a.c\" : 2 } )\n{ \"_id\" : 1, \"a\" : [ { \"b\" : 1, \"c\" : 2 } ] }\n{ \"_id\" : 2, \"a\" : [ { \"b\" : 1, \"c\" : 3 }, { \"b\" : 2, \"c\" : 2 } ] }\n// and last what you really want\n> db.array.find( { \"a\" : { \"$elemMatch\" : { \"b\" : 1, \"c\" : 2 } } } )\n{ \"_id\" : 1, \"a\" : [ { \"b\" : 1, \"c\" : 2 } ] }\n", "text": "student.address.data.data.last_nameI see nothing that can be access with …data.data…. Certainly a typo and you have a extra data. 
Secondly,with the typo corrected does not do what you except. If you want both conditions to be true for the same array element, you have to use https://docs.mongodb.com/manual/reference/operator/query/elemMatch/See the difference here:", "username": "steevej" } ]
Query a MongoDB nested document and find the exact matching inner object, where one of the filters uses another field's value as its condition
2021-05-22T08:28:24.473Z
Query a MongoDB nested document and find the exact matching inner object, where one of the filters uses another field's value as its condition
12,886
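Editor's note: the answer above shows how to match the right documents with $elemMatch; to also return only the single matching address element (the OP's stated goal of filtering by the student's own unique_id), an aggregation $filter can compare each array element against another field of the same document. This is a hedged sketch built from the thread's sample document; the connection string and collection name are placeholders.

```python
# Sketch: keep only the address.data elements whose unique_id equals the
# document's own student.unique_id, after matching on pincode.
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["test"]["students"]  # placeholder

pipeline = [
    {"$match": {"student.address.data": {"$elemMatch": {"pincode": "560001"}}}},
    {"$project": {
        "address": {
            "$filter": {
                "input": "$student.address.data",
                "as": "addr",
                # compare each array element against a field of the same document
                "cond": {"$eq": ["$$addr.unique_id", "$student.unique_id"]},
            }
        }
    }},
]

for doc in coll.aggregate(pipeline):
    print(doc)
```

With the thread's sample document this keeps only the element with unique_id "siva_2", which is the single address object shown as the desired result.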
null
[ "compass" ]
[ { "code": "", "text": "HI there… I have the following connection string I’ve used to successfully connect using Robo3T but it fails in Compass:mongodb://<.user><.password>@<.hostserver>:20017,<.hostserver>:20018,<.hostserver>:20019/?ssl=true&replicaSet=ImportsPlus&readPreference=primary&authMechanism=SCRAM-SHA-256error: “self signed certificate in certificate chain”.I’m using a pasted connection string. I don’t have access to the server config to check any settings. Is my connection string valid? any help appreciated.thanks,", "username": "jeremyfiel" }, { "code": "", "text": "Hi @jeremyfielYou will have to change the compass settings. I’ve had a similar experienced with a Private CA.Once you have pasted in your connection string click on the ‘Fill in connection fields individually’ then ‘More Options’The SSL drop down has many options. If you have a copy of the certificate then user the ‘Server Validation’ option the select the path to the certificate.Otherwise use the ‘Unvalidated’ option.", "username": "chris" }, { "code": "", "text": "thanks for your reply.\nI tried bothServer Validation with the .crt\nand\nUnvalidated\nboth are returning: “Client network socket disconnected before secure TLS connection was established”", "username": "jeremyfiel" }, { "code": "", "text": "Curious.So you have a new error than the one initially reported. With your Robo3T connection was that already set up or was it literally copy pasted into Robo3t and works ?If mTLS is configured I would expect a different error but I have not used mTLS with Compass.", "username": "chris" } ]
Compass fails to connect: self signed certificate in certificate chain, Robo3T connects successfully
2021-05-21T14:12:56.929Z
Compass fails to connect: self signed certificate in certificate chain, Robo3T connects successfully
24,423
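Editor's note: the two Compass options discussed here (validating against the internal CA versus skipping validation) can also be tried from a driver, which helps separate a certificate-chain problem from a network problem such as the later "socket disconnected" error. A hedged PyMongo sketch; user, hosts, ports and the CA path are placeholders standing in for the redacted values in the thread.

```python
# Sketch: same replica-set connection string, tried with and without CA validation.
from pymongo import MongoClient

uri = ("mongodb://user:password@host1:20017,host2:20018,host3:20019/"
       "?replicaSet=ImportsPlus&authMechanism=SCRAM-SHA-256")

# Option 1: validate the server certificate against the internal CA file.
client = MongoClient(uri, tls=True, tlsCAFile="/path/to/internal-ca.crt")

# Option 2 (debugging only): accept the self-signed chain without validation.
# client = MongoClient(uri, tls=True, tlsAllowInvalidCertificates=True)

print(client.admin.command("ping"))
```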
null
[ "aggregation", "java" ]
[ { "code": "", "text": "I have a pipeline that seems to work fine in Java until I try to write it to a collection. I thought I could simply add\npipeline.add(new Document(\"$merge\", “testCollection));\nthis doesn’t seem to create the collection. I have tried\npipeline.add(new Document(”$merge\", new Document(“into”,testCollection)));\nbut that doesn’t work either, nor does $out.What am I missing?", "username": "Neil_Youngman" }, { "code": "", "text": "When you say “it doesn’t work” what happens? Do you get an error? What’s the version or server/driver?\nAre you authenticated as a user that can write to the database?", "username": "Asya_Kamsky" }, { "code": "coll.aggregate(asList(Aggregates.merge(\"new_coll\")));\ncoll.aggregate(asList(Aggregates.merge(\"new_coll\"))).into(new ArrayList<>());\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport org.bson.Document;\n\nimport java.util.ArrayList;\n\nimport static com.mongodb.client.model.Aggregates.merge;\nimport static java.util.Collections.singletonList;\n\npublic class AggregationFramework {\n\n public static void main(String[] args) {\n String connectionString = \"mongodb://localhost\";\n try (MongoClient mongoClient = MongoClients.create(connectionString)) {\n MongoDatabase db = mongoClient.getDatabase(\"test\");\n MongoCollection<Document> coll = db.getCollection(\"coll\");\n MongoCollection<Document> newColl = db.getCollection(\"new_coll\");\n resetCollections(coll, newColl);\n System.out.println(\"Merge without reading the docs...\");\n coll.aggregate(singletonList(merge(\"new_coll\"))); // lazy\n System.out.println(\"Docs in new_coll: \" + newColl.countDocuments()); // should print zero\n coll.aggregate(singletonList(merge(\"new_coll\"))).into(new ArrayList<>()); // not lazy\n System.out.println(\"Merge and consume the results this time.\");\n System.out.println(\"Docs in new_coll: \" + newColl.countDocuments()); // should print 3 this time.\n }\n }\n\n private static void resetCollections(MongoCollection<Document> coll, MongoCollection<Document> newColl) {\n newColl.drop();\n coll.drop();\n ArrayList<Document> docs = new ArrayList<>();\n docs.add(new Document(\"name\", \"Max\").append(\"age\", 33));\n docs.add(new Document(\"name\", \"Alex\").append(\"age\", 29));\n docs.add(new Document(\"name\", \"Claire\").append(\"age\", 25));\n coll.insertMany(docs);\n }\n}\nMerge without reading the docs...\nDocs in new_coll: 0\nMerge and consume the results this time.\nDocs in new_coll: 3\n", "text": "Hi @Neil_Youngman and welcome in the MongoDB Community !Aggregations are lazy by default. Meaning that if you don’t read the result of the pipeline, the pipeline is simply not executed.So for example:Wouldn’t be executed because the result isn’t consumed.\nOn the other hand:has to read the result of the pipeline and is executed.Here is a code sample to prove my point:Result:Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks Maxime. That was the missing piece of the jigsaw for me. 
A simple .into(new ArrayList<>()), as you suggested, was all I needed.Neil", "username": "Neil_Youngman" }, { "code": " pipeline.add(new Document(\"$out\", “testCollection\")); pipeline.add(new Document(\"$merge\", “testCollection\"));\"Value expected to be of type DOCUMENT is of unexpected type STRING\"{ $merge: <collection> }pipeline.add(new Document(\"$merge\", new Document(\"into\", \"testCollection\")));", "text": "I still seem to have issues with merge.\nIt was working with\n pipeline.add(new Document(\"$out\", “testCollection\"));\nI changed a few things including changing that to\n pipeline.add(new Document(\"$merge\", “testCollection\"));that threw an exception with the error message \"Value expected to be of type DOCUMENT is of unexpected type STRING\"\nAccording to the manual { $merge: <collection> } is an acceptable simplified form, but I get that exception. I don’t get an exception from pipeline.add(new Document(\"$merge\", new Document(\"into\", \"testCollection\"))); but it doesn’t create new collection.", "username": "Neil_Youngman" }, { "code": "", "text": "There are no errors from pipeline.add()", "username": "Neil_Youngman" }, { "code": "import static com.mongodb.client.model.Aggregates.merge;\nimport static java.util.Collections.singletonList;\n...\ncoll.aggregate(singletonList(merge(\"new_coll\"))).into(new ArrayList<>());\n.first().into()", "text": "I’d recommend to use the Aggregation Framework helper that I used in my code instead of reinventing the wheel. It’s shorter and it will work .There is also a second parameter that you can use to pass merge options if you want any.It looks like you have issues with curly double quotes as well in your code. I’m wondering if the issue couldn’t come from that.Also, if you really don’t want to collect the result, I would use .first() maybe instead of .into() to avoid creating a useless list and save some memory.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "I believe the aggregation helper has some limitations ($lookup) which will affect the rest of the pipeline. I’m not sure if I can mix and match and if I can, it won’t help with the readability.I guess I’d better have a play and see if I can get that to work.I don’t seem to have curly quotes in the original code. I’m not sure where they came from. If they were in the code I was running, I would not expect it to compile.", "username": "Neil_Youngman" }, { "code": "", "text": "OK, I was looking in the wrong place. The source collection name had been changed accidentally, so I was no longer getting any data and merge was refusing to create an empty collection. I thought the issue was the merge stage as I hadn’t intentionally changed the source name.", "username": "Neil_Youngman" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Writing pipeline output to a collection in Java
2021-05-20T16:19:19.476Z
Writing pipeline output to a collection in Java
5,013
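Editor's note: for contrast with the Java driver's lazy AggregateIterable discussed above, here is a rough PyMongo sketch of the same flow (PyMongo sends the aggregate command when aggregate() is called, so the cursor does not need to be consumed). Collection names are placeholders; the count check at the top is the kind of sanity test that would have caught the root cause found at the end of the thread, since $merge over an empty source never creates the target collection.

```python
# Sketch: run a $merge pipeline and sanity-check both the source and the target.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["test"]   # placeholder names
source = db["sourceCollection"]

# An accidentally renamed or empty source means $merge has nothing to write.
print("source documents:", source.count_documents({}))

source.aggregate([
    # ... earlier pipeline stages would go here ...
    {"$merge": {"into": "testCollection"}},
])

print("merged documents:", db["testCollection"].count_documents({}))
```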
null
[ "swift", "app-services-user-auth" ]
[ { "code": "invalidSessioncurrentUser.logOut()currentUser.remove()invalidSession", "text": "In my app, users have the option to delete their account. This will call a Realm function that deletes the user account with all associated data. The problem is, after calling this function, any Realm related call will result in an invalidSession error, which is normal since the local cached user does not exist anymore — I need to log out the user.I tried calling currentUser.logOut() and currentUser.remove() right after the call to the delete function, but both result in an invalidSession error. So, how can I “reset” the session and remove everything related to the old user, to that the app is in a new state, ready for another login?", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "You’ll want to call removeUser before calling the function to delete the user however you do need a valid user to call a function - so what you could do is call a Webhook which triggers the function to clean up the user on the backend Realm Cloud.", "username": "Ian_Ward" }, { "code": "", "text": "@Jean-Baptiste_Beau What SDK are you using?", "username": "kraenhansen" }, { "code": "", "text": "@Ian_Ward I rely on app authentication to make sure users can only delete their own account, for obvious security reasons. Is it possible to have this kind of security using a webhook, even if the user is not connected?@kraenhansen I’m using the iOS SDK 10.7.6.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Any news about that?", "username": "Jean-Baptiste_Beau" }, { "code": "currentUser.remove()invalidSessioninvalidSession", "text": "Well, the answer was surprisingly easy: even though currentUser.remove() returns an invalidSession error when the session is, in fact, invalid, it stills logs out the user. So all I had to do was to ignore the error if it’s an invalidSession error and keep on with the cleanup after the logout.Either it’s a bug, or it’s normal behavior and it should be better documented.", "username": "Jean-Baptiste_Beau" }, { "code": "user.remove()invalidSession", "text": "Nevermind. This is only the case for non-anonymous users. For anonymous users, calling user.remove() returns an invalidSession error and doesn’t remove the user.This definitely looks like a bug. @Ian_Ward @kraenhansen do you confirm?", "username": "Jean-Baptiste_Beau" }, { "code": "invalidSessionmongodb-realmUnable to open a realm at path .../Documents/mongodb-realm/[app-id]/server-utility/metadata/sync_metadata.realm.management", "text": "A note on why do I need to handle invalidSession errors with anonymous users:In the doc, it is stated that:Realm may delete an Anonymous user object that is 90 days old (or older). When an account is deleted, it is not recoverable and any associated user data is lost. Documents created or modified by the user remain unaffected.In my case, users can use the app without creating an account through the use of anonymous authentication. Their data is stored in a local Realm, and the anonymous user is only used to get public data from Atlas.When the anonymous user is deleted (by Realm), I simply want the client app to delete the public data cached on the device, log in with a new anonymous user, and re-download the public data, so that the user can keep using the app.I tried to simply delete the whole mongodb-realm folder, but then I get the error:\nUnable to open a realm at path .../Documents/mongodb-realm/[app-id]/server-utility/metadata/sync_metadata.realm.management. 
If there was a way to trigger the re-creation of those files, this could be a solution.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Following up on the solution to delete the whole folder manually. After getting the error above, the app crashes. The next time the app is launched, the files are re-created, and the app can work again.We’re getting closer to the solution. Still, having the app crashing every 90 days to reset the user is not ideal. What is missing is a way to trigger the re-creation of the files.", "username": "Jean-Baptiste_Beau" } ]
[MongoDB Realm] What to do after `invalidSession` error?
2021-04-29T15:26:32.615Z
[MongoDB Realm] What to do after `invalidSession` error?
4,139
null
[ "atlas-device-sync" ]
[ { "code": "2021-02-01 18:24:32.305629+0100 App[3170:1683819] Sync: Connection[1]: Websocket: Expected HTTP response 101 Switching Protocols, but received:\n\nHTTP/1.1 401 Unauthorized\n\ncache-control: no-cache, no-store, must-revalidate\n\nconnection: close\n\ncontent-length: 190\n\ncontent-type: application/json\n\ndate: Mon, 01 Feb 2021 17:24:32 GMT\n\nserver: envoy\n\nvary: Origin\n\nx-envoy-max-retries: 0\n\nx-frame-options: DENY\n\n2021-02-01 18:24:32.305722+0100 App[3170:1683819] Sync: Connection[1]: Connection closed due to error\nInvalidSession", "text": "In my app, I use anonymous authentication to let user access public static data without having to create an account. I have set up a trigger to regularly delete anonymous users to cleanup the database. The anonymous users can also be deleted after 90 days, according to the doc.The problem is that, after deleting an anonymous user account, when the user opens the app again, the following error occurs:And on the server logs, an InvalidSession error is thrown. After this, the client can’t sync anymore to the server.How can I “catch” this error and just make the user login anonymously again when that happens?", "username": "Jean-Baptiste_Beau" }, { "code": "app.currentUser", "text": "Are you able to check app.currentUser and check if it is not null. If it is, then try to log in another anonymous user again?", "username": "Sumedha_Mehta1" }, { "code": "", "text": "We have the same issue after we deleted a user in the App Users tab. If user have an Android device he can’t do anything the app always tries to use the same user ID and fails to connect. We tried all possible kind of data cleanup we can find and nothing seems to work.", "username": "Anton_P" }, { "code": "\n \n }\n }\n \n \n/**\n An error associated with network requests made to the authentication server. This type of error\n may be returned in the callback block to `SyncUser.logIn()` upon certain types of failed login\n attempts (for example, if the request is malformed or if the server is experiencing an issue).\n \n \n - see: `RLMSyncAuthError`\n */\n public typealias SyncAuthError = RLMSyncAuthError\n \n \n/**\n An enum which can be used to specify the level of logging.\n \n \n - see: `RLMSyncLogLevel`\n */\n public typealias SyncLogLevel = RLMSyncLogLevel\n \n \n/**\n A data type whose values represent different authentication providers that can be used with\n \n ", "text": "Do you have an error handler on the client? See here:And there should be an auth error here:", "username": "Ian_Ward" }, { "code": "", "text": "Hi Ian, thanks for your reply. We have an issue with Android devices. On iOS, after we reinstall the app new user ID is issued and a user can log in but on Android, the same ID is issued and a user can’t log in. 
Not sure how having an error handler could help are there any actions that can be performed during such kinds of errors on Android?", "username": "Anton_P" }, { "code": "", "text": "We first receive session error:Connection[1]: Connect timeout\nCLIENT_CONNECT_TIMEOUT(realm::sync::Client::Error:121): Sync connection was not fully established in time\nConnection[1]: Resolving ‘ws.us-east-1.aws.realm.mongodb.com:443’And some time laterE/REALM_SYNC: Connection[1]: Websocket: Expected HTTP response 101 Switching Protocols, but received:\nHTTP/1.1 401 Unauthorized\ncache-control: no-cache, no-store, must-revalidate\nconnection: close\ncontent-length: 190\ncontent-type: application/json\ndate: Mon, 19 Apr 2021 08:03:21 GMT\nserver: envoy\nvary: Origin\nx-envoy-max-retries: 0\nx-frame-options: DENY\nI/REALM_SYNC: Connection[1]: Connection closed due to error", "username": "Anton_P" }, { "code": "user?.logOut()userIDuser?.logOut()app.removeUser(user)", "text": "user?.logOut() worked. I received a new userID and was able to login successfully after thatUpdate: user?.logOut() worked only after app.removeUser(user) was called despite it says that the user should be removed during log out call.", "username": "Anton_P" }, { "code": "invalidSessioninvalidSession", "text": "Note: this issue is also tracked in this topic. I started the other topic for another reason, but it came down to the same problem.To answer the questions:@Sumedha_Mehta1 the user is not null, that is the problem. It is not null, but it is in an invalid state, and it can’t do anything without getting an invalidSession error. Sure, we could log in another user, but this particular user wouldn’t be deleted, neither would be the associated data, so it doesn’t look like a clean solution.@Ian_Ward detecting the error is not a problem, a call to any Realm function will return an invalidSession so we know there is an error. The problem is, what to do after getting the error, and how to remove the user properly.", "username": "Jean-Baptiste_Beau" } ]
Realm Sync broken after deleting user account
2021-02-01T17:40:14.448Z
Realm Sync broken after deleting user account
4,268
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Im working on an app where a user can signup and add personal information (such as address, date of birth, etc.). I then register them via the email/password auth. Since the app is targeted for the german market I then use custom functions to send confirmation emails. So far so good. I now need to safe the custom date somehow when the user registers. Is there a way to save the custom user data as soon as I register the user and they are not yet confirmed, aka. pending? What is the best practice in this case?", "username": "Benedikt_Bergenthal" }, { "code": "", "text": "Hi @Benedikt_Bergenthal,Did you find a solution to the problem? we are also having a similar requirement where we ask for 3 information while registering.the realm “register” function takes only “email” and “password”. We need to store the name in custom data object, untill the user confirm confirm the email and status changes from “pending” to “authenticated”.", "username": "Surender_Kumar" }, { "code": "", "text": "How I’d handle this would be to login with anonymous login after registration. You can then call a Realm function (that you write) to store the name ane email address in Atlas. You can then add an authentication trigger to run when the user is confirmed (the trigger should be registered against “CREATE” authentication events). The function associated with that trigger can then link the new user to the data that was anonymously added via the Realm function.You can also link the anonymous user with the user that you’re registering – this is how it’s done on iOS: https://docs.mongodb.com/realm/sdk/ios/advanced-guides/link-user-identities/", "username": "Andrew_Morgan" }, { "code": "", "text": "I’m currently storing the additional infos in a cache DB using a custom API. When the user logs in for the first time using their account I use a trigger to transfer the data to the user and delete it from the cache.\nNot very elegant, but it works. ", "username": "Benedikt_Bergenthal" } ]
Setting custom user data right after registration
2021-02-03T18:28:49.391Z
Setting custom user data right after registration
3,441
null
[ "performance", "graphql" ]
[ { "code": "", "text": "Hi!Me and my team are experiencing slow response times on our Realm app’s GraphQL API (900-1500ms). A few months ago we were experiencing response times of around 300ms, so it’s a significant increase.I am aware that it is impossible for you to tell what’s causing this increase without knowing the code base. However, we have some general questions regarding what might be the issue.\nThe weird part for us is that the response times are about the same for every kind of request. Our heavier Custom Resolvers take the same time as the most simple default queries that are directly connected to the Atlas collections. If a certain query would have taken more time, it would have been easy for us to detect a memory leak or something similar in that query’s function, but this does not seem to be the case…We have upgraded our Atlas Cluster to M20, and have not released the app yet, so it shouldn’t be related to the traffic load.Our app has grown fairly large, we are using:Everything is developed through github, nothing is done in the console.We tried to disable user/password auth and saw a potential decrease in response time (perhaps 100-200ms quicker in average).\nWe have also tried to remove all functions, custom resolvers, rules and triggers, and we still experienced around 1100ms response times for queries that are directly connected to the Atlas collections (even small collections of like 5 documents).As you probably can tell, we are getting a bit desperate. Do you have any ideas what might be causing these high response times? Could it be some kind of setting/config that can slow down a whole app? Is our app to large? Any help would be appreciated!", "username": "petas" }, { "code": "", "text": "I believe this is not just about you\nwe had the same experience a few days ago\nhappened more than once", "username": "Royal_Advice" }, { "code": "", "text": "Sorry to hear that you’re experiencing this too, @Royal_Advice . What do you mean with “once”? After having implemented certain features? In that case, which?If you mean that some requests spike in response time, we’ve experienced that too. That’s not what I meant in this post however, as we have gotten a consistent increase in average response time.I would, to an extent, feel reliefed if this was something that could be magically solved when Mongo patches Realm, but I don’t think this is the case. As stated above, the response time drops down to the previous 300ms when we revert to older commits, so it seems there’s something with our code base. We’ve tried to isolate the specific commit, but it doesn’t appear to be connected to a single commit…", "username": "petas" }, { "code": "", "text": "UPDATE:We removed all rules (except for one for testing), and the response times dropped from 900-1500ms to 500-700ms. Can anyone explain why this is?We still have to drop it more though. Is anyone aware of some similar caveats as the rules, that might increasing the response times a couple of hundreds of ms?(I am unable to edit the original post, but I accidentally wrote that we tried to remome the rules last friday, however this was not the case. We only removed all functions, custom resolvers and triggers at that point.)", "username": "petas" }, { "code": "", "text": "Hi Petas - I’m not quite sure what rules/permissions you had on your application but in general, adding permissions can make a request slower since we evaluate permissions on a per-request basis. 
That being said, there are a couple of things that might be worth looking into:", "username": "Sumedha_Mehta1" }, { "code": "\"roles\": [\n {\n \"name\": \"default\",\n \"apply_when\": {},\n \"read\": true,\n \"write\": false,\n \"insert\": false,\n \"delete\": false,\n \"search\": true,\n \"additional_fields\": {}\n }\n ],\n", "text": "Hi @Sumedha_Mehta1, thank you for your response!I see what you’re saying about the rules/permissions, and it definitely makes sense that they could lower the response time. However, you are required to have rules for each collection you want to use in Realm, so it’s not an option for us to remove them. As for the roles of the rules, we haven’t done anything funky at all, so it should be able to run smoothly. The roles-rules look the same for all collections we’re using:No, we are using every custom resolver as resolvers, none can be ‘converted’ to a system function.We are based in Stockholm, and have both our Atlas cluster and Realm region in eu-west - Ireland.We have indexed our collections in a suitable way. However, we don’t see how that or filters could fix the issue, seeing how the response times are about the same for simple queries on collections with 3 documents, as for more complex queries on collections with thousands of documents… This fact makes it feel like it’s something closer to the geographical issue you’re mentioning. However, as stated in the post, the app has been quick in the past, with the Atlas cluster and Realm app deployed in the same region.Does it seem like my list of our features and it’s quantities could be too large for Realm to handle? We don’t think it’s that large nor complex though…We greatly appreciate your help, Sumedha!", "username": "petas" }, { "code": "", "text": "Hey @petas - could you open a support ticket with MongoDB since this is a bit more nuanced and requires looking into your application more.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Just an update for the people who might stumble across this thread in the future.I reported the ticket to the Mongo support team in the end of January, as suggested above. They confirmed that this seems to occur on certain apps, and that they suspect that it’s a bug from their end. Since then I’ve asked for updates every other week, with the constant update that they’re “working on it”. A few weeks ago the ticket was simply closed, without any reported resolvement.We’re very disappointed in the service overall. We were closing up on launch when I first opened this thread, so it was very stressful that this issue randomly appeared right then. Since then we’ve been forced to keep stalling the release, until a point a couple of months ago where it simply couldn’t wait any longer. We had to tweak our frontend a whole lot, and worked hard on concealing the response times as much as possible.If someone is experiencing this kind of problem early on in their development process we would advice you to just switch to another serverless solution. Over three months have passed and nothing has improved.", "username": "petas" }, { "code": "", "text": "Hi @petas - I’m sorry to hear that you didn’t have a great experience with our support/service. For complete transparency, there was a gap in our service implementation around how the GraphQL Schema was being generated for very large JSON schema. This issue was identified earlier this year and is being addressed, though unfortunately it is not a quick bug fix . 
I’m expecting that this will get resolved ~ end of May.I understand that is not ideal based on your launch, but please let me know if you have any further feedback to pass on. I will also update this thread when we release the fix. You can email me at [email protected]", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Thank you for your thorough response Sumedha! If the support would have offered such transparency (and potential ETA) we would have been able to handle the situation in a different, less stressful, way.However, I think your response(s) on this forum always have been on point, so kudos to you! I have forwarded this information to our stakeholders, and we’re looking forward to an update on the subject by the end of May!", "username": "petas" }, { "code": "", "text": "Update:The Mongo team has just fixed the issue! Our response times are finally back at 150-300 ms again. Great relief!Thank you @Sumedha_Mehta1 for the continuous communication on the process, I’ve appreciated it a lot!", "username": "petas" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Slow response times on Mongo Realm GraphQL
2021-01-15T15:34:47.358Z
Slow response times on Mongo Realm GraphQL
4,675
null
[ "ops-manager", "free-monitoring" ]
[ { "code": " {\n \"state\" : \"enabled\",\n \"message\" : \"Host already monitored by MongoDB Ops Manager, Cloud Manager, or Atlas; Free Monitoring reporting interval set to once per month.\",\n \"url\" : \"\",\n \"userReminder\" : \"\",\n \"ok\" : 1\n }\n", "text": "I was trying out ops manager, but then I tried reversing it. I removed it, but free monitoring isn’t working anymore. When I run the db.enableFreeMonitoring command, I get this outputI tried turning off free monitoring and enabling it again, but that didn’t work either. Any ideas how to disable the other monitors?Thanks", "username": "Victor_Back" }, { "code": "", "text": "Hi!, I had the same problem but after reconnecting to the shell the free monitoring was activated and the URL given in the prompt", "username": "30629143C" } ]
Free monitoring colliding with ops manager
2020-09-21T09:46:49.218Z
Free monitoring colliding with ops manager
2,951
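Editor's note: the shell helpers mentioned in this thread (db.enableFreeMonitoring() and friends) wrap plain admin commands, so the check-and-reset cycle can be scripted. A hedged PyMongo sketch against a local deployment; it will not by itself clear the "already monitored" message, but it makes it easy to re-check the state once the Ops/Cloud Manager agent has been fully removed.

```python
# Sketch: inspect free-monitoring state, then disable and re-enable it.
from pymongo import MongoClient

admin = MongoClient("mongodb://localhost:27017").admin

print(admin.command("getFreeMonitoringStatus"))           # same output shape as shown in the thread

admin.command("setFreeMonitoring", 1, action="disable")   # equivalent of db.disableFreeMonitoring()
admin.command("setFreeMonitoring", 1, action="enable")    # equivalent of db.enableFreeMonitoring()

print(admin.command("getFreeMonitoringStatus"))
```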
null
[ "upgrading" ]
[ { "code": "", "text": "Earlier i use Mongodb 4.0.9 version , now i unstalled 4.0.9 and I nstalled 4.6 version.then i changed data folder path to existing data path (4.0.9) in config file.\nnow issue is mongodb service is not stating", "username": "Mahipal_Reddy" }, { "code": "mongodC:\\Program Files\\MongoDB\\Server\\4.4\\log", "text": "Hi Mahipal,Welcome to the community.Can you take a peek at the mongod log file and see if you can find any details on the exception during start up? If you are on Windows the log file should be in this location - C:\\Program Files\\MongoDB\\Server\\4.4\\log", "username": "mahisatya" }, { "code": "{\"t\":{\"$date\":\"2021-05-21T00:49:13.148+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.498+05:30\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.499+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.499+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23316, \"ctx\":\"main\",\"msg\":\"Trying to start Windows service '{toUtf8String_serviceName}'\",\"attr\":{\"toUtf8String_serviceName\":\"MongoDB\"}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.501+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":8848,\"port\":27017,\"dbPath\":\"E:/BCS Datamigration/mongodb/data\",\"architecture\":\"64-bit\",\"host\":\"ICSSCPU183\"}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.501+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.501+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.6\",\"gitVersion\":\"72e66213c2c3eab37d9358d5e78ad7f5c1d0d0d7\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.501+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 18363)\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.501+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\4.4\\\\bin\\\\mongod.cfg\",\"net\":{\"bindIp\":\"127.0.0.1,10.10.30.62\",\"port\":27017},\"service\":true,\"storage\":{\"dbPath\":\"E:\\\\BCS Datamigration\\\\mongodb\\\\data\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"E:\\\\BCS Datamigration\\\\mongodb\\\\log\\\\mongod.log\"}}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.502+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"E:/BCS 
Datamigration/mongodb/data\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.502+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=15851M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.510+05:30\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":4671205, \"ctx\":\"initandlisten\",\"msg\":\"This version of MongoDB is too recent to start up on the existing data files. Try MongoDB 4.2 or earlier.\"}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.510+05:30\"},\"s\":\"F\", \"c\":\"-\", \"id\":23089, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":4671205,\"file\":\"src\\\\mongo\\\\db\\\\storage\\\\wiredtiger\\\\wiredtiger_kv_engine.cpp\",\"line\":913}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.510+05:30\"},\"s\":\"F\", \"c\":\"-\", \"id\":23090, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.510+05:30\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"initandlisten\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 22 (SIGABRT).\\n\"}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"initandlisten\",\"msg\":\"BACKTRACE: {bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"7FF6C3979143\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/stacktrace_windows.cpp\",\"line\":349,\"s\":\"mongo::`anonymous namespace'::printWindowsStackTraceImpl\",\"s+\":\"43\"},{\"a\":\"7FF6C3979B76\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":263,\"s\":\"mongo::`anonymous namespace'::abruptQuit\",\"s+\":\"76\"},{\"a\":\"7FFA4B2ECAAD\",\"module\":\"ucrtbase.dll\",\"s\":\"raise\",\"s+\":\"1DD\"},{\"a\":\"7FFA4B2EDAB1\",\"module\":\"ucrtbase.dll\",\"s\":\"abort\",\"s+\":\"31\"},{\"a\":\"7FF6C39847EF\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":196,\"s\":\"mongo::fassertFailedWithLocation\",\"s+\":\"16F\"},{\"a\":\"7FF6C266E171\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":911,\"s\":\"mongo::WiredTigerKVEngine::_openWiredTiger\",\"s+\":\"911\"},{\"a\":\"7FF6C266B92F\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":767,\"s\":\"mongo::WiredTigerKVEngine::WiredTigerKVEngine\",\"s+\":\"160F\"},{\"a\":\"7FF6C26338FA\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_init.cpp\",\"line\":103,\"s\":\"mongo::`anonymous namespace'::WiredTigerFactory::create\",\"s+\":\"35A\"},{\"a\":\"7FF6C2C39147\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/storage_engine_init.cpp\",\"line\":158,\"s\":\"mongo::initializeStorageEngine\",\"s+\":\"6D7\"},{\"a\":\"7FF6C25EE918\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/db.cpp\",\"line\":361,\"s\":\"mongo::`anonymous namespace'::_initAndListen\",\"s+\":\"978\"},{\"a\":\"7FF6C25F119E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/db.cpp\",\"line\":786,\"s\":\"mongo::`anonymous 
namespace'::initAndListen\",\"s+\":\"1E\"},{\"a\":\"7FF6C2B62413\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/ntservice.cpp\",\"line\":612,\"s\":\"mongo::ntservice::initService\",\"s+\":\"53\"},{\"a\":\"7FFA4DC52DE2\",\"module\":\"sechost.dll\",\"s\":\"LsaLookupUserAccountType\",\"s+\":\"202\"},{\"a\":\"7FFA4DB17C24\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}]}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C3979143\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/stacktrace_windows.cpp\",\"line\":349,\"s\":\"mongo::`anonymous namespace'::printWindowsStackTraceImpl\",\"s+\":\"43\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C3979B76\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":263,\"s\":\"mongo::`anonymous namespace'::abruptQuit\",\"s+\":\"76\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFA4B2ECAAD\",\"module\":\"ucrtbase.dll\",\"s\":\"raise\",\"s+\":\"1DD\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFA4B2EDAB1\",\"module\":\"ucrtbase.dll\",\"s\":\"abort\",\"s+\":\"31\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C39847EF\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":196,\"s\":\"mongo::fassertFailedWithLocation\",\"s+\":\"16F\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C266E171\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":911,\"s\":\"mongo::WiredTigerKVEngine::_openWiredTiger\",\"s+\":\"911\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C266B92F\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":767,\"s\":\"mongo::WiredTigerKVEngine::WiredTigerKVEngine\",\"s+\":\"160F\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C26338FA\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_init.cpp\",\"line\":103,\"s\":\"mongo::`anonymous namespace'::WiredTigerFactory::create\",\"s+\":\"35A\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C2C39147\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/storage_engine_init.cpp\",\"line\":158,\"s\":\"mongo::initializeStorageEngine\",\"s+\":\"6D7\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C25EE918\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/db.cpp\",\"line\":361,\"s\":\"mongo::`anonymous namespace'::_initAndListen\",\"s+\":\"978\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C25F119E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/db.cpp\",\"line\":786,\"s\":\"mongo::`anonymous namespace'::initAndListen\",\"s+\":\"1E\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C2B62413\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/ntservice.cpp\",\"line\":612,\"s\":\"mongo::ntservice::initService\",\"s+\":\"53\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFA4DC52DE2\",\"module\":\"sechost.dll\",\"s\":\"LsaLookupUserAccountType\",\"s+\":\"202\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFA4DB17C24\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23134, \"ctx\":\"initandlisten\",\"msg\":\"Unhandled exception\",\"attr\":{\"exceptionString\":\"0xE0000001\",\"addressString\":\"0x00007FFA4B013B29\"}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.623+05:30\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23136, \"ctx\":\"initandlisten\",\"msg\":\"*** stack trace for unhandled exception:\"}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.624+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"initandlisten\",\"msg\":\"BACKTRACE: {bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"7FFA4B013B29\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"},{\"a\":\"7FF6C397A819\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":97,\"s\":\"mongo::`anonymous namespace'::endProcessWithSignal\",\"s+\":\"19\"},{\"a\":\"7FF6C3979B82\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":264,\"s\":\"mongo::`anonymous 
namespace'::abruptQuit\",\"s+\":\"82\"},{\"a\":\"7FFA4B2ECAAD\",\"module\":\"ucrtbase.dll\",\"s\":\"raise\",\"s+\":\"1DD\"},{\"a\":\"7FFA4B2EDAB1\",\"module\":\"ucrtbase.dll\",\"s\":\"abort\",\"s+\":\"31\"},{\"a\":\"7FF6C39847EF\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":196,\"s\":\"mongo::fassertFailedWithLocation\",\"s+\":\"16F\"},{\"a\":\"7FF6C266E171\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":911,\"s\":\"mongo::WiredTigerKVEngine::_openWiredTiger\",\"s+\":\"911\"},{\"a\":\"7FF6C266B92F\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":767,\"s\":\"mongo::WiredTigerKVEngine::WiredTigerKVEngine\",\"s+\":\"160F\"},{\"a\":\"7FF6C26338FA\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_init.cpp\",\"line\":103,\"s\":\"mongo::`anonymous namespace'::WiredTigerFactory::create\",\"s+\":\"35A\"},{\"a\":\"7FF6C2C39147\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/storage_engine_init.cpp\",\"line\":158,\"s\":\"mongo::initializeStorageEngine\",\"s+\":\"6D7\"},{\"a\":\"7FF6C25EE918\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/db.cpp\",\"line\":361,\"s\":\"mongo::`anonymous namespace'::_initAndListen\",\"s+\":\"978\"},{\"a\":\"7FF6C25F119E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/db.cpp\",\"line\":786,\"s\":\"mongo::`anonymous namespace'::initAndListen\",\"s+\":\"1E\"},{\"a\":\"7FF6C2B62413\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/ntservice.cpp\",\"line\":612,\"s\":\"mongo::ntservice::initService\",\"s+\":\"53\"},{\"a\":\"7FFA4DC52DE2\",\"module\":\"sechost.dll\",\"s\":\"LsaLookupUserAccountType\",\"s+\":\"202\"},{\"a\":\"7FFA4DB17C24\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}]}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.624+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFA4B013B29\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.624+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C397A819\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":97,\"s\":\"mongo::`anonymous namespace'::endProcessWithSignal\",\"s+\":\"19\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.624+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C3979B82\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":264,\"s\":\"mongo::`anonymous namespace'::abruptQuit\",\"s+\":\"82\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.624+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFA4B2ECAAD\",\"module\":\"ucrtbase.dll\",\"s\":\"raise\",\"s+\":\"1DD\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.624+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFA4B2EDAB1\",\"module\":\"ucrtbase.dll\",\"s\":\"abort\",\"s+\":\"31\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.624+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, 
\"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C39847EF\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":196,\"s\":\"mongo::fassertFailedWithLocation\",\"s+\":\"16F\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.624+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C266E171\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":911,\"s\":\"mongo::WiredTigerKVEngine::_openWiredTiger\",\"s+\":\"911\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.625+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C266B92F\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":767,\"s\":\"mongo::WiredTigerKVEngine::WiredTigerKVEngine\",\"s+\":\"160F\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.625+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C26338FA\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_init.cpp\",\"line\":103,\"s\":\"mongo::`anonymous namespace'::WiredTigerFactory::create\",\"s+\":\"35A\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.625+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C2C39147\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/storage_engine_init.cpp\",\"line\":158,\"s\":\"mongo::initializeStorageEngine\",\"s+\":\"6D7\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.625+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C25EE918\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/db.cpp\",\"line\":361,\"s\":\"mongo::`anonymous namespace'::_initAndListen\",\"s+\":\"978\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.625+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C25F119E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/db.cpp\",\"line\":786,\"s\":\"mongo::`anonymous namespace'::initAndListen\",\"s+\":\"1E\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.625+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6C2B62413\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/ntservice.cpp\",\"line\":612,\"s\":\"mongo::ntservice::initService\",\"s+\":\"53\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.625+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFA4DC52DE2\",\"module\":\"sechost.dll\",\"s\":\"LsaLookupUserAccountType\",\"s+\":\"202\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.625+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFA4DB17C24\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.625+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23132, \"ctx\":\"initandlisten\",\"msg\":\"Writing minidump diagnostic file\",\"attr\":{\"dumpName\":\"C:\\\\Program 
Files\\\\MongoDB\\\\Server\\\\4.4\\\\bin\\\\mongod.2021-05-20T19-19-13.mdmp\"}}\n{\"t\":{\"$date\":\"2021-05-21T00:49:13.690+05:30\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23137, \"ctx\":\"initandlisten\",\"msg\":\"*** immediate exit due to unhandled exception\"}\n", "text": "", "username": "Mahipal_Reddy" }, { "code": "", "text": "i am not able to attach that log file here , just i copied here", "username": "Mahipal_Reddy" }, { "code": "mongod4.24.04.44.2", "text": "“msg”:“This version of MongoDB is too recent to start up on the existing data files. Try MongoDB 4.2 or earlier.”The above line indicates mongod can’t start because the upgrade path wasn’t followed. You skipped 4.2 release series when you went from 4.0 to 4.4. New releases generally come with breaking changes, so it’s highly recommended to follow the upgrade path.According to the documentation, first, upgrade to 4.2, and make sure everything works after. If you are satisfied with 4.2 then upgrade once more, this time to 4.4.Hopefully, that should resolve the issue.Mahi", "username": "mahisatya" } ]
Mongodb service start issue
2021-05-20T19:30:21.517Z
Mongodb service start issue
5,551
null
[ "atlas-device-sync" ]
[ { "code": "{\n \"%%true\": {\n \"%function\": {\n \"name\": \"onAllowRead\",\n \"arguments\": [\n \"%%user\",\n \"%%partition\"\n ]\n }\n }\n}\n{\n \"%%true\": {\n \"%function\": {\n \"name\": \"onAllowWrite\",\n \"arguments\": [\n \"%%user\",\n \"%%partition\"\n ]\n }\n }\n}\ncollection::aggregateonAllowWritefalsefalsefalseonAllowWritetrueasynconAllowWriteexports = async function(user, partition)\n{\n if (condition_without_db_query)\n {\n return true;\n }\n else\n {\n const coll = context.services.get('mongodb-atlas').db('some_db').collection('some_coll');\n \n const result = await coll.findOne({ some: condition });\n if (result.something > 0)\n {\n return (result.other === 'yay');\n }\n \n return false;\n }\n};\ntruecondition_without_db_querytrue(result.other === 'yay')collection::aggregateonAllowWritetrueasynccollection::aggregatecollection::aggregate$lookupcollection::aggregate$lookup", "text": "Cross post from Github as we don’t think this issue is specific to the RealmJS SDK and it has become a show-stopper.Using Realm Web SDK Version: 1.2.0.We have Realm Sync permissions set as follows;ReadWriteThis allows control over partition access per user.When calling a Realm user function which internally calls collection::aggregate (and nothing else), the Realm Sync Write permission method onAllowWrite is being invoked, however, it’s not being invoked on every collection. We would like to know how to avoid what should be a read-only query requiring write privilege.The secondary issue with these Realm Sync permissions is if they return false to deny access, the Realm user function which caused their invocation does not terminate immediately, and times out after 90 seconds. If however, rather than returning false we instead throw an exception, the Realm user function terminates immediately. Why does returning false not terminate in the same manner as throwing an exception?The third issue we’re now finding is even when the onAllowWrite method returns true after making an async query, the method that caused the invocation is still timing out. The onAllowWrite method is akin to the following pseudo code;When this method returns true after testing condition_without_db_query everything works without issue. If true is returned when (result.other === 'yay') is tested, the user function that invoked it times out after 90 seconds.For clarity, here’s the call stack order;Web app calls Realm user function\nRealm user function calls collection::aggregate\nonAllowWrite is invoked, returning true after an async query\ncollection::aggregate times out\nRealm user function returns errorEDIT\nWith further testing, we’ve narrowed down the reason why only some collection::aggregate queries are requiring write privilege. It’s only occurring with queries containing a $lookup in the pipeline.Could a Realm team member please explain why a collection::aggregate query containing a $lookup in the pipeline invokes Realm Sync’s write permission ?", "username": "Mauro" }, { "code": "", "text": "This issue has been resolved with the assistance of mongodb support.", "username": "Mauro" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Sync Permissions issue
2021-05-20T00:49:41.180Z
Realm Sync Permissions issue
2,488
null
[]
[ { "code": "", "text": "Is there a way to add an index for nonempty string field, for example with Partial index to add to index only if the string field exists and is not emptyI couldn’t find a way to get the length with partial, if I were able to then I could use that to check is greater than 0.\nOmit field does not work as “” is zero value for string\n$ne : “” does not work as I don’t think $ne/NOT is supportedSo is there no way to create a partial index for strings that are not empty?", "username": "chris_findon" }, { "code": "", "text": "found this approach that worked for me, didn’t realise that $gt could be used in this way\ndb.ce.createIndex(\n{ management_ip: 1 },\n{ unique: true, partialFilterExpression: { management_ip: { “$exists” : true, “$gt” : “0”, “$type” : “string” }} }\n)", "username": "chris_findon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unique index with empty string
2021-05-20T23:57:42.168Z
Unique index with empty string
4,957
null
[ "backup" ]
[ { "code": "", "text": "Hello,\nis it possible to start a mongodb instance when some collection files are missing from the WiredTiger data folder? Only some collection files missing, ALL other files from the data path are present.\nI tried this on a 3.4 instance, but mongo would not start because of the missing files. I now solved my issue in a different way, but I’m curious to know anyway, just in case I will need this again in the future.On a more general topic, I have to manage this 3.4 instance; we currently cannot upgrade to a more recent version. The instance has some huge collections, so an idea for backup was to do frequent backups via lvm snapshots + data files copy skipping the largest files, and copy the large files less frequently instead, e.g. during weekends.\nBut is such a backup usable in case of disaster recovery? As reported above, I was not able to start the db without some collection files (the largest ones). I also tried replacing the missing files with copies from a different backup, but again it did not work, mongodb complained about wrong checksums and so on.I’m now going back to backups using mongodump, which I mostly used before. I’m only not sure how fast and reliable can be the mongodump of a 1TB collection…Thanks for any info.", "username": "Marco_De_Vitis" }, { "code": "mongodumpmongod", "text": "Hi @Marco_De_Vitisis it possible to start a mongodb instance when some collection files are missing from the WiredTiger data folder? Only some collection files missing, ALL other files from the data path are present.\nI tried this on a 3.4 instance, but mongo would not start because of the missing files. I now solved my issue in a different way, but I’m curious to know anyway, just in case I will need this again in the future.The short answer is no.But is such a backup usable in case of disaster recovery?No. Until you’ve restored from a backup you don’t have a backup, just some files you think are a backup. Regular restoration test are required as part of any backup strategy.I’m now going back to backups using mongodump, which I mostly used before. I’m only not sure how fast and reliable can be the mongodump of a 1TB collection… MongoDB Backup Methods has this to say on mongodump. It has also been discouraged as a production backup method on a few other threads.When connected to a MongoDB instance, mongodump can adversely affect mongod performance. If your data is larger than system memory, the queries will push the working set out of memory, causing page faults. MongoDB Backup Methods Has options with some of their products, you would have to pay for these or have MongoDB Enterprise.Not to mention MongoDB Atlas where you just click and configure your backups.Percona offer a backup tool too. I don’t have any experience with this yet.I have had success building backup nodes on a ZFS filesystem and used GitHub - zfsonlinux/zfs-auto-snapshot: ZFS Automatic Snapshot Service for Linux.", "username": "chris" }, { "code": "", "text": "Thank you @chris.\nMy initial plan was indeed to use filesystem snapshots (with LVM), and then copy data from the snapshot elsewhere, but the size of the >1TB collection made it impossible to keep a full db history. That’s why I tried copying everything but the huge collections, there is no warning about this in MongoDB Backup Methods.\nBut then, in what can be considered a first restoration test, I discovered that recovering such a partial backup is not easy. 
For the record, I succeeded anyway in recovering the data by copying the files into a 4.4 installation and running --repair. The mongodump performance impact is not so important in this case because I plan to do the big dump in non-business hours. Paid services become expensive quickly at such sizes, while Percona seems interesting, thanks, but it will need time to be tried and, most of all, does not work with MongoDB 3.4, which I'm currently forced to use.", "username": "Marco_De_Vitis" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
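For the snapshot-style backups discussed above, a minimal sketch of pausing writes around the snapshot from PyMongo — assuming the journal and data files live on the volume being snapshotted; the URI is a placeholder and the snapshot step itself is left as a comment:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI

# Flush pending writes and block new ones so the data files are consistent
# on disk while the snapshot is taken.
client.admin.command({"fsync": 1, "lock": True})
try:
    pass  # take the LVM/filesystem snapshot of the dbPath volume here
finally:
    # Always release the lock, even if the snapshot step fails.
    client.admin.command({"fsyncUnlock": 1})
```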
Recovering partial data folder (and backup policies)
2021-05-14T15:52:33.214Z
Recovering partial data folder (and backup policies)
2,378
null
[ "crud" ]
[ { "code": "", "text": "How to change the field data type of an existing document from string to an integer?", "username": "Daipayan_Mandal" }, { "code": "Atlas Cloud ClusterMongoDB Compass", "text": "Hi @Daipayan_Mandal,You can change the data type of a field by using the data type selectors on the right of the field in the Atlas Cloud Cluster as well as MongoDB Compass.For more info on data-type read hereIf you want to update it using Mongo shell or any specific drivers of MongoDB then you can refer to the $convert operator.Hope it helps !!All the Best\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Modify field type
2021-05-20T16:31:58.454Z
Modify field type
19,055
null
[ "aggregation" ]
[ { "code": "{\n \"_id\" : ObjectId(\"5ff32c8b6cff64b8582a7c12\"),\n \"Transaction\" : [ \n {\n \"StatusCode\" : \"1\",\n \"Amount\" : NumberDecimal(\"300\"),\n \"CreatedDateTime\" : ISODate(\"2021-01-01T10:27:41.746Z\")\n }, \n {\n \"StatusCode\" : \"2\",\n \"Amount\" : NumberDecimal(\"-750\"),\n \"CreatedDateTime\" : ISODate(\"2021-01-02T10:27:41.746Z\")\n }, \n {\n \"StatusCode\" : \"1\",\n \"Amount\" : NumberDecimal(\"1500\"),\n \"Date\" : ISODate(\"2021-01-03T10:27:41.746Z\")\n }\n ]\n}\n[{$unwind: \n{\n path: '$Transaction'\n}}, {$group: {\n _id: \"$Transaction.StatusCode\",\n Payments : {\n $push: '$Transaction'\n }\n}}, {$group: {\n _id: null,\n Paid: {\n $push: {\n $arrayElemAt: [\n '$Payments.Amount',\n {\n $indexOfArray: [\n '$Payments',\n {\n $max: '$Payments.CreatedDateTime'\n }\n ]\n }\n ]\n }\n } \n}}]\n{\n \"_id\" : null,\n \"Paid\" : [ \n NumberDecimal(\"1500\"), \n NumberDecimal(\"-750\")\n ]\n}\n", "text": "Consider the below sample document, which I am trying to get the recent values.The Aggregate query that I triedResults which I gotCan this query be possible to optimize without using unwind or any other way to bring this result?", "username": "Sudhesh_Gnanasekaran" }, { "code": "$unwind$unwindTransaction$groupStatusCodeAmount$groupAmountPaiddb.collection.aggregate([\n { $unwind: { path: \"$Transaction\" } },\n {\n $group: {\n _id: \"$Transaction.StatusCode\",\n Payments: { $max: \"$Transaction.Amount\" }\n }\n },\n {\n $group: {\n _id: null,\n Paid: { $push: \"$Payments\" }\n }\n }\n])\n", "text": "Can this query be possible to optimize without using unwind or any other way to bring this result?$unwind is required but you can optimize the query as below:", "username": "turivishal" }, { "code": "TransactionTransaction", "text": "Hi @Sudhesh_Gnanasekaran,any other way to bring this result?Based on the example document, Transaction field is an array that looks like going to keep growing in size as more transactions occur. Even before a document reaches the size limitation, you are likely to encounter challenges at querying data from a document.Depending on your use case, I’d suggest to re-consider your data model. You could try to store a transaction event as a single document instead. See also Building With Patterns: A summary for more data modelling information.Another observation based on the example document, there are two fields that contains date information. I’d suggest to use a consistent schema for the sub-documents within the Transaction field.Regards,\nWan.", "username": "wan" }, { "code": "$unwind$unwind$group$unwind", "text": "$unwind is required but you can optimize the query as below:I addition to what @wan said about the schema, I’d like to add that $unwind may not be necessary here. It depends on what exactly the aggregation is supposed to do. There are multiple ways of processing arrays without having to unwind them, but what I’m not clear on is why you are trying to avoid an unwind.I can see trying to avoid $group but $unwind is an efficient streaming stage so it’s nothing to fear…@Sudhesh_Gnanasekaran can you clarify what your aggregation is trying to do? What I see now is grouping all the transactions of the same status code and then taking the amount from the highest date value - is that what you are trying to do? 
The problem with your aggregation is there is no way to know which final array element belongs to which status code (because the order is not guaranteed to be by group key value) and you have some unfortunate typos or mistakes which are not uncovered by having a very trivial test document.In general, it’s easier to help if you provide a description of what you want the output of your aggregation to be both in plain English and as a sample document. Otherwise you end up with people giving you advice how to construct an aggregation that does something different than what you want it to!Asya", "username": "Asya_Kamsky" }, { "code": "{\n \"_id\" : null,\n \"Paid\" : [ \n NumberDecimal(\"1500\"), \n NumberDecimal(\"-750\")\n ]\n}\n{\n PaidAmount : 750\n}\n", "text": "Hi @Asya_KamskyI want to take the recent amount value by “CreatedDateTime” with groupby “StatusCode.”If the status code =1, the Amount value 1500 is the recent transaction by date.\nIf the status code =2, the Amount value 700 is the recent transaction by date.Below is the expected resultSo little concern about the $unwind operation will cause any performance issue because we have more than 10 million documents in that collection. Then each transaction array having least 3 to 5 sub document.In this case my final output will be sum of each transaction of recent paid .", "username": "Sudhesh_Gnanasekaran" }, { "code": " { Transactions: [ { status: 1, date: 1, amount: 10}, { status: 2, date: 2, amount: 20} ]}\n { Transactions: [ { status: 1, date: 3, amount: 15}, { status: 2, date: 1, amount: 30} ]}\n { Transactions: [ { status: 3, date: 1, amount: 5} ]}\n_id:null [\n {$unwind:\"$Transactions\"},\n {$sort:{\"Transaction.StatusCode\":1,\"Transaction.CreatedDateTime\":-1}},\n {$group:{_id:\"$Transaction.StatusCode\", latestPayment:{$first:\"$Transaction.Amount\"}}},\n {$group:{_id:null, PaidAmount:{$sum:\"$latestPayment\"}}}\n ]\n$sortallowDiskUse:true$match$unwind", "text": "Create several more test documents and see that the result will be an array with as many entries as you have different status codes but they will not be in order so there is no way to know which amount belongs to which statusCode!Do you want one result document for each input document? Or do you want one result document for your entire collection? How are transactions grouped into documents in your collection? All these things influence how you should write and optimize the aggregation.If your sample documents had these arrays of transactions, what should the output be?Is it the single latest date value for each status that you want? Then first you say you want the result to be _id:null with array of paid amounts (in some order?) but then you say you want to sum each transaction of recent paid. So do you not care about the order because you will be summing them? You just want the latest value for each status code?Simplest way to do that would be:The problem with this is that the $sort cannot use an index (since you are sorting items that are not stored in the collection but unwound version of them), so you will need to use allowDiskUse:true and this aggregation will be slow unless you can limit it to a subset of your original documents with $match first.So in fact, avoiding $unwind here would be key to being able to use an index or being able to keep the result small enough not to spill to disk. There’s a way to do it I think but it’s a bit ugly. I would really suggest reconsidering your schema if at all possible.Asya", "username": "Asya_Kamsky" } ]
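For readers following along in Python, here is Asya's last pipeline expressed with PyMongo — a sketch that assumes the field names from the sample document (Transaction.StatusCode, Transaction.CreatedDateTime, Transaction.Amount) and that a preceding $match (not shown) keeps the sorted set small:

```python
from pymongo import MongoClient

coll = MongoClient()["mydb"]["payments"]  # illustrative names

pipeline = [
    # A $match stage to narrow the documents should normally come first.
    {"$unwind": "$Transaction"},
    {"$sort": {"Transaction.StatusCode": 1, "Transaction.CreatedDateTime": -1}},
    # After the sort, the first document per status code is its latest payment.
    {"$group": {"_id": "$Transaction.StatusCode",
                "latestPayment": {"$first": "$Transaction.Amount"}}},
    {"$group": {"_id": None, "PaidAmount": {"$sum": "$latestPayment"}}},
]
for doc in coll.aggregate(pipeline, allowDiskUse=True):
    print(doc)
```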
How to group an array and get the recent value without using unwind?
2021-05-19T15:52:00.120Z
How to group an array and get the recent value without using unwind?
20,445
null
[]
[ { "code": "", "text": "Scenario : I need to insert a record in collection A & then along with the _ids of that inserted record from collection A , i need to insert another record in collection B .\nFor ex - After insertion\nCollection A record{ _id: ObjectId(“abcdef”) , name: “L” , records : [ { id: “def” , name : “none” } ] }Collection B record{ _id: ObjectId(“b_collection_id”) , a_col_id : ObjectId(“abcdef”) ,\nb_records : [ { id: “b_id” , a_records_id : “def” } ] }This same process needs to be done for Millions of records but for few records insertOne in collection A gives successful response along with all the ids & then got inserted into Collection B but when we try to search the record in Collection in A it wasn’t present there but Collection B record had all the values that are required from collection A record.Now the question here is are there known scenarios of data loss in Mongodb atlas because this has happened to me while loading 500K records & 800K records & both time the loading of records got failed because 1-3 records didn’t appear in collection A but their corresponding records in collection B got created with all the correct data format along with Collection A ids ( that somehow are not present in Collection A ).Is there any solution to it ?\nWill applying writeConcern { w: majority } options would make sure that data loss never happens ?", "username": "Prateek_Gupta1" }, { "code": "try\n{\n // BulkWriteLogic goes here\n}\ncatch(MongoBulkWriteException ex)\n{\n erroredList = new List<string>(); // Declare this error List Globally\n foreach (var item in ex.WriteErrors)\n {\n erroredList .Add(CollectionAListData.ElementAt(item.Index).propertyName);\n }\n}\n", "text": "@Prateek_Gupta1, you can use BulkWrite Insert to catch the failed error records using try and catch.The example which I provided is the c# sample.Using the error list, you can remove or re-insert the data in another collection.", "username": "Sudhesh_Gnanasekaran" }, { "code": "", "text": "I’m confused , could u pls elaborate how can On-Demand Materialized Views can help in this case ?\nI forgot to mention that i’m not using any pipeline . I make two insertOne calls from server one to insert record in Collection A , grab the info of that record & do another insertOne call in Collection B.", "username": "Prateek_Gupta1" }, { "code": "", "text": "Which driver are you using? Looks like node driver but not sure.Your scenario is very simple. I suspect the issue is with your code specially sinceCollection B record had all the values that are required from collection A record.Looks like an object reference or something that is not updated correctly in your code.Now the question here is are there known scenarios of data loss in Mongodb atlasIf you do not handle error and exceptions, then yes you might lose some writes.Will applying writeConcern { w: majority } options would make sure that data loss never happens ?No it does not, it is safer but if the majority cannot write you will get an exception/error that you must handle in your code.", "username": "steevej" }, { "code": "", "text": "Hi @Prateek_Gupta, and welcome to the forums!I need to insert a record in collection A & then along with the _ids of that inserted record from collection A , i need to insert another record in collection BIf the use case requires that for every document A it needs to be in document B please consider to embed document A in document B. 
See also Embedded one-to-many relationships for more information.For different strategies on data modelling please see Building With Patterns: A SummaryFor use cases that require atomicity of reads and writes to multiple documents in multiple collections (A and B), MongoDB supports multi-document transactions.To add to what @steevej has mentioned above, if you are still facing this issue please provide:Regards,\nWan.", "username": "wan" }, { "code": "{ \n result: { n: 1 , ok: 1 , \"operationTime\": \"6963562057622880257\" , \"$clusterTime\" : { ... } } , \n connection : { ... }, \n \"ops\" : [ { \n \"_id\": \"60a38ac80317027d6a7f53e0\", name : \"name\" , active: true , createdOn : \"date_time\" , \n accounts : [ { number: \"1234\" , active: true , \"id\": \"60a38ac80317f5efd0027d69\" }] \n } ] ,\n \"insertedCount\": 1,\n \"insertedId\": \"60a38ac80317027d6a7f53e0\",\n \"n\": 1,\n \"ok\": 1,\n \"operationTime\": \"6963562057622880257\",\n \"$clusterTime\": {...}\n }\n", "text": "Hey @steevej, Thanks for your response. I wish the problem was with the code but that’s not the case here. Its definitely something wrong with mongo driver/db. I finally managed to replicate the scenario again where I get the successful response from DB that the document got inserted in collection A & after that its corresponding document got inserted in collection B in the following call but the document in record A is no where to be found.@wan\nI’m using nodejs to interact with mongodb.\nVersion:\nnode:14.16.0\ndriver - mongodb: 3.6.5\nDB - Mongodb Atlas.Payload :doc = { name : “name” , active: true , createdOn : “date_time”,\naccounts : [ { number: “1234” , active: true , “id”: “60a38ac80317f5efd0027d69” }]\n}Operation - collection.insertOne(doc, {});DB Response :The response here says insertedCount as 1 & that’s where the validation in code is put to verify if the document got inserted or not . Also this case is only happening when I’m trying to load more than 100K records using the above mentioned steps(all the steps are performed for each of the records that needs to be inserted into the DB) & that too very rarely.I’m now confused as to where to go from here. One thing I’m sure is that I wont be able to alter the schema at all.", "username": "Prateek_Gupta1" }, { "code": "", "text": "First, if you have already have your 100K documents at the beginning of the process, I would recommend that you usehttps://docs.mongodb.com/drivers/node/usage-examples/bulkWrite/I still think there is an issue with your code, especially since you seem to process and result correctly. Since you confirmed that you received insertedCount : 1, then it is inserted. May be it is inserted in the wrong collection as you mentioned:collection A gives successful response along with all the ids & then got inserted into Collection BMay be the variable collection starts to point to collection B under some circumstances. 
Wherever you print the insert result, I would also print the namespace of the collection to make sure you still insert at the right place.Since you let the system generate _id, it would be nice to see the _id of the document in collection B that has all the fields you wanted in collection A.", "username": "steevej" }, { "code": "", "text": "Current Op that are being performed - For each record → insert record in Collection A & if successful then insert the record in Collection B.First, if you have already have your 100K documents at the beginning of the process, I would recommend that you use bulk OPI cannot use this OP since it wont work for the overall use-case & data-format of the records that I’m receiving and I cant really change the data format that I’m receiving because its being passed to me via some other sources. (Out of My Scope) So only available option for now is to insert the records one by one as per the data-format & how it is being processed in our system & as per the complicated DS we have to maintain to store the data inside Mongodb.May be the variable collection starts to point to collection B under some circumstances. Wherever you print the insert result, I would also print the namespace of the collection to make sure you still insert at the right place.Actually I’m calling two different microservices for insertion from a controller, one microservice endpoint to insert into Collection A & then from its response (doc that got returned from the collection A microsvc after successful insertion operation otherwise error is returned which is handled in Controller itself) I’m creating a new document & calling another microservice to insert into collection B & one microsvc can only interact with only one collection.Since you let the system generate _id, it would be nice to see the _id of the document in collection B that has all the fields you wanted in collection A.Collection B record :{\n“_id”: “60a38ad3165a989c5e2e177f”,\n“active”: true,\n“A_Col_id”: “60a38ac80317027d6a7f53e0”,\n“email”: “[email protected]”,\naccounts : [ { id: “60a38ad3f4405a3ca6cbb18e” , number: “1234” , active: true ,\n“A_Col_Account_id”: “60a38ac80317f5efd0027d69” }]\n}My concern here is that whether is it possible that MongoDB didn’t store the data but returns a successful response, as per my observation yes because I have shared the payload & the success response which Mongodb responded that indicates that it has inserted the document but it was not actually inserted.", "username": "Prateek_Gupta1" }, { "code": "document_a = { ... }\nresult_a = service_a.insert( document_a )\nif result_a is valid \nthen\n document_b = { ... , A_Col_id : result_a._id , ... }\n result_b = service_b.insert( document_b )\n if result_b is valid\n then\n found_a = collection_a.find( result_a._id )\n if found_a is null\n then\n // this should never happen but it does\n endif\n endif\nendif\n\"majority\"j : true", "text": "This is puzzling, I admit.Which Atlas tier are you using?If I understand correctly your code looks like:Could it be that service_b delete in collection_a?Is there any delete anywhere?What is the status of the cluster when the issue happens? From\nthe only thing I can see is a rollback situation. 
May be you could tryFor clusters where members have journaling enabled, combining \"majority\" write concern with j : true can prevent rollback of write concern acknowledged data.", "username": "steevej" }, { "code": "", "text": "Will applying writeConcern { w: majority } options would make sure that data loss never happens ?You should absolutely be using writeConcern majority (which you can set in the connection string as shown in Atlas examples) or in the driver.Can you also confirm that you are not using secondary reads?By the way, if the two inserts must both happen and if it’s not appropriate to embed the data into a single record, have you considered using transactions to make sure either both documents are inserted or neither one is?In any case, if you are using majority writeConcern then successful writes will be there even if there’s a failure of the primary (and a failover to another node). It would be great to get to the bottom of what’s going on but we definitely need more details about exact versions of server, driver, which Atlas tier, and more details about how you are connecting to the cluster.Asya", "username": "Asya_Kamsky" } ]
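A sketch of the transactional approach Asya mentions, in PyMongo (the thread itself uses the Node driver): the collection names, document shapes and connection string are placeholders, and a replica set is assumed — which an Atlas cluster always is:

```python
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb+srv://<placeholder-atlas-uri>")
db = client["mydb"]  # illustrative database name

def insert_pair(session):
    # Both inserts commit together or not at all.
    a = db["collection_a"].insert_one({"name": "L"}, session=session)
    db["collection_b"].insert_one({"a_col_id": a.inserted_id}, session=session)

with client.start_session() as session:
    session.with_transaction(
        insert_pair,
        read_concern=ReadConcern("snapshot"),
        write_concern=WriteConcern("majority"),
    )
```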
MongoDB Atlas insertOne query data loss
2021-05-13T13:22:17.765Z
MongoDB Atlas insertOne query data loss
4,056
null
[ "crud" ]
[ { "code": "", "text": "I need to do this\ndb.statistics1.find().forEach( function (d) {\nd.Active= parseInt(d.Active);\ndb.statistics1.save(d);\n});\nbut the name of files is Tax Rate, yes, a blank space in the middle of the name\nI’ve a export json file. All fields in document are string\nI need to convert some fields into integer and float\nThnaks", "username": "Felipe_Fernandez" }, { "code": "tax ratedb.collection.find( { \"tax rate\": 8 } )", "text": "Hello.but the name of files is Tax Rate, yes, a blank space in the middle of the nameYou can post a sample document with the fields you are facing problems with.If you have a field with spaces like tax rate, refer it by surrounding with quotes (can use single or double quotes). For example,db.collection.find( { \"tax rate\": 8 } )Also, see this topic in the MongoDB Manual: Document - Field Names.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi, thanks\nThe find function works fine. This is not the problem\nThe problem is this script\ndb.statistics1.find().forEach( function (d) {\nd.“Tax Rate”= parseFloat(d.“Tax Rate”);\ndb.statistics1.save(d);\n});\ndoes not work. I need to access to value of “Tax Rate” element in order to change it from string to float/numeric\nThanks", "username": "Felipe_Fernandez" }, { "code": "db.test.find()\n .forEach(d => { \n d['tax rate'] = parseFloat(d['tax rate']); \n db.test.save(d); \n})", "text": "@Felipe_Fernandez, you can do this:", "username": "Prasad_Saya" }, { "code": "", "text": "Works!. Thanks a lot", "username": "Felipe_Fernandez" }, { "code": "$toDoubleupdate db.test.update({}, [ {$set:{ \"tax rate\" : {$toDouble: \"$tax rate\"}}} ], {multi:true})\n$toInt$toLong$convert", "text": "Note that you don’t need to modify the documents in the shell, you can use $toDouble (or any other conversion function) in update command to modify the field.Example:will do the same thing your script is doing without bringing documents to the client and without potential race conditions. You can use $toInt, $toLong or $convert if you want to specify more options.", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Convert string to number
2021-05-18T15:53:28.893Z
Convert string to number
17,780
null
[ "aggregation", "queries", "python" ]
[ { "code": "samples.timestamp1mydb1.mongodbbucketnocpu.aggregate(\n [\n {\n \"$match\": {\n \"samples.timestamp1\": {\"$gte\": datetime.strptime(\"2010-01-01 00:00:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\": datetime.strptime(\"2015-01-01 01:55:00\", \"%Y-%m-%d %H:%M:%S\")},\n\n }\n },\n{ \"$unwind\": \"$samples\" },\n\n {\n \"$group\": {\n\n \"_id\": {\"$dateToString\": {\"format\": \"%Y-%m-%d %H\", \"date\": \"$samples.timestamp1\"}},\n \"max_id13\": {\n \"$max\": \"$samples.id13\"\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"day\": \"$_id\",\n \"max_id13\": 1\n }\n },\n {\"$sort\": {\"hour\": -1}},\n { \"$limit\": 5}\n\n\n ]\n)\n{\n\t\"_id\" : ObjectId(\"607f185f2a477a621641cded\"),\n\t\"nsamples\" : 12,\n\t\"samples\" : [\n\t\t{\n\t\t\t\"id1\" : 3758,\n\t\t\t\"id6\" : 2,\n\t\t\t\"id7\" : -79.09,\n\t\t\t\"id8\" : 35.97,\n\t\t\t\"id9\" : 5.5,\n\t\t\t\"id10\" : 0,\n\t\t\t\"id11\" : -99999,\n\t\t\t\"id12\" : 0,\n\t\t\t\"id13\" : -9999,\n\t\t\t\"c14\" : \"U\",\n\t\t\t\"id15\" : 0,\n\t\t\t\"id16\" : 99,\n\t\t\t\"id17\" : 0,\n\t\t\t\"id18\" : -99,\n\t\t\t\"id19\" : -9999,\n\t\t\t\"id20\" : 33,\n\t\t\t\"id21\" : 0,\n\t\t\t\"id22\" : -99,\n\t\t\t\"id23\" : 0,\n\t\t\t\"timestamp1\" : ISODate(\"2010-01-01T00:00:00Z\"),\n\t\t\t\"timestamp2\" : ISODate(\"2009-12-31T19:05:00Z\")\n\t\t},\n\t\t{\n\t\t\t\"id1\" : 3758,\n\t\t\t\"id6\" : 2,\n .\n .\n .\n\n {\"$sort\": {\"samples.timestamp1\": -1}},Sort exceeded memory limit of 104857600 bytes", "text": "I am wondering if i can use sort before grouping in this type of query because i want the sort stage to use the index on samples.timestamp1My data contains about 96k documents that contain 12 subdocuments each.When i tried using {\"$sort\": {\"samples.timestamp1\": -1}}, before group stage my output was thisSort exceeded memory limit of 104857600 bytes\nIs ti possible to use sort before group?What do you think i should do to optimize my query?", "username": "harris" }, { "code": "sortgroupgroupcount", "text": "Sorting happens in memory so the sort stage in aggregation will try and slurp all your data into memory. Hence the error. Given that group generally substantially reduces the amount of data coming out of the pipeline you may get better mileage doing the sort after the group stage. Run a count after the group to see how many documents emerge. This will give you a sense of whether the sort will fit in memory.", "username": "Joe_Drumgoole" }, { "code": "samples.timestamp1samples.timestamp1", "text": "Thank you for you explanation!When i do match samples.timestamp1 from 2010 to 2015 using sort before group it says that exceed memory.when i do match samples.timestamp1 from 2010 to 2011 using sort before group it returns the right generated documents.So its okay using sort before group only for 1 year of documents…So do you think i should stick as it is now? Thanks in advanceWith repsect\nHarris Gekas", "username": "harris" }, { "code": "sortmatchprojectallowDiskUsevar results = db.stocks.aggregate(\n [\n { $project : { cusip: 1, date: 1, price: 1, _id: 0 } },\n { $sort : { cusip : 1, date: 1 } }\n ],\n {\n allowDiskUse: true\n }\n )\n", "text": "sort requires that the documents fit in memory unless your specify allowDiskUse. See the example below. The sorting is limited by memory so my guess is data from 2010 to 2011 fits in memory, data from 2011 to 2015 doesn’t. This makes sense as their are more documents in the second dataset. Try and reduce the data set with match or project if you are having these problems. 
Failing that turn on allowDiskUse which will be slower.", "username": "Joe_Drumgoole" }, { "code": "mydb1.mongodbbucketnocpu.aggregate(\n [\n {\n \"$match\": {\n \"samples.timestamp1\": {\"$gte\": datetime.strptime(\"2010-01-01 00:00:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\": datetime.strptime(\"2015-01-01 01:55:00\", \"%Y-%m-%d %H:%M:%S\")},\n\n }\n },\n{ \"$unwind\": \"$samples\" },\n\n {\n \"$group\": {\n\n \"_id\": {\"$dateToString\": {\"format\": \"%Y-%m-%d %H\", \"date\": \"$samples.timestamp1\"}},\n \"max_id13\": {\n \"$max\": \"$samples.id13\"\n }\n }\n },\n {\"$sort\": {\"hour\": -1}},\n { \"$limit\": 5}\n {\n \"$project\": {\n \"_id\": 0,\n \"day\": \"$_id\",\n \"max_id13\": 1\n }\n },\n\n\n\n ]\n)\n", "text": "Yes i tried that but its way slower than just sorting after the group.So i will leave the query as it is here", "username": "harris" }, { "code": "", "text": "Can you post a sample document?", "username": "Joe_Drumgoole" }, { "code": "$matchexplain", "text": "Why do you want to sort before grouping? Your $match stage already ensures that index is being used (if there is one). You can use explain to see the query plan for the aggregation. By the way, this topic is probably a better fit for “Working with Data” as it’s not specific to any Driver or ODM…Asya", "username": "Asya_Kamsky" } ]
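To check whether the $match stage really uses the index on samples.timestamp1, the aggregation can be explained through the database command. A PyMongo sketch along the lines of the thread's pipeline — note the original sorted on an "hour" field that does not exist after $group, so this sketch sorts on the group key instead, which is an assumption about the intent:

```python
from datetime import datetime
from pymongo import MongoClient

db = MongoClient()["mydb1"]

pipeline = [
    {"$match": {"samples.timestamp1": {"$gte": datetime(2010, 1, 1),
                                       "$lte": datetime(2015, 1, 1)}}},
    {"$unwind": "$samples"},
    {"$group": {"_id": {"$dateToString": {"format": "%Y-%m-%d %H",
                                          "date": "$samples.timestamp1"}},
                "max_id13": {"$max": "$samples.id13"}}},
    {"$sort": {"_id": -1}},  # the group key is the hour string
    {"$limit": 5},
]

# Ask the server for the plan instead of running the pipeline; look for an
# IXSCAN on the samples.timestamp1 index inside the first $cursor stage.
plan = db.command("aggregate", "mongodbbucketnocpu",
                  pipeline=pipeline, explain=True)
print(plan)
```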
Sort before group stage
2021-05-13T19:23:18.318Z
Sort before group stage
7,984
null
[ "aggregation" ]
[ { "code": "", "text": "Guys i am finding hard to understand whats the difference between those two.Please someone explain.Thanks in advance", "username": "harris" }, { "code": "$elemMatch$elemMatch$elemMatch$elemMatch$elemMatch<array>$elemMatch", "text": "There are 2 ways to use $elemMatch operator,Below reference documents clears every doubts of usage with example.1) $elemMatch (query)The $elemMatch operator matches documents that contain an array field with at least one element that matches all the specified query criteria.2) $elemMatch (projection)The $elemMatch operator limits the contents of an <array> field from the query results to contain only the first element matching the $elemMatch condition.", "username": "turivishal" }, { "code": "", "text": "I think we just discussed this topic here: Find or aggregate in this type of query - #6 by harris", "username": "Asya_Kamsky" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Difference between elemMatch projection and elemMatch query
2021-05-19T00:15:03.542Z
Difference between elemMatch projection and elemMatch query
3,283
null
[ "dot-net", "change-streams" ]
[ { "code": "var pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<T>>()\n .Match(change => change.OperationType == ChangeStreamOperationType.Insert ||\n change.OperationType == ChangeStreamOperationType.Update ||\n change.OperationType == ChangeStreamOperationType.Replace)\n .AppendStage<ChangeStreamDocument<T>, ChangeStreamDocument<T>, ChangeStreamOutputWrapper<T>>(\"{ $project: { '_id': 1, 'fullDocument': 1, 'ns': 1, 'documentKey': 1 }}\");\n\nvar options = new ChangeStreamOptions\n{\n FullDocument = ChangeStreamFullDocumentOption.UpdateLookup\n};\n\nusing (var cursor = await coll.WatchAsync(pipeline, options, cancellationToken))\n{\n await cursor.ForEachAsync(async change =>\n {\n // await some handler routine\n }, cancellationToken);\n}\n", "text": "Hi,I am utilizing the MongoDb change stream (C# MongoDB.Driver v2.12.0) to track changes on a single collection.\nIn my experimental use case the collection stores information about execution of threads.\nA thread has two properties:During its execution, a thread can spawn children threads and be blocked until all of the children are not completed. Whenever a children thread completes its execution, it updates the database by decrementing the ‘BlockedCount’ of the parent. Once the ‘BlockedCount’ drops to 0, the parent thread should continue its execution.What I have noticed is that the change events can be different even if the update operations are exactly the same.\nWhat I mean by this is, if I have 1 parent thread and 3 children threads completing their execution, sometimes I would receive:Is this considered a normal behavior, or not?\nAnd if it is, is there some kind of configuration that would prevent this?Here is the code for subscribing to the change stream@yo_adrienne @James_Kovacs", "username": "Ivan_Povazan" }, { "code": "", "text": "This seems to me a racing condition. At the end of the day events are just that and do not guarantee duration they are picked up or sequence they are processed. If there is a way to use semaphores to detect sequence of thread ends and then atomic update of db then it would solve the problem. Sorry, a bit rusty to help with the code.", "username": "MaxOfLondon" }, { "code": "fullDocumentfullDocument", "text": "You are right @MaxOfLondon. There is a race condition in the second scenario, since the event carrying BlockedCount == 0 information should be handled only once.\nThe documentation states that:The fullDocument document represents the most current majority-committed version of the updated document. The fullDocument document may vary from the document at the time of the update operation depending on the number of interleaving majority-committed operations that occur between the update operation and the document lookup.Which basically means that I am responsible for taking care of race conditions ", "username": "Ivan_Povazan" }, { "code": "BlockedCountfullDocument.BlockedCountupdatedFields$projectfullDocument$projectBlockedCountfullDocument", "text": "I’m not sure there is a race condition. The changeStream event is the same when the update is the same. What may differ is the full document lookup - and that depends on how many changes have been applied to it since the update that triggered the change event. We advise to make sure and set any fields you must have at change notification event as either immutable (document key) or be part of the change/update. 
In other words, the only way you can know what the current blocked count was at the time of update is if you record it in a field that’s updated at the same time - which in some scenarios may be a more complex update than one you are performing right now, along with all the extra costs that come with it.Now, if you are already setting BlockedCount during the update (as it seems like you might be) then it will reflect the correct value at the time of the update and if you are seeing something else then you may be reading the wrong field. Rather than fullDocument.BlockedCount you should be reading updatedFields but I see you are adding a $project stage which is removing everything except fullDocument . Why? Remove the $project and see how the document you get will have correct and timely BlockedCount (but not in the fullDocument field).Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
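Asya's suggestion — reading the value that was actually written rather than the looked-up document — looks roughly like this in PyMongo (the thread's code is C#; the change-event field names below are the standard ones, but the collection and counter names are illustrative):

```python
from pymongo import MongoClient

coll = MongoClient()["mydb"]["threads"]  # illustrative names

# full_document="updateLookup" returns the post-lookup document, but the value
# that was actually written lives in updateDescription.updatedFields.
with coll.watch(full_document="updateLookup") as stream:
    for change in stream:
        if change["operationType"] == "update":
            updated = change["updateDescription"]["updatedFields"]
            at_update = updated.get("BlockedCount")  # value at update time
            at_lookup = (change.get("fullDocument") or {}).get("BlockedCount")
            print(at_update, at_lookup)
```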
Can Change Events be considered unique?
2021-05-11T15:43:19.994Z
Can Change Events be considered unique?
2,446
null
[ "upgrading" ]
[ { "code": "", "text": "Hello,\nI installed a Mongodb 4.2.7 database a while ago on a Debian 10 server (buster).\nI would like to run security updates on this debian server with an “apt update” and an “apt upgrade”.\nAre there any special precautions to be taken with mongodb?Thanks in advance", "username": "GuiVERO_VeroGui" }, { "code": "", "text": "Hi @GuiVERO_VeroGui and welcome in the MongoDB Community !MongoDB 4.2 supports Debian 10. See Production Notes — MongoDB ManualSo you should be fine with the latest version of Debian 10. The latest 4.2.X version is 4.2.14 though. So while you are at it, you could also update MongoDB and eventually upgrade to 4.4.X.If you have to reboot your server after your updates, I would do a graceful shutdown of MongoDB, just to be on the safe side.I’m currently running the latest 4.4.6 on Debian 11 and I don’t have any issues.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thank you for your reply.\nI don’t want to change the version of mongodb but just do the server security updates.\nCan these security updates cause the mongodb database to malfunction?", "username": "GuiVERO_VeroGui" }, { "code": "libcurl4 openssl liblzma5\nrs.stepDown()", "text": "MongoDB has very little system dependencies as far as I know.Source: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-debian-tarball/#install-mongodb-community-editionAnd I guess these 3 have a few subdependencies.\nIf you don’t anticipate any incredible update on these, you should be safe I guess.In doubt, run a full backup before you do anything. And you will perform your update/upgrade in a rolling manner on your replica set I guess so if you have a problem while doing this operation on the first secondary, it shouldn’t be too hard to fix the problem and your production environment won’t be impacted as long as the other secondary and primary are still up and running.The basic update method with zero downtime is:Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "By the way, you should have a look to MongoDB Atlas because these kind of troubles don’t exist in Atlas ! It’s fully automated for you.", "username": "MaBeuLux88" } ]
Mongodb on debian10
2021-05-20T12:06:32.974Z
Mongodb on debian10
2,865
null
[ "golang" ]
[ { "code": "", "text": "Hello, I with my team use old mongo driver GitHub - globalsign/mgo: The MongoDB driver for Go and we want to change them to official driver go.mongodb.org/mongo-driver.This drivers (ofcourse) have a different types, and we can’t use both of them. Maybe exists fork in github with adapter from old to new driver.We cannot rewrite all requests, because it will take a month or more to work. (We really have large number of requests)Thank you for help!", "username": "111506" }, { "code": "mgogo-mongo-driver", "text": "The company I am working for now faced the same issue.What we did was have both the mongo drivers installed (mgo and go-mongo-driver). All the newer APIs were re-written to the official mongo driver by the import alias method. Then we gradually re-wrote the existing APIs to the official driver.The below post was of great help during the migration process:Migrating from community drivers to the official MongoDB Go DriverUnfortunately, there is no straightforward way built-in way of doing this (at least none that I am aware of).", "username": "Harshavardhan_Kumare" }, { "code": "connOpt := options.Client().SetRegistry(mgocompat.Registry)", "text": "Thank you for your help it was usefull.\nI asked this question in mongo-driver jira board and in reply they send to me link to package. This package handle bson from mgo: mgocompat package - go.mongodb.org/mongo-driver/bson/mgocompat - Go Packages\nI just set this property to connect and problems with unmushal to mgo bson was resolved.\nSimple example usage:\nconnOpt := options.Client().SetRegistry(mgocompat.Registry)", "username": "111506" }, { "code": "", "text": "Thanks a lot for the info.This is a piece of much-needed information.", "username": "Harshavardhan_Kumare" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What simple way for change used driver
2021-05-14T08:02:21.649Z
What simple way for change used driver
2,907
https://www.mongodb.com/…7_2_1024x409.png
[ "queries", "python" ]
[ { "code": "", "text": "Hi All,I have a question regarding allowDiskUse=True, I am using pymongo and I wrote a large query and due to it\nsize I have to use allowDiskUse=True in order to make the query run, but when I’m looping through the received cursor object it seems as it empty.\nCan someone please show how to loop through received data ?Thanks\npymongo11149×460 19.4 KB", "username": "Andrey_Krimer" }, { "code": "", "text": "Hi @Andrey_Krimer welcome to the community!The code you posted for looping through the result set looks ok, so it might be the query itself. For some simple sanity check: is the query actually returns anything (e.g. try it out in the mongo shell), does the collection has any data in it, and please check if you’re connecting to the correct server Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi KevinThank yo for your answer, yes I have run via Robo 3T as well and yes it is returning large amount of entries.", "username": "Andrey_Krimer" }, { "code": "find()", "text": "Hi @Andrey_KrimerI suggest trying out with a simpler query (e.g. a small find()) and check if it returns the expected result.Since there is nothing wrong with your looping code, and you are certain you are connected to the correct server and the correct database, the only place to look is the query itself and how the method was called.If you can post a simplified code, it will be helpful to spot issues with it.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi @kevinadiYou were correct, it was my query, I have resolved it and it work now,\nthank you for your answer and support Thanks\nAndrey", "username": "Andrey_Krimer" } ]
Pymongo allowDiskUse=True and Empty Cursor
2021-05-12T12:53:27.379Z
Pymongo allowDiskUse=True and Empty Cursor
3,137
null
[]
[ { "code": "", "text": "Hello MongoDB community we are delighted to have just been selected in the Atlas COVID-19 credit program of MongoDB .We need to urgently set up MongoDB Atlas for Electronic Patient Records for patients struggling with COVID19, and connect it to our app and to Microsoft Teams.So, how do we get started?#covid-19", "username": "RAI_tech" }, { "code": "", "text": "Hi @RAI_tech and welcome in the MongoDB Community !Great news that you have been selected in this program !Here is a guide to get started with Atlas: https://docs.atlas.mongodb.com/getting-started/I also have a blog post with a Youtube video to present how to create a free M0 cluster. But it’s exactly the same process with a bigger tier.https://www.mongodb.com/quickstart/free-atlas-cluster/Feel free to ask more questions if you need more help to get started, I will keep an eye on this topic.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
MongoDB COVID-19 Atlas Credit Program
2021-05-20T11:38:20.494Z
MongoDB COVID-19 Atlas Credit Program
1,802
null
[ "data-modeling" ]
[ { "code": "{\n\"name\": \"guild1\",\n\"id\": abcd,\n\"totalExp\": 10101001010101,\n\"tag\": \"GUILD_ONE\"\n},\n{\n\"name\": \"guild2\",\n\"id\": abce,\n\"totalExp\": 10101001010101,\n\"tag\": \"GUILD_TWO\"\n}\n{\"member_id\": 1111,\n\"current_alliance\": \"guild1\",\n\"alliances\": {\"guild1\": {\"11-05-2021\": 100000,\n \"12-05-2021\": 100000,\n \"13-05-2021\": 100000,\n \"14-05-2021\": 100000,\n \"15-05-2021\": 100000,}\n },\n {\"guild2\": {\"06-05-2021\": 100000,\n \"07-05-2021\": 100000,\n \"08-05-2021\": 100000,\n \"09-05-2021\": 100000,\n \"10-05-2021\": 100000}\n }\n},\n{\"member_id\": 2222,\n\"current_alliance\": \"guild1\",\n\"alliances\": {\"guild1\": {\"11-05-2021\": 100000,\n \"12-05-2021\": 100000,\n \"13-05-2021\": 100000,\n \"14-05-2021\": 100000,\n \"15-05-2021\": 100000,}\n }\n}\ncursor = db[\"member\"].find({'current_alliance': \"guild1\"})\nresult = await db[\"member\"].bulk_write([ UpdateOne( {\"member_id\":1111},{'$set': {'alliances.guild1.15-05-2021': '100000'}}),\n UpdateOne({\"member_id\":2222},{'$set': {'alliances.guild1.15-05-2021': '100000'}})\n #...for uo to 100 members...\n ordered=False)\n", "text": "Here is my current mongoDB database idea:alliance info collectionmember collectionInfo\n-I am trying to store an unlimited number of days experience for EACH member.\n-I thought they could each get their own document otherwise the alliance info documents would become massive over time.\n-Members can join and leave different alliances, but I am trying to keep all their experience history for all their alliances they have been in.\n-Language is python.\n-Each alliance can have a total of 100 current members.\n-I want to store member’s experience for an alliance even if they leave it.\n-For each document in the alliance info collection their will be up to 100 documents in the member collection.Reading\nQueries will be per alliance, so if I was querying “guild1”, I would need to select 100 documents from the member collection.\nIf i had an index for “current_alliance” I thought I could use this query to select all members from one:Updating\nEach day I will add the member’s daily experience for the current alliance they are in, so an example for updating “guild1”:Thoughts\n-Both the queries above seem to work however i dont know how optimal bulk_write() or find() are?\n-Is doing a bulk_write() for 100 documents in one collection OK?\n-And also when I need to retrieve data I have to select all 100 member’s documents from the member collection before i can do anything (for one alliance).\n-Does this model look OK?\nThanks very much!", "username": "Co0kei" }, { "code": "", "text": "Hi Co0kei, welcome to the community.This seems an interesting project and it shows you are already considering a lot of things like storage, performance, usability and ease of development by asking directed questions.Since this appears to be going in direction of a big data project I feel you need to step back from the implementation and take few moments to clearly define your use cases, requirements and assumptions first.A few questions I would ask myself:The more you capture even the informal way the better and you will find it easier to take decisions and design the system. (Tip: use Freeplane to capture your ideas, use draw.io to sketch your initial designs). 
Yes, it takes time but if you jump right into coding you will make mistakes that will be very costly to rectify, jeopardise or even forfeit your project.Best of luck and if you do define your requirements in clear way I would be even interested to discuss decisions and help design.", "username": "MaxOfLondon" }, { "code": "", "text": "Hey MaxOfLondon, thank you very much for your reply! I was very happy to see a reply and such a detailed one as well!I have tried using draw.io but haven’t figured out how to “draw” a database design. I ended up thinking about the questions you suggested.I have also been thinking a lot about the data model of the database and wonder whether the model I suggested would lead to good performance or if it is an inefficient design.I hope the questions below help to illustrate the use of data and any further replies would be greatly appreciated! Thank you for your time.Who is going to use the system?\n-The system is used by anyone to view alliances experience data.\n-The important part of the design though is to have EVERY members EVERY days experience value recorded (not just one monthly total of all days). So that a member can look back and see their exact experience from a particular day in the past.What data will system need to produce?\n-The system will only need to return experience values, and for one particular alliance at a time. So each READ operation from the database will be collecting members from the same alliance.Examples of what people could be requesting:\n-A Daily leaderboard per alliance - Needs to retrieve today’s data from all 100 member’s documents.\n-A weekly leaderboard per alliance - Needs to retrieve last 7 day’s data from all 100 member’s documents.\n-A monthly leaderboard per alliance - needs to retrieve last 30 days data from all 100 member’s documents.\n-Past month leaderboards per alliance - E.g. say for January select all values that have keys “XX-01-2021” from all 100 member’s documents.\n-A yearly leaderboard per alliance - E.g. for 2021 select all values that have keys “XX-XX-2021” from all 100 member’s documents\n-And past year leaderboards per alliance - So for 2021, 2022, 2023…\n-An individual member’s daily experience history (as long back as it goes BUT only for the alliance that the member is currently in) that shows their daily experience AND position for EVERY day they have data. For example say today they got 100,000 xp and placed number 1 out of 100 in their alliance on one day. Then yesterday they got 10,000 xp and placed number 80 out of 100.What is the most likely operation to be and most data intense?\nMost popular queries I predict:\n-The individual member experience history. I think this would be used the most, however in most cases people would just look at THIS month’s experience and maybe last months. 
But less commonly people will wish to look through past months and go back many months.\n-Maybe database design STORES ALL MEMBERS (from one alliance) last 30 or 60 days (two months) daily XP together in one additional document or the alliance info document to reduce reads/writes?.\n-I think the individual member history is the most intensive as it not only shows their daily experience but also their daily POSITION (in their alliance) ranked by experience of other members.\n-Then I think the daily experience of each member (daily leaderboard per alliance), weekly,- monthly and finally yearly.List all operations that are envisaged to be needed\n-Inserting a new alliance info document in one collection and then the corresponding 100 members into the member collection, when a new alliance is registered (starts getting experience tracked).\n-Updating all member’s documents in the member collection with their daily experience.\n-When a person requests an alliance’s data from the database, e.g. a monthly leaderboard, the member’s experience must be within say 10 minutes of their actual experience (say there is a “last_updated” value stored in the database). If not less than 10 mins I request the alliance’s members up to date data from a public API (which only shows the past 7 days daily experience but for all members though!) and update all 100 member’s daily experience values for that alliance.\n-When a member changes alliance, the “current_alliance” value will change to their new alliance’s name, and in the member’s “alliances” field (in their document) the new alliance name will be added which will then store all the experience the member gets whilst in this alliance.\n-When a member LEAVES an alliance their experience from that alliance is still stored however is it NOT used in ANY queries whatsoever. It is only used if that member were to re-join that alliance again. Then it would be used in daily, monthly leaderboards etc for that alliance.What are my assumptions?\n-Idea: some check to stop updating an alliance’s history if all the members have 0 experience for say the last month (the data for this alliance no longer saved as it is inactive).Do your approx db sizing\n-I hope to be able to implement a good data model so that the database can continue to grow in size without having performance issues in the future that would require drastic data modelling redesign.\n-For database sizing: I hope to be able to store data for thousands of alliances which will make hundreds of thousands of member documents in the member collection (database would start small, but hopefully I can quickly increase number of alliances being tracked!)\nThink of potential bottlenecks\n-Current bottlenecks I see are updating the 100 member’s separate documents for an alliance (as im not sure how intensive a bulk_write() is. IF no data is requested to be viewed for a certain alliance throughout a day, then all the members in this alliance will have their daily experience for today updated just once at the end of the day.\n-But if there is an alliance that is being requested from lots, the alliance must update the daily experience of all its members if it has not been updated in the past 10 mins (to keep experience up to date as members obtain it, and not be hours incorrect). 
Example: someone requests a daily leaderboard for an alliance but it was “last_updated” over 10 mins ago, so first the database is updated (all the member’s todays exp values are retrieved (from an API which provides all the members from one alliance) and written to database, then the data is read from the database and used to provide the daily leaderboard. Then if another person requests a monthly leaderboard for the SAME ALLIANCE just 1 minute later the data is just read again from the database without having any WRITE operations to be executed.", "username": "Co0kei" }, { "code": "", "text": "Hi @Co0kei, I was unavailable and only now managed to read your response. Let me digest and think about it a bit then we can discuss what solution might be appropriate in terms of db design. One observation though, storing all data for always is not really practicable and inevitably will lead to performance issues and might not even be possible unless you are financed adequately Other though: in line with GDPR user can request data to be removed", "username": "MaxOfLondon" }, { "code": "", "text": "Hi @Co0kei, nice progress, the model 2 seems better suited for reads and data transfer.\nI was also thinking about tracking history of alliance membership, guild name changes, etc and got this initial draft - work in progress.image867×577 29.1 KBAs it is late today I will have another go tomorrow - this might change drastically - but welcome your thoughts.\nTake care.", "username": "MaxOfLondon" }, { "code": "", "text": "Hi MaxOfLondon, thank you very much for your interest and even designing a draft!\nSorry if I didn’t explain correctly but an alliance or guild is the same thing (just a group of 100 members).\nIt’s an interesting draft, but I think there will have to be writes to many documents (when updating each member’s daily experience)?What would you think of using model 2 which I wrote above where each alliance/guild has a document per month that has all the members in it. I think this would be beneficial as when reading the data, less documents would be read, as only 1 for 1 month or 2 for 2 months (instead of 100 for all the members). I am less inclined to model 1 now as I think reading a years worth of data could be unnecessary for the majority of queries.Thank you for showing interest in this project! Take care!", "username": "Co0kei" }, { "code": "", "text": "Hi Co0kei, Thank you for the clarification, I think I now understand what you are aiming at. Yes, the second model seems reasonable in that case. I think the most common query will be individual wanting to know ranking within their guild so sorting result when retrieving could achieve that but I’d suggest that for longer time intervals of time (last: 7 day, 30 day, 365 day) you calculate it once a day using crontab and store in separate collection rather than calculate it from all the data on each read.\nI wouldn’t worry about updating multiple collections or several updates at same time too much, just make model logical enough to get information you want read with least cost of query and data transfer.Best,\nMax", "username": "MaxOfLondon" } ]
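The month-bucket layout this thread calls "model 2" can be sketched with PyMongo. This is only an illustration, not the author's final design: the collection name (`alliance_months`) and field names (`alliance`, `month`, `members`, `last_updated`) are assumptions, since the thread does not pin them down.

```python
# Sketch of a per-alliance, per-month bucket document, assuming hypothetical
# collection and field names. One upsert per alliance per day replaces the
# 100-document bulk_write, and a monthly leaderboard becomes a single read.
from datetime import datetime, timezone
from pymongo import MongoClient

col = MongoClient("mongodb://localhost:27017")["game"]["alliance_months"]
col.create_index([("alliance", 1), ("month", 1)], unique=True)

def record_daily_xp(alliance, day, xp_by_member):
    month = day.strftime("%Y-%m")
    day_key = day.strftime("%d")
    # Dynamic member-id keys keep this example short; an array of
    # {memberId, xp} subdocuments would index more cleanly.
    updates = {f"members.{m}.{day_key}": xp for m, xp in xp_by_member.items()}
    col.update_one(
        {"alliance": alliance, "month": month},
        {"$set": updates, "$currentDate": {"last_updated": True}},
        upsert=True,
    )

record_daily_xp("guild1", datetime(2021, 5, 15, tzinfo=timezone.utc),
                {"1111": 100000, "2222": 95000})

# Monthly leaderboard: one document fetch instead of 100.
month_doc = col.find_one({"alliance": "guild1", "month": "2021-05"})
```

Weekly or yearly views then touch at most a handful of these documents per alliance, which fits the once-a-day precomputation suggested at the end of the thread.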
Designing a database structure
2021-05-15T12:41:52.990Z
Designing a database structure
2,544
null
[ "containers", "ops-manager", "kubernetes-operator" ]
[ { "code": " ---\n apiVersion: v1\n kind: Secret\n metadata:\n name: mongodb-conn-info\n namespace: mongodb-test\n type: Opaque\n stringData:\n password: \"MyPassword!\"\n url: \"mongodb://admin:MyPassword!@mongodb\"\n ---\n apiVersion: v1\n kind: Secret\n metadata:\n name: mongodb-ops-mgr-key\n namespace: mongodb-test\n type: Opaque\n stringData:\n user: \"YWVWNBJB\"\n publicApiKey: \"ed93e78d-c6ab-4f8f-b08b-4b8347652710\"\n ---\n kind: ConfigMap\n apiVersion: v1\n metadata:\n name: mongodb-ops-mgr-project\n namespace: mongodb-test\n data:\n baseUrl: \"http://ops-manager-svc.mongodb.svc.cluster.local:8080\"\n projectId: 5fd41ac23879d749eee766f3\n orgId: 5fd2f83804191904d6e1d1c1\n ---\n apiVersion: mongodb.com/v1\n kind: MongoDB\n metadata:\n name: mongodb\n namespace: mongodb-test\n spec:\n shardCount: 2\n mongodsPerShardCount: 3\n mongosCount: 2\n configServerCount: 3\n version: 4.2.3\n project: mongodb-ops-mgr-project\n credentials: mongodb-ops-mgr-key\n type: ShardedCluster\n persistent: true\n podSpec:\n persistence:\n single:\n storage: 30Gi\n storageClass: px-db-xfs\n ---\n apiVersion: mongodb.com/v1\n kind: MongoDBUser\n metadata:\n name: mongodb-admin-user\n namespace: mongodb-test\n spec:\n passwordSecretKeyRef:\n name: mongodb-conn-info\n key: password\n username: \"admin\"\n db: \"admin\"\n mongodbResourceRef:\n name: mongodb\n roles:\n - db: \"admin\"\n name: \"clusterAdmin\"\n - db: \"admin\"\n name: \"userAdminAnyDatabase\"\n - db: \"admin\"\n name: \"readWrite\"\n - db: \"admin\"\n name: \"userAdminAnyDatabase\"\n", "text": "I’ve just installed the Kubernetes Enterprise Operator (latest) and Ops Manager (4.4.4) on my kubernetes cluster following the guide https://www.mongodb.com/blog/post/running-mongodb-ops-manager-in-kubernetes.I’ve created my organization, project, API key. I have a single yaml file with the configmaps/secret/resource all defined. When I apply it nothing happens. The mongodb never shows an updated state and the description shows no events.Below is my yaml.", "username": "Jean-Philippe_Steinm" }, { "code": "", "text": "I had same issue… this blof is out dated ;/", "username": "N_A_N_A5" }, { "code": "", "text": "I did this all last week. Everything worked for me. GitHub - alberttwong/mongodb-kubernetes-enterprise-operator-quickstart. I’d also add that since you need to have a EA subscription to use the k8s operator, I’d submit a ticket and get a SLA on your support ticket.", "username": "Albert_Wong" }, { "code": "", "text": "I did the same. with values representing our resources , and it worked fine", "username": "frank_pinto1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Creating MongoDB resource on K8s doesn't work
2020-12-12T01:59:50.851Z
Creating MongoDB resource on K8s doesn't work
3,004
https://www.mongodb.com/…df660e1cc4c8.png
[ "connecting" ]
[ { "code": "", "text": "i have installed NosqlBooster and trying to connect to altas rarely its get connectedI have whitelisted IP addresses as well as created different database users ., Tried VPN , tried firewall and antivirus enable and disable but not worked for me.Tried on compass but same errorEven tried all stackoverflow answersPlease help me to resolve this issue.Screenshot_151362×767 59.1 KB", "username": "kunal_gharate" }, { "code": "", "text": "Hi @kunal_gharate,Would like to know couple of details about your system and the way you’re connecting. Would like to know following things:Your OS\nNoSQLBooster version\nDo you have any SSH details left in the SSH tab of NoSQLBooster?\nDo you have any data / collection yet in the database you’re trying to access?", "username": "viraj_thakrar" }, { "code": "mongodb+srv://readonly:[email protected]/testreadonlyreadonly", "text": "I just downloaded it and connected to both a localhost MongoDB and an Atlas Cluster with NoSQLBooseter. I run a public read-only cluster at mongodb+srv://readonly:[email protected]/test can you try connecting to that cluster to see if it works? (yes. the username is readonly and the password is readonly, you can only query this cluster). I just verified it works with NoSQLBooster (Version 6.2.13 (6.2.13)) for me.", "username": "Joe_Drumgoole" }, { "code": "", "text": "OS = WIN 10/64bit\nNOSQL BOOSTER : 6.0.5there is option for ssh in nosql\nYes in my altas have 100+ records", "username": "kunal_gharate" }, { "code": "", "text": "same nosqlbooster and db working for my friend", "username": "kunal_gharate" }, { "code": "", "text": "mongodb+srv://readonly:[email protected]/testScreenshot_11704×624 47.8 KB", "username": "kunal_gharate" }, { "code": "", "text": "Screenshot 2021-04-28 at 16.32.011019×529 16.3 KBIs this what your test URL looks like?Are you by any chance using a proxy server that blocks port 27017?", "username": "Joe_Drumgoole" }, { "code": "", "text": "Try switching DNS provider by using Google’s 8.8.8.8 and 8.8.4.4. It looks like your current provider does not support the new seedlist feature of modern name servers. If that is the case I would also be scared of un-installed security patches.", "username": "steevej" }, { "code": "mongodb://readonly:[email protected]:27017,demodata-shard-00-01.rgl39.mongodb.net:27017,demodata-shard-00-02.rgl39.mongodb.net:27017/ssl=truessl=true", "text": "Try this URL:mongodb://readonly:[email protected]:27017,demodata-shard-00-01.rgl39.mongodb.net:27017,demodata-shard-00-02.rgl39.mongodb.net:27017/ssl=trueThis removes the SRV requirement. Note the addition of ssl=true.", "username": "Joe_Drumgoole" }, { "code": "", "text": "Yes, its working on different IP address", "username": "kunal_gharate" }, { "code": "", "text": "Does that mean that the long form URL worked? i.e. it was your name server not parsing the mongodb+srv:// URL format?", "username": "Joe_Drumgoole" }, { "code": "", "text": "The issue with operating system I have reset my network setting and after that it’s working normally .", "username": "kunal_gharate" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connection refused when I used NoSQLBooster and Compass
2021-04-27T08:48:39.123Z
Connection refused when I used NoSQLBooster and Compass
9,646
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Hi,Following the migration guide I’ve set my app to development mode and ran the client.\nThe sync seems to be working. But Scheme on the dashboard shows nothing and presents a label \" Allow access to your cluster data. Realm prevents access to your data by default. \"", "username": "donut" }, { "code": "", "text": "Hi @donut, could you please share a screen capture of the page in question?", "username": "Andrew_Morgan" }, { "code": "", "text": "Screen Shot 2021-05-18 at 0.05.342852×1380 222 KB", "username": "donut" }, { "code": "", "text": "If sync were enabled and in development turned on then I’d expect to see this banner:\nimage1090×72 9.74 KBCould you please show the “Sync” page?Also, could you please include a link to the guide that you’re following?", "username": "Andrew_Morgan" }, { "code": "", "text": "Hi,I’ve turned development back on. See image.The instructions I’m following: Realm Legacy Migration Guide - Realm Legacy Migration Guide\nScreen Shot 2021-05-18 at 21.48.322612×1518 258 KB\n", "username": "donut" }, { "code": "", "text": "To start with a clean slate (and figure out what’s gone wrong), I’d suggest trying this:Let us know what you see.Another thing to check is whether the data that’s being synced is also showing up in your Atlas collections.", "username": "Andrew_Morgan" }, { "code": "", "text": "OK so there seems to be an error:Screen Shot 2021-05-19 at 8.45.292114×1050 149 KB", "username": "donut" }, { "code": "Object", "text": "This error is showing that the partition of the object in the mobile app doesn’t match the partition key of the partition you’re trying to write to.Your mobile app’s Object classes should not include the partition key – the SDK will automatically set it based on the partition that you opened your realm with.", "username": "Andrew_Morgan" }, { "code": "", "text": "Hmm, according to the doc, I should add the partition key to the object class:Screen Shot 2021-05-19 at 11.19.451028×352 63.2 KBIs that incorrect?", "username": "donut" }, { "code": "", "text": "Hi @donut, that part of the documentation is out of date. I’ve flagged that to the owner that it needs to be updated.", "username": "Andrew_Morgan" }, { "code": "", "text": "OK… what about the _id part? should be there?", "username": "donut" }, { "code": "", "text": "Looks like it got resolved now (with _id).", "username": "donut" }, { "code": "Object", "text": "Going back to my earlier comments. It’s not an error to include the partition key in your Object class but if you include it then you need to ensure that you set it to the same value as the partition you opened the realm with.", "username": "Andrew_Morgan" }, { "code": "", "text": "OK, but according to the error you can see it mentions “_partitionKey” twice, so I’m not sure where the mismatch is.", "username": "donut" }, { "code": "_partitionKey{ partition: \"_partitionKey\" }{ partition: \"\" }", "text": "It looks like you opened the partition with the partition value set to _partitionKey (i.e. 
objects that contain { partition: \"_partitionKey\" }) but you have an object containing { partition: \"\" } – or something similar.", "username": "Andrew_Morgan" }, { "code": "", "text": "OK, so:", "username": "donut" }, { "code": "", "text": "It’s now optional to include the partition in your Object classes – I don’t see an upside in including it.In the Realm UI, the partition field appears in the schema definition, and you specify it as the partitioning key when enabling sync.", "username": "Andrew_Morgan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Scheme isn't visible after turning off dev mode
2021-05-17T19:53:00.132Z
Scheme isn't visible after turning off dev mode
2,163
null
[ "swift", "performance" ]
[ { "code": "", "text": "Hi we have just started testing the upgrade from 5.5.2 to 10.7.6 and are seeing a huge hit on query performance. Like 10x slower or perhaps even more. We are running local only at the moment.Things that took one or two seconds are now taking minutes. They are quite complex queries but under 5.5.x they were always very fast.Is there something significant that has changed with realm-core or realm-cocoa that could be causing this ? We’re running on macOS with Intel and Apple Silicon and the perform hit seems the same on both.This is going to be a complete show stopper for moving to MongoDB Realm unless it can be resolved.", "username": "Duncan_Groenewald" }, { "code": "", "text": "OK I just downloaded the prebuilt binary for RealmSwift 10.7.6 and performance is back to what we are used to seeing.Anyone know if there is some Xcode / compiler setting one can use to ensure performance is good when debugging applications. I haven’t tested a production build using SPM to see if that improves performance.I am just guessing now but could it be that in debug mode RealmSwift/Core perform poorly ?", "username": "Duncan_Groenewald" }, { "code": "", "text": "Hi @Duncan_Groenewald – this doesn’t properly address your question, but I’ve switched over to using SPM for RealmSwift and it’s been working well for me (love being able to easily use a development branch when trying out new features).", "username": "Andrew_Morgan" }, { "code": "", "text": "Our app started with 3.x and have migrated through 5 and then on to 10.7.x - it’s macOS and our dataset is 2GB+. No SPM and we’ve not moved to M1 yet. Not used or needed to use the prebuilt binary either.We are not seeing any real performance issues/differences from 5 to 7Are you using indexedProperties?I am not sure what a difference would be between a pre-built binary and building it in line with the app so that’s something that may be an issue. Do you have any other details or have some way to repo the issue?", "username": "Jay" }, { "code": "", "text": "Bear in mind the performance issue is only when running in DEBUG mode using SPM - when running in DEBUG mode with prebuilt binary no issues - same performance a Release Build.There are no performance problems when building for Release using SPM. However we have some complex stuff that takes a few seconds to run to calculate a lot of statistics and when this is done with SPM in debug mode it takes a long time - maybe 30 seconds (up from less than 2 seconds).It would be nice to be able to use SPM and avoid any DEBUG overhead - since we are not debugging RealmSwift.No big deal since we are just using the prebuilt binaries.", "username": "Duncan_Groenewald" }, { "code": "", "text": "@Andrew_Morgan - thanks please note that the performance hit is ONLY when running in DEBUG mode but it is pretty severe if testing with a lot of data. Other than that SPM is great - but the prebuilt binary works fine and means we don’t have performance problems in DEBUG model.", "username": "Duncan_Groenewald" } ]
Just testing upgrade from 5.5.2 to 10.7.6 and seeing a huge performance impact on queries
2021-05-19T05:55:01.867Z
Just testing upgrade from 5.5.2 to 10.7.6 and seeing a huge performance impact on queries
2,285
null
[ "python", "motor-driver" ]
[ { "code": "replace_oneAwaitable[UpdateResult]UpdateResultAwaitable", "text": "When using pymongo, there is a package named pymongo-stubs to support type hint.\nNow come to motor, for example, collection’s replace_one method returns Awaitable[UpdateResult] instead of UpdateResult.\nIs there any way to make motor support type hint?\nI have already googled “motor-stubs”, “motor type hint” etc. but there is likely no solution.\nCopying files from pymongo-stubs and adding Awaitable works for me but is there a better solution?", "username": "Xuesong_Zhong" }, { "code": "", "text": "Hi @Xuesong_ZhongI think this question is better moved to the “Drivers & ODMs” category as I’m not familiar with the pymongo-stubs package as it is not part of our M220P course. I would recommend moving this question to that category and you will find a wider audience who are hopefully more familiar with that package.Kindest regards,\nEoin", "username": "Eoin_Brazil" }, { "code": "", "text": "We have not started working on type hints for motor yet. The ticket to watch is: https://jira.mongodb.org/browse/MOTOR-331.", "username": "Shane" }, { "code": "", "text": "Thanks for reply and advice, I’m pretty new here and not very familiar with category and tag. And I’m sorry to post it in an inappropriate category.", "username": "Xuesong_Zhong" }, { "code": "", "text": "Thanks for the information. Perhaps the only option currently is to write what I need by myself.", "username": "Xuesong_Zhong" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Support for type hint?
2021-05-19T04:27:24.210Z
Support for type hint?
7,047
https://www.mongodb.com/…_2_1023x381.jpeg
[]
[ { "code": "", "text": "In the Realm UI, I can’t edit the sync permissions (Development & Production modes) unless I terminate sync and re-enable it.While building the application, this is fine. However, terminating and re-enabling Sync also invalidates my realm file on the client, and I have to delete the app and re-install it.I can reason around why you have to terminate sync to edit the permissions, but it feels very scary to have an irrecoverable realm file that I don’t have access to (as far as I know) in the app that I can delete on behalf of the user and re-sign in fresh.Am I missing something obvious?Here is the error log:CleanShot 2021-05-19 at 10.03.05@2x2576×960 182 KB", "username": "Majd_Taby" }, { "code": "", "text": "That’s correct although you generally wouldn’t be changing the permissions structure once you enable sync. In production, we would recommend using custom user data or function as your permission structure and then you can dynamically add partitions to be readable or writable in a document or custom user data that the system checks to determine access. That way you don’t need to terminate sync but can change access.https://docs.mongodb.com/realm/sync/permissions/#function-rules", "username": "Ian_Ward" } ]
Changing Sync Permission Rules
2021-05-19T17:02:03.625Z
Changing Sync Permission Rules
1,451
null
[ "connecting", "security" ]
[ { "code": "", "text": "Hello Developers, i asked a host provider about how to deploy my node.js/MongDB app, where my MongoDB is hosted on Atlas. They answered that they want to know the IP and Port of external MongoDB server in order to whitelist outgoing firewall exceptions at their firewall. How can i find the IP and Port of my Database hosted in Mongo Atlas? Thanks", "username": "petridis_panagiotis" }, { "code": "27017ping hostname", "text": "Hi @petridis_panagiotis,How can i find the IP and Port of my Database hosted in Mongo Atlas?Regarding the ports, you may find the Troubleshoot Connection Issues documentation useful, more specifically the following:Atlas clusters operate on port 27017 . You must be able to reach this port to connect to your clusters. Additionally, ensure that the appropriate ports are open for the following:In regards to finding the IP for each of the nodes in your Atlas cluster, you can perform a simple ping hostname. To find the hostnames of each node within your cluster, you can click the metrics button on the cluster as shown in the below example:\nAfter clicking the metrics button, the hostname:port should be in a similar format to the below example:(Where cluster0-shard-00-00.abcde.mongodb.net is the hostname)They answered that they want to know the IP and Port of external MongoDB server in order to whitelist outgoing firewall exceptions at their firewall.I must note that there are cases where the public IP’s can change. For this reason, it may be better that they whitelist the hostname rather than the IP address.Hope this helps.Kind Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank you. In my Cluster0 i have 3 shards 00-00, 00-01 and 00-02 the first two are secondary and the last is the primary, so its better to send to the host all three shards?", "username": "petridis_panagiotis" }, { "code": "", "text": "Technically they are tree nodes of the same shard. But terminology aside you will need to be ale to connect to every node in the cluster.If a host fails or, more likely a rolling upgrade the primary role will transition to another one of the nodes, and you will need to be able to connect to it.", "username": "chris" }, { "code": "", "text": "So send only one shard hostname or all of them?", "username": "petridis_panagiotis" }, { "code": "", "text": "you will need to be ale to connect to every node in the cluster.Every node in the cluster means all of them.", "username": "chris" }, { "code": "", "text": "Nice. Some last curious question… This hostname is unique for every user or this represents a shared host where multiple users have hosted their clusters?", "username": "petridis_panagiotis" }, { "code": "", "text": "Hi @petridis_panagiotis,For shared tier instances (M0, M2 and M5) which exist on a shared environment, the hostname(s) in your cluster are unique but would trace back to the same IP as the shared host machine.For dedicated tier instances (M10+), the hostname(s) for each node and the associated public IP’s would be unique.Kind Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
IP & Port of External MongoDB server
2021-05-19T09:39:54.753Z
IP &amp; Port of External MongoDB server
15,416
null
[]
[ { "code": "", "text": "Hi,Following the migration (Realm Legacy Migration Guide - Realm Legacy Migration Guide) from realm cloud to realm atlas, it seems I need to come up with “legacy_realm_path”. Where can I locate this path?Thanks!", "username": "donut" }, { "code": "/myRealm/branch345", "text": "It’s the name of the realm on the Cloud so /myRealm or /branch345", "username": "Ian_Ward" } ]
What is legacy_realm_path?
2021-05-19T19:02:54.851Z
What is legacy_realm_path?
1,654
null
[ "dot-net" ]
[ { "code": "", "text": "When using MongoDB C# driver UpdateOneAsync function, is it possible to get an\nAcknowledged UpdateResult, whose MatchedCount and ModifiedCount are not equal?", "username": "Ivan_Povazan" }, { "code": "{$set : {name: \"Max\"}}", "text": "Hi @Ivan_Povazan,I think the answer is yes if your update isn’t changing anything in the doc. For example if you try to {$set : {name: \"Max\"}} but this is already the value in the doc. Then you get a +1 for MatchedCount but 0 for ModifiedCount for this particular doc.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Can UpdateOneAsync give UpdateResult with MatchedCount != ModifiedCount?
2021-05-18T12:48:56.043Z
Can UpdateOneAsync give UpdateResult with MatchedCount != ModifiedCount?
4,214
null
[ "mongoose-odm", "connecting" ]
[ { "code": "{\n autoIndex: false,\n promiseLibrary: Promise,\n poolSize: 10,\n autoReconnect: true,\n reconnectTries: 30,\n reconnectInterval: 1 * 1000,\n connectTimeoutMS: 180 * 1000,\n socketTimeoutMS: 180 * 1000,\n keepAlive: true,\n keepAliveInitialDelay: 10 * 1000,\n useNewUrlParser: true\n }\n", "text": "Hello Guys!I’m working with NodeJs and MongoDB and I’m connecting to MongoDB through Mongoose driver.My setup:Below is my connections options:-Below is my connection string:-mongodb+srv://${DB_USERNAME}:${DB_PASSWORD_ENCRYPTED)}@${DB_URI}/${mongoose.DB}?retryWrites=true&w=majorityI’m getting connection 9 to XXXX-XXXX-XXXX-shard-00-01.dkvup.mongodb.net:27017 timed out Error for one of the API.Can anyone assist me on what causing the above error and possibly the best connection configuration on production environment to avoid such errors.", "username": "Rahul_Kosamkar" }, { "code": "DB_PASSWORD_ENCRYPTED", "text": "Hi @Rahul_Kosamkar and welcome in the MongoDB Community !First, the latest version of the driver is 3.6.7. It shouldn’t make a big difference but using the latest is usually the best option. I assume you are using the latest 4.4 version on MongoDB Atlas as well so everything is up-to-date and the versions are aligned? Mongoose’s latest version is 5.12.10 as well so maybe I would also align this.Second, can you please confirm that, using the same user/password/URI on the same computer you are trying to run this code from, you can connect to this Atlas cluster using the mongo client or mongosh? If this doesn’t work I would check that the IP address of this server has been correctly added in the IP access list in Atlas. I would also check the user/password is correct.Also, why do you have DB_PASSWORD_ENCRYPTED? The password isn’t encrypted. It should be in plain text.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Connection 9 to XXXX-XXXX-XXXX-shard-00-01.dkvup.mongodb.net:27017 timed out
2021-05-19T08:25:20.793Z
Connection 9 to XXXX-XXXX-XXXX-shard-00-01.dkvup.mongodb.net:27017 timed out
2,222
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "I know this topic is old, but I think I am having a similar problem. I have been searching from a solution all over Mongo community/forumsI am running a mongodump using --gzip followed by a mongorestore but could observe that the number of indexes existing in the cosmosdb are not equal to the indexes restored by the mongorestore command. The thing is there are no errors.Could anybody please help with your prior experience.Appreciate your help in advance.", "username": "Swaroop_Mohanty" }, { "code": "", "text": "Hi @Swaroop_Mohanty and welcome in the MongoDB Community !CosmosDB is a product built by MS Azure that is faking the MongoDB APIs. MongoDB has nothing to do with it. CosmosDB isn’t the real MongoDB, it’s just trying to imitate the MongoDB API.From my latest tests (March 4, 2021), CosmosDB is failing 66.99% of the 1239 compatibility tests that are 100% green on MongoDB Atlas.MongoDB doesn’t support CosmosDB and there are absolutely no warranty that any of the MongoDB tools would work with a system that tries to imitate the features of MongoDB… And is apparently not very good at it…What you are trying to do would totally work on a real MongoDB system.More comparison details here:Comparing MongoDB as a Service OfferingsAlso… I would recommend that you have a serious look at the pricing difference between MongoDB Atlas and Cosmos because running the tests I mentioned above costs about 0.50$ on Atlas and more than 200$ on Cosmos…Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Not able to restore indexes using mongorestore
2021-05-19T11:09:43.635Z
Not able to restore indexes using mongorestore
4,874
null
[ "schema-validation" ]
[ { "code": "", "text": "I want to make the field unique how is that possible on Mongodb Realm schema declaration. I know that _id is unique but i want to have other fields to be unique", "username": "Aaron_Parducho" }, { "code": "", "text": "The JSON Schema standard is designed to validate the structure of a document, rather than to cross-reference values between multiple documents.You can ensure that no duplicate keys get persisted in Atlas by adding unique indexes (but, as with the JSON schema), that wouldn’t prevent the client app creating an Object that broke that constraint.That means that you’ll need some app-side checking. If you need to check for unique values across all partitions (some of which the user can’t access) then you could add a backend Realm Function (that can see all data) that makes the check – your client app can then call that function before adding a new Object to Realm.", "username": "Andrew_Morgan" }, { "code": "", "text": "I would ask what the use case is? Is this unique for objects in a single partition or for all partitions?Also, what would the expected bahavior be if you attempt to write an object that had a field value that was not unique? Some kind of error or failure?And as @Andrew_Morgan mentioned - can you just add client side code to ensure uniqueness - perhaps before even attempting to write? e.g. stop the user from entering a duplicate email address as the user is attempting to enter it.", "username": "Jay" } ]
How to make schema field unique?
2021-05-19T02:52:30.628Z
How to make schema field unique?
10,579
null
[ "change-streams" ]
[ { "code": "", "text": "Team,When end user deletes a nested array object, respsective change stream document doesnt show deleted entity in updatedescription.removedfields also fulldocument is null, is there a way to get this. We do not need the fullDocuement we are only looking for the deleted object under removed fields ?Example sample doc structure:When a user deletes array.objX we need that obj in removed fields._id:<>\nfield1:<>\nfield2<>\nArraysample:Array\n-0:obj1\n-1:obj2\n-2:obj3", "username": "vinay_murarishetty" }, { "code": "updateLookup#!/usr/bin/env bash\necho '\ncursor = db.users.watch([],{\"fullDocument\":\"updateLookup\"});\n\nwhile (!cursor.isExhausted()) {\n if (cursor.hasNext()) {\n print(tojson(cursor.next()));\n }\n}' | mongo\nfullDocumentupdateLookup", "text": "Hi @vinay_murarishetty and thanks for your question !It’s currently impossible to retrieve the “old” version of the document in a change stream. The only thing you can do is retrieve the entire document “post” update by using the updateLookup option:The fullDocument field in your change event is always populated for an insert or a replace. It can be populated if you provide the updateLookup option for an update and is never provided for a delete operation as there is no document anymore… But it’s always the most recent majority committed version of the document that you get.Stay tune for MongoDB 5.0 though because it could potentially change fairly soon !More info at Events | MongoDB! No spoilers (almost)!", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @MaBeuLux88,Thanks for the update, but here the operation type is update where user deletes a nested array object(not the complete document) with in the main document as mentioned in above structure. In this case why the updatedescription.removefields doesnt contain the removed object ? We need this to be populated for our use case.", "username": "vinay_murarishetty" }, { "code": "> db.coll.insertOne({firstname:\"Maxime\",surname:\"Beugnet\", pets: [{name: \"Bob\"},{name: \"Raoul\"}]})\n{\n\t\"acknowledged\" : true,\n\t\"insertedId\" : ObjectId(\"60a52dc14328bfa256cd18ba\")\n}\npets$pop> db.coll.updateOne({}, {$pop: {pets: 1}})\n{\n\t\"_id\" : {\n\t\t\"_data\" : \"8260A52E05000000012B022C0100296E5A10041EA1C19171774980BB0020FFA9BD24F746645F6964006460A52DC14328BFA256CD18BA0004\"\n\t},\n\t\"operationType\" : \"update\",\n\t\"clusterTime\" : Timestamp(1621437957, 1),\n\t\"ns\" : {\n\t\t\"db\" : \"test\",\n\t\t\"coll\" : \"coll\"\n\t},\n\t\"documentKey\" : {\n\t\t\"_id\" : ObjectId(\"60a52dc14328bfa256cd18ba\")\n\t},\n\t\"updateDescription\" : {\n\t\t\"updatedFields\" : {\n\t\t\t\"pets\" : [\n\t\t\t\t{\n\t\t\t\t\t\"name\" : \"Bob\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"removedFields\" : [ ]\n\t}\n}\npets$unset> db.coll.updateOne({}, {$unset: {surname: 1}})\n{\n\t\"_id\" : {\n\t\t\"_data\" : \"8260A52E3A000000012B022C0100296E5A10041EA1C19171774980BB0020FFA9BD24F746645F6964006460A52DC14328BFA256CD18BA0004\"\n\t},\n\t\"operationType\" : \"update\",\n\t\"clusterTime\" : Timestamp(1621438010, 1),\n\t\"ns\" : {\n\t\t\"db\" : \"test\",\n\t\t\"coll\" : \"coll\"\n\t},\n\t\"documentKey\" : {\n\t\t\"_id\" : ObjectId(\"60a52dc14328bfa256cd18ba\")\n\t},\n\t\"updateDescription\" : {\n\t\t\"updatedFields\" : {\n\t\t\t\n\t\t},\n\t\t\"removedFields\" : [\n\t\t\t\"surname\"\n\t\t]\n\t}\n}\n", "text": "What you described is an update of the array, not a deletion of field. That’s why it appears in the updates and not in the deletes.Let me explain with an example. 
First I insert a document:Then I remove one of my pet from the pets array by updating it with a $pop operation:Which results in this change event:As expected, the pets field is the one that has been updated. One of its value (here a subdocument) has been removed.If I $unset one of my fields though:I do get this change event that contains, this time, a field deletion:This works as designed.I hope this helps. Sorry if it’s not what you expected. There is currently no work around to “fix” this to my knowledge, but stay tuned for MongoDB 5.0…Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
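For reference, the same watch loop can be written in Python as sketched below; the collection name is illustrative and change streams require a replica set or Atlas cluster. The caveat from the thread stands: an element removed from an array surfaces under `updateDescription.updatedFields`, not `removedFields`.

```python
# Python equivalent of the shell cursor above, with the post-update document
# included for update events via updateLookup.
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["test"]["coll"]

with coll.watch([], full_document="updateLookup") as stream:
    for event in stream:
        print(event["operationType"], event.get("updateDescription"))
        print(event.get("fullDocument"))  # None for delete events
```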
Change stream does not contain removed fields when nested array object is deleted
2021-05-17T12:00:44.788Z
Change stream does not contain removed fields when nested array object is deleted
3,871
null
[ "backup" ]
[ { "code": "", "text": "Hi,\nI have create backups with mongodump utility. Having 2 servers in replica set configuration.\nMongo backups are in directory /var/backups/18May2021 . And I want to restore it on another server which is standalone . mongo server with backup configuration ip is for example 12.12.17.11 and the server which is standalone and wanting to restore backup here is 12.12.17.18. How can I do it, inside the backup folder /var/backups/18May2021 there are two folders admin and products.", "username": "Nanuka_Zedginidze" }, { "code": "mongorestore --host 12.12.17.18 --port <port number> -u <admin user> --authenticationDatabase <auth db> /var/backups/18May2021\n", "text": "Hi Nanuka_Zedginidze,If you are taking a backup with mongodump then you will use the mongorestore tool to restore the data. You just add the standalone host as the “host” in the restore string. It would look something like thisThe admin and products directory are the databases in your MongoDB so they will both be restored.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "Thank u ,\nso in case of not using auth it will be:mongorestore --host 12.12.17.18 --port 27017 /var/backups/18May2021.", "username": "Nanuka_Zedginidze" }, { "code": "", "text": "Yes, if there is no authentication on your Mongo cluster you would not provide a username or authentication DB", "username": "tapiocaPENGUIN" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB backup configuration
2021-05-18T10:28:34.233Z
MongoDB backup configuration
2,111
null
[]
[ { "code": "", "text": "I am not able to create a free tier cluster in Mongodb atlas", "username": "Rahul_S1" }, { "code": "", "text": "Please post a screenshot of what you tried that shows the issue you are experimenting.One thing to know is that you must have a project and you may only have 1 free tier inside this project.", "username": "steevej" }, { "code": "", "text": "Hey @Rahul_S1- sorry to hear you are experiencing issues while creating a free tier cluster.Please let us know in more detail what errors you are experiencing or if you were able to deploy a cluster in the end!", "username": "Jesse_Krasnostein" }, { "code": "", "text": "Same problem. Stuck with this message : “Deploying your changes failed; we will try again shortly.”\nWhat can I do ?", "username": "Quentin_COSTER" }, { "code": "", "text": "same problem, the message says : “Deploying your changes failed; we will try again shortly.”attaching SSScreenshot from 2021-05-10 23-58-481371×652 119 KB", "username": "rabinson_dev" }, { "code": "", "text": "Folks please accept our apologies for the delays here, see https://status.cloud.mongodb.com/ – unfortunately we’re working through a problem with our underlying TLS certificate provider", "username": "Andrew_Davidson" }, { "code": "", "text": "Dear Andrew, any updates?", "username": "Dominic_Lim" }, { "code": "", "text": "Hi @Andrew_Davidson, I am having a similar issue as @robinson_dev. Has this been resolved?mondodb stuck1366×768 79.1 KB", "username": "George_Githuma" }, { "code": "", "text": "@Andrew_Davidson I can see from status page that this was resolved on May 10th but can it be a recurrent issue? mongodb status1366×768 83.2 KB", "username": "George_Githuma" }, { "code": "us-east-1", "text": "Hi George, I contacted support earlier. Apparently, the region us-east-1 is full. So I just decided to pick another region.", "username": "Dominic_Lim" }, { "code": "", "text": "@Dominic_Lim Thanks for the hack! let me try it out.", "username": "George_Githuma" }, { "code": "", "text": "@Dominic_Lim Thanks for the hack, it worked!mongodb worked1366×768 90.7 KB", "username": "George_Githuma" }, { "code": "", "text": "Folks apologies for the hiccups here: as folks up-thread alluded to today’s M0 provisioning issue was unrelated to the TLS cert provider mentioned above.Today we had a log-jam in the backing infrastructure used to power these M0 free tiers in US-East-1 stemming from a significant uptick in deployment velocity combined with provisioning issues: we’ve implemented an enhancement to more aggressively perform the backend provisioning to prevent these issues and they should now be resolved.-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "No problems! Glad it worked out! I wouldn’t have come back here if I didn’t see a reply of this thread sent to my mail haha", "username": "Dominic_Lim" }, { "code": "", "text": "Thanks for the update Andrew! No worries at all! Thank you! =)", "username": "Dominic_Lim" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to create cluster in Atlas
2020-06-15T10:37:01.723Z
Unable to create cluster in Atlas
13,939
null
[ "replication" ]
[ { "code": "", "text": "Hi Team,How to deployment MongoDB Replication and Shard in Azure portal step by step explain.if you have Document or URL Please share me", "username": "hari_dba" }, { "code": "", "text": "Hi @hari_dba,Here are some resource you can refer toHope It helps !!Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
MongoDB setup in Azure
2021-05-19T13:59:36.497Z
MongoDB setup in Azure
1,995
null
[ "aggregation", "queries", "python" ]
[ { "code": "mydb1.mongodbbucketright.find(\n {\"samples.timestamp1\": {\"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\": datetime.strptime(\"2015-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\")},\n \"samples.id13\":{\"$gt\":5}},\n\n {\"samples.$\": 1 })\nmydb1.mongodbbucketright.aggregate([\n\n {\n \"$match\": {\n \"samples.timestamp1\": {\"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\" :datetime.strptime(\"2015-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\")},\n \"samples.id13\": {\"$gt\": 5}\n }\n },\n { \"$unwind\": \"$samples\" },\n {\n \"$match\": {\n \"samples.timestamp1\": {\"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\": datetime.strptime(\"2015-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\")},\n \"samples.id13\": {\"$gt\": 5}\n }\n },\n\n\n])\n", "text": "Hello guys\nMy collection consist of nested documents.I have a query that look like this:Is position projection going to help here?\nSometimes it seems that i get less results than expected\nSo i changed the query and now it look like this:I want the query to be optimized\nWhat is your opinion in that?\nDid i choose the right option?\nUsing mongodb with pymongoThanks in advance", "username": "harris" }, { "code": "samples$elemMatchfind {\"samples\":{\"$elemMatch\":{\n \"timestamp1\": {\n \"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\": datetime.strptime(\"2015-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\")\n },\n \"id13\":{\"$gt\":5}\n }}}\n", "text": "The problem with your first version with positional projection is that it’s ambiguous. Since samples is an array, what if different elements match the two clauses in the query predicate? Which element do you want returned by that projection?If you don’t intend for the condition to be satisfied by two different elements, you must use $elemMatch in your query. Certainly that’s what the aggregation syntax is doing - after unwind it will only match original elements that match both conditions. 
Change your original find and it’s going to return only the correct results you want:Asya", "username": "Asya_Kamsky" }, { "code": "mydb1.mongodbbucketnocpu3index.aggregate(\n [\n {\n \"$match\": {\n \"samples.timestamp1\": {\"$gte\": datetime.strptime(\"2019-01-01 00:00:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\": datetime.strptime(\"2020-12-31 01:55:00\", \"%Y-%m-%d %H:%M:%S\")},\n\n }\n },\n{ \"$unwind\": \"$samples\" },\n {\n \"$group\": {\n\n \"_id\": {\"$dateToString\": {\"format\": \"%Y-%m-%d %H\", \"date\": \"$samples.timestamp1\"}},\n \"max_id13\": {\n \"$max\": \"$samples.id13\"\n }\n }\n },\n\n {\n \"$project\": {\n \"_id\": 0,\n \"day\": \"$_id\",\n \"max_id13\": 1\n }\n },\n {\"$sort\": {\"day\": -1}},\n { \"$limit\": 5}\n ]\n)\n", "text": "Thank you @Asya_Kamsky yes the elemMatch seems to be way faster than the aggregation .I have one thing more to ask.Can i use elemMatch in an pipeline like this:Instead of match?Will ElemMatch be better?Can we use elemMatch on an aggregate like this?", "username": "harris" }, { "code": "$elemMatch", "text": "I’m not sure I understand the question - I don’t see $elemMatch in the code you posted…", "username": "Asya_Kamsky" }, { "code": "", "text": "Hi @Asya_Kamsky.There is something i camt understand.what is the difference between elemmatch projection and elem query.i used find with elemmatch,is this elemmatch query or elemmatch projection i can’t understand", "username": "harris" }, { "code": "", "text": "Elemmatch returns only the first subdocument that agrees with the condition?", "username": "harris" }, { "code": "$elemMatch$elemMatch", "text": "In a query, $elemMatch will indicate entire document should be matched if any array element matches a particular condition.In a projection, $elemMatch indicates the first matching element of an array should be the only array element projected.", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
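To make the distinction concrete, the two uses of `$elemMatch` can be written out with PyMongo as below, reusing the field names from the thread; the collection name and date range are placeholders.

```python
# $elemMatch as a query operator vs. as a projection operator.
from datetime import datetime
from pymongo import MongoClient

col = MongoClient("mongodb://localhost:27017")["test"]["buckets"]
elem = {
    "timestamp1": {"$gte": datetime(2010, 1, 1, 0, 5),
                   "$lte": datetime(2015, 1, 1, 0, 5)},
    "id13": {"$gt": 5},
}

# Query: the document matches only if ONE samples element satisfies both clauses.
docs = col.find({"samples": {"$elemMatch": elem}})

# Projection: additionally return just the first element that matched.
docs = col.find({"samples": {"$elemMatch": elem}},
                {"_id": 0, "samples": {"$elemMatch": elem}})
```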
Find or aggregate in this type of query
2021-05-12T14:50:22.235Z
Find or aggregate in this type of query
4,413
null
[ "queries", "performance" ]
[ { "code": "", "text": "We are using a mongo db replica set and facing the issue with slow query. Our db size is increasing day by day and because of that when the we run the query it’s taking a time. We have increased the CPU ,Ram and Storage but still its taking time to find the query from the bulk database.Please help", "username": "pankaj_thapliyal" }, { "code": "", "text": "Hi @pankaj_thapliyal - welcome to the forums!This could be the result of a number of different factors and I think the question would get more assistance in the “Working with Data” category as it sounds like this is not directly related to a problem or issue with the M312 course.In terms of learning about your slow query, you can download Compass and use it to analyse the query plan (see this docs page on how to do this). This might help highlight if you need an index or if you might need a different better index to support your query.In M312, Chapter 3 is entirely focused on identifying slow queries and I would recommend that you follow this chapter and use the mtools package with your log files as this will definitely provide you with further details on the slow query.I would also recommend Chapter 5 in M312 as this deals with poor schema design as the symptoms you outline indicate this could equally be an issue.If you are running Atlas with a support plan, you might want to consider Flex Consulting as this would allow you to work with our Professional Services team to identify any query problems in your application and for them to provide you with solutions.Hope this helps and kindest regards,\nEoin", "username": "Eoin_Brazil" } ]
Slow Query Operation
2021-05-19T05:19:36.725Z
Slow Query Operation
2,661
null
[ "monitoring" ]
[ { "code": "", "text": "HiI would like to know how do I view some details of concurrency control in mongo db 4.4.5 and MongoCompass 1.26.1Assuming the following example:4 update on the same field…db.work_order.updateMany ({id_work_order: {$ gt: 1}}, {$ set: {service_description: “TEST1”}})\ndb.work_order.updateMany ({id_work_order: {$ gt: 1}}, {$ set: {service_description: “TEST2”}})\ndb.work_order.updateMany ({id_work_order: {$ gt: 1}}, {$ set: {service_description: “TEST3”}})\ndb.work_order.updateMany ({id_work_order: {$ gt: 1}}, {$ set: {service_description: “TEST4”}})how do I see details like: Represents Shared (S) lock or Represents Exclusive (X) lock.thanks", "username": "MATEUS_GUILHERME_DA" }, { "code": "", "text": "Hi @MATEUS_GUILHERME_DA, welcome to the forums!If you want to find out more about concurrency in MongoDB and the type of locking used, I’d suggest checking out our concurrency FAQ page.In terms of using Compass, you will need to use the beta functionality of the embedded MongoDB Shell (see this docs page for more details). You can use that shell to issue your commands to the database and check the locks either with db.serverStatus() or with db.currentOp().Alternatively, you can use Compass to insert the data and in a separate terminal window have the command line mongostat running so you can then see the locking as the operations occur.Hope this helps.Kindest regards,\nEoin", "username": "Eoin_Brazil" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Details of concurrency control
2021-05-19T00:35:32.648Z
Details of concurrency control
1,734
null
[ "python", "crud", "motor-driver" ]
[ { "code": "releaasedlastupdateddatetimelastupdatednew Date(\"2017-04-10\")pydanticmotordatetimemotor", "text": "In the movies collection we can see two different types of date type. The releaased key has ISODate datatype, but the lastupdated key has a string datatype.\nI can convert a date string to a datetime type and send it to MongoDB. If I retrieve the data, then I’ll get a string as the lastupdated value.\nI can write new Date(\"2017-04-10\") in the Mongo Shell.\nI created a pydantic model and use motor as the mongoDB driver in Python.\nHowever, sending a datetime dict entry to mongoDB denerate a date string instead of ISODate. How can I create a new date entry with the ISODate type in Python with motor driver?Suppose I have a birthday field in my document (time is irrelevant). What is the best way to define the datatype, a date string data type or ISODate?", "username": "ywiyogo" }, { "code": "", "text": "Hi @ywiyogoI think this is a broader question outside of M220P and you might find it useful to post in the Drivers & ODMs forum around motor as M220P only used PyMongo nor does the course use pydantic or any type hinting library.In terms of MongoDB, BSON has a specific Date type (see this docs page). It is a 64-bit integer that represents the number of milliseconds since the Unix epoch (Jan 1, 1970). In PyMongo datetime.datetime objects are used to store dates and times in MongoDB documents (see this docs page). Essentially, any driver whether PyMongo or indeed motor maps a data back to the underlying BSON.In Python’s case with MongoDB, the recommended format is to use datetime.datetime object to store a time field such as your birthday field.Hopefully this helps answer your question and for a wider audience, I’d refer you to the Drivers and ODMs forum.Kindest regards\nEoin", "username": "Eoin_Brazil" }, { "code": "datetime.datetime{ \"_id\" : \"60a18c13732f278885f81a03\", \"name\" : \"Joe Doel\", \"email\" : \"[email protected]\", \"birthday\" : ISODate(\"2000-06-04T00:00:00Z\"), \"location\" : \"Berlin\" }\n{ \"_id\" : \"60a18c2d732f278885f81a04\", \"name\" : \"Alice Will\", \"email\" : \"[email protected]\", \"birthday\" : \"1996-02-01T00:00:00\", \"location\" : \"Amsterdam\" }\ndatetime", "text": "Thanks Eoin for the hint. I’ve just moved this thread to the “Drivers and ODMs”.The issue I observed was that if I insert a document with a date entry in Mongo Shell, it creates ISODate. If I use Pydantic and Motor with datetime.datetime, it creates date string. This is the output of my example:How can I achieve a consistency in my date type for Mongo Shell and Python?\nI’ll try using PyMongo with datetime if it also creates ISODate or not.", "username": "ywiyogo" }, { "code": "", "text": "Hi @ywiyogoSo essentially the MongoShell itself is a custom driver and it wraps a Date object with the ISODate for convenience (see this page for more details). 
In terms of query in Python for date/times, you should use datetime, the representations between the two mechanisms to query the data means it’s not possible to easily find consistency and instead you should use the appropriate date representation for the MongoShell or for the Python Driver.Hopefully, this helps to clarify your question, and good luck with what you are building!Kindest regards,\nEoin", "username": "Eoin_Brazil" }, { "code": "", "text": "So, it’s not possible to have consistency between both drivers.\nThanks for the clarification @Eoin_Brazil!", "username": "ywiyogo" }, { "code": "", "text": "Hi @ywiyogoNot in the visual presentation sense, in the underlying data since both are consistent to the same BSON date type but that’s not rendered in the same consistent fashion and I think that was your question.Glad this helped!\nEoin", "username": "Eoin_Brazil" }, { "code": "", "text": "Motor (and PyMongo) insert datetime objects as BSON date type in the same way that the mongo shell inserts ISODate objects. It sounds like Pydantic is automatically converting Python datetime objects into strings before sending the document to Motor.", "username": "Shane" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
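Following the point made later in the thread, the datetime survives end-to-end if the Pydantic model is converted with `.dict()` (which keeps Python objects) rather than a JSON-encoded form before handing it to Motor. Model, database and collection names below are made up.

```python
# Inserting a Pydantic model through Motor while keeping the BSON date type.
import asyncio
from datetime import datetime
from motor.motor_asyncio import AsyncIOMotorClient
from pydantic import BaseModel

class User(BaseModel):
    name: str
    birthday: datetime

async def main():
    col = AsyncIOMotorClient("mongodb://localhost:27017")["app"]["users"]
    user = User(name="Alice Will", birthday=datetime(1996, 2, 1))
    # .dict() keeps datetime objects (model_dump() in Pydantic v2), so the
    # value is stored as a BSON date / ISODate rather than a string.
    await col.insert_one(user.dict())
    doc = await col.find_one({"name": "Alice Will"})
    print(type(doc["birthday"]))  # <class 'datetime.datetime'>

asyncio.run(main())
```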
How to insert an ISODate entry from Python with Pydantic and Motor
2021-05-16T13:41:19.461Z
How to insert an ISODate entry from Python with Pydantic and Motor
15,664
null
[ "replication" ]
[ { "code": "", "text": "I have a five member replica set running at AWS. I have added a hidden replica to the set (the “snapshot replica”). The snapshot replica’s mongo databases are stored on an EBS volume. Once a day it shuts down mongo for a few seconds and initiates a snapshot.I’d like to verify my snapshots. I want to start up a new mongo instance attached to the snapshot. (Let’s call it “the validator”)I can do this. The problem is that it still has it still has it’s replica set configuration.I would like to somehow purge the replica settings before starting mongod, so that there is no possiblity of something replication related going wrong. (e.g. accidentally turning every replica into a master running on 127.0.0.1:27017).How would I do this, or do something equivalent to ensure that “the validator” never speaks to the active replica set?Thank you for any help,-jeff", "username": "Jeff_Younker" }, { "code": "replication.replSetName--replSet--bind_ipsudo mongodb mongod --bind_ip 127.0.0.1 --port 7777 --dbpath /path/to/data", "text": "Once a day it shuts down mongo for a few seconds and initiates a snapshot.I do this online, not in AWS but Azure.I can do this. The problem is that it still has it still has it’s replica set configuration.It does. But as long as the replication.replSetName ans/or --replSet is not set it will not attempt to join nor be a valid taget for the cluster to connect to.It is a good idea to use a different port and/or --bind_ip ‘just in case’If you are attaching the snapshot as you are I would do it like this.\nsudo mongodb mongod --bind_ip 127.0.0.1 --port 7777 --dbpath /path/to/data", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to prevent instance with restored snapshot from joining replica set
2021-05-18T01:43:21.586Z
How to prevent instance with restored snapshot from joining replica set
1,794
null
[]
[ { "code": "", "text": "Just wondering, would it be crazy to create a DB for every customer? I started to created them by using the customerId as the DB name. Just wondering if I would run into performance problems down the road.I don’t plan to have that big of a customer base, maybe 2-3k customersThanks!", "username": "Alan_Spurlock" }, { "code": "", "text": "@Alan_Spurlock Create a separate database for each tenant and configure the database connection string based on the tenantId.If your product maintains fewer records for every customer means you can have a single database with tenantId.", "username": "Sudhesh_Gnanasekaran" }, { "code": "", "text": "Hi @Alan_Spurlock,Welcome to MongoDB community.My concern here is that with 2-3k customers you will endup with 2-3k X (number of collections per customer) = total collection number.We have a known antipattern with large amount of collections and you might hit it with this design. Therefore, if possible I will consider doing a customerId field per document and index it to avoid thousands of collections.I suggest to read the following blogs:Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks for all that info! So… I am 100% guilty of setting this up as normal sql. I am so used to normalizing everything. My only concern is chat and call logs, but I can store the logs and chat with a max count and create another row to start new. (per customerId and chatId / callId)I love reading articles like that. Love the humor too.", "username": "Alan_Spurlock" } ]
DB for every customer
2021-05-17T14:23:53.406Z
DB for every customer
3,410
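As a rough illustration of the tenantId approach suggested above, assuming a hypothetical chats collection and made-up customer ids, a single collection can hold every tenant's data as long as each document carries its customerId and an index supports the per-tenant queries:

    db.chats.insertOne({ customerId: "cust-001", chatId: "c-42", messages: [] })
    // A compound index on the tenant key plus the usual query keys keeps
    // per-tenant lookups efficient without thousands of collections.
    db.chats.createIndex({ customerId: 1, chatId: 1 })
    db.chats.find({ customerId: "cust-001" })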
null
[ "queries", "crud" ]
[ { "code": " if (!userdata.inventory[itemAdded.internalName]) {\n await client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $set : { [`inventory.${itemAdded.internalName}`] : {} }\n }).then( async () => {\n await client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $inc : { [`inventory.${itemAdded.internalName}.quantity`] : 0 }\n });\n })\n\n }\n if (!userdata.inventory[itemAdded.internalName][`1`]) {\n handledID = '1'\n await client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $set : { [`inventory.${args[2]}.1`] : {} }\n });\n } else if (!userdata.inventory[itemAdded.internalName][`2`]) {\n handledID = '2'\n await client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $set : { [`inventory.${args[2]}.2`] : {} }\n });\n } else if (!userdata.inventory[itemAdded.internalName][`3`]) {\n handledID = '3'\n await client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $set : { [`inventory.${args[2]}.3`] : {} }\n });\nuserdata.inventory[itemAdded.internalName]undefinedundefined", "text": "I’m working with MongoDB and Javascript currently and ran into an issue I’m not sure how to go about.First, here I check if a certain property exists and if not, I create it:Afterwards, I’m trying to work with the property:The issue is, this command only works from the second try. On every new user I run it on, it always fails at the first time, but works from the second one. For the first time userdata.inventory[itemAdded.internalName] returns undefined, even though if it didn’t exist I just created (and awaited) it a few lines above.But if I run the command again on the same user (or on anyone who I am not running it on for the first time) it works flawlessly, it recognizes the data existing and is able to work with it. But the issue is that it should work for the first time too. I tried calling the second code block with a 30(!) second delay and it still returns undefined at the first try. Why am I not able to access the data I just created within the same command?", "username": "Lord_Wasabi" }, { "code": "", "text": "Hi @Lord_Wasabi,Can you share a full code snippet.and the error?I am not sure where is userdata intialized? Do you query it somehow?I am not certain why in $set you have brackets on the left side? 
What are you trying to do push an empty object to a new array??Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "const amountAdded = parseInt(args[1], 10)\nconst itemAdded = await client.db.itemdata.findOne({internalName: `${args[2]}`})\nconst userdata = await client.db.userdata.findOne({id: targetUser.id})\n\nif (itemAdded.type === 'pet') {\n\nlet handledID = 0\n\n if (!userdata.inventory[itemAdded.internalName]) {\n await client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $set : { [`inventory.${itemAdded.internalName}`] : {} }\n }).then( async () => {\n await client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $inc : { [`inventory.${itemAdded.internalName}.quantity`] : 0 }\n });\n })\n\n }\n\n if (!userdata.inventory[itemAdded.internalName][`1`]) {\n handledID = '1'\n await client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $set : { [`inventory.${args[2]}.1`] : {} }\n });\n } else if (!userdata.inventory[itemAdded.internalName][`2`]) {\n handledID = '2'\n await client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $set : { [`inventory.${args[2]}.2`] : {} }\n });\n } else if (!userdata.inventory[itemAdded.internalName][`3`]) {\n handledID = '3'\n await client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $set : { [`inventory.${args[2]}.3`] : {} }\n });\n } \n\n\n if (handledID !== 0) {\n await client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $set : { [`inventory.${args[2]}.${handledID}.rarity`] : args[3] }\n });\n\n await client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $set : { [`inventory.${args[2]}.${handledID}.lvl`] : 1 }\n });\n\n await client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $set : { [`inventory.${args[2]}.${handledID}.currentXP`] : 0 }\n });\n } \n}\n(node:1364) UnhandledPromiseRejectionWarning: TypeError: Cannot read property '1' of undefined\nat Object.execute (C:\\Users\\wasab\\Desktop\\Csibebot\\commands\\dev\\giveitem.js:90:60)\nat processTicksAndRejections (internal/process/task_queues.js:93:5)\n(node:1364) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)\n", "text": "Thanks for your reply, of course it was not the whole code as it’s over 200 lines long and most of it is unrelated. However, I’ll try to post the full relevant code below:Running it for the first time on a user results in this error:However, running it on them again works fine. I’d just like to make it work on the first try aswell.", "username": "Lord_Wasabi" }, { "code": "if (!userdata.inventory[itemAdded.internalName][])if (!userdata)await client.db.userdata.updateOne({\n id: targetUser.id,\n }, {\n $push : {`inventory.${itemAdded.internalName}`: {quantity : 1} }\n })\n...\n", "text": "Hi @Lord_Wasabi,Your issue is on the java script side, since you cannot access members of non existing array. During your first run mongoDB returns nothing so you can’t access a member of undefined js object.Instad of if (!userdata.inventory[itemAdded.internalName][1]) I think you should just ask if (!userdata) …There is no array elements in inventory so how can you access a numbered object in mongodb statmenets? What are you trying to do ? 
Insert empty objects into an array?If yes this is done via a $push operator:Also for all the others, do same opertion without specifying array numbers…Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "if (!userdata.inventory[itemAdded.internalName][])if (!userdata)userdataInventoryitemAdded.internalName1userdata.inventory[itemAdded.internalName", "text": "Instad of if (!userdata.inventory[itemAdded.internalName][ 1 ]) I think you should just ask if (!userdata)The thing is, this is not the only thing I store about users. Their userdata profile could exist without this property existing in it.There is no array elements in inventory so how can you access a numbered object in mongodb statmenets?There are no arrays at all. Inventory is an object, with other objects in them (itemAdded.internalName in this case), which have other objects in them, called numbers. I’m trying to see if an object called 1 exists in userdata.inventory[itemAdded.internalName and if not, create it as a new object and add further properties to it", "username": "Lord_Wasabi" }, { "code": "", "text": "Here is an example of the structure", "username": "Lord_Wasabi" }, { "code": "if (userdata.inventory) \n{\nvar internalName = userdata.inventory[itemAdded.internalName][\"1\"];\nif (!internalName) {\n...\n}\n", "text": "Ok I see,This is not the best way to represent numeric positioning an you might hit lots of JS issues , why not to use arrays of objects which is the proper way.If you have no option to change this data structI would than suggest you to first set the data into a variable and then test it. For example:But this is pure JS topic. I would suggest exploring using arrays instead of numeric fields…Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks for the reply. I wouldn’t like to clutter up the forum with JS topics, so I’ll look into restructuring my database or using the solution you offered", "username": "Lord_Wasabi" } ]
Issue with accessing the data I just created
2021-05-17T17:12:41.341Z
Issue with accessing the data I just created
3,063
null
[]
[ { "code": "", "text": "Using the native Mongdb Node.js driver (3.6.6) and am trying to work with date.I’m having issues with dates. All my calls to the DB seem to use standard JSON and not EJSON. For example, the following object for import fails. {“EventDate”: {\"$date\":“2021-05-03T23:24:00.000Z”}}Also, when data is returned, the format is standard JSON. Is there some way to enable or use EJSON in the driver?", "username": "Cory_Engel" }, { "code": "mongoimport{\"EventDate\": {\"$date\":\"2021-05-03T23:24:00.000Z\"}}{\"$date\":\"2021-05-03T23:24:00.000Z\"}mongoimport --db=test --collection=json_coll --file=inp-json.jsonmongo{ \"_id\" : ObjectId(\"60a334a8c568b2172d52e65d\"), \"EventDate\" : ISODate(\"2021-05-03T23:24:00Z\") }", "text": "Hello @Cory_Engel, welcome to the MongoDB Community forum!… the following object for import fails. {“EventDate”: {“$date”:“2021-05-03T23:24:00.000Z”}}I tried the following procedure to import the above data using mongoimport.The JSON file content: {\"EventDate\": {\"$date\":\"2021-05-03T23:24:00.000Z\"}}The date data representation {\"$date\":\"2021-05-03T23:24:00.000Z\"} is Extended JSON.Import:mongoimport --db=test --collection=json_coll --file=inp-json.jsonQueried from mongo shell:{ \"_id\" : ObjectId(\"60a334a8c568b2172d52e65d\"), \"EventDate\" : ISODate(\"2021-05-03T23:24:00Z\") }The import is successful and the created document is fine too.MongoDB stores data as BSON types, and there is a Date type too.", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks. I’m able to do that too, and also using Compass or the shell. But the issue is via the Mongodb driver for node.js. When I try it via the driver, it fails.", "username": "Cory_Engel" }, { "code": "", "text": "Hello @Cory_Engel, please share the code you had tried.", "username": "Prasad_Saya" }, { "code": "exports.create = async (req, res) => {\n try {\n console.log(req.body);\n const Result = await db\n .getDb()\n .db()\n .collection(req.params.collection)\n .insertOne(req.body);\n res.status(201).json({\n status: 'Success',\n data: {\n entry: Result\n }\n });\n } catch (err) {\n res.status(404).json({\n status: 'Failed',\n message: err\n });\n }\n};\n", "text": "Here is the relevant snippet. In the failing case, req.body = {“EventDate”: {\"$date\":“2021-05-03T23:24:00.000Z”}}Thanks for your assistance!", "username": "Cory_Engel" }, { "code": "Error: key $date must not start with '$'mongo${ $fld: \"some value\" }\n{ fld: { $sub-fld: \"some value\" } }\n", "text": "Hello @Cory_Engel,I tried to insert the document using NodeJS driver v3.6.3 and MongoDB v4.2.8 - as you had mentioned it fails with an error: Error: key $date must not start with '$'But, I could insert the same document via mongo shell. There is a rule that only top-level fields with names starting with a $ cannot be inserted (see Documents - Field Names). In the shell, the following first document fails, but the second gets inserted.But, with NodeJS Driver code, both the inserts fail.I saw this post on Stack Overflow with some similar discussion: How to Insert records into mongo with Node where records have an $oid .", "username": "Prasad_Saya" } ]
Using EJSON in NodeJS
2021-05-17T15:56:20.017Z
Using EJSON in NodeJS
3,655
https://www.mongodb.com/…9a3d90290dbc.png
[ "queries" ]
[ { "code": "", "text": "Hi, are there any MongoDB loop operators that I can use or I need to wrap the operation in a forEach?image1005×576 42.4 KB", "username": "Christian_Angelo_15065" }, { "code": "", "text": "The following should fill you need:", "username": "steevej" }, { "code": "", "text": "Thanks, it solves my problem.image974×843 53.7 KB", "username": "Christian_Angelo_15065" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb loop operator
2021-05-16T07:11:06.092Z
Mongodb loop operator
1,866
https://www.mongodb.com/…0_2_1024x537.png
[ "unity", "developer-hub" ]
[ { "code": "", "text": "Hi Everyone,You may or may not be aware, but we have a Realm SDK for Unity in the works! To give everyone a taste, I’ve cooked up a tutorial that makes use of it:Learn how to use Unity and the Realm SDK for Unity to build an infinite runner style game.The tutorial is for building an infinite runner type game. Think Temple Run or Subway Surfer where you kind of run forever dodging obstacles and collecting rewards.The SDK is currently alpha so expect that there could be bugs. As the SDK progresses and improves, I’ll be sharing more content on how to build amazing games that persist to Realm and MongoDB!Please don’t hesitate to reach out if you have any questions.Best,", "username": "nraboy" }, { "code": "public class PlayerStats : RealmObject\n{\n // https://academy.realm.io/posts/realm-primary-keys-tutorial/\n [PrimaryKey]\n public string Username { get; set; }\n\n public RealmInteger<int> Score { get; set; }\n\n public PlayerStats() { }\n\n public PlayerStats(string username, int score)\n {\n this.Username = username;\n this.Score = score;\n }\n\n}\n", "text": "Hi there,I’ve been following this tutorial.\nCan somebody please explain to me why the method PlayerStats() is being declared twice in this script?Thanks!", "username": "Manuel_Tausch" }, { "code": "", "text": "Hi @Manuel_Tausch,Are you referring to the constructor method and the overloaded constructor method? Since it is a class, we need to be able to set the variables. We use the overloaded version throughout the project, but Realm has a requirement that we have a basic constructor as well.Does that help?", "username": "nraboy" }, { "code": "", "text": "Hi Nic,Thanks for your reply!\nI guess I haven’t seen this syntax before, especially since I’m new to C#, coming from the Python side of things.\nSo you class definition is PlayerStats, then your basic constructor is “public PlayerStats() { }” and the overloaded version is “public PlayerStats(string username, int score)”?\nDo you have a link by any chance where I can read up about this in detail?\nI’m definitely curious as to why you need to declare this constructor twice or is that just some quirk that I need to accept and not wonder too much about it?Cheers,\nManu", "username": "Manuel_Tausch" }, { "code": "", "text": "Hey @Manuel_Tausch,This link might help for method and constructor overloading:A Computer Science portal for geeks. It contains well written, well thought and well explained computer science and programming articles, quizzes and practice/competitive programming/company interview Questions.If you can accomplish what you want using the default constructor, then you don’t need to create others. However, you can create as many variations as you want, as long as you have the default, even if you don’t plan to use the default.Best,", "username": "nraboy" }, { "code": "", "text": "Awesome, thanks a lot! I will review that 100%!!!Cheers,\nM", "username": "Manuel_Tausch" } ]
Build an Infinite Runner Game with Unity and the Realm Unity SDK
2021-03-15T15:55:01.706Z
Build an Infinite Runner Game with Unity and the Realm Unity SDK
4,507
null
[]
[ { "code": "", "text": "In the web ui my Atlas cluster is showing a warning icon to the left of the cluster name and when clicked on is also showing a yellow warning icon next to the primary and both secondary nodes.I have checked my network access setup was allow from anywhere and have not hit any operation limits I am aware of, in addition the Atlas platform status is showing no issues and aws is showing no issues.I changed my network access setup to my current ip to try to force the cluster to respond but now the cluster is stuck on “configuring MongoDB”If I try to connect through the mongo shell I get multiple CONNPOOL and NETWORK errors:\n I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to mba-cluster-shard-00-01.s6jpa.mongodb.net:27017\n W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set atlas-kshaii-shard-0Is there any way to fix this or even understand what the warnings are?Thankfully this is my development cluster but the lack of any information and the cluster just failing is seriously worrying!", "username": "mba_cat" }, { "code": "", "text": "We’re having the same issue here. Our test M0 cluster is in AWS eu-central-1 and is inaccessible.I tried to add my IP and also upgrade it to M2 but now it is stuck on\n“We are deploying your changes (current action: configuring MongoDB)”", "username": "Jurn_Ho" }, { "code": "", "text": "Maybe a platform issue then - my cluster is in the same location aws eu-central-1Is this only impacting your M0 test cluster?", "username": "mba_cat" }, { "code": "", "text": "My cluster has just come back online as of 2021-05-17 21:03 London, approximate outage was around 20 minutes …Edit: The realm app connected to the cluster was obviously failing during the outage, looks like its performance was degraded but is now returning to the baseline.", "username": "mba_cat" }, { "code": "", "text": "Ours is also up again now, (the upgrade to M2 failed).We have other M0 clusters and they were unaffected (also in eu-central-1).", "username": "Jurn_Ho" }, { "code": "", "text": "Strange, the only changes I was making at the time were to my Mongo Realm application function which deployed successfully, then when running my unit tests against the endpoint the cluster began returning errors and when looking at the M0 Atlas cluster in the Mongo dashboard it was showing the warning triangle for the cluster and all nodes.There was no additional information in either the activity log or any notifications and the system status was showing no issues for mongo and aws.Just to confirm this was the warning icon;Screenshot 2021-05-17 at 21.02.32816×447 41.5 KBLooks like we will need a Mongo support agent to comment if there were api / network / load / platform issues for the period 2021-05-17 20:24 => 2021-05-17 21:03", "username": "mba_cat" }, { "code": "", "text": "same problem about 30 minutes ago, now its seem to work fine and no warning icon", "username": "Juan_Diaz_1" } ]
Cluster showing warning icon but no information
2021-05-17T19:55:23.320Z
Cluster showing warning icon but no information
5,643
null
[]
[ { "code": "db.playerTest.updateMany({},{$set: {\"clubs\": [[ { \"clubId\": ObjectId(\"6076030465508936f00e086c\")}, {\"name\": \"Augusta National Golf Club\"}, {\"nickName\": \"Augusta\"}, {\"logoPath\": \"augusta.png\"}]]}})ArrayObjects{ acknowledged: true,\n insertedId: null,\n matchedCount: 0,\n modifiedCount: 0,\n upsertedCount: 0 }\n", "text": "The following code does not seem to update the 2 documents I have in this collection:db.playerTest.updateMany({},{$set: {\"clubs\": [[ { \"clubId\": ObjectId(\"6076030465508936f00e086c\")}, {\"name\": \"Augusta National Golf Club\"}, {\"nickName\": \"Augusta\"}, {\"logoPath\": \"augusta.png\"}]]}})It needs to be an Array of ObjectsIt returns the following output, which suggests the syntax is good, but is not being applied:", "username": "Dan_Burt" }, { "code": "> db.playerTest.find()\n{ \"_id\" : 0 }\n{ \"_id\" : 1 }\n> db.playerTest.updateMany({},{$set: {\"clubs\": [[ { \"clubId\": ObjectId(\"6076030465508936f00e086c\")}, {\"name\": \"Augusta National Golf Club\"}, {\"nickName\": \"Augusta\"}, {\"logoPath\": \"augusta.png\"}]]}})\n{ \"acknowledged\" : true, \"matchedCount\" : 2, \"modifiedCount\" : 2 }\n> db.playerTest.find()\n{ \"_id\" : 0, \"clubs\" : [ [ { \"clubId\" : ObjectId(\"6076030465508936f00e086c\") }, { \"name\" : \"Augusta National Golf Club\" }, { \"nickName\" : \"Augusta\" }, { \"logoPath\" : \"augusta.png\" } ] ] }\n{ \"_id\" : 1, \"clubs\" : [ [ { \"clubId\" : ObjectId(\"6076030465508936f00e086c\") }, { \"name\" : \"Augusta National Golf Club\" }, { \"nickName\" : \"Augusta\" }, { \"logoPath\" : \"augusta.png\" } ] ] }\n", "text": "db.playerTest.updateMany({},{$set: {“clubs”: [[ { “clubId”: ObjectId(“6076030465508936f00e086c”)}, {“name”: “Augusta National Golf Club”}, {“nickName”: “Augusta”}, {“logoPath”: “augusta.png”}]]}})With the same code I get:I can see one of 2 thingsShare the output of:", "username": "steevej" }, { "code": "ArrayArraydb.playertest.updateMany({},{$set: {\"clubs\": [ { \"clubId\": ObjectId(\"6076030465508936f00e086c\"), \"name\": \"Augusta National Golf Club\", \"nickName\": \"Augusta\", \"logoPath\": \"augusta.png\"}]}})", "text": "I feel such a fool… collection wasn’t camelCase! Doh!Now the correct data is being applied, I also corrected that the previous syntax was created Array's of Array's. Updated to. by removing a set of square braces:db.playertest.updateMany({},{$set: {\"clubs\": [ { \"clubId\": ObjectId(\"6076030465508936f00e086c\"), \"name\": \"Augusta National Golf Club\", \"nickName\": \"Augusta\", \"logoPath\": \"augusta.png\"}]}})", "username": "Dan_Burt" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Shell - Insert new Array field
2021-05-17T12:27:45.862Z
MongoDB Shell - Insert new Array field
2,126
null
[]
[ { "code": "", "text": "Hi Team,Could you please elaborate 10 points or more Difference between Mongodb and microsoft SQL Server , why All are preferred Mongodb other than RDBMS", "username": "hari_dba" }, { "code": "", "text": "Hi @hari_dba,It surely depends upon the use cases.\nFor more info and differences kindly refer to this medium article.In case of any further questions, feel free to reach out.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "I do not think I have 10 but here what I have:", "username": "steevej" }, { "code": "", "text": "Hi @hari_dba,Can you give a little more context on what your use case is or what you are planning to build? I could think of many points to answer in terms of differences but they are all context dependant and are probably better asked in a general forum rather than in a course forum. Do you have a specific question around M100 that we can address or clarify in terms of why you want more elaboration on these differences or is there an improvement to a lesson or indeed a new lesson you think might be useful to add to this course?Kindest regards,\nEoin", "username": "Eoin_Brazil" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Major different between Mongodb and microsoft SQL server
2021-05-15T15:56:13.317Z
Major different between Mongodb and microsoft SQL server
2,119
null
[ "aggregation", "node-js" ]
[ { "code": "/**\n [\n { date: '2021-05-12T03:00:00+02:00', v: 19.81 },\n { date: '2021-05-12T04:00:00+02:00', v: 19.59 },\n { date: '2021-05-12T05:00:00+02:00', v: 19.31 },\n { date: '2021-05-12T06:00:00+02:00', v: 19.14 },\n { date: '2021-05-12T07:00:00+02:00', v: 18.02 },\n { date: '2021-05-12T08:00:00+02:00', v: 20.81 },\n { date: '2021-05-12T09:00:00+02:00', v: 24.91 },\n { date: '2021-05-12T10:00:00+02:00', v: 26.62 },\n...\n*/\ncollection.aggregate([\n{ $match: {_id: ObjectID(\"xxxxxxxxxxxxxxxxxxx\")}},\n{ \n\t$project: {\n\t\t [`mydataArray`]: {\n\t\t\t$filter: {\n\t\t\t\tinput: `$mydataArray`, // array i filterering\n\t\t\t\tas : \"item\", \n\t\t\t\tcond : { \n\t\t\t\t\t$and: \n\t\t\t\t\t\t[\n\t\t\t\t\t\t\t{ $gte :[\"$$item.date\", moment(from).format() ] },\n\t\t\t\t\t\t\t{ $lte :[\"$$item.date\", moment(to).format() ] } \n\t\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n]) // then how to delete ?????\n", "text": "Hello . I have this kind of data ( dates an values )\nI know how to select and return them but how to delete my selection in an aggregation /$project like this ??? no solution anywhere thanks a lot for your help", "username": "Upsylon_Developpemen" }, { "code": "", "text": "Check the example from the below link will be helpful.", "username": "Sudhesh_Gnanasekaran" }, { "code": "collection.updateOne(\n { _id: ObjectID(\"xxxxxxxxxxxxxxxxxxx\") },\n {\n $pull: {\n mydataArray: {\n date: {\n $gte: moment(from).format(),\n $lte: moment(to).format()\n }\n }\n }\n }\n)\n", "text": "Hello @Upsylon_Developpemen, Welcome to MongoDB Community Forum,It does not required aggregation if you want to update documents, as per your aggregation i can understand you have to delete array of object by combination of dates, so you can use update method with $pull operator,", "username": "turivishal" }, { "code": "", "text": "Many thanks for this solution! so simple like that!\nI really regret that the documentation only skims over things, The CRUD is only a small part of the job generally. It is really painful, that time wasted. luckily this forum is there.\nThanks again.", "username": "Upsylon_Developpemen" }, { "code": "", "text": "Helle thanks but remove is dont work with node driver ", "username": "Upsylon_Developpemen" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How delete a projection whith a $filter?
2021-05-13T05:57:17.908Z
How delete a projection whith a $filter?
2,235
https://www.mongodb.com/…7_2_1023x471.png
[ "atlas-triggers" ]
[ { "code": "", "text": "Hi,\nWe have an ordered Database trigger in realm on insert and replace to a collection that holds products. It runs a function, validating a product, to then send it to aws via kinesis.PutRecord. Everything works fine and on inserts of ~100 products the trigger takes max 1000ms per product.Recently we added more functionality to the function that the trigger runs. This functionality searches in other collections for data connected to the product, in order to find if any data is missing.\nAfter adding this functionality, the function itself takes max 4s (so 4000ms) to run per product (using the UI with the exact same data that the triggers receive).\nHowever, when the trigger now runs the function it times out, taking longer than 90000ms.\nWhat I find really strange, (except for the fact that the function should only take about 4000ms not 90000ms to run) is that the first 3 of 100 triggers seems to run as expected, until then all the remaining triggers time out\n\nScreenshot 2021-05-07 at 12.44.581530×704 80.9 KB\n\nAny idea of what could cause this issue?Worth mentioning is that it has worked sometimes. The exact same 100 products have been inserted and the triggers have been able to finish (with the same function) without reaching the time limit. The times have however, also in that case, escalated from ~6000ms to ~60000ms after the first few trigger runs.Very thankful for any help you could provide.", "username": "clakken" }, { "code": "", "text": "Hi @clakken, are you able to share the function and your trigger definition?", "username": "Andrew_Morgan" }, { "code": "{\n \"id\": \"5fa9016e8591bc0b25ca4207\",\n \"name\": \"productCollectionTrigger\",\n \"type\": \"DATABASE\",\n \"config\": {\n \"service_name\": \"mongodb-atlas\",\n \"database\": \"custom-made\",\n \"collection\": \"validatedProduct\",\n \"operation_types\": [\n \"INSERT\",\n \"REPLACE\"\n ],\n \"full_document\": true\n },\n \"function_name\": \"handleProductsCustomMadeAvailability\",\n \"disabled\": false\n}\nexports = async (changeEvent, testDatabase) => {\n const { fullDocument } = changeEvent\n const { artnocol, validation } = fullDocument\n\n const database = testDatabase || 'custom-made'\n const cluster = context.services.get('mongodb-atlas')\n\n const validatedCollection = cluster\n .db(database)\n .collection('validatedProduct')\n \n let missingMeasurements = []\n // ===> START: the following is what seems to affect the execution time (but only when called through the trigger)\n try {\n const { rulegroup, lectrabody: fit, lectracuff: cufflectra } = fullDocument\n missingMeasurements = await context.functions.execute(\n 'findMissingMeasurements',\n { rulegroup, fit, cufflectra },\n testDatabase\n )\n } catch (error) {\n console.error('findMissingMeasurements:', error, 'for product')\n }\n // <==== END\n\n const newAvailableCustomMade = validation ?\n validation.isValid &&\n validation.missingComponentTypes.length === 1 &&\n validation.missingComponentTypes.includes('size') &&\n validation.isInStock.result && \n missingMeasurements && \n missingMeasurements.length < 1\n : false \n\n try {\n await validatedCollection.updateOne({ artnocol }, { $set: { availableCustomMade: newAvailableCustomMade, missingData: missingMeasurements} }) \n } catch (error) {\n throw `Could not update document for product ${artnocol}: ${error}` \n }\n\n try {\n var fullUpdatedDocument = await validatedCollection.findOne({ artnocol })\n } catch (error) {\n throw `Could not find updated document for product 
${artnocol}: ${error}` \n }\n\n // send updated availability to Kinesis -> Product service\n try {\n const {\n artnocol,\n availableCustomMade,\n lastValidatedAt,\n priceGroup,\n stock,\n } = fullUpdatedDocument\n\n const type = 'custom-made'\n const message = {\n sku: artnocol,\n isAvailableCustomMade: availableCustomMade,\n lastValidatedAt,\n priceGroup: priceGroup || '',\n stock: stock || 0,\n }\n \n const awsService = context.services.get('aws')\n \n const response = await awsService\n .kinesis('eu-west-1')\n .PutRecord({\n Data: JSON.stringify({ type, message }),\n StreamName: context.values.get('aws-kinesis-stream'),\n PartitionKey: '1',\n })\n\n if (response) {\n console.log(`Succesfully sent custom made availability to kinesis for product ${artnocol}`)\n }\n } catch (error) {\n throw `Could not send custom made availability to kinesis for product ${artnocol}: ${error}`\n }\n}\nexports = async (input, testDatabase) => {\n const { rulegroup, fit, cufflectra } = input\n\n const cluster = context.services.get('mongodb-atlas')\n const database = 'custom-made-config'\n\n let sizesDocument = ''\n try {\n sizesDocument = await cluster.db(database).collection('sizesPerFit').findOne({ fitno: fit })\n } catch (error) {\n throw `findMissingMeasurements: ${error}`\n }\n\n if (!sizesDocument) {\n throw `findMissingMeasurements: could not find sizePerFit document for fit ${fit}`\n }\n \n const measurementPromiseArray = []\n\n for (const size of sizesDocument.sizes) {\n measurementPromiseArray.push(context.functions.execute(\n 'getDefaultMeasurements',\n {\n rulegroup,\n fitno: fit,\n size,\n cufflectra,\n },\n testDatabase\n ))\n }\n try {\n var measurementsPerSize = await Promise.all(measurementPromiseArray)\n } catch (error) {\n throw `findMissingMeasurements: getDefaultMeasurements: ${error}`\n }\n\n const measurementPoints = ['neck','waist','chest','length','cuffright','cuffleft', 'sleeveright', 'sleeveleft']\n const missingMeasurements = []\n\n for (let i = 0; i < sizesDocument.sizes.length; i++) {\n const missingMeasurementsPerSize = { size: sizesDocument.sizes[i], missingMeasurements: [] }\n\n for (const measurementPoint of measurementPoints) {\n if (!(measurementPoint in measurementsPerSize[i])) {\n missingMeasurementsPerSize.missingMeasurements.push(measurementPoint)\n }\n }\n if (missingMeasurementsPerSize.missingMeasurements.length > 0) {\n missingMeasurements.push(missingMeasurementsPerSize)\n }\n }\n return missingMeasurements\n}\nexports = async (input, testDatabase) => {\n const { rulegroup, fitno, size, cufflectra } = input\n\n const expectedInput = {\n rulegroup,\n fitno,\n size,\n cufflectra,\n }\n\n const missingInformation = Object.keys(expectedInput).reduce((acc, key) => {\n return !expectedInput[key] ? 
[...acc, `${key}: ${expectedInput[key]}`] : acc\n }, [])\n\n if (missingInformation.length) {\n throw `getDefaultMeasurements: Missing the following input: ${missingInformation.join(\n ', '\n )}`\n }\n\n const database = testDatabase || 'custom-made-component'\n const db = context.services.get('mongodb-atlas').db(database)\n const collection = db.collection('measurements')\n\n let validMeasurements = {}\n\n const measurementPoints = {\n NECK: 'neck',\n WAIST: 'waist',\n CHEST: 'chest',\n LENGTH: 'length',\n CUFF_RIGHT: 'cuff right',\n CUFF_LEFT: 'cuff left',\n SLEEVE_RIGHT: 'right sleeve',\n SLEEVE_LEFT: 'left sleeve',\n SHORT_SLEEVE: 'short sleeve',\n }\n\n // Function to remove the ugly \"Cuff <LECTRA> Left/Right\" from the measurementpoint.\n const removeLectraFromPoint = (measurementPoint) => {\n let pointWords = measurementPoint.split(' ')\n if (\n pointWords[0] === 'cuff' &&\n (pointWords[2] === 'right' || pointWords[2] === 'left')\n ) {\n pointWords.splice(1, 1)\n } else if (pointWords[0] === 'short') {\n pointWords.splice(2, 1)\n }\n return pointWords.join(' ')\n }\n\n const setMeasurementByPoint = (measurement) => {\n const actualPointName = removeLectraFromPoint(measurement.measuementpoint)\n switch (actualPointName) {\n case measurementPoints.NECK:\n validMeasurements = {\n ...validMeasurements,\n neck: measurement.defaultvalue,\n }\n break\n case measurementPoints.WAIST:\n validMeasurements = {\n ...validMeasurements,\n waist: measurement.defaultvalue,\n }\n break\n case measurementPoints.CHEST:\n validMeasurements = {\n ...validMeasurements,\n chest: measurement.defaultvalue,\n }\n break\n case measurementPoints.LENGTH:\n validMeasurements = {\n ...validMeasurements,\n length: measurement.defaultvalue,\n }\n break\n case measurementPoints.CUFF_RIGHT:\n validMeasurements = {\n ...validMeasurements,\n cuffright: measurement.defaultvalue,\n }\n break\n case measurementPoints.CUFF_LEFT:\n validMeasurements = {\n ...validMeasurements,\n cuffleft: measurement.defaultvalue,\n }\n break\n case measurementPoints.SLEEVE_RIGHT:\n validMeasurements = {\n ...validMeasurements,\n sleeveright: measurement.defaultvalue,\n }\n break\n case measurementPoints.SLEEVE_LEFT:\n validMeasurements = {\n ...validMeasurements,\n sleeveleft: measurement.defaultvalue,\n }\n break\n case measurementPoints.SHORT_SLEEVE:\n validMeasurements = {\n ...validMeasurements,\n sleeveright: measurement.defaultvalue,\n sleeveleft: measurement.defaultvalue,\n }\n break\n default:\n return\n }\n }\n\n try {\n const measurements = await collection\n .find({\n rulegroup,\n fitno,\n size,\n })\n .toArray()\n\n if (measurements && measurements.length > 0) {\n const isValidForCuffPromiseArray = []\n\n measurements.forEach(measurement => {\n isValidForCuffPromiseArray.push(context.functions.execute(\n 'getMeasurementPointValidForCuff',\n { cufflectra },\n { measuementpoint: measurement.measuementpoint },\n testDatabase\n ))\n })\n\n const isValidForCuffArray = await Promise.all(isValidForCuffPromiseArray)\n \n isValidForCuffArray.forEach((isValidForCuff, index) => {\n if (isValidForCuff && isValidForCuff.result) {\n setMeasurementByPoint(measurements[index])\n }\n })\n \n } else {\n throw `getDefaultMeasurements: Cannot find any measurements matching: { rulegroup: ${rulegroup}, fitno: ${fitno}, size: ${size} }`\n }\n\n return validMeasurements\n } catch (error) {\n throw error\n }\n}\nexports = async (input, source, testDatabase) => {\n const { cufflectra, cuffgroup, fitno, rulegroup } = input\n const { measuementpoint } = 
source\n\n if (!measuementpoint) {\n return {\n error: `getMeasurementPointValidForCuff: No measurementpoint was included in the source argument`\n }\n }\n\n const database = testDatabase || 'custom-made-component'\n\n const cluster = context.services.get('mongodb-atlas')\n const db = cluster.db(database)\n const configCollection = cluster\n .db('custom-made-config')\n .collection('componentDetailPossibilities')\n\n const chosenCuff =\n (cufflectra || cuffgroup) &&\n (await db.collection('cuffs').findOne({\n ...(cuffgroup && { cuffgroup }),\n ...(rulegroup && { rulegroup }),\n ...(fitno && { fitno }),\n ...(cufflectra && { cufflectra }),\n }))\n\n // Find digits in measuementpoint\n const lectra = measuementpoint.match(/(\\d+)/)\n\n var lectraMatch = true\n if (lectra) {\n if (chosenCuff && (cufflectra || (cuffgroup && fitno && rulegroup)))\n lectraMatch = chosenCuff.cufflectra === lectra[0]\n else\n return {\n error: `Cuff measurement can not be determined 'valid for cuff' without cuffgroup, rulegroup and fit, or cufflectra`,\n }\n }\n\n if (chosenCuff) {\n const cuffDetailPossibilities = await configCollection.findOne({\n componentname: 'cuff',\n })\n const componentvalue = chosenCuff[cuffDetailPossibilities.componentkey] // sleevelength = \"Short\" || \"Long\"\n const detailoptionobjects = cuffDetailPossibilities.detailoptionobjects\n\n for (const object of detailoptionobjects) {\n if (object.stringValue === componentvalue) {\n var measurementPointsValidForCuff = object.measurementpoints\n }\n }\n }\n const pointMatch = measurementPointsValidForCuff?.filter((point) => {\n // Find spaces in point\n if (/\\s/.test(point)) {\n const words = point.split(' ')\n return (\n measuementpoint.includes(words[0]) && measuementpoint.includes(words[1])\n )\n }\n return measuementpoint.includes(point)\n })\n\n return {\n result: pointMatch ? pointMatch.length > 0 && lectraMatch : false,\n }\n}", "text": "Yes, sure!\nThere is quite alot of code, pasting the functions in the order they are called.Trigger definition:handeProductsCustomMadeAvailability:findMissingMeasurements:getDefaultMeasurements:getMeasurementPointValidForCuff:", "username": "clakken" }, { "code": "", "text": "Hi again,\nI made a try run with the code that previously seemed to cause the time out, and without having made any changes to the code whatsoever from the previously failed runs, it now works as expected.\nIt does however make me feel quite unsure of realm, seeing as the same implementation can result in different outcomes…", "username": "clakken" } ]
Trigger timing out (>90000ms) when function itself only takes 2000-4000s
2021-05-07T11:00:36.176Z
Trigger timing out (&gt;90000ms) when function itself only takes 2000-4000s
2,962
null
[ "aggregation" ]
[ { "code": "c{\"name\": \"A\", \"version\": 1, \"cost\": 5, \"value\": 3},\n{\"name\": \"A\", \"version\": 2, \"cost\": 10, \"value\": 2},\n{\"name\": \"B\", \"version\": 1, \"cost\": 3, \"value\": 5},\n{\"name\": \"B\", \"version\": 3, \"cost\": 7, \"value\": 2}\n{\"name\": \"A\", \"version\": 2, \"cost\": 10, \"value\": 2},\n{\"name\": \"B\", \"version\": 3, \"cost\": 7, \"value\": 2}\ndb.c.aggregate([\n {\n $group: {\n _id: {\"name\": \"$name\"},\n \"max(c_version)\": {$max: \"$version\"}\n }\n },\n {\n $project: {\"name\": \"$_id.name\", \"version\": \"$max(c_version)\", \"_id\": 0}\n }\n])\nwith max_version as (\n select\n name,\n max(version) as version\n from c\n group by\n name\n)\nselect\n c.cost, c.name, c.value, c.version\nfrom c join max_version\non c.name = max_version.name\nand c.version = max_version.version;\n", "text": "I have a collection c of documents that look like this:Across all documents, name + version is a unique identifier (e.g., there is only one document with name A and version 1).For each name, I want to select the document with the largest version. In this case,I can group to get the largest version for each name,but then of course I lose information on the other columns.In SQL, I’d use a subquery or CTE, but this doesn’t seem possible with mongo:", "username": "Luis_de_la_Torre" }, { "code": "db.test.aggregate([\n { $sort: { name: 1, version: -1 } },\n { $group: { _id: \"$name\", doc_with_max_ver: { $first: \"$$ROOT\" } } },\n { $relaceWith: \"$doc_with_max_ver\" }\n])", "text": "Hello @Luis_de_la_Torre, welcome to the MongoDB Community forum!You can try this approach:", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you! There is a typo (relace), but this is exactly what I needed.", "username": "Luis_de_la_Torre" }, { "code": "{ $relaceWith: \"$doc_with_max_ver\" }{ $replaceWith: ...", "text": "{ $relaceWith: \"$doc_with_max_ver\" }@Luis_de_la_Torre, you are correct. It should be { $replaceWith: ...", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Selecting documents with largest value of a field
2021-05-14T21:16:14.061Z
Selecting documents with largest value of a field
13,816
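On larger collections, the leading $sort in this pipeline benefits from a compound index matching the sort keys, for example:

    db.test.createIndex({ name: 1, version: -1 })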
null
[ "aggregation", "golang" ]
[ { "code": "(AtlasError) coordinates.coordinates is not allowed in this atlas tier\nmatchStage := bson.M{\n\t\"coordinates.coordinates\": bson.M{\n\t\t\"$geoWithin\": bson.M{\n\t\t\t\"$geometry\": bson.M{\n\t\t\t\t\"type\": \"Polygon\",\n\t\t\t\t\"coordinates\": coordinates,\n\t\t\t},\n\t\t},\n\t},\n}\n", "text": "Greetings,I’m using Go with the official MongoDB driver to run an aggregation query with three stages (match, addFields, project), but I’m getting the following error:The exact same query works fine in MongoDB Compass and the same query without the match stage works in Go, so the issue appears to be with the match stage:The same query works in Go as a find query, so I’m guessing that the free tier M0 that I’m currently using limits geospatial matching in aggregation, but I’m not quite sure since this limitation is not listed among the known limitations.The size of the whole collection is only 4 MB and I’m using the latest version of the driver as well as Go.Could you please advise on:Thank you!", "username": "SMthefirst" }, { "code": "coordinates.coordinates geoWithinStage := bson.D{{\"coordinates.coordinates\", bson.D{\n\t\t{\"$geoWithin\", bson.D{\n\t\t\t{\"$geometry\", bson.D{\n\t\t\t\t{\"type\", \"Polygon\"},\n\t\t\t\t{\"coordinates\", coordinates}},\n\t\t\t}},\n\t\t}},\n\t}}\n\tpipeline := mongo.Pipeline{\n\t\t{{\"$match\", geoWithinStage}},\n\t}\n\tcursor, err := collection.Aggregate(context.Background(), pipeline)\npipelinefind()$matchcoordinates.coordinates$match$geoWithin", "text": "Hi @SMthefirst, welcome to the forums!Given the information provided, it seems that this error is because the aggregation stage specified coordinates.coordinates at the top level instead of $match.For example you should be able to use the following code:You can also print pipeline variable in the above example to see the Aggregation Pipeline before being sent to the server.The same query works in Go as a find query, so I’m guessing that the free tier M0 that I’m currently using limits geospatial matching in aggregationThis is likely because you can use $geoWithin in find() without specifying the $match stage. The first entry is the name of the field instead of an operator i.e. coordinates.coordinates(AtlasError) coordinates.coordinates is not allowed in this atlas tierIn this case, the error message is misleading. I’ll raise this issue internally for an improvement.\nAs you should be able to use $match with $geoWithin operator in M0 Free Tier (lowest tier)If the issue persists, please provide:Regards,\nWan.", "username": "wan" }, { "code": "\"$match\": bson.M{ … },matchStagebson.M{}", "text": "Thank you for your help, sir!This was a very silly oversight on my part and then instead of solving the issue with the query, I got confused by the misleading error message and looked for solutions elsewhere (thank you for addressing that internally).A simple addition of \"$match\": bson.M{ … }, at the top level of my matchStage query solved the issue (I prefer the bson.M{} syntax because it’s cleaner, but please let me know if this could present further issues in this context).", "username": "SMthefirst" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" }, { "code": "bson.M{}bson.M", "text": "Hi @SMthefirst,Glad you got it working!I prefer the bson.M{} syntax because it’s cleaner, but please let me know if this could present further issues in this contextThe difference here is that bson.D is an ordered representation of a BSON document. This should be used when the order of the elements matters, such as MongoDB command documents. If the order of the elements does not matter, you can use bson.M instead.In the context of a single $match pipeline stage as the example shown here, you could use bson.M.Regards,\nWan.", "username": "wan" } ]
(AtlasError) [key] is not allowed in this atlas tier
2021-04-24T17:57:30.945Z
(AtlasError) [key] is not allowed in this atlas tier
8,588
null
[ "data-modeling", "swift", "atlas-device-sync" ]
[ { "code": "", "text": "Error:Ending session with error: failed to validate upload changesets: SET instruction had incorrect partition value for key “_partition” { expectedPartition: {6051e7da417a6f01bb8a3323}, foundPartition: } (ProtocolErrorCode=212)Logs:[ “Session was active for: 0s” ]Partition:6051e7da417a6f01bb8a3323Session Metrics:{}Remote IP Address:101.53.254.85SDK:Realm Cocoa v10.7.2Platform Version:Version 14.4.1 (Build 18D61)", "username": "Muhammad_Awais" }, { "code": "", "text": "@Muhammad_Awais Generally this means that the client is sending an object which has a different partitionKey value than the one used to open the realm. This is not allowed by the system and is also why we would recommend not setting the Realm Object partitionKey value yourself manually in code - to avoid these errors. You can leave the partitionKey out of your schema and the system will fill it in for you.", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot find 'RealmProperty' in scope
2021-04-16T11:24:36.241Z
Cannot find &lsquo;RealmProperty&rsquo; in scope
3,016
https://www.mongodb.com/…1_2_1024x449.png
[ "app-services-data-access" ]
[ { "code": "", "text": "I’m trying to wrap my head around Realm and the Rules section where first, I need to apply rules to collections. I have a “users” collection (people who sign up to use the application\") and a, for the sake of this topic a “posts” collection.\nDuring the Rules setup I see these template options. What is a “sharing list” and what would a possible use case be for it given my two collections?Screen Shot 2021-05-14 at 1.27.14 PM1175×516 55.8 KB", "username": "Andrew_W" }, { "code": "", "text": "@Andrew_W If you select that template the UI will display the configuration parameters you will need to set. In the sharing list case, each document will have an array field which contains the userIds for users which should be allowed to view that document.", "username": "Ian_Ward" } ]
What is a sharing list?
2021-05-14T17:31:42.457Z
What is a sharing list?
4,130
null
[ "queries", "replication" ]
[ { "code": "db.oplog.rs.find().sort(\n{$natural:-1}\n)\n", "text": "I have a PSA replica cluster, and after doing an insert I can see entries in collections but unable to find entries in oplog with this command.I was always able to see all changes in oplog with this command in a single node replicaset but this command is not working in PSA setup. Does this always show latest entries in oplog?", "username": "Sameer_Kattel" }, { "code": "localuse localdb.getSiblingDB('local').oplog.rs.find().sort({$natural:-1})", "text": "Does this always show latest entries in oplog?Yes.The oplog is in the local database, so use local before the query. Or you can do it in a one liner:\ndb.getSiblingDB('local').oplog.rs.find().sort({$natural:-1})", "username": "chris" }, { "code": "", "text": "db.getSiblingDB(‘local’).oplog.rs.find().sort({$natural:-1})Thanks, I am already running it in local db and I can see other entries.Now I see the issue, can’t see oplog entries when doing inserts in a transaction. Inserts are committed but still can’t see entries in oplog.When inserts are not done in a transaction can see oplog entries for the inserts done", "username": "Sameer_Kattel" }, { "code": "opcapplyOps{\n\t\"lsid\" : {\n\t\t\"id\" : UUID(\"0fbbf122-a1c0-43f5-8253-b8a9ede5968b\"),\n\t\t\"uid\" : BinData(0,\"Y5mrDaxi8gv8RmdTsQ+1j7fmkr7JUsabhNmXAheU0fg=\")\n\t},\n\t\"txnNumber\" : NumberLong(3),\n\t\"op\" : \"c\",\n\t\"ns\" : \"admin.$cmd\",\n\t\"o\" : {\n\t\t\"applyOps\" : [\n\t\t\t{\n\t\t\t\t\"op\" : \"i\",\n\t\t\t\t\"ns\" : \"mydb1.foo\",\n\t\t\t\t\"ui\" : UUID(\"e479700c-d729-49f0-8917-e9bf4eb43831\"),\n\t\t\t\t\"o\" : {\n\t\t\t\t\t\"_id\" : ObjectId(\"609fdc4c34d8db74b91a333a\"),\n\t\t\t\t\t\"abc\" : 1\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"op\" : \"i\",\n\t\t\t\t\"ns\" : \"mydb2.bar\",\n\t\t\t\t\"ui\" : UUID(\"1234f9c0-696e-48a6-99c1-78bf59154093\"),\n\t\t\t\t\"o\" : {\n\t\t\t\t\t\"_id\" : ObjectId(\"609fdc4c34d8db74b91a333b\"),\n\t\t\t\t\t\"xyz\" : 999\n\t\t\t\t}\n\t\t\t}\n\t\t]\n\t},\n\t\"ts\" : Timestamp(1621089356, 3),\n\t\"t\" : NumberLong(1),\n\t\"wall\" : ISODate(\"2021-05-15T14:35:56.153Z\"),\n\t\"v\" : NumberLong(2),\n\t\"prevOpTime\" : {\n\t\t\"ts\" : Timestamp(0, 0),\n\t\t\"t\" : NumberLong(-1)\n\t}\n}\n\n", "text": "@Sameer_KattelHere is an oplog from the Callback API example, I used Python.You will see that the transaction is an op of type c. The individual operations of the transaction are in the applyOps array.", "username": "chris" }, { "code": "", "text": "Thanks!\nI realize my mistake, was looking for “op”: “i” entries and was looking for my collection “ns” .", "username": "Sameer_Kattel" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Not able to find data modifications entries in oplog in PSA setup
2021-05-14T04:54:49.411Z
Not able to find data modifications entries in oplog in PSA setup
2,882
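To surface only the transaction entries described above, the same oplog query can filter on the applyOps field, for example:

    // Transactions appear as a single "c" (command) entry whose o.applyOps array
    // holds the individual writes of the transaction.
    db.getSiblingDB('local').oplog.rs
      .find({ "o.applyOps": { $exists: true } })
      .sort({ $natural: -1 })
      .limit(5)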
null
[ "connecting", "configuration" ]
[ { "code": "", "text": "Hello,\nI’ve been trying to find the Doc’s on how to change the default ports for our Atlas cluster (M10).\nCan some one point me to where this info might be located.\nThanks.\nPete", "username": "Pete_Veys" }, { "code": "27017", "text": "Hi @Pete_Veys,I’ve been trying to find the Doc’s on how to change the default ports for our Atlas cluster (M10).As the documentation states, Atlas clusters operate on port 27017 . You must be able to reach this port to connect to your clusters.You cannot reconfigure the cluster to operate on a different port.May I ask for the reason you wish to reconfigure this port number?However, I hope this answers your question.Kind Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Change Atlas default ports
2021-05-15T01:30:30.560Z
Change Atlas default ports
5,389
null
[ "monitoring" ]
[ { "code": "", "text": "Hello Everyone,I want to know how can we log each and every transaction coming from drivers?Thanks", "username": "Am_Novice" }, { "code": "", "text": "@Am_Novice, hopefully, this will help if you are working in c# driver.", "username": "Sudhesh_Gnanasekaran" } ]
Logging each and every statement hitting MongoDB
2021-04-30T06:50:16.112Z
Logging each and every statement hitting MongoDB
3,336
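For a self-managed deployment, another option is the database profiler, which records operations server-side; this is only a sketch, and profiling level 2 adds noticeable overhead, so it is usually enabled only while debugging:

    db.setProfilingLevel(2)                        // record every operation in system.profile
    db.system.profile.find().sort({ ts: -1 }).limit(5)
    db.setProfilingLevel(0, { slowms: 0 })         // or: write (effectively) all operations to the mongod log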
null
[ "installation" ]
[ { "code": "", "text": "hi there!\nI’ve an issue while installing MongoDB in my Mac m1chip 2020.error:Permission denied @ rb_sysopen - /Users/muppanasaikarthikeya/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plistcan anyone please help me with this.", "username": "Karthikeya_Muppana" }, { "code": "brew doctorbrewbrew doctor", "text": "Welcome to the MongoDB Community @Karthikeya_Muppana!The error looks like a file permission problem and should be unrelated to using an M1 mac.I would try running brew doctor via Terminal.app, as this will detect & resolve common permission and install issues that affect brew.If brew doctor doesn’t help, I would try:sudo chown $(whoami) ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plistIf neither of those suggestions are effective, can you confirm the command you were running when you encountered this error?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Permission denied when installing MongoDB via Homebrew
2021-05-13T21:43:59.563Z
Permission denied when installing MongoDB via Homebrew
9,296
https://www.mongodb.com/…aecf026657e2.png
[ "database-tools" ]
[ { "code": "", "text": "quisiera una ayuda para saber como puedo cargar un archivo json a mongobd desde el consola, tengo la version 4.4 y siempre me genera error mongoimport982×288 18.5 KBagradeceria me ayudaran", "username": "jhperez88_jhperez88" }, { "code": "", "text": "Hi @jhperez88_jhperez88. Welcome to the community.Instalación de las herramientas de la base de datos en Windows", "username": "MaxOfLondon" } ]
Loading a JSON file into MongoDB
2021-05-15T01:28:30.451Z
Loading a JSON file into MongoDB
4,495
null
[]
[ { "code": "", "text": "Atlas automatically changed our TLS certificates for our clusters. They are now signed by the ISRG Root X1 certificate. This change brought our app down for 20 hours while we tried to figure out what is happening.My first thought: What the f*** was Atlas thinking by making this change?! Something this important, something that causes actual downtime in our production environment should have multiple emails, notifications, pop-up boxes when we log-in, even phone calls and text messages.The damage caused by Atlas is done. I feel deeply disappointed in Atlas. I will lick my wounds and carry on.Now my second thought: How can I avoid this in the future? And, telling me to scour the Atlas support DB constantly is not an option.-Frank Cohen, CEO, Clever Moe", "username": "Frank_Cohen" }, { "code": "", "text": "Hi Frank,I’m so sorry to hear you experienced an outage related to this change. We’ve sent a serious of communications about this including a method to move to the new CA ahead of time but we are investigating potential gaps in our operational communications on this topic. We’re working on a thorough post-mortem.But taking a step back, you’re right: emails are clearly not enough here. Out of curiosity, what programming language driver are you using? I believe we may need to target certain communities more susceptible to risk based on trust store affinity more aggressively than others and we do have an understanding of language framework used per cluster.Please accept my apologies. I will be in touch directly with you this week via email to try and learn more about your experience if you don’t mind.Andrew", "username": "Andrew_Davidson" } ]
TLS certificates changed - brought down our app
2021-05-10T21:03:06.027Z
TLS certificates changed - brought down our app
1,698
null
[ "replication", "security" ]
[ { "code": "", "text": "I want to set up realtime backup in a another datacenter by adding a member over internt.\nAccess are restricted by firewall.\nAre the Oplog are transferred in clear, which means that a man is middle can read them ?", "username": "Paul_Langeard" }, { "code": "", "text": "Welcome to the MongoDB Community @Paul_Langeard!If you are planning to connect to your deployment over a public network, best practice would be to enable network encryption (TLS/SSL) and configure role-based access control before binding your deployment to listen to public network interfaces. You can further limit exposure by connecting your remote replica set member via VPN/VPC instead of directly over the internet.Please review the MongoDB Security Checklist for some recommended security measures.If you do not configure your deployment for network encryption, data will be transferred in the clear.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks for your answer. Role-base access are in place, I read already that documents.\nI would like to know how the opLog are transmitted, if they can be intercepted and read ?", "username": "Paul_Langeard" }, { "code": "mongodmongos", "text": "I would like to know how the opLog are transmitted, if they can be intercepted and read ?Hi Paul,As I commented above, you need to configure your deployment for network encryption (TLS/SSL) to secure communication. Setup of TLS encryption is based on providing certificates that can ideally be validated against an issuing authority. For more information, please see Configure mongod and mongos for TLS/SSL.If you haven’t configured network encryption, any data sent to/from your deployment (or between members of your deployment) will not be encrypted so eavesdropping of unsecured network traffic is possible.Oplog data is transmitted using the same MongoDB Wire Protocol and transport mechanisms used by MongoDB drivers.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Replication over internet / man in the middle
2021-05-15T09:57:06.826Z
Replication over internet / man in the middle
2,125
null
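A minimal sketch of the client side of the advice in this thread, assuming a Node.js application and self-managed certificates purely for illustration (hostnames, replica set name and file paths are placeholders, not from the thread):

```javascript
// Sketch only: hostnames, replica set name and certificate path are placeholders.
// With TLS enabled on every member, both driver traffic and the oplog replication
// traffic between members are encrypted on the wire.
const { MongoClient } = require("mongodb");

const uri = "mongodb://db1.example.net:27017,db2.example.net:27017/?replicaSet=rs0";
const client = new MongoClient(uri, {
  tls: true,                      // refuse unencrypted connections
  tlsCAFile: "/etc/ssl/myCA.pem", // CA used to validate the members' certificates
});

async function main() {
  await client.connect();
  console.log(await client.db("admin").command({ ping: 1 }));
  await client.close();
}

main().catch(console.error);
```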
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "commentsreplies{\n _id:34\n name:\"Palestine\"\n}\n{\n _id:\"1234\",\n body:\" hello ! I love mongodb, but its hard\",\n likes:[\"34\"],\n comments:{\n _id:\"3453\",\n body:\"me I don't like mongodb i like sql \"\n likes:[\"34\"],\n replies:{\n _id:\"2345\",\n body:\"both of them are great\"\n likes:[\"34\"],\n }\n }\n}\nliked:truelikes{\n _id:\"1234\",\n body:\" hello ! I love mongodb, but its hard\",\n liked:true,\n likesCount:1\n comments:{\n _id:\"3453\",\n body:\"me I don't like mongodb i like sql \"\n liked:true,\n likesCount:1\n replies:{\n _id:\"2345\",\n body:\"both of them are great\"\n liked:true,\n likesCount:1\n }\n }\n}\n{\n _id:\"1234\",\n body:\" hello ! I love mongodb, but its hard\",\n liked:true,\n likesCount:1\n comments:{\n _id:\"3453\",\n body:\"me I don't like mongodb i like sql \"\n liked:true,\n likesCount:1\n }\n}\nconst result = await PostModel.aggregate([\n {\n $project: {\n likesCount: { $size: \"$likes\" },\n commentsCount: { $size: \"$comments\" },\n liked: { $in: [ID(uid), \"$likes\"] },\n likes: 1,\n ||\n ||\n \\/\nNote: i dont know if i need to `$group` first or `$unwind`\n ................................................\n }\n}\n", "text": "I have this users and posts collection with embedded docs comments and replies:The question is that, i want to aggregate into the posts and get all the them, after that i want to append new key value to both post, comments, replies, witch gonna specify if the user liked the post or not ex: liked:true\nNote: I have the authenticated user id ready . I want check if the user Id exist in the likes docs on each sub trees (posts, comments, replies )To be more specific about this question this is my expected humble result :Note : If you can help to query upto the comments its okay for me :\nResult expected :This is what I tried :…\nI have a hard time finding how to perform the next step", "username": "Dimer_Bwimba_Mihanda" }, { "code": "const result = await PostModel.aggregate([\n {\n $project: {\n likesCount: { $size: \"$likes\" },\n commentsCount: { $size: \"$comments\" },\n liked: { $in: [ID(uid), \"$likes\"] },\n likes: 1,\n ||\n ||\n \\/\nNote: i dont know if i need to `$group` first or `$unwind`\n ................................................\n }\n", "text": "First… There are some fundamental problems with your query.Why are you using “const” as your variable? This should be var in my opinion cause const (constant) implies that the result will not change…Second…\nEnsure your references are to actual fields. I see that you are projecting the field “likes” into “likesCount” but I do not see a “likes” field in your schema.Third… Ensure that you properly reference the ID field. If you want an object ID you must use “_id” not ID(uid). If you don’t want to pass the object ID to the projection, then you need to create a separate ID field and reference it there.As for the other commands… It depends on what your working with… If you are using arrays… You may have to $map them to modify any data in them… Hopefully this helps…As far as $unwind, and $group… I don’t know if you are getting the result you expect from the query.", "username": "David_Thompson" } ]
Mongoose Aggregate on an Embedded/Nested Documents
2021-05-14T13:49:07.385Z
Mongoose Aggregate on an Embedded/Nested Documents
4,806
null
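A hedged sketch of one way to build the result this thread asks for, down to the comment level (which the question says is enough). It assumes `comments` is an array of subdocuments, every level has a `likes` array of user-id strings, and `uid` stands in for the authenticated user's id:

```javascript
// Sketch (run inside an async function). Assumptions: `comments` is an array,
// every level has a `likes` array of user-id strings, `uid` is the current user.
const uid = "34";

const result = await PostModel.aggregate([
  {
    $addFields: {
      liked: { $in: [uid, "$likes"] },          // did this user like the post?
      likesCount: { $size: "$likes" },
      comments: {
        $map: {                                  // decorate every comment the same way
          input: "$comments",
          as: "c",
          in: {
            $mergeObjects: [
              "$$c",
              {
                liked: { $in: [uid, "$$c.likes"] },
                likesCount: { $size: "$$c.likes" },
              },
            ],
          },
        },
      },
    },
  },
  { $project: { likes: 0, "comments.likes": 0 } }, // drop the raw likes arrays
]);
```

The same `$map`/`$mergeObjects` step can be nested one level deeper for replies if needed.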
[ "swift" ]
[ { "code": "", "text": "The iOS SDK documentation states the following:Concurrency Concerns\nSince transactions block each other, it is best to avoid opening transactions on both the UI thread and a background thread. If you are using Sync, avoid opening transactions on the UI thread altogether, as Realm processes synchronizations on a background thread. If a background transaction blocks your UI thread’s transaction, your app may appear unresponsive.So that is the stated policy and yet the o-fish iOS app written by Realm developers writes on the main thread. Why is that? I’ve found it cumbersome to update a managed object with values from the UI but then write on a background queue. I was hoping the o-fish app would show me the “right” way to do it.", "username": "Nina_Friend" }, { "code": "", "text": "I believe that is a warning that’s been used across multiple drivers and doesn’t apply nearly as much for iOS (we’re looking to see whether we can remove the warning altogether).In the work that I’ve done (including O-FISH), I haven’t hit any problems with updating Realm in the main thread (and for SwiftUI it would be a real pain to do otherwise).My recommendation would be to go ahead and update Realm from the UI thread, but just have that warning in the back of your mind if you see latency issues.", "username": "Andrew_Morgan" }, { "code": "", "text": "That statement in the documentation cost me a lot of time changing code and making it more complex. It should be changed ASAP before it misleads anyone else.", "username": "Nina_Friend" }, { "code": "", "text": "Thanks for the feedback, Nina! I’ll raise this issue with the rest of the docs team to see if we can improve the wording here to avoid misleading folks – while I think we can all agree that it’s best to avoid large transactions on the UI thread in general, it’s definitely a bit more complicated than that and there are times when you need to do exactly that!", "username": "Nathan_Contino" }, { "code": "", "text": "More guidance on when to use a background queue is needed. For example, if you should use a background queue for a large transaction, define what you mean by “large.”", "username": "Nina_Friend" } ]
Write on a background queue - or maybe not
2021-05-13T23:45:52.814Z
Write on a background queue - or maybe not
2,095
null
[ "java", "connecting" ]
[ { "code": "\"stack_trace\":\"com.mongodb.MongoSocketWriteException: Exception sending message\n\tat com.mongodb.internal.connection.InternalStreamConnection.translateWriteException(InternalStreamConnection.java:550)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:432)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendCommandMessage(InternalStreamConnection.java:272)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:256)\n\tat com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)\n\tat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)\n\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:103)\n\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:60)\n\tat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128)\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\\nCaused by: javax.net.ssl.SSLHandshakeException: extension (5) should not be presented in certificate_request\n\tat java.base/sun.security.ssl.Alert.createSSLException(Alert.java:128)\n\tat java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)\n\tat java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:308)\n\tat java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)\n\tat java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:255)\n\tat java.base/sun.security.ssl.SSLExtensions.<init>(SSLExtensions.java:89)\n\tat java.base/sun.security.ssl.CertificateRequest$T13CertificateRequestMessage.<init>(CertificateRequest.java:757)\n\tat java.base/sun.security.ssl.CertificateRequest$T13CertificateRequestConsumer.consume(CertificateRequest.java:861)\n\tat java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)\n\tat java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)\n\tat java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:421)\n\tat java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:178)\n\tat java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:164)\n\tat java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1152)\n\tat java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1063)\n\tat java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:402)\n\tat java.base/sun.security.ssl.SSLSocketImpl.ensureNegotiated(SSLSocketImpl.java:716)\n\tat java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:970)\n\tat com.mongodb.internal.connection.SocketStream.write(SocketStream.java:99)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:429)\n\t... 
9 common frames omitted\\n\"}\n{\"@timestamp\":\"2021-05-11T22:06:44.201Z\",\"@version\":\"1\",\"message\":\"Exception in monitor thread while connecting to server cluster0-shard-00-00.4nnvh.mongodb.net:27017\",\"logger_name\":\"org.mongodb.driver.cluster\",\"thread_name\":\"cluster-ClusterId{value='609afff3e29d375d86141922', description='null'}-cluster0-shard-00-00.4nnvh.mongodb.net:27017\",\"level\":\"INFO\",\"level_value\":20000,\"stack_trace\":\"com.mongodb.MongoSocketWriteException: Exception sending message\n\tat com.mongodb.internal.connection.InternalStreamConnection.translateWriteException(InternalStreamConnection.java:550)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:432)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendCommandMessage(InternalStreamConnection.java:272)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:256)\n\tat com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)\n\tat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)\n\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:103)\n\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:60)\n\tat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128)\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\\nCaused by: javax.net.ssl.SSLHandshakeException: extension (5) should not be presented in certificate_request\n\tat java.base/sun.security.ssl.Alert.createSSLException(Alert.java:128)\n\tat java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)\n\tat java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:308)\n\tat java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)\n\tat java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:255)\n\tat java.base/sun.security.ssl.SSLExtensions.<init>(SSLExtensions.java:89)\n\tat java.base/sun.security.ssl.CertificateRequest$T13CertificateRequestMessage.<init>(CertificateRequest.java:757)\n\tat java.base/sun.security.ssl.CertificateRequest$T13CertificateRequestConsumer.consume(CertificateRequest.java:861)\n\tat java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)\n\tat java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)\nat java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:421)\n\tat java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:178)\n\tat java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:164)\n\tat java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1152)\n\tat java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1063)\n\tat java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:402)\nat java.base/sun.security.ssl.SSLSocketImpl.ensureNegotiated(SSLSocketImpl.java:716)\n\tat java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:970)\nat com.mongodb.internal.connection.SocketStream.write(SocketStream.java:99)\n\tat 
com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:429)\n\t... 9 common frames omitted\\n\"}\n", "text": "When I try to connect to mongo from an app running inside minikube, I get the following stack trace:I do not get this stack trace when I run outside of minikube, as a stand-along app", "username": "Andrew_Weiss" }, { "code": "", "text": "Hi there.This topic has previously been discussed at SSLHandshakeException : should not be presented in certificate_request.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "The solution I am seeing from your link, is to move to a different version of jdk. I do not have this option, is there any other solution?", "username": "Andrew_Weiss" }, { "code": "", "text": "I think you just need to update to the latest patch release for whatever version you’re on. Is that not possible?I think another option is to disable TLS 1.3.", "username": "Jeffrey_Yemin" }, { "code": "", "text": "I tried using TLS 1.2, and that also did not work. Are you saying to exclude TLS entirely? If so, do you know how to start up the JDK without TLS?", "username": "Andrew_Weiss" }, { "code": "", "text": "So, I tried the following, to disable TLS 1.3, and this did not work:\nSslContextFactory.Server sslContextFactory = new SslContextFactory.Server();\nsslContextFactory.setExcludeProtocols(“TLSv1.3”);", "username": "Andrew_Weiss" }, { "code": "", "text": "Just tried upgrading to latest version 11 of jdk, 11.0.11, and this did not work either.", "username": "Andrew_Weiss" }, { "code": "", "text": "I was able to start java with TLSv1.2, and this did work.", "username": "Andrew_Weiss" } ]
Getting internal stack trace from mongo connection
2021-05-12T16:19:16.492Z
Getting internal stack trace from mongo connection
5,413
null
[ "transactions" ]
[ { "code": "", "text": "As per the MongoDB documentation, transactions only work for replica sets and not for a single node. Why such a requirement? Isn't it easier to do transactions on a single node rather than on a distributed system?", "username": "Rajat_Goel" }, { "code": "", "text": "Hi @Rajat_Goel, Since transactions are built on the concept of logical sessions, they require mechanics (like the oplog) which are only available in a replica set environment. You can always convert a standalone to a single-node replica set and transactions will work with this one node. Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hello, are there plans for the MongoDB team to enable transactions for standalone servers in a future version of MongoDB? That would be great.", "username": "Ivan_Cabrera" }, { "code": "mongod--replSet", "text": "Hi Iván, Welcome to the community. You can convert a standalone to a single-node replica set to enable transactions. All you have to do is start mongod with the --replSet flag. The link posted by Pavel above goes into more detail.", "username": "mahisatya" }, { "code": "", "text": "Hello Mahi, yes I got it. However, I wonder if this feature will be available in the future (out of the box) for standalone servers, in order to avoid that conversion process?", "username": "Ivan_Cabrera" } ]
Why replica set is mandatory for transactions in MongoDB?
2020-09-23T05:58:53.537Z
Why replica set is mandatory for transactions in MongoDB?
22,873
null
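A sketch of the conversion and a first transaction described in this thread, assuming the standalone was restarted with `--replSet rs0` and the commands are run in the shell (database and collection names are placeholders):

```javascript
// 1. Turn the standalone (restarted with --replSet rs0) into a one-node replica set:
rs.initiate();

// 2. Transactions then work against this single node:
const session = db.getMongo().startSession();
const orders = session.getDatabase("test").getCollection("orders");

session.startTransaction();
try {
  orders.insertOne({ item: "abc", qty: 1 });
  orders.updateOne({ item: "abc" }, { $inc: { qty: 1 } });
  session.commitTransaction(); // both writes become visible together
} catch (e) {
  session.abortTransaction();
  throw e;
} finally {
  session.endSession();
}
```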
[ "security", "graphql" ]
[ { "code": "", "text": "Is there any option available to whitelist the Realm GraphQL endpoint to a specific machine or server for security purposes? Please let me know.", "username": "mo_dew" }, { "code": "", "text": "Hey Mo - can you expand on what you mean by whitelist? If you're looking for static IPs for Realm which you can add to your client, we have them published here. (note: these may change very rarely) If you're looking to only allow incoming requests to Realm from certain IPs, you can actually set that in your permissioning by: Hope that helps - we're also planning on introducing a specific IP access list feature to MongoDB Realm to improve this experience. Sumedha", "username": "Sumedha_Mehta1" }, { "code": "", "text": "I have a REST endpoint (e.g. the IP for this server is XX.XX.XX.XX) that will interact with the Realm endpoint. For this, I want to establish a secure connection between both. I don't want other systems to access the Realm endpoint; it should only process requests from IP XX.XX.XX.XX.\nI believe you have already given me the option to handle this situation, thanks. I am very excited to wait for the IP access list feature in Realm.\nDo you have any idea when this feature will be available?", "username": "mo_dew" }, { "code": "", "text": "Hi Mo, it will likely be a few more months until you are able to use the feature.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Hi @Sumedha_Mehta1\ncontext.request is always empty in the function.\nIt works fine when I run the functions from the portal (https://realm.mongodb.com/). When I call the Realm GraphQL endpoint from Postman, context.request is always empty. Please help. As per your feedback I have set up the role, but I am unable to get the remote IP in the function and it is a blocker for me.", "username": "mo_dew" }, { "code": "", "text": "That issue should be resolved as of last week.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Whitelist Realm GraphQL endpoint to a machine or server
2021-04-08T11:24:53.231Z
Whitelist Realm GraphQL endpoint to a machine or server
2,840
null
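The concrete permissioning steps Sumedha listed did not survive in this record; the sketch below is only one plausible shape for them — a Realm function that inspects context.request and can be referenced from a rule expression. The allowed address is a placeholder:

```javascript
// Hypothetical Realm function for an IP allow-list style rule.
// Assumption: the caller's address is available as context.request.remoteIPAddress.
exports = function () {
  const allowedIPs = ["203.0.113.7"]; // placeholder: the REST server's public IP
  return allowedIPs.includes(context.request.remoteIPAddress);
};
```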
[ "atlas-functions" ]
[ { "code": "exports = function() {\n const mongodbAtlas = context.services.get(\"mongodb-atlas\");\n const auctions = mongodbAtlas.db(\"myFirstDatabase\").collection(\"items\");\n //Date now\n var now = new Date();\n //Get customer with last bid\n const findHighestBidder = auctions.aggregate([\n {$match: { $and: [ \n {endDate: {$lt: now}}, \n {status: \"active\" }\n ] }},\n { $project : { status: 1, bidHistory: 1 } },\n {$addFields : {bidHistory : {$reduce : {\n input : \"$bidHistory\", \n initialValue : {bid : 0}, \n in : {$cond: [{$gte : [\"$$this.bid\", \"$$value.bid\"]},\"$$this\", \"$$value\"]}}\n }}}\n])\n const result = findHighestBidder.toArray\n\n //This returns the results as expected\n return result\n //This returns: \"{ results: {} }\"\n return context.http.post({\n url: \"http://27b1e5a30df2.ngrok.io/api/stripe/test\",\n body: {result} ,\n encodeBodyAsJSON: true\n })\n};", "text": "I’m trying to send an array over HTTP POST, it’s sending just an empty response. When I return the array to the console, it’s full. Any clue what’s going here?", "username": "Al_B" }, { "code": "{ Results : result}\n", "text": "Hi @Al_B,I think there is a fundamental issue in the code, as aggregate does not return an array but a promise…So either you resolve it with .then syntax or use async in function declaration and await.Further, I remember . toArray() with brackets.Also is the body correct or there should be a field hosting arrayThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "const findHighestBidder = auctions.aggregate(pipeline).toArray()\n .then((results) => {\n return results\n })\n .catch(err => console.error(err))\n \n return findHighestBidder\n \n return context.http.post({\n url: \"http://679339cf1551.ngrok.io/api/stripe/test\",\n body: {Results: findHighestBidder} ,\n encodeBodyAsJSON: true\n })", "text": "Hi Pavel, I made the following changes. When I return the function, I get the array. When I return the POST method, the body is still empty.", "username": "Al_B" }, { "code": "", "text": "@Al_B,Why do you have 2 returns without a condition…\nThis in itself does not allow the http call to happen.More over , I think the http.call must be a part of the then() and not outside…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Very good thanks for the input.I had two returns because I was testing both. I always commented out one when running the function.", "username": "Al_B" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
POST HTTP Array: what am I doing wrong?
2021-05-11T01:29:04.391Z
POST HTTP Array: what am I doing wrong?
1,889
https://www.mongodb.com/…2_2_1024x502.png
[ "atlas-functions" ]
[ { "code": "", "text": "I am trying to use the Realm 3rd party AWS S3 service and I followed the guide, but I am getting this error. Can anyone point me in the right direction on how to fix this? \n(screenshot: image 1044×512, 20.9 KB)\n", "username": "Aaron_Parducho" }, { "code": "Body", "text": "Hi @Aaron_Parducho, could you please share your code that's uploading the file? In particular, what data type are you setting the Body field to?", "username": "Andrew_Morgan" } ]
I would like to use Realm 3rd party service AWS S3 for storing my files
2021-05-14T09:15:00.051Z
I would like to use Realm 3rd party service AWS S3 for storing my files
2,029
null
[ "java", "connecting" ]
[ { "code": "Successfully added user: {\n \"user\" : \"test\",\n \"roles\" : [\n {\n \"role\" : \"root\",\n \"db\" : \"admin\"\n },\n \"readWriteAnyDatabase\"\n ]\n}\n", "text": "Hi Team,Facing issues with MongoSecurityException while trying to connect with mongo server using URI,com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName=‘test’, source=‘dbName’, password=, mechanismProperties=}Caused by: com.mongodb.MongoCommandException: Command failed with error 18 (AuthenticationFailed): ‘Authentication failed.’ on server localhost:33132. The full response is {“ok”: 0.0, “errmsg”: “Authentication failed.”, “code”: 18, “codeName”: “AuthenticationFailed”}The environment is ppc64le/UBI 8.3 (RHEL 8.3 based container environment).\nMongoDB server version: 4.4.4\nMongo driver version: 3.12.8Mongo server container logs:\nMongoDB server version: 4.4.4From mongo client:\nCreate a MongoClient(MongoClientURI) instance with MongoClientURI(mongodb://test:password@hostip:port/dbName) and connect with server.Any pointers help would be great.\nRevert back, if more information required.Thanks in advance!!", "username": "Maniraj_Deivendran" }, { "code": "/dbNamedbNameadmin/dbName", "text": "Hi there,I suspect the issue is with the connection string. Appending /dbName to it indicates that the credential is defined in the dbName database, but it’s likely that it’s actually defined in the admin database. Try removing /dbName from the connection string and see if it works.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Hi Jeff,Thank you very much for your support.\nI checked by removing the /dbName from the MongoClientURI(mongodb://user:pass@host:port).No positive results and getting the below failure,com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName=‘test’, source=‘admin’, password=, mechanismProperties=}Caused by: com.mongodb.MongoCommandException: Command failed with error 18 (AuthenticationFailed): ‘Authentication failed.’ on server hostip:port. The full response is {“ok”: 0.0, “errmsg”: “Authentication failed.”, “code”: 18, “codeName”: “AuthenticationFailed”}FYI, I tried the same code with x86_64 and no authentication errors observed. This is specific to PowerPc64.Regards,\nManiraj", "username": "Maniraj_Deivendran" }, { "code": "", "text": "I don’t know what this could be. There should be no different in the behavior of the driver or the JVM on PowerPc64 that would affect authentication.", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Okay.At first, faced issue as mentioned below,com.mongodb.MongoNodeIsRecoveringException: Command failed with error 11600 (InterruptedAtShutdown): ‘Index build failed: 03c6cb78-341b-4789-bf06-b1872ed7876a: Collection graylog.grants ( 01a76922-bca4-43a7-a3ad-790b57c17002 ) :: caused by :: interrupted at shutdown’ on server docker0 Ip:33792. The full response is {“ok”: 0.0, “errmsg”: “Index build failed: 03c6cb78-341b-4789-bf06-b1872ed7876a: Collection graylog.grants ( 01a76922-bca4-43a7-a3ad-790b57c17002 ) :: caused by :: interrupted at shutdown”, “code”: 11600, “codeName”: “InterruptedAtShutdown”}com.mongodb.MongoCommandException: Command failed with error 13 (Unauthorized): ‘command dropDatabase requires authentication’ on server docker0 Ip:33792. 
The full response is {“ok”: 0.0, “errmsg”: “command dropDatabase requires authentication”, “code”: 13, “codeName”: “Unauthorized”}To resolve this issue added user:pass@ to the MogoClientURI and finally ended with “com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName=‘test’” issue.Any pointers help would be great. Thanks.", "username": "Maniraj_Deivendran" }, { "code": "Command failed with error 18 (AuthenticationFailed): ‘Authentication failed.’ on server hostip:port. \nThe full response is {“ok”: 0.0, “errmsg”: “Authentication failed.”, “code”: 18, “codeName”: \n“AuthenticationFailed”}\ncom.mongodb.MongoSecurityException: Exception authenticating\nMongoCredential{mechanism=SCRAM-SHA-1, userName=‘test’, source=‘admin’, password=,\nmechanismProperties=}\nauthSourceauthMechanismmongo --port 27017 -u test -p pwd --authenticationDatabase 'admin'\n", "text": "Hi,As Jeff says there shouldn’t be any behavioural difference between authentication and server architectures.The exception:Looks as expected with an invalid username / password combination.Shows that it is failing but using the expected mechanism against the admin database. You could try an connection string that sets the authSource and authMechanism explicitly:mongodb://test:pwd@host1/?authSource=admin&authMechanism=SCRAM-SHA-1Can you connect via the command line without the Java driver?If that fails it indicates the issue is not with the java driverRoss", "username": "Ross_Lawley" }, { "code": "", "text": "Hi Ross,I have checked via the command line and it’s working,sh-4.4$ mongo --port 27017 -u test -p pass --authenticationDatabase ‘admin’\nMongoDB shell version v4.4.4\nconnecting to: mongodb://localhost:27017/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { “id” : UUID(“d6d2f296-c4e2-42cc-aaf8-f41962e0de56”) }\nMongoDB server version: 4.4.4\nThe server generated these startup warnings when booting:\n2021-05-14T11:58:05.616+00:00: Soft rlimits too low\n2021-05-14T11:58:05.616+00:00: lockedMemoryBytes: 65536\n2021-05-14T11:58:05.616+00:00: minLockedMemoryBytes: 1048576\nMongoDB Enterprise > show users;\nMongoDB Enterprise >No improvement by changing the URI as below,\nmongodb://test:pass@host1:port/?authSource=admin&authMechanism=SCRAM-SHA-1The exception:com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName=‘test’, source=‘admin’, password=, mechanismProperties=}Caused by: com.mongodb.MongoCommandException: Command failed with error 18 (AuthenticationFailed): ‘Authentication failed.’ on server docker0 Ip:34121. The full response is {“ok”: 0.0, “errmsg”: “Authentication failed.”, “code”: 18, “codeName”: “AuthenticationFailed”}Thanks.", "username": "Maniraj_Deivendran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoSecurityException: Exception authenticating MongoCredential
2021-05-13T09:20:00.924Z
MongoSecurityException: Exception authenticating MongoCredential
64,839
null
[]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"609de83a3495584e98327521\"\n },\n \"email\": \"[email protected]\",\n \"name\": \"test middlename person\",\n \"ip\": \"192.168.0.1\",\n \"hash\": \"af1bd74f67760edbb1df27f8af09d4267a8d3309178c6b95523ec757fe1d81d6\",\n \"twitter\": \"\",\n \"facebook\": \"\",\n \"userid\": \"wef23f23fwefwwfhgeyukjykuyk\"\n}\ndb.whatever.aggregate([\n {\n \"$search\": {\n \"wildcard\": {\n \"path\": \"email\",\n \"query\": \"test.pe*\"\n }\n }\n }\n])\n", "text": "I am trying to use the wildcard operator found here: https://docs.atlas.mongodb.com/reference/atlas-search/wildcard/ but whenever I try to run it, I get the following error:MongoError: Unrecognized pipeline stage name: ‘$search’My documents are like this:I am trying to run the following wildcard query:My mongodb is version 4.4.6 Community edition.Please let me know why the query is not working.", "username": "Rohan_Patra" }, { "code": "", "text": "Hi @Rohan_Patra,Welcome to MongoDB community.The relevant feature you pointed is only available in MongoDB Atlas cloud deployments.You cannot use it on a local installation. Atlas offer a free tier and I suggest you try it out.The local installation have text indexes and $text operators, those use other technology and are less capable though…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Wildcard Queries
2021-05-14T04:33:20.990Z
Wildcard Queries
2,120
null
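For completeness, a sketch of the local alternatives mentioned in this thread, reusing the collection and field names from the question ($search itself stays Atlas-only):

```javascript
// 1. A text index + $text matches whole terms (split on delimiters), not prefixes:
db.whatever.createIndex({ email: "text" });
db.whatever.find({ $text: { $search: "test" } });

// 2. An anchored, case-sensitive $regex on a normal index is the usual way to get
//    a prefix-style match similar to the "test.pe*" wildcard in the question:
db.whatever.createIndex({ email: 1 });
db.whatever.find({ email: { $regex: "^test\\.pe" } });
```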
[ "node-js", "data-modeling" ]
[ { "code": "Collection: user\n\n{\n Name: \"Edgar\",\n Accounts: [\n {\n ID: \"ABC\",\n Balance: 30\n },\n {\n ID: \"DEF\",\n Balance: 40\n }\n ]\n}\nCollection: Account\n\n{\n ID: \"ABC\",\n Movements: [\n {ID: 1, Movement: 10},\n {ID: 2, Movement: 10},\n {ID: 3, Movement: 10}\n ]\n}\n\n{\n ID: \"DEF\",\n Movements: [\n {ID: 1, Movement: 20},\n {ID: 2, Movement: -10},\n {ID: 3, Movement: 30}\n ]\n}\n", "text": "Sometimes, Mongo modeling recommends duplicating data by efficiency and referencing other collections, instead of opting for Embed (when the number of records is very high n to Zillions).In that case, what would be the best approximation if we change a row of the other collection, which requires modifying the data we have replicated in our collection by efficiency?The use of transactions according to Mongo is not recommended.For example. I’m going to put the code as JavaScript objects (so that it is simpler and more short).Imagine that an account can have thousands of details records, and that in the user’s information (the most consulted) we only want to have a photo of the situation of the user’s accounts, without having to go to each of his accounts.Thank you in advance,", "username": "Antonio_Ubeda_Montero" }, { "code": "", "text": "Hi Antonio,This seems to be a perfect candidate for Computed Pattern.While you insert value into Account’s collection Movements array you would need to compute user’s Account.Balance by increasing it by value of the Movement.If Movements array length is a concern (and as you stated it would contain zillions of elements) I would also consider Bucket Pattern. In reality each adjustment would track date and time of the Movement which could be used for bucketing and preferably sharding.Best,\nMax", "username": "MaxOfLondon" }, { "code": "", "text": "Could you be more especific, please? Imagine that you delete a user and you have to delete all of his movements. Do you need to use transactions or are there a better way to do that? What if you have locally placed your database, not in Mongo Atlas.To sum up: When you need to separate data in two collections, but they are conected each other, how we should manage them?Please, answer this questions not only giving name of patterns.Thank you,", "username": "Antonio_Ubeda_Montero" }, { "code": "", "text": "Hi Antonio,When you need to separate data in two collections, but they are conected each other, how we should manage them?There is no simple rule. The way you approach design depends on many things.\nEssentially, when you have data scattered then application needs to delete it from all collections (non-atomic operation if you do not use transaction) but if you really need to do it in a transaction is another matter. Transactions are expensive and often with careful design and depending on how system is used you can get away with not using it.\nFor example operations on a document and everything embedded in it is always atomic operation out of the box so design can benefit from that.\nConsider your example of deleting user. 
Since, as you said, the user is the most queried entity, you could delete only the user and defer removing all movement data to a scheduled job afterwards, which gives a fast response with the full task accomplished later. This design is not ideal though, because deleting from a zillion-record collection in one go uses a lot of resources; that's why I was suggesting bucketing and sharding. Take care.", "username": "MaxOfLondon" }, { "code": "", "text": "I clicked the like heart especially for: Consider your example of deleting a user. Since, as you said, the user is the most queried entity, you could delete only the user and defer removing all movement data to a scheduled job afterwards, which gives a fast response with the full task accomplished later.", "username": "steevej" }, { "code": "", "text": "How could you defer deleting the zillions side? Is there any batch procedure available in MongoDB? Thank you.", "username": "Antonio_Ubeda_Montero" }, { "code": "", "text": "A cronjob or trigger-based architecture can be utilised to execute a script that consumes queued references to purge records.\nThe application can use a webservice to enqueue the user_id to be deleted, then the script can process that queue, for example.", "username": "MaxOfLondon" } ]
How to deal with replicated fields in MongoDB if the source is updated in the other collection
2021-04-28T18:58:40.610Z
How to deal with replicated fields in MongoDB if the source is updated in the other collection
2,107
https://www.mongodb.com/…9_2_1023x576.png
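A sketch of the Computed Pattern described in the replies of this thread, using the field and collection names from the sample documents; the helper itself is hypothetical and retry/error handling is omitted:

```javascript
// Computed Pattern sketch (Node.js driver). Names come from the sample documents;
// the helper and its signature are hypothetical.
async function addMovement(db, userName, accountId, amount) {
  // 1. Record the movement in the Account collection.
  await db.collection("Account").updateOne(
    { ID: accountId },
    { $push: { Movements: { ID: Date.now(), Movement: amount } } }
  );

  // 2. Keep the pre-computed balance on the user document in sync.
  await db.collection("user").updateOne(
    { Name: userName, "Accounts.ID": accountId },
    { $inc: { "Accounts.$.Balance": amount } }
  );
}
```

The deferred-deletion idea from the thread works the same way in reverse: enqueue the user_id, then let a scheduled job remove the movement documents in batches.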
[ "python", "performance" ]
[ { "code": "import pymongo\n\nclient = pymongo.MongoClient('localhost', 27017)\ndb = client['test']\ncollection = db['test']\n\nwith open('file', 'rb') as f:\n file_content = f.read()\n\ncollection.insert_one({'file': file_content})\n_op_msg_uncompressed", "text": "Hi,\nI’d like to insert binary files into MongoDB and I’d like to avoid GridFS, all files will be smaller than 10 MB.\nBut I noticed quite high memory usage (on the client side) while inserting binary file into MongoDB.My setup:I created test binary file with exactly 10 000 000 bytes. This snippet works (file is correctly stored). But when trying to insert it using this snippetand using memory profiler (filprofiler), pymongo driver use 2.5 times more memory just for inserting this file (see attached image). Stacktrace ends with pymongo function _op_msg_uncompressed.\nmemory2557×1439 309 KBI would like to know, If there is any chance to avoid this memory usage and why this happens.Thanks", "username": "Petr_Klejch" }, { "code": "{ \"file\" : file_content }", "text": "When you have worries like that, the real first thing to do is to establish a baseline for your benchmark.In this case, I would check different scenarios.Check memory usage with a 1 byte file. That will establish a baseline for the simply using the API.Check memory usage for reading the file into file_content. This will establish a baseline or simply reading the 10MB file. Make sure you use file_content somehow to make sure the optimizer does not simply ignore the statement if you do nothing with the variable. (I do not know python enough to know if it could or not)Check memory usage without calling insert_one() but while creating the object { \"file\" : file_content }. If python creates the object by coping file_content vs referencing it then you might end up with twice the use memory right there. Note that the optimizer might not use the memory if you do nothing with that object. I suggest to assign it to a variable that you export, this way we hope the optimizer won’t optimize. This will establish a baseline for simply create the JSON document.The other memory usage would then be the network buffer used to send the API call and its payload over the wire. But that’s harder to find and have no clue how I could do it with python.[EDITED since I pressed the button too quickly]2.5 x (file size) does not seem excessive to me.", "username": "steevej" }, { "code": "file_content = f.read()file_content{ \"file\" : file_content }insert_oneinsert_oneinsert_oneinsert_one", "text": "Hi, thanks for the response!So that leads to my original question (now asked much more clearly - thanks for the assistance): Why inserting a 10 MB file use another 25 MB of memory, if the file is already loaded in the memory ?", "username": "Petr_Klejch" }, { "code": "", "text": "Very nice work.I don’t know python enough to help further but with what you share I really hope someone will step up and we all will learn. I will follow this thread closely.", "username": "steevej" }, { "code": "", "text": "Hi @Petr_Klejch, thanks for reporting this issue. I suspect pymongo is working as designed here and this is a side effect of the way that we serialize messages in our C extensions. However, we can probably optimize this path to reduce the peak memory usage. I’ve filed an optimization ticket here: https://jira.mongodb.org/browse/PYTHON-2716Please follow the Jira ticket for updates. 
For convenience, I’ve copied the description here:Our theory is the extra memory comes from using the buffer.h API in our C extensions. The issue is that when a buffer needs to grow we simply double the size until the buffer is large enough to accommodate the new write. So in this case:So in total the peak memory is around 25MB. We should investigate if it’s possible to reduce the memory usage by using a zero-copy method to convert from the internal buffer to a Python bytes object.", "username": "Shane" }, { "code": "", "text": "Hello @Shane,\nthank you very much for your help and for creating a ticket !I will watch the created ticket for any further updates.", "username": "Petr_Klejch" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
High memory usage while inserting binary file
2021-05-12T14:22:53.094Z
High memory usage while inserting binary file
3,652
null
[ "backup" ]
[ { "code": "", "text": "Mongorestore is erroring out after successful mongodump from my source db which is 4.2.1. Target is azure cosmos mongo API which is at 4.0Getting the error in the first collection itself -\nFailed: : error creating collection: error running create command: (FailedToParse) Unrecognized field: idIndex.‘collation’.Any leads?", "username": "Ranilakshmi_Rangaraj" }, { "code": "", "text": " Welcome to the MongoDB Community @Ranilakshmi_Rangaraj!Azure Cosmos DB is a Microsoft cloud database product with partial emulation for popular database APIs like MongoDB, Cassandra, and Gremlin. Cosmos’ MongoDB API provides an incomplete emulation of MongoDB using an entirely independent server implementation. MongoDB drivers and tools are not tested against CosmosDB, so if you encounter a compatibility error you should report this to Cosmos support.Based on the error message you received, it appears that Cosmos’ API does not recognise the Collation support that was introduced in MongoDB 3.4 (November 2016). It looks like collation support has been on Cosmos’ long term road map since about 2 years ago.If collation (or full MongoDB feature support) is important to your use case, I would consider using MongoDB Atlas on Azure for a managed data service.The alternative would be to remove all usage of MongoDB features and data types that are not supported by Cosmos, but I’m not aware of a straightforward way to do so. You will also have to adjust your application code and capacity planning to consider Cosmos-specific resource management based on Request Units (RUs).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongorestore is not working with Cosmos
2021-05-13T14:35:16.485Z
Mongorestore is not working with Cosmos
4,394
null
[ "react-js", "app-services-hosting" ]
[ { "code": "/hosting/files/", "text": "I’ve got my Realm ‘backend’ repo set up with GitHub deployment so pushing to master deploys to Realm. Awesome! Further, I’m thinking of using Realm static hosting to host the app (in the past I’ve used Netlify, which I love, but why not consolidate?)My question is this. Normally I’d use a separate repo for my client app. Realm static hosting lives inside the ‘backend’ repo (/hosting/files/). It feels unwieldy to me to have the entire React project inside that already-nested directory.So I could use a separate client repo and make a deployment script that builds and copies the react app into the hosting dir., but that ALSO feels unwieldy to me.Am I just being overly picky? What would you do? Are there pros and cons to either approach? Or something else?", "username": "Ted_Hayes" }, { "code": "", "text": "Personally, I’d probably keep everything in the same repo.If you do want to keep your client app in a separate repo then that can work too. Note that you can have many Realm apps and so you could create a new one that you only use for the static hosting. You can then sync that app with the repo that contains your client app. Of course, you’d still need to respect the directory structure.", "username": "Andrew_Morgan" }, { "code": "create-react-app/hosting/files", "text": "Oh, that’s interesting…\nW.r.t. keeping everything in the same repo, are there any caveats/cons? I’m going to use create-react-app for this project, but it’s unclear to me how to set that up in this case. Would you store the source in the root, and then modify the build script to build to /hosting/files? What else would have to be modified?", "username": "Ted_Hayes" }, { "code": "", "text": "Possibly, would just need to check that the import into the Realm app didn’t object to the source files being there (if it does then the source code would need to be stored in a different repo and then I’d look into using GitHub actions to copy the build files over).", "username": "Andrew_Morgan" }, { "code": "/clientcreate-react-app.env/client../hosting/files/hosting/files", "text": "Good news! I created a /client directory and ran create-react-app there, then created a .env file in /client that specifies the correct build path (../hosting/files). Ran a test build, committed and pushed and everything works perfectly! One tiny detail that you might want to add to the docs is that /hosting/files can’t be empty or deployment will fail.", "username": "Ted_Hayes" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Recommended workflow for React app with GitHub deployment
2021-05-12T17:45:57.938Z
Recommended workflow for React app with GitHub deployment
4,248
null
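For reference, the .env trick described in this thread, written out; it assumes a react-scripts version new enough to honour the BUILD_PATH override:

```
# client/.env — assumption: a react-scripts version that supports BUILD_PATH
BUILD_PATH=../hosting/files
```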
[ "queries", "swift" ]
[ { "code": "let query: BSONDocument = [\n \"someID\": [\n \"$in\": [1,2,3]\n ]\n]\nlet query: BSONDocument = [\n \"someID\": [\n \"$in\": passedInArray\n ]\n]\n", "text": "I’m using MongoSwiftSync, and am attempting to do a simple $in query.For this query:This compiles fine. If I pass an [Int] into a function, and then reference it:then I get an error:error: cannot convert value of type ‘[Int]’ to expected dictionary value type ‘BSON’If I check the type of the initial array and passedInArray, they both come back as Array. What am I doing wrong with the passedInArray? The driver isn’t happy about it.Thanks.", "username": "Mark_Windrim" }, { "code": "passedInArray[BSON]BSONpassedInArraylet passedInArray: [BSON] = [1, 2, 3]\n\nlet query: BSONDocument = [\n \"someID\": [\n \"$in\": .array(passedInArray)\n ]\n]\nExpressibleByBSONDocumentBSONBSONDocumentExpressibleByDictionaryLiteral[String: BSON]BSONExpressibleByExpressibleByDictionaryLiteral[String: BSON]BSON.documentExpressibleByArrayLiteral[BSON]BSON.arrayExpressibleByIntegerLiteralBSON.int32BSON.int64IntExpressibleByBooleanLiteralBSON.boollet d: BSON = [\"a\": 1] // BSON.document([\"a\": 1])\nBSON.documentBSONDocument[\"a\": 1]let b: BSON = true // BSON.bool(true)\nBSONDocument\"someID\"String[\n \"$in\": [1,2,3]\n]\nBSON[String: BSON]$inString[1, 2, 3][BSON]BSONExpressibleByBSONlet query: BSONDocument = [\n \"someID\": BSON.document([\n \"$in\": BSON.array([BSON.int64(1), BSON.int64(2), BSON.int64(3)])\n ])\n]\npassedInArrayBSONBSON.array.array(...)[BSON][Int][Int][BSON]myArray.map { BSON.int64($0) }BSONDocumentBSONExpressibleBy", "text": "Hi @Mark_Windrim, welcome to the forums and thanks for reaching out!The short answer, and what you need to get your code to compile, is to both declare that passedInArray is an [BSON] when you create it, and explicitly state the corresponding BSON enum case that passedInArray corresponds to:The long answer and why you need this is slightly tricky, but I will do my best to explain. It involves Swift’s type inference capabilities and ExpressibleBy protocols.The BSON library has both a BSONDocument type which is essentially an ordered map of strings to BSON values, and a BSON type, which is an enum with associated values, where each case corresponds to a different BSON type.The BSONDocument type conforms to the ExpressibleByDictionaryLiteral protocol, where the dictionary is a [String: BSON].The BSON type conforms to a number of ExpressibleBy protocols:For example, one could do something likeWhich would result in an instance of the BSON enum with case .document wrapping a BSONDocument created from [\"a\": 1].Or:So in your first example, since you’ve added the BSONDocument type annotation, the compiler infers that \"someID\" is a String and thatis a BSON, and since it is a dictionary literal it is inferred to be a [String: BSON].Thus the compiler infers that $in is a String. Since [1, 2, 3] is an array literal, the compiler infers it to be an [BSON], and infers the individual elements 1, 2, 3, which are integer literals, to be BSONs as well.Written without the help of the ExpressibleBy protocol implementations for BSON, your first document would look like:Fortunately, you do not need to include the explicit enum cases for all of those.Now, visiting your second example: the problem is that, since passedInArray is not an array literal and is just a plain old array, the corresponding BSON type, BSON.array, cannot be automatically instantiated from it. Thus, you need the .array(...) 
around it (the compiler can infer the BSON prefix), and when initializing it you need to tell the compiler that it’s an [BSON] and not an [Int].You could also convert an [Int] to an [BSON] like: myArray.map { BSON.int64($0) }.Hopefully that is helpful, and let me know if you have further questions! In short, when writing out a document, if you are using a literal you do not need to state which BSON enum case it corresponds to, but if you are using a variable, you do.Relevant links:\nDocumentation for BSONDocument: BSONDocument Structure Reference\nDocumentation for BSON type: BSON Enumeration Reference\nBlog post on the ExpressibleBy protocols: Swift ExpressibleBy protocols: What they are and how they work internally in the compiler", "username": "kmahar" }, { "code": "func someFunction( incomingArray: [Int32]) throws -> MongoCursor<...>\nlet bsonArray: [BSON] = incomingArray\nerror: cannot assign value of type '[Int32]' to type '[BSON]'\nlet bsonArray: [BSON] = .array(incomingArray) \nerror: type '[BSON]' has no member 'array'\n", "text": "Hi,Wow. Thank you for such a detailed response. I was able to get things to work as long as I defined the array within the function, but if I pass it into the function, then I still run into issues. ie:It is the incomingArray above that I can’t get into BSON.results in:and if I do:results in:If I define the array specifically [1,2,3], then everything works as expected.", "username": "Mark_Windrim" }, { "code": "BSONmaplet bsonArray = incomingArray.map { BSON.int32($0) } // this gives you an [BSON]\nlet bson = BSON.array(bsonArray) // this gives you a BSON.array\nbson", "text": "I think you need to convert the individual array elements to BSONs as well. You could do this via map, like:And then use bson in your document.Let me know if that works.", "username": "kmahar" }, { "code": "let bson = BSON.array(bsonArray)", "text": "let bson = BSON.array(bsonArray)That did the trick! Thanks. I really appreciate all your help with this. Your response was very detailed.Mark", "username": "Mark_Windrim" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Swift driver - issue encoding BSON array
2021-05-13T17:22:58.703Z
Swift driver - issue encoding BSON array
4,368
null
[ "atlas-functions", "atlas-triggers" ]
[ { "code": "", "text": "Hi all,After spending some time searching I couldn’t find a clear way forward so here goes:Any idea on how to send the results to my backend?Thanks!", "username": "Al_B" }, { "code": "", "text": "Hi @Al_B and welcome in the MongoDB Community !Humm I think you are doing this the other way around. When the backend node server needs the data, he can consume a REST (webhook) or a GraphQL API for example to retrieve the data it needs.I guess you could also send a POST command to your backend though with the data when it’s ready.In this scenario, you don’t need a webhook. You just execute a function with the trigger that sends a POST command to your backend which needs to be listening, of course.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "exports = function() {\n const mongodbAtlas = context.services.get(\"mongodb-atlas\");\n const auctions = mongodbAtlas.db(\"myFirstDatabase\").collection(\"items\");\n //Date now\n var now = new Date();\n //Get customer with last bid\n const findHighestBidder = auctions.aggregate([\n {$match: { $and: [ \n {endDate: {$lt: now}}, \n {status: \"active\" }\n ] }},\n { $project : { status: 1, bidHistory: 1 } },\n {$addFields : {bidHistory : {$reduce : {\n input : \"$bidHistory\", \n initialValue : {bid : 0}, \n in : {$cond: [{$gte : [\"$$this.bid\", \"$$value.bid\"]},\"$$this\", \"$$value\"]}}\n }}}\n])\n const result = findHighestBidder.toArray\n\n //This returns the results as expected\n return result\n //This returns: \"{ results: {} }\"\n return context.http.post({\n url: \"http://27b1e5a30df2.ngrok.io/api/stripe/test\",\n body: {result} ,\n encodeBodyAsJSON: true\n })\n};", "text": "This worked!However, when trying to send an array, it’s sending just an empty response.\nWhen I return the array to the console, it’s full. Any clue what’s going here?", "username": "Al_B" }, { "code": "toArrayconst result = findHighestBidder.toArray()\ntoArray()result", "text": "I think you have a couple of issue in the above piece of code.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "const findHighestBidder = auctions.aggregate(pipeline).toArray()\n .then((results) => {\n return results\n })\n .catch(err => console.error(err))\n \n//I get the desired array here\n return findHighestBidder\n\n//Results is still returning nothing \n return context.http.post({\n url: \"http://679339cf1551.ngrok.io/api/stripe/test\",\n body: {Results: findHighestBidder} ,\n encodeBodyAsJSON: true\n })\n", "text": "Hi Maxime,I made the following changes. When I return the function, I get the array. When I return the POST method, the body is still empty.", "username": "Al_B" }, { "code": " coll.find(query, project).sort(sort).toArray()\n .then( docs => {\n response.setBody(JSON.stringify(docs));\n response.setHeader(\"Contact\",\"[email protected]\");\n });\nexports = function(payload, response) { ... }\n", "text": "I could be wrong, but maybe your need to stringify the docs in the body?I have something like this in one of my function:It’s an HTTP service implemented in Realm so here I have a payload and a response object.That’s why I have a response object here. 
Here the backend calls this GET webhook and I provide an HTTP answer.", "username": "MaBeuLux88" }, { "code": "findHighestBidderbody.Resultsexports = async function() {\n ...\n const findHighestBidder = await auctions.aggregate(pipeline).toArray();\n return context.http.post({\n url: \"http://679339cf1551.ngrok.io/api/stripe/test\",\n body: { Results: findHighestBidder },\n encodeBodyAsJSON: true\n })\n}\nhttp.post.then()exports = function() {\n ...\n return auctions.aggregate(pipeline).toArray().then(results => {\n return context.http.post({\n url: \"http://679339cf1551.ngrok.io/api/stripe/test\",\n body: { Results: results },\n encodeBodyAsJSON: true\n })\n })\n}\n", "text": "The issue is that findHighestBidder is a promise, not a string (or whatever other type the server expects body.Results to be). You need to either:", "username": "nlarew" }, { "code": "", "text": "Excellent, this worked! Thanks so much ", "username": "Al_B" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to get Realm Trigger + Webhook to work together?
2021-05-04T17:26:38.187Z
How to get Realm Trigger + Webhook to work together?
4,430
null
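This thread assumes a backend listening behind the ngrok tunnel; a hypothetical minimal receiver for the POST the trigger sends could look like the sketch below (Express is an assumption — only the route path and body shape come from the thread):

```javascript
// Hypothetical receiving side for the trigger's POST (Express is assumed).
const express = require("express");
const app = express();
app.use(express.json()); // the trigger sends the body as JSON

app.post("/api/stripe/test", (req, res) => {
  console.log("Results from Realm trigger:", req.body.Results);
  res.sendStatus(200);
});

app.listen(3000, () => console.log("listening on :3000"));
```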
[ "java", "android" ]
[ { "code": "> Task :app:compileDebugJavaWithJavac\nNote: Version 10.5.0 of Realm is now available: https://static.realm.io/downloads/java/latest\nNote: Processing class Album\nNote: Processing class Contact\nNote: Processing class DeviceInfo\nNote: Processing class Media\nNote: Processing class Playlist\nNote: Processing class Share\nNote: Creating DefaultRealmModule\nNote: [1] Wrote GeneratedAppGlideModule with: []\nNote: Some input files use or override a deprecated API.\nNote: Recompile with -Xlint:deprecation for details.\nNote: Some input files use unchecked or unsafe operations.\nNote: Recompile with -Xlint:unchecked for details.\n", "text": "When doing a build, the output for the compileDebugJavaWithJavac task results in:However, if I change my references in the build script to 10.5.0 (i.e. classpath ‘io.realm:realm-gradle-plugin:10.5.0’ in the project level build.gradle, and annotationProcessor ‘io.realm:realm-annotations-processor:10.5.0’ in the app module level build.gradle) I get dependency errors that the build system cannot find these.Could someone post the proper settings for a gradle build to retrieve version 10.5.0? The documentation still references 10.4.0.", "username": "Tad_Frysinger" }, { "code": "", "text": "@Tad_Frysinger: Thanks for reaching out to us. Let me check and get back to you.", "username": "Mohit_Sharma" }, { "code": "", "text": "Other folks are experiencing the same issue, see my post on SO here.", "username": "Tad_Frysinger" }, { "code": "", "text": "@Tad_Frysinger: Yes, our team is looking into it to fix it. Thank you once again.", "username": "Mohit_Sharma" }, { "code": "", "text": "Hi Mohit - any update on a resolution here?", "username": "Tad_Frysinger" }, { "code": "", "text": "@Tad_Frysinger: 10.5.0 is now available, can you please check once again.", "username": "Mohit_Sharma" } ]
Android Studio 4.2 references new version but I cannot find it?
2021-05-09T13:59:27.309Z
Android Studio 4.2 references new version but I cannot find it?
3,771
null
[ "queries", "text-search" ]
[ { "code": "", "text": "I am currently working as an intern, and I have been assigned a task to increase the speed of the query behind the search bar. That is, a substring of an email should be matched accordingly. Can I use text indexes for email and make it work?", "username": "Zephaniah_N_A" }, { "code": "$text", "text": "Hi @Zephaniah_N_A - welcome to the MongoDB Community Forum! From what I understand in your question, you have a collection that stores emails as documents, and you'd like to add or speed up a search box that allows you to search for email documents that contain certain words. If that's the case, then a text index and the $text operator should work reasonably well for you. If your MongoDB cluster is hosted on MongoDB Atlas then you could use Atlas Search instead. It's a little more complex to set up but it's more powerful. Mark", "username": "Mark_Smith" }, { "code": "", "text": "Hi @Mark_Smith, yeah, the text searches are working fine for some of the cases. I say some of the cases because, if we consider an email like zeph with any domain, my expected behaviour is that if I type ze the query should return zeph. That isn't the case when using text indexes; instead, what I found out is that they use delimiters like .!-,. to split tokens, which is not the ideal behaviour for my problem.", "username": "Zephaniah_N_A" }, { "code": "", "text": "Just to clarify, you're searching for an email address, not an email body?", "username": "Mark_Smith" }, { "code": "", "text": "Yes, the email address - in the sense that it is stored as a string, right.", "username": "Zephaniah_N_A" } ]
Text Index for Substring of a Field?
2021-05-11T13:14:55.041Z
Text Index for Substring of a Field?
3,678
null
[ "data-modeling", "atlas-device-sync" ]
[ { "code": "class Child{\n String partitionKey = \"child\";\n}\nclass Parent{\n String partitionKey = \"parent\";\n Child child;\n}\n", "text": "Let’s say we have this model:If we choose partitionKey as our partition key and try sync partition “parent” without syncing “child”. Does realm sync child when syncing “parent”?", "username": "mahdi_shahbazi" }, { "code": "import RealmSwift\n\n@objcMembers class Parent: Object, ObjectKeyIdentifiable {\n dynamic var _id = UUID().uuidString\n dynamic var child: Child?\n}\n\n@objcMembers class Child: EmbeddedObject, ObjectKeyIdentifiable {\n dynamic var _id = UUID().uuidString\n}\npartitionKeyParentChildParentParentParentpartitionKeyChild", "text": "Not sure which SDK you’re using, but if I were working with Swift then I’d declare the classes like this:Note that there’s no need to add partitionKey to Parent as the SDK will handle that.As Child is embedded in Parent, it will be stored as an embedded document within Parent docs in Atlas.When you open a Realm, you provide the partition you want to work with – it will then sync all documents from the Parent collection where partitionKey matches the string you specify for the partition (including the Child data.You can read a lot more about how Realm partitions work in this article.", "username": "Andrew_Morgan" }, { "code": "", "text": "Hey @Andrew_Morgan\nThanks for reply. I have read your article about strategies. The structure that I added in my question is just a sample. I know there is better strategies for this sample but think it as part of a bigger structure that you have to use this way.\nMaybe adding structure made my question complex but it’s a simple question.\nif model A has a reference to model B but they are in different portion, Does mongo sync model B during syncing model A?", "username": "mahdi_shahbazi" }, { "code": "ChildParentParentChildChildParentchildnil", "text": "What I described was embedding (where on the Atlas side, Child documents are actually sub-documents within the Parent collection.The other way to model this is to define relationships between the Parent and Child collections in the backend Realm schema. In this case, the Child will only get synced if its partition is set to the same value as the Parent document – otherwise, child will be set to nil.", "username": "Andrew_Morgan" } ]
Does mongo realm sync any model that has relation with a synced model?
2021-05-12T12:39:53.401Z
Does mongo realm sync any model that has relation with a synced model?
2,040
null
[ "node-js" ]
[ { "code": "Logged in with user 5ff8c70ddd74f48bbe641a4c\nConnection[1]: Session[1]: client_reset_config = false, Realm exists = true, async open = false, client reset = false\nConnection[1]: Connected to endpoint '\"remote-ip-here\"' (from '192.168.1.212:60173')\nERROR: Connection[1]: Websocket: Expected HTTP response 101 Switching Protocols, but received:\nHTTP/1.1 401 Unauthorized\nexport const realmApp = new Realm.App({ id: 'my-app-id' });\nvar realm: Realm;\n\nasync function run() {\n const credentials = Realm.Credentials.anonymous();\n await realmApp.logIn(credentials);\n console.log('Logged in with user', realmApp.currentUser?.id);\n realm = await Realm.open({\n schema: [TaskSchema],\n sync: {\n user: realmApp.currentUser as Realm.User,\n partitionValue: 'myPartition',\n },\n });\n}\n\nrun().catch((err) => {\n console.error('Failed to open realm:', err);\n});\n", "text": "I have been following the NodeJS MongoDB Realm Quick Start tutorial. The connection is made with the MongoDB Atlas and then immediately the app quits withI have enabled Realm Sync and enabled development mode and anonymous access.My code is pretty much the same as mentioned in the tutorial:Any ideas?", "username": "Sheikh_Muhammad_Umar" }, { "code": "", "text": "Sheikh,Make sure that you have a sufficient Node.js version (> 10), seeThen clear your local MongoDB Realm cacherm -rf mongodb-realmAnd try again.Richard Krueger", "username": "Richard_Krueger" }, { "code": "Connection[1]: Connected to endpoint '<remote-ip-here>:443' (from '192.168.8.105:41332')\nERROR: Connection[1]: Websocket: Expected HTTP response 101 Switching Protocols, but received:\nHTTP/1.1 401 Unauthorized\n", "text": "Hi Richard thank you for the reply. I am using NodeJS version 12.20.1 so I do not think that is causing issues. I have an IoT app that collects data from some devices all day long. I am now using api keys in my app. The app does not error out as before immediately when the app is launched. But after some hours it crashes with the same error.I have tried removing the mongodb-realm folder twice now, but still the same issue occurs. Any further suggestions as to what might be actually happening here?", "username": "Sheikh_Muhammad_Umar" }, { "code": "is-internet-availableisInternetAvailable()isInternetAvailable().then(async(internetConnected = console.log)=>{\n //gives true when internet available else gives false\n console.log(\"internet connected \", internetConnected) \n if(internetConnected){\n\n //if old user exists, then logout\n if(app.currentUser){\n //logout first\n app.currentUser.logOut()\n\n //and then login\n user = await app.logIn(credentials).catch(err=>{\n console.log(\"err\")\n })\n }\n else{\n //login directly if previous user doesn't exist\n user = await app.logIn(credentials).catch(err=>{\n console.log(\"err\")\n })\n }\n }\n }).catch(err=>{\n console.log(err)\n })\nsetTimeout()", "text": "I had encountered the same issue a while ago. I got no help from the online community and had to come up with my own DIY solution. The error code 401 means the user is unauthorized and must re-login to be authorized. So I implemented a code while opening the app if the device is connected to the internet, log out the user, and re-login. To find out internet connectivity, I installed is-internet-available package and used isInternetAvailable() function.Now I am facing a situation where if the app is idle for a little longer period of time, it gives the same error. 
If you also face this problem, you can make this a function and call it periodically using the setTimeout() function.", "username": "Maneez_Paudel" } ]
Working with MongoDB Realm tutorial for Node js gives HTTP/1.1 401 Unauthorized error
2021-01-08T23:35:50.048Z
Working with MongoDB Realm tutorial for Node js gives HTTP/1.1 401 Unauthorized error
4,081
https://www.mongodb.com/…f_2_1024x757.png
[ "database-tools" ]
[ { "code": "mongodump --uri ${DB_URI} --gzip --archive=\"${archive}\"mongorestore --uri ${mongo.uri} --drop --gzip --archive=\"$dumpPath\" --nsInclude=\"${mongo.sourceDatabase}.${mongo.sourceCollection}\" --nsFrom=\"${mongo.sourceDatabase}.${mongo.sourceCollection}\" --nsTo=\"${mongo.targetDatabase}.${mongo.targetCollection}", "text": "Hi all,We have a process of working with mongodb dumps. The flow is as follows:Both DBs have the same 4.2.5 versionEverything works great, except the fact that Date data type gets converted into String data type (at least this is what we see exploring the collection items) + Date format is changed from Instant (2019-06-15 11:45:58.364Z) to something having the time zone (2019-06-15T11:45:58.364+00:00)diff1156×855 37.9 KB", "username": "Vladyslav_Baidak" }, { "code": "mongodump --versionmongorestore --version", "text": "Hi @Vladyslav_Baidak,Apologies for the late reply. Are you still having this issue? Out of curiosity, I tried replicating your procedure using the same MongoDB version but didn’t seem to have this issue. Both mongodump and mongorestore are not trying to do anything fancy and just dump the raw BSON data. They do not tamper with the data at all, so it’s curious how you can get the wrong type restored.If this is still an issue, the output of mongodump --version and mongorestore --version might be useful. Also, are you certain that there is no app touching the collection after restore?Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi @kevinadi ,\nThanks for your reply.I’ve check this one more time and it seems that the issue is on our side, not related to mongodump / mongorestore.So I think we can close the ticket\nThanks!", "username": "Vladyslav_Baidak" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Mongodump / Mongorestore Changes Date Type to String
2021-04-15T14:14:25.441Z
Mongodump / Mongorestore Changes Date Type to String
2,903
null
[ "dot-net", "indexes" ]
[ { "code": "", "text": "There was a CreateIndexOptions that had a Unique bool property on it, but the overloads of collection.Indexes.CreateOne() that use that are marked as obsolete/deprecating. Now it uses a CreateOneIndexOptions that does not have a property for Unique. Where do I set an index to be unique?On a separate note, I’m finding this c# driver is hard to follow along with and match to the corresponding Mongodb documentation. It feels like there are conventions or style that are unique to mongodb (and not common to .net) that may be getting followed, but its not mentioned or explained anywhere. It would be a great timesaver if that were part of the “Getting Started”.", "username": "Andrew_Stanton" }, { "code": "var dbClient = new MongoClient(\"mongodb://127.0.0.1:27017\");\nvar db = dbClient.GetDatabase(\"test\");\nvar collection = db.GetCollection<Books>(\"books\");\n\n// Create the unique index on the field 'title'\nvar options = new CreateIndexOptions { Unique = true };\ncollection.Indexes.CreateOne(\"{ title : 1 }\", options);\n\n// List all the indexes on the collection\nvar ixList = collection.Indexes.List().ToList<BsonDocument>();\nixList.ForEach(ix => Console.WriteLine(ix));\n", "text": "Hello @Andrew_Stanton, welcome to the MongoDB Community forum!Here is the code to create a Unique index on a field using C# Driver:Here is the MongoDB C# Tutorial link for the Administration; see the specific topic Indexes within the same page.There is also a Getting Started → Admin Quick Tour. Look for the topic Indexes on the same page.", "username": "Prasad_Saya" }, { "code": "#pragma var options = new CreateIndexOptions\n {\n Unique = isUnique,\n Name = $\"{collectionName}_{fieldName}\"\n };\n#pragma warning disable CS0618 // Type or member is obsolete\n var createdIndexName = collection.Indexes.CreateOne($\"{{ {fieldName} : 1 }}\", options);\n#pragma warning restore CS0618 // Type or member is obsolete", "text": "Thanks @Prasad_Saya, thats the way I’m doing it, but I have to add #pragma to disable the obsolete error.I cant find the correct non-obsolete unique index creation function in this c# driver. Any ideas?", "username": "Andrew_Stanton" }, { "code": "titlebooksvar cmdStr = \"{ createIndexes: 'books', indexes: [ { key: { title: 1 }, name: 'title-uniq-1', unique: true } ] }\";\nvar cmd = BsonDocument.Parse(cmdStr);\nvar result = mongoClient.GetDatabase(\"test\").RunCommand<BsonDocument>(cmd);\nConsole.WriteLine(result);\n", "text": "Hello @Andrew_Stanton, try this one.This is by running the Database Command for createIndexes. The following creates a unique index on the title field of the books collection.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @Prasad_Saya,How would you suggest going about checking the index is not already created?. Or is mongo handling duplicated index creation itself?. Thank you!.", "username": "Alejandro_Nagy" }, { "code": "mongodb.collection_name.getIndexes()", "text": "Hello @Alejandro_Nagy, welcome to the MongoDB Community forum!From mongo shell, you can run the following command to see all the indexes on a collection, e.g.,:db.collection_name.getIndexes()If you try to create the same index again, the command is ignored - the duplicated index creation is not possible.", "username": "Prasad_Saya" }, { "code": "", "text": "", "username": "Stennie_X" } ]
What is the syntax for creating a new unique index on a field using the c# driver?
2021-02-14T04:51:56.137Z
What is the syntax for creating a new unique index on a field using the c# driver?
19,438
null
[ "aggregation", "golang" ]
[ { "code": "{\n _id : \"AAA\",\n status: \"active\",\n desc: \"the desc of AAA\",\n creation_date: \"2019-05-11T10:59:55.627+00:00\"\n}\nbson.D{{\n \"$lookup\", bson.D{\n {\"from\", \"alias\"},\n {\"localField\", \"_id\"},\n {\"foreignField\", \"place_id\"},\n {\"as\", \"aliases\"}\n }\n}}\n{\n _id: ObjectId(\"605af6b150d88dccc7bbadd8\"),\n place_id: \"AAA\",\n content: \"it is the alias of AAA\"\n}\n{\n _id : \"AAA\",\n status: \"active\",\n desc: \"the desc of AAA\",\n creation_date: \"2019-05-11T10:59:55.627+00:00\",\n aliases: [\"it is the alias of AAA\"]\n}\n{\n _id : \"AAA\",\n status: \"active\",\n desc: \"the desc of AAA\",\n creation_date: \"2019-05-11T10:59:55.627+00:00\",\n aliases:\"\"\n}\n{\n _id : \"AAA\",\n status: \"active\",\n desc: \"the desc of AAA\",\n creation_date: \"2019-05-11T10:59:55.627+00:00\",\n aliases: [\"it is the alias of AAA\"]\n}\n", "text": "Hello, I use a pipeline to get some data from collection A, the document Da in collection A looks like:At the end of the pipeline there is a $lookup stage, it is:The document in collection “alias” looks likeThe returned document from the pipeline should be:The issue is: the aliases field is not in the result, only document Da is returned. It seems the $lookup stage not work, but all stages ahead of $lookup stage works fine, otherwise Da will not be returned.But, if I manually add an “aliases” field in the document Da in collection A, even the “aliases” is empty, i.e. if I change Da to:then if I execute the same pipeline, with the ending stage is the $lookup, the return result of the pipeline will be:in other words, the pipeline running correctly. the $lookup seems worked.\nI don’t know why such issue happened, thanks for the help.", "username": "Zhihong_GUO" }, { "code": "{\n _id : \"AAA\",\n status: \"active\",\n desc: \"the desc of AAA\",\n creation_date: \"2019-05-11T10:59:55.627+00:00\",\n aliases: [\"it is the alias of AAA\"]\n}\n{\n\t\"_id\" : \"AAA\",\n\t\"status\" : \"active\",\n\t\"desc\" : \"the desc of AAA\",\n\t\"creation_date\" : \"2019-05-11T10:59:55.627+00:00\",\n\t\"aliases\" : [\n\t\t{\n\t\t\t\"_id\" : ObjectId(\"605af6b150d88dccc7bbadd8\"),\n\t\t\t\"place_id\" : \"AAA\",\n\t\t\t\"content\" : \"it is the alias of AAA\"\n\t\t}\n\t]\n}\n", "text": "Given the sample documents you provided and the pipeline you provided you should not get the following as result:It should be (as tested in the shell):So there is some other manipulations you are doing (to get only the string in the array) that you are not telling us about and I suspect the problem lies there. 
In the shell I get exactly the same result whether a field aliases exists or not in the A collection.Please provide the whole pipeline.", "username": "steevej" }, { "code": "{\n _id : \"AAA\",\n status: \"active\",\n desc: \"the desc of AAA\",\n creation_date: \"2019-05-11T10:59:55.627+00:00\",\n location : {\n type : \"Point\",\n coordinates : [ 5.90375, 10.2892]\n}\nfunc createGeoStage(lng float32, lat float32, radius int32) (jsonStage string) {\n\n\tgeoStage := `\n\t{\n\t\t\"$geoNear\":{\n\t\t\t\"includeLocs\":\"location\",\n\t\t\t\"distanceField\":\"distance\",\n\t\t\t\"near\":{\n\t\t\t\t\"type\":\"Point\",\n\t\t\t\t\"coordinates\":[ %f, %f]\n\t\t\t},\n\t\t\t\"maxDistance\": %v,\n\t\t\t\"spherical\":true\n\t\t}\n\t}`\n\tgeoStage = fmt.Sprintf(geoStage, lng, lat, radius)\n\treturn geoStage\n}\n\nbson.D{{\n \"$lookup\", bson.D{\n {\"from\", \"alias\"},\n {\"localField\", \"_id\"},\n {\"foreignField\", \"place_id\"},\n {\"as\", \"aliases\"}\n }\n}}\n\"searching\":[[{\"Key\":\"$geoNear\",\"Value\":[{\"Key\":\"includeLocs\",\"Value\":\"location\"},{\"Key\":\"distanceField\",\"Value\":\"distance\"},{\"Key\":\"near\",\"Value\":[{\"Key\":\"type\",\"Value\":\"Point\"},{\"Key\":\"coordinates\",\"Value\":[5.9037,10.289]}]},{\"Key\":\"maxDistance\",\"Value\":2000},{\"Key\":\"spherical\",\"Value\":true}]}],[{\"Key\":\"$lookup\",\"Value\":[{\"Key\":\"from\",\"Value\":\"alias\"},{\"Key\":\"localField\",\"Value\":\"_id\"},{\"Key\":\"foreignField\",\"Value\":\"place_id\"},{\"Key\":\"as\",\"Value\":\"aliases\"}]}]],\n\tcur, err := placeColl.Aggregate(context.TODO(), pipeline, options.Aggregate()) // use the default options of aggregate\n\n", "text": "Hello Steeve, thanks for the support. The whole document Da in collection A isThe stage before the lookup is a geoNear:By setting the location as [5.9037, 10.289] and radius = 2000, I can create the geoNear stage and return Da.\nThe lookup stage isThe full pipeline printed by the logging:I am using the golang driver and the way I call the Aggregate is:There are $limit and $skip operators in the pipeline, but I think they will not impact the result array.", "username": "Zhihong_GUO" }, { "code": "\tlookupStage := bson.M{\n\t\t\"$lookup\": bson.M{\n\t\t\t\"from\": \"alias\",\n\t\t\t\"let\": bson.M{\"placeID\": \"$_id\"},\n\t\t\t\"pipeline\": []bson.M{\n\t\t\t\tbson.M{\n\t\t\t\t\t\"$match\": bson.M{\n\t\t\t\t\t\t\"$expr\": bson.M{\"$eq\": []string{\"$place_id\", \"$$placeID\"}},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tbson.M{\n\t\t\t\t\t\"$project\": bson.M{\"content\": 1, \"_id\": 0},\n\t\t\t\t},\n\t\t\t},\n\t\t\t\"as\": \"aliases\",\n\t\t},\n\t}\n", "text": "Hello again, there is a view on Collection place, the view is created by the pipeline below:But when I run the pipeline with geoNear directly on the view, I get an error saying “$geoNear should be the first stage of the pipeline”. Then I changed to run the $geoNear on the collection A, and in the meantime I added a $lookup stage after the $geoNear to join some content from the alias collection.\nIt seems when I run the pipeline on the collection, it uses some of the results from its view, because in the view creation process the “content” field in the alias document is kept as “aliases” in the document Da. 
But if there is no “aliases” field in the collection A, the pipeline can’t get the content via the lookup stage.\nWhat I can confirm is I did run the pipeline on the collection, not on the view.", "username": "Zhihong_GUO" }, { "code": "bson.M{\n\t\t\t\t\t\"$project\": bson.M{\"content\": 1, \"_id\": 0},\n\t\t\t\t},\n", "text": "So there are some other manipulations you are doing (to get only the string in the array)This is the manipulation I was writing about:To be honest, now I am lost. That is why we prefer to have exact collection names and real documents. When things are redacted, sometimes they are not redacted consistently and we are lost.Can you confirm the exact names of your collections and views? You started with A but now it seems that it is named places or something like that.", "username": "steevej" }, { "code": "", "text": "Hello, the collection A is the “place” collection, and there is another collection “alias”; on the “place” collection I created a view “place_view_with_alias” which is built by a pipeline with the $lookup operator. The $lookup will find docs in the “alias” collection from the “place” collection, and $project the “content” to be “aliases” in the view “place_view_with_alias”. I confirm I use the $project operator to trim the content. But when I run the pipeline (geoNear + lookup) on the “place” collection, in the $lookup stage there is no $project operator as you have mentioned. I can confirm that the $project operator is only used in the pipeline creating the view. My problem is that when I use the (geoNear+lookup) pipeline on the “place” collection, it returns the docs from the view “place_view_with_alias” (if I add an “aliases” field in the “place” collection), or returns docs without anything from alias (if there is no “aliases” in the place collection).", "username": "Zhihong_GUO" }, { "code": "{\n\t\"$lookup\" : {\n\t\t\"from\" : \"alias\",\n\t\t\"let\" : {\n\t\t\t\"placeID\" : \"$_id\"\n\t\t},\n\t\t\"pipeline\" : [\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"$expr\" : {\n\t\t\t\t\t\t\"$eq\" : [\n\t\t\t\t\t\t\t\"$place_id\",\n\t\t\t\t\t\t\t\"$$placeID\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$project\" : {\n\t\t\t\t\t\"content\" : 1,\n\t\t\t\t\t\"_id\" : 0\n\t\t\t\t}\n\t\t\t}\n\t\t],\n\t\t\"as\" : \"aliases\"\n\t}\n}\n{\n\t\"_id\" : \"AAA\",\n\t\"status\" : \"active\",\n\t\"desc\" : \"the desc of AAA\",\n\t\"creation_date\" : \"2019-05-11T10:59:55.627+00:00\",\n\t\"aliases\" : [\n\t\t{\n\t\t\t\"content\" : \"it is the alias of AAA\"\n\t\t}\n\t]\n}\n", "text": "As I am not familiar with golang, I transformed your pipeline into a shell query that gave me:and that almost works, as I get:I am too unfamiliar with golang and I don’t think I can help further. Hopefully, a more skilful golang user can.", "username": "steevej" }, { "code": "", "text": "@steeve, Hello Steeve, thank you again for the kind help.", "username": "Zhihong_GUO" } ]
Lookup aggregation issue
2021-05-10T14:47:39.470Z
Lookup aggregation issue
5,824
null
[ "queries", "python" ]
[ { "code": "", "text": "Hello!I’m using pymongo with Python 3.9.4 on Windows 10.When I do a find and get a Cursor object I call count but that produces this deprecation warning:DeprecationWarning: count is deprecated. Use Collection.count_documents instead.The problem there is that deprecation warning is for the Collection class, not the Cursor.Regardless, I tried just calling count_documents on the object and it cries foul: it’s a Cursor, not a Collection.res_structs = mongo_coll_structs.find( {‘JobName’ : job_name, ‘ImportStatus’: { ‘$ne’: ‘success’ } }\nprint(\"GetStructsForJobs: found this many structs for \" + job_name + \" : \" + str(res_structs.count()))Here are the packages I have installed:Package Versionboto3 1.17.51\nbotocore 1.20.51\njmespath 0.10.0\npip 21.1.1\npymongo 3.11.4\npython-dateutil 2.8.1\ns3transfer 0.3.7\nsetuptools 49.2.1\nsix 1.15.0\nurllib3 1.26.4", "username": "Paul_Hamrick" }, { "code": "", "text": "I found the solution to my problem: I was misinterpreting that deprecation warning.The warning is instructing me to use Collection.count_documents instead of Cursor.count, it is NOT telling me to use Cursor.count_documents because that doesn’t exist… duh.", "username": "Paul_Hamrick" }, { "code": "", "text": "The learning curve, heh? All been there. Welcome to the community @Paul_Hamrick.", "username": "MaxOfLondon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cursor, count vs count_documents, Deprecation Warning
2021-05-12T16:18:40.478Z
Cursor, count vs count_documents, Deprecation Warning
13,603
null
[ "atlas-triggers" ]
[ { "code": "", "text": "I am trying to add a database trigger to an app which is connected to an M2 mongodb cluster and got a message saying ‘maximum database trigger count for cluster size=‘M0’ is 5’.I have several apps connected to this cluster but have less than 10 triggers configured, which is the limit for M2.Another strange issue is that when I configure the trigger, on the list of clusters I see an empty list item that I am able to select but without a name.I have 2 set of apps, one connected to an M0 cluster for dev and one connected to M2 for production.Thx\nMichael", "username": "michael_schiller" }, { "code": "", "text": "Hi @michael_schiller,Was this an M0 which was upgraded to M2? If so, can you try make a change to the settings in the Realm App UI (such as enabling/disabling Wire Protocol Connections) and then deploying? I believe this should make Realm aware of the changes.Kind Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank you, that solved the issue ", "username": "michael_schiller" }, { "code": "", "text": "Happy to help Michael and glad to hear it’s resolved the issue! Please consider accepting this answer as the solution for future users if possible ", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to add a database trigger to an app connected to M2 cluster
2021-05-12T09:44:08.311Z
Unable to add a database trigger to an app connected to M2 cluster
2,358
null
[ "queries", "php" ]
[ { "code": "", "text": "Hello, I’m just beginner to learn mongodb,but i have a question about mongodb script\nHow to query data by date using phpi can query data by date in mongodb using this script{ Date : { $gt:ISODate(‘2021-01-01’), $lt:ISODate(‘2021-02-30’)}}here my code in php but It doesn’t work,$startdate = ‘2021-01-01’;\n$stopdate = ‘2021-02-30’;\nforeach ($collection->find( [‘Date’ => [’$gt’=>$startdate ],[’$lt’=>$stopdate]]) as $doc){\n$doc[‘somedata’];\n}", "username": "Kittipot_Anuwet" }, { "code": "Date", "text": "Looks like you are comparing Date to a string right now. Have you looked at this docs page: PHP: MongoDB\\BSON\\UTCDateTime - Manual?", "username": "Asya_Kamsky" } ]
How to query data by date using php
2021-05-11T08:44:12.675Z
How to query data by date using php
8,039
null
[ "queries", "php" ]
[ { "code": "", "text": "Hi Team,I am using Mongo DB with YII2 (PHP framework), I am fetching records from database(Mongo) and I have one word and searching in description, if word present in description so I don’t have to fetch such record from database(Mongo).Please let me know Mongo DB query and how that I have to add YII2 .Please find below query in MySQL and same I need in Mongo DB.\nselect * from product where description not like “%Soap%”Request you to please help.", "username": "Pooja_Makhija" }, { "code": "db.product.find( { \"description\": {\"$not\": /Soap/} })\n/Soap/Soap\"$not\"", "text": "You can use analogous MQL query:/Soap/ has implicit wildcards before and after, in other words it matches any description that has the string Soap anywhere in it, and \"$not\" … well, it’s pretty self-explanatory…Asya", "username": "Asya_Kamsky" } ]
Mongo DB with Yii2 - word present in description ignore that record
2021-05-12T15:14:36.106Z
Mongo DB with Yii2 - word present in description ignore that record
2,622
null
[]
[ { "code": "failed to import app: cannot link cluster for Atlas service via source controlrealm-cli", "text": "I just set up my Realm app for deployment via GitHub (cool!) but when I tried it just now I got failed to import app: cannot link cluster for Atlas service via source controlI had already successfully used realm-cli to configure the app (including linking cluster) and everything worked fine. The commit I am pushing has nothing to do with Atlas, it’s just a file in the hosting dir.", "username": "Ted_Hayes" }, { "code": "", "text": "Hi @Ted_Hayes, are you able to share the repo so that I can try it out?", "username": "Andrew_Morgan" }, { "code": "realm-clidata_sources/mongodb-atlas/config.json \"name\": \"mongodb-atlas\",\n \"type\": \"mongodb-atlas\",\n \"config\": {\n \"clusterName\": \"Cluster0\",\n \"readPreference\": \"primary\",\n \"wireProtocolEnabled\": false\n },\n \"version\": 1\n}\nname", "text": "I can’t really share the repo unfortunately, but it was just exported using realm-cli from a basic project I had started. I hadn’t even changed or configured anything.I thought perhaps it could have to do with the Export for Source Control limitations, but I’m using realm-cli beta and that doesn’t offer that flag (perhaps they just made it compliant by default?)So I inspected data_sources/mongodb-atlas/config.json:This seems compliant except for the name field, so I tried taking that out but still got the same error.", "username": "Ted_Hayes" }, { "code": "config.clusterNamerealm_config.json", "text": "I was able to recreate the error.Firstly, you don’t need to flag the export as for source control with the new CLI.I was able to fix the issue by removing the config.clusterName attribute and leaving realm_config.json unchanged (that part of the documentation is out of date - you shouldn’t edit that file after it’s been exported).", "username": "Andrew_Morgan" }, { "code": "\"clusterName\": \"Cluster0\"mongodb-atlas/config.json", "text": "Aha, I got it to work by removing \"clusterName\": \"Cluster0\" from mongodb-atlas/config.json. Is that a bug on the Realm end, then?Thank you so much!!!", "username": "Ted_Hayes" }, { "code": "", "text": "Glad to hear that the workaround works. The engineering team is working on the fix as we type.", "username": "Andrew_Morgan" } ]
Failed to import app: cannot link cluster for Atlas service via source control
2021-05-11T20:13:58.321Z
Failed to import app: cannot link cluster for Atlas service via source control
2,590
null
[]
[ { "code": "", "text": "Currently I use Netlify for static hosting, which will run a build script when I push a commit. AFAICT this is not currently a feature for Realm. Does anyone know if that is planned? I didn’t see it on the feedback site so I created it: Build system for static hosted SPA – MongoDB Feedback Engine", "username": "Ted_Hayes" }, { "code": "", "text": "Thanks for creating the feature request.I wonder if there’s a way to achieve the same results using GitHub Actions?", "username": "Andrew_Morgan" }, { "code": "", "text": "Oh, I hadn’t thought of that—that could work!", "username": "Ted_Hayes" } ]
Build script for static hosted SPA?
2021-05-12T15:37:07.460Z
Build script for static hosted SPA?
1,523
https://www.mongodb.com/…a27f2b2eb2f.jpeg
[ "app-services-user-auth" ]
[ { "code": "", "text": "I have implemented the solution provided by @Pavel_Duchovny, mentioned in the topic - Delete anonymous users upon log out: trigger?I found that anonymous users are not deleted with API key having “Organization Member” only permission. But users get deleted only if it is having “Organization Owner” permission.\nSS929×493 46 KB\nCan anybody please guide what minimum permission this API Key should have in order to delete anonymous user? What is the best practice?Because giving API key “Organization Owner Permission” i.e, root permission to delete only anonymous users is very risky and unnecessary.", "username": "Sudarshan_Roy" }, { "code": "", "text": "You’re deleting users and so it doesn’t surprise me that you’d need elevated privs. Did you try creating an API key at the project rather than org level – at least the privs would be constrained to a specific project.", "username": "Andrew_Morgan" }, { "code": "", "text": "Initially, I have created API keys from Project Access Manager only, with following permission. But it did not wor - could not able to delete users. Tried with different other combinations of permission, still it did not work.ScS696×453 22.2 KBAfter that I went to Organization Access Manager and found the same key is available there also. I elevated its privilege to Organization Owner. Then It started working.As per my thought, API Keys having a project level permission of “Project Data Access Admin” should be able to delete user.", "username": "Sudarshan_Roy" }, { "code": "", "text": "Did you try “Project Owner”?", "username": "Andrew_Morgan" }, { "code": "", "text": "Yaah… Tried. But that too did not work…", "username": "Sudarshan_Roy" }, { "code": "", "text": "For me, it works with “Project Owner” and “Organization member” levels. It doesn’t work with any level below “Project Owner”.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Still, having \"read-only access to the organization (settings, users, and billing) \" seems a bit overkill for the task. Why does an API key created in a project automatically have access to the organization data?", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "For me, it works with “Project Owner” and “Organization member” levels. It doesn’t work with any level below “Project Owner”.I have tried this combination. It worked for me also.", "username": "Sudarshan_Roy" } ]
Minimum permission API key should have to delete anonymous user
2021-05-12T07:58:38.518Z
Minimum permission API key should have to delete anonymous user
3,122
null
[ "golang" ]
[ { "code": "", "text": "Hello,\nIs it possible to run mongo js scripts written as strings via the mongo-go-driver?\nI’d like to write some scripts in dedicated files, and execute them via the driver to manage the authentication etc… But I don’t figured out a way to do this Thanks in advance!", "username": "Fabrizio_Cirelli" }, { "code": "", "text": "I don’t think there is a built-in mechanism to do this. Even I am looking for this feature and didn’t find a way.Your only option is to write a custom function yourself.P.S. Let me know if you find any easy way to do it.", "username": "Harshavardhan_Kumare" } ]
Execute a script from mongo-go-driver?
2021-05-07T09:05:54.840Z
Execute a script from mongo-go-driver?
2,547
null
[ "queries", "golang" ]
[ { "code": "bson.D{{\"$or\", []interface{}{\n bson.D{{\"date\", bson.M{\"$eq\": bsontype.Null}}},\n bson.D{{\"date\", bson.M{\"$exists\": false}}},\n},\n", "text": "Hi Guys, im trying to query mongodb documents (from Go) if a certain column does not exist or if the column value is null, so far with the query i have it only return rows where the column does not exist and leaves out rows where the value is null. here is what i have for the filter:this query does not give any errors, it just returns only rows where the date column does not exist, leaving out those where it exists and the value is null. does anyone have an idea of how to achieve this in Go? all response is appreciated. thanks…", "username": "the_tochi" }, { "code": "nilbson.D{{\"$or\", []interface{}{\n bson.D{{\"date\", nil}},\n bson.D{{\"date\", bson.M{\"$exists\": false}}},\n},\nnullfindQry := bson.D{{\"date\", nil}} // will fetch documents if the key doesn't exist too in addition to null\n", "text": "You could use the nil type check which is a built-in datatype in golang.Note: By default, MongoDB considers keys that don’t exist in a document as nullTherefore the below query will also work for your scenario.", "username": "Harshavardhan_Kumare" } ]
Query if a field exist or is null from Golang
2021-05-07T01:40:17.614Z
Query if a field exist or is null from Golang
14,756
null
[ "database-tools", "vscode" ]
[ { "code": "\"MongoDB\": {\n \"path\": \"mongo\",\n \"args\": [\"$Env:MDB_CONNECTION_STRING\"]\n}\n", "text": "Hello,This is regarding VSC extension and whether it is possible to add the MongoDB Shell to the terminal options along side (Powershell, CMD and Git Bash for example), in such a way that if I am currently connected to a MongoDB database, it would allow me instantly duplicate a connection.Much in the same way the MongoDB: Launch MongoDB Shell, fires off a: -mongo $Env:MDB_CONNECTION_STRING;Or even better, if I use the split option on a currently created connection, it would duplicate that connection. At the moment if I use WIN CTRL-SHIFT-5, it spawns off a powershell session, which isn’t what I would like.If it’s possible already, then let me know. I have tried this within the integrated terminal settings for windows JSON: -No joy, it comes back with a “The terminal process “mongo ‘$Env:MDB_CONNECTION_STRING’” failed to launch (exit code: 1).”Also it would be useful if $ENV:MDB_CONNECTION_STRING was set/reset when a connection was made to the MongoDB database, and not when the MongoDB: Launch MongoDB Shell was instigated.ThanksPS - How is the VSC extension coming along, it’s been a been a bit quiet on the enhancements lately.PPS - Where do we put feature requests nowadays?", "username": "NeilM" }, { "code": "", "text": "Hi @NeilM,thank you for the feedback on this. I recommend adding this feature request to our feedback portal: https://feedback.mongodb.com/forums/929236-mongodb-for-vs-code.There is, however a workaround for what you are trying to do. It sounds like your goal is to quickly open a new shell instance connected to the server/cluster the extension is currently connected to. A command for that is available in the command palette:\n\nThat means you can assign a keyboard shortcut to it and every time you use that shortcut, you’ll get a new shell instance.I am curious: what do you do with multiple shell instances connected to the same cluster?And yes, you are right. We have been pretty quiet on enhancements. We will probably do a bugfix release soon and probably a bigger one in a month or so.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "The issue with the Launch new shell, is that it creates a new shell completely, not a split. I ideally don’t want to be going back and forth between integrated terminal views.The reason why.Sometimes I will want to look at data, while on another terminal session refine a query, or I want to check that the playground has given me the correct results, which it doesn’t always. Sometimes comes back with [ ] btw.If I have the mongodb connections tab open, then I lose a right side (CTRL-B), and at the moment the schema analyze view in connections, isn’t as good as the compass one.If I am at a terminal level, then I can spawn multiple terminal sessions quickly all nicely spaced. Looking at data via the connections tab (So not always a good space trade off), and fills up the screen with tabs. 
See screen shot.I suppose it’s all about having options on how to work (The view below is on a 42\" monitor so it’s a lot more readable than it seems).VSCTerminalView3850×2108 618 KB", "username": "NeilM" }, { "code": "", "text": "Oh by the way this linkhttps://feedback.mongodb.com/forums/929236-mongodb-for-vs-codeCan we please make the box where we enter the suggestion text box re-sizable, so I can read the whole of the text, there didn’t seem to be an option to resize in chrome.Would it be possible to allow add images (about a 1000kb), since I had to cross post back to this page to link in a screenshot.Why was suggestions/features moved away from Github, since it seems to be re-inventing the wheel with that new page?", "username": "NeilM" }, { "code": "", "text": "Thank you for the feedback.It wasn’t really moved from Github. At MongoDB, we use https://feedback.mongodb.com as a centralized repository of suggestions for all products and it easier for me if feature requests are submitted through that portal.We know Github is friendlier but that portal give us a way to track things cross-product and cross-customer. What we’d often do when we get feature requests through Github issues is to move them over to the feedback portal, but that’s a manual process so they don’t always get moved over. We are looking to improve that process though.Unfortunately, the feedback portal uses a 3rd party solution so the degree of customizability is limited.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "It must have been free or a bolt on to another product. Some customer service portal, I’d imagineI will just knock up suggestions in a text editor instead and paste it in the box going forward, though the limit on images is a bit of a challenge.Do you know what the restrictions are, since it may be worth mentioning them on the page e.g. size, image type, documents types allowed etc.", "username": "NeilM" } ]
VSC Extension : MongoDB integration into the Terminal options?
2021-05-11T16:31:51.939Z
VSC Extension : MongoDB integration into the Terminal options?
4,089
null
[ "app-services-cli" ]
[ { "code": "", "text": "When trying to login with the latest real-cli 2 beta on windows 10 I am getting the error:\nlogin failed: Post “/api/admin/v3.0/auth/providers/mongodb-cloud/login”: unsupported protocol scheme “”\nWith previous version I have no issue.Thx\nMichael", "username": "michael_schiller" }, { "code": "realm-cli --version", "text": "Hi @michael_schiller I’m trying to reproduce this now. What’s your output from realm-cli --version?", "username": "Andrew_Morgan" }, { "code": "2.0.0-beta.5", "text": "I’m getting the same error as you on Windows (using 2.0.0-beta.5), but I can log in without problems from a Mac. I’ll look into it", "username": "Andrew_Morgan" }, { "code": "--realm-url https://realm.mongodb.com --atlas-url https://cloud.mongodb.com", "text": "@michael_schiller the engineers are working on this problem, but a workaround is to add these options --realm-url https://realm.mongodb.com --atlas-url https://cloud.mongodb.comYou only need to include the extra options once.", "username": "Andrew_Morgan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to login with realm-cli 2 beta
2021-05-12T11:33:03.567Z
Unable to login with realm-cli 2 beta
3,382